Search results for: linear complexity
4348 Overhead Lines Induced Transient Overvoltage Analysis Using Finite Difference Time Domain Method
Authors: Abdi Ammar, Ouazir Youcef, Laissaoui Abdelmalek
Abstract:
In this work, an approach based on transmission line theory is presented and exploited to calculate the overvoltage created by direct lightning strikes on the guard cable of an overhead high-voltage line. First, we show the theoretical developments leading to the propagation equation, its discretization by the finite difference time domain (FDTD) method, and the resulting linear algebraic equations, followed by the calculation of the linear parameters of the line. The second step consists of solving the transmission line system of equations by the FDTD method. This enabled us to determine the spatio-temporal evolution of the induced overvoltage.
Keywords: lightning surge, transient overvoltage, eddy current, FDTD, electromagnetic compatibility, ground wire
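Since the abstract only outlines the FDTD step, a minimal one-conductor sketch may help: it leapfrogs the discretized telegrapher's (propagation) equations for voltage and current on a staggered grid and injects a double-exponential surge as the lightning source. All parameters (line constants, grid, source node, waveform) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative per-unit-length line parameters (not the paper's values)
L, C = 1.6e-6, 10e-12            # inductance (H/m), capacitance (F/m)
nz, nt = 200, 1000               # spatial cells, time steps
dz = 5.0                         # spatial step (m)
dt = 0.5 * dz * np.sqrt(L * C)   # time step satisfying the CFL bound

v = np.zeros(nz + 1)             # node voltages
i = np.zeros(nz)                 # branch currents (staggered half a cell)

def lightning_current(t, I0=10e3, tau1=1.2e-6, tau2=50e-6):
    """Double-exponential surge waveform (standard idealization)."""
    return I0 * (np.exp(-t / tau2) - np.exp(-t / tau1))

for n in range(nt):
    # Interleaved leapfrog updates of the discretized telegrapher's equations
    v[1:-1] -= dt / (C * dz) * (i[1:] - i[:-1])
    v[0] = 0.0                                               # grounded near end
    v[nz // 2] += dt / (C * dz) * lightning_current(n * dt)  # injected stroke
    i -= dt / (L * dz) * (v[1:] - v[:-1])
# v now holds the spatial overvoltage profile at the final time step
```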
Procedia PDF Downloads 83
4347 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and six cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter associations. In total, 5,760 simulations were performed to quantify the relative contribution of each factor to simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It was found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing, under any condition, with the SIS and MPFS methods. When well density is low, a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram has a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, modeling accuracy does not increase with data density as markedly as for the SIS method, but it retains a rational appearance when data density is low. The MPFS method shows a trend similar to the SIS method, but the SIS method with a proper geological trend and a rational variogram may achieve better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, geological complexity, geological constraint information, and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
Procedia PDF Downloads 264
4346 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems that can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method, at best, merely held the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, together with asynchronous calculations and communications and load balancing between the CPU and GPU, allows the solution of large linear systems to scale. The algorithm is implemented with the combined use of MPI, OpenMP, and CUDA. We show that an almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
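As a point of reference for the single-synchronization claim, here is a serial sketch of the unpreconditioned pipelined CG recurrence (in the Ghysels-Vanroose form). Both dot products of an iteration are computed together, which is what allows them to be fused into one asynchronous reduction and overlapped with the matrix-vector product on a cluster; the overlap itself is only indicated in comments, and the test system is illustrative.

```python
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, maxiter=500):
    """Unpreconditioned pipelined CG: the two dot products of an iteration
    are grouped so that, on a cluster, they can be fused into a single
    (asynchronous) reduction overlapped with the matvec."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    z = np.zeros_like(b); s = np.zeros_like(b); p = np.zeros_like(b)
    gamma_old = alpha = None
    for _ in range(maxiter):
        gamma, delta = r @ r, w @ r   # single synchronization point per iteration
        if np.sqrt(gamma) < tol:
            break
        n = A @ w                     # in MPI/CUDA this overlaps the reduction
        if gamma_old is None:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = n + beta * z; s = w + beta * s; p = r + beta * p
        x += alpha * p; r -= alpha * s; w -= alpha * z
        gamma_old = gamma
    return x

# Quick check on a small SPD system
A = np.array([[4., 1.], [1., 3.]]); b = np.array([1., 2.])
print(pipelined_cg(A, b))            # ~ [0.0909, 0.6364]
```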
Procedia PDF Downloads 165
4345 Active Control Improvement of Smart Cantilever Beam by Piezoelectric Materials and On-Line Differential Artificial Neural Networks
Authors: P. Karimi, A. H. Khedmati Bazkiaei
Abstract:
The main goal of this study is to test a differential neural network as a controller of a smart structure and to enumerate its advantages and disadvantages in comparison with other controllers. The smart structure is modeled as a Euler-Bernoulli cantilever beam, and an attempt is made to control it using a differential neural network driven by the vibration resulting from movement. A linear observer is considered as a reference controller, and its results are compared. The vibration charts considered and the controlled state are presented in the final part of this paper. The obtained results show that the neural observer performs better than the implemented linear observer.
Keywords: smart material, on-line differential artificial neural network, active control, finite element method
Procedia PDF Downloads 210
4344 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little publicly available data and is thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning the long-term dependencies that traditional neural networks fail to capture, thanks to their memory function. In this study, a simple LSTM, a stacked LSTM, and a masked LSTM-based model are discussed with respect to varying input sequences (three, seven, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) is used, which improves the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results are compared with traditional time series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
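A minimal stacked-LSTM sketch in Keras illustrates the windowed set-up described above, assuming a synthetic random-walk series as a stand-in for bond prices and a seven-day lookback; the EMD preprocessing, masking, and technical indicators of the full model are omitted.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=7):
    """Turn a 1-D price series into (samples, lookback, 1) supervised pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

prices = np.cumsum(np.random.randn(500)).astype("float32")  # stand-in for bond prices
X, y = make_windows(prices, lookback=7)                     # seven-day input sequence

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(7, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),  # stacked LSTM: layer 1
    tf.keras.layers.LSTM(16),                          # stacked LSTM: layer 2
    tf.keras.layers.Dense(1),                          # next-day price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:]).item())                    # one-step-ahead forecast
```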
Procedia PDF Downloads 136
4343 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models
Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini
Abstract:
The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using an unbounded estimator, namely the least-squares (LS) estimator, which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion
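A toy sketch of the idea: score candidate subsets with the classical LS-based SIC and with a robustified SIC in which a robust residual scale replaces the LS scale. A Huber M-estimate (from scikit-learn) stands in here for the paper's MM-estimation scale, and the contaminated data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import HuberRegressor, LinearRegression

def sic(y, yhat, k, scale=None):
    """Schwarz information criterion n*log(sigma^2) + k*log(n); `scale`
    lets a robust residual scale replace the LS one."""
    n = len(y)
    s2 = scale**2 if scale is not None else np.mean((y - yhat) ** 2)
    return n * np.log(s2) + k * np.log(n)

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 4))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
y[:5] += 15                               # vertical outliers that mislead LS

for idx in combinations(range(4), 2):     # score every two-variable subset
    Xs = X[:, idx]
    ls = LinearRegression().fit(Xs, y)
    hub = HuberRegressor().fit(Xs, y)     # M-estimate; stands in for MM-scale
    print(idx,
          round(sic(y, ls.predict(Xs), k=2), 1),
          round(sic(y, hub.predict(Xs), k=2, scale=hub.scale_), 1))
# The robust column should rank the true subset (0, 1) lowest despite the outliers.
```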
Procedia PDF Downloads 139
4342 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, uncertainty about future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them; in general, specific company policies determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
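A short sketch of the warehouse-siting step: running k-means on customer coordinates, weighted by demand frequency, yields candidate warehouse locations. All customer data below are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical customer records: (x, y) location and yearly order frequency
locations = rng.uniform(0, 1000, size=(300, 2))   # km grid, illustrative
frequency = rng.poisson(12, size=300)             # orders per year

# Weight each customer by demand frequency so that busy customers
# pull candidate warehouses toward them
km = KMeans(n_clusters=3, n_init=10, random_state=0)
km.fit(locations, sample_weight=frequency)
print("candidate warehouse sites:\n", km.cluster_centers_)
```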
Procedia PDF Downloads 139
4341 Response of Concrete Panels Subjected to Compression-Tension State of Stresses
Authors: Mohammed F. Almograbi
Abstract:
For reinforced concrete panels, the risk of failure due to a compression-tension state of stress, resulting from pure shear or torsion, can be a major problem. Present calculation methods for such stresses from multiple influences remain conservative because they do not take into account the softening of cracked concrete. The non-linear finite element method has become an important and increasingly used tool for the analysis and assessment of structures, as it includes cracking softening and tension stiffening. The aim of this paper is to test a recently refined computer program by simulating the compression response of cracked concrete elements and comparing the results with the available experimental data.
Keywords: reinforced concrete panels, compression-tension, shear, torsion, compression softening, tension stiffening, non-linear finite element analysis
Procedia PDF Downloads 337
4340 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm
Authors: Sundara Subramanian Karuppasamy, Che Hua Yang
Abstract:
In this work, the laser ultrasound technique is used for analyzing and imaging inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the need for many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method for generating and receiving ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for generation, and reception is done using a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high-spatial-resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected to serve as the phased array elements. A side-drilled hole of 10 mm diameter at a depth of 25 mm is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, the inspection images are generated from the A-scan data acquired by the 1-D linear phased array elements. Thus, the defect can be precisely detected with good resolution.
Keywords: laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging
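A compact delay-and-sum sketch of the SAFT reconstruction described above: for each image pixel, the A-scan samples whose round-trip time of flight matches the pixel-to-element distance are time-shifted and summed. The pulse-echo geometry, sampling rate, and aluminum wave speed are illustrative assumptions, and random noise stands in for measured LDV A-scans.

```python
import numpy as np

def saft_image(ascans, elem_x, fs, c, xs, zs):
    """Delay-and-sum SAFT: each pixel sums the A-scan samples whose
    round-trip time of flight matches the pixel-to-element distance."""
    image = np.zeros((len(zs), len(xs)))
    n_samples = ascans.shape[1]
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # pulse-echo path: element -> (x, z) -> same element
            tof = 2.0 * np.hypot(elem_x - x, z) / c      # one delay per element
            idx = np.round(tof * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = ascans[valid, idx[valid]].sum()
    return image

# Illustrative numbers (not the paper's setup): 64 scan points, 25 MHz sampling,
# longitudinal wave speed of aluminum ~ 6320 m/s
elem_x = np.linspace(0.0, 0.063, 64)
ascans = np.random.randn(64, 2000) * 0.01                # stand-in for LDV A-scans
img = saft_image(ascans, elem_x, fs=25e6, c=6320.0,
                 xs=np.linspace(0, 0.063, 80), zs=np.linspace(0.005, 0.04, 60))
```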
Procedia PDF Downloads 133
4339 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms the benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flow.
Keywords: SDN, IoT, DL, ML, DRS
Procedia PDF Downloads 110
4338 Challenges for Interface Designers in Designing Sensor Dashboards in the Context of Industry 4.0
Authors: Naveen Kumar, Shyambihari Prajapati
Abstract:
Industry 4.0 is the fourth industrial revolution, focused on the interconnectivity of machine to machine, human to machine, and human to human via the Internet of Things (IoT). Industry 4.0 technologies facilitate communication between humans and machines through the IoT and form a Cyber-Physical Production System (CPPS). In a CPPS, sensor data from multiple shop floors are connected through the IoT and displayed to the operator through a sensor dashboard. These sensor dashboards present an enormous amount of information, which makes monitoring, controlling, and interpretation tasks complex for operators. Designing handheld sensor dashboards for supervision tasks will therefore become a challenge for interface designers. This paper reports on emerging Industry 4.0 technologies, the changing context of increasing information complexity across consecutive industrial revolutions, and upcoming design challenges for interface designers in the context of Industry 4.0. The authors conclude that the information complexity of sensor dashboard designs has increased with consecutive industrial revolutions and that such designs impose a cognitive load on users. Designing such complex dashboard interfaces in the Industry 4.0 context will become a main challenge for interface designers.
Keywords: Industry 4.0, sensor dashboard design, cyber-physical production system, interface designer
Procedia PDF Downloads 128
4337 Magneto-Solutal Convection in Newtonian Fluid Layer with Modulated Gravity
Authors: Om Prakash Keshri, Anand Kumar, Vinod K. Gupta
Abstract:
In the present study, the effect of gravity modulation on the onset of convection in a viscous fluid layer under the influence of an induced magnetic field, salted from above at the boundaries, has been investigated. Linear and nonlinear stability analyses have been performed. The linear stability analysis shows that gravity modulation can significantly affect the stability limits of the system. A method based on the small amplitude of the modulation is used to compute the critical values of the Rayleigh number and wave number. The effects of the Schmidt number, solutal Rayleigh number, and magnetic Prandtl number on the stability of the system are investigated.
Keywords: viscous fluid, induced magnetic field, gravity modulation, solutal convection
Procedia PDF Downloads 190
4336 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and engineering handbooks. However, not all techniques share the same merit. Besides yielding overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress-value approach but requires the use of a critical volume in which the crack exists. In order to assess the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradients. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
Procedia PDF Downloads 135
4335 Optimization Technique for the Contractor's Portfolio in the Bidding Process
Authors: Taha Anjamrooz, Sareh Rajabi, Salwa Bheiry
Abstract:
Selecting among the available projects in the bidding process is one of the essential areas for a contractor to concentrate on. It is important for the contractor to choose the right projects within its portfolio during the tendering stage based on certain criteria. It should align the bidding process with its organization strategies and goals, as a screening process, to start with the right portfolio pool. Secondly, it should set the proper framework and use a suitable technique to optimize its selection process, so that effort can be concentrated during the tender stage with the goal of winning. In this research paper, a two-step framework is proposed to increase the efficiency of the contractor's bidding process and the chance of winning new project awards. In this framework, all projects initially pass through a first-stage screening process, in which the portfolio basket is evaluated and adjusted in accordance with the organization strategies, down to a reduced version of the portfolio pool that is in line with the organization's activities. In the second stage, the contractor uses linear programming to optimize the portfolio pool based on available resources such as manpower, light equipment, heavy equipment, financial capability, return on investment, and the success rate of winning the bid. This optimization model thus assists the contractor in utilizing its internal resources to the maximum and increases its chance of winning new projects, considering past experience with clients, the relationship built between the two parties, and the complexity of executing the projects. The objective of this research is to increase the contractor's winning chance in the bidding process based on the success rate and expected return on investment.
Keywords: bidding process, internal resources, optimization, contracting portfolio management
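A hedged sketch of the second-stage model: a linear-programming relaxation of the bid/no-bid decision that maximizes win-rate-weighted expected return subject to resource caps. All project figures are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical candidate projects (columns): expected return weighted by
# the estimated probability of winning the bid
expected_return = np.array([1.2, 0.8, 2.1, 1.5, 0.6])   # million, illustrative
win_rate        = np.array([0.4, 0.7, 0.3, 0.5, 0.8])
manpower        = np.array([30, 15, 55, 40, 10])          # crew required
heavy_equipment = np.array([3, 1, 6, 4, 1])               # units required
capital         = np.array([2.0, 0.9, 4.5, 2.5, 0.5])     # million tied up

c = -(expected_return * win_rate)        # linprog minimizes, so negate
A_ub = np.vstack([manpower, heavy_equipment, capital])
b_ub = np.array([80, 8, 6.0])            # available resources
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 5)
# x_i in [0, 1] is the LP relaxation of the bid/no-bid decision;
# values near 1 flag the projects worth concentrating tender effort on
print(res.x.round(2), "objective:", round(-res.fun, 2))
```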
Procedia PDF Downloads 142
4334 Modeling Exponential Growth Activity Using Technology: A Research with Bachelor of Business Administration Students
Authors: V. Vargas-Alejo, L. E. Montero-Moguel
Abstract:
Understanding the concept of function has been important in mathematics education for many years. In this study, the models built by a group of five business administration and accounting undergraduate students while carrying out a population growth activity are analyzed. The theoretical framework is the Models and Modeling Perspective. The results show how the students included tables, graphics, and algebraic representations in their models. Using technology was useful for interpreting, describing, and predicting the situation. The first model the students built to describe the situation was linear. After that, they modified and refined their ways of thinking and finally created an exponential growth model. The modeling activity was useful for deepening understanding of mathematical concepts such as covariation, rate of change, and the exponential function, as well as for differentiating between linear and exponential growth.
Keywords: covariation reasoning, exponential function, modeling, representations
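The linear-versus-exponential contrast at the heart of the activity can be reproduced in a few lines: fitting both models to hypothetical population counts shows the exponential model's far smaller error, because the data grow by a constant ratio rather than a constant difference.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical population counts at years 0..7 (illustrative data, ~30%/yr)
t = np.arange(8, dtype=float)
p = np.array([100, 130, 169, 220, 286, 371, 483, 627], dtype=float)

linear      = lambda t, a, b: a * t + b
exponential = lambda t, p0, r: p0 * np.exp(r * t)

(a, b), _ = curve_fit(linear, t, p)
(p0, r), _ = curve_fit(exponential, t, p, p0=(100.0, 0.3))

for name, fit in [("linear", linear(t, a, b)), ("exponential", exponential(t, p0, r))]:
    sse = np.sum((p - fit) ** 2)
    print(f"{name:12s} SSE = {sse:8.1f}")
# The exponential SSE is far smaller: the rate of change is proportional
# to the population itself, the defining property of exponential growth.
```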
Procedia PDF Downloads 120
4333 Alternative Mathematical Form for Determining the Effectiveness of High-LET Radiations at Lower Doses Region
Authors: Abubaker A. Yousif, Muhamad S. Yasir
Abstract:
The effectiveness of lower doses of high-LET radiations is not accurately determined using energy-based physical parameters such as absorbed dose and radio-sensitivity parameters. Therefore, an attempt has been made in this research to propose an alternative parameter capable of quantifying the effectiveness of these high-LET radiations in the lower dose region. The linear energy transfer and the mean free path are employed to achieve this objective. A new mathematical form of the effectiveness of high-LET radiations in the lower dose region has been formulated. Based on this parameter, the optimized effectiveness of high-LET radiations occurs when the energy of charged particles is deposited at a spacing of 2 nm for primary ionization.
Keywords: effectiveness, low dose, radiation mean free path, linear energy transfer
Procedia PDF Downloads 461
4332 Transient Hygrothermoelastic Behavior in an Infinite Annular Cylinder with Internal Heat Generation by Linear Dependence Theory of Coupled Heat and Moisture
Authors: Tasneem Firdous Islam, G. D. Kedar
Abstract:
The aim of this paper is to study the effect of internal heat generation in an infinitely long annular cylinder subjected to transient hygrothermal loadings. The linear dependence theory of moisture and temperature is derived based on the Dufour and Soret effects. Solutions for temperature, moisture, and thermal stresses are obtained using the Hankel transform technique. The influence of the internal heat source along the radial direction is examined for the coupled and uncoupled cases. In the present study, the composite material T300/5208 is considered, and the coupled and uncoupled cases are analyzed. The results obtained are computed numerically and illustrated graphically.
Keywords: temperature, moisture, hygrothermoelasticity, internal heat generation, annular cylinder
Procedia PDF Downloads 115
4331 Managing Information Technology: An Overview of Information Technology Governance
Authors: Mehdi Asgarkhani
Abstract:
Today, investment in Information Technology (IT) solutions is, in most organizations, the largest component of capital expenditure. As capital investment in IT continues to grow, IT managers and strategists are expected to develop and put into practice effective decision-making models (frameworks) that improve decision-making processes for the use of IT in organizations and optimize investment in IT solutions. To be exact, organizations are expected not only to maximize the benefits of adopting IT solutions but also to avoid the many pitfalls associated with the rapid introduction of technological change. Different organizations, depending on their size, the complexity of the solutions required, and the processes used for financial management and budgeting, may use different techniques for managing strategic investment in IT solutions. Decision-making processes for the strategic use of IT within organizations are often referred to as IT governance (or corporate IT governance). This paper examines IT governance as a tool for best practice in decision-making about IT strategies. The discussion in this paper represents phase I of a project initiated to investigate trends in strategic decision-making on IT strategies. Phase I is concerned mainly with a review of the literature and a number of case studies, establishing that the practice of IT governance, depending on the complexity of IT solutions, the organization's size, and the organization's stage of maturity, varies significantly, from informal approaches to sophisticated formal frameworks.
Keywords: IT governance, corporate governance, IT governance frameworks, IT governance components, aligning IT with business strategies
Procedia PDF Downloads 406
4330 Tensile Strength and Elastic Modulus of Nanocomposites Based on Polypropylene/Linear Low Density Polyethylene/Titanium Dioxide Nanoparticles
Authors: Faramarz Ashenai Ghasemi, Ismail Ghasemi, Sajad Daneshpayeh
Abstract:
In this study, the tensile strength and elastic modulus of nanocomposites based on polypropylene/linear low-density polyethylene/nano titanium dioxide (PP/LLDPE/TiO2) were studied. The samples were produced using a co-rotating twin-screw extruder and included 0, 2, and 4 wt.% of nanoparticles and 20, 40, and 60 wt.% of LLDPE. Styrene-ethylene-butylene-styrene (SEBS) was used as a compatibilizer. Tensile strength and elastic modulus were evaluated. The results showed that the modulus increased by 7% with the addition of nanoparticles in comparison to PP/LLDPE, while the tensile strength decreased.
Keywords: PP/LLDPE/TiO2, nanocomposites, elastic modulus, tensile strength
Procedia PDF Downloads 528
4329 Fuzzy-Sliding Controller Design for Induction Motor Control
Authors: M. Bouferhane, A. Boukhebza, L. Hatab
Abstract:
In this paper, a fuzzy sliding mode controller design for the position control of a linear induction motor is proposed. First, the indirect field-oriented control of the LIM is derived. Then, a sliding mode control system with an integral-operation switching surface is investigated, in which a simple adaptive algorithm is utilized for the generalized soft-switching parameter. Finally, a fuzzy sliding mode controller is derived to compensate for the uncertainties that occur in the control, in which a fuzzy logic system is used to dynamically tune the parameter settings of the SMC control law. The effectiveness of the proposed control scheme is verified by numerical simulation. The experimental results of the proposed scheme show good performance compared to the conventional sliding mode controller.
Keywords: linear induction motor, vector control, backstepping, fuzzy-sliding mode control
Procedia PDF Downloads 489
4328 Effect of Mica Content in Sand on Site Response Analyses
Authors: Volkan Isbuga, Joman M. Mahmood, Ali Firat Cabalar
Abstract:
This study presents the site response analysis of mica-sand mixtures found in certain parts of the world, including Izmir, a highly populated city located in a seismically active region in the western part of Turkey. We performed site response analyses by employing SHAKE, an equivalent linear approach, for micaceous soil deposits consisting of layers with different mica contents and thicknesses. The dynamic behavior of the micaceous sands, such as shear modulus reduction and damping ratio curves, is input for the ground response analyses. The micaceous sands exhibit a unique dynamic response under a scenario earthquake with a magnitude of Mw = 6. The results showed that a higher amount of mica caused higher spectral accelerations.
Keywords: micaceous sands, site response, equivalent linear approach, SHAKE
Procedia PDF Downloads 340
4327 Dry Relaxation Shrinkage Prediction of Bordeaux Fiber Using a Feed Forward Neural Network
Authors: Baeza S. Roberto
Abstract:
Knitted fabric suffers dimensional deformation due to transverse stretching and longitudinal tension during processing on rectilinear knitting machines, so a dry relaxation shrinkage procedure and a thermal prefixing action are performed to obtain stable conditions in the knitting. This paper presents the prediction of the dry relaxation shrinkage of Bordeaux fiber using a feed-forward neural network and linear regression models. Six operational shrinkage alternatives were predicted. A comparison of the results was performed, finding that the neural network models offer higher levels of explained variability and better prediction. The presence of different repose conditions is included. The models were obtained through the neural toolbox of Matlab and Minitab software, with real data from a knitting company in southern Guanajuato. The results allow the prediction of the dry relaxation shrinkage of each operational alternative.
Keywords: neural network, dry relaxation, knitting, linear regression
Procedia PDF Downloads 585
4326 Influence of Internal Heat Source on Thermal Instability in a Horizontal Porous Layer with Mass Flow and Inclined Temperature Gradient
Authors: Anjanna Matta, P. A. L. Narayana
Abstract:
An investigation is presented to analyze the effect of an internal heat source on the onset of Hadley-Prats flow in a horizontal fluid-saturated porous medium. We seek a better understanding of the combined influence of the heat source and the mass flow effect by using linear stability analysis. The resulting eigenvalue problem is solved using shooting and Runge-Kutta methods to evaluate the critical thermal Rayleigh number with respect to the various flow-governing parameters. It is found that the flow switches from stabilizing to destabilizing as the horizontal thermal Rayleigh number is increased. Increasing the heat source and mass flow results in a stronger destabilizing effect.
Keywords: linear stability analysis, heat source, porous medium, mass flow
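The shooting approach mentioned above can be illustrated on a model eigenvalue problem standing in for the full stability equations: integrate with a Runge-Kutta scheme from one boundary and adjust the eigenvalue until the far boundary condition is met, just as one homes in on a critical Rayleigh number.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Model eigenvalue problem y'' + lam*y = 0, y(0) = y(1) = 0: the smallest
# lam with a nontrivial solution plays the role of the critical Rayleigh number.
def endpoint(lam):
    rhs = lambda x, y: [y[1], -lam * y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10)  # shoot with y'(0)=1
    return sol.y[0, -1]               # boundary residual y(1)

lam_c = brentq(endpoint, 1.0, 20.0)   # a sign change brackets the eigenvalue
print(lam_c, np.pi**2)                # ~9.8696 for both
```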
Procedia PDF Downloads 721
4325 Linear Fractional Differential Equations for Second Kind Modified Bessel Functions
Authors: Jorge Olivares, Fernando Maass, Pablo Martin
Abstract:
Fractional derivatives have recently been considered as a way to solve different problems in engineering, and in this spirit second kind modified Bessel functions are considered here. The order-α fractional differential equations of the second kind modified Bessel functions, Kᵥ(x), are studied with simple initial conditions. The Laplace transform and the Caputo definition of fractional derivatives are considered. Solutions have been found for ν = 1/3, 1/2, 2/3, -1/3, -1/2, and -2/3. In these cases, the solutions are the sum of two hypergeometric functions. The α fractional derivatives have been taken for α = 1/3, 1/2, and 2/3 and the above values of ν. No convergence has been found for integer values of ν. Furthermore, when α is considered as a rational fraction m/p, no general solution has been found. Clearly, this case is more difficult to treat than that of the first kind Bessel functions.
Keywords: Caputo, modified Bessel functions, hypergeometric, linear fractional differential equations, Laplace transform
Procedia PDF Downloads 342
4324 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, with simplification being one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices is captured by word-form entropy and pos-form entropy, and a comparison is made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established corpus (CLOB) and two self-compiled corpora are used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and pos-form entropy are calculated as indicators of lexical and syntactic complexity, and ANOVA tests are conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
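A minimal sketch of the entropy computation: Shannon entropy over the distribution of word forms reflects both how many distinct forms occur and how evenly they are used, so a more repetitive (constrained) sample scores lower. The two toy samples are invented for illustration.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """H = -sum p_i * log2(p_i) over the distribution of forms: captures both
    how many distinct forms occur and how evenly they are used."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

native      = "the cat sat on the mat while a dog barked at the gate".split()
constrained = "the cat sat on the mat and the dog sat on the mat".split()

print(shannon_entropy(native))       # higher: more diverse, evenly spread forms
print(shannon_entropy(constrained))  # lower: fewer, more repetitive forms
```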
Procedia PDF Downloads 94
4323 Evaluation of Quasi-Newton Strategy for Algorithmic Acceleration
Authors: T. Martini, J. M. Martínez
Abstract:
An algorithmic acceleration strategy based on quasi-Newton (or secant) methods is presented to address the practical problem of accelerating the convergence of the Newton-Lagrange method in the case of convergence to critical multipliers. Since the Newton-Lagrange iteration converges locally at a linear rate, it is natural to conjecture that quasi-Newton methods, based on the so-called secant equation and some minimal variation principle, could converge superlinearly, thus restoring the convergence properties of Newton's method. This strategy can also be applied to accelerate the convergence of algorithms applied to fixed-point problems. Computational experience is reported, illustrating the efficiency of this strategy in solving fixed-point problems with a linear convergence rate.
Keywords: algorithmic acceleration, fixed-point problems, nonlinear programming, quasi-newton method
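For the fixed-point setting, the simplest secant-type accelerator is Steffensen's method (Aitken's delta-squared built from iterates of g); it stands in here for the paper's quasi-Newton strategy and shows the jump from linear to superlinear convergence on a toy problem.

```python
import numpy as np

def fixed_point(g, x0, tol=1e-12, maxiter=100):
    """Plain fixed-point iteration: linear convergence rate."""
    x = x0
    for k in range(maxiter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter

def steffensen(g, x0, tol=1e-12, maxiter=100):
    """Aitken delta-squared acceleration (a secant-type scheme built from
    iterates of g): restores superlinear convergence."""
    x = x0
    for k in range(maxiter):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if denom == 0:
            return x2, k + 1
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter

g = lambda x: np.cos(x)        # fixed point ~0.739085
print(fixed_point(g, 1.0))     # tens of iterations
print(steffensen(g, 1.0))      # a handful of iterations
```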
Procedia PDF Downloads 489
4322 Estimation of Harmonics in Three-Phase and Six-Phase (Multi-Phase) Load Circuits
Authors: Zakir Husain, Deepak Kumar
Abstract:
Harmonics are very harmful within an electrical system and can have serious consequences, such as reducing the life of apparatus and stressing cables and equipment. This paper presents an extensive analytical study of the harmonic characteristics of multiphase (six-phase) and three-phase systems equipped with two- and three-level inverters for non-linear loads. Multilevel inverters offer elevated voltage capability with voltage-limited devices, low harmonic distortion, and reduced switching losses. Multiphase technology also plays a promising role in harmonic reduction. Matlab simulations are carried out to compare the advantage of multiphase over three-phase systems equipped with two- or three-level inverters for non-linear load harmonic reduction. Extensive simulation results are presented based on case studies.
Keywords: fast Fourier transform (FFT), harmonics, inverter, ripples, total harmonic distortion (THD)
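The FFT-based harmonic estimation can be sketched as follows, using an illustrative non-linear-load current containing 5th and 7th harmonics; THD is the ratio of the RMS of the harmonic amplitudes to the fundamental.

```python
import numpy as np

fs, f0 = 10_000.0, 50.0                  # sampling rate, fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)            # exactly ten fundamental cycles
# Illustrative non-linear-load current: fundamental plus 5th and 7th harmonics
i_load = (np.sin(2 * np.pi * f0 * t)
          + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)
          + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.abs(np.fft.rfft(i_load)) / len(t) * 2   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)

fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 26)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {thd:.1%}")    # ~ sqrt(0.20^2 + 0.14^2) = 24.4%
```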
Procedia PDF Downloads 552
4321 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach
Authors: Ju-Hong Lee, Yi-Lin Shieh
Abstract:
Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. When implementing subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with the PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This facilitates the design of the two-channel QMF banks by finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on the minimax phase approximation for the 1-D DAFs. A novel objective function is then derived to obtain an optimization for the 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can simply be solved by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple, and the designed 2-D QMF bank achieves a perfect magnitude response and possesses a satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity compared with existing techniques.
Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm
Procedia PDF Downloads 225
4320 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study
Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott
Abstract:
In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, utilization of the manufacturing capabilities available to organisations is sometimes difficult to achieve. Flexible manufacturing systems (FMS) provide manufacturing organisations with a unique capability where there is a need for product range diversification, by providing line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production requirements. Although an FMS provides flexible and efficient facilities, its optimal set-up is key to achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS' performance. Simulation modelling is capable of capturing the complexity and constraints associated with an FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on the prioritised objective: utilisation or throughput. Additionally, this paper provides insight into understanding the impact of system set-up constraints on FMS performance and demonstrates the exploration of the optimal production set-up.
Keywords: discrete event simulation, flexible manufacturing system, capacity performance, automotive
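A minimal DES sketch of an FMS cell follows, using the SimPy library (an assumption; the paper does not name its tool), with a shared pool of flexible machines and two part types. Changing the machine capacity or process times lets one compare configurations for throughput, as the case study does at larger scale.

```python
import random
import simpy

# Minimal DES sketch of an FMS cell: two flexible machines shared by two
# part types; swap PROCESS_TIMES or the capacity to compare configurations.
PROCESS_TIMES = {"A": 4.0, "B": 6.0}   # minutes per part, illustrative
completed = {"A": 0, "B": 0}

def part_source(env, machines, part_type, mean_arrival):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_arrival))
        env.process(process_part(env, machines, part_type))

def process_part(env, machines, part_type):
    with machines.request() as req:    # queue for any free machine
        yield req
        yield env.timeout(PROCESS_TIMES[part_type])
        completed[part_type] += 1

random.seed(42)
env = simpy.Environment()
machines = simpy.Resource(env, capacity=2)     # the flexible machine pool
env.process(part_source(env, machines, "A", mean_arrival=5.0))
env.process(part_source(env, machines, "B", mean_arrival=8.0))
env.run(until=8 * 60)                          # one 8-hour shift
print("throughput per shift:", completed)
```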
Procedia PDF Downloads 327
4319 Methodology to Assess the Circularity of Industrial Processes
Authors: Bruna F. Oliveira, Teresa I. Gonçalves, Marcelo M. Sousa, Sandra M. Pimenta, Octávio F. Ramalho, José B. Cruz, Flávia V. Barbosa
Abstract:
The EU Circular Economy Action Plan, launched in 2020, is one of the major initiatives promoting the transition to a more sustainable industry. The circular economy is a popular concept used by many companies nowadays. Some industries are better prepared for this reality than others, and the tannery industry is a sector that needs more attention due to its strong environmental impact, caused by its size, intensive resource consumption, the lack of recyclability and second use of its products, and the industrial effluents generated by the manufacturing processes. For these reasons, the zero-waste goal and the European objectives are far from being achieved. In this context, a need arises for an effective methodology for determining the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a sustainability specialist to assess the company's circularity or have the ability to implement circular strategies that could benefit the manufacturing processes. Although there are several methodologies to assess circularity in specific industrial sectors, there is no easy go-to methodology applicable in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery industry, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology developed consists of calculating the Overall Circular Index (OCI) by evaluating the circularity of four key areas, energy, material, economy and social, in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a complete circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) for each theme, and the OCI is the average of the four sub-indexes. Some fieldwork in the appointed company was required in order to obtain all the necessary data. By having separate sub-indexes, one can observe which areas are more linear than others and thus work on the most critical areas by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology works through continuous improvement, constantly re-evaluating and improving the circularity of the factory. The methodology is also flexible enough to be implemented in any industrial sector by adapting the KPIs. This methodology was implemented in a selected Portuguese small and medium-sized enterprise (SME) in the tannery industry and proved to be a relevant tool for measuring the circularity level of the factory. It was observed that it is easier for non-specialists to evaluate circularity and identify possible solutions to increase its value, as well as to learn how one action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. Through this work, important contributions were provided, helping Portuguese SMEs to achieve the European and UN 2030 sustainability goals.
Keywords: circular economy, circularity index, sustainability, tannery industry, zero-waste
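A minimal sketch of the OCI arithmetic, assuming each sub-index is the mean of its normalized (0-1) KPIs; the KPI names and values are invented for illustration.

```python
# Minimal sketch of the Overall Circular Index: each sub-index is the mean of
# its normalized (0-1) KPIs; the KPI names below are illustrative, not the paper's.
kpis = {
    "energy":   {"renewable_share": 0.35, "energy_recovered": 0.20},
    "material": {"recycled_input": 0.15, "waste_diverted": 0.40},
    "economy":  {"revenue_from_reuse": 0.10, "circular_procurement": 0.25},
    "social":   {"training_coverage": 0.60, "local_employment": 0.70},
}

sub_indexes = {area: sum(vals.values()) / len(vals) for area, vals in kpis.items()}
oci = sum(sub_indexes.values()) / len(sub_indexes)   # 0 = linear, 1 = fully circular

for area, value in sub_indexes.items():
    print(f"{area:9s} sub-index = {value:.2f}")
print(f"OCI = {oci:.2f}")   # the lowest sub-indexes flag where to act first
```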
Procedia PDF Downloads 68