Search results for: fling step
842 High-performance Second-Generation Controlled Current Conveyor CCCII and High Frequency Applications
Authors: Néjib Hassen, Thouraya Ettaghzouti, Kamel Besbes
Abstract:
In this paper, a modified CCCII is presented. It uses a current mirror designed for low supply voltage and operates at a supply voltage of ±1 V. T-Spice simulations for TSMC 0.18 μm CMOS technology show that the current and voltage bandwidths are 3.34 GHz and 4.37 GHz, respectively, and that the parasitic resistance at port X is 169.32 Ω for a control current of 120 μA. To demonstrate the circuit, we first implemented a universal current-mode filter whose operating frequency can reach 134.58 MHz. In a second step, we implemented two simulated inductors, one floating and one grounded. Both inductors operate at high frequency and their values are tunable through the bias current I0. Finally, these two inductors were used to implement two sinusoidal oscillators with frequency ranges of [470 MHz, 692 MHz] and [358 MHz, 572 MHz], respectively, for bias currents I0 in the range [80 μA, 350 μA].
Keywords: Current controlled current conveyor CCCII, floating inductor, grounded inductor, oscillator, universal filter.
841 GA Based Optimal Feature Extraction Method for Functional Data Classification
Authors: Jun Wan, Zehua Chen, Yingwu Chen, Zhidong Bai
Abstract:
Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, and management. Because of the high dimensionality and high correlation of functional data (FD), a key problem is to extract features from FD while preserving its global characteristics, which strongly affects classification efficiency and precision. In this paper, a novel automatic method that combines a Genetic Algorithm (GA) with a classification algorithm to extract classification features is proposed. In this method, the optimal features and the classification model are approached step by step through evolutionary search. Theoretical analysis and experiments show that this method improves classification efficiency, precision, and robustness while using fewer features, and that the dimension of the extracted classification features can be controlled.
Keywords: Classification, functional data, feature extraction, genetic algorithm, wavelet.
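The paper does not give its implementation; as a rough illustration of the idea only, the sketch below uses a simple genetic algorithm to select a subset of wavelet coefficients (via PyWavelets) and scores each candidate subset with a k-nearest-neighbour classifier from scikit-learn. The synthetic curves, GA settings, and helper names are assumptions, not the authors' code.

```python
# Illustrative sketch: GA-driven selection of wavelet features for classifying
# functional data. Synthetic curves stand in for real functional observations.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic functional data: two classes of noisy curves sampled on 128 points.
t = np.linspace(0, 1, 128)
X = np.vstack([np.sin(2 * np.pi * (2 + c) * t) + 0.3 * rng.standard_normal(128)
               for c in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)

# Wavelet decomposition turns each curve into a coefficient vector.
coeffs = np.array([np.concatenate(pywt.wavedec(x, "db4", level=4)) for x in X])

def fitness(mask):
    """Cross-validated accuracy of a k-NN classifier on the selected coefficients."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, coeffs[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(n_feat, pop_size=30, gens=25, p_mut=0.02):
    """Simple GA: binary chromosomes mark which wavelet coefficients are kept."""
    pop = rng.random((pop_size, n_feat)) < 0.2
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best_mask = ga_select(coeffs.shape[1])
print(f"selected {best_mask.sum()} of {coeffs.shape[1]} wavelet features, "
      f"accuracy ≈ {fitness(best_mask):.3f}")
```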
840 Experimental Study on Two-Step Pyrolysis of Automotive Shredder Residue
Authors: Letizia Marchetti, Federica Annunzi, Federico Fiorini, Cristiano Nicolella
Abstract:
Automotive shredder residue (ASR) is a mixture of waste that makes up 20-25% of end-of-life vehicles. For many years, ASR was commonly disposed of in landfills or incinerated, causing serious environmental problems. Nowadays, thermochemical treatments are a promising alternative, although the heterogeneity of ASR still poses some challenges. One of the emerging thermochemical treatments for ASR is pyrolysis, which promotes the decomposition of long polymeric chains by providing heat in the absence of an oxidizing agent. In this way, pyrolysis converts ASR into solid, liquid, and gaseous phases. This work aims to improve the performance of a two-step pyrolysis process. After the characterization of the analysed ASR, the focus is on determining the effects of residence time on product yields and gas composition. A batch experimental setup that reproduces the entire process was used. The setup consists of three sections: the pyrolysis section (made of two reactors), the separation section, and the analysis section. Two different residence times were investigated to find suitable conditions for the first sample of ASR. These first tests showed that the products obtained were more sensitive to the residence time in the second reactor. Indeed, slightly increasing the residence time in the second reactor raised the yields of gas and carbon residue and decreased the yield of the liquid fraction. Then, to test the versatility of the setup, the same conditions were applied to a different sample of ASR coming from a different chemical plant. The comparison between the two ASR samples shows that similar product yields and compositions are obtained using the same setup.
Keywords: Automotive shredder residue, experimental tests, heterogeneity, product yields, two-step pyrolysis.
839 Intelligent Automatic Generation Control of Two Area Interconnected Power System using Hybrid Neuro Fuzzy Controller
Abstract:
This paper presents the development and application of an adaptive neuro-fuzzy inference system (ANFIS) based intelligent hybrid neuro-fuzzy controller for automatic generation control (AGC) of a two-area interconnected thermal power system with reheat nonlinearity. The dynamic response of the system has been studied for a 1% step load perturbation in area 1. The performance of the proposed neuro-fuzzy controller is compared against a conventional proportional-integral (PI) controller, a state feedback linear quadratic regulator (LQR) controller and a fuzzy gain-scheduled proportional-integral (FGSPI) controller. Comparative analysis demonstrates that the proposed intelligent neuro-fuzzy controller is the most effective of all in improving the transients of frequency and tie-line power deviations against small step load disturbances. Simulations have been performed using Matlab®.
Keywords: Automatic generation control, ANFIS, LQR, Hybrid neuro fuzzy controller
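For readers who want to reproduce the kind of response being compared, the sketch below simulates a stripped-down linear two-area AGC model (first-order governor and turbine, no reheat) under a 1% step load increase in area 1, with conventional integral control on the area control error. The block parameters are typical textbook values, not the ones used in the paper, and the ANFIS, LQR and FGSPI controllers are not reproduced.

```python
# Minimal two-area AGC sketch: 1% step load in area 1, integral control on ACE.
# Textbook-style parameters; the paper's ANFIS/LQR/FGSPI controllers are not modelled.
import numpy as np

# Per-area parameters (assumed identical areas, typical textbook values)
Kp, Tp = 120.0, 20.0      # power system gain [Hz/pu] and time constant [s]
Tg, Tt = 0.08, 0.3        # governor and turbine time constants [s]
R, B = 2.4, 0.425         # droop [Hz/pu] and frequency bias [pu/Hz]
T12 = 0.086               # tie-line synchronizing coefficient [pu/Hz]
Ki = 0.3                  # integral gain on the area control error

dt, t_end = 0.01, 60.0
n = int(t_end / dt)
dPL = np.array([0.01, 0.0])                           # 1% step load in area 1

f = np.zeros(2); Pt = np.zeros(2); Pg = np.zeros(2)   # Δf, ΔP_turbine, ΔP_governor
Ptie = 0.0                                            # ΔP_tie (area 1 -> area 2)
xI = np.zeros(2)                                      # integral of ACE
hist_f1 = np.zeros(n)

for k in range(n):
    ace = B * f + np.array([Ptie, -Ptie])             # area control errors
    xI += ace * dt
    u = -Ki * xI                                      # supplementary control
    # first-order blocks integrated with forward Euler
    dPg = (u - f / R - Pg) / Tg
    dPt = (Pg - Pt) / Tt
    net = Pt - dPL + np.array([-Ptie, Ptie])          # net accelerating power
    df = (Kp * net - f) / Tp
    dPtie = 2 * np.pi * T12 * (f[0] - f[1])
    Pg += dPg * dt; Pt += dPt * dt; f += df * dt; Ptie += dPtie * dt
    hist_f1[k] = f[0]

print(f"peak Δf1 ≈ {hist_f1.min():.4f} Hz, final Δf1 ≈ {hist_f1[-1]:.5f} Hz")
```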
838 Unsteady Aerodynamics of Multiple Airfoils in Configuration
Authors: Hossain Aziz, Rinku Mukherjee
Abstract:
A potential flow model is used to study the unsteady flow past two airfoils in configuration, each of which is suddenly set into motion. The airfoil bound vortices are modeled using lumped vortex elements and the wake behind the airfoil is modeled by discrete vortices. This consists of solving a steady state flow problem at each time step, where unsteadiness is incorporated through the “zero normal flow on a solid surface” boundary condition at every time instant. Additionally, along with the “zero normal flow on a solid surface” boundary condition, Kelvin's condition is used to compute the strength of the latest wake vortex shed from the trailing edge of the airfoil. The location of the wake vortices is updated at each time step to get the wake shape at each time instant. Results are presented to show the effect of airfoil-airfoil interaction and airfoil-wake interaction on the aerodynamic characteristics of each airfoil.
Keywords: Aerodynamics, Airfoils, Configuration, Unsteady.
837 New Efficient Method for Coding Color Images
Authors: Walaa M. Abd-Elhafiez, Wajeb Gharibi
Abstract:
In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed technique starts with an RGB to YCbCr color transformation. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is then compressed step by step by a discrete cosine transform (DCT), quantization, and coding using adaptive arithmetic coding. Our technique is evaluated in terms of compression ratio, bits per pixel and peak signal-to-noise ratio, and produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel and peak signal-to-noise ratio.
Keywords: Image compression, color image, Q-coder, quantization, edge-detection.
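As a rough sketch of the transform-and-quantize stage only (not the authors' full codec, and without the Canny-based block classification or adaptive arithmetic coding), the following applies a YCbCr conversion and an 8×8 block DCT with uniform quantization to a synthetic RGB image; the quantization step and the image itself are assumptions.

```python
# Sketch of the DCT/quantization stage of a block-based color image codec.
# The edge classification and adaptive arithmetic coding stages are omitted.
import numpy as np
from scipy.fft import dctn, idctn

def rgb_to_ycbcr(img):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = img @ m.T
    ycc[..., 1:] += 128.0
    return ycc

def block_dct_quantize(channel, q=20, bs=8):
    """8x8 block DCT with a single uniform quantization step q (an assumption)."""
    h, w = channel.shape
    out = np.zeros_like(channel)
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            block = channel[i:i+bs, j:j+bs] - 128.0
            coef = dctn(block, norm="ortho")
            coef_q = np.round(coef / q) * q          # quantize / dequantize
            out[i:i+bs, j:j+bs] = idctn(coef_q, norm="ortho") + 128.0
    return out

# Synthetic RGB test image (stand-in for a real photograph)
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)

ycc = rgb_to_ycbcr(img)
rec = np.stack([block_dct_quantize(ycc[..., c]) for c in range(3)], axis=-1)

mse = np.mean((ycc - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"PSNR of the reconstructed YCbCr image: {psnr:.2f} dB")
```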
836 Synthesis of Aragonite Superstructure from Steelmaking Slag via Indirect CO2 Mineral Sequestration
Authors: Weijun Bao, Huiquan Li
Abstract:
Using steelmaking slag as a raw material, an aragonite superstructure product was synthesized via an indirect CO2 mineral sequestration route. The route mainly involves two separate steps: calcium is first selectively leached from the steelmaking slag by a novel leaching medium consisting of the organic solvent tributyl phosphate (TBP), acetic acid, and ultra-pure water, followed by enhanced carbonation in a separate step for aragonite superstructure production as well as efficient recovery of the leaching medium. Depending on the leaching medium employed in the steelmaking slag leaching process, two typical products were collected from the enhanced carbonation step. The products were characterized by X-ray powder diffraction (XRD) and scanning electron microscopy (SEM). The results reveal that needle-like aragonite crystals self-organize into aragonite superstructure particles, including aragonite microspheres as well as dumbbell-like spherical particles, which can be obtained from the steelmaking slag with a purity of over 99%.
Keywords: Aragonite superstructure, Steelmaking slag, Indirect CO2 mineral sequestration, Selective leaching, Enhanced carbonation.
835 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach
Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan
Abstract:
Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used in estimating productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determining factors in the GSZ, FSTZ and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield since they had the lowest technology gap ratio (TGR). It is recommended that government, through the Ministry of Food and Agriculture, development partners and individual private companies promote the adoption of IATs and educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.
Keywords: Efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach.
834 Optimal Analysis of Grounding System Design for Distribution Substation
Authors: T. Lantharthong, N. Rugthaicharoencheep, A. Phayomhom
Abstract:
This paper presents the electrical effect of two neighboring distribution substations during the construction phase. The size of the auxiliary grounding grid has an effect on the entire grounding system: the bigger the auxiliary grounding grid, the lower the GPR and the maximum touch voltage, with the exception that, when the two grids are unconnected, the bigger the auxiliary grounding grid, the higher the maximum step voltage. The results in this paper could serve as a design guideline for grounding systems, and perhaps as a remedy for some troublesome grounding grids in power distribution systems. Modeling and simulation are carried out with the Current Distribution, Electromagnetic Interference, Grounding and Soil Structure (CDEGS) program. The simulation results exhibit the design and analysis of power system grounding and could perhaps be set as a standard in grounding system design and modification in distribution substations.
Keywords: Grounding System, Touch Voltage, Step Voltage, Safety Criteria.
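The safety criteria referred to in the keywords are conventionally the IEEE Std 80 tolerable touch and step voltages; as background (not taken from this paper), for a 50 kg body they are commonly written as

```latex
E_{\text{touch},50} = \left(1000 + 1.5\,C_s\,\rho_s\right)\frac{0.116}{\sqrt{t_s}}, \qquad
E_{\text{step},50}  = \left(1000 + 6\,C_s\,\rho_s\right)\frac{0.116}{\sqrt{t_s}}
```

where ρ_s is the surface-layer resistivity, C_s its derating factor, and t_s the fault duration. A grid design is acceptable when the computed maximum touch and step voltages stay below these limits.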
833 Analysis of Sequence Moves in Successful Chess Openings Using Data Mining with Association Rules
Authors: R. M. Rani
Abstract:
Chess is one of the indoor games which improves the level of human confidence, concentration, planning skills and knowledge. The main objective of this paper is to help chess players improve their openings using data mining techniques. Budding chess players usually practice by analyzing various existing openings; when they analyze and correlate thousands of openings, it becomes tedious and complex for them. The work done in this paper analyzes the best lines of the Blackmar-Diemer Gambit (BDG), which opens with White's d4, using data mining analysis. It is carried out on a collection of winning games by applying association rules. The first step of this analysis is assigning variables to each different sequence of moves. In the second step, sequence association rules are generated to calculate the support and confidence factors, which help to find the best subsequent chess moves that may lead to a winning position.
Keywords: Blackmar-Diemer Gambit (BDG), confidence, sequence association rules, support.
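A minimal sketch of the support/confidence computation the abstract describes, run on a handful of made-up BDG-like move sequences; the encoding, the games, and the thresholds are assumptions, not the paper's data.

```python
# Toy sequence-association-rule mining over chess opening move sequences.
# Each game is a list of opening moves; rules have the form "prefix -> next move".
games = [
    ["d4", "d5", "e4", "dxe4", "Nc3", "Nf6", "f3"],      # hypothetical BDG games
    ["d4", "d5", "e4", "dxe4", "Nc3", "Nf6", "f3"],
    ["d4", "d5", "e4", "dxe4", "Nc3", "e5"],
    ["d4", "d5", "e4", "dxe4", "f3", "exf3"],
    ["d4", "Nf6", "Nc3", "d5", "e4"],
]

def support(pattern, games):
    """Fraction of games whose opening starts with the given move pattern."""
    return sum(g[:len(pattern)] == list(pattern) for g in games) / len(games)

def rules(games, prefix_len=5, min_support=0.2, min_conf=0.5):
    """Generate rules 'prefix -> next move' with their support and confidence."""
    found = set()
    for g in games:
        if len(g) <= prefix_len:
            continue
        prefix, nxt = tuple(g[:prefix_len]), g[prefix_len]
        sup_rule = support(prefix + (nxt,), games)
        sup_pref = support(prefix, games)
        conf = sup_rule / sup_pref if sup_pref else 0.0
        if sup_rule >= min_support and conf >= min_conf:
            found.add((prefix, nxt, sup_rule, conf))
    return sorted(found, key=lambda r: -r[3])

for prefix, nxt, sup, conf in rules(games):
    print(" ".join(prefix), "->", nxt, f"(support={sup:.2f}, confidence={conf:.2f})")
```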
832 A Study of General Attacks on Elliptic Curve Discrete Logarithm Problem over Prime Field and Binary Field
Authors: Tun Myat Aung, Ni Ni Hla
Abstract:
This paper begins by describing basic properties of finite fields and elliptic curve cryptography over prime fields and binary fields. Then we discuss the discrete logarithm problem for elliptic curves and its properties. We study the general common attacks on the elliptic curve discrete logarithm problem, such as the Baby-Step Giant-Step method, Pollard's rho method and the Pohlig-Hellman method, and describe in detail experiments with these attacks over prime fields and binary fields. The paper finishes by describing the expected running time of the attacks and suggesting strong elliptic curves that are not susceptible to these attacks.
Keywords: Discrete logarithm problem, general attacks, elliptic curves, strong curves, prime field, binary field, attack experiments.
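To make the Baby-Step Giant-Step idea concrete, here is a self-contained sketch that solves a discrete logarithm on a deliberately tiny curve over a prime field; the curve y² = x³ + 2x + 2 over F_97, the secret exponent, and the brute-force order computation are illustrative toys, nothing like cryptographic sizes or the paper's experiments.

```python
# Baby-Step Giant-Step for the ECDLP on a toy curve y^2 = x^3 + 2x + 2 over F_97.
# The curve, points, and sizes are illustrative only; real curves use ~256-bit fields.
from math import isqrt

p, a, b = 97, 2, 2          # tiny prime field and curve coefficients (assumption)
O = None                    # point at infinity

def add(P, Q):
    """Affine elliptic-curve point addition over F_p."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def neg(P):
    return O if P is O else (P[0], (-P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

# Find a base point by brute force and its order (only feasible on toy curves).
P = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
n, T = 1, P
while T is not O:
    T, n = add(T, P), n + 1

k_secret = 23 % n
Q = mul(k_secret, P)

# Baby steps: store jP for j = 0 .. m-1; giant steps: walk Q - i*(mP).
m = isqrt(n) + 1
baby = {mul(j, P): j for j in range(m)}
step = neg(mul(m, P))
gamma = Q
for i in range(m):
    if gamma in baby:
        k_found = (i * m + baby[gamma]) % n
        break
    gamma = add(gamma, step)

print(f"group order n = {n}, recovered k = {k_found}, secret k = {k_secret}")
assert mul(k_found, P) == Q
```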
831 Relative Mapping Errors of Linear Time Invariant Systems Caused By Particle Swarm Optimized Reduced Order Model
Authors: G. Parmar, S. Mukherjee, R. Prasad
Abstract:
The authors present an optimization algorithm for order reduction and its application to the determination of the relative mapping errors of linear time invariant dynamic systems by their simplified models. These relative mapping errors are expressed by means of the relative integral square error criterion and are determined for both unit step and impulse inputs. The reduction algorithm is based on minimization of the integral square error by the particle swarm optimization technique pertaining to a unit step input. The algorithm is simple and computer oriented. It is shown that the algorithm has several advantages, e.g. the reduced order models retain the steady-state value and stability of the original system. Two numerical examples are solved to illustrate the superiority of the algorithm over some existing methods.
Keywords: Order reduction, Particle swarm optimization, Relative mapping error, Stability.
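As a rough illustration of the approach (not the authors' algorithm or example systems), the sketch below uses a plain particle swarm optimizer to fit a first-order model b/(s + a) to an arbitrary stable third-order transfer function by minimizing the integral square error between their unit step responses, computed with scipy.signal. The original system, bounds, and PSO settings are assumptions.

```python
# PSO-based model order reduction: fit G_r(s) = b/(s + a) to a higher-order G(s)
# by minimizing the integral square error (ISE) between unit step responses.
import numpy as np
from scipy import signal

# Original third-order system (an arbitrary stable example, not from the paper)
G = signal.TransferFunction([8.0, 6.0, 2.0], [1.0, 6.0, 11.0, 6.0])
t = np.linspace(0, 10, 2000)
_, y_full = signal.step(G, T=t)

def ise(params):
    """Integral square error between step responses of G and b/(s + a)."""
    a, b = params
    if a <= 0:                       # keep the reduced model stable
        return 1e6
    _, y_red = signal.step(signal.TransferFunction([b], [1.0, a]), T=t)
    return float(np.sum((y_full - y_red) ** 2) * (t[1] - t[0]))

def pso(cost, bounds, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

(a_opt, b_opt), err = pso(ise, bounds=(np.array([0.1, 0.1]), np.array([10.0, 10.0])))
print(f"reduced model: {b_opt:.3f}/(s + {a_opt:.3f}), ISE = {err:.5f}")
```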
830 Investigating the Effect of Velocity Inlet and Carrying Fluid on the Flow inside Coronary Artery
Authors: Mohammadreza Nezamirad, Nasim Sabetpour, Azadeh Yazdi, Amirmasoud Hamedi
Abstract:
In this study, OpenFOAM 4.4.2 was used to investigate the flow inside the coronary artery of the heart. This is the first step of our future project, which is to include conjugate heat transfer of the heart with the three main coronary arteries. Three different velocities were used as inlet boundary conditions to see the effect of a velocity increase on the velocity, pressure, and wall shear of the coronary artery. Also, three different fluids, namely the University of Wisconsin solution, gelatin, and blood, were used to investigate the effect of different fluids on the flow inside the coronary artery. A code based on the Reynolds-Averaged Navier-Stokes (RANS) equations was written and implemented with the real boundary condition, which was calculated based on MRI images. In order to improve the accuracy of the current numerical scheme, a hex-dominant mesh is utilized. When the inlet velocity increases to 0.5 m/s, the velocity, wall shear stress, and pressure increase at the narrower parts.
Keywords: CFD, heart, simulation, OpenFOAM.
829 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System
Authors: R. A. Salam, M.A. Rodrigues
Abstract:
The number of features required to represent an image can be very large. Using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. Shape outline readings are put through normalization and a dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by their shapes. Through statistical analysis of these readings, together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and a small degree of distortion.
Keywords: Image mining, feature selection, shape recognition, peak measures.
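A compact sketch of the eigenvector-based reduction step on outline signatures follows; the synthetic circle/square outlines, the centroid-distance signature, and the nearest-centroid grouping are assumptions standing in for the paper's data and statistics.

```python
# Eigenvector-based (PCA) reduction of shape outline signatures, followed by
# nearest-centroid grouping. Synthetic circle/square outlines stand in for real data.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # samples per outline

def outline_signature(points):
    """Centroid-distance signature of a closed outline, normalized for scale."""
    c = points.mean(axis=0)
    d = np.linalg.norm(points - c, axis=1)
    return d / d.max()

def circle(noise=0.02):
    th = np.linspace(0, 2 * np.pi, N, endpoint=False)
    r = 1.0 + noise * rng.standard_normal(N)
    return np.c_[r * np.cos(th), r * np.sin(th)]

def square(noise=0.02):
    s = np.linspace(-1, 1, N // 4, endpoint=False)
    pts = np.vstack([np.c_[s, -np.ones_like(s)], np.c_[np.ones_like(s), s],
                     np.c_[-s, np.ones_like(s)], np.c_[-np.ones_like(s), -s]])
    return pts + noise * rng.standard_normal(pts.shape)

shapes = [circle() for _ in range(20)] + [square() for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
X = np.array([outline_signature(s) for s in shapes])

# PCA via eigen-decomposition of the covariance matrix (the "eigenvector method").
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
W = evecs[:, ::-1][:, :3]                # keep the 3 leading eigenvectors
Z = Xc @ W                               # reduced readings

# Simple nearest-centroid grouping in the reduced space.
centroids = np.array([Z[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None, :] - centroids[None], axis=2), axis=1)
print(f"grouping accuracy in 3-D reduced space: {(pred == labels).mean():.2f}")
```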
828 Identifying the Kinematic Parameters of Hexapod Machine Tool
Authors: M. M. Agheli, M. J. Nategh
Abstract:
Hexapod Machine Tool (HMT) is a parallel robot mostly based on the Stewart platform. Identification of the kinematic parameters of the HMT is an important step of the calibration procedure. In this paper, an algorithm is presented for identifying the kinematic parameters of the HMT using an inverse kinematics error model. Based on this algorithm, the calibration procedure is simulated. Measurement configurations with maximum observability are decided on as the first step of this algorithm for a robust calibration. The errors occurring in various configurations are illustrated graphically. It has been shown that the boundaries of the workspace should be searched for the maximum observability of errors. The importance of using configurations with sufficient observability in calibrating hexapod machine tools is verified by trial calibration with two different groups of randomly selected configurations. One group is selected to have sufficient observability and the other is chosen in disregard of the observability criterion. Simulation results confirm the validity of the proposed identification algorithm.
Keywords: Calibration, Hexapod Machine Tool (HMT), Inverse Kinematics Error Model, Observability, Parallel Robot, Parameter Identification.
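As background to the inverse-kinematics error model, the sketch below computes the leg lengths of a generic Stewart platform from a platform pose; the joint radii, angles, and test poses are invented for illustration, and the paper's error model and observability analysis are not reproduced.

```python
# Inverse kinematics of a generic Stewart platform (hexapod): given the pose of
# the moving platform, compute the six leg lengths. Geometry is illustrative only.
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def joint_circle(radius, angles):
    """Joint positions evenly placed on a circle (z = 0)."""
    return np.c_[radius * np.cos(angles), radius * np.sin(angles), np.zeros(len(angles))]

# Illustrative geometry: base joints b_i (fixed frame), platform joints a_i (moving frame)
base_angles = np.deg2rad([10, 110, 130, 230, 250, 350])
plat_angles = np.deg2rad([50, 70, 170, 190, 290, 310])
B = joint_circle(1.0, base_angles)       # base joint coordinates
A = joint_circle(0.6, plat_angles)       # platform joint coordinates (moving frame)

def leg_lengths(pose, A=A, B=B):
    """pose = (x, y, z, yaw, pitch, roll); returns the six strut lengths."""
    p = np.asarray(pose[:3])
    R = rot_zyx(*pose[3:])
    legs = p + A @ R.T - B                # a_i expressed in the fixed frame, minus b_i
    return np.linalg.norm(legs, axis=1)

L_nominal = leg_lengths((0.0, 0.0, 1.2, 0.0, 0.0, 0.0))
L_tilted = leg_lengths((0.05, -0.02, 1.25, np.deg2rad(3), np.deg2rad(-2), np.deg2rad(1)))
print("nominal leg lengths:", np.round(L_nominal, 4))
print("leg length changes :", np.round(L_tilted - L_nominal, 4))
```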
827 Analysis of the Gait Characteristics of Soldier between the Normal and Loaded Gait
Authors: Ji-il Park, Min Kyu Yu, Jong-woo Lee, Sam-hyeon Yoo
Abstract:
The purpose of this research is to analyze the gait strategy between the normal and loaded gait. To this end, five male participants walked under two conditions: normal gait and loaded gait (backpack load of 25.2 kg). As expected, results showed that the additional load elicited not only a proportional increase in the vertical and shear ground reaction force (GRF) parameters but also an increase in the impulse, momentum and mechanical work. However, in the case of the loaded gait, the duration of the double support phase increased unexpectedly. This is because the double support phase, which is more stable than the single support phase, can reduce the instability of the loaded gait. Also, the directions of the pre-collision and after-collision were shifted upward and downward compared to the normal gait. As a result, regardless of the additional backpack load, the impulse-momentum diagram during the step-to-step transition was maintained as in the normal gait. This means that humans walk efficiently, keeping stability and minimizing the total net work, in the case of the loaded gait.
Keywords: Normal gait, loaded gait, impulse, collision, gait analysis, mechanical work, backpack load.
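A small sketch of the impulse computation mentioned in the abstract: integrating a vertical GRF trace over the stance phase gives the impulse, which can then be compared with the change in momentum. The double-hump GRF shape, body mass, and stance duration below are assumptions, not measured data from the study.

```python
# Impulse from a ground reaction force trace: J = ∫ F dt over the stance phase.
# The double-hump vertical GRF below is synthetic, standing in for measured data.
import numpy as np

body_mass, g = 80.0, 9.81                 # assumed subject mass [kg], gravity [m/s^2]
t = np.linspace(0.0, 0.7, 700)            # 0.7 s stance phase, 1 kHz sampling

# Typical double-peaked vertical GRF shape (normalized to body weight)
grf_v = body_mass * g * (1.1 * np.exp(-((t - 0.15) / 0.07) ** 2)
                         + 1.15 * np.exp(-((t - 0.55) / 0.07) ** 2)
                         + 0.75 * np.exp(-((t - 0.35) / 0.12) ** 2))

dt = t[1] - t[0]
impulse_total = np.sum(grf_v) * dt                    # ∫ F_v dt  [N·s]
impulse_net = np.sum(grf_v - body_mass * g) * dt      # net of body weight

print(f"vertical GRF impulse over stance: {impulse_total:.1f} N·s")
print(f"net vertical impulse (minus body weight): {impulse_net:.1f} N·s "
      f"-> Δv ≈ {impulse_net / body_mass:.3f} m/s")
```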
826 Modeling and Simulation of Delaminations in FML Using Step Pulsed Active Thermography
Authors: S. Sundaravalli, M. C. Majumder, G. K. Vijayaraghavan
Abstract:
The study focuses on investigating the thermal response of delaminations and developing mathematical models using numerical results to obtain the optimum heat requirement and time to identify delaminations in GLARE-type Fibre Metal Laminates (FML), in both the reflection mode and the through-transmission (TT) mode of the step pulsed active thermography (SPAT) method, a type of nondestructive testing and evaluation (NDTE) technique. The influence of the applied heat flux and time on various sizes and depths of delaminations in FML is analyzed to investigate the thermal response through numerical simulations. A finite element method (FEM) is applied to simulate SPAT through ANSYS software, based on the 3D transient heat transfer principle, considering the reflection mode and the TT mode of observation individually.
The results conclude that the numerical approach based on SPAT in reflection mode is more suitable for analysing smaller near-surface delaminations located at the thermal stimulator side, and that the TT mode is more suitable for analysing smaller, deeper delaminations located far from the thermal stimulator side or near the thermal detector/infrared camera side. The mathematical models provide the optimum q and T at the required MRTD to identify the unidentified delamination 7 with 25015.0022 W/m² at 2.531 s and delamination 8 with 16663.3356 W/m² at 1.37857 s in reflection mode. In TT mode, delamination 1 with 34954 W/m² at 13.0399 s, delamination 2 with 20002.67 W/m² at 1.998 s and delamination 7 with 20010.87 W/m² at 0.6171 s could be identified.
Keywords: Step pulsed active thermography (SPAT), NDTE, FML, Delaminations, Finite element method.
825 Community Detection-based Analysis of the Human Interactome Network
Authors: Razvan Bocu, Sabin Tabirca
Abstract:
The study of proteomics has reached unexpected levels of interest, as a direct consequence of its discovered influence over some complex biological phenomena, such as problematic diseases like cancer. This paper presents a new technique that allows for an accurate analysis of the human interactome network. It is basically a two-step analysis process that involves, first, the detection of each protein's absolute importance through the betweenness centrality computation. The second step then determines the functionally related communities of proteins. For this purpose, we use a community detection technique that is based on the edge betweenness calculation. The new technique was thoroughly tested on real biological data and the results reveal some interesting properties of those proteins that are involved in the carcinogenesis process. Apart from its experimental usefulness, the novel technique is also computationally effective in terms of execution times. Based on the analysis results, some topological features of cancer-mutated proteins are presented and a possible optimization solution for cancer drug design is suggested.
Keywords: Betweenness centrality, interactome networks, protein-protein interactions, protein communities, cancer.
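The two steps described (node betweenness for importance, then edge-betweenness-based community detection, i.e. the Girvan-Newman scheme) can be sketched with networkx on a toy interaction graph; the protein names and edges below are invented, not interactome data.

```python
# Two-step analysis on a toy protein interaction graph:
# 1) node betweenness centrality as an "importance" score,
# 2) edge-betweenness (Girvan-Newman) community detection.
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy interaction network: three loosely connected clusters (names are invented)
edges = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"), ("P3", "P4"),
         ("P4", "P5"), ("P5", "P6"), ("P6", "P7"), ("P5", "P7"),
         ("P7", "P8"), ("P8", "P9"), ("P9", "P10"), ("P8", "P10")]
G = nx.Graph(edges)

# Step 1: betweenness centrality ranks proteins that bridge clusters highest.
bc = nx.betweenness_centrality(G)
for protein, score in sorted(bc.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{protein}: betweenness = {score:.3f}")

# Step 2: Girvan-Newman repeatedly removes the highest edge-betweenness edge;
# take the first split into at least three communities.
communities = None
for partition in girvan_newman(G):
    if len(partition) >= 3:
        communities = [sorted(c) for c in partition]
        break
print("detected communities:", communities)
```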
824 A Serializability Condition for Multi-step Transactions Accessing Ordered Data
Authors: Rafat Alshorman, Walter Hussak
Abstract:
In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed finite schedules of transactions. Lately, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This results in formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and scales to arbitrary numbers of transactions.
Keywords: Multi-step transactions, serializability, directed graph.
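A small sketch of the underlying check: build the conflict graph of a schedule (an edge Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj on the same item) and test it for cycles. Here this is done on two invented two-step transactions; the paper's temporal-logic encoding and its pairwise-checking theorem are not reproduced.

```python
# Conflict-serializability check via the conflict graph of a schedule.
# Operations are (transaction, action, item); two operations conflict when they
# belong to different transactions, touch the same item, and at least one writes.
from itertools import combinations

def conflict_edges(schedule):
    """Directed edges Ti -> Tj for conflicting operations with Ti's op first."""
    edges = set()
    for (t1, a1, x1), (t2, a2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and (a1 == "w" or a2 == "w"):
            edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """Cycle detection by depth-first search over the conflict graph."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    visiting, done = set(), set()
    def dfs(u):
        visiting.add(u)
        for v in graph.get(u, ()):
            if v in visiting or (v not in done and dfs(v)):
                return True
        visiting.discard(u)
        done.add(u)
        return False
    return any(dfs(u) for u in graph if u not in done)

# Two two-step transactions T1 and T2 accessing the same item x.
good = [("T1", "r", "x"), ("T1", "w", "x"), ("T2", "r", "x"), ("T2", "w", "x")]
bad = [("T1", "r", "x"), ("T2", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]

for name, sched in (("serial order", good), ("interleaved reads", bad)):
    cyc = has_cycle(conflict_edges(sched))
    verdict = "not conflict-serializable" if cyc else "conflict-serializable"
    print(f"{name}: cycle in conflict graph = {cyc} -> {verdict}")
```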
823 Control Improvement of a C Sugar Cane Crystallization Using an Auto-Tuning PID Controller Based on Linearization of a Neural Network
Authors: S. Beyou, B. Grondin-Perez, M. Benne, C. Damour, J.-P. Chabriat
Abstract:
The industrial process of sugar cane crystallization produces a residual that still contains a lot of soluble sucrose, and the objective of the factory is to improve its extraction. Therefore, there are substantial losses justifying the search for the optimization of the process. The crystallization process studied on the industrial site is based on the “three massecuites process”. The third step of this process constitutes the final stage of exhaustion of the sucrose dissolved in the mother liquor. During the third step of crystallization (C-crystallization), the phase that is studied, and whose control is to be improved, is the growing phase (crystal growth phase). The study of this process on the industrial site is a problem in its own right. A control scheme is proposed to improve the standard PID control law used in the factory. An auto-tuning PID controller based on instantaneous linearization of a neural network is then proposed.
Keywords: Auto-tuning, PID, Instantaneous linearization, Neural network, Non linear process, C-crystallisation.
822 A Study on Application of Elastic Theory for Computing Flexural Stresses in Preflex Beam
Authors: Nasiri Ahmadullah, Shimozato Tetsuhiro, Masayuki Tai
Abstract:
This paper presents the step-by-step procedure for using Elastic Theory to calculate the internal stresses in composite bridge girders prestressed by the Preflexing Technology, called Prebeam in Japan and Preflex beam worldwide. Elastic Theory approaches preflex beams in the same way as it does conventional composite girders. Since the preflex beam undergoes different stages of construction, calculations are made using different sectional and material properties. Stresses are calculated at every stage using the properties of the specific section. Stress accumulation gives the available stress in a section of interest. The presence of concrete in the section implies prestress losses due to creep and shrinkage; however, more work remains to be done in this field. In addition to the graphical presentation of this application, this paper further discusses important notes on the graphical comparison between the results of an experimental-only research study carried out on a preflex beam and the results of a simulation, based on the elastic theory approach, for an identical beam using Finite Element Modeling (FEM) by the author.
Keywords: Composite girder, elastic theory, preflex beam, prestressing.
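The stress-accumulation idea can be written compactly. As a general statement of elastic superposition over construction stages (not a formula quoted from the paper), the flexural stress at a fibre y of the section is

```latex
\sigma(y) \;=\; \sum_{k} \frac{M_k \,\bigl(y - \bar{y}_k\bigr)}{I_k},
```

where M_k is the bending moment applied during stage k, and ȳ_k and I_k are the neutral-axis position and second moment of area of the section that resists that moment (steel girder alone, prestressed girder, or the full composite section).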
821 Identifying Key Success Factor For Supply Chain Management System in the Semiconductor Industry - A Focus Group Approach
Authors: T. P. Lu, B. N. Hwang, T. Z. Liou, Y. L. Lin
Abstract:
Developing a supply chain management (SCM) system is costly but important. However, because of its complicated nature, not many such projects are considered successful. Few research publications directly relate to key success factors (KSFs) for implementing an SCM system. Motivated by the above, this research proposes a hierarchy of KSFs for SCM system implementation in the semiconductor industry by using a two-step approach. First, a literature review indicates the initial hierarchy. The second step uses a focus group approach to finalize the proposed KSF hierarchy by extracting valuable experiences from executives and managers who actively participated in a project that successfully established a seamless SCM integration between the world's largest semiconductor foundry manufacturing company and the world's largest assembly and testing company. Future project executives may refer to the resulting KSF hierarchy as a checklist for SCM system implementation in the semiconductor or related industries.
Keywords: Focus group, key success factors, supply chain management, semiconductor industry.
820 Activation Parameters of the Low Temperature Creep Controlling Mechanism in Martensitic Steels
Abstract:
Martensitic steels with an ultimate tensile strength beyond 2000 MPa are applied in the powertrains of vehicles due to their excellent fatigue strength and high creep resistance. However, the creep-controlling mechanism in martensitic steels at ambient temperatures up to 423 K is not evident. The purpose of this study is to review the low temperature creep (LTC) behavior of martensitic steels at temperatures from 363 K to 523 K. Thus, the validity of a logarithmic creep law is reviewed and the stress and temperature dependence of the creep parameters α and β are revealed. Furthermore, creep tests are carried out which include stepped changes in temperature or stress, respectively. On the one hand, the change of the creep rate due to a temperature step provides information on the magnitude of the activation energy of the LTC-controlling mechanism; on the other hand, the stress step approach provides information on the magnitude of the activation volume. The magnitude, temperature dependence, and stress dependence of both material-specific activation parameters may contribute significantly to revealing the nature of the LTC rate-controlling mechanism.
Keywords: Activation parameters, creep mechanisms, high strength steels, low temperature creep.
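For orientation, standard relations from thermally activated deformation theory (one common form of the logarithmic creep law and the step-test estimates of the activation parameters, not equations quoted from the paper) are

```latex
\varepsilon(t) = \varepsilon_0 + \alpha \,\ln\!\left(1 + \beta t\right), \qquad
Q \approx k_B \,\frac{T_1 T_2}{T_2 - T_1}\,\ln\!\frac{\dot{\varepsilon}_2}{\dot{\varepsilon}_1}, \qquad
V \approx k_B T \,\frac{\ln\!\left(\dot{\varepsilon}_2/\dot{\varepsilon}_1\right)}{\Delta\sigma},
```

where ε̇₁ and ε̇₂ are the creep rates immediately before and after a temperature step T₁ → T₂ (for the activation energy Q) or a stress step Δσ at constant temperature T (for the activation volume V), and k_B is the Boltzmann constant.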
819 Robust Camera Calibration using Discrete Optimization
Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck
Abstract:
Camera calibration is an indispensable step for augmented reality or image guided applications where quantitative information should be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Thus, such a procedure exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper, we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.
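A minimal sketch of the Monte Carlo subset-selection idea using OpenCV's standard calibration routine follows; it assumes that object-point/image-point correspondences for each view have already been extracted (the `object_points`, `image_points`, and `image_size` inputs are placeholders for real data), and it is not the authors' algorithm or their deterministic extension.

```python
# Monte Carlo search for the subset of calibration views with the smallest
# RMS reprojection error, using OpenCV's standard calibration routine.
import random
import cv2

def calibrate(object_points, image_points, image_size):
    """Wrap cv2.calibrateCamera and return its RMS reprojection error and results."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist

def monte_carlo_select(object_points, image_points, image_size,
                       subset_size=10, trials=200, seed=0):
    """Randomly sample view subsets and keep the one with the lowest RMS error."""
    rng = random.Random(seed)
    n_views = len(object_points)
    best = (float("inf"), None, None, None)
    for _ in range(trials):
        idx = rng.sample(range(n_views), k=min(subset_size, n_views))
        obj = [object_points[i] for i in idx]
        img = [image_points[i] for i in idx]
        rms, K, dist = calibrate(obj, img, image_size)
        if rms < best[0]:
            best = (rms, idx, K, dist)
    return best

# Usage (placeholders): object_points / image_points come from e.g.
# cv2.findChessboardCorners applied to each acquired image of the calibration target.
# rms, idx, K, dist = monte_carlo_select(object_points, image_points, (1280, 960))
# print(f"best subset {idx} with RMS reprojection error {rms:.4f} px")
```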
818 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
The solution of the nonlinear dynamic equilibrium equations of base-isolated structures adopting a conventional monolithic solution approach, i.e. an implicit single-step time integration method employed with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require a significant computational effort. In order to reduce numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit unconditionally stable Newmark's constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as in the case of the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
Keywords: Base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model.
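The stability constraint that makes this partitioning attractive can be stated briefly: the central difference method is only conditionally stable, with critical time step (a standard result, not a formula specific to the paper)

```latex
\Delta t \;\le\; \Delta t_{cr} \;=\; \frac{2}{\omega_{\max}} \;=\; \frac{T_{\min}}{\pi},
```

where ω_max (corresponding to the shortest period T_min) is the highest natural frequency of the explicitly integrated partition. Because the isolation layer is much more flexible than the superstructure, ω_max of that partition is low and Δt_cr comfortably exceeds the ground-motion sampling step.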
817 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca
Abstract:
This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation, based on a combined load allocation model, as a classic transshipment model; in the second, the specific routing of that operation is obtained through the Clarke and Wright heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimal assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
Keywords: Transshipment model, mixed integer programming, saving algorithm, dry freight transportation.
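For the routing stage, a bare-bones version of the Clarke and Wright savings heuristic is sketched below on an invented instance (depot location, customer coordinates, demands, and vehicle capacity are all assumptions); the mixed-integer transshipment stage of the paper is not reproduced.

```python
# Simplified Clarke-Wright savings heuristic for a single-depot capacitated
# routing problem. The instance (coordinates, demands, capacity) is invented.
import math

depot = (0.0, 0.0)
customers = {1: (2, 9), 2: (7, 8), 3: (8, 2), 4: (3, -4), 5: (-5, -6), 6: (-6, 4)}
demand = {1: 4, 2: 3, 3: 5, 4: 4, 5: 2, 6: 3}
capacity = 10

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

d0 = {i: dist(depot, p) for i, p in customers.items()}
dij = {(i, j): dist(customers[i], customers[j]) for i in customers for j in customers if i < j}

# Savings s_ij = d_0i + d_0j - d_ij, processed in decreasing order.
savings = sorted(((d0[i] + d0[j] - dij[i, j], i, j) for (i, j) in dij), reverse=True)

routes = {i: [i] for i in customers}          # start: one route per customer
route_of = {i: i for i in customers}          # customer -> route key
load = {i: demand[i] for i in customers}

for s, i, j in savings:
    ri, rj = route_of[i], route_of[j]
    if ri == rj or load[ri] + load[rj] > capacity:
        continue
    a, b = routes[ri], routes[rj]
    # Merge only if i and j are both route endpoints (interior customers stay put).
    if a[-1] == i and b[0] == j:
        merged = a + b
    elif b[-1] == j and a[0] == i:
        merged = b + a
    elif a[0] == i and b[0] == j:
        merged = a[::-1] + b
    elif a[-1] == i and b[-1] == j:
        merged = a + b[::-1]
    else:
        continue
    routes[ri] = merged
    load[ri] += load[rj]
    del routes[rj], load[rj]
    for c in merged:
        route_of[c] = ri

def route_length(r):
    stops = [depot] + [customers[c] for c in r] + [depot]
    return sum(dist(p, q) for p, q in zip(stops, stops[1:]))

for r in routes.values():
    print(f"route {r}, load {sum(demand[c] for c in r)}, length {route_length(r):.2f}")
print(f"total distance: {sum(route_length(r) for r in routes.values()):.2f}")
```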
816 On-line and Off-line POD Assisted Projective Integral for Non-linear Problems: A Case Study with Burgers' Equation
Authors: Montri Maleewong, Sirod Sirisup
Abstract:
The POD-assisted projective integration method based on the equation-free framework is presented in this paper. The method is essentially based on the slow manifold governing the given system. We have applied two variants, the “on-line” and “off-line” methods, to solve the one-dimensional viscous Burgers' equation. For the on-line method, we compute the slow manifold by extracting the POD modes and use them on-the-fly along the projective integration process, without assuming knowledge of the underlying slow manifold. In contrast, for the off-line method the underlying slow manifold must be computed prior to the projective integration process. The projective step is performed by the forward Euler method. Numerical experiments show that, for the case of a non-periodic system, the on-line method is more efficient than the off-line method. Besides, the on-line approach is more realistic when applying the POD-assisted projective integration method to general systems. The critical value of the projective time step, which directly limits the efficiency of both methods, is also shown.
Keywords: Projective integration, POD method, equation-free.
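The two ingredients can be sketched on a toy stiff linear system rather than the viscous Burgers' equation: short bursts of fine-scale integration, POD modes extracted on-line from the burst snapshots, and a large projective forward-Euler jump. For simplicity the jump below is taken in the full state space (the paper takes it in the POD coordinates), and the system, step sizes, and burst lengths are arbitrary choices.

```python
# Sketch of the two ingredients of POD-assisted projective integration on a toy
# stiff linear system: (1) POD modes extracted on-line from short inner bursts,
# (2) a large projective forward-Euler jump after each burst.
import numpy as np

A = np.diag([-0.5, -50.0, -80.0])             # one slow mode, two fast modes
rhs = lambda u: A @ u

def inner_burst(u, dt, n_steps):
    """Fine-scale explicit Euler burst; returns all snapshots (rows)."""
    snaps = [u]
    for _ in range(n_steps):
        u = u + dt * rhs(u)
        snaps.append(u)
    return np.array(snaps)

def leading_pod_mode(snapshots):
    """Dominant POD mode of the snapshot matrix via SVD."""
    _, _, vt = np.linalg.svd(snapshots, full_matrices=False)
    return vt[0]

u = np.array([1.0, 1.0, 1.0])
dt_inner, n_inner, dt_proj = 1e-3, 40, 0.05   # fast bursts, large projective jumps
t = 0.0
for _ in range(20):
    snaps = inner_burst(u, dt_inner, n_inner)           # fast modes relax here
    phi = leading_pod_mode(snaps)                       # on-line slow-subspace estimate
    slope = (snaps[-1] - snaps[-2]) / dt_inner          # chord estimate of du/dt
    u = snaps[-1] + dt_proj * slope                     # projective Euler jump
    t += n_inner * dt_inner + dt_proj

print(f"leading POD mode (≈ slow direction): {np.round(np.abs(phi), 3)}")
print(f"t = {t:.2f}: slow component u[0] = {u[0]:.4f}, exact exp(-0.5 t) = {np.exp(-0.5 * t):.4f}")
```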
815 A Simplified Adaptive Decision Feedback Equalization Technique for π/4-DQPSK Signals
Authors: V. Prapulla, A. Mitra, R. Bhattacharjee, S. Nandi
Abstract:
We present a simplified equalization technique for a π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated signal in a multipath fading environment. The proposed equalizer is realized as a fractionally spaced adaptive decision feedback equalizer (FS-ADFE), employing the exponential step-size least mean square (LMS) algorithm as the adaptation technique. The main advantage of the scheme stems from the usage of the exponential step-size LMS algorithm in the equalizer, which achieves similar convergence behavior to that of a recursive least squares (RLS) algorithm with significantly reduced computational complexity. To investigate the finite-precision performance of the proposed equalizer along with the π/4-DQPSK modem, the entire system is evaluated in a 16-bit fixed-point digital signal processor (DSP) environment. The proposed scheme is found to be attractive even for those cases where equalization is to be performed within a restricted number of training samples.
Keywords: Adaptive decision feedback equalizer, Fractionally spaced equalizer, π/4-DQPSK signal, Digital signal processor.
814 Enhancement of Biogas Production from Bakery Waste by Pseudomonas aeruginosa
Authors: S. Potivichayanon, T. Sungmon, W. Chaikongmao, S. Kamvanin
Abstract:
Production of biogas from bakery waste was enhanced by the addition of bacterial cells. This study was divided into two steps. In the first step, grease waste from a bakery industry's grease trap was initially degraded by Pseudomonas aeruginosa. The concentration of by-products, especially glycerol, was determined, and the glycerol concentration was found to increase from 12.83% to 48.10%. In the second step, three biodigesters were set up with three different substrates: non-degraded waste in the first biodigester, degraded waste in the second biodigester, and degraded waste mixed with swine manure in a 1:1 ratio in the third biodigester. The highest concentration of biogas was found in the third biodigester, with 44.33% methane and 63.71% carbon dioxide. Lower concentrations of 24.90% methane and 18.98% carbon dioxide were exhibited in the second biodigester, whereas the lowest were found in the non-degraded waste biodigester. It was demonstrated that biogas production was greatly increased by the initial degradation of the grease waste by Pseudomonas aeruginosa.
Keywords: Biogas production, carbon dioxide, methane, Pseudomonas aeruginosa
813 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit unconditionally stable Newmark's constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, allows one to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to define the earthquake excitation accurately, due to the very high initial stiffness of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e. the implicit unconditionally stable Newmark's constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis using a smaller time step.
Keywords: Base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability.