Search results for: simulation-based optimization
1081 Model Reduction of Linear Systems by Conventional and Evolutionary Techniques
Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil
Abstract:
Reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM), using a conventional and an evolutionary technique, is presented in this paper. In the conventional technique, the advantages of the Mihailov stability criterion and the continued fraction expansions (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansions. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
Keywords: Reduced Order Modeling, Stability, Continued Fraction Expansions, Mihailov Stability Criterion, Particle Swarm Optimization, Integral Squared Error.
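As a rough illustration of the ISE criterion described in this abstract, the following sketch (an assumed evaluation of such a cost, not the authors' code) computes the integral squared error between the unit step responses of a hypothetical higher-order model and a candidate reduced-order model using SciPy; a PSO routine would minimize this cost over the ROM coefficients.

```python
import numpy as np
from scipy import signal

def ise_step_error(num_hi, den_hi, num_ro, den_ro, t_end=10.0, n=2000):
    """Integral Squared Error between unit step responses of two SISO models."""
    t = np.linspace(0.0, t_end, n)
    _, y_hi = signal.step((num_hi, den_hi), T=t)   # original higher-order model
    _, y_ro = signal.step((num_ro, den_ro), T=t)   # candidate reduced-order model
    return np.trapz((y_hi - y_ro) ** 2, t)         # ISE, to be minimized by PSO

# Hypothetical example: a 4th-order model and a 2nd-order candidate ROM.
num_hi, den_hi = [24], [1, 10, 35, 50, 24]
num_ro, den_ro = [1.0], [1.0, 2.1, 1.0]
print(ise_step_error(num_hi, den_hi, num_ro, den_ro))
```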
1080 Proposing a Pareto-based Multi-Objective Evolutionary Algorithm to Flexible Job Shop Scheduling Problem
Authors: Seyed Habib A. Rahmati
Abstract:
During the last decades, the development of multi-objective evolutionary algorithms for optimization problems has received considerable attention. The flexible job shop scheduling problem, as an important scheduling optimization problem, has received this attention too. However, most of the multi-objective algorithms developed for this problem do not use true multi-objective approaches; in other words, they combine their objectives and then solve the multi-objective problem through single-objective approaches, with the exception of a few studies that use Pareto-based algorithms. Therefore, in this paper, a new Pareto-based algorithm called the controlled elitism non-dominated sorting genetic algorithm (CENSGA) is proposed for the multi-objective FJSP (MOFJSP). The considered objectives are makespan, critical machine workload, and total workload of machines. The proposed algorithm is also compared statistically with one of the best Pareto-based algorithms in the literature on several multi-objective criteria.
Keywords: Scheduling, Flexible job shop scheduling problem, Controlled elitism non-dominated sorting genetic algorithm.
1079 Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method
Authors: Prashant P. Bedekar, Sudhir R. Bhide, Vijay S. Kale
Abstract:
Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays have to be coordinated properly to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem which can be stated as a linear programming problem (LPP). The purpose is to find an optimum relay setting that minimizes the operating times of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents a two-phase simplex method for optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of the LPP. The method introduces artificial variables to obtain an initial basic feasible solution (IBFS). The artificial variables are removed in the iterative first phase, which minimizes the auxiliary objective function. The second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
Keywords: Constrained optimization, LPP, Overcurrent relay coordination, Two-phase simplex method.
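As a minimal sketch of the underlying LPP (using SciPy's linear programming solver rather than a hand-coded two-phase simplex, and with purely illustrative coefficients), the time-multiplier settings of a primary and a backup relay could be chosen to minimize total operating time subject to a coordination-time-interval constraint:

```python
from scipy.optimize import linprog

# Decision variables: TMS of primary relay (x1) and backup relay (x2).
# Operating time of relay i for a given fault is assumed to be a_i * TMS_i.
a1, a2 = 2.97, 3.10        # illustrative time coefficients
CTI = 0.3                  # required coordination time interval (s)

c = [a1, a2]                          # minimize total operating time
A_ub = [[a1, -a2]]                    # a1*x1 - a2*x2 <= -CTI: backup slower by CTI
b_ub = [-CTI]
bounds = [(0.025, 1.2), (0.025, 1.2)] # typical TMS limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)
```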
1078 Seismic Control of Tall Building Using a New Optimum Controller Based on GA
Authors: A. Shayeghi, H. Eimani Kalasar, H. Shayeghi
Abstract:
This paper emphasizes the application of a genetic algorithm (GA) to optimize the parameters of a TMD for achieving the best reduction of the building response under earthquake excitations. The Integral of the Time multiplied Absolute value of the Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The robust TMD controller design problem is formulated as an optimization problem based on the ITAE performance index, to be solved using a GA, which has a strong ability to find near-optimal results. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed GA-based TMD (GATMD) controller without specifying which mode should be controlled. The results of the proposed GATMD controller are compared with the uncontrolled structure through time-domain simulation and several performance indices. The analysis reveals that the designed GA-based TMD controller has an excellent capability in reducing the response of the seismically excited example building, and that the ITAE performance index, so far little explored in this context, can be introduced as a new criterion for structural dynamic design.
Keywords: Tuned Mass Damper, Genetic Algorithm, Tall Buildings, Structural Dynamics.
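A minimal sketch of the ITAE performance index used above as the GA cost function, assuming a sampled relative-displacement error signal for each floor (hypothetical data, not the building model from the paper):

```python
import numpy as np

def itae(t, errors):
    """Integral of Time multiplied Absolute Error, summed over all floors.

    t      : 1-D array of time samples
    errors : 2-D array, one row per floor, relative displacement error vs time
    """
    return sum(np.trapz(t * np.abs(e), t) for e in errors)

# Hypothetical decaying responses for a 3-floor example.
t = np.linspace(0.0, 20.0, 2001)
errors = np.array([np.exp(-0.2 * t) * np.sin(2 * np.pi * f * t) for f in (0.5, 0.8, 1.1)])
print(itae(t, errors))   # a GA would minimize this over the TMD parameters
```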
1077 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach
Authors: Imen Dhaou
Abstract:
This study examines conditional Value at Risk by applying the GJR-EVT-Copula model, and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data by a bivariate GJR-GARCH model, from which we extract the filtered residuals, and then applying the Peak over Threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to model the portfolio risk dependence structure and find the optimal portfolio. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we found that the optimal investment concentrates on the Islamic-conventional US market index pair, which receives a high investment proportion, whereas all other index pairs receive low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification advantages of these markets.
Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization.
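The final VaR/CVaR step can be sketched as follows, assuming an array of Monte Carlo simulated portfolio returns has already been produced by the GJR-GARCH-EVT-copula machinery (here replaced by placeholder random draws):

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical-simulation VaR and CVaR (expected shortfall) at level alpha."""
    losses = -np.asarray(returns)              # losses are negative returns
    var = np.quantile(losses, alpha)           # Value at Risk
    cvar = losses[losses >= var].mean()        # mean loss beyond the VaR threshold
    return var, cvar

# Placeholder for simulated returns of an equally weighted Islamic-conventional pair.
rng = np.random.default_rng(0)
simulated_returns = rng.standard_t(df=5, size=100_000) * 0.01
print(var_cvar(simulated_returns, alpha=0.99))
```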
1076 Optimization of Process Parameters using Response Surface Methodology for the Removal of Zinc(II) by Solvent Extraction
Authors: B. Guezzen, M.A. Didi, B. Medjahed
Abstract:
A factorial design of experiments and a response surface methodology were implemented to investigate the liquid-liquid extraction of zinc(II) from an acetate medium using 1-butyl-imidazolium di(2-ethylhexyl) phosphate [BIm+][D2EHP-]. The extraction parameters, namely the initial pH (2.5, 4.5, and 6.6), ionic liquid concentration (1, 5.5, and 10 mM) and salt concentration (0.01, 5, and 10 mM), were optimized using a three-level full factorial design (3³). The results of the factorial design demonstrate that all these factors are statistically significant, including the quadratic effects of pH and ionic liquid concentration. The order of significance was: IL concentration > salt effect > initial pH. Analysis of variance (ANOVA), showing a high coefficient of determination (R² = 0.91) and low probability values (P < 0.05), confirms the validity of the predicted second-order quadratic model for Zn(II) extraction. The optimum conditions for the extraction of zinc(II) at constant temperature (20 °C), initial Zn(II) concentration (1 mM) and an A/O ratio of unity were: initial pH 4.8, extractant concentration 9.9 mM, and NaCl concentration 8.2 mM. Under the optimized conditions, the metal ion could be quantitatively extracted.
Keywords: Ionic liquid, response surface methodology, solvent extraction, zinc acetate.
1075 Real Power Generation Scheduling to Improve Steady State Stability Limit in the Java-Bali 500kV Interconnection Power System
Authors: Indar Chaerah Gunadin, Adi Soeprijanto, Ontoseno Penangsang
Abstract:
This paper discusses an active power generation scheduling method to increase the steady state stability limit of a power system. Several generation scheduling methods, namely the Lagrange method, the PLN (Indonesian electricity company) operating schedule, and the proposed Z-Thevenin-based method, are studied and compared with respect to steady state stability. The method proposed in this paper is built upon the Thevenin equivalent impedance values between each load and each generator. The steady state stability index is obtained with the REI-DIMO method. The study considers the 500 kV Java-Bali interconnection system. The simulation results show that the proposed method gives the highest steady state stability limit compared to the other scheduling approaches, namely Lagrange and PLN operation. Thus, the proposed method can be used to improve the steady state stability limit of the system, especially under peak load conditions.
Keywords: generation scheduling, steady-state stability limit, REI Dimo, margin stability
1074 Scheduling Maintenance Actions for Gas Turbines Aircraft Engines
Authors: Anis Gharbi
Abstract:
This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each turbine consists of parts which frequently require replacement. A finite inventory of spare parts is available, and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts; hence, these parts have different field lives. The goal is to find a replacement part sequencing that maximizes the time that the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem in which the minimum completion time has to be maximized. Two models have been developed. The first one is an optimization model based on a 0-1 linear programming formulation, while the second one is an approximate procedure which consists of decomposing the problem into several two-machine subproblems. Each subproblem is optimally solved using the first model. Both models have been implemented using Lingo and have been tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
Keywords: Aircraft turbines, Scheduling, Identical parallel machines, 0-1 linear programming, Heuristic.
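A toy sketch of the objective described above, assuming each spare part simply contributes its field life to the turbine it is assigned to: a greedy longest-first assignment (not the authors' 0-1 LP or decomposition procedure) that tries to maximize the minimum turbine operating time.

```python
def greedy_machine_covering(field_lives, n_turbines):
    """Assign parts (by field life) to turbines, longest first, always to the
    currently least-loaded turbine, so as to push up the minimum total life."""
    loads = [0.0] * n_turbines
    assignment = [[] for _ in range(n_turbines)]
    for life in sorted(field_lives, reverse=True):
        k = loads.index(min(loads))      # least-loaded turbine
        loads[k] += life
        assignment[k].append(life)
    return min(loads), assignment        # objective: maximize this minimum

# Hypothetical mixed inventory of new and refurbished part lives (hours).
lives = [900, 850, 700, 640, 500, 480, 450, 300, 250, 200]
print(greedy_machine_covering(lives, n_turbines=3))
```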
1073 Cost Analysis of Hybrid Wind Energy Generating System Considering CO2 Emissions
Authors: M. A. Badr, M.N. El Kordy, A. N. Mohib, M. M. Ibrahim
Abstract:
The basic objective of this research is to study the effect of a hybrid wind energy system on the cost of generated electricity, considering the cost of reducing CO2 emissions. The system consists of small wind turbine(s), a storage battery bank and a diesel generator (W/D/B). Using an optimization software package, different system configurations are investigated to reach the optimum configuration based on the net present cost (NPC) and cost of energy (COE) as economic optimization criteria. The cost of avoided CO2 is taken into consideration. The system is intended to supply the electrical load of a small community (of six families) in a remote Egyptian area. The investigated system is not connected to the electricity grid and may replace an existing conventional diesel-powered electric supply system to reduce fuel consumption and CO2 emissions. The simulation results showed that the W/D energy system is more economic than diesel alone. The estimated COE is 0.308 $/kWh; after deducting the cost of avoided CO2, the COE reaches 0.226 $/kWh, which is an external benefit of the wind turbine, as there are no pollutant emissions during the operational phase.
Keywords: Hybrid wind turbine systems, remote areas electrification, simulation of hybrid energy systems, techno-economic study.
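The reported COE adjustment can be reproduced with simple arithmetic; the per-kWh CO2 credit below is inferred from the two quoted figures (0.308 and 0.226 $/kWh) and is only illustrative:

```python
# Cost of energy before and after crediting the avoided CO2 (values from the abstract).
coe = 0.308            # $/kWh for the W/D/B system
co2_credit = 0.082     # $/kWh, inferred as the difference 0.308 - 0.226
coe_with_credit = coe - co2_credit
print(f"COE net of avoided-CO2 benefit: {coe_with_credit:.3f} $/kWh")   # ~0.226
```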
1072 Optimization of Control Parameters for MRR in Injection Flushing Type of EDM on Stainless Steel 304 Workpiece
Authors: M. S. Reza, M. Hamdi, A.S. Hadi
Abstract:
The operating control parameters of the injection flushing type of electrical discharge machining (EDM) process on a stainless steel 304 workpiece with copper tools are optimized according to an individual machining characteristic, i.e. the material removal rate (MRR). A low MRR during the EDM process decreases machining productivity. Hence, the quality characteristic for MRR is set to higher-the-better to achieve the optimum machining productivity. The Taguchi method has been used for the construction, layout and analysis of the experiment for this machining characteristic. The use of the Taguchi method saves a lot of time and cost in preparing and machining the experiment samples. An L18 orthogonal array, a fundamental component of the statistical design of experiments, has been used to plan the experiments, and Analysis of Variance (ANOVA) is used to determine the optimum machining parameters for this machining characteristic. The control parameters selected for the optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter and dielectric liquid pressure. The results show that the higher the discharge voltage, the higher the MRR.
Keywords: ANOVA, EDM, Injection Flushing, L18 Orthogonal Array, MRR, Stainless Steel 304.
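Since MRR is treated as a higher-the-better characteristic, the Taguchi signal-to-noise ratio commonly used for that case can be sketched as below (the generic formula with made-up replicate data, not the paper's measurements):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a higher-the-better response, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical MRR replicates (mm^3/min) for one row of the L18 array.
print(sn_larger_is_better([5.2, 5.6, 5.1]))
```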
1071 Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments
Authors: Sunita Dhingra, Satinder Bal Gupta, Ranjit Biswas
Abstract:
Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven to be an excellent technique for finding an optimal solution. In the past, several GA-based methods have been proposed for this problem, but they all consider a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective is considered, namely the weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm can be improved by optimizing its parameters, such as the crossover operator, mutation operator, crossover probability, selection function, etc. The effects of the GA parameters on minimization of the bi-criteria fitness function, and the subsequent setting of these parameters, have been determined by the central composite design (CCD) approach of response surface methodology (RSM) from design of experiments. The experiments have been performed with different levels of the GA parameters, and analysis of variance has been performed to identify the parameters significant for the simultaneous minimization of makespan and total completion time.
Keywords: Multiprocessor task scheduling, Design of experiments, Genetic Algorithm, Makespan, Total completion time.
1070 Statistical Optimization of Medium Components for Biomass Production of Chlorella pyrenoidosa under Autotrophic Conditions and Evaluation of Its Biochemical Composition under Stress Conditions
Authors: N. P. Dhull, K. Gupta, R. Soni, D. K. Rahi, S. K. Soni
Abstract:
The aim of the present work was to statistically design an autotrophic medium for maximum biomass production by Chlorella pyrenoidosa using response surface methodology. After evaluating a one-factor-at-a-time approach, K2HPO4, KNO3, MgSO4.7H2O and NaHCO3 were identified as the most critical autotrophic medium components among those of the fog's medium. The study showed that the maximum biomass yield was achieved when the concentrations of MgSO4.7H2O, K2HPO4, KNO3 and NaHCO3 were 0.409 g/L, 0.24 g/L, 1.033 g/L, and 3.265 g/L, respectively. The study reports that the biomass productivity of C. pyrenoidosa improved from 0.14 g/L in the defined fog's medium to 1.40 g/L in the modified fog's medium, a 10-fold increase. The biochemical composition of C. pyrenoidosa was altered using nitrogen-limiting stress, bringing about a 5.23-fold increase in lipid content compared to the control (cells without stress), as analyzed by the FTIR integration method.
Keywords: Autotrophic condition, Chlorella pyrenoidosa, FTIR, Response Surface Methodology, Optimization.
1069 Evolving a Fuzzy Rule-Base for Image Segmentation
Abstract:
A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim is to automatically produce a fuzzy system for color classification and image segmentation with the least number of rules and the minimum error rate. Particle swarm optimization is a subclass of evolutionary algorithms inspired by the social behavior of fishes, bees, birds, etc., that live together in colonies. We use the comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions because it discourages premature convergence. Each particle of the swarm encodes a set of fuzzy rules. During evolution, a population member tries to maximize a fitness criterion, which here is a high classification rate and a small number of rules. Finally, the particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Our results, using this method for soccer field image segmentation in RoboCup contests, show 89% performance. Less computational load is needed with this method compared with other methods such as ANFIS, because it generates a smaller number of fuzzy rules. A large and varied training dataset makes the proposed method invariant to illumination noise.
Keywords: Comprehensive learning particle swarm optimization, fuzzy classification.
1068 A Particle Swarm Optimal Control Method for DC Motor by Considering Energy Consumption
Authors: Yingjie Zhang, Ming Li, Ying Zhang, Jing Zhang, Zuolei Hu
Abstract:
In the actual start-up process of DC motors, the DC drive system often faces a conflict between energy consumption and acceleration performance. To resolve this conflict, this paper proposes a comprehensive performance index in which an energy consumption index is added to the classical control performance index for the DC motor starting process. Taking the comprehensive performance index as the cost function, a particle swarm optimization algorithm is designed to optimize the comprehensive performance. Simulations of the optimization of the comprehensive performance of the DC motor are then conducted, with the weight coefficient of the energy consumption index properly designed. The simulation results show that as the weight of energy consumption increased, the energy efficiency was significantly improved at the expense of a slight sacrifice in the rapidity indicators under the comprehensive performance index method. Compared with a traditional proportional-integral-derivative controller, the energy efficiency increased from 63.18% to 68.48%, while the response time was simultaneously reduced from 0.2875 s to 0.1736 s.
Keywords: Comprehensive performance index, energy consumption, acceleration performance, particle swarm optimal control.
1067 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study
Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi
Abstract:
The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks: "What is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers?" It is widely used by package carrier companies' distribution centers. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up the shipments during a day. In this article, an optimization model is constructed to create a new TSP variant that optimizes the routing in a courier organization with a consideration of congestion in Amman, the capital of Jordan. Real data were collected by different methods and analyzed. Then, Concert Technology (CPLEX) was used to solve the proposed model for some randomly generated data instances and for the real collected data. Finally, the results showed a great improvement in time compared with the current trip times, and an economic study was conducted afterwards to determine the impact of using such models.
Keywords: Travel salesman problem, congestions, pick-up, integer programming, package carriers, service engineering.
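A minimal brute-force TSP sketch in plain Python (not the CPLEX/Concert model from the paper) shows the objective being minimized; it is only practical for a handful of stops, whereas the article's congestion-aware model handles realistic instances:

```python
from itertools import permutations

def best_tour(dist):
    """Exhaustively find the cheapest tour starting and ending at node 0."""
    n = len(dist)
    best_cost, best_route = float("inf"), None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_cost, best_route = cost, route
    return best_cost, best_route

# Hypothetical travel-time matrix (minutes) between a depot and 4 customers.
dist = [[0, 12, 10, 19, 8],
        [12, 0, 3, 7, 2],
        [10, 3, 0, 6, 20],
        [19, 7, 6, 0, 4],
        [8, 2, 20, 4, 0]]
print(best_tour(dist))
```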
1066 Near-Field Robust Adaptive Beamforming Based on Worst-Case Performance Optimization
Authors: Jing-ran Lin, Qi-cong Peng, Huai-zong Shao
Abstract:
The performance of adaptive beamforming degrades substantially in the presence of steering vector mismatches. This degradation is especially severe in the near-field, because the 3-dimensional source location is more difficult to estimate than the 2-dimensional direction of arrival in far-field cases. As a solution, a novel approach to near-field robust adaptive beamforming (RABF) is proposed in this paper. It is a natural extension of traditional far-field RABF and belongs to the class of diagonal loading approaches, with the loading level determined based on worst-case performance optimization. However, unlike methods that solve for the optimal loading by iteration, it offers a simple closed-form solution after some approximations, and consequently the optimal weight vector can be expressed in closed form. Besides simplicity and low computational cost, the proposed approach reveals how different factors affect the optimal loading as well as the weight vector. Its excellent performance in the near-field is confirmed via a number of numerical examples.
Keywords: Robust adaptive beamforming (RABF), near-field, steering vector mismatches, diagonal loading, worst-case performance optimization.
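A minimal numpy sketch of a diagonally loaded minimum-variance beamformer, the generic closed form w proportional to (R + eps*I)^(-1) a, with a fixed illustrative loading level rather than the worst-case-optimal loading derived in the paper:

```python
import numpy as np

def loaded_mvdr_weights(R, a, loading):
    """Diagonally loaded minimum-variance beamformer weights."""
    Rl = R + loading * np.eye(R.shape[0])
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)              # distortionless response toward a

# Hypothetical 4-element array: sample covariance plus a presumed steering vector.
rng = np.random.default_rng(1)
snapshots = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
a = np.exp(1j * np.pi * np.arange(4) * np.sin(np.deg2rad(20.0)))
print(loaded_mvdr_weights(R, a, loading=0.1))
```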
1065 Effect of Process Parameters on the Proximate Composition, Functional and Sensory Properties
Authors: C. I. Omohimi, O. P. Sobukola, K. O. Sarafadeen, L.O. Sanni
Abstract:
Flour from Mucuna beans (Mucuna pruriens) was used to produce a texturized meat analogue using a single screw extruder, in order to monitor modifications of the proximate composition and the functional properties at a high moisture level. Response surface methodology based on a Box-Behnken design with three levels of barrel temperature (110, 120, 130 °C), screw speed (100, 120, 140 rpm) and feed moisture (44, 47, 50%) was used in 17 runs. Regression models describing the effect of the variables on the product responses were obtained. Descriptive profile analyses and a consumer acceptability test were carried out on the optimized flavoured extruded meat analogue. The responses were mostly affected by barrel temperature and moisture level and, to a lesser extent, by screw speed. Optimization results based on the desirability concept indicated that a barrel temperature of 120.15 °C, a feed moisture of 47% and a screw speed of 119.19 rpm would produce a meat analogue with preferable proximate composition, functional and sensory properties, which reveals consumers' liking for the product.
Keywords: Functional properties, mucuna bean flour, optimization, proximate composition, texturized meat analogue.
1064 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination
Authors: N. Santatriniaina, J. Deseure, T.Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana
Abstract:
Nowadays, with the increase of wafer size and the decrease of the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and quantify the AMCs cross-contamination processes. This work investigates the numerical tools required to study the AMCs cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. The analytical solution of the one-dimensional problem was developed, and the calibration of the physical constants was performed. The least-squares distance between the model (the analytical 1D solution) and the experimental data is minimized. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also expressed as boundary conditions, using the Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first using optimization methods with the analytical solution to identify the physical constants, and second using the finite element method including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.
Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization.
1063 Mining Correlated Bicluster from Web Usage Data Using Discrete Firefly Algorithm Based Biclustering Approach
Authors: K. Thangavel, R. Rathipriya
Abstract:
For the past decade, biclustering has become a popular data mining technique, not only in the field of biological data analysis but also in other applications, like text mining and market data analysis, with high-dimensional two-way datasets. Biclustering clusters both rows and columns of a dataset simultaneously, as opposed to traditional clustering, which clusters either the rows or the columns of a dataset. It retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. The Firefly Algorithm (FA) is a recently proposed metaheuristic inspired by the collective behavior of fireflies. This paper provides a preliminary assessment of a discrete version of FA (DFA) for the task of mining coherent, large-volume biclusters from web usage data. The experiments were conducted on two web usage datasets from a public dataset repository, whereby the performance of DFA was compared with that exhibited by another population-based metaheuristic, binary Particle Swarm Optimization (PSO). The results achieved demonstrate the usefulness of DFA in tackling the biclustering problem.
Keywords: Biclustering, Binary Particle Swarm Optimization, Discrete Firefly Algorithm, Firefly Algorithm, Usage profile, Web usage mining.
1062 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. This method converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by previous works, we modify the step size of the steepest descent method. We then compare the modified method against the Barzilai and Borwein method, the alternate minimization gradient method and Yuan's method on quadratic function cases, in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes yield faster convergence compared to the other methods, especially for cases with large dimensions.
Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.
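For context, the Barzilai and Borwein step size mentioned above can be sketched for a quadratic f(x) = 0.5*x'Ax - b'x; this is the standard BB1 rule, not the new step sizes proposed by the authors:

```python
import numpy as np

def bb_gradient_descent(A, b, x0, iters=50):
    """Gradient descent with the Barzilai-Borwein (BB1) step size on a quadratic."""
    x = x0.copy()
    g = A @ x - b                      # gradient of 0.5*x'Ax - b'x
    alpha = 1e-3                       # initial step before BB information exists
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        if np.linalg.norm(s) < 1e-12:  # converged; avoid a 0/0 step
            break
        alpha = (s @ s) / (s @ y)      # BB1 step: s's / s'y
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(bb_gradient_descent(A, b, np.zeros(2)))   # should approach A^{-1} b
```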
1061 Capacity Optimization for Local and Cooperative Spectrum Sensing in Cognitive Radio Networks
Authors: Ayman A. El-Saleh, Mahamod Ismail, Mohd. A. M. Ali, Ahmed N. H. Alnuaimy
Abstract:
Dynamic spectrum allocation solutions, such as cognitive radio networks, have been proposed as a key technology to exploit frequency segments that are spectrally underutilized. Cognitive radio users work as secondary users who need to constantly and rapidly sense the presence of primary users, or licensees, so as to utilize their frequency bands when they are inactive. Short sensing cycles should be run by the secondary users to achieve higher throughput rates, as well as to provide a low level of interference to the primary users by immediately vacating their channels once they have been detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing has been investigated under two distinct scenarios, namely constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU). The simulation results show that the design of the sensing slot duration is very critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, cooperating more users has no effect if the sensing time used exceeds 5% of the total frame duration.
Keywords: Capacity, cognitive radio, optimization, spectrum sensing.
1060 Vibration Attenuation Using Functionally Graded Material
Authors: Saeed Asiri, Hassan Hedia, Wael Eissa
Abstract:
The aim of this work was to attenuate the vibration amplitude in a Cessna 172 airplane wing by using a functionally graded material (FGM) instead of a uniform or composite material. Wing strength was assessed by means of a stress analysis study, while wing vibration amplitudes and mode shapes were obtained by means of modal and harmonic analyses. The methodology was first verified on a simple cantilever plate model; the results were promising, so the same methodology was applied to the airplane wing model. Aluminum models, titanium models, and functionally graded aluminum-titanium models were compared, showing a great vibration attenuation after using the FGM. Optimization of the FGM gradation satisfied our objective of reducing and attenuating the vibration amplitudes and demonstrated the effect of using FGM on the vibration behavior. Testing the aluminum-rich models and comparing them with the titanium-rich model was part of the optimization in this paper. The results show a significant attenuation in vibration magnitudes when using FGM instead of a titanium plate, and aluminum wings with FGM spars instead of plain aluminum wings. It is also recommended that, in future work, the model scale be changed to 1:10 or even 1:1 when computing capabilities allow.
Keywords: Vibration, Attenuation, FGM, ANSYS2011, FEM.
1059 Network Reconfiguration for Load Balancing in Distribution System with Distributed Generation and Capacitor Placement
Authors: T. Lantharthong, N. Rugthaicharoencheep
Abstract:
This paper presents an efficient algorithm for the optimization of radial distribution systems through network reconfiguration to balance feeder loads and eliminate overload conditions. The system load-balancing index is used to determine the loading conditions of the system and the maximum system loading capacity. The index value has to be minimum in the optimal network reconfiguration for load balancing. A Tabu search algorithm is employed to search for the optimal network reconfiguration. The basic idea behind the search is to move from a current solution to its neighborhood by effectively utilizing a memory to provide an efficient search for optimality. It requires low computational effort and is able to find good-quality configurations. Simulation results are presented for a radial 69-bus system with distributed generation and capacitor placement. The study results show that the optimal on/off patterns of the switches can be identified to give the best network reconfiguration, involving balancing of feeder loads while respecting all the constraints.
Keywords: Network reconfiguration, Distributed generation, Capacitor placement, Load balancing, Optimization technique.
1058 Thermodynamic Modeling of the High Temperature Shift Converter Reactor Using Minimization of Gibbs Free Energy
Authors: H. Zare Aliabadi
Abstract:
The equilibrium chemical reactions taking place in a converter reactor of the Khorasan Petrochemical Ammonia plant were studied using the minimization of Gibbs free energy method. In the minimization of the Gibbs free energy function, the Davidon-Fletcher-Powell (DFP) optimization procedure with penalty terms in a well-defined objective function was used. It should be noted that in the DFP procedure with the corresponding penalty terms, the Hessian matrices for the composition of the constituents in the converter reactor can be excluded. This can, in fact, be considered the main advantage of the DFP optimization procedure. The effect of temperature and pressure on the equilibrium composition of the constituents was also investigated. The results obtained in this work were compared with data collected from the converter reactor of the Khorasan Petrochemical Ammonia plant, and it was concluded that they are in good agreement with the industrial data. Notably, the algorithm developed in this work, in spite of its simplicity, has the advantage of short computation and convergence times.
Keywords: Gibbs free energy, converter reactors, Chemical equilibrium
1057 Motor Imagery Signal Classification for a Four State Brain Machine Interface
Authors: Hema C. R., Paulraj M. P., S. Yaacob, A. H. Adom, R. Nagarajan
Abstract:
Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMIs). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on an Elman recurrent neural network and a functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; the results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance, it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has a better training time and maximum classification rate. The proposed methods promise to provide a useful alternative general procedure for motor imagery classification.
Keywords: Motor Imagery, Brain Machine Interfaces, Neural Networks, Particle Swarm Optimization, EEG signal processing.
1056 Daylightophil Approach towards High-Performance Architecture for Hybrid-Optimization of Visual Comfort and Daylight Factor in BSk
Authors: Mohammadjavad Mahdavinejad, Hadi Yazdi
Abstract:
The greatest influence we receive from the world is shaped through visual form; thus, light is an inseparable element of human life. The use of daylight in visual perception and environment readability is an important issue for users. With regard to the hazards of greenhouse gas emissions from fossil fuels, and in line with attitudes toward the reduction of energy consumption, the correct use of daylight results in lower levels of energy consumed by artificial lighting, heating and cooling systems. Windows are usually the starting points for analysis and simulations to achieve visual comfort and energy optimization; therefore, attention should be paid to the orientation of buildings to minimize electrical energy and maximize the use of daylight. In this paper, using the Design Builder software, the effect of the orientation of an 18 m² (3 m × 6 m) room with a height of 3 m in the city of Tehran has been investigated, considering the design constraints. In these simulations, the orientation of the building has been changed in one-degree steps, and the window is located on the smaller face (3 m × 3 m) of the building with an 80% ratio. The results indicate that the orientation of a building has a great deal to do with energy efficiency in meeting high-performance architecture and planning goals and objectives.
Keywords: Daylight, window, orientation, energy consumption, design builder.
1055 A Nondominated Sorting Genetic Algorithm for Shortest Path Routing Problem
Authors: C. Chitra, P. Subbaraj
Abstract:
The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. This problem has been addressed by considering the quality-of-service parameters, delay and cost, as objectives treated separately or as a weighted sum of both objectives. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in one single run, and this ability makes them attractive for solving problems with multiple and conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) for solving the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solution does not deteriorate in subsequent generations. Results for a sample test network are presented to demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in one single run. The results obtained by NSGA are compared with a single-objective weighting factor method, for which a Genetic Algorithm (GA) was applied.
Keywords: Multiobjective optimization, Non-dominated Sorting Genetic Algorithm, Routing, Weighted sum.
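The core of NSGA, non-dominated sorting, can be sketched as follows for a minimization problem with two objectives such as delay and cost (a plain O(n²)-per-front illustration, not the paper's full routing algorithm):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_fronts(points):
    """Split objective vectors into successive Pareto fronts (minimization)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (delay, cost) values for candidate routes.
objs = [(2, 9), (3, 7), (5, 4), (4, 6), (6, 3), (7, 8), (8, 2)]
print(non_dominated_fronts(objs))
```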
1054 Small Signal Stability Assessment Employing PSO Based TCSC Controller with Comparison to GA Based Design
Authors: D. Mondal, A. Chakrabarti, A. Sengupta
Abstract:
This paper aims to select the optimal location and setting parameters of a TCSC (Thyristor Controlled Series Compensator) controller, using Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), to mitigate small signal oscillations in a multimachine power system. Though Power System Stabilizers (PSSs) are the prime choice for this issue, installation of a FACTS device has been suggested here in order to achieve appreciable damping of system oscillations. However, the performance of any FACTS device highly depends upon its parameters and a suitable location in the power network. In this paper, PSO- and GA-based techniques are used separately, and their performances are compared, to investigate this problem. The results of the small signal stability analysis are presented using eigenvalues as well as time-domain responses in the face of two common power system disturbances, e.g., varying load and transmission line outage. It has been revealed that the PSO-based TCSC controller is more effective than the GA-based controller, even during the critical loading condition.
Keywords: Genetic Algorithm, Particle Swarm Optimization, Small Signal Stability, Thyristor Controlled Series Compensator.
1053 Temporal Analysis of Magnetic Nerve Stimulation–Towards Enhanced Systems via Virtualisation
Authors: Stefan M. Goetz, Thomas Weyh, Hans-Georg Herzog
Abstract:
The triumph of inductive neuro-stimulation since its rediscovery in the 1980s has been quite spectacular. In many branches, ranging from clinical applications to basic research, this technique is absolutely indispensable. Nevertheless, the basic knowledge about the processes underlying the stimulation effect is still very rough and rarely refined in a quantitative way. This seems to be not only an inexcusable blank spot in biophysics and for stimulation prediction, but also a fundamental hindrance to technological progress. The already very sophisticated devices have reached a stage where further optimization requires better strategies than those provided by simple linear membrane models of the integrate-and-fire style. Addressing this problem for the first time, we suggest in the following text a way to perform a virtual quantitative analysis of a stimulation system. Concomitantly, this ansatz seems to provide a route towards a better understanding, by using nonlinear signal processing and taking the nerve as a filter that is adapted for neuronal magnetic stimulation. The model is compact and easy to adjust. The whole setup behaved very robustly during all performed tests. As an example, a recent innovative stimulator design known as cTMS is analyzed and dimensioned with this approach in the following. The results show hitherto unforeseen potentials.
Keywords: Theory of magnetic stimulation, inversion, optimization, high voltage oscillator, TMS, cTMS.
1052 Least-Squares Support Vector Machine for Characterization of Clusters of Microcalcifications
Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha
Abstract:
Clusters of Microcalcifications (MCCs) are the most frequent symptoms of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Squares Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant, based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made. For the comparative evaluation, a confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas, containing 235 malignant and 145 benign samples, were collected from mammogram images of the DDSM database. A set of 50 features is calculated for each suspicious area. After this, an optimal subset of the 23 most suitable features is selected from the 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
Keywords: Clusters of Microcalcifications, Ductal Carcinoma in Situ, Least-Square Support Vector Machine, Particle Swarm Optimization.
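For orientation, a bare-bones LS-SVM classifier with an RBF kernel reduces training to a single linear system; the sketch below uses one common LS-SVM formulation (the function-estimation form applied to ±1 labels) with made-up data and parameters, not the paper's feature set or tuning:

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system for bias b and multipliers alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # b, alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Tiny synthetic benign (-1) / malignant (+1) feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, b, alpha, X[:5]))
```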