Search results for: Powell optimization.
964 Simultaneous Saccharification and Fermentation (SSF) of Sugarcane Bagasse - Kinetics and Modeling
Authors: E. Sasikumar, T. Viruthagiri
Abstract:
Simultaneous Saccharification and Fermentation (SSF) of sugarcane bagasse by cellulase and Pachysolen tannophilus MTCC 1077 was investigated in the present study. Important process variables for ethanol production from pretreated bagasse were optimized using Response Surface Methodology (RSM) based on central composite design (CCD) experiments. A 2³ five-level CCD with central and axial points was used to develop a statistical model for the optimization of process variables such as incubation temperature (25–45°C) X1, pH (5.0–7.0) X2 and fermentation time (24–120 h) X3. Data obtained from RSM on ethanol production were subjected to analysis of variance (ANOVA) and analyzed using a second-order polynomial equation, and contour plots were used to study the interactions among the three relevant variables of the fermentation process. The fermentation experiments were carried out using an online-monitored modular fermenter of 2 L capacity. The processing parameter setup reaching a maximum response for ethanol production was obtained when applying the optimum values for temperature (32°C), pH (5.6) and fermentation time (110 h). Maximum ethanol concentration (3.36 g/l) was obtained from 50 g/l pretreated sugarcane bagasse at the optimized process conditions in aerobic batch fermentation. Kinetic models such as the Monod model, the modified logistic model, the modified logistic incorporated Luedeking–Piret model and the modified logistic incorporated modified Luedeking–Piret model have been evaluated and their constants were predicted.
Keywords: Sugarcane bagasse, ethanol, optimization, Pachysolen tannophilus.
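As a rough illustration of the response-surface step described above, the sketch below fits a second-order polynomial in the three coded variables (temperature X1, pH X2, time X3) by ordinary least squares and locates the predicted optimum on a grid. The CCD runs and ethanol responses are hypothetical placeholders, not the paper's data; this is a minimal sketch of the generic RSM workflow, not the authors' fitted model.

```python
import numpy as np

# Hypothetical coded CCD runs (X1 = temperature, X2 = pH, X3 = time) -- placeholder data,
# not the values reported in the abstract.
a = 1.682  # axial distance for a 3-factor rotatable CCD
X = np.array([[-1,-1,-1],[1,-1,-1],[-1,1,-1],[1,1,-1],
              [-1,-1,1],[1,-1,1],[-1,1,1],[1,1,1],
              [-a,0,0],[a,0,0],[0,-a,0],[0,a,0],[0,0,-a],[0,0,a],
              [0,0,0],[0,0,0]])
y = np.array([2.1,2.4,2.3,2.6,2.8,3.0,2.9,3.1,
              2.5,2.7,2.6,2.8,2.4,3.2,3.3,3.3])  # hypothetical ethanol titres (g/l)

def design(X):
    """Second-order model terms: intercept, linear, squared and two-way interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # least-squares fit of the quadratic model

# Locate the predicted optimum on a coarse grid of coded levels.
g = np.linspace(-a, a, 41)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
pred = design(grid) @ beta
best = grid[np.argmax(pred)]
print("predicted optimum (coded X1, X2, X3):", np.round(best, 2),
      "-> ethanol approx", round(pred.max(), 2), "g/l")
```

The grid search stands in for the contour-plot reading described in the abstract; with a real CCD data set one would also examine the ANOVA of the fitted coefficients before trusting the optimum.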
963 Structural Damage Detection via Incomplete Modal Data Using Output Data Only
Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining efficient tools to detect damage in structures at an early stage. In the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique. The technique detects the damage location for an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using the 'Two Points Condensation (TPC)' technique. This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data. The current stiffness matrices are compared with the original (undamaged) stiffness matrices. Large percentage changes in the matrix coefficients indicate the location of the damage. The TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency shows that this technique can also be used for large structures.
Keywords: Damage detection, two points condensation, structural health monitoring, signal processing, optimization.
962 An Assessment of Software Process Optimization Compared to International Best Practice in Bangladesh
Authors: Mohammad Shahadat Hossain Chowdhury, Tania Taharima Chowdhary, Hasan Sarwar
Abstract:
The challenge for software development houses in Bangladesh is to find a path that uses a minimal process rather than gigantic CMMI- or ISO-type practices and process areas. Small and medium size organizations in Bangladesh want to ensure minimum basic Software Process Improvement (SPI) in day to day operational activities. Perhaps these basic practices will help them realize their companies' improvement goals. This paper focuses on the key issues in basic software practices for small and medium size software organizations that cannot afford CMMI, ISO, ITIL, etc. compliance certifications. This research also suggests a basic software process practices model for Bangladesh and shows the mapping of our suggestions to international best practice. In this competitive IT world, small and medium size software companies require collaboration and strengthening to transform their current perspective into the inseparable global IT scenario. This research performed investigations and analyses of several projects' life cycles, current good practices, effective approaches, reality, and pain areas of practitioners. We carried out reasoning, root cause analysis, and comparative analysis of various approaches, methods, practices, and justifications of CMMI against real life. We avoided reinventing the wheel; our focus is on minimal practice, which will ensure a dignified satisfaction between organizations and software customers.
Keywords: Compare with CMMI practices, Key success factors, Small and medium software house, Software process improvement, Software process optimization.
961 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite
Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar
Abstract:
This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design of experiments and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
Keywords: Metal matrix composite, Drilling, Optimization, step drill, Surface roughness, burr height, hole diameter error.
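To make the grey relational step concrete, here is a minimal sketch of how a grey relational grade is typically computed for smaller-the-better responses (surface roughness, burr height, hole diameter error). The trial data and equal weights are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical responses for a few trials: columns = surface roughness, burr height,
# hole diameter error (all smaller-the-better); rows = experimental runs.
Y = np.array([[1.8, 0.42, 0.06],
              [2.3, 0.55, 0.09],
              [1.5, 0.38, 0.05],
              [2.0, 0.60, 0.08]])

# Step 1: grey relational normalization (smaller-the-better form).
norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# Step 2: deviation sequence from the ideal (normalized value of 1).
delta = 1.0 - norm

# Step 3: grey relational coefficient with distinguishing coefficient zeta = 0.5.
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 4: weighted grey relational grade (equal weights assumed); a higher grade means a better run.
weights = np.array([1/3, 1/3, 1/3])
grade = grc @ weights
print("grey relational grades:", np.round(grade, 3), "-> best run:", int(np.argmax(grade)) + 1)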
960 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is designed from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly compared to the PSO-PID control technique. This study can benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
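As a loose illustration of the PSO-based tuning idea (not the paper's PWR model or its neural-network supervisor), the sketch below uses a hand-rolled PSO to pick PID gains for a toy first-order plant by minimizing the integral of time-weighted absolute error (ITAE) of a unit-step response; the plant, gain bounds, and PSO settings are all assumptions for illustration.

```python
import numpy as np

def itae_cost(gains, tau=5.0, dt=0.01, T=20.0):
    """Simulate PID control of a toy first-order plant dy/dt = (-y + u)/tau and return the ITAE."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    cost = 0.0
    for k in range(int(T / dt)):
        e = 1.0 - y                      # unit-step reference
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau         # explicit Euler step of the plant
        if not np.isfinite(y):
            return 1e9                   # penalize numerically unstable gain sets
        cost += (k * dt) * abs(e) * dt   # ITAE accumulation
        prev_e = e
    return cost

rng = np.random.default_rng(0)
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([20.0, 5.0, 2.0])   # assumed gain bounds
x = rng.uniform(lo, hi, size=(30, 3)); v = np.zeros_like(x)
pbest = x.copy(); pcost = np.array([itae_cost(p) for p in x])
gbest = pbest[pcost.argmin()]

for _ in range(40):                      # standard PSO velocity/position update
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    c = np.array([itae_cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[pcost.argmin()]

print("PSO-tuned (Kp, Ki, Kd):", np.round(gbest, 3), " ITAE:", round(pcost.min(), 4))
```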
959 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model
Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa
Abstract:
In this paper, mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. The return clauses, which permit the accumulated wealth of deceased members to be claimed, are considered, and the remaining wealth is not equally distributed among the remaining members as in the literature. We assume that before investment, the surplus, which includes funds of members who died after retirement, is added to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members and obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtain the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we study the effect of the various parameters on the optimal investment strategies and the effect of the risk-averse level on the efficient frontier. We observe that the optimal investment strategy is the same as in the literature, and that the surplus decreases the proportion of the wealth invested in the risky asset.
Keywords: DC pension fund, Hamilton-Jacobi-Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.
958 Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran
Authors: Peiman Ariaii, Hamid Tavakolipour, Mohsen Pirdashti, Rabehe Izadi Amoli
Abstract:
Raisin Concentrate (RC) is one of the most important products obtained in the raisin processing industries. RC products are now used to make syrups, drinks and confectionery products and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters to design the extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60°C, 70°C and 80°C), and three levels of concentration temperature (50°C, 60°C and 70°C) were the treatments. Finally, physicochemical characteristics of the obtained concentrate such as color, viscosity, percentage of reducing sugar and acidity were determined, and microbial tests (mould and yeast) were counted. The analysis was performed on the basis of a factorial arrangement in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production were obtained with round raisins when the solvent ratio was 2:1, with an extraction temperature of 60°C and a concentration temperature of 50°C. Round raisin is cheaper than the long one, and it is more economical for concentrate production. Furthermore, round raisin has more aroma and a lower color degree with increasing concentration and extraction temperatures. Finally, according to the mentioned factors, the concentrate of round raisin is recommended.
Keywords: Raisin concentrate, optimization, process parameters, round raisin, Iran.
957 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods
Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar
Abstract:
Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% from an initial concentration of 250 ppm of lead. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration and inoculum size. The optimized medium increased removal efficiency to 97% from an initial concentration of 500 ppm of lead. Immobilized Halomonas sp. ES015 cells on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the range of time intervals. Also, metal removal efficiency was not affected by flow rate changes. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015. Also, bioremediation can be done in batch cultures and semicontinuous cultures using column technology.
Keywords: Bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman.
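To illustrate how significant factors are typically identified in a two-level screening step like the Plackett-Burman stage above, the sketch below estimates each main effect as the difference between the mean response at the high (+1) and low (-1) levels. The 8-run matrix is a generic two-level screening design used as a stand-in (not an actual Plackett-Burman matrix), and the factor names and removal-efficiency responses are hypothetical placeholders, not the experimental data of the study.

```python
import numpy as np

# Hypothetical two-level screening matrix (8 runs x 3 factors, coded -1/+1) and
# hypothetical lead-removal responses (%) -- placeholders, not the paper's data.
factors = ["yeast extract", "casamino acid", "inoculum size"]
D = np.array([[ 1,  1,  1],
              [ 1,  1, -1],
              [ 1, -1,  1],
              [ 1, -1, -1],
              [-1,  1,  1],
              [-1,  1, -1],
              [-1, -1,  1],
              [-1, -1, -1]])
y = np.array([96, 90, 88, 80, 89, 83, 82, 75], dtype=float)

# Main effect of each factor = mean(response at +1) - mean(response at -1).
for name, col in zip(factors, D.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"{name:15s} effect = {effect:+.2f} % removal")
```

Factors with the largest absolute effects would then be carried into the Box-Behnken/response-surface stage described in the abstract.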
956 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: S. Aziz, B. Alexander, C. Gengnagel, S. Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the Building Information Modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m × 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: Acoustical design, additive manufacturing, computational design, multimodal optimization.
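Since the workflow above tunes coupled Helmholtz resonators by varying bottleneck and cavity dimensions, a quick back-of-the-envelope check of the target resonance frequency can be useful before running the full simulation. The sketch below applies the textbook lumped-parameter estimate f0 = (c / 2π) · sqrt(S / (V · L_eff)) with a simple end correction; the dimensions are hypothetical, and the formula ignores the resonator coupling and losses that the COMSOL model in the paper resolves.

```python
import math

def helmholtz_f0(neck_radius_m, neck_length_m, cavity_volume_m3, c=343.0):
    """Lumped-parameter resonance estimate for a single (uncoupled) Helmholtz resonator."""
    S = math.pi * neck_radius_m ** 2                 # neck cross-sectional area
    L_eff = neck_length_m + 1.7 * neck_radius_m      # crude end correction for a flanged neck
    return (c / (2.0 * math.pi)) * math.sqrt(S / (cavity_volume_m3 * L_eff))

# Hypothetical bottleneck/cavity dimensions (not taken from the paper):
print(f"estimated resonance: {helmholtz_f0(0.01, 0.03, 0.0015):.1f} Hz")
```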
955 Service Flow in Multilayer Networks: A Method for Evaluating the Layout of Urban Medical Resources
Authors: Guanglin Song
Abstract:
Situated within the context of China's tiered medical treatment system, this study aims to analyze the spatial causes of urban healthcare access difficulties from the perspective of the configuration of healthcare facilities. A social network analysis approach is employed to construct a healthcare demand and supply flow network between major residential clusters and various tiers of hospitals in the city. The findings reveal that: 1) There exists overall maldistribution and over-concentration of healthcare resources in the study area, characterized by structural imbalance. 2) The low rate of primary care utilization in the study area is a key factor contributing to congestion at higher-tier hospitals, as excessive reliance on these institutions by neighboring communities exacerbates the problem. 3) Gradual optimization of the healthcare facility layout in the study area, encompassing holistic, local, and individual institutional levels, can enhance systemic efficiency and resource balance. This research proposes a method for evaluating urban healthcare resource distribution structures based on service flows within hierarchical networks. It offers spatially targeted optimization suggestions for promoting the implementation of the tiered healthcare system and alleviating challenges related to accessibility and congestion in seeking medical care. In addition, the study provides new ideas for researchers and healthcare managers in countries and cities around the world facing similar challenges.
Keywords: Flow of public services, healthcare facilities, spatial planning, urban networks.
954 Determining Optimum Time Multiplier Setting of Overcurrent Relays Using Mixed Integer Linear Programming
Authors: P. N. Korde, P. P. Bedekar
Abstract:
The time coordination of overcurrent relays (OCRs) in a power distribution network is of great importance, as it reduces power outages by avoiding the mal-operation of the backup relays. For this, the optimum value of the time multiplier setting (TMS) of the OCRs should be chosen. The problem of determining the optimum value of TMS of OCRs in power distribution networks is formulated as a constrained optimization problem. The objective is to find the optimum value of TMS of OCRs to minimize the time of operation of the relays under the constraint of maintaining the coordination of the relays. A power distribution network can have a combination of numerical and electromechanical relays. The TMS of numerical relays can be set to any real value (which satisfies the constraints of the problem), whereas the TMS of electromechanical relays can be set only in fixed steps (0 to 1 in steps of 0.05). The main contribution of this paper is the formulation of the problem as a mixed-integer linear programming (MILP) problem and the application of Gomory's cutting plane method to find the optimum value of TMS of OCRs. The TMS values of electromechanical relays are taken as integers in the range 1 to 20 in steps of 1, and these values are mapped to 0.05 to 1 in steps of 0.05. The results obtained are compared with those obtained using the simplex method and its variants. It has been shown that the mixed-integer linear programming method outperforms the simplex method (and its variants) in the case of a system having a combination of numerical and electromechanical relays.
Keywords: Backup protection, constrained optimization, Gomory's cutting plane method, mixed-integer linear programming, overcurrent relay coordination, simplex method.
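The linearity that makes this a (mixed-integer) linear program comes from the standard inverse-time characteristic, in which the operating time is proportional to the TMS for a fixed fault current. The sketch below evaluates the IEC normal-inverse characteristic t = TMS · 0.14 / (PSM^0.02 − 1) for a hypothetical primary/backup relay pair and checks a coordination time interval (CTI) constraint; the pickup settings, fault current, TMS values, and CTI are assumptions for illustration, not settings from the paper.

```python
def op_time(tms, fault_current_a, pickup_a):
    """IEC normal-inverse overcurrent relay characteristic; linear in TMS for a given fault."""
    psm = fault_current_a / pickup_a                 # plug setting multiplier
    return tms * 0.14 / (psm ** 0.02 - 1.0)

# Hypothetical settings for a primary relay and its backup seeing the same fault.
fault = 4000.0                                       # A, assumed fault current
t_primary = op_time(tms=0.10, fault_current_a=fault, pickup_a=400.0)
t_backup  = op_time(tms=0.30, fault_current_a=fault, pickup_a=600.0)

cti = 0.3                                            # s, assumed coordination time interval
print(f"primary: {t_primary:.3f} s, backup: {t_backup:.3f} s")
print("coordination constraint satisfied:", t_backup - t_primary >= cti)
```

In the optimization, constraints of this form (t_backup − t_primary ≥ CTI) stay linear in the TMS variables, while the discrete electromechanical settings introduce the integer variables handled by the cutting-plane method.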
953 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is supposed to be the most accurate method to predict the seismic demand of structures. On the other hand, the computational time required to reach the result is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of seismic analysis makes the optimization algorithms more practical. The available approximate methods produce some amount of error in comparison with exact time history analysis, but combination methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated by calculating the modal displacement of a basic structure (in which the modal displacement has been calculated). Sampled shear steel structures are selected as case studies. The error of applying the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (in view of the time of peak ground acceleration).
Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.
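For readers unfamiliar with the modal combination rules mentioned above, the sketch below combines hypothetical modal peak responses with SRSS and with CQC using the standard equal-damping correlation coefficient; the frequencies, damping ratio, and modal peaks are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical modal peak responses and natural frequencies (rad/s) for three modes.
r = np.array([12.0, 4.0, 1.5])          # peak modal responses (arbitrary units)
w = np.array([6.0, 15.0, 26.0])         # natural circular frequencies
zeta = 0.05                             # modal damping ratio (assumed equal for all modes)

# SRSS: square root of the sum of squares of the modal peaks.
srss = np.sqrt(np.sum(r ** 2))

# CQC: includes cross-modal correlation (equal-damping correlation coefficient).
beta = w[:, None] / w[None, :]          # frequency ratios
rho = (8 * zeta**2 * (1 + beta) * beta**1.5) / \
      ((1 - beta**2)**2 + 4 * zeta**2 * beta * (1 + beta)**2)
cqc = np.sqrt(r @ rho @ r)

print(f"SRSS estimate: {srss:.2f}   CQC estimate: {cqc:.2f}")
```

With well-separated frequencies the cross terms are small and CQC approaches SRSS, which is why both are acceptable fast surrogates for full time history analysis in many cases.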
952 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
951 Enhancing Hand Efficiency of Smart Glass Cleaning Robot through Generative Design Module
Authors: Pankaj Gupta, Amit Kumar Srivastava, Nitesh Pandey
Abstract:
This article explores the domain of generative design in order to enhance the development of robot designs for innovative and efficient maintenance approaches for tall buildings. This study aims to optimize the design of robotic hands by focusing on minimizing mass and volume while ensuring they can withstand the specified pressure with equal strength. The research procedure is structured and systematic. The purpose of optimization is to enhance the efficiency of the robot and reduce the manufacturing expenses. The project seeks to investigate the application of generative design in order to optimize products. Autodesk Fusion 360 offers the capability to immediately apply the generative design functionality to the solid model. The effort involved creating a solid model of the Smart Glass Cleaning Robot and optimizing one of its components, the Hand, using generative techniques. The article has thoroughly examined the designs, outcomes, and procedure. These loads serve as a benchmark for creating designs that can endure the necessary level of pressure and preserve their structural integrity. The efficacy of the generative design process is contingent upon the selection of materials, as different materials possess distinct physical attributes. The study utilizes five different materials, namely Steel, Stainless Steel, Titanium, Aluminum, and CFRP (Carbon Fiber Reinforced Polymer), in order to investigate a range of design possibilities.
Keywords: Generative design, mass and volume optimization, material strength analysis, smart glass cleaning robot.
950 An Intelligent Water Drop Algorithm for Solving Economic Load Dispatch Problem
Authors: S. Rao Rayapudi
Abstract:
Economic Load Dispatch (ELD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper, an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm, inspired by natural rivers. A natural river often finds good paths among lots of possible paths on its way from source to destination and finally finds an almost optimal path to its destination. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantage of the proposed technique is that it is easy to implement and capable of finding a feasible near-global optimal solution with less computational effort. In order to illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions taking into account the valve-point loading effects. Numerical results show that the proposed method has good convergence properties and is better in quality of solution than other algorithms reported in the recent literature.
Keywords: Economic load dispatch, Transmission loss, Optimization, Valve point loading, Intelligent Water Drop Algorithm.
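The fuel cost model with valve-point loading mentioned above is usually written as F_i(P_i) = a_i + b_i·P_i + c_i·P_i² + |e_i·sin(f_i·(P_i,min − P_i))|. The sketch below evaluates that cost for a hypothetical 3-unit dispatch and checks the power balance; the coefficients, demand, and candidate dispatch are made-up illustration values, not the 6-unit or 20-unit test data used in the paper.

```python
import numpy as np

# Hypothetical cost coefficients for 3 units: columns = a, b, c, e, f, Pmin, Pmax.
units = np.array([
    # a      b    c       e      f      Pmin  Pmax
    [500.0, 5.3, 0.004, 300.0, 0.035,  100,  500],
    [400.0, 5.5, 0.006, 200.0, 0.042,   50,  300],
    [200.0, 5.8, 0.009, 150.0, 0.063,   50,  200],
])

def total_cost(P):
    a, b, c, e, f, Pmin, _ = units.T
    # quadratic fuel cost + rectified-sine valve-point term for each unit
    return np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (Pmin - P))))

demand = 700.0                        # MW, assumed load (transmission losses neglected here)
P = np.array([400.0, 200.0, 100.0])   # a candidate dispatch an IWD-type search would score

print(f"total cost: {total_cost(P):.1f} $/h")
print("power balance ok:", abs(P.sum() - demand) < 1e-6)
```

A metaheuristic such as IWD repeatedly proposes dispatches like P, scores them with this non-smooth cost, and keeps the feasible candidates with the lowest total cost.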
949 Reversible Binary Arithmetic for Integrated Circuit Design
Authors: D. Krishnaveni, M. Geetha Priya
Abstract:
Application of reversible logic in integrated circuits results in improved optimization of power consumption. This technology can be put into use in a variety of low power applications such as quantum computing, optical computing, nano-technology, and Complementary Metal Oxide Semiconductor (CMOS) Very Large Scale Integrated (VLSI) design. Logic gates are the basic building blocks in the design of any logic network and thus integrated circuits. In this paper, the reversible Dual Key Gate (DKG) and Dual Key Gate Pair (DKGP) gates, which work singly as a full adder/full subtractor, are used to realize the basic building blocks of logic circuits. Reversible full adder/subtractor and parallel adder/subtractor circuits are designed using other reversible gates available in the literature and compared with those based on DKG and DKGP gates. Efficient performance of reversible logic circuits relies on the optimization of the key parameters, viz. the number of constant inputs, garbage outputs and number of reversible gates. The full adder/subtractor and parallel adder/subtractor designs with reversible DKGP and DKG gates result in the least number of constant inputs, garbage outputs, and reversible gates compared to the other designs. Thus, this paper provides a threshold to build more complex arithmetic systems using these reversible logic gates, leading to the enhanced performance of computing systems.
Keywords: Low power CMOS, quantum computing, reversible logic gates, full adder, full subtractor, parallel adder/subtractor, basic gates, universal gates.
948 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities
Authors: Yu-Cheng Chou, Po Ting Lin
Abstract:
The network for delivering commodities has been an important design problem in our daily lives and many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)
947 A Hybrid Fuzzy AGC in a Competitive Electricity Environment
Authors: H. Shayeghi, A. Jalili
Abstract:
This paper presents a new Hybrid Fuzzy (HF) PID type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. In order for a fuzzy rule based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. On the other hand, the GA is a technique that emulates biological evolutionary theories to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill climbing method. The motivation for using the modified GA is to reduce the fuzzy system effort and take large parametric uncertainties into account. The global optimum value is guaranteed using the proposed method, and the speed of the algorithm's convergence is extremely improved, too. This newly developed control strategy combines the advantages of GAs and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi Stage Fuzzy (MSF) controller, robust mixed H2/H∞ and classical PID controllers through some performance indices to illustrate its robust performance for a wide range of system parameters and load changes.
Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.
946 A β-mannanase from Fusarium oxysporum SS-25 via Solid State Fermentation on Brewer’s Spent Grain: Medium Optimization by Statistical Tools, Kinetic Characterization and Its Applications
Authors: S. S. Rana, C. Janveja, S. K. Soni
Abstract:
This study is concerned with the optimization of fermentation parameters for the hyperproduction of mannanase from Fusarium oxysporum SS-25 employing a two-step statistical strategy, and with the kinetic characterization of the crude enzyme preparation. The Plackett-Burman design used to screen out the important factors in the culture medium revealed 20% (w/w) wheat bran, 2% (w/w) each of potato peels, soyabean meal and malt extract, 1% tryptone, 0.14% NH4SO4, 0.2% KH2PO4, 0.0002% ZnSO4, 0.0005% FeSO4, 0.01% MnSO4, 0.012% SDS, 0.03% NH4Cl and 0.1% NaNO3 in a brewer's spent grain based medium with 50% moisture content, inoculated with 2.8×10⁷ spores and incubated at 30°C for 6 days, to be the main parameters influencing the enzyme production. Of these factors, four variables including soyabean meal, FeSO4, MnSO4 and NaNO3 were chosen to study the interactive effects and their optimum levels in a central composite design of response surface methodology, with a final mannanase yield of 193 IU/gds. The kinetic characterization revealed the crude enzyme to be active over broader temperature and pH ranges. This could result in a 26.6% reduction in kappa number with a 4.93% higher tear index and a 1% increase in brightness when used to treat wheat straw based kraft pulp. The hydrolytic potential of the enzyme was also demonstrated on both locust bean gum and guar gum.
Keywords: Brewer’s Spent Grain, Fusarium oxysporum, Mannanase, Response Surface Methodology.
945 Solar Tracking System: More Efficient Use of Solar Panels
Abstract:
This paper shows the potential system benefits of a simple solar tracking system using a stepper motor and a light sensor. This method increases power collection efficiency by means of a device that tracks the sun to keep the panel at a right angle to its rays. A solar tracking system is designed, implemented and experimentally tested. The design details and the experimental results are shown.
Keywords: Renewable Energy, Power Optimization.
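A common way to implement this kind of single-axis tracker is to compare two light sensors mounted on either side of the panel and step the motor toward the brighter one until the readings balance. The sketch below shows that control loop in outline; the sensor and motor functions are stand-ins (the abstract does not specify the hardware interface), and the deadband and polling delay are assumed values.

```python
import random
import time

DEADBAND = 25        # assumed tolerance on the sensor difference (ADC counts)
STEP_DELAY_S = 0.5   # assumed pause between corrections

def read_ldr_east() -> int:
    """Stand-in for an ADC read of the east-facing LDR (replace with real hardware access)."""
    return 512 + random.randint(-40, 40)

def read_ldr_west() -> int:
    """Stand-in for an ADC read of the west-facing LDR (replace with real hardware access)."""
    return 512 + random.randint(-40, 40)

def step_motor(direction: int, steps: int = 1) -> None:
    """Stand-in for the stepper driver: +1 steps east, -1 steps west."""
    print(f"stepping {'east' if direction > 0 else 'west'} by {steps}")

def track_once() -> None:
    """One correction cycle: step toward the brighter side if the imbalance exceeds the deadband."""
    diff = read_ldr_east() - read_ldr_west()
    if abs(diff) > DEADBAND:
        step_motor(+1 if diff > 0 else -1)

if __name__ == "__main__":
    for _ in range(10):          # a real controller would loop continuously and park at night
        track_once()
        time.sleep(STEP_DELAY_S)
```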
944 Stability Analysis of a Tricore
Authors: C. M. De Marco Muscat-Fenech, A.M. Grech La Rosa
Abstract:
The application of stability theory has led to detailed studies of different types of vessels; however, the shortage of information relating to multihull vessels demanded further investigation. This study shows that the position of the hulls has a very influential effect on both the transverse and longitudinal stability of the tricore. The HSC stability code is applied for the optimization of the hull configurations. Such optimization criteria would undoubtedly aid the performance of the vessel for both commercial and leisure purposes.
Keywords: Stability, Multihull, Tricore
943 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties
Authors: G. Martino, F. Silva, E. Marchal
Abstract:
The control over the delivered iron ore blend characteristics is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. So, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clusterization and classification algorithms over historical production data generate reasonable predictions for the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model between the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different plans with the sales decisions.
Keywords: Clusterization and classification algorithms, integrated planning, optimization, mathematical modeling, penalty minimization.
942 Normalization and Constrained Optimization of Measures of Fuzzy Entropy
Authors: K.C. Deshmukh, P.G. Khot, Nikhil
Abstract:
In the information theory literature, there is a need for comparing different measures of fuzzy entropy, and this consequently gives rise to the need for normalizing measures of fuzzy entropy. In this paper, we have discussed this need and hence developed some normalized measures of fuzzy entropy. It is also desirable to maximize entropy and to minimize directed divergence or distance. Keeping this idea in mind, we have explained the method of optimizing different measures of fuzzy entropy.
Keywords: Fuzzy set, Uncertainty, Fuzzy entropy, Normalization, Membership function.
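As one concrete instance of the normalization being discussed, the sketch below computes the classical De Luca-Termini fuzzy entropy of a finite fuzzy set and divides it by its maximum value n·ln(2) (attained when every membership grade equals 0.5), so the result lies in [0, 1] and different sets become comparable. The choice of this particular measure and the example membership grades are assumptions for illustration; the paper treats several measures.

```python
import math

def de_luca_termini_entropy(memberships):
    """H(A) = -sum[ mu*ln(mu) + (1-mu)*ln(1-mu) ], with 0*ln(0) taken as 0."""
    def term(p):
        return 0.0 if p in (0.0, 1.0) else -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return sum(term(mu) for mu in memberships)

def normalized_entropy(memberships):
    """Divide by the maximum n*ln(2) so the value lies in [0, 1]."""
    n = len(memberships)
    return de_luca_termini_entropy(memberships) / (n * math.log(2))

A = [0.1, 0.5, 0.8, 0.3, 1.0]      # hypothetical membership grades of a fuzzy set
print(f"H(A) = {de_luca_termini_entropy(A):.4f},  normalized H(A) = {normalized_entropy(A):.4f}")
```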
941 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the total CO2 emissions of the world, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been robustly developed after the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code based design approach cannot address inelastic earthquake responses directly or assure the performance of buildings exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, there has been little or no research considering these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering structural materials. A 4-story and 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance level (collapse prevention in the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design result showed that minimized CO2 emissions and cost of the building were acquired while satisfying the specific seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design.
940 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction of the widely diffused non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency, simplicity, and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level, and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD), where maintaining a given power factor within a specified range is desired. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
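To make the sizing problem concrete, a single tuned filter is often parameterized by the reactive power its capacitor supplies at the fundamental, the harmonic order it is tuned to, and its quality factor. The sketch below uses the common first-cut relations C ≈ Q_c/(ω1·V²), L = 1/((h·ω1)²·C) and R = X_n/Q with X_n = sqrt(L/C); the bus voltage, compensation level, tuned order, and quality factor are illustrative assumptions, and a full design would iterate these against the VTHD/ITHD and power factor constraints described in the abstract.

```python
import math

def single_tuned_filter(v_ll_rms, q_c_var, h, f1=50.0, quality=40.0):
    """First-cut sizing of a single tuned passive shunt filter (per three-phase bank)."""
    w1 = 2 * math.pi * f1
    C = q_c_var / (w1 * v_ll_rms ** 2)        # capacitance from fundamental reactive power
    L = 1.0 / ((h * w1) ** 2 * C)             # inductance so the branch resonates at h*f1
    Xn = math.sqrt(L / C)                     # characteristic reactance at the tuned frequency
    R = Xn / quality                          # series resistance from the chosen quality factor
    return C, L, R

# Hypothetical 11 kV bus, 1 Mvar of compensation, filter tuned slightly below the 5th harmonic.
C, L, R = single_tuned_filter(v_ll_rms=11e3, q_c_var=1e6, h=4.8)
print(f"C = {C*1e6:.2f} uF,  L = {L*1e3:.2f} mH,  R = {R:.2f} ohm")
```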
939 Optimal Design of Selective Excitation Pulses in Magnetic Resonance Imaging using Genetic Algorithms
Authors: Mohammed A. Alolfe, Abou-Bakr M. Youssef, Yasser M. Kadah
Abstract:
The proper design of RF pulses in magnetic resonance imaging (MRI) has a direct impact on the quality of acquired images and is needed for many applications. Several techniques have been proposed to obtain the RF pulse envelope given the desired slice profile. Unfortunately, these techniques do not take into account the limitations of practical implementation such as limited amplitude resolution. Moreover, implementing constraints for special RF pulses is not possible in most techniques. In this work, we propose to develop an approach for designing optimal RF pulses under theoretically any constraints. The new technique poses the RF pulse design problem as a combinatorial optimization problem and uses efficient techniques from this area, such as genetic algorithms (GA), to solve it. In particular, an objective function is proposed as the norm of the difference between the desired profile and the one obtained from solving the Bloch equations for the current RF pulse design values. The proposed approach is verified using RF simulations based on analytical solutions and compared to previous methods such as the Shinnar-Le Roux (SLR) method. The options and parameters that control the genetic algorithm, which can significantly affect its performance, are analyzed, selected, and tested to obtain the best results and compared to previous works in this field. The results show a significant improvement over conventional design techniques, identify the best GA options and parameters, and suggest the practicality of using the new technique for important applications such as slice selection for large flip angles, unconventional spatial encoding, and other clinical uses.
Keywords: Selective excitation, magnetic resonance imaging, combinatorial optimization, pulse design.
938 Novel Hybrid Approaches For Real Coded Genetic Algorithm to Compute the Optimal Control of a Single Stage Hybrid Manufacturing Systems
Authors: M. Senthil Arumugam, M.V.C. Rao
Abstract:
This paper presents a novel two-phase hybrid optimization algorithm with hybrid genetic operators to solve the optimal control problem of a single stage hybrid manufacturing system. The proposed hybrid real coded genetic algorithm (HRCGA) is developed in such a way that a simple real coded GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method is then employed to do the fine tuning. The hybrid genetic operators involved in the proposed algorithm improve both the quality of the solution and the convergence speed. Phase 1 uses a conventional real coded genetic algorithm (RCGA), while optimization by direct search and systematic reduction of the size of the search region is employed in Phase 2. A typical numerical example of an optimal control problem with the number of jobs varying from 10 to 50 is included to illustrate the efficacy of the proposed algorithm. Several statistical analyses are carried out to compare the validity of the proposed algorithm with the conventional RCGA and PSO techniques. A hypothesis t-test and an analysis of variance (ANOVA) test are also carried out to validate the effectiveness of the proposed algorithm. The results clearly demonstrate that the proposed algorithm not only improves the quality of the solution but is also more efficient in converging to the optimal value faster. It can outperform the conventional real coded GA (RCGA) and the efficient particle swarm optimization (PSO) algorithm in the quality of the optimal solution and also in terms of convergence to the actual optimum value.
Keywords: Hybrid systems, optimal control, real coded genetic algorithm (RCGA), Particle swarm optimization (PSO), Hybrid real coded GA (HRCGA), and Hybrid genetic operators.
937 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology
Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon
Abstract:
There are few effective guidelines on the selection of design parameters for springback of advanced high strength steel sheet metal in the U-channel process during cold forming. This paper presents the development of a predictive model for springback in the U-channel process on advanced high strength steel sheet employing Response Surface Methodology (RSM). The experiments were performed on dual phase steel sheet, DP590, in a U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp) and rolling direction (R), used as input parameters at two levels by applying a full factorial design (2⁴). From a statistical analysis of variance (ANOVA), the results showed that blank holder force (BHF), clearance (C) and punch travel (Tp) displayed a significant effect on the springback of the flange angle (β2) and wall opening angle (β1), while the rolling direction (R) factor is insignificant. The significant parameters were optimized in order to reduce the springback behavior using the Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed. The effect of individual parameters and their responses was also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
Keywords: Advance high strength steel, U-channel process, Springback, Design of Experiment, Optimization, Response Surface Methodology (RSM).
936 Transportation Under the Threat of Influenza
Authors: Yujun Zheng, Qin Song, Haihe Shi, and Jinyun Xue
Abstract:
There are a number of different cars for transferring hundreds of close contacts of swine influenza patients to hospital, and we need to carefully assign the passengers to those cars in order to minimize the risk of influenza spreading during transportation. The paper presents an approach to straightforwardly obtain the optimal solutions of the relaxed problems, and develops two iterative improvement algorithms to effectively tackle the general problem.
Keywords: Influenza spread, discrete optimization, stationary point, iterative improvement
935 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in multi-objective settings so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The function evaluation of the Pareto set is called the Pareto front. The solution set is introduced to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.
Keywords: Cascade control, multi-loop control systems, multi-objective optimization, optimal control.
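The dominance concept used above to extract the Pareto set from the evaluated designs can be summarized in a few lines: a candidate is kept only if no other candidate is at least as good in every objective and strictly better in at least one. Below is a minimal sketch of that non-dominated filter for a minimization problem; the four objective values per design (settling times, noise sensitivity, control energy) are hypothetical numbers, not the NSGA-II results of the paper.

```python
import numpy as np

def pareto_mask(F):
    """Return a boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # design j dominates design i if it is <= in every objective and < in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Hypothetical objective values for 6 candidate cascade controllers:
# columns = inner settling time, outer settling time, noise sensitivity, control energy.
F = np.array([[0.12, 0.9, 0.30, 5.1],
              [0.10, 1.1, 0.25, 6.0],
              [0.15, 0.8, 0.40, 4.2],
              [0.12, 0.9, 0.35, 5.5],   # dominated by the first design
              [0.09, 1.3, 0.20, 7.4],
              [0.20, 0.7, 0.45, 3.9]])

print("Pareto-optimal designs (row indices):", np.flatnonzero(pareto_mask(F)))
```

The surviving rows form the Pareto set handed to the decision-maker; NSGA-II additionally uses crowding distance to keep that set well spread along the front.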