Search results for: optimization/inverse mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4633

3703 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that coordinates hydroelectric and thermal generation to meet energy demand is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that supplies electrical power to the system over a given period with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant in each interval, so that the system meets demand reliably, avoiding rationing in years of severe drought, and minimizes the expected operating cost over the planning horizon, defining an appropriate strategy for thermal complementation. Several optimization algorithms have been developed for, and applied to, this problem. Although they provide solutions to many of the difficulties encountered, these algorithms have weaknesses, such as convergence problems or simplification of the original formulation of the problem owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist operation planning. This paper therefore presents the development of a computational tool, namely Hydro-IA, for solving the identified optimization problem while providing the user with easy handling. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First the chromosomes were modeled, then the evaluation function of the problem and the operators involved were implemented, and finally the graphical interfaces for user access were drafted. The results obtained with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost obtained by the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in resolving the problem without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
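
As a rough, self-contained illustration of the GA formulation the abstract describes (not the authors' Java tool), the sketch below encodes each plant's monthly turbined outflow as a chromosome and scores it by a hypothetical thermal complementation cost; all constants are assumptions.

```python
import random

# Hypothetical, heavily simplified sketch of the GA described above (not the
# authors' Java implementation): a chromosome holds each plant's turbined
# outflow per month, and fitness is a quadratic thermal complementation cost.
N_PLANTS, N_MONTHS = 7, 36     # seven plants, 1953-1955 monthly horizon
DEMAND = 5000.0                # MW, assumed constant for illustration
PRODUCTIVITY = 0.9             # MW per unit of turbined flow (assumed)
FLOW_MAX = 900.0               # assumed turbine capacity per plant

def thermal_cost(chrom):
    """Cost of thermal generation needed to fill the hydro shortfall."""
    cost = 0.0
    for m in range(N_MONTHS):
        hydro = sum(PRODUCTIVITY * chrom[p][m] for p in range(N_PLANTS))
        cost += max(DEMAND - hydro, 0.0) ** 2   # convex thermal cost
    return cost

def random_chromosome():
    return [[random.uniform(0, FLOW_MAX) for _ in range(N_MONTHS)]
            for _ in range(N_PLANTS)]

def crossover(a, b):
    cut = random.randrange(1, N_PLANTS)
    return [row[:] for row in a[:cut] + b[cut:]]   # copy rows to avoid aliasing

def mutate(chrom, rate=0.05):
    for p in range(N_PLANTS):
        for m in range(N_MONTHS):
            if random.random() < rate:
                chrom[p][m] = random.uniform(0, FLOW_MAX)

population = [random_chromosome() for _ in range(60)]
for _ in range(200):
    population.sort(key=thermal_cost)
    elite = population[:20]                        # elitist selection
    children = []
    while len(children) < 40:
        child = crossover(random.choice(elite), random.choice(elite))
        mutate(child)
        children.append(child)
    population = elite + children

print("best thermal cost:", thermal_cost(min(population, key=thermal_cost)))
```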

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence, genetic algorithms

Procedia PDF Downloads 407
3702 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells under treatment, to minimize side effects causing toxicity-induced tumorigenesis in normal cells, and to keep metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem using each of two nutrient media in turn to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco's Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. However, using Ham's medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham's medium revealed that no cholesterol uptake reaction was included in DMEM. Two additional media, i.e., DMEM with a cholesterol uptake reaction included and Ham's medium with it excluded, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were again identifiable when no cholesterol uptake reaction was induced in the culture medium, but became unidentifiable when such a reaction was induced.

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 63
3701 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MaRV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the vector guidance law is presented in this paper.

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 410
3700 Demographic Bomb or Bonus in All Provinces in 100 Years after Indonesian Independence

Authors: Fitri Catur Lestari

Abstract:

According to the National Population and Family Planning Board (BKKBN), a demographic bonus will occur in 2025-2035, when the number of people within the productive age bracket is higher than the number of elderly people and children. This will be a golden moment for Indonesia to achieve maximum productivity and prosperity, but it will become a demographic bomb if it is not balanced by economic and social considerations. It is therefore important to map whether each province in Indonesia will be in a demographic bomb or bonus condition 100 years after Indonesian independence. The purpose of this research was to produce a demographic mapping of the provinces of Indonesia based on economic and social aspects and to categorize them into demographic bomb and bonus conditions. The research data were obtained from Statistics Indonesia (BPS) as secondary data. The multiregional component method, regression, and quadrant analysis were used to predict population, economic growth, the Human Development Index (HDI), and gender equality in education and employment. The provinces of Indonesia differ in their economic and social characteristics: western Indonesia is already better developed than the east. The prediction results show that many provinces in Indonesia will receive a demographic bonus while the others will face a demographic bomb. It is important to prepare strategies tailored to particular provinces and their characteristics based on the prediction results so that the demographic bomb can be minimized.
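
For readers unfamiliar with quadrant analysis, the toy sketch below classifies provinces against national medians of two indicators; the figures are invented placeholders, not BPS data.

```python
import statistics

# Toy quadrant analysis in the spirit of the study: classify provinces by
# economic growth (%) and HDI relative to national medians. The figures are
# invented placeholders, not BPS data.
provinces = {
    "Province A": (5.2, 71.0),
    "Province B": (3.9, 65.5),
    "Province C": (6.1, 63.0),
    "Province D": (4.1, 74.2),
}
growth_med = statistics.median(g for g, _ in provinces.values())
hdi_med = statistics.median(h for _, h in provinces.values())

for name, (growth, hdi) in provinces.items():
    if growth >= growth_med and hdi >= hdi_med:
        label = "demographic bonus candidate"
    elif growth < growth_med and hdi < hdi_med:
        label = "demographic bomb risk"
    else:
        label = "mixed: needs targeted policy"
    print(f"{name}: {label}")
```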

Keywords: demography, economic growth, gender, HDI

Procedia PDF Downloads 314
3699 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance

Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian

Abstract:

The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization approach to the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects on the resulting equations of adding noise to both the input and output signals of the nonlinear blocks, and of adding disturbance to the linear block, are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
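
A minimal sketch of the over-parameterized least-squares idea behind such formulations is shown below; it assumes a polynomial input nonlinearity and an FIR linear block, which is a simplification of the authors' constrained quadratic formulation.

```python
import numpy as np

# Toy identification sketch in the spirit of the abstract (not the authors'
# exact constrained-quadratic formulation): polynomial input nonlinearity,
# FIR linear block, identity output block, solved by least squares over the
# bilinear parameter products, with noise added to the measured input/output.
rng = np.random.default_rng(2)
N = 3000
u = rng.uniform(-1, 1, N)
x = u + 0.5 * u**2                       # true static input nonlinearity
h = np.array([1.0, 0.6, -0.3])           # true FIR linear block
y = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)   # noisy output
u_meas = u + 0.005 * rng.standard_normal(N)                 # noisy input

# Regressors: lagged powers of the measured input
cols = []
for lag in range(3):
    shifted = np.concatenate([np.zeros(lag), u_meas[:N - lag]])
    cols += [shifted, shifted**2]
Phi = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # estimates h_k * c_j products
print("estimated products:", np.round(theta, 3))
# true products (h x c, with c = [1, 0.5]): 1.0, 0.5, 0.6, 0.3, -0.3, -0.15
```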

Keywords: identification, Hammerstein-Wiener, optimization, quantization

Procedia PDF Downloads 247
3698 ACOPIN: An ACO Algorithm with TSP Approach for Clustering Proteins in Protein Interaction Networks

Authors: Jamaludin Sallim, Rozlina Mohamed, Roslina Abdul Hamid

Abstract:

In this paper, we propose an Ant Colony Optimization (ACO) algorithm together with a Traveling Salesman Problem (TSP) approach to investigate the clustering problem in Protein Interaction Networks (PIN). We name this combination ACOPIN. The purpose of this work is two-fold: first, to test the efficacy of ACO in clustering PIN and, second, to propose a simple generalization of the ACO algorithm that might allow its application in clustering proteins in PIN. We split this paper into three main sections. First, we describe the PIN and the clustering of proteins in PIN. Second, we discuss the steps involved in each phase of the ACO algorithm. Finally, we present some results of the investigation with the clustering patterns.

Keywords: ant colony optimization algorithm, searching algorithm, protein functional module, protein interaction network

Procedia PDF Downloads 592
3697 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community

Authors: Mohamed Ghorab

Abstract:

Locally generated and distributed systems for thermal and electrical energy are envisioned in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in the energy distribution between the hubs. The energy generated from each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there are interaction and energy exchange between the hubs. In this study, network energy system and CO2 optimization between six different hubs representing a Canadian community are investigated. Three different technology scenarios are studied to meet both the thermal and electrical demand loads of the six hubs. The conventional system is used as the first technology scenario and as the reference case; it includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second scenario includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP and Organic Rankine Cycle (ORC) systems, where the waste thermal energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system. They show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP and ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, followed by scenario 2 (CHP system) with a value of 9.3%.

Keywords: distributed energy resources, network energy system, optimization, microgeneration system

Procedia PDF Downloads 179
3696 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project proposes a PCA-driven methodology that isolates the important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. The project builds on this foundation with a multi-objective optimization model that takes into account several performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals such as regulatory compliance or sustainability standards. This model provides a more comprehensive treatment of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that hold up better under different market situations. Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also examines the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project contributes to the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
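
A minimal sketch of the PCA-plus-scalarization pipeline described above might look as follows; the data are synthetic, and the projected-gradient step and risk-aversion weight are assumptions rather than the paper's exact model.

```python
import numpy as np

# Minimal sketch of the pipeline described above: PCA to reduce the risk
# model, then a scalarized return-versus-risk objective maximized on the
# long-only simplex by projected gradient ascent. Data are synthetic and the
# risk-aversion weight lam is an assumption.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 20))   # 500 days, 20 assets

# PCA on the covariance: keep components explaining ~90% of variance
cov = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.90)) + 1
cov_pca = eigvec[:, :k] @ np.diag(eigval[:k]) @ eigvec[:, :k].T

mu, lam = returns.mean(axis=0), 5.0
w = np.full(20, 1 / 20)                               # start equal-weighted
for _ in range(2000):
    grad = mu - 2 * lam * cov_pca @ w                 # d/dw (mu'w - lam w'Sw)
    w = np.clip(w + 0.01 * grad, 0.0, None)           # long-only
    w /= w.sum()                                      # fully invested

print(f"kept {k} components; return {mu @ w:.5f}, variance {w @ cov_pca @ w:.6f}")
```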

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 30
3695 An Integrated Approach for Optimizing Drillable Parameters to Increase Drilling Performance: A Real Field Case Study

Authors: Hamidoddin Yousife

Abstract:

Drilling optimization requires prediction of the drilling rate of penetration (ROP), since it provides a significant reduction in drilling costs. Several factors, both controllable and uncontrollable, can have an impact on the ROP. Numerous drilling penetration rate models based on drilling parameters have been considered. This paper considers the effect of proper selection of drilling parameters, such as bit, mud type, applied weight on bit (WOB), revolutions per minute (RPM), and flow rate, on drilling optimization and drilling cost reduction. A predictive analysis of real-time drilling performance is used to determine the optimal drilling operation. As a result of these modeling studies, real data collected from three directional wells in the Azadegan oil fields, Iran, were verified and adjusted to determine the drillability of a specific formation. Comparison of simulation results with actual drilling results shows significant improvement in accuracy. Once the simulations had been validated, optimum drilling parameters and equipment specifications were determined by varying the weight on bit (WOB), rotary speed (RPM), hydraulics (hydraulic pressure), and bit specification for each well until the highest drilling rate was achieved. To evaluate the potential operational and economic benefits of the optimized results, a qualitative and quantitative analysis of the data was performed.
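
As a hedged illustration of parameter screening of this kind (not the authors' model), the snippet below grid-searches WOB and RPM using the simple Bingham ROP relation with assumed coefficients.

```python
import itertools

# Hedged illustration of drillable-parameter screening (not the authors'
# model): the simple Bingham relation ROP = K * (WOB / D)**a * RPM with
# assumed coefficients, swept over a small WOB/RPM grid.
K, a, D = 0.004, 1.2, 8.5        # lumped formation constant, exponent, bit dia. [in]
wob_options = [10, 15, 20, 25]   # weight on bit [klbf]
rpm_options = [80, 120, 160, 200]

def rop(wob, rpm):
    return K * (wob / D) ** a * rpm

best = max(itertools.product(wob_options, rpm_options), key=lambda p: rop(*p))
print(f"best grid point: WOB={best[0]} klbf, RPM={best[1]}, ROP~{rop(*best):.2f}")
```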

Keywords: drilling, cost, optimization, parameters

Procedia PDF Downloads 150
3694 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model

Authors: Ghazal Faraj, Andras Micsik

Abstract:

The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous digital datasets. The CIDOC-CRM offers a “semantic glue” intended to promote accessibility to several diverse and dispersed sources of cultural heritage data, achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE (“Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries”) project aimed to explore methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. The COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the inter-links between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology) as well as the COURAGE dataset was required. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and the construction of queries for mapping person, organization, and collection entities to the ontology are in progress. By mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will gain a common ontological foundation with several other datasets. The expected results are therefore: 1) retrieving more detailed data about existing entities, 2) retrieving new entities’ data, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next plan is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.
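
A sketch of what one person-entity mapping step could look like with rdflib is given below; the COURAGE property names and base URI are illustrative assumptions, while the CIDOC-CRM terms follow the published ontology.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

# Hypothetical sketch of one person-entity mapping step. The COURAGE schema
# and base URI below are invented for illustration; the CIDOC-CRM classes
# and properties (E21, E41, P1, P190) follow the published ontology.
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
COURAGE = Namespace("http://courage.example.org/schema/")

src, dst = Graph(), Graph()
src.parse(data="""
@prefix courage: <http://courage.example.org/schema/> .
<http://courage.example.org/person/42> a courage:Person ;
    courage:name "Jane Doe" .
""", format="turtle")

for person in src.subjects(RDF.type, COURAGE.Person):
    dst.add((person, RDF.type, CRM.E21_Person))
    for name in src.objects(person, COURAGE.name):
        appellation = URIRef(str(person) + "/appellation")
        dst.add((person, CRM.P1_is_identified_by, appellation))
        dst.add((appellation, RDF.type, CRM.E41_Appellation))
        dst.add((appellation, CRM.P190_has_symbolic_content, Literal(name)))

print(dst.serialize(format="turtle"))
```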

Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment

Procedia PDF Downloads 130
3693 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components

Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia

Abstract:

Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts: more than 90% of the material may be removed in the milling operation to obtain the final shape. This results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after releasing the clamps. It is a major challenge in the machining of monolithic parts, causing billions in economic losses every year. Three sources are directly related to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate AA7175 with a thickness of 60 mm was used for the raw block. The initial residual stress distribution in the block was measured using a layer-removal method. A stress-mapping technique was developed to implement the initial stress distribution into the part; it is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied on the machined surface of a plate to predict the distortion, and the predicted distortion was compared with experimental results. It is found that the effect of the machining-induced residual stress on the distortion of a thick plate is very limited; the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, on which very little research has been reported in the literature. A coupled thermo-mechanical FE model was developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process. When the heat source passed the part surface, a small layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. It was found that the initial residual stress makes a major contribution to the part distortion, the machining-induced stress has limited influence on the distortion of thin-wall structures when the wall thickness is larger than a certain value, and the thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures. The predictions were compared with experimental measurements, showing that both are in good agreement. Through optimization of the position of the part inside the raw plate using the developed numerical models, the part distortion can be reduced by 50%.

Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses

Procedia PDF Downloads 36
3692 Optimisation of the Input Layer Structure for Feedforward Narx Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Utilizing a correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamics model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and optimal number of neurons for the neural network are investigated.
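
A minimal sketch of the correlation-based regressor selection described above is shown below; the synthetic system, lag range, and threshold are assumptions.

```python
import numpy as np

# Illustrative sketch of the correlation-based regressor selection described
# above; the synthetic system, lag range, and 0.3 threshold are assumptions.
rng = np.random.default_rng(1)
u = rng.standard_normal(2000)                  # candidate input channel
y = np.zeros(2000)
for t in range(3, 2000):                       # synthetic NARX-like plant
    y[t] = (0.6 * y[t-1] - 0.2 * y[t-2] + 0.8 * u[t-1] + 0.1 * u[t-3]
            + 0.01 * rng.standard_normal())

max_lag = 6
candidates = {f"y(t-{k})": y[max_lag-k:-k] for k in range(1, max_lag + 1)}
candidates.update({f"u(t-{k})": u[max_lag-k:-k] for k in range(1, max_lag + 1)})
target = y[max_lag:]

scores = {name: abs(np.corrcoef(sig, target)[0, 1])
          for name, sig in candidates.items()}
selected = [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1])
            if s > 0.3]
print("regressors kept for the NN input layer:", selected)
```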

Keywords: correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 352
3691 A New Perspective: The Use of Low-Cost Phase Change Material in Building Envelope System

Authors: Andrey A. Chernousov, Ben Y. B. Chan

Abstract:

The use of a low-cost paraffinic phase change material (PCM) can be rather effective in smart building envelopes in the South China region. Particular attention has to be paid to PCM optimization, as the exploitation conditions and the envelope insulation change its thermal characteristics. The studied smart building envelope consists of a reinforced aluminum exterior, polymeric insulation foam, a phase change material, and a reinforced interior gypsum board. A prototype sample was tested to validate the numerical scheme using the EnergyPlus software. Three scenarios of insulation thermal resistance loss (ΔR/R = 0%, 25%, 50%) were compared for different PCM thicknesses (tP = 0, 1, 2.5, 5 mm). The comparisons were carried out for a west-facing office building (50 storeys) with the studied envelope. PCM optimization was applied to find the maximum efficiency for the different ΔR/R cases. It was found during the optimization that the PCM is an important smart component, lowering the peak energy demand by up to 2.7 times. The results are not influenced by insulation aging in terms of ΔR/R during long-term exploitation. In hot and humid climates like Hong Kong, the insulation core of the smart systems is recommended to be laminated completely. This can be very helpful in achieving an acceptable payback period.

Keywords: smart building envelope, thermal performance, phase change material, energy efficiency, large-scale sandwich panel

Procedia PDF Downloads 710
3690 Balancing a Rotary Inverted Pendulum System Using Robust Generalized Dynamic Inverse: Design and Experiment

Authors: Ibrahim M. Mehedi, Uzair Ansari, Ubaid M. Al-Saggaf, Abdulrahman H. Bajodah

Abstract:

This paper presents a methodology for balancing a rotary inverted pendulum system using Robust Generalized Dynamic Inversion (RGDI) under the influence of parametric variations and external disturbances. In GDI control, dynamic constraints are formulated in the form of an asymptotically stable differential equation which encapsulates the control objectives. The constraint differential equations are based on the deviation function of the angular position and its rates from their reference values. The constraint dynamics are inverted using the Moore-Penrose Generalized Inverse (MPGI) to realize the control expression. The GDI singularity problem is addressed by augmenting a dynamic scale factor in the interpretation of the MPGI, which guarantees asymptotically stable position tracking. An additional term based on Sliding Mode Control is appended to the GDI control to make it robust against parametric variations, disturbances, and the tracking performance deterioration caused by generalized inversion scaling. The stability of the closed-loop system is ensured by using a positive definite Lyapunov energy function that guarantees semi-globally practically stable position tracking. Numerical simulations are conducted on the dynamic model of the rotary inverted pendulum system to analyze the efficiency of the proposed RGDI control law. A comparative study is also presented, in which the performance of RGDI control is compared with the Linear Quadratic Regulator (LQR) and verified through experiments. Numerical simulations and real-time experiments demonstrate the better tracking performance and robustness of RGDI control in the presence of parametric uncertainties and disturbances.
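
To make the MPGI inversion step concrete, the sketch below stabilizes a one-degree-of-freedom linearized pendulum with a dynamically scaled pseudoinverse; the plant constants are assumed, and the sliding-mode robustness term is omitted.

```python
import numpy as np

# Minimal sketch of the GDI/MPGI step on an assumed one-DOF linearized
# pendulum (theta'' = (g/l)*theta + b*u); not the authors' full RGDI law.
dt = 0.001
c1, c2 = 8.0, 16.0            # desired constraint dynamics s'' + c1 s' + c2 s = 0
g_over_l, b_gain = 17.0, 2.5  # assumed plant parameters
theta, omega = 0.3, 0.0       # initial deviation from upright [rad, rad/s]

t = 0.0
for _ in range(5000):
    s, s_dot = theta, omega                     # constraint deviation and rate
    # Inverting B u = beta, with B the (here scalar) control influence:
    B = np.array([[b_gain]])
    beta = np.array([-c1 * s_dot - c2 * s - g_over_l * theta])
    # Dynamically scaled Moore-Penrose inverse: nu > 0 removes the singularity
    nu = 1e-4 + 1e-2 * np.exp(-t)
    u = (B.T @ np.linalg.inv(B @ B.T + nu * np.eye(1)) @ beta).item()
    # Integrate the plant one Euler step
    alpha = g_over_l * theta + b_gain * u
    omega += alpha * dt
    theta += omega * dt
    t += dt

print(f"final angle deviation: {theta:.2e} rad")
```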

Keywords: generalized dynamic inversion, Lyapunov stability, rotary inverted pendulum system, sliding mode control

Procedia PDF Downloads 157
3689 Minimum-Fuel Optimal Trajectory for Reusable First-Stage Rocket Landing Using Particle Swarm Optimization

Authors: Kevin Spencer G. Anglim, Zhenyu Zhang, Qingbin Gao

Abstract:

Reusable launch vehicles (RLVs) present a more environmentally-friendly approach to accessing space when compared to traditional launch vehicles that are discarded after each flight. This paper studies the recyclable nature of RLVs by presenting a solution method for determining minimum-fuel optimal trajectories using principles from optimal control theory and particle swarm optimization (PSO). The problem is formulated as a minimum-landing-error powered descent problem in which it is desired to move the RLV from a fixed set of initial conditions to three different sets of terminal conditions. However, unlike other powered descent studies, this paper considers the highly nonlinear effects caused by atmospheric drag, which are often ignored in studies on the Moon or on Mars. Rather than optimizing the controls directly, the throttle control is assumed to be bang-off-bang with a predetermined thrust direction for each phase of flight. The PSO method is verified in a one-dimensional comparison study, and it is then applied to the two-dimensional cases, the results of which are illustrated.
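
A toy one-dimensional analogue of this setup is sketched below: particles encode the two switching times of a bang-off-bang throttle, and PSO minimizes fuel plus a soft-landing penalty under quadratic drag. All constants are assumptions, not the paper's vehicle model.

```python
import numpy as np

# Toy 1-D version of the descent problem above (all constants assumed):
# particles encode the two switching times of a bang-off-bang throttle; the
# cost is fuel spent plus penalties on touchdown speed and altitude error.
g, T_max, m, drag_k = 9.81, 20.0, 1.0, 0.02
h0, v0, t_end, dt = 1000.0, -80.0, 25.0, 0.01

def cost(switches):
    t_off, t_on = sorted(switches)
    h, v, fuel, t = h0, v0, 0.0, 0.0
    while t < t_end and h > 0.0:
        thrust = T_max if (t < t_off or t >= t_on) else 0.0
        a = thrust / m - g - drag_k * v * abs(v) / m   # quadratic drag
        v += a * dt
        h += v * dt
        fuel += thrust * dt
        t += dt
    return fuel + 50.0 * abs(v) + 10.0 * abs(h)        # soft-landing penalty

n, dim = 30, 2
rng = np.random.default_rng(3)
pos = rng.uniform(0, t_end, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, t_end)
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("switch times:", np.sort(gbest), "cost:", pbest_f.min())
```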

Keywords: minimum-fuel optimal trajectory, particle swarm optimization, reusable rocket, SpaceX

Procedia PDF Downloads 264
3688 Optimum Design of Dual-Purpose Outriggers in Tall Buildings

Authors: Jiwon Park, Jihae Hur, Kukjae Kim, Hansoo Kim

Abstract:

In this study, outriggers, which are horizontal structures connecting a building core to distant columns to increase the lateral stiffness of a tall building, are also used to reduce differential axial shortening. The outriggers in tall buildings thus serve the dual purposes of reducing the lateral displacement and reducing the differential axial shortening. Since the location of an outrigger greatly affects its effectiveness in terms of the lateral displacement at the top of the building and the maximum differential axial shortening, the optimum locations of the dual-purpose outriggers can be determined by an optimization method. Because the floors where the outriggers are installed are given as integer numbers, conventional gradient-based optimization methods cannot be used directly. In this study, a piecewise quadratic interpolation method is used to resolve the integrality requirement posed by the optimum locations of the dual-purpose outriggers. The optimal solutions for the dual-purpose outriggers are searched by linear scalarization, a popular method for multi-objective optimization problems. It was found that increasing the number of outriggers reduced both the maximum lateral displacement and the maximum differential axial shortening. It was also noted that the optimum locations for reducing the lateral displacement and for reducing the differential axial shortening were different. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017R1A2B4010043), and financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) as the U-City Master and Doctor Course Grant Program.

Keywords: concrete structure, optimization, outrigger, tall building

Procedia PDF Downloads 158
3687 Reliability Enhancement by Parameter Design in Ferrite Magnet Process

Authors: Won Jung, Wan Emri

Abstract:

Ferrite magnets are widely used in many automotive components such as motors and alternators. Magnets used inside these components must be of good quality to ensure a high level of performance. The purpose of this study is to design the input parameters that optimize the ferrite magnet production process to ensure the quality and reliability of the manufactured products. Design of Experiments (DOE) and Statistical Process Control (SPC) are used as mutual supplements to optimize the process. DOE and SPC are quality tools used in industry to monitor and improve the manufacturing process condition; they are practically used to maintain the process on target and within the limits of natural variation. A mixed Taguchi method is utilized for optimization as part of the DOE analysis. SPC with proportion data is applied to assess the output parameters and determine the optimal operating conditions. An example case involving the monitoring and optimization of a ferrite magnet process is presented to demonstrate the effectiveness of this approach. Through the utilization of these tools, reliable magnets can be produced by following the step-by-step procedures of the proposed framework. One of the main contributions of this study was producing crack-free magnets by applying the proposed parameter design.
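
As a small illustration of the Taguchi step (invented data, larger-the-better criterion):

```python
import numpy as np

# Minimal illustration of the Taguchi step: larger-the-better S/N ratio
# S/N = -10*log10(mean(1/y_i**2)) per factor level (data invented).
trials = {
    "sinter_temp_level_1": [92.0, 94.5, 91.8],   # % crack-free magnets
    "sinter_temp_level_2": [96.2, 95.8, 97.0],
}
for level, ys in trials.items():
    y = np.asarray(ys)
    sn = -10.0 * np.log10(np.mean(1.0 / y**2))
    print(f"{level}: S/N = {sn:.2f} dB")   # pick the level with the higher S/N
```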

Keywords: ferrite magnet, crack, reliability, process optimization, Taguchi method

Procedia PDF Downloads 503
3686 Automating and Optimization Monitoring Prognostics for Rolling Bearing

Authors: H. Hotait, X. Chiementin, L. Rasolofondraibe

Abstract:

This paper presents continuing work on detecting the abnormal state of rolling bearings by studying the vibration signature and calculating the remaining useful life. To achieve these aims, two methods are used: the first is classification to detect the degradation state by the AOM-OPTICS (Acousto-Optic Modulator) method; the second is prediction of the degradation state using least-squares support vector regression, which is then compared with a linear degradation model. An experimental investigation on a ball bearing was conducted to assess the effectiveness of the method by applying the acquired vibration signals. The proposed model for predicting the state of the bearing gives accurate results with respect to the experimental and numerical data.

Keywords: bearings, automatization, optimization, prognosis, classification, defect detection

Procedia PDF Downloads 106
3685 Optimal Design of 3-Way Reversing Valve Considering Cavitation Effect

Authors: Myeong-Gon Lee, Yang-Gyun Kim, Tae-Young Kim, Seung-Ho Han

Abstract:

The high-pressure valve uses one set of 2-way valves for the purpose of reversing the fluid direction. If there is no accurate control device for the 2-way valves, considerable surging can be generated. Surging is a kind of pressure ripple that occurs under rapid changes of fluid motion with inaccurate valve control. To reduce the surging effect, a 3-way reversing valve can be applied, which provides a rapid and precise change of flow direction without any accurate valve control system. However, cavitation occurs due to the complicated internal trim shape of the 3-way reversing valve. Cavitation causes not only noise and vibration but also decreases the efficiency of valve operation, as the bubbles generated below the saturated vapor pressure collapse rapidly in the higher-pressure zone. Shape optimization of the 3-way reversing valve to minimize the cavitation effect is therefore necessary. In this study, the cavitation index according to the ISA international standard was introduced to estimate macroscopically the occurrence of cavitation. Computational fluid dynamics analysis was carried out, and the cavitation effect was quantified by means of the percent of cavitation converted from the calculated vapor volume fraction. In addition, the shape optimization of the 3-way reversing valve was performed taking into account the percent of cavitation.
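
For orientation, one common ISA form of the cavitation index is sigma = (P1 - Pv)/(P1 - P2); the back-of-envelope computation below uses illustrative pressures, not the valve's measured data.

```python
# Back-of-envelope use of the cavitation index (illustrative pressures only):
# sigma = (P1 - Pv) / (P1 - P2); a lower sigma means a higher cavitation risk.
P1 = 16.0e5   # upstream absolute pressure [Pa]
P2 = 4.0e5    # downstream absolute pressure [Pa]
Pv = 2.3e3    # water vapor pressure at ~20 C [Pa]

sigma = (P1 - Pv) / (P1 - P2)
print(f"cavitation index sigma = {sigma:.2f}")
# If sigma falls below the valve's required value from the ISA recommended
# practice, trim reshaping (as in the study) is warranted.
```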

Keywords: 3-way reversing valve, cavitation, shape optimization, vapor volume fraction

Procedia PDF Downloads 355
3684 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer, and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case, the length of protection and the volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover, and the weight distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size. The minimum value that gives a stable global optimum solution for this parameter is 30,000, while the other variables have little effect on the optimum solution.

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 310
3683 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for waste classification approaches are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site; the approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely; hence, feature matching of objects may require expert input. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 184
3682 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds

Authors: Chung To Kong, Siu Ming Yiu

Abstract:

Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again. The current record is 3.47 seconds. Many factors affect the timing, including turns per second (tps), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. Extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving step, the paper suggests a list of speed improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.
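
The basic arithmetic behind such a bound is simply solve time >= move count / turns per second; the values below are assumptions for illustration.

```python
# Feasibility arithmetic behind the claim above: solve time is bounded
# below by move count / turns per second. Both values are assumptions.
move_count = 50   # typical solution length for a fast method
tps = 18          # sustained turn rate of top speedcubers
print(f"rough lower-bound estimate: {move_count / tps:.2f} s")  # ~2.78 s
```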

Keywords: Rubik's Cube, speed, finger trick, optimization

Procedia PDF Downloads 189
3681 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film

Authors: Li Long, Thomas Ortlepp

Abstract:

A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, sensor layer structure factor, and sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effect of grain structure, grain boundary trap properties, and doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.

Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor

Procedia PDF Downloads 117
3680 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen

Abstract:

Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have perfectly implemented parallel web crawlers for given websites. But due to the lack of expensive proxy servers, it is still a puzzle for researchers to crawl large amounts of information from a single website in parallel. In this case, it is a good choice for researchers to use free public proxy servers crawled from the Internet. In order to improve the efficiency of a web crawler, the following two issues should be considered primarily: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for large-scale unstable proxy server scheduling, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch is designed to work efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which keeps the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than other scheduling algorithms.
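
A sketch of a reliability-first, frequency-capped scheduler in this spirit is shown below; the field names, scoring rule, and cooldown interval are assumptions, not Proxisch's actual design.

```python
import heapq
import time

# Sketch of a reliability-first, frequency-capped proxy scheduler in the
# spirit of the abstract; field names, scoring, and the 5 s interval are
# assumptions, not Proxisch's actual design.
class ProxyScheduler:
    def __init__(self, proxies, min_interval=5.0):
        now = time.monotonic()
        self.min_interval = min_interval      # per-proxy visit cap [s]
        # heap entries: (next_allowed_time, -reliability, address)
        self.heap = [(now, -p["reliability"], p["addr"]) for p in proxies]
        heapq.heapify(self.heap)

    def acquire(self):
        next_time, neg_rel, addr = heapq.heappop(self.heap)
        wait = next_time - time.monotonic()
        if wait > 0:
            time.sleep(wait)                  # respect the site's rate limit
        return addr, -neg_rel

    def release(self, addr, reliability, success):
        # reward successes, punish failures, then re-queue with a cooldown
        rel = (min(1.0, reliability + 0.05) if success
               else max(0.0, reliability - 0.2))
        heapq.heappush(self.heap,
                       (time.monotonic() + self.min_interval, -rel, addr))

scheduler = ProxyScheduler(
    [{"addr": f"10.0.0.{i}", "reliability": 0.5} for i in range(1, 6)])
addr, rel = scheduler.acquire()
scheduler.release(addr, rel, success=True)    # update after each fetch
```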

Keywords: proxy server, priority queue, optimization algorithm, distributed web crawling

Procedia PDF Downloads 198
3679 Energy Efficient Clustering with Adaptive Particle Swarm Optimization

Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha

Abstract:

Wireless sensor networks have the principal characteristic of restricted energy, with the limitation that the energy of the nodes cannot be replenished. To increase the lifetime in this scenario, the WSN route for data transmission is chosen such that the energy used along the selected route is minimal. For such an energy-efficient network, a sound infrastructure is needed because it affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint, non-overlapping sets, and data is collected at the cluster head. In this paper, an Adaptive-PSO algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head. The main concern is the suitability of the swarms, addressed by adjusting the learning parameters of PSO. Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, the swarms are given the intelligence of spiders, which makes them capable enough to avoid early convergence and also helps them to escape from local optima. Comparative analysis with traditional PSO shows that the new algorithm considerably enhances performance when multi-dimensional functions are taken into consideration.

Keywords: particle swarm optimization, adaptive PSO, comparison between PSO and A-PSO, energy efficient clustering

Procedia PDF Downloads 228
3678 Uncertainty and Optimization Analysis Using PETREL RE

Authors: Ankur Sachan

Abstract:

The ability to make quick yet intelligent and value-added decisions to develop new fields has always been of great significance. In situations where the capital expenses and subsurface risk are high, carefully analyzing the inherent uncertainties in the reservoir and how they impact the predicted hydrocarbon accumulation and production becomes a daunting task. The problem is compounded in offshore environments, especially in the presence of heavy oils and disconnected sands, where the margin for error is small. Uncertainty refers to the degree to which the data set may be in error or stray from the predicted values. Understanding and quantifying the uncertainties in a reservoir model is important when estimating the reserves. Uncertainty parameters can be geophysical, geological, petrophysical, etc., and their identification is necessary to carry out the uncertainty analysis. With so many uncertainties working at different scales, it becomes essential to have a consistent and efficient way of incorporating them into our analysis. Ranking the uncertainties based on their impact on reserves helps to prioritize and guide future data gathering and uncertainty reduction efforts. Assigning probabilistic ranges to key uncertainties also enables the computation of probabilistic reserves. With this in mind, this paper, with the help of the uncertainty and optimization process in Petrel RE, shows how the most influential uncertainties can be determined efficiently and how much impact they have on the reservoir model, thus helping to determine a cost-effective and accurate model of the reservoir.

Keywords: uncertainty, reservoir model, parameters, optimization analysis

Procedia PDF Downloads 603
3677 Optimization of Wear during Dry Sliding Wear of AISI 1042 Steel Using Response Surface Methodology

Authors: Sukant Mehra, Parth Gupta, Varun Arora, Sarvoday Singh, Amit Kohli

Abstract:

The study emphasizes the dry sliding wear behavior of AISI 1042 steel. Dry sliding wear tests were performed using a pin-on-disk apparatus under normal loads of 5, 7.5, and 10 kgf and at speeds of 600, 750, and 900 rpm. Response surface methodology (RSM) was utilized to find the optimal values of the process parameters, and the experiment was based on a rotatable central composite design (CCD). It was found that the wear followed a linear pattern with the load and rpm. The obtained optimal process parameters have been predicted and verified by confirmation experiments.
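
To make the RSM step concrete, the sketch below fits a second-order response surface to a hypothetical CCD-style design and grid-searches the fitted surface; the wear values are invented for illustration.

```python
import numpy as np

# Sketch of the RSM step implied above: fit a second-order surface
# wear = b0 + b1*L + b2*N + b12*L*N + b11*L**2 + b22*N**2 to CCD-style runs
# and minimize it over the design region. The wear values are invented.
load = np.array([5, 5, 10, 10, 7.5, 7.5, 7.5, 7.5, 7.5])        # kgf
rpm = np.array([600, 900, 600, 900, 750, 750, 750, 750, 750])   # rpm
wear = np.array([2.1, 3.0, 3.4, 4.9, 3.1, 3.2, 3.0, 3.1, 3.2])  # e.g. mm^3

X = np.column_stack([np.ones_like(load), load, rpm,
                     load * rpm, load**2, rpm**2])
coef, *_ = np.linalg.lstsq(X, wear, rcond=None)

def predict(L, N):
    return coef @ np.array([1.0, L, N, L * N, L**2, N**2])

grid = [(L, N) for L in np.linspace(5, 10, 51)
               for N in np.linspace(600, 900, 61)]
best = min(grid, key=lambda p: predict(*p))
print("predicted-minimum wear at load %.1f kgf, %.0f rpm" % best)
```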

Keywords: central composite design (CCD), optimization, response surface methodology (RSM), wear

Procedia PDF Downloads 561
3676 Multi Objective Optimization for Two-Sided Assembly Line Balancing

Authors: Srushti Bhatt, M. B. Kiran

Abstract:

The two-sided assembly line balancing problem has yet to be addressed simply enough for manufacturers to compete in the global market. Assigning tasks in an ordered sequence to obtain optimum performance of the system is known as the assembly line balancing problem, mainly classified as single-sided and two-sided. Balancing a two-sided assembly line, wherein task operations are performed on both sides of the line in a set of sequential workstations, is very challenging in manufacturing industries. The conflicting major objective in the two-sided assembly line balancing problem is to maximize or minimize the performance parameters. The present study emphasizes combining different evolutionary algorithms, namely the ant colony, tabu search, and Petri net methods, and compares their results for solving the two-sided assembly line balancing problem. The concept of multi-objective optimization of performance parameters is nowadays adopted to make decisions involving more than one objective function to be simultaneously optimized. An optimum result can be expected among the selected methods using multi-objective optimization. The performance parameters considered in the present study are the number of workstations, slickness, and the smoothness index. Simulation of the assembly line balancing problem provides optimal results for classical and practical problems.

Keywords: ant colony, Petri net, tabu search, two-sided ALBP

Procedia PDF Downloads 263
3675 Utilization of Online Risk Mapping Techniques versus Desktop Geospatial Tools in Making Multi-Hazard Risk Maps for Italy

Authors: Seyed Vahid Kamal Alavi

Abstract:

Italy has experienced a notable number of disasters, with considerable impact, due to natural hazards and technological accidents caused by diverse risk sources affecting its physical, technological, and human/sociological infrastructures during the past decade. This study discusses the frequency and impacts of the three most devastating natural hazards in Italy for the period 2000-2013. The approach examines the reliability of a range of open source WebGIS techniques versus a proposed multi-hazard risk management methodology. Spatial and attribute data, which include publicly available USGS hazard data and thirteen years of Munich RE recorded data for Italy with different severities, were processed and visualized in a GIS (Geographic Information System) framework. Comparison of the results showed that the multi-hazard risk maps generated using open source techniques do not provide a reliable system for analyzing infrastructure losses with respect to national risk sources, although they can be adopted for general international risk management purposes. Additionally, this study establishes the possibility of critically examining and calibrating different integrated techniques in evaluating what better protection measures can be taken in an area.

Keywords: multi-hazard risk mapping, risk management, GIS, Italy

Procedia PDF Downloads 353
3674 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since water resources optimization problems are complicated, owing to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and water-reservoir-related data and the geometric characteristics of the reservoir. The system of Dez dam water resources was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the model. Rule curves were first obtained through the genetic algorithm. Then, the significance of using rule curves and the decrease in the number of decision variables in the system were determined through system simulation and comparison of the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables: a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model in order to reduce the number of variables. The water reservoir programming studies were performed based on basic information, general hypotheses, and standards, applying a monthly simulation technique for a statistical period of 30 years. The results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages.

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 250