Search results for: teaching-learning based optimization
29035 Patient Scheduling Improvement in a Cancer Treatment Clinic Using Optimization Techniques
Authors: Maryam Haghi, Ivan Contreras, Nadia Bhuiyan
Abstract:
Chemotherapy is one of the most popular and effective cancer treatments offered to patients in outpatient oncology centers. In such clinics, patients first consult with an oncologist, who may prescribe a chemotherapy treatment plan based on blood test results and an examination of the patient's health status. Once the plan is determined, a set of chemotherapy and consultation appointments must be scheduled for the patient. In this work, a comprehensive mathematical formulation is proposed for planning and scheduling different types of chemotherapy patients over a planning horizon, considering the blood test, consultation, pharmacy, and treatment stages. To make the model realistic and applicable, this study focuses on a case study of a major outpatient cancer treatment clinic in Montreal, Canada. Comparing the results of the proposed model with the current practice of the clinic under study shows significant improvements in several performance measures. These major improvements in the patients' schedules reveal that using optimization techniques for planning and scheduling patients in such highly demanded cancer treatment clinics is an essential step toward good coordination between the stages involved, which ultimately increases the efficiency of the entire system and promotes staff and patient satisfaction.
Keywords: chemotherapy patients scheduling, integer programming, integrated scheduling, staff balancing
Procedia PDF Downloads 175
29034 The Design, Development, and Optimization of a Capacitive Pressure Sensor Utilizing an Existing 9DOF Platform
Authors: Andrew Randles, Ilker Ocak, Cheam Daw Don, Navab Singh, Alex Gu
Abstract:
Nine Degrees of Freedom (9 DOF) systems are already in development in many areas. In this paper, an integrated pressure sensor is proposed that will make use of an already existing monolithic 9 DOF inertial MEMS platform. Capacitive pressure sensors can suffer from limited sensitivity for a given size of membrane. This novel pressure sensor design increases the sensitivity by over 5 times compared to a traditional array of square diaphragms while still fitting within a 2 mm x 2 mm chip and maintaining a fixed static capacitance. The improved design uses one large diaphragm supported by pillars with fixed electrodes placed above the areas of maximum deflection. The design optimization increases the sensitivity from 0.22 fF/kPa to 1.16 fF/kPa. Temperature sensitivity was also examined through simulation.
Keywords: capacitive pressure sensor, 9 DOF, 10 DOF, sensor, capacitive, inertial measurement unit, IMU, inertial navigation system, INS
Procedia PDF Downloads 547
29033 Optimization of Fenton Process for the Treatment of Young Municipal Leachate
Authors: Bouchra Wassate, Younes Karhat, Khadija El Falaki
Abstract:
Leachate is a source of surface water and groundwater contamination if it has not been pretreated. Indeed, its complex structure and pollution load make treating it to the required standard limits extremely difficult. The objective of this work is to show the value of advanced oxidation processes for the treatment of leachate from urban waste containing high concentrations of organic pollutants. The efficiency of the Fenton reagent (Fe2+ + H2O2 + H+) for young leachate recovered from household waste collection trucks in the city of Casablanca, Morocco, was evaluated with the objectives of chemical oxygen demand (COD) reduction and color removal. The optimization of certain physicochemical parameters (initial pH value, reaction time, [Fe2+], and the [H2O2]/[Fe2+] ratio) yielded good results in terms of COD reduction and decolorization of the leachate.
Keywords: COD removal, color removal, Fenton process, oxidation process, leachate
Procedia PDF Downloads 286
29032 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical/statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, since they allow more parallelism to be included inside a node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communications, data locality, memory sizes (cache and RAM), synchronizations, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version: the execution time was reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.
Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool
Procedia PDF Downloads 369
29031 Heat Transfer Process Parameter Optimization in SI/Ge Using TAGUCHI Method
Authors: Evln Ranga Charyulu, S. P. Venu Madhavarao, S. Udaya kumar, S. V. S. S. N. V. G. Krishna Murthy
Abstract:
With the advent of new nanometer process technologies, it is possible to integrate a billion transistors on a single substrate. When more and more functionality is included, there is the possibility of multi-million transistors switching simultaneously, consuming more power and dissipating more heat, along with more leakage of current into the substrate of porous silicon or germanium material. This results in substrate heating and thermal noise generation coupled to signals of interest. The heating process is represented by coupled nonlinear partial differential equations (PDEs) in porous silicon and germanium. Identifying heat sources and heat fluxes may aid in designing ultra-low-power circuits. The PDEs are solved by a finite difference scheme, assuming boundary layer equations in porous silicon and germanium. Local heat fluxes along a vertical isothermal surface immersed in porous Si/Ge are considered. The parameters considered for optimization are thermal diffusivity, thermal expansion coefficient, thermal diffusion ratio, permeability, specific heat at constant temperature, Rayleigh number, amplitude of the wavy surface, and mass expansion coefficient. The diffusion of heat is caused by the concentration gradient. Thermal physical properties are homogeneous and isotropic. The parameters are optimized using the Taguchi method with an L8 orthogonal array.
Keywords: heat transfer, PDE, Taguchi optimization, Si/Ge
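For readers unfamiliar with the Taguchi approach used above, the following is a minimal sketch of how an L8 orthogonal array ranks two-level factor settings by signal-to-noise ratio. The factor assignments and response values are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Standard L8 orthogonal array: 8 runs x 7 two-level factors (levels 0/1).
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])
# Hypothetical measured responses (e.g., local heat flux) for each run.
y = np.array([12.1, 13.4, 11.8, 12.9, 14.2, 13.7, 12.5, 13.0])

# "Larger is better" signal-to-noise ratio for a single replicate per run.
sn = -10 * np.log10(1.0 / y**2)

# Mean S/N at each level of each factor; the level with the higher mean
# S/N is the preferred setting for that factor.
for f in range(L8.shape[1]):
    means = [sn[L8[:, f] == lvl].mean() for lvl in (0, 1)]
    print(f"factor {f}: best level = {int(np.argmax(means))}")
```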
Procedia PDF Downloads 339
29030 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions, and creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore the lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer-range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
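To make the 1-hot encoding mentioned above concrete, here is a minimal sketch of mapping one integer decision variable onto binary variables with a quadratic penalty of the kind an annealer accepts. The value set, objective, and penalty weight are illustrative assumptions, not the authors' formulation, and brute-force enumeration stands in for the quantum device.

```python
import itertools

# 1-hot encoding of an integer x in {0,1,2,3} into four binaries b0..b3,
# with a QUBO penalty P * (sum_i b_i - 1)^2 enforcing exactly one active
# indicator. The toy objective f(x) = (x - 2)^2 is hypothetical.
values = [0, 1, 2, 3]
P = 10.0  # penalty weight (assumed; must dominate the objective scale)

def qubo_energy(bits):
    x = sum(v * b for v, b in zip(values, bits))   # decoded integer
    one_hot_penalty = P * (sum(bits) - 1) ** 2     # 1-hot constraint
    return (x - 2) ** 2 + one_hot_penalty

# Exhaustive search replaces the annealer at this toy size.
best = min(itertools.product([0, 1], repeat=4), key=qubo_energy)
print("best bitstring:", best,
      "decoded x =", sum(v * b for v, b in zip(values, best)))
```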
Procedia PDF Downloads 526
29029 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method found in the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal easy-to-blend designs. The preconditioned design is then fed to the cellular-automata scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance, which makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 228
29028 Numerical Investigation of a Supersonic Ejector for Refrigeration System
Authors: Karima Megdouli, Bourhan Taschtouch
Abstract:
Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration system applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing the Mach number and flow velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector. The results obtained are compared with those obtained by CFD. The ejector is optimized by minimizing the exergy destruction due to irreversibility and shock waves. The optimization converges to an efficient optimum solution, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.
Keywords: supersonic ejector, theoretical model, CFD, optimization, performance
Procedia PDF Downloads 76
29027 Developing a Comprehensive Green Building Rating System Tailored for Nigeria: Analyzing International Sustainable Rating Systems to Create Environmentally Responsible Standards for the Nigerian Construction Industry and Built Environment
Authors: Azeez Balogun
Abstract:
Green building rating practices are continually evolving and vary across regions. Yet, a few core ideas remain constant, such as site selection, design, energy efficiency, water and material conservation, indoor environmental quality, operational optimization, and waste reduction. The essence of green building lies in the optimization of one or more of these criteria. This paper conducts a comparative analysis of seven widely recognized sustainable rating systems—BREEAM, CASBEE, Green Globes, Green Star, HK-BEAM, IGBC Green Homes, and LEED—based on the perceptions and opinions of stakeholders in Nigeria certified in green building rating systems. The purpose is to identify and adopt an appropriate green building rating system for Nigeria. Numerous aspects of these systems were examined to determine the best fit for the Nigerian built environment. The findings indicate that LEED, the principal system in the USA and Canada, is the most suitable for Nigeria due to its strong foundation, extensive adoption, and proven benefits. LEED obtained the highest rating of 80 out of 100 points in this assessment.
Keywords: architecture, built environment, green building rating system, Nigeria Green Building Council, sustainability
Procedia PDF Downloads 28
29026 Ficus Carica as Adsorbent for Removal of Phenol from Aqueous Solutions: Modelling and Optimization
Authors: Tizi Hayet, Berrama Tarek, Bounif Nadia
Abstract:
Phenol and its derivatives are organic compounds utilized in the chemical industry. They are introduced into the environment by accidental spills and the illegal release of industrial and municipal wastewater. Phenols are organic intermediates that are considered potential pollutants. Adsorption is one of the purification and separation techniques used in this area. Algeria produces 131,000 tonnes of figs annually; therefore, a large amount of fig leaves is generated, and converting this waste into an adsorbent allows the valorization of an agricultural residue. The main purpose of the present work is to describe an application of statistical methods for modeling and optimizing the conditions of phenol (Ph) adsorption onto a locally available agricultural by-product (fig leaves). The best experimental performance of Ph elimination on the adsorbent was obtained with: adsorbent concentration (X2) = 0.2 g L-1; initial concentration (X3) = 150 mg L-1; agitation speed (X1) = 300 rpm.
Keywords: low-cost adsorbents, fig leaves, full factorial design, phenol, biosorption
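As a sketch of the full factorial design named in the keywords, the snippet below builds a 2³ design in the three coded factors from the abstract and fits a first-order model by least squares. The response values are assumed numbers for illustration; the paper's actual design levels and data are not reproduced.

```python
import itertools
import numpy as np

# 2^3 full factorial design in coded units (-1/+1) for the three factors:
# X1 agitation speed, X2 adsorbent concentration, X3 initial concentration.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))
# Hypothetical phenol removal (%) observed at each of the 8 runs.
y = np.array([41.0, 55.2, 47.9, 60.1, 44.5, 58.8, 50.3, 63.7])

# Fit y = b0 + b1*X1 + b2*X2 + b3*X3 by ordinary least squares.
X = np.hstack([np.ones((8, 1)), runs])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
# In coded units, the main effect of factor i equals 2 * b_i.
print("b0..b3 =", np.round(coeffs, 3))
```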
Procedia PDF Downloads 97
29025 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
The main concern about deep-fat-fried food materials is their high final oil content, absorbed during the frying process and/or after the cooling period, since a diet including a high content of oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partial drying of the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that determined for directly fried ones. For the predrying and frying processes, the controlled variables were oven temperature and weight loss, and frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture, and taste. The properties of the directly fried zucchini slices taking the highest score at the end of the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization. The properties determined after sensory evaluation were selected as targets, while oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. As a result, it is suggested that predrying could be one choice for reducing the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.
Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven
Procedia PDF Downloads 366
29024 The Design Optimization for Sound Absorption Material of Multi-Layer Structure
Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Tae-Hyeon Oh, Dae-Kyu Park
Abstract:
Sound absorbing material is used as an automotive interior material, and its sound absorption coefficient must be predicted in order to design it. However, the sound absorption coefficient is difficult to predict because the material is composed of several layers, so design targets are usually achieved through many rounds of experimental tuning, which costs considerable time and money. In this paper, we propose a process to estimate the sound absorption coefficient of a multi-layer structure. To estimate the coefficient, the physical properties of each material are used. These properties are predicted by the Foam-X software from sound absorption coefficient data measured with an impedance tube. Since there are many physical properties and the measurement equipment is expensive, the values predicted by the software are used. Through the measurement of the sound absorption coefficient of each material, its physical properties are calculated inversely. The properties of each material are then used to calculate the sound absorption coefficient of the multi-layer material. Since the absorption coefficient of the multi-layer structure can be calculated, optimization design is possible through simulation. We then compare and analyze the calculated sound absorption coefficient with data measured in a scaled reverberation chamber and with impedance tubes for a prototype. If this method is used when developing automotive interior materials with multi-layer structures, the development effort can be reduced because the design can be optimized by simulation, saving cost and time.
Keywords: sound absorption material, sound impedance tube, sound absorption coefficient, optimization design
Procedia PDF Downloads 289
29023 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The cropping-system approach is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) while increasing production at the same time, taking into account certain crop particularities. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proven efficient at solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures. One of the expected results is to optimize the usage of resources, in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities that have the highest profit and thus minimize costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome; a minimal sketch of this encoding appears below. Results about the efficiency of this method will be presented in a special section. The implementation of this method would benefit farmers' activity by giving them hints and helping them to use resources efficiently.
Keywords: chromosomes, cropping, genetic algorithm, genes
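The following is a minimal sketch of the chromosome/gene encoding described above: a rotation is a list of crops, and fitness rewards profit while penalizing consecutive repeats. The crop names, profits, penalty, and GA settings are assumptions for illustration, not the paper's model.

```python
import random

CROPS = ["wheat", "maize", "soy", "barley"]
PROFIT = {"wheat": 3.0, "maize": 4.0, "soy": 3.5, "barley": 2.5}  # assumed
YEARS, POP, GENS = 5, 30, 100

def fitness(rot):
    score = sum(PROFIT[c] for c in rot)
    # Penalize planting the same crop two years running (soil depletion).
    score -= sum(2.0 for a, b in zip(rot, rot[1:]) if a == b)
    return score

def crossover(a, b):
    cut = random.randrange(1, YEARS)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(rot, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c for c in rot]

pop = [[random.choice(CROPS) for _ in range(YEARS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # elitist selection of the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("best rotation:", best, "profit score:", fitness(best))
```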
Procedia PDF Downloads 428
29022 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market
Authors: Ioannis P. Panapakidis, Marios N. Moschakis
Abstract:
The Greek Energy Market is structured as a mandatory pool where producers make their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of the produced electricity. The solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatilities of complex time series.
Keywords: deregulated energy market, forecasting, machine learning, system marginal price
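A minimal sketch of the kind of neural-network SMP forecaster compared in the study is given below, using scikit-learn's MLPRegressor on synthetic data. The load/price generator, lag features, and network size are assumptions; the Greek market data and the authors' model configurations are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: a load proxy drives a noisy price series.
rng = np.random.default_rng(0)
n = 500
load = rng.uniform(4000, 9000, n)                # assumed system load (MW)
smp = 20 + 0.008 * load + rng.normal(0, 3, n)    # synthetic SMP (EUR/MWh)

X = np.column_stack([load[:-1], smp[:-1]])       # previous-hour features
y = smp[1:]                                      # next-hour target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", round(model.score(X_te, y_te), 3))
```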
Procedia PDF Downloads 215
29021 SMART: Solution Methods with Ants Running by Types
Authors: Nicolas Zufferey
Abstract:
Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch. However, other types of ant algorithms have recently emerged: the discussion is thus not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant algorithm, each ant builds a solution step by step by adding an element to it. Each choice is based on the greedy force (also called the visibility, the short-term profit, or the heuristic information) and the trail system (a central memory which collects historical information about the search process). Usually, all the ants of the population have the same characteristics and behaviors. In contrast, in this paper, a new type of ant metaheuristic is proposed, namely SMART (for Solution Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
Keywords: ant algorithms, evolutionary procedures, metaheuristics, optimization, population-based methods
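The constructive step described above can be sketched in a few lines: an ant picks the next element with probability proportional to trail^α × greedy_force^β, the classic ant-system rule. The candidate values and the α/β parameters below are illustrative assumptions, not SMART's specific settings.

```python
import random

alpha, beta = 1.0, 2.0                    # assumed influence exponents
candidates = ["a", "b", "c"]
trail = {"a": 0.8, "b": 0.5, "c": 1.2}    # central memory (historical info)
greedy = {"a": 1.5, "b": 2.0, "c": 0.7}   # visibility / short-term profit

weights = [trail[c] ** alpha * greedy[c] ** beta for c in candidates]
total = sum(weights)
probs = [w / total for w in weights]

choice = random.choices(candidates, weights=probs, k=1)[0]
print("selection probabilities:",
      {c: round(p, 3) for c, p in zip(candidates, probs)})
print("ant chose:", choice)
```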
Procedia PDF Downloads 365
29020 A New OvS Approach in Assembly Line Balancing Problem
Authors: P. Azimi, B. Behtoiy, A. A. Najafi, H. R. Charmchi
Abstract:
According to previous studies, one of the most well-known techniques affecting the efficiency of a production line is the assembly line balancing (ALB) technique. This paper examines the balancing of a whole production line of a real auto glass manufacturer in three steps. In the first step, the processing time of each activity in the workstations is generated according to a practical approach. In the second step, the whole production process is simulated and the bottleneck stations are identified, and finally, in the third step, several improvement scenarios are generated to optimize the system throughput, and the best one is proposed. The main contribution of the current research is the proposed framework, which combines two well-known approaches: assembly line balancing and optimization via simulation (OvS). The results show that the proposed framework can be applied easily in practical environments.
Keywords: assembly line balancing problem, optimization via simulation, production planning
Procedia PDF Downloads 526
29019 Generalized Limit Equilibrium Solution for the Lateral Pile Capacity Problem
Authors: Tomer Gans-Or, Shmulik Pinkert
Abstract:
The determination of lateral pile capacity per unit length is a key aspect of geotechnical engineering. Traditional approaches for assessing pile lateral capacity in cohesive soils involve the application of upper-bound and lower-bound plasticity theorems. However, a comprehensive solution encompassing the entire spectrum of soil strength parameters, particularly in frictional soils with or without cohesion, is still lacking. This research introduces an innovative implementation of the slice-method limit equilibrium solution for lateral capacity assessment. For any given numerical discretization of the soil domain around the pile, the lateral capacity evaluation is based on the mobilized strength concept. The critical failure geometry is then found by a unique optimization procedure which includes both factor-of-safety minimization and geometrical optimization. The robustness of the suggested methodology lies in the solution being independent of any predefined assumptions. Validation of the solution is accomplished through a comparison with established plasticity solutions for cohesive soils. Furthermore, the study demonstrates the applicability of the limit equilibrium method to unresolved cases involving frictional and cohesive-frictional soils. Beyond providing capacity values, the method enables the use of the mobilized strength concept to generate safety-factor distributions for scenarios representing pre-failure states.
Keywords: lateral pile capacity, slice method, limit equilibrium, mobilized strength
Procedia PDF Downloads 61
29018 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer
Authors: Aprajeeta Jha, Punyadarshini P. Tripathy
Abstract:
Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology, supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector, can potentially overcome the disadvantages of conventional solar dryers. For the development of such robust hybrid dryers, the optimization of process parameters becomes extremely critical to ensure the quality and shelf-life of paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model, and optimization of the process parameters (power level, air velocity, and moisture content) was done using response surface methodology in the Design-Expert software. A 3D FEM predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield, and drying time of 42˚C, 62%, and 86 min, respectively, having a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of the product. PV-integrated hybrid solar dryers can be employed as a potential and cutting-edge drying technology alternative for sustainable energy and food security.
Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer
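To give a feel for the moisture-transport equation such a model solves (dM/dt = D·∂²M/∂x²), here is a heavily reduced 1D explicit finite-difference sketch. The diffusivity, kernel size, and moisture values are assumed round numbers, not the paper's fitted parameters, and a 1D grid stands in for the 3D FEM.

```python
import numpy as np

D = 1e-10          # moisture diffusivity, m^2/s (assumed)
L = 1e-3           # kernel half-thickness, m (assumed)
nx, dt = 21, 0.05  # grid points and time step (s)
x = np.linspace(0, L, nx)
dx = x[1] - x[0]
M = np.full(nx, 0.25)   # initial moisture, kg water/kg dry matter (assumed)
M_surface = 0.10        # equilibrium surface moisture (assumed)

assert D * dt / dx**2 <= 0.5, "explicit FTCS stability limit"
for _ in range(int(3600 / dt)):          # simulate one hour of drying
    M[-1] = M_surface                    # Dirichlet condition at the surface
    lap = np.zeros_like(M)
    lap[1:-1] = (M[2:] - 2 * M[1:-1] + M[:-2]) / dx**2
    M[1:-1] += dt * D * lap[1:-1]
    M[0] = M[1]                          # symmetry at the kernel center

print("center moisture after 1 h:", round(M[0], 4))
```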
Procedia PDF Downloads 150
29016 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces
Authors: Shweta Singh, Sudaman Katti
Abstract:
The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory are used. In this research work, results were obtained for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique, used widely in reinforcement learning for stability and better policy updates while training, especially for the continuous action spaces used in this work. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities such as storing experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity
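To make the self-attention mechanism referenced above concrete, here is a minimal scaled dot-product attention computation in NumPy. The dimensions and random weights are illustrative only; this is not the authors' Unity/transformer implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 16            # e.g., 6 remembered observations
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d_model)              # pairwise relevance
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the memory
out = weights @ V                                # attention-weighted summary

print(out.shape)  # (6, 16): every step attends over the whole memory
```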
Procedia PDF Downloads 136
29015 RBF Modelling and Optimization Control for Semi-Batch Reactors
Authors: Magdi M. Nabi, Ding-Li Yu
Abstract:
This paper presents a neural-network-based model predictive control (MPC) strategy to control a strongly exothermic reaction with complicated nonlinear kinetics, given by the Chylla-Haase polymerization reactor, which requires very precise temperature control to maintain product uniformity. In the benchmark scenario, the operation of the reactor must be guaranteed under various disturbing influences, e.g., changing ambient temperatures or impurity of the monomer. Such a process is usually controlled by conventional cascade control, which provides robust operation but often lacks accuracy concerning the required strict temperature tolerances. The predictive control strategy based on the RBF neural model is applied to solve this problem and to achieve set-point tracking of the reactor temperature against disturbances. The results show that the RBF-based model predictive control gives reliable results in the presence of disturbances and keeps the reactor temperature within a tight tolerance range around the desired reaction temperature.
Keywords: Chylla-Haase reactor, RBF neural network modelling, model predictive control, semi-batch reactors
Procedia PDF Downloads 468
29014 Chemical Oxygen Demand Fractionation of Primary Wastewater Effluent for Process Optimization and Modelling
Authors: Thandeka Y. S. Jwara, Paul Musonge
Abstract:
Traditionally, the complexity associated with implementing and controlling biological nutrient removal (BNR) in wastewater works (WWW) has primarily involved balancing competing requirements for nitrogen and phosphorus removal, particularly with respect to the use of influent chemical oxygen demand (COD) as a carbon source for the microorganisms. Successful BNR optimization and modelling using WEST (Worldwide Engine for Simulation and Training) depend largely on the accurate fractionation of the influent COD. The different COD fractions have differing effects on the BNR process, and therefore the influent characteristics need to be well understood. This study presents the fractionation results of primary wastewater effluent COD at one of South Africa's wastewater works treating 65 ML/day of mixed industrial and domestic effluent. The method used for COD fractionation was the oxygen uptake rate/respirometry method. The breakdown of the results of the analysis is as follows: 70.5% biodegradable COD (bCOD) and 29.5% non-biodegradable COD (iCOD) in terms of the total COD. Further fractionation led to a readily biodegradable soluble fraction (SS) of 75%, a slowly degradable particulate fraction (XS) of 24%, a particulate non-biodegradable fraction (XI) of 50.8%, and a non-biodegradable soluble fraction (SI) of 49.2%. The fractionation results demonstrate that the primary effluent has good COD characteristics, as shown by the high level of the bCOD fraction, with SS being higher than XS. This means that the microorganisms have sufficient substrate for the BNR process and that these components can now serve as inputs to the WEST model for the plant under study.
Keywords: chemical oxygen demand, COD fractionation, wastewater modelling, wastewater optimization
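The fraction arithmetic reported above can be checked directly. The short sketch below recomputes the sub-fractions from the stated percentages; the total COD concentration is an assumed figure purely for illustration, since the abstract reports percentages only.

```python
# Recomputing the reported COD fractionation from the stated percentages.
total_cod = 500.0                 # mg/L, assumed influent value
bcod = 0.705 * total_cod          # biodegradable fraction (70.5%)
icod = 0.295 * total_cod          # non-biodegradable fraction (29.5%)

ss = 0.75 * bcod                  # readily biodegradable soluble (of bCOD)
xs = 0.24 * bcod                  # slowly biodegradable particulate (of bCOD)
xi = 0.508 * icod                 # non-biodegradable particulate (of iCOD)
si = 0.492 * icod                 # non-biodegradable soluble (of iCOD)

for name, v in [("SS", ss), ("XS", xs), ("XI", xi), ("SI", si)]:
    print(f"{name}: {v:.1f} mg/L ({100 * v / total_cod:.1f}% of total COD)")
```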
Procedia PDF Downloads 143
29013 Fuzzy Approach for the Evaluation of Feasibility Levels of Vehicle Movement on the Disaster-Stricken Zone's Roads
Authors: Gia Sirbiladze
Abstract:
Route planning problems are among the activities that have the highest impact on logistical planning, transportation, and distribution because of their effects on efficiency in resource management, service levels, and client satisfaction. In extreme conditions, the difficulty of vehicle movement between different customers causes imprecision in the time of movement and uncertainty in the feasibility of movement. A feasibility level of vehicle movement on a closed route of the disaster-stricken zone is defined for the construction of an objective function. Experts' evaluations of the uncertain parameters are presented in q-rung orthopair fuzzy numbers (q-ROFNs). A fuzzy bi-objective combinatorial optimization problem, the fuzzy vehicle routing problem (FVRP), is constructed based on the technique of possibility theory. The FVRP is reduced to the bi-criteria partitioning problem for the so-called "promising" routes, which are selected from all admissible closed routes. The convenient selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. Support software for the main results is designed. The constructed model is illustrated with a numerical example.
Keywords: q-rung ortho-pair fuzzy sets, facility location selection problem, multi-objective combinatorial optimization problem, partitioning problem
Procedia PDF Downloads 134
29012 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. Effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission concept's maturity level. This is obtained thanks to a guided negotiation space exploration, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
Procedia PDF Downloads 136
29011 Optimization of Multistage Extractor for the Butanol Separation from Aqueous Solution Using Ionic Liquids
Authors: Dharamashi Rabari, Anand Patel
Abstract:
n-Butanol can be regarded as a potential biofuel. Being resistant to corrosion and having a high calorific value, butanol is a very attractive energy source compared to ethanol. Bio-butanol can be produced by a fermentation process called ABE (acetone, butanol, ethanol), carried out mostly by the bacterium Clostridium acetobutylicum. The major drawback of the process is that butanol concentrations higher than 10 g/L delay the growth of the microbes, resulting in a low yield. This indicates the need for simultaneous separation of butanol from the fermentation broth. Two hydrophobic ionic liquids (ILs), 1-butyl-1-methylpiperidinium bis(trifluoromethylsulfonyl)imide [bmPIP][Tf₂N] and 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [hmim][Tf₂N], were chosen. The binary interaction parameters for both ternary systems, i.e., [bmPIP][Tf₂N] + water + n-butanol and [hmim][Tf₂N] + water + n-butanol, were taken from the literature, where they were generated by the NRTL model. Particle swarm optimization (PSO) with the isothermal sum rate (ISR) method was used to optimize the cost of the liquid-liquid extractor. For the [hmim][Tf₂N] + water + n-butanol system, PSO shows an 84% success rate, with the number of stages equal to eight and a solvent flow rate of 461 kmol/hr. The number of stages was three, with a 269.95 kmol/hr solvent flow rate, for the [bmPIP][Tf₂N] + water + n-butanol system. Moreover, both ILs were very efficient, as the loss of ILs in the raffinate phase was negligible.
Keywords: particle swarm optimization, isothermal sum rate method, success rate, extraction
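A bare-bones particle swarm optimization sketch is given below. The two decision variables mirror the abstract (number of stages, solvent flow rate), but the cost surface is a made-up convex stand-in, not the ISR column-cost model, and the swarm parameters are conventional assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):                       # hypothetical extractor cost surface
    stages, solvent = x
    return (stages - 5) ** 2 + 0.001 * (solvent - 400) ** 2

n, iters = 20, 200
lo, hi = np.array([2.0, 100.0]), np.array([15.0, 800.0])
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)           # keep particles in bounds
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (stages, solvent flow):", np.round(gbest, 2))
```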
Procedia PDF Downloads 122
29010 Sensitivity Analysis Optimization of a Horizontal Axis Wind Turbine from Its Aerodynamic Profiles
Authors: Kevin Molina, Daniel Ortega, Manuel Martinez, Andres Gonzalez-Estrada, William Pinto
Abstract:
Due to increasing environmental concerns, wind energy is growing strongly. This research studied the relationship between the power produced by a horizontal axis wind turbine (HAWT) and the aerodynamic profiles used for its construction. The analysis is performed using computational fluid dynamics (CFD), drawing a parallel between the energy generated by a turbine designed with the selected profiles and by one with optimized profiles. For the study, a selection process was carried out among the NACA 6-digit profiles recommended by the National Renewable Energy Laboratory (NREL) for the construction of this type of turbine. The selection took into account different characteristics of the wind (speed and density) and of the profiles (aerodynamic coefficients Cl and Cd at different Reynolds numbers and incidence angles). With the selected profiles, a sensitivity analysis optimization process was carried out between the profile geometry and the aerodynamic forces induced on it. The 3D models of the turbines were produced using the blade element momentum (BEM) method with both sets of profiles. The flow fields on the turbines were simulated, obtaining the forces induced on the blade, the torques produced, and a 3% increase in power due to the optimized profiles. Therefore, the results show that the sensitivity analysis optimization process can help increase wind turbine power.
Keywords: blade element momentum, blade, fluid structure interaction, horizontal axis wind turbine, profile design
Procedia PDF Downloads 259
29009 Julia-Based Computational Tool for Composite System Reliability Assessment
Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris
Abstract:
The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods such as sequential Monte Carlo simulation can be computationally expensive. Despite this, it is necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and in parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source composite system reliability evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools, and describes its design, validation, and effectiveness, including an analysis of two different formulations of the optimal power flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow
Procedia PDF Downloads 74
29008 Performance Analysis of Arithmetic Units for IoT Applications
Authors: Nithiya C., Komathi B. J., Praveena N. G., Samuda Prathima
Abstract:
At present, the ultimate aim in digital system design, especially at the gate level and lower levels of design abstraction, is power optimization. Adders are a nearly universal component of today's integrated circuits. Most previous research focused on the design of high-speed adders executing addition based on various adder structures. This paper discusses the ideal path for selecting an arithmetic unit for IoT applications. Based on the analysis of eight types of 16-bit adders, we found that the carry look-ahead adder (CLA) consumes the least power. Additionally, a multiplier and accumulator (MAC) unit is implemented with a Booth multiplier using the low-power adders in order of preference. The design is synthesized and verified using Synopsys Design Compiler and VCS, and then implemented using Cadence Encounter. The total power consumed by the CLA-based Booth multiplier is 0.03527 mW, the total area occupied is 11260 μm², and the delay is 2034 ps.
Keywords: carry look-ahead, carry select adder, CSA, internet of things, ripple carry adder, design rule check, power delay product, multiplier and accumulator
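For readers unfamiliar with why the CLA avoids ripple delay, the following is a bit-level sketch of the generate/propagate principle behind it, simulated in software purely for illustration (hardware CLAs evaluate the carry terms in parallel logic rather than a loop).

```python
def cla_add(a, b, width=16):
    """16-bit carry look-ahead addition via generate/propagate signals."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]   # G_i = a_i AND b_i
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]   # P_i = a_i XOR b_i
    carries = [0]                                                 # C_0 = 0
    for i in range(width):
        carries.append(g[i] | (p[i] & carries[i]))                # C_{i+1} = G_i OR (P_i AND C_i)
    s = 0
    for i in range(width):
        s |= (p[i] ^ carries[i]) << i                             # S_i = P_i XOR C_i
    return s, carries[width]

total, carry_out = cla_add(0xABCD, 0x1234)
assert total == (0xABCD + 0x1234) & 0xFFFF
print(hex(total), "carry out:", carry_out)
```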
Procedia PDF Downloads 118
29007 Multi-Objective Optimization and Effect of Surface Conditions on Fatigue Performance of Burnished Components Made of AISI 52100 Steel
Authors: Ouahiba Taamallah, Tarek Litim
Abstract:
The study deals with the burnishing of AISI 52100 steel and the influence of the treatment parameters (Py, i, and f) on surface integrity. The results show that the optimal effects are closely related to the treatment parameters. With a 92% improvement in roughness, SB can be defined as a finishing operation within the machining range. Due to an 85% gain in the consolidation rate, this treatment constitutes an efficient process for work-hardening the material. In addition, a statistical study based on regression and Taguchi's design made it possible to develop mathematical models predicting the output responses according to the studied burnishing parameters. Response surface methodology (RSM) showed the simultaneous influence of the burnishing parameters and allowed the optimal treatment parameters to be observed. ANOVA of the results validated the prediction models, with determination coefficients R²=94.60% and R²=93.41% for surface roughness and micro-hardness, respectively. Furthermore, a multi-objective optimization identified a regime characterized by P=20 kgf, i=5 passes, and f=0.08 mm.rev-1, which favors minimum surface roughness and maximum micro-hardness. The result was validated by a composite desirability Di=1 for both surface roughness and micro-hardness. Applying the optimal parameters, burnishing showed its beneficial effects on fatigue resistance, especially for imposed loading in the low-cycle fatigue regime of the material, where the lifespan increased by 90%.
Keywords: AISI 52100 steel, burnishing, Taguchi, fatigue
Procedia PDF Downloads 188
29006 Revalidation and Hormonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precison Optimization and External Quality Assurance Programs
Authors: Junaid Mahmood Alam
Abstract:
Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means of attaining optimal outcomes within clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I), and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunological analyzers, namely the Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electrochemiluminescence immunoassay (ECLi) techniques. The process of validation and revalidation was completed by evaluating and assessing the precision-analyzed PreciControl data of the various instruments, plotted against each other with regression analysis (R²). Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI, and NEQAPP assessments, depicted 99.0% to 99.8% optimization, in addition to validating the methodology and instruments used for the analyses. The regression R² for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of the analytical methods and instrumentation, thus revalidating the optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.
Keywords: revalidation, standardized, IFCC, CAP, harmonized
Procedia PDF Downloads 269
29005 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open-pit metal mine in Utah, USA; a toy version of the resulting linear program is sketched below. This method simultaneously uses meteorological data as a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained for selecting the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparative analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
Keywords: air pollution, linear programming, mining, optimization, treatment technologies
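The toy linear program referenced above is sketched here with SciPy: removal amounts of two pollutants are chosen to minimize treatment cost subject to removal targets and a capacity cap. All coefficients are invented for illustration; they are not the study's data or its actual constraint set.

```python
from scipy.optimize import linprog

# Decision variables: tonnes removed by [baghouse for PM, scrubber for SO2].
c = [120.0, 90.0]                 # assumed cost per tonne removed

A_ub = [[-1.0, 0.0],              # -x1 <= -50  ->  x1 >= 50 t (PM target)
        [0.0, -1.0],              # -x2 <= -30  ->  x2 >= 30 t (SO2 target)
        [1.0, 1.0]]               # combined handling capacity
b_ub = [-50.0, -30.0, 120.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal removals (t):", res.x, "minimum cost:", res.fun)
```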
Procedia PDF Downloads 208