Search results for: Combinatorial Optimization
2010 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the low-latency and high-reliability requirements of 5G application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet services, these computation tasks have higher processing priority and stricter delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and formulate a joint optimization problem of minimizing total system cost. To address this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is formulated as a Markov Decision Process. Experiments show that, compared with other computation offloading policies, our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computation offloading and resource allocation schemes, and improve system resource utilization.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network
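The abstract formulates offloading as a Markov Decision Process solved with a deep Q-network. The value-iteration loop behind such a policy can be sketched with a tabular stand-in; the queue model, cost figures, and reward shaping below are illustrative assumptions, not the authors' formulation.

```python
import random

# Toy offloading MDP: state = queue length (0..4); actions: 0 = compute
# locally, 1 = offload to the edge server. The cost figures are invented
# stand-ins for the paper's delay-plus-energy objective.
def step(state, action):
    local_cost = 2.0 + 0.5 * state   # local processing slows as the queue grows
    edge_cost = 1.0 + 0.1 * state    # edge server is faster but not free
    cost = local_cost if action == 0 else edge_cost
    next_state = min(4, max(0, state + random.choice([-1, 0, 1])))
    return next_state, -cost         # reward = negative cost

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    q = [[0.0, 0.0] for _ in range(5)]
    for _ in range(episodes):
        s = random.randrange(5)
        for _ in range(20):
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2, r = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learning()
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(5)]
print(policy)
```

A deep Q-network replaces the table with a neural approximator so the same update rule scales to the joint offloading-and-migration state space.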
Procedia PDF Downloads 120
2009 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of total CO2 emissions of the world, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been developed robustly since the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance objective (collapse prevention under the maximum considered earthquake) while meeting prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design result showed that minimized CO2 emissions and cost were achieved while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
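NSGA-II ranks candidate designs into Pareto fronts by non-dominated sorting over the objectives, here CO2 emissions and cost. A minimal sketch of that ranking step with hypothetical (emissions, cost) pairs:

```python
def dominates(a, b):
    # Minimization: a dominates b if it is no worse in all objectives
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (CO2 emissions, cost) pairs for five candidate frame designs.
designs = [(10, 5), (8, 7), (12, 4), (9, 9), (7, 8)]
print(non_dominated_sort(designs))  # design 3 is dominated by design 1
```

In full NSGA-II these fronts drive selection, with crowding distance breaking ties within a front.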
Procedia PDF Downloads 408
2008 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing
Authors: Yuning Guan
Abstract:
Under the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the MSPA model, the MCR model, and Linkage Mapper Tools to identify eco-sources and eco-corridors in the Shapingba District of Chongqing, and combines landscape connectivity assessment with circuit theory to delineate importance levels and extract ecological pinch point areas on the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area and showing a pattern of 'one core, three corridors, multi-point distribution'. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a 'more in the west, less in the east' pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the total corridor length, mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies for sub-area protection of ecological sources, grade-level construction of ecological corridors, and precise restoration of ecological pinch points.
Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, shapingba district
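The MCR model accumulates resistance along least-cost paths outward from eco-sources over a resistance surface, which on a raster reduces to Dijkstra's algorithm. A toy sketch; the 4×4 grid is invented, whereas real surfaces weight land cover, slope, and related factors:

```python
import heapq

# Minimum Cumulative Resistance on a raster: accumulate resistance along
# least-cost paths from an eco-source cell (Dijkstra over 4-connected cells).
def mcr_surface(resistance, source):
    rows, cols = len(resistance), len(resistance[0])
    acc = [[float("inf")] * cols for _ in range(rows)]
    acc[source[0]][source[1]] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > acc[r][c]:
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < acc[nr][nc]:
                    acc[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return acc

surface = [[1, 1, 5, 5],
           [1, 9, 9, 5],
           [1, 1, 1, 1],
           [5, 5, 5, 1]]
acc = mcr_surface(surface, (0, 0))
print(acc[3][3])  # cumulative resistance from the eco-source to the far corner
```

Corridors then follow the valleys of such accumulated-cost surfaces between pairs of sources.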
Procedia PDF Downloads 47
2007 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization
Authors: Shahrukh Ahmad, Purnendu Bose
Abstract:
Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom performed in sewage treatment plants, and large volumes of wastewater that still contain nutrients are discharged, which can lead to eutrophication. That is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate concentrations of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated with a 9-hour light and 3-hour dark cycle, using only one color of bulb per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol·m⁻²·s⁻¹, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days.
The quantum PAR (Photosynthetic Active Radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (Indian Rupee) per light intensity (µmol·m⁻²·s⁻¹) value at 19.40, while the green LED bulbs had the highest, at 115.11. Based on these comparative tests, it was concluded that the white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, which helps improve the overall wastewater quality before it is discharged back into the environment.
Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, led bulbs
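The selection criterion above is a simple cost-per-intensity ratio. A sketch reproducing it: the 19.40 and 115.11 ratios and the white lamp's PPFD of 82.61 are the reported values, while the lamp costs and the green lamp's PPFD are hypothetical numbers back-calculated to match.

```python
# Cost-effectiveness metric: average cost (INR) divided by light intensity
# (µmol·m⁻²·s⁻¹); lower is better.
def cost_per_ppfd(cost_inr, ppfd):
    return cost_inr / ppfd

ratios = {
    "white": cost_per_ppfd(1602.6, 82.61),  # PPFD as reported; cost hypothetical
    "green": cost_per_ppfd(1450.4, 12.60),  # both values hypothetical
}
best = min(ratios, key=ratios.get)
print(best, round(ratios["white"], 2), round(ratios["green"], 2))
```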
Procedia PDF Downloads 81
2006 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque
Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya
Abstract:
The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms under conservative loads such as gravity and spring loads, using non-zero-free-length springs with child-parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages by minimizing the variance of the potential energy. It takes the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in actuator torque requirements with a practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and total weight of the arm. This is particularly important for humanoid robots, where a parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are also lower in cost and facilitate the development of low-cost devices. Although the method provides only an approximate balancing, it is versatile, flexible in choosing appropriate control variables relevant to the design problem, and easy to implement.
The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring, and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby avoiding the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resulting arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot
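The variance-minimization idea can be illustrated on a single gravity-loaded link: grid-search a spring constant and free length that flatten the total potential energy over the joint workspace. The geometry, mass, and search ranges below are invented, and the paper's multi-link arm and constraints are not modeled.

```python
import math

# Single link hinged at the origin, hanging at angle theta from the downward
# vertical; spring anchored a height `a` up the parent, attached a distance
# `b` along the child link. All numbers are illustrative assumptions.
m, g, r = 1.0, 9.81, 0.3   # link mass (kg), gravity, distance to COM (m)
a, b = 0.1, 0.25           # anchor height / attachment distance (m)

def total_pe(theta, k, L0):
    gravity_pe = m * g * r * (1 - math.cos(theta))
    s = math.sqrt(a * a + b * b + 2 * a * b * math.cos(theta))  # spring length
    return gravity_pe + 0.5 * k * (s - L0) ** 2

def pe_variance(k, L0, n=60):
    thetas = [i * math.pi / (n - 1) for i in range(n)]
    pes = [total_pe(t, k, L0) for t in thetas]
    mean = sum(pes) / n
    return sum((p - mean) ** 2 for p in pes) / n

best = min(((pe_variance(k, L0), k, L0)
            for k in range(50, 500, 10)
            for L0 in (0.01 * j for j in range(1, 20))),
           key=lambda t: t[0])
# The flattest design lands near the ideal zero-free-length value
# k = m*g*r/(a*b) ≈ 118 N/m, with a small free length.
print(best)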
Procedia PDF Downloads 246
2005 Preparation and Properties of Gelatin-Bamboo Fibres Foams for Packaging Applications
Authors: Luo Guidong, Song Hang, Jim Song, Virginia Martin Torrejon
Abstract:
Due to their excellent properties, polymer packaging foams have become increasingly essential in our current lifestyles. They are cost-effective and lightweight, with excellent mechanical and thermal insulation properties. However, they constitute a major environmental and health concern due to litter generation, ocean pollution, and microplastic contamination of the food chain. In recent years, considerable efforts have been made to develop more sustainable alternatives to conventional polymer packaging foams. As a result, biobased and compostable foams are increasingly becoming commercially available, such as starch-based loose-fill or PLA trays. However, there is still a need for bulk manufacturing of bio-foam planks for packaging applications as a viable alternative to their fossil fuel counterparts (i.e., polystyrene, polyethylene, and polyurethane). Gelatin is a promising biopolymer for packaging applications due to its biodegradability, availability, and biocompatibility, but its mechanical properties are poor compared to conventional plastics. However, as widely reported for other biopolymers such as starch, the mechanical properties of gelatin-based bioplastics can be enhanced by formulation optimization, for example by incorporating fibres from crops such as bamboo. This research aimed to produce gelatin-bamboo fibre foams by mechanical foaming and to study the effect of fibre content on the foams' properties and structure. As a result, foams with virtually no shrinkage, low density (<40 kg/m³), low thermal conductivity (<0.044 W/m·K), and mechanical properties comparable to conventional plastics were produced. Further work should focus on developing formulations suitable for the packaging of water-sensitive products and on processing optimization, especially reduction of the drying time.
Keywords: biobased and compostable foam, sustainable packaging, natural polymer hydrogel, cold chain packaging
Procedia PDF Downloads 108
2004 A Study in Optimization of FSI (Floor Space Index) in Kerala
Authors: Anjali Suresh
Abstract:
Kerala is well known for its unique settlement pattern, comprising for the most part a continuous spread of habitation. The notable urbanization trend in Kerala is urban spread rather than concentration, which points to the increasing urbanization of the peripheral areas of existing urban centers. This has thrown up a challenge for the authorities to cater to the needs of the urban population, such as providing affordable housing and infrastructure facilities to sustain their livelihood, a matter of concern that needs policy attention when fixing the optimum FSI value. Recent reports from the UN (Post Disaster Needs Analysis, PDNA), addressing the unsafe situation created by applying a carpet FAR/FSI across a state with varying geological and climatic conditions, should also be a matter of concern. The FSI (floor space index: the ratio of the built-up space on a plot to the area of the plot) value is certainly one of the key regulatory factors in checking land utilization for the varying occupancies desired for the overall development of a state with limited land availability compared to its neighbors. The pattern of urbanization, physical conditions, topography, etc., vary within the state and can change remarkably over time, which indicates that the FSI norms practiced in Kerala do not fulfil their intended function. Thus, the FSI regulation is expected to change dynamically from location to location. So, for determining the optimum value of FSI/FAR for a region in the state of Kerala, government agencies should consider the optimum land utilization for the growing urbanization while, on the other hand, keeping in check the overutilization of the same in line with the environmental and geographic character of the region.
Therefore, the study identifies parameters that should be considered for assigning FSI within the Kerala context and, through expert surveys and opinions, arrives at a methodology for assigning an optimum FSI value for a region in the state of Kerala.
Keywords: floor space index, urbanization, density, civic pressure, optimization
Procedia PDF Downloads 105
2003 Solving a Micromouse Maze Using an Ant-Inspired Algorithm
Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira
Abstract:
This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the family of Swarm Intelligence-based algorithms, a subset of biologically inspired algorithms. The starting point is a problem in which one has a maze and needs to find a path to its center and return to the starting position. This is similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone update methods, and a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as those found by the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking
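The article implements the algorithm in Scratch/m-Block; for compactness, here is a Python sketch of the pheromone feedback loop on a toy maze with two alternative routes to the center. The route lengths, evaporation rate, and deposit rule are invented illustrations of the general ACO scheme:

```python
import random

# Ants pick a route with probability proportional to pheromone^alpha *
# (1/length)^beta; pheromone then evaporates and the chosen route is
# reinforced in inverse proportion to its length.
def choose(routes, pher, alpha=1.0, beta=2.0):
    weights = [pher[name] ** alpha * (1.0 / length) ** beta for name, length in routes]
    r, acc = random.random() * sum(weights), 0.0
    for (name, length), w in zip(routes, weights):
        acc += w
        if r <= acc:
            return name, length

routes = [("short", 4.0), ("long", 9.0)]   # two routes to the maze centre
pher = {"short": 1.0, "long": 1.0}
random.seed(1)
for _ in range(50):                        # 50 ants, one after another
    name, length = choose(routes, pher)
    for key in pher:                       # evaporation
        pher[key] *= 0.9
    pher[name] += 1.0 / length             # deposit favours shorter routes
print(pher["short"] > pher["long"])
```

The positive feedback concentrates pheromone on the shorter route, which is the mechanism the students observe in the graphical simulator.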
Procedia PDF Downloads 126
2002 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads
Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani
Abstract:
The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product of argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, focusing primarily on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with a known concentration of the isotope. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process.
The Langmuir isotherm and the pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology
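The pseudo-second-order model is commonly fitted through its linearization t/qₜ = 1/(k·qe²) + t/qe, where a plot of t/qₜ against t has slope 1/qe and intercept 1/(k·qe²). A sketch on synthetic data; the qe and k values are assumed for illustration, not the study's measurements:

```python
# Pseudo-second-order uptake: q_t = qe^2 * k * t / (1 + qe * k * t).
def pso_q(t, qe, k):
    return (qe * qe * k * t) / (1 + qe * k * t)

times = [10, 30, 60, 90, 120, 150]                 # minutes
q_t = [pso_q(t, 5.0, 0.02) for t in times]         # assumed qe=5.0 mg/g, k=0.02

# ordinary least squares on (t, t/q_t): slope = 1/qe, intercept = 1/(k*qe^2)
x, y = times, [t / qt for t, qt in zip(times, q_t)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
qe_fit = 1 / slope
k_fit = slope * slope / intercept
print(round(qe_fit, 3), round(k_fit, 4))  # recovers the assumed 5.0 and 0.02
```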
Procedia PDF Downloads 39
2001 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because such filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single-tuned passive filter that works best in distribution networks, in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single-tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and thus improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
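Textbook sizing relations for a single-tuned filter (a simpler starting point than the paper's constrained optimization) choose the capacitor for fundamental reactive power support and the inductor so the branch resonates at the targeted harmonic order. A sketch with assumed 400 V, 50 kvar, 5th-harmonic figures:

```python
import math

# Single-tuned branch at the PCC: XC = V^2 / Qc at the fundamental, and the
# tuned condition h^2 * XL = XC places the series resonance at harmonic h.
def single_tuned_filter(v_ll, qc_var, h, f1=50.0):
    xc = v_ll ** 2 / qc_var           # capacitive reactance at the fundamental
    xl = xc / h ** 2                  # inductive reactance at the fundamental
    w1 = 2 * math.pi * f1
    C = 1 / (w1 * xc)                 # farads
    L = xl / w1                       # henries
    f_res = 1 / (2 * math.pi * math.sqrt(L * C))
    return C, L, f_res

C, L, f_res = single_tuned_filter(v_ll=400.0, qc_var=50e3, h=5)
print(round(f_res, 1))  # 250.0 Hz, the 5th harmonic of a 50 Hz system
```

In practice the tuning point is detuned slightly below the harmonic and the quality factor is chosen by the optimization the abstract describes.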
Procedia PDF Downloads 308
2000 The Sub-Optimality of the Electricity Subsidy on Tube Wells in Balochistan (Pakistan): An Analysis Based on Socio-Cultural and Policy Distortions
Authors: Rameesha Javaid
Abstract:
Agriculture is the backbone of the economy of the province of Balochistan, which is known as the 'fruit basket' of Pakistan. Its climate zones, comprising highlands and plateaus dependent on rain water, are more suited to the production of deciduous fruit. The vagaries of the weather, and more so the persistent droughts, prompted the government to announce flat monthly electricity rates irrespective of the size of the farm, the quantum of water used, or the category of crop group. That has, no doubt, resulted in increased cropping intensity, production, and employment, but has enormously burdened the official exchequer, which picks up the residual bills in fixed percentages shared among the federal and provincial governments and the local electricity company. This study tests the desirability of continuing the subsidy in its present mode. Optimization of the social welfare of farmers has been the focus of the study, with emphasis on the contribution of positive externalities and the distortions caused in terms of negative externalities. By using the optimization technique with due allowance for distortions, it has been established that the subsidy calls for limiting policy distortions, as they cause sub-optimal utilization of the tube well subsidy, and for improved policy programming. A sensitivity analysis with changed rankings of the variables contributing to social welfare does not significantly change the result. This leads to the net findings and policy recommendations of significantly reducing the subsidy size, correcting and curtailing policy distortions, and targeting the subsidy grant more towards small farmers to generate more welfare, by saving a sizeable amount from the subsidy for investment in the wellbeing of farmers in rural Balochistan.
Keywords: distortion, policy distortion, socio-cultural distortion, social welfare, subsidy
Procedia PDF Downloads 293
1999 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol
Authors: Idan Chiyanzu, Maruping Mangena
Abstract:
Crude glycerol, a major by-product of the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can yield a wide range of chemicals, including alcohols, organic acids, hydrogen, solvents, and intermediate compounds. In bacteria, 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among Pseudomonas syringae species and have been recognized as being of emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for crude glycerol conversion to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus enhanced production of the ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration, and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into value-added products such as light olefins, and that tools including DNA recombineering techniques, molecular biology, and bioengineering can be used to enable the production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly more secure, renewable, and environmentally safe alternative to the thermochemical approach to ethylene production.
Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization
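Codon optimization at its simplest back-translates the protein sequence using the host's most frequent codon for each amino acid. A naive sketch; the preference table is a small illustrative subset, not a validated E. coli codon usage table, and real tools also balance GC content, secondary structure, and restriction sites:

```python
# Hypothetical "most preferred codon" table for a handful of residues
# (single-letter amino acid code; '*' = stop).
PREFERRED = {"M": "ATG", "K": "AAA", "T": "ACC", "E": "GAA",
             "F": "TTT", "L": "CTG", "*": "TAA"}

def codon_optimize(protein):
    # Back-translate by always emitting the host-preferred codon.
    return "".join(PREFERRED[aa] for aa in protein)

print(codon_optimize("MKTE*"))  # ATGAAAACCGAATAA
```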
Procedia PDF Downloads 280
1998 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand. This minimizes production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demand.
Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence
Procedia PDF Downloads 24
1997 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model
Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki
Abstract:
As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources, estimated to be potentially the largest in the world. In addition, China has an enormous unmet demand for a clean alternative to substitute for coal. Nonetheless, the geological complexity of China's shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate the U.S. shale gas boom to a significant degree, it faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China's gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios projecting domestic shale gas supply by 2020 (optimistic, medium, and conservative), taking as references the International Energy Agency's (IEA's) projections and China's shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters were collected from provincial 'twelfth five-year' plans and the 'China Oil and Gas Pipeline Atlas'. The multi-objective optimization model, implemented in GAMS and MATLAB, aims to minimize the demand that cannot be met while simultaneously minimizing total gas supply and transmission costs.
The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results of the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of parts of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China
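Stripped of its GIS layer, the pipeline model is in essence a minimum-cost network flow from supply basins to demand centers over capacity-limited arcs. A toy successive-shortest-path solver; the five-node network and its capacities and costs are invented, not the paper's data:

```python
# Min-cost flow by successive shortest paths (Bellman-Ford handles the
# negative-cost residual arcs). arcs: (from, to, capacity, unit cost).
def min_cost_flow(n, arcs, source, sink):
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in arcs:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward arc
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual arc
    flow = total_cost = 0
    while True:
        dist = [float("inf")] * n
        dist[source], parent = 0, [None] * n
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[sink] == float("inf"):
            return flow, total_cost
        push, v = float("inf"), sink        # bottleneck on the cheapest path
        while v != source:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = sink                            # augment along the path
        while v != source:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[sink]

# nodes: 0 super-source, 1 Sichuan basin, 2 Chongqing, 3 demand hub, 4 sink
arcs = [(0, 1, 10, 0), (0, 2, 5, 0),
        (1, 3, 8, 3), (2, 3, 5, 2), (3, 4, 12, 0)]
print(min_cost_flow(5, arcs, 0, 4))
```

An unmet-demand penalty arc from source to sink would reproduce the paper's "minimize demand that cannot be met" objective within the same framework.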
Procedia PDF Downloads 289
1996 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity
Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz
Abstract:
The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, increasingly powerful, quality-of-service-aware, and power-efficient computing units are necessary. Recently, cellular technology has drawn more attention as a means of connectivity that can ensure reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Unnecessary HOs may lead to “ping-pong” between the UAVs and the serving cells, resulting in degraded quality of service and increased energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural and urban areas). We also consider the impact of the decision distance, at which the drone makes its switching decision, on the number of HOs. Our results show that the Q-learning-based algorithm significantly reduces the average number of HOs compared to a baseline case in which the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e. rural, semi-rural, or urban. Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance
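The handover trade-off can be sketched with minimal tabular Q-learning: the drone moves along a corridor of positions, the state is (position, serving cell), and the reward trades received signal strength against a handover penalty. Signal values, the penalty, and all hyper-parameters below are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(0)

# Toy corridor: at each position the drone serves from cell 0 or cell 1.
# Tuples give (cell0 signal, cell1 signal) per position; values are made up.
signal = [(9, 3), (8, 4), (5, 5), (4, 8), (3, 9)]
HO_PENALTY = 3.0                  # cost of switching serving cell
alpha, gamma, eps = 0.5, 0.9, 0.1

# State = (position, currently serving cell); action = cell to use next.
Q = {(p, c): [0.0, 0.0] for p in range(len(signal)) for c in (0, 1)}

for episode in range(2000):
    cell = 0
    for pos in range(len(signal)):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: Q[(pos, cell)][x])
        r = signal[pos][a] - (HO_PENALTY if a != cell else 0.0)
        nxt = max(Q[(pos + 1, a)]) if pos + 1 < len(signal) else 0.0
        Q[(pos, cell)][a] += alpha * (r + gamma * nxt - Q[(pos, cell)][a])
        cell = a

# Greedy rollout: count handovers along the corridor
cell, handovers = 0, 0
for pos in range(len(signal)):
    a = max((0, 1), key=lambda x: Q[(pos, cell)][x])
    handovers += a != cell
    cell = a
```

A greedy baseline that always picks the strongest cell would also switch only once here, but with a larger penalty or noisier signals the learned policy suppresses ping-pong switches the baseline would make.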
Procedia PDF Downloads 110
1995 Technical Sustainable Management: An Instrument to Increase Energy Efficiency in Wastewater Treatment Plants, a Case Study in Jordan
Authors: Dirk Winkler, Leon Koevener, Lamees AlHayary
Abstract:
This paper contributes to the improvement of municipal wastewater systems in Jordan. An important goal is increased energy efficiency in wastewater treatment plants and, therefore, lower expenses due to reduced electricity consumption. The chosen way to achieve this goal is the implementation of Technical Sustainable Management adapted to the Jordanian context. Three wastewater treatment plants in Jordan were chosen as a case study for the investigation. They were selected because their performance and size are representative of average facilities, and because an energy assessment had recently been conducted in those facilities. The project succeeded in proving the following hypothesis: energy efficiency in wastewater treatment plants can be improved by implementing principles of Technical Sustainable Management adapted to the Jordanian context. As this case study shows, a significant increase in energy efficiency can be achieved by optimizing operational performance, identifying and eliminating shortcomings, and managing the plant appropriately. Implementing Technical Sustainable Management, a low-cost tool with a comparatively small workload, provides several benefits beyond increased energy efficiency, including compliance with all legal and technical requirements and process optimization, as well as increased work safety and more convenient working conditions. Research in this field continues because there are indications that the adapted tool could be transferred to other regions and sectors. The concept of Technical Sustainable Management adapted to the Jordanian context could be extended to other wastewater treatment plants in all regions of Jordan, and also to other sectors including water treatment, water distribution, wastewater networks, desalination, and the chemical industry. Keywords: energy efficiency, quality management system, technical sustainable management, wastewater treatment
Procedia PDF Downloads 166
1994 Revolutionizing Manufacturing: Embracing Additive Manufacturing with Eggshell Polylactide (PLA) Polymer
Authors: Choy Sonny Yip Hong
Abstract:
This abstract presents an exploration into the creation of a sustainable bio-polymer compound for additive manufacturing, specifically 3D printing, with a focus on eggshells and polylactide (PLA) polymer. The project began with exploratory research and experiments, testing a variety of food by-products for bio-polymer production; after careful evaluation, promising results were obtained when combining eggshells with PLA polymer. The initial mixing of the two materials involved heating them just above the melting point. To make the compound 3D printable, the research focused on finding the optimal formulation and production process. The process started with precise measurements of the PLA and eggshell materials. The PLA was placed in a heating oven to remove any absorbed moisture; this gradual moisture evaporation required several hours. Handmade testing samples were created to guide the planning for 3D-printed versions, and scrap PLA was recycled and ground into a powdered state. The PLA and eggshell materials were then fed into the hopper of a filament-making machine. The machine's four heating elements controlled the temperature of the melted compound mixture, allowing for filament production with accurate and consistent thickness. The machine extruded the compound, producing filament that could be wound on a wheel. During the testing phase, trials were conducted with different percentages of eggshell in the PLA mixture; at a high eggshell percentage (20%), poor extrusion results were observed.
Samples were created, and continuous improvement and optimization were pursued to achieve filaments with good performance. To test the 3D printability of the DIY filament, a 3D printer was configured to print it smoothly and consistently. Printed samples were mechanically tested using a universal testing machine to determine their mechanical properties, allowing evaluation of the filament's performance and suitability for additive manufacturing applications. In conclusion, the project explores the creation of a sustainable bio-polymer compound using eggshells and PLA polymer for 3D printing. The findings contribute to the advancement of additive manufacturing, offering opportunities for design innovation, carbon footprint reduction, supply chain optimization, and collaboration. The utilization of eggshell PLA polymer in additive manufacturing has the potential to transform the manufacturing industry, providing a sustainable alternative and enabling the production of intricate and customized products. Keywords: additive manufacturing, 3D printing, eggshell PLA polymer, design innovation, carbon footprint reduction, supply chain optimization, collaborative potential
Procedia PDF Downloads 74
1993 Optimization of Enzymatic Hydrolysis of Cooked Porcine Blood to Obtain Hydrolysates with Potential Biological Activities
Authors: Miguel Pereira, Lígia Pimentel, Manuela Pintado
Abstract:
Animal blood is a major by-product of slaughterhouses and still represents a cost and an environmental problem in some countries. To be eliminated, blood must be stabilised by cooking, after which the slaughterhouses have to pay for its incineration. In order to reduce these elimination costs and valorise the high protein content, the aim of this study was to optimize the hydrolysis conditions, in terms of enzyme ratio and time, so as to obtain hydrolysates with biological activity. Two enzymes were tested in this assay: pepsin and proteases from Cynara cardunculus (cardosins). The latter has the advantage of being widely used in the Portuguese dairy industry at a low price. The screening assays were carried out over reaction times between 0 and 10 h and enzyme/reaction volume ratios between 0 and 5%. The assays were performed at the optimal conditions of pH and temperature for each enzyme: 55 °C at pH 5.2 for cardosins and 37 °C at pH 2.0 for pepsin. After reaction, the hydrolysates were evaluated by FPLC (Fast Protein Liquid Chromatography) and tested for antioxidant activity by the ABTS method. The FPLC chromatograms showed different profiles for the enzymatic reactions compared with the control (no enzyme added): new peaks of lower molecular weight, absent from the control samples, demonstrated hydrolysis by both enzymes. Regarding antioxidant activity, the best results for both enzymes were obtained using an enzyme/reaction volume ratio of 5% and 5 h of hydrolysis; extending the reaction further did not significantly affect the antioxidant activity, which is industrially relevant in terms of process cost.
In conclusion, enzymatic blood hydrolysis can be a better alternative to the current elimination process, allowing the industry to reuse an ingredient with biological properties and economic value. Keywords: antioxidant activity, blood, by-products, enzymatic hydrolysis
Procedia PDF Downloads 511
1992 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization
Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay
Abstract:
In recent years, there has been a growing focus on advanced manufacturing processes, and one such process is wire electric discharge machining (WEDM). WEDM is a precision machining process designed for cutting electrically conductive materials with exceptional accuracy; it removes material from the workpiece through spark erosion driven by electrical discharges. Initially developed for precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mould dies, machining of insulating materials, and manufacturing of micro products. WEDM has found particularly extensive usage in high-precision machining of complex workpieces that possess varying hardness and intricate shapes. During the cutting process, a wire of small diameter (0.18 mm) is employed. The evaluation of WEDM performance typically revolves around two critical responses: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of machining parameters on these quality characteristics, an Analysis of Variance (ANOVA) was conducted. This statistical analysis aimed to determine the significance of the machining parameters and their relative contributions in controlling the response of the WEDM process. Based on this analysis, optimal levels of the machining parameters were identified to achieve desirable material removal rates and surface roughness. Keywords: WEDM, MRR, optimization, surface roughness
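The ANOVA step can be illustrated with a one-way test of whether a single machining parameter significantly affects MRR; `scipy.stats.f_oneway` returns the F statistic and p-value for comparison against the 0.05 significance level. The measurements below are hypothetical, not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical MRR measurements (mm^3/min) at three levels of one machining
# parameter (e.g. pulse-on time); values are illustrative, not the study's.
mrr_low = [4.1, 3.9, 4.3, 4.0]
mrr_mid = [5.2, 5.5, 5.1, 5.4]
mrr_high = [6.8, 6.5, 6.9, 6.7]

# One-way ANOVA: is between-level variance large relative to within-level?
f_stat, p_value = f_oneway(mrr_low, mrr_mid, mrr_high)
significant = p_value < 0.05   # parameter significantly affects MRR
```

The full study uses a multi-factor ANOVA to rank several parameters by their contribution, but each factor's significance test reduces to this same F-ratio comparison.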
Procedia PDF Downloads 78
1991 Patient-Specific Design Optimization of Cardiovascular Grafts
Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw
Abstract:
Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example, establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit as the patient grows over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging, image processing tools, and mathematical modeling to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is performed on models constructed from each individual patient's data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of graft evaluation. We are also developing a methodology for the fabrication of the optimized designs. Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering
Procedia PDF Downloads 246
1990 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its static and dynamic characteristics can be used. Non-destructive techniques are common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some of the degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specifications and the properties derived from the static data. Damage changes the stiffness of the structure, so the method determines damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single-damage and multiple-damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so achieving accurate damage identification is important for establishing the reliability of the method. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection is also efficient in multiple-damage scenarios. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures. Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
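The GA-based identification can be sketched on a simplified stand-in structure: a chain of springs whose static displacements under a tip load play the role of the measured DOFs, with damage modelled as stiffness reduction factors. The structure, load, damage pattern, and GA settings below are illustrative assumptions, not the paper's truss model.

```python
import random

random.seed(1)

N = 5                      # springs in a simple chain (stand-in for a truss)
K0, F = [100.0] * N, 10.0  # undamaged stiffnesses and applied tip load
true_damage = [0.0, 0.3, 0.0, 0.0, 0.15]   # assumed damage severities

def displacements(damage):
    # Static response of the chain under tip load F: cumulative flexibility
    flex = [1.0 / (k * (1.0 - d)) for k, d in zip(K0, damage)]
    out, acc = [], 0.0
    for f in flex:
        acc += F * f
        out.append(acc)
    return out

measured = displacements(true_damage)   # plays the role of measured data

def fitness(d):  # lower = closer match between model and measured response
    return sum((a - b) ** 2 for a, b in zip(displacements(d), measured))

# Simple elitist GA: uniform crossover among elites plus Gaussian mutation
pop = [[random.uniform(0, 0.5) for _ in range(N)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]
    children = []
    while len(children) < 50:
        p1, p2 = random.sample(elite, 2)
        child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
        if random.random() < 0.3:
            i = random.randrange(N)
            child[i] = min(0.5, max(0.0, child[i] + random.gauss(0, 0.05)))
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)   # estimated damage factors per spring
```

The recovered vector should localise the most severe damage at the second spring, the same location-and-severity output the paper's GA produces for the plane truss.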
Procedia PDF Downloads 239
1989 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid
Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan
Abstract:
In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B), and extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. Results showed that all linear terms and the interaction term between the acid volume to sludge quantity ratio and extraction time (BC) had a statistically significant influence on the efficiency of copper extraction under the tested conditions; the most significant effect was ascribed to the acid volume to sludge quantity ratio (B), followed by sulfuric acid concentration (A), extraction time (C), and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the significance level of 0.05. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method, the desirability (D) function, targeting maximum removal. The optimum conditions, extracting 99% of the copper, were found to be: sulfuric acid concentration 0.9 M, a ratio of acid volume (mL) to sludge quantity (g) of 100:1, and an extraction time of 80 min.
Experiments under the optimized conditions were carried out to validate the accuracy of the model. Keywords: acid treatment, chemical extraction, sludge, waste management
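The desirability-function step can be sketched as follows: a larger-the-better Derringer-type transform maps predicted removal onto [0, 1], and a grid search over the coded factors maximizes it. The model coefficients below are made-up placeholders, not the fitted model from the study.

```python
def d_max(y, low, high, weight=1.0):
    # Derringer-Suich larger-the-better desirability in [0, 1]:
    # 0 below `low`, 1 above `high`, power-scaled ramp in between.
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

# Illustrative first-order model with a BC-type interaction, factors coded
# in [-1, 1]. Coefficients are invented for the sketch, not the paper's fit.
def removal(a, b, c):
    return 80 + 6 * a + 9 * b + 3 * c + 2.5 * b * c

# Grid search over coded factor space for maximum desirability
best = max(
    ((a / 10, b / 10, c / 10)
     for a in range(-10, 11) for b in range(-10, 11) for c in range(-10, 11)),
    key=lambda p: d_max(removal(*p), low=70, high=100),
)
```

With all main effects positive, the search drives every factor to its upper coded level, analogous to the study's optimum sitting at high acid ratio and long extraction time.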
Procedia PDF Downloads 199
1988 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy. Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 36
1987 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Modelling of water resource systems has constantly been a challenge throughout history. As methodological innovations evolve alongside computer science, researchers are likely to confront ever larger and more complex water resource systems, due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme based on mathematical programming has been applied to Gilan’s large-scale water resource model. The calibration is developed in order to estimate unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with the model results. The calibration results are reasonable and present a rational insight into the system. Subsequently, the optimized parameters were used in a basin-scale linear optimization model with the ability to evaluate the system’s performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to a minimum water shortage in the reduced-inflow scenario. Keywords: auto-calibration, Gilan, large-scale water resources, simulation
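The auto-calibration idea can be sketched as a least-squares fit: unknown return-flow fractions are tuned so that simulated outflows reproduce gauged ones. The toy mass balance and every number below are illustrative assumptions, not the Gilan model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy mass balance at a gauge: outflow = inflow - diversions + returns,
# where returns = alpha_i * diversion_i with unknown fractions alpha_i.
inflow = np.array([120.0, 95.0, 140.0, 110.0])        # four historical periods
diversions = np.array([[30.0, 20.0], [25.0, 15.0],
                       [35.0, 25.0], [28.0, 18.0]])   # two demand sites
true_alpha = np.array([0.35, 0.20])                   # "unknown" truth
gauged = inflow - diversions.sum(axis=1) + diversions @ true_alpha

def residuals(alpha):
    # Mismatch between simulated and gauged outflows for candidate fractions
    simulated = inflow - diversions.sum(axis=1) + diversions @ alpha
    return simulated - gauged

# Bounded calibration: return fractions must lie in [0, 1]
fit = least_squares(residuals, x0=[0.5, 0.5], bounds=(0.0, 1.0))
```

Because the gauged records at several periods each constrain the same two fractions, the fit recovers them uniquely, which is the same validation-by-hindcast logic the study applies at basin scale.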
Procedia PDF Downloads 335
1986 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology
Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal
Abstract:
Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important to ensure that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest match to measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to optimize the three independent water quality parameters. High and low levels of each parameter for the two water supply scenarios were imposed as explicit constraints in order to avoid extrapolation, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at a pH of 7.75, a temperature of 34.16 ºC, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network is minimized to 0.189, while the optimum conditions for averaged water supply occur at a pH of 7.71, a temperature of 18.12 ºC, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual offers great potential for water treatment plant operators to accurately estimate the residual throughout a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology to other water samples. Keywords: chloramine decay, modelling, response surface methodology, water quality parameters
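The RSM step can be sketched by fitting a second-order polynomial to a set of runs and solving for the stationary point of the fitted surface. The two coded factors and the synthetic response below are illustrative, not the study's DWDS data or its Design Expert fit.

```python
import numpy as np

# Illustrative 2-factor response surface: fit
#   RMSE = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to synthetic runs over coded factors (e.g. pH, temperature) in [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 0.2 + 0.05 * X[:, 0] - 0.04 * X[:, 1] + 0.1 * X[:, 0]**2 + 0.08 * X[:, 1]**2

# Design matrix for the full quadratic model, solved by least squares
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted quadratic: solve gradient = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, g)   # coded settings minimizing fitted RMSE
```

Decoding `x_opt` back to physical units gives the kind of optimum operating point the study reports (e.g. a specific pH and temperature minimizing network-wide RMSE).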
Procedia PDF Downloads 228
1985 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides validation test results for the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. The paper first summarizes the challenges of relative motion control in space, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC), which optimizes the fuel consumption and explicitly handles the physical and geometric constraints in the system, e.g. thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. The resulting convex optimization problem allows real-time implementation; its computational time requirements are discussed in detail with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also include a comparison of the proposed algorithm with a Linear-Quadratic Regulator (LQR) based control law to highlight the clear advantages of the MPC formulation. Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics
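A minimal, unconstrained MPC sketch for one axis of relative motion: a double integrator stands in for the linearized chaser-target dynamics, the horizon cost is condensed into one quadratic problem, and only the first input of each solution is applied (receding horizon). Weights, horizon, and dynamics are assumptions; the paper's formulation additionally handles thruster and LOS constraints and the dual-quaternion coupling.

```python
import numpy as np

# Double-integrator relative motion (one axis), discretized with step dt.
dt, N = 1.0, 10                               # step and prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])  # state and input weights

# Condense the horizon: x_k = Phi_k x0 + Gamma_k u, with u stacked over N steps
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B

Qbar = np.kron(np.eye(N), Q)
H = Gamma.T @ Qbar @ Gamma + np.kron(np.eye(N), R)   # condensed Hessian

def mpc_step(x):
    # Unconstrained finite-horizon QP: minimize sum x'Qx + u'Ru over the horizon
    f = Gamma.T @ Qbar @ (Phi @ x)
    u = np.linalg.solve(H, -f)
    return float(u[0, 0])        # receding horizon: apply the first input only

x = np.array([[10.0], [0.0]])    # 10 m offset from the target, at rest
for _ in range(40):
    u = mpc_step(x)
    x = A @ x + B * u            # closed-loop propagation
```

Adding box constraints on `u` (thruster limits) and linear state constraints (LOS cone) turns this into the constrained convex QP the paper solves in real time.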
Procedia PDF Downloads 125
1984 Enhanced Growth of Microalgae Chlamydomonas reinhardtii Cultivated in Different Organic Waste and Effective Conversion of Algal Oil to Biodiesel
Authors: Ajith J. Kings, L. R. Monisha Miriam, R. Edwin Raj, S. Julyes Jaisingh, S. Gavaskar
Abstract:
Microalgae are a potential bio-source of innovative solutions in various disciplines of science and technology, especially in medicine and energy. Biodiesel is replacing conventional fuels in the automotive industry, with reduced pollution and equivalent performance. Since it is a carbon-neutral fuel, recycling CO2 through photosynthesis, its use helps keep global warming potential under control. One of the ways to meet the rising demand for automotive fuel is to adopt eco-friendly, green alternative fuels such as sustainable microalgal biodiesel. In this work, the microalga Chlamydomonas reinhardtii was cultivated and optimized in different media compositions developed from under-utilized waste materials at lab scale. Using the optimized process conditions, the cultures were then mass-propagated in outdoor ponds, harvested, dried, and the oils extracted and optimized under ambient conditions. The microalgal oil was subjected to a two-step process: acid-catalysed esterification to reduce the acid value (0.52 mg KOH/g) in the initial stage, followed by transesterification to maximize the biodiesel yield. The optimized esterification parameters are a methanol/oil ratio of 0.32 (v/v) and 10 vol.% sulphuric acid for 45 min at 65 ºC. In the transesterification step, a commercially available alkali catalyst (KOH) was used and optimized to obtain a maximum biodiesel yield of 95.4%; the optimized parameters are a methanol/oil ratio of 0.33 (v/v) and 0.1 wt.% alkali catalyst for 90 min at 65 ºC with smooth stirring. Response Surface Methodology (RSM) is employed as the tool for optimizing the process parameters. The biodiesel was then characterized by standard procedures, in particular GC-MS, to confirm its suitability for use in internal combustion engines. Keywords: microalgae, organic media, optimization, transesterification, characterization
Procedia PDF Downloads 236
1983 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control loops (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed loop must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem in which the optimal trade-offs among these design goals are found. To the authors' best knowledge, this problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the parameters of the inner and outer control loops are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominancy concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) representing the optimal trade-offs among the selected design criteria. The image of the Pareto set under the objective functions is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems. Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control
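The dominancy concept used to extract the Pareto set can be sketched directly: one design dominates another if it is no worse in every objective and strictly better in at least one, and the Pareto front keeps exactly the non-dominated designs. The (settling time, control energy) pairs below are hypothetical placeholders for candidate inner/outer controller settings, not the paper's simulation results.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse everywhere, better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (settling time, control energy) pairs, both to be minimized
designs = [(2.0, 9.0), (3.0, 4.0), (5.0, 2.5), (4.0, 6.0), (6.0, 2.4)]
front = pareto_front(designs)
```

The design (4.0, 6.0) is dominated by (3.0, 4.0), which is faster and cheaper, so it is discarded; the surviving points are the trade-off set handed to the decision-maker.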
Procedia PDF Downloads 154
1982 Application of Life Cycle Assessment “LCA” Approach for a Sustainable Building Design under Specific Climate Conditions
Authors: Djeffal Asma, Zemmouri Noureddine
Abstract:
In order for building designers to be able to balance environmental concerns with other performance requirements, they need clear and concise information. For certain decisions during the design process, qualitative guidance, such as design checklists or guidelines, may not be sufficient for evaluating the environmental benefits of different building materials, products and designs. In this case, quantitative information, such as that generated through a life cycle assessment, provides the most value. LCA provides a systematic approach to evaluating the environmental impacts of a product or system over its entire life. In the case of buildings, the life cycle includes the extraction of raw materials, manufacturing, transporting and installing building components or products, and operating and maintaining the building. By integrating LCA into the building design process, designers can evaluate the life cycle impacts of building designs, materials, components and systems and choose the combinations that reduce the building's life cycle environmental impact. This article attempts to give an overview of the integration of the LCA methodology in the context of building design, and focuses on the use of this methodology for environmental considerations concerning process design and optimization. A multiple case study was conducted in order to assess the benefits of LCA as a decision-making aid during the first stages of building design under the specific climate conditions of the North East region of Algeria. It is clear that the LCA methodology can help to assess and reduce the impact of a building's design and components on the environment, even if the implementation process is rather long and complicated and lacks a global approach that includes human factors.
It is also demonstrated that using LCA for multi-objective optimization of the building process facilitates improved design and decision making for both new-design and retrofit projects.
Keywords: life cycle assessment, buildings, sustainability, elementary schools, environmental impacts
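The quantitative comparison LCA enables can be illustrated by summing a single impact indicator over the life-cycle stages the abstract names. The design options and their kg CO2-eq figures below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: total one life-cycle impact indicator (kg CO2-eq,
# hypothetical figures) across the stages named in the abstract for
# two candidate building designs, then pick the lower-impact option.

STAGES = ["extraction", "manufacturing", "transport", "operation", "maintenance"]

designs = {
    "concrete_heavy": {"extraction": 120, "manufacturing": 310, "transport": 40,
                       "operation": 900, "maintenance": 60},
    "timber_hybrid":  {"extraction": 80,  "manufacturing": 220, "transport": 55,
                       "operation": 850, "maintenance": 90},
}

def life_cycle_impact(stage_impacts):
    """Sum the impact indicator over all life-cycle stages."""
    return sum(stage_impacts[s] for s in STAGES)

totals = {name: life_cycle_impact(d) for name, d in designs.items()}
best = min(totals, key=totals.get)
print(totals, "->", best)
```

A real assessment would track many indicators (energy, water, toxicity) per functional unit rather than a single total, but the stage-wise accounting is the same.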
Procedia PDF Downloads 547
1981 Efficient Chiller Plant Control Using Modern Reinforcement Learning
Authors: Jingwei Du
Abstract:
The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods fall into two categories: empirical and model-based. To be effective, the former heavily relies on engineering expertise and the latter requires extensive historical data. Reinforcement Learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a “policy”. This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and enables the RL agent to collaborate with experienced engineers. It exploits the fact that while the industry lacks historical data, abundant operational data is available and allows the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from “alignment”, a process that utilizes human feedback to fine-tune a pretrained model when it produces unsafe content. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both the critic model and human experts can be collected for later fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison, and it turns out that the proposed method is both safe and effective and requires little to no historical data to start up.
Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning
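The policy update at the core of PPO can be sketched via its clipped surrogate objective. The probability ratios and advantage estimates below are hypothetical numbers; in a real chiller-plant agent they would come from the policy network and the critic's advantage estimates.

```python
# Sketch of PPO's clipped surrogate objective: the mean over a batch of
# min(r*A, clip(r, 1-eps, 1+eps)*A), where r is the new/old policy
# probability ratio and A the advantage estimate. Clipping keeps each
# update close to the current policy, which supports safe online learning.

EPS = 0.2  # PPO clipping parameter

def clipped_surrogate(ratios, advantages, eps=EPS):
    """Average clipped surrogate objective over a batch (to be maximized)."""
    terms = []
    for r, a in zip(ratios, advantages):
        clipped_r = max(1 - eps, min(1 + eps, r))
        terms.append(min(r * a, clipped_r * a))
    return sum(terms) / len(terms)

# Hypothetical batch: ratios of new to old action probabilities and
# the corresponding advantage estimates.
ratios = [0.9, 1.5, 1.1, 0.6]
advantages = [1.0, 2.0, -0.5, -1.0]
print(clipped_surrogate(ratios, advantages))
```

Note how the second sample's large ratio (1.5) is clipped to 1.2 before multiplying the advantage, limiting how much any single favorable sample can move the policy in one update.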
Procedia PDF Downloads 31