Search results for: optimization methods
16927 Optimal Type and Installation Time of Wind Farm in a Power System, Considering Service Providers
Authors: M. H. Abedi, A. Jalilvand
Abstract:
The economic development benefits of wind energy may be the most tangible basis for local and state officials' interest. In addition to the direct salaries associated with building and operating wind projects, the wind energy industry provides indirect jobs and benefits. The optimal planning of a wind farm is one of the most important topics in renewable energy technology. Many methods have been implemented to optimize the cost and output benefit of wind farms, but the contribution of this paper is to consider different types of service providers as well as the installation time of wind turbines over the planning horizon years. A genetic algorithm (GA) is used to optimize the problem. It is observed that an appropriate layout of the wind farm can minimize the different types of cost.
Keywords: renewable energy, wind farm, optimization, planning
Procedia PDF Downloads 524
16926 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem.
The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance.
The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures.
The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times.
Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows.
Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy.
The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance.
Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 67
16925 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design
Authors: H. K. Esfahani, B. Datta
Abstract:
Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in terms of predicting the source flux injection, the hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude, and duration of activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for the flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics.
Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site
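As a rough illustration of the linked simulation-optimization loop described in this abstract, the sketch below pairs a generic simulated annealing routine (standing in for ASA) with a stubbed-out transport simulator; the well coordinates, source parameterization, and misfit objective are all illustrative assumptions, not the study's setup.

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.95, iters=2000):
    """Generic annealing loop: accept worse candidates with probability exp(-dE/T)."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-s, s) for xi, s in zip(x, step)]
        fc = objective(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t = max(t * cooling, 1e-6)
    return best, fbest

# Stub standing in for the external flow/transport simulator (HYDROGEOCHEM in
# the paper): given candidate source parameters, return concentrations at wells.
WELLS = [(0.0, 0.0), (5.0, 2.0), (3.0, 8.0)]          # hypothetical locations
measured = [0.9, 0.12, 0.05]                          # placeholder observations

def simulate_concentrations(source_params):
    x, y, flux = source_params                        # hypothetical source terms
    return [flux / (1.0 + (x - wx) ** 2 + (y - wy) ** 2) for wx, wy in WELLS]

# Objective: squared misfit between measured and simulated concentrations.
def misfit(params):
    sim = simulate_concentrations(params)
    return sum((m - s) ** 2 for m, s in zip(measured, sim))

best, err = simulated_annealing(misfit, x0=[1.0, 1.0, 1.0], step=[0.5, 0.5, 0.2])
print(best, err)
```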
Procedia PDF Downloads 231
16924 Data-Driven Dynamic Overbooking Model for Tour Operators
Authors: Kannapha Amaruchkul
Abstract:
We formulate a dynamic overbooking model for a tour operator, in which most reservations contain at least two people. The cancellation rate and the timing of the cancellation may depend on the group size. We propose two overbooking policies, namely economic- and service-based. In an economic-based policy, we want to minimize the expected oversold and underused cost, whereas, in a service-based policy, we ensure that the probability of an oversold situation does not exceed a pre-specified threshold. To illustrate the applicability of our approach, we use tour package data from 2016-2018 from a tour operator in Thailand to build a data-driven robust optimization model, and we test the proposed overbooking policy on 2019 data. We also compare the data-driven approach to the conventional approach of fitting data to a probability distribution.
Keywords: applied stochastic model, data-driven robust optimization, overbooking, revenue management, tour operator
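The service-based policy above caps the probability of an oversold situation. Under the simplifying assumption that each booked seat shows up independently with probability p (ignoring the paper's group-size-dependent cancellation behavior), the booking limit can be found by a direct binomial search, as in this sketch:

```python
from scipy.stats import binom

def max_bookings(capacity, show_prob, risk):
    """Largest booking limit b such that P(shows > capacity) <= risk."""
    b = capacity
    # binom.sf(k, n, p) = P(X > k) for X ~ Binomial(n, p)
    while binom.sf(capacity, b + 1, show_prob) <= risk:
        b += 1
    return b

# e.g., 40 seats, 90% show-up rate, at most 5% chance of an oversold situation
print(max_bookings(capacity=40, show_prob=0.9, risk=0.05))
```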
Procedia PDF Downloads 134
16923 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and PowerBI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
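As a rough stand-in for the Azure Machine Learning experiments described above, the following sketch fits a gradient-boosted regression on synthetic hourly data using scikit-learn; the feature set (hour of day, temperature, lagged load) mirrors the abstract's weather-plus-history idea, but all data and settings here are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                      # ~60 days of hourly samples
temp = 10 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = 50 + 0.8 * temp + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Features: hour of day, temperature, and the load 24 hours earlier
X = np.column_stack([hours[24:] % 24, temp[24:], load[:-24]])
y = load[24:]
split = int(0.8 * len(y))                       # chronological train/test split

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:split], y[:split])
mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(f"test MAE: {mae:.2f}")
```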
Procedia PDF Downloads 297
16922 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty
Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih
Abstract:
In order to overcome uncertainty and the inability to meet customers' requests under such conditions, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid increased holding cost due to an excessive SSL or shortage cost due to a too-low SSL. This paper uses soft computing fuzzy logic to identify the optimal SSL; the fuzzy model uses a dynamic concept to cope with a highly complex environment. The proposed model deals with three input variables, i.e., demand stability level, raw material availability level, and on-hand inventory level, using dynamic fuzzy logic to obtain the best SSL as an output. In this model, the demand stability, raw material, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best level of safety stock. The aim of this research is to provide a dynamic approach for identifying the safety stock level that can be implemented in different industries. A numerical case study in the dairy industry, with a 200 g yogurt cup product, is presented to validate the proposed model. The obtained results are compared with the current safety stock level, which is calculated using the traditional approach. The importance of the proposed model is demonstrated by the significant reduction in the safety stock level.
Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization
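A minimal sketch of the Mamdani-style inference the abstract describes, with two illustrative rules; the paper's full rule base, membership shapes, and scales are not given, so every shape and number below is an assumption.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c, no zero division)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Input memberships on a 0-10 scale (tiny epsilons keep the denominators nonzero)
def low(x):  return tri(x, -1e-9, 0.0, 5.0)
def high(x): return tri(x, 5.0, 10.0, 10.0 + 1e-9)

ssl_axis = np.linspace(0, 100, 501)  # candidate safety stock levels (units)

def infer_ssl(demand_stability, material_availability, on_hand):
    # R1: IF demand unstable AND material scarce AND stock low THEN SSL high
    r1 = min(low(demand_stability), low(material_availability), low(on_hand))
    # R2: IF demand stable AND material available AND stock high THEN SSL low
    r2 = min(high(demand_stability), high(material_availability), high(on_hand))
    # Clip output sets by rule activations, aggregate by max, defuzzify by centroid
    out = np.maximum(np.minimum(r1, tri(ssl_axis, 50, 100, 150)),
                     np.minimum(r2, tri(ssl_axis, -50, 0, 50)))
    return float((ssl_axis * out).sum() / out.sum()) if out.sum() > 0 else 0.0

print(infer_ssl(demand_stability=3.0, material_availability=4.0, on_hand=2.0))
```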
Procedia PDF Downloads 125
16921 Optimization of Three Phase Squirrel Cage Induction Motor
Authors: Tunahan Sapmaz, Harun Etçi, İbrahim Şenol, Yasemin Öner
Abstract:
Rotor bar dimensions have a great influence on the air-gap magnetic flux density. Therefore, poor selection of this parameter during the machine design phase causes the air-gap magnetic flux density to be distorted. Thus, it causes noise, torque fluctuation, and losses in the induction motor. On the other hand, a change in rotor bar dimensions will change the resistance of the conductor, so the current will be affected. Therefore, the increase and decrease of the rotor bar current affect operation, starting torque, and efficiency. The aim of this study is to examine the effect of rotor bar dimensions on the electromagnetic performance criteria of the induction motor. Modeling of the induction motor is done by the finite element method (FEM), which is a very powerful tool. In FEM, the results generally focus on performance criteria such as torque, torque fluctuation, efficiency, and current.
Keywords: induction motor, finite element method, optimization, rotor bar
Procedia PDF Downloads 126
16920 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies
Authors: Chao-Ton Su, Li-Fei Chen
Abstract:
The parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools, such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for the manufacturing process in Taiwan's tech companies will be presented to illustrate the proposed approach's effectiveness. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization will be presented.
Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design
Procedia PDF Downloads 145
16919 Empirical Modeling and Optimization of Laser Welding of AISI 304 Stainless Steel
Authors: Nikhil Kumar, Asish Bandyopadhyay
Abstract:
The laser welding process is a capable technology for forming automobile, microelectronics, marine, and aerospace parts, etc. In the present work, a mathematical and statistical approach is adopted to study the laser welding of AISI 304 stainless steel. A robot-controlled 500 W pulsed Nd:YAG laser source with a 1064 nm wavelength has been used for welding purposes. Butt joints are made. The effects of the welding parameters, namely laser power, scanning speed, and pulse width, on the seam width and depth of penetration have been investigated using empirical models developed by response surface methodology (RSM). Weld quality is directly correlated with the weld geometry. Twenty sets of experiments have been conducted as per the central composite design (CCD) design matrix. A second-order mathematical model has been developed for predicting the desired responses. The results of ANOVA indicate that the laser power has the most significant effect on the responses. Microstructural analysis as well as hardness testing of the selected weld specimens has been carried out to understand the metallurgical and mechanical behaviour of the weld. The average micro-hardness of the weld is observed to be higher than that of the base metal. The higher hardness of the weld is the result of grain refinement and δ-ferrite formation in the weld structure. The results suggest that lower line energy generally produces a finer grain structure and better mechanical properties than higher line energy. The combined effects of the input parameters on the responses have been analyzed with the help of the developed 3-D response surface and contour plots. Finally, multi-objective optimization has been conducted to produce a weld joint with complete penetration, minimum seam width, and an acceptable welding profile. Confirmatory tests have been conducted at the optimum parametric conditions to validate the applied optimization technique.
Keywords: ANOVA, laser welding, modeling and optimization, response surface methodology
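For reference, the second-order RSM model mentioned above has the standard quadratic form y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj, which can be fitted by least squares; the sketch below uses placeholder coded factor settings and responses, not the paper's CCD data.

```python
import numpy as np

def quadratic_design(X):
    """Expand factors [power, speed, pulse_width] into second-order RSM terms."""
    p, s, w = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), p, s, w,          # linear
                            p * p, s * s, w * w,               # quadratic
                            p * s, p * w, s * w])              # interactions

# Placeholder coded factor settings (a real 3-factor CCD has 20 runs)
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1],
              [1, 1, -1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [1.68, 0, 0], [-1.68, 0, 0], [0, 1.68, 0]], float)
y = np.array([1.1, 1.6, 1.3, 1.2, 1.9, 1.7, 1.4, 2.1, 1.5, 1.8, 1.0, 1.6])

A = quadratic_design(X)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # b0, b1..b3, b11..b33, b12, b13, b23
```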
Procedia PDF Downloads 294
16918 Experience of the Formation of Professional Competence of Students of IT-Specialties
Authors: B. I. Zhumagaliyev, L. Sh. Balgabayeva, G. S. Nabiyeva, B. A. Tulegenova, P. Oralkhan, B. S. Kalenova, S. S. Akhmetov
Abstract:
The article describes an approach to building research competence in Bachelor's and Master's students, which is now an important attribute of a modern specialist in the field of engineering. It provides an example of a teaching method with a research aspect, including the formulation of the problem, the method of conducting experiments, and the analysis of the results. Implementing these methods allows students to better consolidate their knowledge and skills while gaining research experience. Imparting this knowledge requires some training in the subject area and in teaching methods.
Keywords: professional competence, model of IT specialties, teaching methods, educational technology, decision making
Procedia PDF Downloads 437
16917 Examining Electroencephalographic Activity Differences Between Goalkeepers and Forwards in Professional Football Players
Authors: Ruhollah Basatnia, Ali Reza Aghababa, Mehrdad Anbarian, Sara Akbari, Mohammad Khazaee
Abstract:
Introduction: The investigation of brain activity in sports has become a subject of interest for researchers. Several studies have examined the patterns of, or differences in, brain activity during different sports situations. Previous studies have suggested that the pattern of cortical activity may differ between different football positions, such as goalkeepers and other players. This study aims to investigate the differences in electroencephalographic (EEG) activity between the positions of goalkeeper and forward in professional football players. Methods: Fourteen goalkeepers and twelve forwards, all males aged 19-28 years, participated in the study. EEG activity was recorded while participants were sitting with their eyes closed for 5 minutes. The mean relative power of EEG activity in each frequency band was compared between the two groups using an independent samples t-test. Findings: The study found significant differences in the relative power of EEG activity across different frequency bands and electrodes. Notably, significant differences were observed in the mean relative power of EEG activity between the two groups for certain frequency bands and electrodes. These findings suggest that EEG activity can serve as a sensory indicator of cognitive and performance differences between goalkeepers and forwards in football players. Discussion: The results of this study suggest that EEG activity can be used to identify cognitive and performance differences between goalkeepers and forwards in football players. However, further research is needed to establish the relationship between EEG activity and actual performance in the field. Future studies should investigate the potential influence of other factors, such as fatigue and stress, on the EEG activity of football players. Additionally, the use of real-time EEG feedback could be explored as a tool for training and performance optimization in football players. Further research is required to fully understand the potential of EEG activity as a sensory indicator of cognitive and performance differences between football player positions and to explore its potential applications for training and performance optimization in football and other sports.
Keywords: football, brain activity, EEG, goalkeepers, forwards
Procedia PDF Downloads 84
16916 Optimization of Process Parameters Affecting Biogas Production from Organic Fraction of Municipal Solid Waste via Anaerobic Digestion
Authors: B. Sajeena Beevi, P. P. Jose, G. Madhu
Abstract:
The aim of this study was to obtain the optimal conditions for biogas production from the anaerobic digestion of the organic fraction of municipal solid waste (OFMSW) using response surface methodology (RSM). The parameters studied were initial pH, substrate concentration, and total organic carbon (TOC). The experimental results showed that the linear model terms of initial pH and substrate concentration and the quadratic model terms of substrate concentration and TOC had significant individual effects (p < 0.05) on biogas yield. However, there was no interactive effect between these variables (p > 0.05). The highest level of biogas produced was 53.4 L/kg VS at an optimum pH, substrate concentration, and total organic carbon of 6.5, 99 g TS/L, and 20.32 g/L, respectively.
Keywords: anaerobic digestion, biogas, optimization, response surface methodology
Procedia PDF Downloads 433
16915 Bone Mineral Density and Quality, Body Composition of Women in the Postmenopausal Period
Authors: Vladyslav Povoroznyuk, Oksana Ivanyk, Nataliia Dzerovych
Abstract:
In the diagnostics of osteoporosis, bone mineral density is considered to be the gold standard; however, X-ray densitometry is not an accurate indicator of osteoporotic fracture risk under all circumstances. In this regard, the search for new methods that could determine indicators not only of mineral density but also of bone tissue quality is a logical step for diagnostic optimization. One of these methods is the evaluation of trabecular bone quality. The aim of this study was to examine the quality and mineral density of spinal bone tissue, the femoral neck, and the body composition of women depending on the duration of the postmenopausal period, and to determine the correlation of body fat with indicators of bone mineral density and quality. The study examined 179 women in the premenopausal and postmenopausal periods. The patients were divided into the following groups: women in the premenopausal period and women at various stages of the postmenopausal period (early, middle, and late postmenopause). A general examination and study of the above parameters were conducted with a General Electric X-ray densitometer. The results show that bone quality and mineral density likely deteriorate as the postmenopausal period advances. The total fat to lean mass ratio is not likely to change with age. In the middle and late postmenopausal periods, the bone tissue mineral density of the spine and femoral neck increases along with total fat mass.
Keywords: osteoporosis, bone tissue mineral density, bone quality, fat mass, lean mass, postmenopausal osteoporosis
Procedia PDF Downloads 343
16914 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
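A condensed sketch of the core idea, A-star search over a taxiway graph with per-edge free time windows and a remaining-time heuristic, is given below; pushback, stand holding, and runway sequencing are omitted, and the toy graph is purely illustrative.

```python
import heapq

def astar_time_windows(graph, h, source, target, t_start):
    """Quickest path with time windows; states are (estimated total, arrival, node).
    graph[u] -> list of (v, traverse_time, free_windows); free_windows are
    (open, close) intervals during which the edge may be traversed."""
    best = {source: t_start}
    pq = [(t_start + h[source], t_start, source)]
    while pq:
        _, t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, dt, windows in graph[u]:
            for w_open, w_close in windows:
                entry = max(t, w_open)       # wait at u until the window opens
                if entry + dt <= w_close:    # whole traversal must fit in window
                    arr = entry + dt
                    if arr < best.get(v, float("inf")):
                        best[v] = arr
                        heapq.heappush(pq, (arr + h[v], arr, v))
                    break
    return None  # unreachable under the given windows

# Toy taxiway graph: A -> B -> C, with B-C blocked until t = 10 by another aircraft
graph = {"A": [("B", 3, [(0, 100)])],
         "B": [("C", 4, [(10, 100)])],
         "C": []}
h = {"A": 7, "B": 4, "C": 0}  # admissible remaining-time estimates
print(astar_time_windows(graph, h, "A", "C", t_start=0))  # -> 14
```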
Procedia PDF Downloads 231
16913 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site selection optimization problem for emergency rescue centers. We utilize BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using the population distribution data during the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergency situations, the proposed method shows its ability to achieve rapid convergence and optimal performance in the early and mid-stages. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability.
Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
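The sketch below shows the classic partially matched crossover (PMX) operator referenced in the abstract; the paper's "special" variant that enforces per-center camp limits is not specified, so only the standard permutation-preserving form is shown.

```python
import random

def pmx(parent1, parent2):
    """Classic partially matched crossover for permutation chromosomes."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]          # copy mapping segment from parent1
    for i in range(a, b + 1):                  # place displaced genes of parent2
        gene = parent2[i]
        if gene in child[a:b + 1]:
            continue
        pos = i
        while a <= pos <= b:                   # follow the mapping chain out
            pos = parent2.index(parent1[pos])
        child[pos] = gene
    for i in range(n):                         # fill the rest straight from parent2
        if child[i] is None:
            child[i] = parent2[i]
    return child

random.seed(1)
# A permutation encoding an assignment order of temporary camps (illustrative)
print(pmx([1, 2, 3, 4, 5, 6, 7, 8], [3, 7, 5, 1, 6, 8, 2, 4]))
```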
Procedia PDF Downloads 85
16912 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, by exploring the areal units as the vertices of a graph and the neighbor relations as the set of edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We recognized that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for either species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix
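For reference, one standard (univariate, proper) form of the conditional autoregressive prior mentioned in the Methods, where the weights w_ij would come from the estimated neighborhood matrix rather than the border-sharing rule (the paper's exact multivariate MSTCAR specification may differ):

$$\phi_i \mid \phi_{-i} \sim \mathcal{N}\!\left(\frac{\rho \sum_{j} w_{ij}\,\phi_j}{\sum_{j} w_{ij}},\; \frac{\tau^2}{\sum_{j} w_{ij}}\right)$$

Here each areal random effect is normal given its neighbors, shrunk toward their weighted mean with strength ρ, and with variance inversely proportional to the total neighbor weight.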
Procedia PDF Downloads 78
16911 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which Type IV pressure vessels are manufactured, the design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is automatically generated using an Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation on the dome region is the most crucial step of the modeling, and is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of the tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and gives an indication of the optimal one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analyses. Subsequently, machine learning could, for example, be used to obtain the optimum directly from the data pool without running the simulation process.
Keywords: Type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
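One common analytical model for the winding angle and dome thickness prediction mentioned above is geodesic winding via Clairaut's relation with a fiber-volume-conservation thickness build-up; the sketch below is an assumption about the method, not the paper's exact implementation.

```python
import numpy as np

def geodesic_dome_layup(r, r_polar, r_cyl, t_cyl):
    """Geodesic winding angle and thickness on the dome (common approximation).

    Clairaut's relation for a geodesic path on a surface of revolution:
        r * sin(alpha) = r_polar  (radius of the polar opening)
    Thickness grows toward the pole by fiber-volume conservation."""
    alpha = np.arcsin(np.clip(r_polar / r, -1.0, 1.0))
    alpha_cyl = np.arcsin(r_polar / r_cyl)
    t = t_cyl * (r_cyl * np.cos(alpha_cyl)) / (r * np.cos(alpha))
    return np.degrees(alpha), t

radii = np.linspace(25.0, 100.0, 4)  # mm, from near the polar opening outwards
angles, thickness = geodesic_dome_layup(radii, r_polar=20.0, r_cyl=100.0, t_cyl=1.0)
print(angles, thickness)  # angle and thickness both increase toward the pole
```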
Procedia PDF Downloads 135
16910 Dosimetric Dependence on the Collimator Angle in Prostate Volumetric Modulated Arc Therapy
Authors: Muhammad Isa Khan, Jalil Ur Rehman, Muhammad Afzal Khan Rao, James Chow
Abstract:
Purpose: This study investigates the dose-volume variations in the planning target volume (PTV) and organs-at-risk (OARs) using different collimator angles for smart arc prostate volumetric modulated arc therapy (VMAT). Awareness of the collimator angle for PTV coverage and OAR sparing is essential for the planner, because the optimization contains numerous treatment constraints, producing a complex, unstable, and computationally challenging search for an optimal plan in a reasonable time. Materials and Methods: Single arc VMAT plans at different collimator angles, varied systematically (0°-90°), were performed on a Harold phantom, and a new treatment plan was optimized for each collimator angle. We analyzed the conformity index (CI), homogeneity index (HI), gradient index (GI), monitor units (MUs), dose-volume histogram, and mean and maximum doses to the PTV. We also explored the OAR (e.g. bladder, rectum, and femoral heads) dose-volume criteria in the treatment plan (e.g. D30%, D50%, V30Gy and V38Gy of bladder and rectum; D5%, V14Gy and V22Gy of femoral heads), dose-volume histogram, and mean and maximum doses for smart arc VMAT at different collimator angles. Results: No significant difference was found in VMAT optimization across all studied collimator angles. However, at the 0.5% accuracy level, a collimator angle of 45° provides a higher CI and lower HI. A collimator angle of 15° also provides similarly low HI values. A collimator angle of 75° proved good for rectum and right femur sparing. Collimator angles of 90° and 30° were found to be good for rectum and left femur sparing, respectively. The PTV dose coverage statistics for each plan are comparatively independent of the collimator angle. Conclusion: It is concluded that this study gives the planner freedom to choose any collimator angle from 0° to 90° for PTV coverage and to select a suitable collimator angle to spare OARs.
Keywords: VMAT, dose-volume histogram, collimator angle, organs-at-risk
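For reference, one commonly used set of definitions for the indices compared above (the abstract does not state which variants were used, so these are a sketch):

$$\mathrm{CI} = \frac{V_{\mathrm{RI}}}{TV}, \qquad \mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}, \qquad \mathrm{GI} = \frac{V_{50\%\,\mathrm{RI}}}{V_{\mathrm{RI}}}$$

where V_RI is the volume covered by the reference isodose, TV is the target volume, D_x% is the dose received by x% of the PTV, and V_50% RI is the volume covered by half the reference isodose.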
Procedia PDF Downloads 512
16909 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is identified to be dependent on the process parameters. Moreover, a certain variation in friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This can be attained by a reliability-based robust optimization (RRO) technique that obtains the optimal process settings of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses were consequently performed in the commercial software ABAQUS to obtain the correlation between the control process settings and the noise variation with regard to the product defects. The best prediction models are chosen from the family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal process settings of the control parameters. Monte Carlo Analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental results show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product.
Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
Procedia PDF Downloads 254
16908 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing
Authors: Ahmed E. Hodaib, Muhammed A. Hashem
Abstract:
High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate the friction with the ground and minimize the aerodynamic drag acting on the vehicle. Due to the complexity and high power requirements of electromagnetic levitation, we make use of the air in front of the capsule, which produces the majority of the drag, compressing it in two stages and injecting a proportion of it through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is drawn into the compressor, where it is diffused to decrease the Mach number (to around 0.8) so that it is suitable for the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed from the two compressors is used to maintain a constant air pressure in an air tank. The air tank is used to supply air for levitation. Dividing the total mass flow rate increases the achievable speed (the Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are to be used to get the optimum values for all the design variables given specific design inputs. Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are to be considered. The variation of these variables is to be studied with respect to changes in the capsule velocity and air pressure.
Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing
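The Kantrowitz limit invoked above follows from the isentropic area-Mach relation: to avoid choking, the bypass area around the capsule must satisfy, roughly,

$$\frac{A_{\mathrm{bypass}}}{A_{\mathrm{tube}}} \;\geq\; \frac{A^*}{A} \;=\; M \left( \frac{\gamma + 1}{2 + (\gamma - 1) M^2} \right)^{\frac{\gamma + 1}{2(\gamma - 1)}}$$

with M the capsule Mach number and γ the ratio of specific heats; below this ratio the bypass flow chokes and drag rises sharply, which is why compressing part of the oncoming flow effectively relaxes the limit.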
Procedia PDF Downloads 330
16907 Overview of Time, Resource and Cost Planning Techniques in Construction Management Research
Authors: R. Gupta, P. Jain, S. Das
Abstract:
One way to approach the construction scheduling optimization problem is to focus on the individual aspects of planning, which can be broadly classified as time scheduling, crew and resource management, and cost control. During the last four decades, construction planning has seen a lot of research, but to date, no paper has attempted to summarize the available literature under these important heads. This paper addresses each of these aspects separately and presents the findings of an in-depth literature review of the various planning techniques. For techniques dealing with time scheduling, the authors have adopted a roughly chronological documentation. For crew and resource management, classification has been done on the basis of the different steps involved in the resource planning process. For cost control, techniques dealing with the estimation of costs and the subsequent optimization of costs have been dealt with separately.
Keywords: construction planning techniques, time scheduling, resource planning, cost control
Procedia PDF Downloads 487
16906 Prediction-Based Midterm Operation Planning for Energy Management of Exhibition Hall
Authors: Doseong Eom, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
Large exhibition halls require a lot of energy to maintain a comfortable atmosphere for the visitors inside. One way of reducing the energy cost is to have thermal energy storage systems installed so that thermal energy can be stored in the middle of the night, when the energy price is low, and then used later, when the price is high. To minimize the overall energy cost, however, we should be able to decide exactly how much energy to store during which time period. If we can foresee the future energy load and the corresponding cost, we will be able to make such decisions reasonably. In this paper, we use machine learning techniques to obtain models for predicting weather conditions and the number of visitors on an hourly basis for the next day. Based on the energy load thus predicted, we build a cost-optimal daily operation plan for the thermal energy storage systems and the cooling and heating facilities through simulation-based optimization.
Keywords: building energy management, machine learning, operation planning, simulation-based optimization
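A minimal sketch of the cost-optimal planning step under strong simplifying assumptions (perfect load and price forecasts, lossless storage, charging restricted to a cheap night window); the tariff and load values are illustrative placeholders, not the paper's data.

```python
def plan_tes(prices, load, capacity, night_hours):
    """Charge the TES at night, then discharge against the most expensive
    daytime hours first."""
    night_price = min(prices[h] for h in night_hours)
    day_hours = [h for h in range(24) if h not in night_hours]
    discharge = {h: 0.0 for h in range(24)}
    remaining = capacity
    # Serve expensive hours first, but only where discharge actually saves money
    for h in sorted(day_hours, key=lambda h: -prices[h]):
        if prices[h] <= night_price or remaining <= 0:
            break
        discharge[h] = min(load[h], remaining)
        remaining -= discharge[h]
    saving = sum(d * (prices[h] - night_price) for h, d in discharge.items())
    return discharge, saving

prices = [0.05] * 6 + [0.12] * 12 + [0.20] * 4 + [0.08] * 2  # illustrative tariff
load = [30] * 24                                             # kWh per hour, flat
dis, saving = plan_tes(prices, load, capacity=100, night_hours=range(6))
print(saving)  # expected daily cost saving from shifting 100 kWh to the night
```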
Procedia PDF Downloads 322
16905 Optimization of Extraction Conditions and Characteristics of Scale collagen From Sardine: Sardina pilchardus
Authors: F. Bellali, M. Kharroubi, M. Loutfi, N.Bourhim
Abstract:
In Morocco, the fish processing industry is an important source of income and generates a large amount of byproducts, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain large amounts of protein and calcium. Scales from Sardina pilchardus resulting from the processing operation have the potential to be used as a raw material for collagen production. Taking into account this strong expectation of the regional fish industry, sardine scale upgrading is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Therefore, fish scales used as a potential source of collagen have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize the acid-solubilized collagen from the scales of the sardine, Sardina pilchardus. Experimental design methodology was adopted in collagen processing for extraction optimization. The first stage of this work is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on the demineralization with HCl solution or EDTA. The last one is to establish the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent variables and the dependent variables.
Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology
Procedia PDF Downloads 417
16904 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle or the diffusion coefficient adjustment direction. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion and the MacInnes and Ohm's equations, among other phenomena. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and is intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, with this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling the sharp gradients that are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
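As an illustration of the Crank-Nicolson scheme discussed above, applied to a 1D constant-coefficient diffusion equation standing in for the electrolyte diffusion PDE: the update solves (I - λ/2 A) u^{n+1} = (I + λ/2 A) u^n with λ = D Δt/Δx². Zero-flux boundaries and all grid values below are illustrative, not the paper's setup.

```python
import numpy as np

def crank_nicolson_diffusion(u0, D, dx, dt, steps):
    """Crank-Nicolson time stepping for u_t = D * u_xx on a uniform grid."""
    n = len(u0)
    lam = D * dt / dx ** 2
    # Second-difference operator A with zero-flux (Neumann) boundaries
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A[0, 1] = 2.0       # mirrored ghost node at the left boundary
    A[-1, -2] = 2.0     # mirrored ghost node at the right boundary
    I = np.eye(n)
    lhs = I - 0.5 * lam * A
    rhs = I + 0.5 * lam * A
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(lhs, rhs @ u)   # unconditionally stable for any dt
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)        # initial concentration bump
u = crank_nicolson_diffusion(u0, D=1e-2, dx=x[1] - x[0], dt=0.01, steps=100)
print(u.max())                              # the bump flattens as it diffuses
```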
Procedia PDF Downloads 23
16903 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for the numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between the soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without instrument uncertainty, and are defined by the least-squares method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided the data sets for the measured values of the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared for the minimization problem. Although all these techniques take time to converge to an optimum value, PSO provided the best convergence and soil parameters. Overall, parameter identification analysis could be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS
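A minimal global-best PSO sketch of the kind of parameter search described above; the inertia and acceleration constants, the two-parameter soil model, and the toy surrogate are illustrative assumptions, not the study's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm minimization within box bounds."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# Toy misfit: RMSE between "measured" displacements and a surrogate prediction
measured = np.array([12.0, 8.5, 5.1])          # placeholder settlement readings

def surrogate(params):                          # stand-in for Plaxis / metamodel
    E, phi = params                             # hypothetical stiffness, friction
    return np.array([150.0 / E + 0.1 * phi, 100.0 / E + 0.05 * phi, 60.0 / E])

def rmse(params):
    return float(np.sqrt(np.mean((surrogate(params) - measured) ** 2)))

best, err = pso(rmse, bounds=np.array([[5.0, 50.0], [10.0, 45.0]]))
print(best, err)
```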
Procedia PDF Downloads 146
16902 Theoretical Exploration for the Impact of Accounting for Special Methods in Connectivity-Based Cohesion Measurement
Authors: Jehad Al Dallal
Abstract:
Class cohesion is a key object-oriented software quality attribute that is used to evaluate the degree of relatedness of class attributes and methods. Researchers have proposed several class cohesion measures. However, the effect of considering the special methods (i.e., constructors, destructors, and access and delegation methods) in cohesion calculation has not been thoroughly theoretically studied for most of them. In this paper, we address this issue for three popular connectivity-based class cohesion measures. For each of the considered measures, we theoretically study the impact of including or excluding special methods on the values that are obtained by applying the measure. This study is based on analyzing the definitions and formulas that are proposed for the measures. The results show that including/excluding special methods has a considerable effect on the obtained cohesion values and that this effect varies from one measure to another. For each of the three connectivity-based measures, the proposed theoretical study recommends excluding the special methods from cohesion measurement.
Keywords: object-oriented class, software quality, class cohesion measure, class cohesion, special methods
Procedia PDF Downloads 297
16901 Design Parameters Optimization of a Gas Turbine with Exhaust Gas Recirculation: An Energy and Exergy Approach
Authors: Joe Hachem, Marianne Cuif-Sjostrand, Thierry Schuhler, Dominique Orhon, Assaad Zoughaib
Abstract:
The implementation of exhaust gas recirculation (EGR) on gas turbines is increasingly gaining the attention of many researchers. This emerging technology presents many advantages, such as lowering NOx emissions and facilitating post-combustion carbon capture, as the carbon dioxide concentration in the cycle increases. As interesting as this technology may seem, the gas turbine, or its thermodynamic equivalent, the Brayton cycle, shows an intrinsic efficiency decrease with an increasing EGR rate. In this paper, a thermodynamic model is presented to show the cycle efficiency decrease with EGR; alternative values of the design parameters, namely the pressure ratio (PR) and the turbine inlet temperature (TIT), are then proposed to optimize the cycle efficiency at different EGR rates. Results show that, depending on the given EGR rate, both the design PR and TIT should be increased to compensate for the deficit in efficiency.
Keywords: gas turbines, exhaust gas recirculation, design parameters optimization, thermodynamic approach
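For context, the ideal air-standard Brayton efficiency that sets the baseline here depends only on the pressure ratio and the ratio of specific heats of the working fluid:

$$\eta_{\mathrm{Brayton}} = 1 - \mathrm{PR}^{-\frac{\gamma - 1}{\gamma}}$$

EGR shifts the working mixture's γ and specific heats, which is one route by which the cycle efficiency falls and why raising PR (and TIT, in the real cycle) can compensate.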
Procedia PDF Downloads 145
16900 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts, or formwork bodies, form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization
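To first order, the resonance of each embedded resonator follows the classical lumped Helmholtz formula; the sketch below uses a common flanged-end correction and ignores the coupling between resonators, so it is only a rough sizing aid, not the paper's COMSOL model.

```python
import math

def helmholtz_f0(c, neck_radius, neck_length, cavity_volume):
    """Classical lumped-element resonance of a single Helmholtz resonator.
    c in m/s, lengths in m, volume in m^3; returns f0 in Hz."""
    neck_area = math.pi * neck_radius ** 2
    # common flanged-end correction: ~0.85 * radius added at each neck end
    l_eff = neck_length + 2 * 0.85 * neck_radius
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# e.g., a 10 mm radius, 20 mm long neck on a 1-litre cavity -> roughly 159 Hz
print(helmholtz_f0(c=343.0, neck_radius=0.01, neck_length=0.02, cavity_volume=1e-3))
```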
Procedia PDF Downloads 159
16899 Objects Tracking in Catadioptric Images Using Spherical Snake
Authors: Khald Anisse, Amina Radgui, Mohammed Rziza
Abstract:
Tracking objects in video sequences is a very challenging task in many computer vision applications. However, there is no article that treats this topic in catadioptric vision. This paper attempts to describe a new approach to omnidirectional image processing based on inverse stereographic projection onto the half-sphere. We used the spherical model proposed by Geyer et al. For object tracking, our work is based on the snake method, optimized using the greedy algorithm, with its different operators adapted. The algorithm respects the deformed geometry of omnidirectional images through a spherical neighborhood, a spherical gradient, and a reformulation of the optimization algorithm on the spherical domain. This tracking method, which we call the "spherical snake," makes it possible to follow changes in the shape and size of an object as it moves in the spherical image.
Keywords: computer vision, spherical snake, omnidirectional image, object tracking, inverse stereographic projection
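For reference, a standard form of the inverse stereographic projection used to lift an image-plane point onto the unit sphere (the paper's half-sphere parameterization following Geyer's model may differ in normalization):

$$(x, y) \;\mapsto\; \left( \frac{2x}{1 + x^2 + y^2},\; \frac{2y}{1 + x^2 + y^2},\; \frac{x^2 + y^2 - 1}{x^2 + y^2 + 1} \right)$$

This maps the plane bijectively onto the unit sphere minus the projection pole, so snake operators defined in the image plane can be re-expressed on the sphere.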
Procedia PDF Downloads 402
16898 Analytic Hierarchy Process and Multi-Criteria Decision-Making Approach for Selecting the Most Effective Soil Erosion Zone in Gomati River Basin
Authors: Rajesh Chakraborty, Dibyendu Das, Rabindra Nath Barman, Uttam Kumar Mandal
Abstract:
In the present study, the objective is to find the zone causing the most soil erosion in the Gomati river basin, located in the state of Tripura, a northeastern state of India, using the analytic hierarchy process (AHP) and multi-objective optimization on the basis of ratio analysis (MOORA). The watershed is segmented into 20 zones based on area. The watershed is delineated by identifying the maximum elevation above sea level from Google Earth. The soil erosion is determined using the universal soil loss equation (USLE). The different independent variables of the soil loss equation carry different weightages for different soil zones. Therefore, to find the weightage factors for all the variables of the soil loss equation, such as the rainfall-runoff erosivity index and the soil erodibility factor, the analytic hierarchy process (AHP) is used. Thereafter, the multi-objective optimization on the basis of ratio analysis (MOORA) approach is used to select the most effective zone causing soil erosion. The MCDM technique concludes that the maximum soil erosion is occurring in zone 14.
Keywords: soil erosion, analytic hierarchy process (AHP), multi criteria decision making (MCDM), universal soil loss equation (USLE), multi-objective optimization on the basis of ratio analysis (MOORA)
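A minimal sketch of the MOORA ranking step described above: criteria columns are vector-normalized, weighted (e.g., with AHP-derived weights), and summed with benefit criteria positive and cost criteria negative. The decision matrix and weights below are illustrative placeholders; in the real study the USLE factors (A = R·K·L·S·C·P) would populate the matrix.

```python
import numpy as np

def moora_rank(decision_matrix, weights, benefit):
    """Rank alternatives (rows) by the MOORA ratio system."""
    norm = decision_matrix / np.sqrt((decision_matrix ** 2).sum(axis=0))
    weighted = norm * weights
    scores = np.where(benefit, weighted, -weighted).sum(axis=1)
    return np.argsort(-scores), scores  # zones ordered from highest score down

# Illustrative 4-zone x 3-criteria matrix (e.g., erosivity, erodibility, slope)
zones = np.array([[120.0, 0.30, 8.0],
                  [95.0, 0.45, 12.0],
                  [140.0, 0.25, 6.0],
                  [110.0, 0.50, 15.0]])
weights = np.array([0.5, 0.3, 0.2])     # e.g., from AHP pairwise comparisons
benefit = np.array([True, True, True])  # all criteria raise erosion in this toy
order, scores = moora_rank(zones, weights, benefit)
print(order, scores)
```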
Procedia PDF Downloads 538