Search results for: Pareto optimal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3077

2417 Julia-Based Computational Tool for Composite System Reliability Assessment

Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris

Abstract:

The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods like sequential Monte Carlo simulation can be computationally expensive. Despite this, it is necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and in parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source composite system reliability evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools. We describe its design, validation, and effectiveness, which includes analyzing two different formulations of the Optimal Power Flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
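
As a rough illustration of the sequential Monte Carlo approach described above, the Python sketch below estimates two standard adequacy indices (LOLE and EENS) for a toy single-area system. The unit list, failure/repair rates, and load profile are invented for demonstration and stand in for the tool's full OPF-based network model.

```python
import math
import random

# Hypothetical single-area system: (capacity MW, failure rate /h, repair rate /h)
UNITS = [(200, 0.0004, 0.02)] * 5 + [(100, 0.0006, 0.025)] * 4
PEAK_LOAD = 900.0  # MW

def hourly_load(hour):
    """Toy load profile: daily sinusoid around 70% of peak."""
    return PEAK_LOAD * (0.7 + 0.3 * math.sin(2 * math.pi * (hour % 24) / 24))

def simulate(years=100, seed=1):
    rng = random.Random(seed)
    up = [True] * len(UNITS)
    lole_hours, eens_mwh = 0, 0.0
    for h in range(years * 8760):
        # One-hour two-state Markov transition for each generating unit
        for i, (_, lam, mu) in enumerate(UNITS):
            if rng.random() < (lam if up[i] else mu):
                up[i] = not up[i]
        available = sum(u[0] for u, s in zip(UNITS, up) if s)
        shortfall = hourly_load(h) - available
        if shortfall > 0:
            lole_hours += 1
            eens_mwh += shortfall  # MWh over this one-hour step
    return lole_hours / years, eens_mwh / years

lole, eens = simulate()
print(f"LOLE ~ {lole:.2f} h/yr, EENS ~ {eens:.1f} MWh/yr")
```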

Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow

Procedia PDF Downloads 51
2416 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

BACKGROUND: The facility layout problem (FLP) is an NP-complete problem for which an exact optimal solution is hard to obtain. FLP has been widely studied in various limited spaces and workflows; for example, troop cafeterias containing many types of equipment suffer from chaotic processes during dining. OBJECTIVE: This article aims to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. METHODS: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective, and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. RESULTS: In the simulation, the dining time under the second new layout was shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference were both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, verifying the corresponding simulation results. CONCLUSIONS: A series of simulations and space experiments showed the two new layout schemes to be superior. In future research, similar approaches could be applied while taking layout-design algorithm calculation into consideration.

Keywords: layout optimization, dining efficiency, troops’ cafeteria, AnyLogic simulation, field experiment

Procedia PDF Downloads 123
2415 A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation

Authors: Yoshifumi Hino

Abstract:

An infinitely repeated prisoner’s dilemma is a typical model of a teamwork situation. If both players choose costly actions and contribute to the team, then both players are better off. However, each player has an incentive to choose a selfish action. We analyze the game under costly observation: each player can observe the action of the opponent only when he pays an observation cost in that period. In reality, observation in teamwork situations is often costly. Members of some teams work in distinct rooms, areas, or countries; in those cases, they have to spend time and money if they want to observe other team members. The costly observation assumption makes cooperation substantially more difficult because the equilibrium must satisfy the incentives not only on the action but also on the observational decision. Cooperation is especially difficult when the stage game is a prisoner's dilemma, because players have to communicate through only two actions. We examine whether or not players can cooperate with each other in a prisoner’s dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors in the repeated prisoner’s dilemma can be approximated by sequential equilibria (efficiency result). We show the efficiency result without any randomization device under certain circumstances, meaning that players can cooperate with each other without any randomization device even if observation is costly. Next, we assume that a public randomization device is available and show that any feasible and individually rational payoffs in the prisoner’s dilemma can be approximated by sequential equilibria under a specific situation (folk theorem). This implies that players can achieve asymmetric teamwork, such as a leadership situation, when a public randomization device is available.
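
To make the incentive logic concrete, the sketch below checks a grim-trigger sustainability condition for a repeated prisoner's dilemma in which both players pay a per-period observation cost. The payoffs and the assumption that players observe every period are illustrative simplifications; the paper's actual construction also has to handle the deviator's option to stop observing, which is what makes the problem hard.

```python
def min_delta(R=2.0, T=3.0, P=0.0, k=0.2):
    """Smallest discount factor delta sustaining grim-trigger cooperation when
    each player pays observation cost k every period. Condition:
        (R - k)/(1 - d) >= (T - k) + d * P / (1 - d),
    which rearranges to d >= (T - R) / (T - k - P).
    Payoffs R (mutual cooperation), T (temptation), P (mutual defection)
    are invented for illustration."""
    return (T - R) / (T - k - P)

for k in (0.0, 0.2, 0.5):
    print(f"observation cost {k}: cooperation sustainable for delta >= {min_delta(k=k):.3f}")
```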

Keywords: costly observation, efficiency, folk theorem, prisoner's dilemma, private monitoring, repeated games

Procedia PDF Downloads 216
2414 Performance Evaluation of Adsorption Refrigerating Systems

Authors: Nadia Allouache, Omar Rahli

Abstract:

Many promising technologies have been developed to harness the sun's energy, helping to economize energy and protect the environment. Solar refrigerating systems are one of these important technologies. In addition to environmental benefits and energy savings, adsorption refrigerating systems have many advantages, such as the lack of moving parts, simplicity of construction, and low operating costs. This work aims to establish the main factors that affect the performance of an adsorption refrigerating system using different adsorber geometries and different adsorbent-adsorbate pairs. Numerical modeling of the heat and mass transfer in the system, using various working pairs such as activated carbon-ammonia, calcium chloride-ammonia, activated carbon fiber-methanol, and activated carbon AC35-methanol, shows that the adsorber design can influence the system performance: the thermal performance is better in the annular configuration. An optimal value of the generating temperature is observed in the annular adsorber case, for which the thermal performance of the cooling system is maximal, while in the plate adsorber, above a certain generating temperature, the performance of the system remains almost constant. The environmental conditions, such as solar radiation and pressure, have a great influence on the system efficiency, and the choice of the working pair depends on the environmental conditions and the geometry of the adsorber.

Keywords: adsorber geometry, numerical modeling, optimal environmental conditions, working pairs

Procedia PDF Downloads 68
2413 Optimal Consumption of NaOH in Starch Gelatinization for Froth Flotation

Authors: André C. Silva, Débora N. Sousa, Elenice M. S. Silva, Thales P. Fontes, Raphael S. Tomaz

Abstract:

Starches are widely used as depressants in froth flotation operations in Brazil due to their efficiency, increasing selectivity in the inverse flotation of quartz while depressing iron ore. The starch market has been growing and improving in recent years, leading to better products that meet the requirements of the mineral industry. The major source of starch used for iron ore is corn starch, which needs to be gelatinized with sodium hydroxide (NaOH) prior to use. This stage has a direct impact on industrial costs, since lower NaOH consumption in gelatinization provides better control of the pH in froth flotation and reduces the amount of electrolytes present in the pulp. To evaluate the gelatinization degree, different starches and flours were subjected to NaOH addition and temperature variation experiments. Samples of starch (corn, cassava, HIPIX 100, HIPIX 101 and HIPIX 102, commercialized by Ingredion) and flour (cassava and potato) were tested. The starch samples were characterized through scanning electron microscopy, and the amylose content was determined through spectrometry, swelling, and solubility tests. The gelatinization was carried out through titration with NaOH, keeping the solution temperature constant at 40 °C. At the end of the tests, the optimal amount of NaOH consumed to gelatinize the starch or flour from the different botanical sources was established, along with a correlation between the amylopectin content of the starch and the starch/NaOH ratio needed for its gelatinization.

Keywords: froth flotation, gelatinization, sodium hydroxide, starches and flours

Procedia PDF Downloads 345
2412 An Exploration of Health Promotion Approach to Increase Optimal Complementary Feeding among Pastoral Mothers Having Children between 6 and 23 Months in Dikhil, Djibouti

Authors: Haruka Ando

Abstract:

Undernutrition of children is a critical issue, especially for people in the remote areas of the Republic of Djibouti, since household food insecurity, inadequate child caring and feeding, an unhealthy environment, lack of clean water, and insufficient maternal and child healthcare are underlying causes affecting child nutrition. Nomadic pastoralists living in the Dikhil region (Dikhil) are socio-economically and geographically more vulnerable due to displacement, which in turn worsens the situation of child stunting. A high prevalence of inappropriate complementary feeding among pastoral mothers might be a significant barrier to child growth. This study aims to identify health promotion intervention strategies that would support an increase in optimal complementary feeding among pastoral mothers of children aged 6-23 months in Dikhil. There are four objectives: to explore and understand the existing practice of complementary feeding among pastoral mothers in Dikhil; to identify the barriers to appropriate complementary feeding among the mothers; to critically explore and analyse strategies for an increase in complementary feeding among the mothers; and to make pragmatic recommendations to address the barriers in Djibouti. This is an in-depth study utilizing a conceptual framework, the behaviour change wheel, to analyse the determinants of complementary feeding and categorize health promotion interventions for increasing optimal complementary feeding among pastoral mothers living in Dikhil. The analytical tool was utilized to appraise strategies to mitigate the selected barriers against optimal complementary feeding. The data sources were secondary literature from both published and unpublished sources, collected systematically. The determinants, including barriers to optimal complementary feeding, were identified: heavy household workload, caring for multiple children under five, lack of education, cultural norms and traditional eating habits, lack of husbands' support, poverty and food insecurity, lack of clean water, low media coverage, insufficient health services on complementary feeding, fear, poor personal hygiene, and mothers' low decision-making ability and lack of motivation for food choice. To mitigate selected barriers to optimal complementary feeding, four intervention strategies based on interpersonal communication at the community level were chosen: scaling up mothers' support groups, nutrition education, a grandmother-inclusive approach, and training for complementary feeding counseling. The strategies were appraised through the criteria of effectiveness and feasibility; scaling up mothers' support groups could be the best approach. Mid-term and long-term recommendations are suggested based on the situation analysis and appraisal of intervention strategies. Mid-term recommendations include integrating complementary feeding promotion interventions into the healthcare service providing system in Dikhil, and donor agencies advocating and lobbying the Ministry of Health Djibouti (MoHD) to increase budgetary allocation for complementary feeding promotion to implement interventions at a community level. Moreover, the recommendations include a community health management team in Dikhil training healthcare workers and mother support groups by using complementary feeding communication guidelines and monitoring behaviour change of pastoral mothers and the health outcomes of their children. Long-term recommendations are for the MoHD to develop complementary feeding guidelines that cover sector-wide collaboration for multi-sectoral barriers.

Keywords: Afar, child food, child nutrition, complementary feeding, complementary food, developing countries, Djibouti, East Africa, hard-to-reach areas, Horn of Africa, nomad, pastoral, rural area, Somali, Sub-Saharan Africa

Procedia PDF Downloads 103
2411 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis

Authors: Yao Cheng, Weihua Zhang

Abstract:

Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from blades, shafts, and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification analysis. Next, to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practical applicability.
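
A minimal sketch of the decomposition-selection-envelope pipeline is shown below, assuming the PyEMD package (installed as EMD-signal) for CEEMDAN. MOMED has no widely packaged implementation, so the sketch stops at sensitive-IMF selection and envelope spectrum analysis; the synthetic fault signal and the 0.3 correlation threshold are invented for demonstration.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import CEEMDAN  # pip install EMD-signal

fs = 12000                              # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic faulty-bearing signal: periodic impulses + harmonic + white noise
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 97.0) < 0.0005)
signal = impulses + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(len(t))

imfs = CEEMDAN()(signal)                # adaptive decomposition into IMFs

# Sensitivity index: keep IMFs most correlated with the raw signal
corr = [abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs]
sensitive = [imf for imf, c in zip(imfs, corr) if c > 0.3]
composite = np.sum(sensitive, axis=0) if sensitive else signal

# Envelope spectrum: the fault frequency should appear as a spectral peak
envelope = np.abs(hilbert(composite))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```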

Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution

Procedia PDF Downloads 353
2410 Optimization of Hepatitis B Surface Antigen Purification to Improve the Production of Hepatitis B Vaccines in Pichia pastoris

Authors: Rizky Kusuma Cahyani

Abstract:

Hepatitis B is an inflammatory liver disease caused by the hepatitis B virus (HBV). This infection can be prevented by vaccination with a vaccine containing the HBV surface protein (sHBsAg). However, vaccine supply is limited. Several attempts have been made to produce local sHBsAg; however, the purity and protein yield are still inadequate. Therefore, optimization of the HBsAg purification steps is required to obtain a high yield with better purification fold. In this study, purification was optimized in two steps: precipitation using variations of NaCl concentration (0.3 M, 0.5 M, 0.7 M) and PEG (3%, 5%, 7%), and ion exchange chromatography (IEC) using 300-500 mM NaCl elution buffer. To quantify the HBsAg protein, the bicinchoninic acid assay (BCA) and enzyme-linked immunosorbent assay (ELISA) were used. Visualization of the HBsAg protein was done by SDS-PAGE analysis. Based on quantitative analysis, the optimal condition in the precipitation step was 0.3 M NaCl and 3% PEG, while in the ion exchange chromatography step, the optimum condition was elution with 500 mM NaCl. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis indicated the presence of the HBsAg protein with molecular weights of 25 kDa (monomer) and 50 kDa (dimer). The optimum conditions for purification of sHBsAg produced in Pichia pastoris gave a yield of 47% and a purification fold of 17×, which would make hepatitis B vaccine production more optimal.

Keywords: hepatitis B virus, HBsAg, hepatitis B surface antigen, Pichia pastoris, purification

Procedia PDF Downloads 130
2409 Load-Enabled Deployment and Sensing Range Optimization for Lifetime Enhancement of WSNs

Authors: Krishan P. Sharma, T. P. Sharma

Abstract:

Wireless sensor nodes are resource-constrained, battery-powered devices usually deployed in hostile and ill-disposed areas to cooperatively monitor physical or environmental conditions. Due to their limited power supply, the major challenge for researchers is to utilize battery power so as to enhance the lifetime of the whole network. Communication and sensing are the two major sources of energy consumption in sensor networks. In this paper, we propose a deployment strategy for enhancing the average lifetime of a sensor network by effectively utilizing communication and sensing energy to provide full coverage. The proposed scheme is based on the fact that, due to the heavy relaying load, sensor nodes near the sink drain energy at a much faster rate than other nodes in the network and consequently die much earlier. To counter this imbalance, the proposed scheme finds optimal communication and sensing ranges according to the effective load at each node and uses a non-uniform deployment strategy with a comparatively high density of nodes near the sink. The probable relaying load factor at each node is calculated, and the optimal communication distance and sensing range for each sensor node are adjusted accordingly. Thus, sensor nodes are placed at locations that optimize energy during network operation. A formal mathematical analysis for calculating the optimized locations is reported in the present work.
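
The sketch below illustrates the load-imbalance argument on a simple corona (ring) model: each node generates one packet per round, and nodes in inner rings relay all outer-ring traffic. The ring node counts are invented for demonstration and show how a denser deployment near the sink flattens the per-node load factor.

```python
def ring_load_factors(node_counts):
    """Per-node relaying load factor for each ring of a corona model, assuming
    uniform traffic: every node sends one packet per round toward the sink,
    and ring i relays all traffic from rings i+1, i+2, ..."""
    loads = []
    for i, own in enumerate(node_counts):
        relayed = sum(node_counts[i + 1:])
        loads.append((own + relayed) / own)  # packets handled per node
    return loads

# Uniform deployment: the innermost ring carries the whole network's traffic
print("uniform:    ", ring_load_factors([20, 20, 20, 20, 20]))

# Non-uniform deployment (denser near the sink) flattens the per-node load
print("non-uniform:", ring_load_factors([60, 40, 25, 15, 10]))
```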

Keywords: load factor, network lifetime, non-uniform deployment, sensing range

Procedia PDF Downloads 359
2408 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis

Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson

Abstract:

Temperature is one of the most important climatic factors for crop production. However, extreme temperatures cause droughts, heat waves, and cold spells that have various consequences for human life, agriculture, and the environment in general. It is necessary to provide reliable information related to such incidents and the probability of such extreme events occurring. In the 21st century, the world faces a huge number of threats, especially from climate change, due to global warming and environmental degradation. The rise in temperature has a direct effect on the decrease in rainfall, which in turn impacts crop growth and development and decreases crop yield and quality. Countries that are heavily dependent on agriculture tend to suffer the most and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To achieve this objective, daily temperature data spanning the period from January 2000 to December 2017, recorded at nine weather stations and obtained from the Rwanda Meteorological Agency, were used. Two methods, namely the block maxima (BM) method and the peaks over threshold (POT) method, were applied to model and analyse extreme temperatures. Model parameters were estimated, while extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions to be the most appropriate models for the annual maxima of daily temperature. The results show that the temperature will continue to increase, as indicated by the estimated return levels.
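
A minimal block maxima sketch using scipy is shown below: a generalised extreme value distribution is fitted to (here simulated) annual maxima, and return levels are read off as quantiles at probability 1 - 1/T for a T-year return period. The synthetic data stand in for the station records used in the study.

```python
from scipy import stats

# Stand-in for 18 annual maxima of daily maximum temperature (deg C)
annual_maxima = stats.gumbel_r.rvs(loc=30.0, scale=1.2, size=18, random_state=0)

# Fit the generalised extreme value distribution (block maxima method)
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# Return level for a T-year return period = quantile at 1 - 1/T
for T in (10, 50, 100):
    level = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.2f} °C")
```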

Keywords: climate change, global warming, extreme value theory, Rwanda, temperature, generalised extreme value distribution, generalised Pareto distribution

Procedia PDF Downloads 153
2407 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete data. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected; the complete data might also be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored under that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with the imputed missing values compared to using the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After finding the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
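
The sketch below illustrates the two stages on synthetic data, assuming scikit-learn's IterativeImputer as one concrete choice of imputation algorithm: missing entries are filled from the joint structure of the data, and an empirical survival function of the price limit is then computed for one property profile.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical search subscriptions: [rooms, max price]; NaN = unspecified
rng = np.random.default_rng(42)
rooms = rng.integers(1, 6, size=500).astype(float)
price = 200 + 120 * rooms + rng.normal(0, 60, size=500)
X = np.column_stack([rooms, price])
X_missing = np.where(rng.random(X.shape) < 0.3, np.nan, X)  # 30% missing

# Impute missing entries from the joint structure of the data
X_imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(X_missing)

# Empirical survival function of willingness to pay for 3-room requests:
# S(p) = share of subscribers whose price limit exceeds p
sel = np.round(X_imputed[:, 0]) == 3
prices = np.sort(X_imputed[sel, 1])
survival = 1.0 - np.arange(1, len(prices) + 1) / len(prices)
idx = min(np.searchsorted(prices, 600.0), len(prices) - 1)
print("share of 3-room seekers with price limit above 600:", round(survival[idx], 3))
```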

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 266
2406 Optimal 3D Deployment and Path Planning of Multiple UAVs for Maximum Coverage and Autonomy

Authors: Indu Chandran, Shubham Sharma, Rohan Mehta, Vipin Kizheppatt

Abstract:

Unmanned aerial vehicles are increasingly being explored as the most promising solution for disaster monitoring, assessment, and recovery. Current relief operations rely heavily on intelligent robot swarms to capture the damage caused, provide timely rescue, and create road maps for the victims. To perform these time-critical missions, efficient path planning that ensures quick coverage of the area is vital. This study aims to develop a technically balanced approach to provide maximum coverage of the affected area in minimum time using the optimal number of UAVs. A coverage trajectory is designed through area decomposition and task assignment. To perform an efficient and autonomous coverage mission, a solution to a TSP-based optimization problem using meta-heuristic approaches is designed to allocate waypoints to UAVs of different flight capacities. The study exploits multi-agent simulations such as PX4-SITL and QGroundControl through the ROS framework and visualizes the dynamics of UAV deployment on different search paths in a 3D Gazebo environment. Through detailed theoretical analysis and simulation tests, we illustrate the optimality and efficiency of the proposed methodologies.
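
As one concrete instance of the heuristic waypoint-allocation step, the sketch below solves a small single-UAV TSP over target zones with a nearest-neighbour construction followed by 2-opt improvement; the zone coordinates are random placeholders.

```python
import math
import random

def tour_length(points, order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """Greedy construction: always fly to the closest unvisited zone."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(points, order):
    """Local improvement: reverse segments while the tour keeps shortening."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(points, candidate) < tour_length(points, order):
                    order, improved = candidate, True
    return order

random.seed(7)
zones = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
order = two_opt(zones, nearest_neighbor(zones))
print("visit order:", order, "| length: %.1f" % tour_length(zones, order))
```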

Keywords: area coverage, coverage path planning, heuristic algorithm, mission monitoring, optimization, task assignment, unmanned aerial vehicles

Procedia PDF Downloads 188
2405 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards is moving towards slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of that technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed relied primarily on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed in comparison with a traditional standard keyboard to investigate the influence of user experience on keyboard operation. The results indicated that the optimal combination still provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and could be applied comprehensively to touch devices and input interfaces with which people interact.
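
A short sketch of the Taguchi signal-to-noise computation used in such analyses is given below, for the larger-is-better case; the accuracy scores per shape level are invented for demonstration.

```python
import numpy as np

def snr_larger_is_better(y):
    """Taguchi signal-to-noise ratio (dB) for a larger-is-better response:
    S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical typing-accuracy scores (%) for three key-shape levels
for shape, scores in {"circle": [88, 90, 85],
                      "rectangle": [91, 89, 92],
                      "L-shaped": [94, 95, 93]}.items():
    print(f"{shape:>9}: S/N = {snr_larger_is_better(scores):.2f} dB")
```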

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 285
2404 Evaluation of the Power Generation Effect Obtained by Inserting a Piezoelectric Sheet in the Backlash Clearance of a Circular Arc Helical Gear

Authors: Barenten Suciu, Yuya Nakamoto

Abstract:

The power generation effect obtained by inserting a piezoelectric sheet in the backlash clearance of a circular arc helical gear is evaluated. This type of screw gear is preferred since, in comparison with the involute tooth profile, the circular arc profile leads to reduced stress-concentration effects and improved life of the piezoelectric film. First, the geometry of the circular arc helical gear and the properties of the piezoelectric sheet are presented. Then, a description of the test rig, consisting of a right-hand thread gear meshing with a left-hand thread gear, and the voltage measurement procedure are given. After creating the three-dimensional (3D) model of the meshing gears in SolidWorks, they are 3D-printed in acrylonitrile butadiene styrene (ABS) resin. The variation of the generated voltage versus time during a meshing cycle of the circular arc helical gear is measured for various values of the center distance. Then, the change of the maximal, minimal, and peak-to-peak voltage versus the center distance is illustrated. The optimal center distance of the gear, which maximizes the voltage, is found and its significance discussed. These results prove that the contact pressure of the meshing gears can be measured and that electrical power can be generated by employing the proposed technique.

Keywords: circular arc helical gear, contact problem, optimal center distance, piezoelectric sheet, power generation

Procedia PDF Downloads 147
2403 Hydrometallurgical Recovery of Cobalt, Nickel, Lithium, and Manganese from Spent Lithium-Ion Batteries

Authors: E. K. Hardwick, L. B. Siwela, J. G. Falconer, M. E. Mathibela, W. Rolfe

Abstract:

Lithium-ion battery (LiB) demand has increased with the advancement of technologies. Applications include electric vehicles, cell phones, laptops, and many more devices. Typical components of the cathodes include lithium, cobalt, nickel, and manganese. Recycling spent LiBs is necessary to reduce the ecological footprint of their production and use and to provide a secondary source of valuable metals. A hydrometallurgical method was investigated for the recovery of cobalt and nickel from LiB cathodes. The cathodes were leached using a chloride solution, and ion exchange was then used to recover the chloro-complexes of the metals. The aim of the research was to determine the efficiency of a chloride leach, as well as the ion exchange operating capacities that can be achieved for LiB recycling, and to establish the optimal operating conditions (ideal pH, temperature, leachate and eluant, flow rate, and reagent concentrations) for the recovery of the cathode metals. It was found that the leaching of the cathodes could be hindered by the formation of refractory metal oxides of cathode components; a reducing agent was necessary to improve the leaching rate and efficiency. Leaching was achieved using various chloride-containing solutions. The chloro-complexes were taken up by the ion exchange resin and eluted to produce concentrated cobalt, nickel, lithium, and manganese streams, achieving chromatographic separation of these elements. Further work is currently underway to determine the optimal operating conditions for the recovery by ion exchange.

Keywords: cobalt, ion exchange, leachate formation, lithium-ion batteries, manganese, nickel

Procedia PDF Downloads 79
2402 Systems Approach on Thermal Analysis of an Automatic Transmission

Authors: Sinsze Koo, Benjin Luo, Matthew Henry

Abstract:

In order to increase the performance of an automatic transmission, the automatic transmission fluid must be warmed up to an optimal operating temperature. In a conventional vehicle, cold starts result in friction losses in the gearbox and engine. The stop-and-go nature of city driving dramatically affects the warm-up of engine oil and automatic transmission fluid and delays the time needed to reach an optimal operating temperature. This temperature phenomenon impacts both engine and transmission performance and also increases fuel consumption and CO2 emissions. The aim of this study is to develop know-how of the thermal behavior in order to identify thermal impacts and functional principles in automatic transmissions. Thermal behavior was studied using models and simulations of one-dimensional thermal and flow transport, developed in GT-Suite. The powertrain of a conventional vehicle was modeled in order to emphasize the thermal phenomena occurring in the various components and how they impact automatic transmission performance. The simulation demonstrates the thermal model of a transmission fluid cooling system and its component parts during warm-up after a cold start. The results of these analyses will support future designs of transmission systems and components in an attempt to obtain better fuel efficiency and transmission performance; such thermal analyses could also identify ways to improve existing thermal management techniques, with priority on fuel efficiency.

Keywords: thermal management, automatic transmission, hybrid, systematic approach

Procedia PDF Downloads 356
2401 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were defined in three different ways: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20, or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
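
The sketch below illustrates the mechanics on simulated data: each animal carries one haplotype allele per block (drawn directly, standing in for the study's three haplotype-definition methods, which also require phased genotypes), the alleles are one-hot encoded, and a GBLUP-style prediction is made via a genomic relationship matrix. Dimensions, the variance ratio, and phenotypes are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_windows = 200, 20  # e.g. 20 haplotype blocks across the genome

# One haplotype allele per block per animal, one-hot encoded (invented data)
blocks = []
for _ in range(n_windows):
    n_alleles = int(rng.integers(3, 7))              # alleles segregating here
    carried = rng.integers(0, n_alleles, size=n_animals)
    onehot = np.zeros((n_animals, n_alleles))
    onehot[np.arange(n_animals), carried] = 1.0
    blocks.append(onehot)
Z = np.hstack(blocks)                                # haplotype design matrix

# GBLUP-style prediction: genomic relationship matrix from haplotype alleles
y = rng.normal(350.0, 30.0, size=n_animals)          # e.g. carcass weight (kg)
Zc = Z - Z.mean(axis=0)
G = Zc @ Zc.T / Z.shape[1]
lam = 1.0                                            # sigma_e^2 / sigma_u^2
u_hat = G @ np.linalg.solve(G + lam * np.eye(n_animals), y - y.mean())
print("estimated breeding values (first 5):", np.round(u_hat[:5], 2))
```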

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 124
2400 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers’ performance prediction. Different multi-criteria decision-making methods such as ANN, GA, fuzzy logic, and AHP have previously been used to predict supplier performance, but the “black-box” characteristic of these methods is still a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted in Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming predicts supplier performance significantly better with respect to the RMSE and R² values. Moreover, using GEP, a mathematical function was also derived to address the black-box structure of ANNs in modeling performance prediction.
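
For reference, the two reported accuracy measures can be computed as below; the observed and predicted scores are placeholders, not SAPCO data.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical observed vs. predicted supplier performance scores
y_test = np.array([72, 85, 90, 66, 78, 88, 95, 70])
pred_ann = np.array([70, 82, 93, 69, 75, 85, 91, 74])
pred_gep = np.array([71, 84, 91, 67, 77, 87, 94, 71])

for name, pred in [("ANN", pred_ann), ("GEP", pred_gep)]:
    rmse = mean_squared_error(y_test, pred) ** 0.5  # root mean square error
    print(f"{name}: RMSE = {rmse:.2f}, R^2 = {r2_score(y_test, pred):.3f}")
```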

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 401
2399 Experimental and CFD Simulation of the Jet Pump for Air Bubbles Formation

Authors: L. Grinis, N. Lubashevsky, Y. Ostrovski

Abstract:

A jet pump is a type of pump that accelerates the flow of a secondary fluid (driven fluid) by introducing a motive fluid with high velocity into a converging-diverging nozzle. Jet pumps are also known as eductors or ejectors, depending on the phase of the motive fluid: an ejector's motive fluid is gaseous, usually steam or air, while an eductor's motive fluid is a liquid, usually water. Jet pumps that generate air bubbles are widely used in wastewater treatment processes. In this work, we discuss the characteristics of the jet pump and the computational simulation of this device. To find the optimal angle and depth for the air pipe, so as to achieve the maximal volumetric air flow rate, an experimental apparatus was constructed to ascertain the best geometrical configuration for this new type of jet pump. Using 3D printing technology, a series of jet pumps was printed and tested with the aim of maximizing the air flow rate as a function of the angle and depth of the air pipe insertion. The experimental results show a difference of up to 300% in performance between the different pumps (ratio of air flow rate to supplied power), where the optimal geometric model has an insertion angle of 60° and an air pipe insertion depth ending at the center of the mixing chamber. The differences between the pumps were further explained by using CFD for a better understanding of the factors that affect the air flow rate. The validity of the computational simulation and the corresponding assumptions has been proved experimentally, with the simulations showing a high degree of congruence with the laboratory tests. This study demonstrates the potential of using the jet pump in many practical applications.

Keywords: air bubbles, CFD simulation, jet pump, applications

Procedia PDF Downloads 224
2398 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare various decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could help with the selection of the optimal remediation technology. This tool, based on the LCA methodology, would consider site conditions as well as environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data were collected from previous LCA studies of site remediation technologies; this step identified knowledge gaps and limitations in the project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared include bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The LCA study is carried out with the modelling software SimaPro, and a sensitivity analysis of the LCA results will be incorporated to evaluate the impact on the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they fluctuate at different sites. All results will then be summarized, and an interactive Excel-based tool will be developed to help select the best sustainable site remediation technology. Preliminary LCA results show improved sustainability for the decontamination of a gasoline-contaminated site for each technology compared to the no-remediation option. Sensitivity analyses are now being completed on site parameters, including soil type and transportation distance, to determine how the environmental impacts fluctuate at other contaminated gasoline locations as these parameters vary. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined.

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 36
2397 Fast Generation of High-Performance Driveshafts: A Digital Approach to Automated Linked Topology and Design Optimization

Authors: Willi Zschiebsch, Alrik Dargel, Sebastian Spitzer, Philipp Johst, Robert Böhm, Niels Modler

Abstract:

In this article, we investigate an approach that digitally links individual development process steps, using the drive shaft of an aircraft engine as a representative example of a fiber polymer composite. Such high-performance, lightweight composite structures have many adjustable parameters that influence their mechanical properties, and only a combination of optimal parameter values can lead to energy-efficient lightweight structures. The development tools required for the Engineering Design Process (EDP) are often isolated solutions, and their compatibility with each other is limited. A digital framework is presented in this study which allows individual specialised tools to be linked via the generated data in such a way that automated optimization across programs becomes possible. This is demonstrated using the example of linking geometry generation with numerical structural analysis. The proposed digital framework for automated design optimization demonstrates the feasibility of a complete digital approach to design optimization. The methodology shows promising potential for achieving optimal solutions in terms of mass, material utilization, eigenfrequency, and deformation under lateral load with less development effort. The development of such a framework is an important step towards promoting a more efficient design approach that can lead to stable and balanced results.
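
A sketch of what the optimization layer of such a framework can look like is given below, assuming the pymoo library for NSGA-II; the two design variables and the toy mass/deflection models are placeholders for the linked geometry-generation and structural-analysis tools.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class DriveshaftProblem(ElementwiseProblem):
    def __init__(self):
        # x = [wall thickness (mm), fiber angle (deg)] — illustrative only
        super().__init__(n_var=2, n_obj=2, xl=[1.0, 0.0], xu=[6.0, 90.0])

    def _evaluate(self, x, out, *args, **kwargs):
        thickness, angle = x
        mass = thickness * 2.5                  # toy mass model
        # toy stiffness model: thicker walls and flatter angles deflect less
        deflection = 10.0 / (thickness * (1 + np.cos(np.radians(angle))))
        out["F"] = [mass, deflection]           # both objectives minimized

res = minimize(DriveshaftProblem(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
print("Pareto front samples (mass, deflection):")
print(np.round(res.F[:5], 3))
```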

Keywords: digital linked process, composite, CFRP, multi-objective, EDP, NSGA-2, NSGA-3, TPE

Procedia PDF Downloads 54
2396 Tandem Concentrated Photovoltaic-Thermoelectric Hybrid System: Feasibility Analysis and Performance Enhancement Through Material Assessment Methodology

Authors: Shuwen Hu, Yuancheng Lou, Dongxu Ji

Abstract:

Photovoltaic (PV) power generation, one of the most commercialized methods of utilizing solar power, can only convert a limited range of the solar spectrum into electricity, whereas the majority of the solar energy is dissipated as heat. To address this problem, a thermoelectric (TE) module is often integrated with the concentrated PV module for waste heat recovery and regeneration. In this research, a feasibility analysis is conducted for the tandem concentrated photovoltaic-thermoelectric (CPV-TE) hybrid system considering various operational parameters as well as TE material properties. Furthermore, the power output density of the CPV-TE hybrid system is maximized by selecting the optimal TE material through a systematic assessment methodology. In the feasibility analysis, CPV-TE is found to be more advantageous than a CPV-only system except under high optical concentration ratios with a low cold-side convective coefficient. It is also shown that the effects of the TE material properties, including the Seebeck coefficient, thermal conductivity, and electrical resistivity, on the feasibility of CPV-TE interact with each other and might have opposite effects on system performance under different operational conditions. In addition, the optimal TE material selected by the proposed assessment methodology can improve the system power output density by 227 W/m² under highly concentrated solar irradiance and hence broadens the feasible range of CPV-TE with respect to the optical concentration ratio.

Keywords: feasibility analysis, material assessment methodology, photovoltaic waste heat recovery, tandem photovoltaic-thermoelectric

Procedia PDF Downloads 56
2395 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions, so there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyze how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' production mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 138
2394 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads

Authors: Riaan Kleyn

Abstract:

Mass wine production companies across the globe are supplied with grapes by winegrowers who predominantly utilize mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material other than grapes (MOG) in the harvested wine grape loads arriving at the cellar, which degrades the quality of the wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method would encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties, which would indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method in this study for determining the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
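
A minimal inference sketch with torchvision's off-the-shelf Mask R-CNN (ResNet-50 FPN backbone) is shown below; the COCO-pretrained weights and the image path stand in for a model fine-tuned on MOG annotations, and the 0.5 thresholds are illustrative.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained stand-in for a checkpoint fine-tuned on MOG annotations
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("grape_load.jpg").convert("RGB")  # hypothetical image path
with torch.no_grad():
    output = model([to_tensor(image)])[0]

# Estimate the detected surface fraction from predicted instance masks
keep = output["scores"] > 0.5
if keep.any():
    masks = output["masks"][keep, 0] > 0.5            # binarize soft masks
    fraction = masks.any(dim=0).float().mean().item() # union over instances
    print(f"detected area fraction: {fraction:.1%}")
```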

Keywords: computer vision, wine grapes, machine learning, machine harvested grapes

Procedia PDF Downloads 69
2393 Path Planning for Unmanned Aerial Vehicles in Constrained Environments for Locust Elimination

Authors: Aadiv Shah, Hari Nair, Vedant Mittal, Alice Cheeran

Abstract:

Present-day agricultural practices such as blanket spraying not only lead to excessive usage of pesticides but also harm the overall crop yield. This paper introduces an algorithm to optimize the traversal of an unmanned aerial vehicle (UAV) in constrained environments. The proposed system focuses on the agricultural application of targeted spraying for locust elimination. Given a satellite image of a farm, target zones that are prone to locust swarm formation are detected through the calculation of the normalized difference vegetation index (NDVI). This is followed by determining the optimal path for traversal of a UAV through these target zones using the proposed algorithm, in order to perform pesticide spraying in the most efficient manner possible. Unlike the classic travelling salesman problem involving point-to-point optimization, the proposed algorithm determines an optimal path for multiple regions, independent of their geometry. Finally, the paper explores the idea of implementing reinforcement learning to model complex environmental behaviour and make the path planning mechanism for UAVs agnostic to external environment changes. This system not only presents a solution to the enormous losses incurred due to locust attacks but also an efficient way to automate agricultural practices across the globe in order to improve farmer ergonomics.
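
The NDVI step is straightforward to reproduce; the sketch below computes NDVI = (NIR - Red) / (NIR + Red) on small placeholder reflectance patches and thresholds it to mark candidate target zones (the 0.4 threshold is illustrative).

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, guarded against division by zero."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Hypothetical 4x4 NIR and red reflectance patches from a satellite image
nir = np.array([[0.6, 0.6, 0.2, 0.2], [0.6, 0.5, 0.2, 0.3],
                [0.3, 0.3, 0.7, 0.7], [0.2, 0.3, 0.7, 0.6]])
red = np.array([[0.1, 0.1, 0.2, 0.2], [0.1, 0.2, 0.2, 0.2],
                [0.2, 0.2, 0.1, 0.1], [0.2, 0.2, 0.1, 0.1]])

index = ndvi(nir, red)
target_zones = index > 0.4   # threshold chosen for illustration
print(np.round(index, 2))
print("candidate spray cells:", np.argwhere(target_zones).tolist())
```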

Keywords: locust, NDVI, optimization, path planning, reinforcement learning, UAV

Procedia PDF Downloads 231
2392 Purpose-Driven Collaborative Strategic Learning

Authors: Mingyan Hong, Shuozhao Hou

Abstract:

Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it – green card; I get it – yellow card) or "clunk" (I don't get it – red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. We explore how to help students of mixed achievement levels apply learning strategies while learning content-area materials in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is defined as the distance between the actual acquisition level, as determined by individual problem solving, and the potential acquisition level, similar to Krashen's (1980) i+1, as determined through problem solving under a facilitator's guidance or in group work with more capable members (Vygotsky, 1978). Vygotsky claimed that learners' ideal learning environment is in the ZPD. An ideal teacher or more-knowledgeable-other (MKO) should be able to recognize a learner's ZPD and facilitate their development beyond it; the MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the input hypothesis, including the i+1 hypothesis. The input hypothesis models are the application of the ZPD to second language acquisition and remain widely recognized today. Krashen's (2019) optimal language learning environment model further developed the application of the ZPD and added the component of strategic group learning, which is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning Model is a strategic integration of the ZPD, the i+1 hypothesis model, and the optimal language learning environment model. It is purpose-driven to ensure group members are motivated. It is collaborative so that an optimal learning environment is created in which meaningful input from meaningful conversation can be generated. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer a group learning instrument so that the learning process is structured, and integrate group learning and team building, ensuring the holistic development of each participant. Using data collected from college year-one and year-two students' English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, this presentation will show how second/foreign language learners grow from functioning with the aid of a facilitator or more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.

Keywords: collaborative, strategic, optimal input, second language acquisition

Procedia PDF Downloads 107
2391 Effect of Naphtha Addition to a Cyclic Steam Stimulation Process for Reducing Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents during cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, when this ratio otherwise increases significantly. The mobility of the upgraded crude oil increases due to structural changes in its components, which is in turn reflected in decreased density and viscosity. In the present work, the effects of temperature, time, and weight percentage of naphtha were evaluated using a 2³ factorial design of experiments. From the analysis of variance (ANOVA) and the Pareto chart, it was possible to identify the effect of each factor on viscosity reduction. The crude-steam-naphtha interaction was reproduced experimentally in a batch reactor using a Colombian heavy oil of 12.8° API and 3500 cP. The temperature, reaction time, and naphtha content ranged over 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density from 0.9542 to 0.9414 g/cm³, while the viscosity reduction was on the order of 55 to 70%. Simulated distillation results, according to ASTM D7169, revealed significant conversion of the 315 °C+ fraction. Nuclear magnetic resonance (NMR), Fourier-transform infrared (FTIR), and ultraviolet-visible (UV-Vis) spectroscopy indicated that the increased yield of light fractions in the upgraded crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha, together with laboratory-scale characterization, can be considered a practical tool in improved recovery processes.
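
As a hedged illustration of the screening step the abstract describes, the Python sketch below computes main and interaction effects for a 2³ two-level factorial design and ranks them Pareto-style by absolute magnitude. The response values are hypothetical placeholders, not the paper's measurements, and the factor labels merely mirror the three variables named above.

```python
import itertools
import numpy as np

# Coded 2^3 full-factorial design: A = temperature, B = time, C = naphtha wt%.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs
# Hypothetical % viscosity-reduction responses, one per run (illustrative only).
y = np.array([55.0, 58.0, 57.0, 61.0, 60.0, 66.0, 63.0, 70.0])

labels = ["A:temperature", "B:time", "C:naphtha", "AB", "AC", "BC", "ABC"]
contrasts = [runs[:, 0], runs[:, 1], runs[:, 2],
             runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2],
             runs[:, 1] * runs[:, 2], runs[:, 0] * runs[:, 1] * runs[:, 2]]

# In a two-level design, a term's effect is the mean response at its
# high (+1) setting minus the mean response at its low (-1) setting.
effects = {name: y[c > 0].mean() - y[c < 0].mean()
           for name, c in zip(labels, contrasts)}

# Pareto-style ranking: largest absolute effect first
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:14s} {eff:+6.2f}")
```

In a real screening study, these ranked effects would be compared against an error estimate (e.g., via ANOVA) before declaring a factor significant.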

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 150
2390 Seismic Performance of Benchmark Building Installed with Semi-Active Dampers

Authors: B. R. Raut

Abstract:

The seismic performance of a 20-storey benchmark building with semi-active dampers is investigated under various earthquake ground motions. Semi-Active Variable Friction Dampers (SAVFD) and Magnetorheological (MR) dampers are used in this study. A recently proposed predictive control algorithm is employed for the SAVFD, and a simple mechanical model based on a Bouc–Wen element with a clipped-optimal control algorithm is employed for the MR damper. A parametric study is carried out to ascertain the optimum parameters of the semi-active controllers, i.e., those that yield the minimum performance indices of the controlled benchmark building. The effectiveness of the dampers is studied in terms of the reduction in structural responses and performance criteria. To minimize the cost of the dampers, their optimal placement, rather than installation at every floor, is also investigated. The semi-active dampers installed in the benchmark building effectively reduce the earthquake-induced responses. A smaller number of dampers at appropriate locations provides comparable response reduction, thereby cutting the cost of the dampers significantly. The effectiveness of the two semi-active devices in mitigating seismic responses is cross-compared: the MR dampers yield lower values for the majority of the performance criteria than the SAVFD. Thus, the MR dampers outperform the SAVFD in reducing displacement, drift, acceleration, and base shear of mid- to high-rise buildings under seismic forces.
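
For readers unfamiliar with the control scheme, the following Python sketch shows a simple Bouc–Wen MR-damper force element together with the clipped-optimal switching law. The model structure follows the common form in the semi-active control literature; all parameter values are illustrative assumptions, not the identified values used in the paper.

```python
def bouc_wen_step(x_dot, z, dt, alpha=880.0, c0=50.0,
                  beta=3.0, gamma=3.0, n=2, A=120.0):
    """One explicit-Euler step of a simple Bouc-Wen MR-damper element.
    Returns (damper force, updated evolutionary variable z).
    All parameter values here are illustrative, not identified ones."""
    z_dot = (-gamma * abs(x_dot) * z * abs(z) ** (n - 1)
             - beta * x_dot * abs(z) ** n
             + A * x_dot)
    z = z + dt * z_dot
    force = c0 * x_dot + alpha * z
    return force, z

def clipped_optimal_voltage(f_desired, f_measured, v_max=2.25):
    """Clipped-optimal law: command maximum voltage only when the measured
    damper force must grow toward the force requested by the optimal
    (e.g., LQG) controller; otherwise command zero voltage."""
    return v_max if f_measured * (f_desired - f_measured) > 0 else 0.0

# Example: one control step with illustrative states (velocity in m/s,
# forces in kN, time step 1 ms)
f, z = bouc_wen_step(x_dot=0.05, z=0.0, dt=1e-3)
print(clipped_optimal_voltage(f_desired=150.0, f_measured=f))
```

In a full simulation, this voltage would feed back into the voltage-dependent damper parameters at the next time step, closing the semi-active loop.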

Keywords: benchmark building, control strategy, input excitation, MR dampers, peak response, semi-active variable friction dampers

Procedia PDF Downloads 266
2389 Prediction of Compressive Strength of Concrete from Early-Age Test Results Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates ‘finding the flats’ on the surfaces generated by RSM. The dual-response approach to RSM captures the standard deviation of the output as well as the average, accounting for unknown sources of variation; dual response plus POE provides a more useful model of overall response variation. In our case, we applied this technique to predicting the 28-day compressive strength of concrete. Since waiting 28 days is time-consuming, while timely results are important for quality control, this paper investigates the potential of design of experiments (DOE-RSM) to predict the compressive strength of concrete at 28 days from early-age test results. The data for this study were obtained from experimental programs at the University of Benghazi, Department of Civil Engineering; a total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete constituents, i.e., cement, coarse aggregate, fine aggregate, and water, were used in all mixes, with different mix proportions and different water-cement ratios. The proposed mathematical models are capable of predicting the required 28-day compressive strength of concrete from early-age results.
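
To make the modeling step concrete, the sketch below fits a second-order (quadratic) response surface, the standard RSM model form, to predict 28-day strength from an early-age strength and the water-cement ratio. The choice of predictors and every data value are hypothetical stand-ins, not the study's 114 records.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical records: [7-day strength (MPa), water-cement ratio]
# versus measured 28-day strength (MPa) -- illustrative data only.
X = np.array([[18.0, 0.55], [22.5, 0.50], [26.0, 0.45],
              [30.5, 0.40], [24.0, 0.48], [20.0, 0.52]])
y = np.array([27.0, 32.5, 38.0, 44.5, 35.0, 30.0])

# Quadratic response surface: all linear, interaction, and squared terms.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)

# Predict 28-day strength for a new mix from its 7-day result and w/c ratio.
print(rsm.predict([[25.0, 0.46]]))
```

With the full data set, one would additionally check lack-of-fit and residual diagnostics before trusting the fitted surface for prediction.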

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 247
2388 Variable Renewable Energy Droughts in the Power Sector – A Model-Based Analysis and Implications in the European Context

Authors: Martin Kittel, Alexander Roth

Abstract:

The continuous integration of variable renewable energy sources (VRE) in the power sector is required for decarbonizing the European economy. Power sectors are becoming increasingly exposed to weather variability, as the availability of VRE, i.e., mainly wind and solar photovoltaics, is not persistent. Extreme events, e.g., long-lasting periods of scarce VRE availability (‘VRE droughts’), challenge the reliability of supply. Properly accounting for the severity of VRE droughts is crucial for designing a resilient renewable European power sector, and energy system modeling is used to identify such a design. Our analysis reveals the sensitivity of the optimal design of the European power sector to VRE droughts. We analyze how VRE droughts impact optimal power sector investments, especially in generation and flexibility capacity. We draw upon work that systematically identifies VRE drought patterns in Europe in terms of frequency, duration, and seasonality, as well as the cross-regional and cross-technological correlation of the most extreme drought periods. Based on their analysis, those authors provide a selection of relevant historical weather years representing different grades of VRE drought severity. These weather years serve as input for the capacity expansion model of the European power sector used in this analysis (DIETER). We additionally conduct robustness checks varying policy-relevant assumptions on capacity expansion limits, interconnection, and the level of sector coupling. Preliminary results illustrate how an imprudent selection of weather years may lead to underestimating the severity of VRE droughts, distorting modeling insights concerning the need for flexibility; sub-optimal European power sector designs vulnerable to extreme weather can result. Using relevant weather years that appropriately represent extreme weather events, our analysis identifies a resilient design of the European power sector. Although the scope of this work is limited to the European power sector, we are confident that our insights apply to other regions of the world with similar weather patterns. Many energy system studies still rely on one or a few, sometimes arbitrarily chosen, weather years. We argue that the deliberate selection of relevant weather years is imperative for robust modeling results.
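
As a hedged illustration of what identifying VRE droughts in a time series can look like, the Python sketch below flags contiguous runs of hours in which an aggregate wind-plus-solar capacity factor stays below a threshold. The threshold, minimum duration, and synthetic data are illustrative assumptions, not the definitions used in the paper or in DIETER.

```python
import numpy as np

def find_vre_droughts(capacity_factor, threshold=0.1, min_hours=48):
    """Return (start, end) hour-index pairs of contiguous runs in which
    the capacity factor stays below `threshold` for at least `min_hours`.
    Both parameters are illustrative choices, not the paper's definition."""
    events, start = [], None
    for t, below in enumerate(capacity_factor < threshold):
        if below and start is None:
            start = t
        elif not below and start is not None:
            if t - start >= min_hours:
                events.append((start, t))
            start = None
    if start is not None and len(capacity_factor) - start >= min_hours:
        events.append((start, len(capacity_factor)))
    return events

# Synthetic hourly capacity factors for one year, with one inserted drought
rng = np.random.default_rng(0)
cf = rng.beta(2, 3, size=8760)   # demonstration data only
cf[1000:1090] = 0.03             # a 90-hour low-availability spell
print(find_vre_droughts(cf))     # -> roughly [(1000, 1090)]
```

Applied per weather year, event statistics of this kind (frequency, duration, seasonality) are what allow ranking candidate weather years by drought severity before feeding them into a capacity expansion model.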

Keywords: energy systems, numerical optimization, variable renewable energy sources, energy drought, flexibility

Procedia PDF Downloads 54