Search results for: forecast aggregation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 752

62 The Aspect of the Digital Formation in the Solar Community as One Prototype to Find the Algorithmic Sustainable Conditions in the Global Environment

Authors: Kunihisa Kakumoto

Abstract:

Purpose: Environmental problems are now posed at a global scale. Sprawl beyond natural limits should be forecast in advance in an algorithmic way so that the conditions of our social life can be kept within those natural limits. Sustainable conditions for the globe must therefore be found that balance the capacity of nature against the demands of social life. The amount of water on Earth is limited; sustainable conditions therefore depend strongly on the capacity of the water supply. Water capacity can, in turn, be related to the area of green planting, because a certain volume of water is retained in forests where green planting is preserved. Sustainable water conditions can thus be expressed in relation to the green planting area, and green planting also makes a reduction of CO₂ possible. Possible Measures and Methods: The concept of the solar community as one prototype has been introduced in technical papers at many international conferences. An algorithmic trial calculation can be carried out on this basic concept, which is based on data collected from a solar model house. Using the algorithmic results for the prototype, a simulation for the globe can be performed as a set of algorithmically converted results. This simulation covers the amount of water in relation to the green planting area; in addition, the emission of CO₂ in the solar community and the reduction of CO₂ by green planting can be calculated. On the basis of these calculations for the solar community, sustainable conditions for the globe can be simulated algorithmically as conversion results. The digital formation of the solar community is also taken into consideration here. Conclusion: To find sustainable conditions for the globe, the solar community has been taken as one prototype. The role of water is central because the capacity of the water supply is very limited, yet at present the cycle of the social community is not organized around this natural mechanism. The simulative calculation in this study is bounded by the limitation of the total water supply; from it, the total water-supply capacity and the supportable population and residential area can be derived by algorithmic calculation. Green planting areas are essential both for securing enough water and for keeping the CO₂ balance, and the simulative calculation relates the emission and the reduction of CO₂ within the solar community. By recognizing the green planting area and the total amount of water through the algorithmic simulative calculation, the overall balance and the sustainable conditions can be found. The study of sustainable conditions can thus be performed by simulative calculations on the algorithmic model of the solar community as one prototype, and the prototype example can be kept in balance. The activity of social life must remain within the capacity of the natural mechanism, and the usable capacity of the natural environment in our world is very limited.
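
The core of the algorithmic calculation described above is a simple water and CO₂ balance between the green planting area and the population it can support. The sketch below is a minimal illustration of that idea only; the yield, demand, and emission coefficients are hypothetical placeholders, not values from the solar community prototype.

```python
# Minimal water/CO2 balance sketch for a prototype community.
# All coefficients are hypothetical placeholders, not data from the paper.

ANNUAL_WATER_YIELD_M3_PER_HA = 3000.0   # assumed water retained per hectare of green planting
PER_CAPITA_DEMAND_M3_PER_YEAR = 150.0   # assumed domestic water demand per resident
CO2_UPTAKE_T_PER_HA = 10.0              # assumed CO2 absorbed per hectare per year
CO2_EMISSION_T_PER_CAPITA = 4.0         # assumed CO2 emitted per resident per year

def supportable_population(green_area_ha: float) -> float:
    """Residents that the water retained by the green area can supply."""
    total_water = green_area_ha * ANNUAL_WATER_YIELD_M3_PER_HA
    return total_water / PER_CAPITA_DEMAND_M3_PER_YEAR

def co2_balance(green_area_ha: float, population: float) -> float:
    """Positive value: emissions exceed uptake; negative: uptake exceeds emissions."""
    return population * CO2_EMISSION_T_PER_CAPITA - green_area_ha * CO2_UPTAKE_T_PER_HA

if __name__ == "__main__":
    area = 500.0  # hectares of green planting (example input)
    pop = supportable_population(area)
    print(f"Supportable population: {pop:.0f}")
    print(f"CO2 balance at that population: {co2_balance(area, pop):.1f} t/year")
```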

Keywords: solar community, sustainable conditions, natural limitation, algorithmic calculation

Procedia PDF Downloads 110
61 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the permeability reduction observed in deep geothermal reservoirs due to fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). The first goal is to identify the clays responsible for clogging; to this end, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Based on these results, illite particles were prepared and used in core-flooding experiments in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of particle retention in the columns were obtained from Magnetic Resonance Imaging and X-ray Tomography, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
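
Since the retention mechanisms are interpreted within DLVO theory, a brief numerical illustration may help. The sketch below evaluates the classical sphere-sphere DLVO interaction energy (van der Waals attraction plus electric double-layer repulsion, retardation neglected) for illite-sized particles at two ionic strengths. The Hamaker constant, surface potential, permittivity and temperature are illustrative assumptions, not measured values from this study.

```python
import numpy as np

# Physical constants
KB = 1.380649e-23           # J/K
E_CHARGE = 1.602176634e-19  # C
NA = 6.02214076e23          # 1/mol
EPS0 = 8.8541878128e-12     # F/m

# Illustrative parameters (assumptions, not measured values from the study)
T = 343.0        # K, ~70 degC geothermal brine
EPS_R = 63.0     # approximate relative permittivity of water at ~70 degC
RADIUS = 50e-9   # m, ~100 nm hydrodynamic diameter -> 50 nm radius
HAMAKER = 1e-20  # J, order of magnitude for clay in water
PSI0 = -0.030    # V, assumed surface potential

def debye_kappa(ionic_strength_mol_per_L: float) -> float:
    """Inverse Debye length (1/m) for a 1:1 electrolyte."""
    I = ionic_strength_mol_per_L * 1000.0  # mol/m^3
    return np.sqrt(2 * NA * E_CHARGE**2 * I / (EPS0 * EPS_R * KB * T))

def dlvo_energy(h: np.ndarray, ionic_strength: float) -> np.ndarray:
    """Total sphere-sphere DLVO energy (J) vs. surface separation h (m)."""
    kappa = debye_kappa(ionic_strength)
    v_vdw = -HAMAKER * RADIUS / (12.0 * h)  # van der Waals attraction
    v_edl = 2 * np.pi * EPS0 * EPS_R * RADIUS * PSI0**2 * np.log1p(np.exp(-kappa * h))
    return v_vdw + v_edl

h = np.logspace(-10, -7, 200)   # separations from 0.1 nm to 100 nm
for I in (0.001, 0.1):          # low vs. high ionic strength (mol/L)
    barrier_kT = dlvo_energy(h, I).max() / (KB * T)
    print(f"I = {I} M: max energy barrier ~ {barrier_kT:.1f} kT")
```

With these assumed values, the energy barrier collapses from tens of kT to roughly kT when the ionic strength rises, which is consistent with the observed aggregation and permeability decline at higher salinity.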

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 176
60 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decisions of Emergency Medical Services (EMS) regarding the assignment of medical emergency requests to Emergency Departments (EDs). In the literature, this problem is also known as “hospital selection” and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology starts with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. From the literature analysis, it emerged that current studies focus mainly on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request. All ED-related issues are therefore excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes transport time and releases the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that considers information on the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research consisted of developing a general simulation architecture, implementing it in the AnyLogic software, and validating it on a realistic dataset. The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the ED until the beginning of the clinical evaluation by the doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP obtained with a constantly updated Winters model. Findings reveal that adopting the minimization of TTP as a hospital selection policy brings several benefits. It significantly reduces service throughput times in the ED with only a minimal increase in travel time. Furthermore, it provides an immediate view of the saturation state of the EDs and accounts for the case-mix present in the ED structures (i.e., the different triage codes), as different severity codes correspond to different service throughput times. Moreover, a predictive approach is more reliable for TTP estimation than a retrospective one, although it is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
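
To make the dynamic policy concrete, the sketch below shows how a constantly updated Winters (triple exponential smoothing) forecast of the ED waiting component of TTP could drive selection of the destination ED. It is a minimal illustration using statsmodels' ExponentialSmoothing; the ED names, waiting-time histories, travel times, and seasonal period are invented placeholders, not the study's dataset or its exact model.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)

# Hypothetical hourly door-to-doctor waiting times (minutes) for three EDs,
# with a daily seasonal pattern -- placeholders, not the study's dataset.
hours = np.arange(24 * 14)
def synthetic_wait(base, amp):
    return base + amp * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

ed_wait_history = {
    "ED_A": synthetic_wait(45, 15),
    "ED_B": synthetic_wait(60, 10),
    "ED_C": synthetic_wait(35, 25),
}
travel_minutes = {"ED_A": 12.0, "ED_B": 7.0, "ED_C": 20.0}  # assumed travel times

def predicted_ttp(history, travel):
    """One-step-ahead Winters forecast of the waiting time, plus travel time."""
    model = ExponentialSmoothing(history, trend="add",
                                 seasonal="add", seasonal_periods=24)
    fit = model.fit(optimized=True)
    return travel + float(fit.forecast(1)[0])

def select_hospital():
    """Dynamic policy: send the ambulance to the ED with the lowest predicted TTP."""
    scores = {ed: predicted_ttp(ed_wait_history[ed], travel_minutes[ed])
              for ed in ed_wait_history}
    return min(scores, key=scores.get), scores

best, scores = select_hospital()
print("Predicted TTP (min):", {k: round(v, 1) for k, v in scores.items()})
print("Selected ED:", best)
```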

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 90
59 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
58 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies for space-based agromonitoring support operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum, but time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has therefore been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information on the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and the characteristics of the agricultural land state (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) was created, and crop spectral signatures were calculated with preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used to track grain crop growth and to detect, in a timely manner, deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecast are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are obtained with an accuracy of up to 95%. The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results lead to the conclusion that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also successfully separates soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
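
As a minimal illustration of the statistical yield-forecast component, the sketch below fits a linear model relating yield to seasonal crop-state characteristics (a vegetation index, mean temperature, and cumulative precipitation). The feature set, the synthetic numbers, and the use of scikit-learn are illustrative assumptions; the operational technology may use different predictors and model forms.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(42)

# Hypothetical per-field season aggregates: NDVI, mean temperature (degC),
# cumulative precipitation (mm) -- placeholders, not the catalogue data.
n = 200
ndvi = rng.uniform(0.3, 0.8, n)
temp = rng.uniform(12, 24, n)
precip = rng.uniform(150, 500, n)
X = np.column_stack([ndvi, temp, precip])

# Synthetic "true" yield (t/ha) with noise, just to exercise the pipeline.
yield_t_ha = 1.5 + 6.0 * ndvi + 0.05 * temp + 0.004 * precip + rng.normal(0, 0.3, n)

# Fit on the first 150 fields, validate on the remaining 50.
model = LinearRegression().fit(X[:150], yield_t_ha[:150])
pred = model.predict(X[150:])
mape = mean_absolute_percentage_error(yield_t_ha[150:], pred)
print(f"Coefficients: {model.coef_}, intercept: {model.intercept_:.2f}")
print(f"Validation accuracy ~ {100 * (1 - mape):.1f}%")  # cf. the ~95% reported above
```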

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 456
57 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal properties of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form a so-called nano-heat-transfer-fluid, apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, in which the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and the particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms form the basis of the predictive model of aggregation of nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique, and the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising a repulsive electric double-layer contribution and an attractive van der Waals contribution) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were, in turn, found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
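
A minimal sketch of the deterministic part of such a Lagrangian tracking scheme is given below: a fourth-order Runge-Kutta step integrating a single nanoparticle's equation of motion under Stokes drag and a DLVO-type force from a fixed neighbour. The fluid viscosity, DLVO parameters, and time step are illustrative assumptions; Brownian motion and soft-sphere collisions, which the full model includes, are omitted here for brevity.

```python
import numpy as np

# Illustrative parameters (assumptions, not the study's calibrated values)
RADIUS = 25e-9                 # m, 50 nm Al2O3 particle -> 25 nm radius
RHO_P = 3950.0                 # kg/m^3, alumina density
MU = 1.2e-3                    # Pa*s, assumed viscosity of the carrier fluid
MASS = RHO_P * 4.0 / 3.0 * np.pi * RADIUS**3
HAMAKER = 4e-20                # J, assumed Hamaker constant
EDL_PREF = 1e-19               # J, assumed electric double-layer prefactor
KAPPA = 1e8                    # 1/m, assumed inverse Debye length
DT = 1e-11                     # s, time step as in the study

def force(x, v):
    """Stokes drag in a quiescent fluid plus DLVO force from a fixed neighbour at x = 0."""
    h = max(x - 2 * RADIUS, 1e-10)                 # surface-to-surface separation
    f_drag = -6.0 * np.pi * MU * RADIUS * v
    f_vdw = -HAMAKER * RADIUS / (12.0 * h**2)      # attraction (toward the neighbour)
    f_edl = EDL_PREF * KAPPA * np.exp(-KAPPA * h)  # repulsion (away from the neighbour)
    return f_drag + f_vdw + f_edl

def rk4_step(x, v, dt=DT):
    """Fourth-order Runge-Kutta update of position and velocity."""
    def deriv(xi, vi):
        return vi, force(xi, vi) / MASS
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    x_new = x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v_new = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x_new, v_new

x, v = 60e-9, 0.0   # start 10 nm surface separation from the neighbour, at rest
for _ in range(10000):
    x, v = rk4_step(x, v)
print(f"Separation after 0.1 microsecond: {(x - 2 * RADIUS) * 1e9:.2f} nm")
```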

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 191
56 Atypical Intoxication Due to Fluoxetine Abuse with Symptoms of Amnesia

Authors: Ayse Gul Bilen

Abstract:

Selective serotonin reuptake inhibitors (SSRIs) are commonly prescribed antidepressants used clinically for the treatment of anxiety disorders, obsessive-compulsive disorder (OCD), panic disorder and eating disorders. The first SSRI, fluoxetine (sold under the brand names Prozac and Sarafem, among others), had an adverse-effect profile better than any other available antidepressant when it was introduced, owing to its selectivity for serotonin receptors. SSRIs have been considered almost free of side effects and have become widely prescribed; however, questions about their safety and tolerability have emerged with continued use. Most SSRI side effects are dose-related and can be attributed to serotonergic effects such as nausea. Continuous use may trigger adverse effects such as hyponatremia, tremor, nausea, weight gain, sleep disturbance and sexual dysfunction. In cases of intentional overdose, moderate toxicity can be safely observed in the hospital for 24 hours, and mild cases can be safely discharged from the emergency department (if asymptomatic) once cleared by Psychiatry and after 6 to 8 hours of observation. Although fluoxetine is relatively safe in overdose, it can still be cardiotoxic and can inhibit platelet secretion, aggregation, and plug formation. Clinical cases of seizures, cardiac conduction abnormalities, and even fatalities associated with fluoxetine ingestion have been reported. While the medical literature strongly suggests that most fluoxetine overdoses are benign, emergency physicians need to remain cognizant that intentional, high-dose fluoxetine ingestions may induce seizures and can even be fatal due to cardiac arrhythmia. Our case is a 35-year-old female patient who was brought to the ER with confusion, amnesia and loss of orientation to time and place after being found by the police wandering the streets in a disoriented state; the police notified 112. On laboratory examination, no pathological finding was noted except sinus tachycardia on the EKG and high levels of aspartate transaminase (AST) and alanine transaminase (ALT). Diffusion MRI and computed tomography (CT) of the brain were normal. On physical and sexual examination, no signs of abuse or trauma were found. Test results for narcotics, stimulants and alcohol were negative as well. The dysrhythmia required admission to the intensive care unit (ICU). The patient regained full consciousness after 24 hours. It was learned from her history afterward that she had been using fluoxetine for post-traumatic stress disorder (PTSD) for 6 months and that she had attempted suicide by taking 3 boxes of fluoxetine following the loss of a parent. She was then transferred to the psychiatric clinic. Our study aims to highlight the need to consider toxicologic drug use, in particular the abuse of selective serotonin reuptake inhibitors (SSRIs), which are widely prescribed due to presumed safety and tolerability, in the diagnosis of patients presenting to the emergency room (ER).

Keywords: abuse, amnesia, fluoxetine, intoxication, SSRI

Procedia PDF Downloads 198
55 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Obtaining unbiased data on the status of pipeline systems for the transportation of oil and oil products becomes especially important when pipelines are laid and operated under severe natural and climatic conditions. Particular attention is paid here to exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of the pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main causes of emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. The specifics of such systems include multi-dimensional simulation of the facilities of the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, and data from in-line inspection and instrument measurements. The resulting 3D model becomes the basis of an information system that provides the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 189
54 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification

Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado

Abstract:

The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment, whereas DNA biosensors have the advantage of being simple, fast and economical. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in hybridization of the strands. DNA biosensors are of value not only in the clinical field but also in numerous research areas such as food analysis and environmental control. The zebra mussel (ZM), Dreissena polymorpha, is an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected only when adults or macroscopic larvae are observed; at that stage, however, it is too late to avoid the harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released to the environment by this invasive species. The plasmonic DNA biosensor uses gold nanoparticles as transducer elements, owing to their excellent optical properties and high sensitivity. The detection strategy is based on the immobilization of a short base-pair DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) used to detect zebra mussel were designed with the Geneious software in order to maximize specificity. Moreover, to increase the optical response, enzymatic amplification of the DNA may be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a localized surface plasmon resonance (LSPR) peak at around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded on the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads either to nanoparticle dispersion or to aggregation, depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should be inexpensive and portable. Sensing devices based on specific DNA detection hold great potential and can be exploited for in-loco sensing applications.

Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles

Procedia PDF Downloads 245
53 Smart Architecture and Sustainability in the Built Environment for the Hatay Refugee Camp

Authors: Ali Mohammed Ali Lmbash

Abstract:

The global refugee crisis points to the vital need for sustainable and resilient solutions to the many problems faced by displaced persons all over the world. Among the myriad sustainability concerns are energy consumption, waste management, water access, and the resilience of structures. Our research aims to develop distinct ideas for sustainable architecture in disaster-threatened areas, starting with the Hatay refugee camp in Turkey, where the majority of the camp dwellers are Syrian refugees. Building on community-based participatory research focused on the socio-environmental issues of displaced populations, this study applies two approaches with a specific focus on the Hatay region. The first experiment uses Richter's predictive model and simulations to forecast earthquake outcomes in refugee camps. The results could be useful in implementing architectural design tactics that enhance structural reliability and ensure the security and safety of shelters during earthquakes. In the second experiment, a model is generated to predict the quality of existing water sources, given how vital water is to human well-being. This research aims to enable camp administrators to adopt forward-looking practices in managing water resources, thus minimizing health risks and building the resilience of refugees in the Hatay area. Beyond these experiments, the research assesses other sustainability problems of the Hatay refugee camp. As energy consumption becomes a major issue, housing developers are required to consider energy-efficient designs as well as the feasible integration of renewable energy technologies to minimize environmental impact and improve the long-term sustainability of housing projects. Waste management is given special attention through recycling initiatives and waste reduction measures intended to slow environmental degradation in the camp area. The study also gives insight into the social and economic reality of the camp, investigating the contribution of initiatives such as urban agriculture and vocational training to livelihoods and community empowerment. In this way, the study combines the latest research with practical experience in order to contribute to the continuing discussion on sustainable architecture in disaster relief, providing recommendations and information that can be adapted at any scale worldwide. Through collaborative efforts and a dedicated approach to sustainability, we can jointly get to the root of the problem and work towards a far more robust and equitable society.

Keywords: smart architecture, Hatay Camp, sustainability, machine learning

Procedia PDF Downloads 54
52 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediment carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the amount of sedimentation in dam reservoirs and to estimate their useful lifetime, but mathematical and computational models are now widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the governing equations using the finite element method. This study compares the results of two software packages, GSTARS4 and HEC-6, in predicting the amount of sedimentation in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the momentum, flow continuity, sediment continuity, and sediment transport equations. GSTARS4 (Generalized Sediment Transport model for Alluvial River Simulation) is a one-dimensional mathematical model that simulates bed changes in both the longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was calibrated over a 47-year period and used to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land, and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sediment data, cover the period from 1955 to 2003 on a daily basis. To predict future river discharge, the time-series data were assumed to repeat after 47 years. The result obtained in the delta region was very satisfactory, with the GSTARS4 output almost identical to the 2003 hydrographic profile. In the Dez reservoir, however, because of its length (65 km) and large volume, vertical currents are dominant, making calculations with the above-mentioned method inaccurate. To solve this problem, we used the empirical reduction method to calculate sedimentation in the downstream area, which gave very good results. We thus demonstrated that by combining these two methods a very suitable model of sedimentation in the Dez Dam for the study period can be obtained. The present study also showed that the outputs of both methods are the same.
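
As a rough illustration of the kind of one-dimensional sediment-continuity computation involved in such studies, the sketch below advances the Exner equation with an explicit upwind finite-difference scheme for a reach flowing into a reservoir with a fixed water surface. The transport law (a simple power function of velocity), porosity, geometry, and discharge are illustrative assumptions; they are not the calibrated Dez Dam inputs, and the scheme is a simplification of what GSTARS4 or HEC-6 actually do.

```python
import numpy as np

# Illustrative 1D reach entering a reservoir (assumptions, not Dez Dam data)
NX, DX = 200, 500.0          # 200 nodes, 500 m spacing (100 km reach)
DT = 86400.0                 # s, daily time step
POROSITY = 0.4               # bed-sediment porosity
WIDTH = 200.0                # m, channel width
DISCHARGE = 400.0            # m^3/s, steady water discharge
WS_ELEV = 125.0              # m, fixed reservoir water-surface elevation
A_COEF, B_EXP = 1e-4, 3.0    # hypothetical transport law: qs = a * u**b (per unit width)

z0 = np.linspace(120.0, 100.0, NX)   # initial bed profile (m), sloping into the reservoir
z = z0.copy()

def exner_step(z):
    """One explicit upwind step of (1 - p) dz/dt + dqs/dx = 0."""
    depth = np.maximum(WS_ELEV - z, 0.5)      # flow depth below the fixed water surface
    u = DISCHARGE / (WIDTH * depth)           # mean velocity at each node
    qs = A_COEF * u**B_EXP                    # unit-width sediment transport rate (m^2/s)
    dqs_dx = np.zeros_like(z)
    dqs_dx[1:] = (qs[1:] - qs[:-1]) / DX      # upwind difference (flow from left to right)
    return z - DT / (1.0 - POROSITY) * dqs_dx

for _ in range(365 * 10):                     # ten years of deposition
    z = exner_step(z)
print(f"Maximum aggradation after 10 years: {(z - z0).max():.3f} m")
```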

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 313
51 Ways for University to Conduct Research Evaluation: Based on National Research University Higher School of Economics Example

Authors: Svetlana Petrikova, Alexander Yu Kostinskiy

Abstract:

The management of research evaluation at the Higher School of Economics (HSE) originates from the HSE Academic Fund, created in 2004 to facilitate and support academic research and to present its results to the international academic community. To motivate applicants, science projects go through a competitive selection process evaluated by a group of experts. The rapid development of HSE, the number of projects submitted to each Academic Fund competition, and the need to coordinate expert evaluation led to the founding of the Office for Research Evaluation in 2013. The Office's primary objective is the management of the research evaluation of science projects. The standards for conducting the evaluation are defined as follows: - The use of the process approach and the unification of the department's functioning. - The uniformity of the regulatory, organizational and methodological framework. - The development of a proper on-line evaluation system. - The broad involvement of external Russian and international experts and the renunciation of the use of the university's own employees. - The development of an algorithm for matching experts to science projects. - The methodical use of open and closed international and Russian databases to extend the expert database. - The transparency of evaluation results: free access to assessments while keeping experts' confidentiality. The management of the research evaluation of projects rests on a single standard, organization and financing. The standard way of conducting research evaluation at HSE is based on the Regulations on basic principles for research evaluation at HSE. These Regulations have been developed since the establishment of the Office for Research Evaluation and are based on conventional corporate standards for regulatory document management. The management system of research evaluation is implemented on the basis of the process approach. The process approach means treating work as a process, that is, an aggregation of interrelated and interacting activities processing inputs into outputs. The inputs are, first, the client asking for the assessment to be conducted and defining the conditions for organizing and carrying it out and, second, the applicant with an application suitable for the competition; the output is the assessment delivered to the client. In exercising the process approach, the main parties or subjects of the assessment and the ways in which they interact are determined. The parties to the expert assessment are: - The Ordering Party: the department of the university taking the decision to subject a project to expert assessment; - The Providing Party: the department of the university authorized by the Ordering Party to provide such assessment; - The Performing Party: the legal and natural entities that have expertise in the area of research evaluation. Experts assess projects in accordance with the criteria and forms of expert opinion approved by the Ordering Party. The objects of assessment are generally applications or HSE competition project reports. Assessments are mainly carried out for internal needs, i.e., most ordering parties are HSE branches and departments, but assessments can also be conducted for external clients. The financing of research evaluation at HSE is based on the established corporate culture and traditions of HSE.

Keywords: expert assessment, management of research evaluation, process approach, research evaluation

Procedia PDF Downloads 253
50 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working to reduce energy losses and greenhouse gas emissions and is seeking clean energy sources in response to the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work in the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network is 9,409 km long, and the system has 16 national and 3 international natural gas processing plants, with more than 143 delivery points to final consumers. Thus, the potential for installing PRT in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. The economic viability analysis of such energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, yielding a deterministic set of financial indicators for the project. However, in most cases these variables cannot be predicted with sufficient accuracy, making it necessary to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, yielding a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
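
A minimal sketch of the Monte Carlo step of such an analysis is shown below: uncertain inputs (generated energy, electricity price, capital cost) are sampled, a discounted cash flow is computed for each draw, and empirical risk indicators are extracted from the resulting net-present-value distribution. All distributions and figures are illustrative assumptions, not the Cuiabá project parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000                     # Monte Carlo draws
YEARS = 15                     # project horizon (years)
DISCOUNT = 0.10                # annual discount rate

# Illustrative uncertain inputs (placeholders, not the study's data)
energy_mwh = rng.normal(8000, 1200, N)               # MWh/year generated by the PRT
price = rng.lognormal(np.log(60), 0.25, (N, YEARS))  # USD/MWh, varying per year
capex = rng.triangular(2.5e6, 3.0e6, 4.0e6, N)       # USD, installation cost
opex = 0.03 * capex                                  # USD/year, assumed O&M fraction

# Discounted cash flow: annual revenue minus O&M, minus capital cost up front
years = np.arange(1, YEARS + 1)
discount_factors = (1 + DISCOUNT) ** -years
revenue = energy_mwh[:, None] * price                # USD/year per draw
npv = (revenue - opex[:, None]) @ discount_factors - capex

print(f"Mean NPV: {npv.mean():,.0f} USD")
print(f"P(NPV < 0): {np.mean(npv < 0):.1%}")
print(f"5th / 95th percentile NPV: {np.percentile(npv, 5):,.0f} / {np.percentile(npv, 95):,.0f} USD")
```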

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods

Procedia PDF Downloads 113
49 Application of Low Frequency Ac Magnetic Field for Controlled Delivery of Drugs by Magnetic Nanoparticles

Authors: K. Yu Vlasova, M. A. Abakumov, H. Wishwarsao, M. Sokolsky, N. V. Nukolova, A. G. Majouga, Y. I. Golovin, N. L. Klyachko, A. V. Kabanov

Abstract:

Introduction: Pharmaceutical medicine nowadays aims to create systems for combined therapy, diagnostics, drug delivery and controlled release of active molecules to target cells. Magnetic nanoparticles (MNPs) are used to achieve this aim. MNPs can be applied in molecular diagnostics, magnetic resonance imaging (as T1/T2 contrast agents), drug delivery and hyperthermia, and could improve the therapeutic effect of drugs. The most common MNP-containing drug carriers are liposomes, micelles and polymeric molecules bonded to the MNP surface. Usually, superparamagnetic nanoparticles are used (typical diameter about 5-6 nm), and all effects of high-frequency magnetic field (MF) application are based on Néel relaxation, resulting in heating of the surrounding medium. In this work, we try to develop a new method to improve drug release from MNPs under a super-low-frequency MF. We suppose that under low-frequency MF exposure Brownian relaxation dominates and MNP rotation can occur, leading to conformational changes and release of bioactive molecules immobilized on the MNP surface. The aim of this work was to synthesize different systems carrying an active drug (biopolymer-coated MNP nanoclusters with immobilized enzymes, and doxorubicin (Dox)-loaded magnetic liposomes/micelles) and to investigate the effect of a super-low-frequency MF on these drug containers. Methods: We synthesized magnetite MNPs with a magnetic core diameter of 7-12 nm. The MNPs were coated with a block copolymer of polylysine and polyethylene glycol. Superoxide dismutase 1 (SOD1) was electrostatically adsorbed on the surface of the clusters. Liposomes were prepared as follows: MNPs, phosphatidylcholine and cholesterol were dispersed in chloroform, dried to a film and then dispersed in distilled water and sonicated. Dox was added to the solution, the pH was adjusted to 7.4, and excess drug was removed by centrifugation through 3 kDa filters. Results: Polylysine-coated MNPs formed nanosized clusters (as observed by TEM) with an intensity-average diameter of 112±5 nm and a zeta potential of 12±3 mV. After low-frequency AC MF exposure, we observed changes in the activity of the immobilized enzyme and in the hydrodynamic size of the clusters. We suppose that the biomolecules (enzymes) are released from the MNP surface, followed by additional aggregation of the complexes in the medium under the MF. Centrifugation of the nanosuspension after AC MF exposure resulted in an increase in the positive charge of the clusters and a change in enzyme concentration in comparison with the control sample without MF, thus confirming desorption of the negatively charged enzyme from the positively charged MNP surface. Dox-loaded magnetic liposomes had an average diameter of 160±8 nm and a polydispersity index (PDI) of 0.25±0.07. The liposomes were stable in distilled water and PBS at pH 7.4 and 37 °C for a week. After MF application (10 min of exposure, 50 Hz, 230 mT), the diameter of the liposomes rose to 190±10 nm and the PDI was 0.38±0.05. We explain this by destruction and/or reorganization of the lipid bilayer, which changes drug release in comparison with the control without MF exposure. Conclusion: A new application of a low-frequency AC MF for drug delivery and controlled drug release has been shown. The investigation was supported by grants RSF-14-13-00731 and K1-2014-022.

Keywords: magnetic nanoparticles, low frequency magnetic field, drug delivery, controlled drug release

Procedia PDF Downloads 481
48 Foodborne Outbreak Calendar: Application of Time Series Analysis

Authors: Ryan B. Simpson, Margaret A. Waskow, Aishwarya Venkat, Elena N. Naumova

Abstract:

The Centers for Disease Control and Prevention (CDC) estimate that 31 known foodborne pathogens cause 9.4 million cases of foodborne illness annually in the US. Over 90% of these illnesses are associated with exposure to Campylobacter, Cryptosporidium, Cyclospora, Listeria, Salmonella, Shigella, Shiga-toxin-producing E. coli (STEC), Vibrio, and Yersinia. Contaminated products contain pathogens that typically cause an intestinal illness manifested by diarrhea, stomach cramping, nausea, weight loss and fatigue, and may result in death in fragile populations. Since 1998, the National Outbreak Reporting System (NORS) has allowed routine collection of suspected and laboratory-confirmed cases of food poisoning. While retrospective analyses have revealed common pathogen-specific seasonal patterns, little is known about the stability of those patterns over time and whether they can be used for preventative forecasting. The objective of this study is to construct a calendar of foodborne outbreaks of nine infections based on the peak timing of outbreak incidence in the US from 1996 to 2017. Reported cases were abstracted from FoodNet for Salmonella (135,115), Campylobacter (121,099), Shigella (48,520), Cryptosporidium (21,701), STEC (18,022), Yersinia (3,602), Vibrio (3,000), Listeria (2,543), and Cyclospora (758). Monthly counts were compiled for each agent, seasonal peak timing and peak intensity were estimated, and the stability of seasonal peaks and the synchronization of infections were examined. Negative binomial harmonic regression models with the delta method were applied to derive confidence intervals for the peak timing for each year and for the overall study period. Preliminary results indicate that five infections continue to lead as major causes of outbreaks, exhibiting steady upward trends with annual increases in cases ranging from 2.71% (95%CI: [2.38, 3.05]) for Campylobacter, 4.78% (95%CI: [4.14, 5.41]) for Salmonella, 7.09% (95%CI: [6.38, 7.82]) for E. coli, and 7.71% (95%CI: [6.94, 8.49]) for Cryptosporidium, to 8.67% (95%CI: [7.55, 9.80]) for Vibrio. Strong synchronization of summer outbreaks was observed for Campylobacter, Vibrio, E. coli and Salmonella, peaking at 7.57 ± 0.33, 7.84 ± 0.47, 7.85 ± 0.37, and 7.82 ± 0.14 calendar months, respectively, with serial cross-correlations ranging from 0.81 to 0.88 (p < 0.001). Over the 21 years, Listeria and Cryptosporidium peaks (8.43 ± 0.77 and 8.52 ± 0.45 months, respectively) have tended to arrive 1-2 weeks earlier, while Vibrio peaks (7.8 ± 0.47) have been delayed by 2-3 weeks. These findings will be incorporated into the forecast models to predict common paths of spread, long-term trends, and the synchronization of outbreaks across etiological agents. Predictive modeling of foodborne outbreaks should consider long-term changes in seasonal timing, spatiotemporal trends, and sources of contamination.
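
A minimal sketch of the peak-timing estimation is given below: monthly counts are fit with a negative binomial harmonic regression (a linear trend plus annual sine and cosine terms), and the seasonal peak month is recovered from the two harmonic coefficients. The simulated counts and dispersion parameter are placeholders; the study's FoodNet/NORS data and the delta-method confidence intervals are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated monthly counts over 22 years with an upward trend and a late-July peak
months = np.arange(1, 12 * 22 + 1)
t = months / 12.0
true_peak_month = 7.8
mu = np.exp(3.0 + 0.04 * t + 0.6 * np.cos(2 * np.pi * (months - true_peak_month) / 12.0))
counts = rng.poisson(mu)                       # placeholder for reported case counts

# Harmonic regression design: intercept, trend, annual sine and cosine
X = sm.add_constant(np.column_stack([
    t,
    np.sin(2 * np.pi * months / 12.0),
    np.cos(2 * np.pi * months / 12.0),
]))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.1))
fit = model.fit()

_, b_trend, b_sin, b_cos = fit.params
annual_increase = (np.exp(b_trend) - 1) * 100                 # % cases added per year
peak = (12.0 / (2 * np.pi)) * np.arctan2(b_sin, b_cos) % 12   # calendar month of the peak
print(f"Estimated annual increase: {annual_increase:.2f}%")
print(f"Estimated peak timing: {peak:.2f} calendar months (true ~ {true_peak_month})")
```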

Keywords: foodborne outbreak, national outbreak reporting system, predictive modeling, seasonality

Procedia PDF Downloads 128
47 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations

Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis

Abstract:

In-flight devices are nowadays major actors in both the military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, drones have become increasingly sophisticated, and it is of prime importance today to develop ever more efficient defensive systems against all these potential threats. In this context, recent high-energy laser (HEL) weapon prototypes have demonstrated extremely good operational ability to shoot down, within seconds, flying targets several kilometers away. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for the specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at the velocity of light over long ranges. In recent years, a strong focus was placed at ISL on the interaction of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. These aerial systems are usually made of polymer-based composite materials, whose complexity raises new scientific challenges. Alongside this main laser-matter interaction activity, a great deal of experimental and numerical knowledge has been gathered at ISL in domains such as spectrometry, thermodynamics and mechanics. Techniques and devices were developed to study each aspect of the topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to estimate the essential key values accurately. Results from these diverse tasks are then incorporated into analytic or finite-element numerical models elaborated, for example, to predict the thermal repercussions on explosive charges or the mechanical failure of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
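
The kind of thermal model mentioned above can be illustrated with a very small sketch: explicit one-dimensional heat conduction through a metal wall with an absorbed laser flux at the front face. The material properties, absorptivity, and beam irradiance are generic assumptions and do not represent ISL's models or any specific target.

```python
import numpy as np

# Generic material and beam parameters (assumptions, not ISL data)
L = 0.01                 # m, wall thickness
NX = 100                 # grid nodes
DX = L / (NX - 1)
K = 45.0                 # W/(m*K), thermal conductivity (steel-like)
RHO, CP = 7800.0, 470.0  # kg/m^3, J/(kg*K)
ALPHA = K / (RHO * CP)   # thermal diffusivity
ABSORPTIVITY = 0.4       # fraction of incident laser power absorbed
FLUX = 1.0e6             # W/m^2, incident laser irradiance
DT = 0.4 * DX**2 / ALPHA # explicit stability limit (Fourier number 0.4 < 0.5)

T = np.full(NX, 293.0)   # initial temperature field (K)

def step(T):
    """One explicit FTCS step with laser flux at node 0 and an insulated back face."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + ALPHA * DT / DX**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Front face: conduction into the wall plus the absorbed laser flux
    Tn[0] = T[0] + 2 * ALPHA * DT / DX**2 * (T[1] - T[0]) \
            + 2 * DT * ABSORPTIVITY * FLUX / (RHO * CP * DX)
    Tn[-1] = Tn[-2]      # insulated back face
    return Tn

t, t_end = 0.0, 2.0      # simulate 2 s of irradiation
while t < t_end:
    T = step(T)
    t += DT
print(f"Front-face temperature after {t_end} s: {T[0]:.0f} K")
```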

Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling

Procedia PDF Downloads 263
46 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation

Authors: Zamira Sattinova, Tassybek Bekenov

Abstract:

The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages described are ultrasonic activation of the beryllium oxide slurry in the plant vessel to improve its rheological properties, followed by hot casting in the moulding cavity with cooling and solidification of the casting. The thermoplastic slurry (hereinafter referred to as slurry) exhibits the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity are treated through the liquid, crystallizing, and solid states. In this work, the hot casting of the slurry is calculated using the effective molecular viscosity method for a viscoplastic fluid. It is shown that the slurry near the cooled wall can be in a state of crystallization and plasticity while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density and concentration of kinetically free binder occurs over the cavity cross-section. This leads to compensation of shrinkage by the influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage determined by the concentration of kinetically free binder is compensated under the action of the pressure gradient. The solidification mechanism, the mechanical behavior of the casting mass during casting, and the changes in the rheological and thermophysical properties of the thermoplastic BeO slurry due to ultrasound exposure have not been well studied. Nevertheless, experimental data allow us to conclude that the effect of ultrasonic vibrations on the slurry mass leads to a change in structure, an increase in technological properties, a decrease in heterogeneity and a change in rheological properties. In the course of the experiments, the effect of ultrasonic treatment and its duration on the change in viscosity and ultimate shear stress of the slurry, depending on temperature (55-75 ℃) and the mass fraction of the binder (10-11.7%), has been studied. Changes in these properties before and after ultrasound exposure have been analyzed, as well as the nature of the flow in the system under study. Experience in operating the unit with ultrasonic treatment has shown that the casting capacity of the slurry increases by an average of 15% while the viscosity decreases by more than half. An experimental study of the physicochemical properties and phase change with simultaneous consideration of all factors affecting product quality in the continuous casting process is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the whole hot casting process of the beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature and the geometry of the cavity on the cooling-solidification process of the casting. In the calculations, molding conditions that accommodate the shrinkage of the slurry during hot casting have been found, which make it possible to obtain a solidifying product with a uniform beryllium oxide structure at the outlet of the cavity.
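
Since the computation relies on an effective viscosity for a viscoplastic (Bingham-type) slurry, a small sketch of that ingredient is given below, using the Papanastasiou regularization so that the effective viscosity stays finite at low shear rates. The yield stress, plastic viscosity, temperature dependence, and the assumed effect of ultrasound are illustrative assumptions rather than measured BeO slurry properties.

```python
import numpy as np

# Illustrative Bingham parameters for the slurry (assumptions, not measured BeO data)
def bingham_params(temp_c: float, ultrasound: bool):
    """Yield stress (Pa) and plastic viscosity (Pa*s) vs. temperature, with an
    assumed lower yield stress and ~50% viscosity drop after ultrasonic treatment."""
    tau_y = 40.0 * np.exp(-0.03 * (temp_c - 55.0))   # decreases with temperature
    mu_p = 6.0 * np.exp(-0.02 * (temp_c - 55.0))
    if ultrasound:
        tau_y *= 0.7
        mu_p *= 0.5
    return tau_y, mu_p

def effective_viscosity(shear_rate, temp_c, ultrasound=False, m=1000.0):
    """Papanastasiou-regularized effective viscosity of a Bingham fluid:
    mu_eff = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot."""
    tau_y, mu_p = bingham_params(temp_c, ultrasound)
    gd = np.maximum(shear_rate, 1e-9)
    return mu_p + tau_y * (1.0 - np.exp(-m * gd)) / gd

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])      # 1/s
for us in (False, True):
    mu = effective_viscosity(shear_rates, temp_c=65.0, ultrasound=us)
    label = "with ultrasound" if us else "no ultrasound"
    print(label, np.round(mu, 2), "Pa*s")
```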

Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide

Procedia PDF Downloads 23
45 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

A gas-turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria, to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet due to variations in weather conditions have negative effects on the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper seeks to evaluate how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also considers the effect of varying aerodynamic and hydrodynamic conditions that degrade the efficient operation of the propulsion system through the increase in resistance resulting from projected levels of hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and to cover a distance of 5,787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring and summer seasons. Real-time daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output and total fuel consumed per voyage, among other performance variables, were predicted separately under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, and the Beaufort scale was adopted to determine the magnitude of sea waves resulting from rough weather. The simulation of the gas turbine performance and the voyage analysis were carried out using the integrated Cranfield University computer codes 'Turbomatch' and 'Poseidon'. The project aims to develop a method for predicting the off-design behavior of a marine gas turbine installed and operated as the main prime mover for both propulsion and the powering of all other auxiliary services on board a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
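
As a simple illustration of why ambient conditions matter, the sketch below estimates the first-order change in available power from the change in compressor-inlet air density relative to the reference conditions of 15 °C and 1.013 bar. This crude density scaling and the rated power are assumptions used only for illustration; the study itself relies on the full Turbomatch off-design model rather than any such scaling.

```python
# Crude first-order estimate: shaft power scales with inlet air density,
# i.e. with inlet mass flow at fixed volumetric flow. Illustration only; the
# study uses the Turbomatch off-design model rather than this scaling.

T_REF_K = 288.15        # 15 degC reference temperature
P_REF_BAR = 1.013       # reference pressure
RATED_POWER_MW = 25.0   # assumed rated output at reference conditions

def estimated_power(amb_temp_c: float, amb_press_bar: float) -> float:
    """Available power scaled by the inlet-density ratio rho/rho_ref."""
    density_ratio = (amb_press_bar / P_REF_BAR) * (T_REF_K / (amb_temp_c + 273.15))
    return RATED_POWER_MW * density_ratio

# Example voyage points: a cool Atlantic winter day vs. a hot Red Sea summer day
for label, t_c, p_bar in [("Atlantic, winter", 8.0, 1.020),
                          ("Mediterranean, spring", 18.0, 1.013),
                          ("Red Sea, summer", 38.0, 1.005)]:
    print(f"{label:22s}: ~ {estimated_power(t_c, p_bar):.1f} MW available")
```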

Keywords: cruise ship, gas turbine, hull fouling, performance, propulsion, weather

Procedia PDF Downloads 165
44 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model

Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom

Abstract:

Coastal problems are stressing the coastal environment because of its complexity. The dynamic interaction between the sea and the land, together with human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes such as flooding and erosion and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone with its valuable ecosystems calls for addressing these coastal problems, which ultimately supports the sustainability of coastal communities for current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling has emerged as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices. Modeling tools have proved accurate and reliable in prediction. With the capability to integrate data from various sources such as bathymetric surveys, satellite images, and meteorological data, they allow engineers and scientists to understand this complex dynamic system and to gain in-depth insight into the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and to develop effective strategies for sustainable development and risk mitigation of the coastal zone. Modeling tools also support the evaluation of various scenarios by making it possible to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimized infrastructure planning and design, support for ecosystem-based approaches, assessment of climate change impacts, hazard management, and, finally, stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modeling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which provide the technical and scientific support needed to achieve sustainable coastal development and protection.

Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model

Procedia PDF Downloads 28
43 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors

Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal

Abstract:

Introduction: Blood components are an unexplored area prone to numerous discoveries which influence patient care. Experiments at different levels will further change the present concept of blood banking. Hyperlipidemia is a condition of an elevated plasma level of low-density lipoprotein (LDL) together with a decreased plasma level of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide. They are activated by many triggers, such as elevated LDL in the blood, resulting in aggregation and the formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia and correlating the condition with other donor criteria, such as a lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, sedentary lifestyle, and family history of heart disease, will help define exclusion criteria for donor selection. This will make transfusion safer for patients and the donor deferral criteria more stringent, improving the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve the guidelines for blood safety. Thus, we studied the correlation between hyperlipidemic platelets and platelet parameters, weight, and the specific history of the donors. Methodology: This case-control study included 100 blood samples from blood donors; of these, only 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. The lipid profile (TRIGL: triglycerides; LDL-C: LDL-cholesterol plus 2nd generation; CHOL2: cholesterol Gen.2; HDLC3: HDL-cholesterol plus 3rd generation) was measured on a fully automated analyzer (Cobas C311, Roche Diagnostics), and platelet parameters were analyzed on the Sysmex KX-21 automated hematology analyzer. Results: A significant correlation was found with hyperlipidemic levels in single-time donors, among whom 80% had a family history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% consumed alcohol, and 63.33% had a lipid-rich diet. Active physical activity was found in 40% of donors. We divided the donor samples into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were normal in 75% and abnormal in 25% of donors weighing >70 kg, while among donors weighing 50-70 kg, 90% were normal and 10% abnormal. In group 2 (non-hyperlipidemic samples), platelet parameters were 95% normal and 5% abnormal in donors weighing >70 kg, while among donors weighing 50-70 kg, 66.66% were normal and 33.33% abnormal. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect the platelet parameters and can be identified from history: weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. However, further studies on a larger sample size are needed to confirm this finding.

Keywords: blood donors, hyperlipidemia, platelet, weight

Procedia PDF Downloads 314
42 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometre-long dam reservoir with complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as they were never the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area because it is representative of many lowland dam reservoirs and because a large database of the ecological, hydrological and morphological parameters of the lake is available. Method: 3D two-phase and single-phase CFD models were analysed to determine the hydrodynamics of the Sulejow Reservoir. Development of a 3D two-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. Compared with the two-phase model, the single-phase CFD model excludes only the dynamics of the waves from the simulations, which should not significantly change the water flow pattern in lowland dam reservoirs. In the single-phase CFD model, the phases (water and air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the flow velocity, to take the effect of the wind on the hydrodynamics into account in the single-phase CFD model, the plate must move with a speed and direction equal to those of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the single-phase CFD model, a 2D two-phase model was elaborated. Results: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake; recirculating zones, up to half a kilometre in size, may increase water retention time in this region; and the simulations confirm the pronounced effect of the wind on the development of water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, for example, in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed within this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of the hydrodynamics of a large water body could help in developing water quality forecasts, especially in terms of eutrophication, and in managing large water bodies.
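
A small sketch of the kind of check behind the reported average error below 10% is given below: the mean relative error between measured and simulated velocity profiles at matching depths. The values are invented for illustration; they are not the Sulejow Reservoir measurements.

```python
# Compare a measured (ADCP) and a simulated (CFD) velocity profile at the same depths.
import numpy as np

measured  = np.array([0.12, 0.18, 0.22, 0.25, 0.21])   # m/s, e.g. ADCP bins
simulated = np.array([0.13, 0.17, 0.24, 0.24, 0.22])   # m/s, CFD at the same depths

rel_error = np.abs(simulated - measured) / np.abs(measured)
print(f"mean relative error: {rel_error.mean():.1%}")
```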

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 401
41 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has brought sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process the collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next-generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a networked case study, a vision of the next generation of smart sensor networks is proposed, and each key component is discussed, which we hope will inspire researchers working in the sensor research domain.
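
The data-fusion behaviour described above can be sketched as a simple server-side rule: raise a node's sampling rate when the external weather forecast predicts heavy rainfall or the river level is already elevated, and let the node sleep otherwise. The structure, field names and thresholds below are hypothetical, not the deployed system's protocol.

```python
# Hypothetical sketch: adapt a sensor node's sampling plan to an external forecast.
from dataclasses import dataclass

@dataclass
class NodeConfig:
    sample_interval_min: int   # minutes between water-level readings
    sleep_mode: bool           # whether the node may sleep between readings

def plan_sampling(forecast_rain_mm_24h: float, level_above_threshold: bool) -> NodeConfig:
    if forecast_rain_mm_24h > 25.0 or level_above_threshold:
        return NodeConfig(sample_interval_min=5, sleep_mode=False)    # flood watch
    return NodeConfig(sample_interval_min=60, sleep_mode=True)        # quiet period

print(plan_sampling(forecast_rain_mm_24h=40.0, level_above_threshold=False))
```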

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 381
40 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the result of a study on the implementation of a model for measuring sustainability, intended to guide policy actions for the improvement of sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for such a multidimensional representation is a proper set of indicators, and this set must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure implemented as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file in which the graphic data represent the study area and the individual evaluation units within it (the alternatives, e.g., the regions of a country or the municipalities of a region), while the alphanumeric data (the attribute table) describe the environmental, economic and social aspects of the evaluation units by means of a set of indicators (criteria). The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions while avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate sustainability index as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to carry out a “back analysis”, able to explain the positions obtained by the alternatives in the ranking on the basis of the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
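
The ranking logic named above (distance to the ideal and to the worst point) can be illustrated with a compact TOPSIS sketch. The decision matrix, weights and indicator directions below are invented for demonstration; GeoUmbriaSUIT applies this kind of computation per sustainability dimension inside QGIS, not with these numbers.

```python
# Compact TOPSIS example: rank three evaluation units on three indicators.
import numpy as np

X = np.array([[0.7, 120.0, 0.4],       # rows: evaluation units, columns: indicators
              [0.5,  80.0, 0.9],
              [0.9, 150.0, 0.2]])
weights = np.array([0.5, 0.3, 0.2])     # criterion weights (sum to 1)
benefit = np.array([True, True, False]) # False marks a cost-type indicator

R = X / np.linalg.norm(X, axis=0)       # vector normalization per criterion
V = R * weights                         # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal point
d_minus = np.linalg.norm(V - worst, axis=1)   # distance to the worst point
score = d_minus / (d_plus + d_minus)          # closeness coefficient in [0, 1]
print(score.argsort()[::-1])                  # ranking, best unit first
```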

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 289
39 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within volumetric soil moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding both under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
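
A minimal sketch of the kind of network described above is given below: an LSTM that maps a window of remotely sensed inputs (SSM, NDVI, EVI, NMDI) to the next hourly soil-moisture value. The architecture, shapes and hyperparameters are assumptions for illustration and are not the authors' model; real inputs would come from the ERA5/MODIS dataset.

```python
# Illustrative PyTorch sketch of a next-step soil-moisture predictor.
import torch
import torch.nn as nn

class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # predicts the next volumetric soil moisture

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # use the last hidden state only

model = SoilMoistureLSTM()
dummy = torch.randn(8, 24, 4)                  # 8 parcels, 24 hourly steps, 4 indices
print(model(dummy).shape)                      # torch.Size([8, 1])
```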

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 41
38 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
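
As a deliberately tiny illustration of the anomaly-screening idea behind the fraud-detection discussion above, the sketch below flags a transaction lying more than three standard deviations from a vendor's historical mean; AI-based systems generalize far beyond such rules, and all figures here are invented.

```python
# Rule-of-thumb anomaly screen on invoice amounts (illustrative data only).
import statistics

history = [120.0, 135.5, 128.0, 140.2, 131.9, 125.4]   # past invoice amounts
new_invoice = 310.0

mu = statistics.mean(history)
sigma = statistics.stdev(history)
z = (new_invoice - mu) / sigma
if abs(z) > 3:
    print(f"flag for review: z-score {z:.1f}")
```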

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 71
37 Urban Slum Communities Engage in the Fight Against TB in Karnataka, South India

Authors: N. Rambabu, H. Gururaj, Reynold Washington, Oommen George

Abstract:

Motivation: Under the USAID Strengthening Health Outcomes through Private Sector (SHOPS-TB) initiative, Karnataka Health Promotion Trust (KHPT), with technical support from Abt Associates, is implementing a TB prevention and care model in Karnataka State, South India. KHPT is the interface agency between the public and private sectors, providers and the target community, facilitating early TB case detection and enhancing treatment compliance through the engagement of private health care providers (pHCPs) in RNTCP. The project covers 0.84 million urban poor in 663 slums across 12 districts of Karnataka. Problem Statement: India, with the highest burden of global TB (26%) and two million cases annually, accounts for approximately one fifth of the global incidence. WHO estimates that 300,000 people die from TB annually in India. India expanded the coverage of Directly Observed Treatment, Short-course chemotherapy (DOTS) to the entire country as early as 2006. However, the performance of RNTCP has not been uniform across states: while the national annual new smear-positive (NSP) case notification rate is 53, it is much lower, at 47, in Karnataka. A third of TB patients in India reside in urban slums. Approach: Under SHOPS, KHPT actively engages with communities through key opinion leaders and community structures. Interpersonal communication by outreach workers (ORWs), through house-to-house visits and at aggregation points, is the primary method used to communicate about TB and its management and to increase demand for sputum examination and DOTS. pHCPs are mapped, trained and mentored by KHPT. ORWs also provide patient and family counseling on TB treatment, side effects and adherence; screen close contacts of index patients, especially children under 6 years of age; and screen for co-morbidities, including HIV, diabetes and malnutrition, and risk factors, including alcoholism, tobacco use and occupational hazards, making appropriate accompanied or documented referrals. A treatment ‘buddy’ system for patients involving close friends or family members, ICT-based support, DOTS Prerana (inspiration) groups of TB patients, family members and the community, and DOTS Mitra (friend) helpline services are also used for care and support. Results: Within three months of intervention, continuous community engagement educated 39,988 slum dwellers, referred 1,731 chest symptomatics, tested 1,061 patients and initiated 248 patients on anti-TB treatment. Conclusions: The intervention’s potential to increase access to preferred health care providers, reduce patient and health system delays in diagnosis and treatment initiation, improve health-seeking behaviour and enhance compliance of pHCPs with standard treatment protocols is being monitored. Initial results are promising.

Keywords: DOTS, KHPT, health outcomes, public and private sector

Procedia PDF Downloads 316
36 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended on oil, coal, and fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The country's expanding energy needs have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative to prepare for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to translate it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method makes full use of technical resource assessment while also taking into account environmental, social, and accessibility aspects in identifying potential sites, by utilizing and integrating two different approaches: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, experts from different fields were consulted: scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights per level from the experts' feedback, while threshold values derived from related literature, international studies, and government regulations were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values for each parameter, the Fuzzy Membership algorithm to normalize the Euclidean distance output, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into five discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are grouped so that similar values fall together, with class boundaries placed where the differences between values are largest. Results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at 75.76%, whereas wind has the lowest suitability percentage, with a score of 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
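
A small sketch of the AHP step described above is given below: deriving criterion weights from an expert pairwise-comparison matrix with the approximate geometric-mean method and checking consistency. The 3x3 matrix and criterion names are hypothetical examples, not the study's data.

```python
# Approximate AHP priority vector from a pairwise-comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # hypothetical judgments: slope vs. distance vs. land cover
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
weights = gm / gm.sum()                      # normalized priority vector

lam_max = (A @ weights / weights).mean()     # estimate of the principal eigenvalue
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58                               # random index for n = 3
print(weights.round(3), f"CR = {cr:.3f}")    # CR < 0.10 means acceptable consistency
```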

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 149
35 Study on Changes of Land Use Impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda

Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi

Abstract:

Human activities make land use and land cover change or transition gradually. In this study, we examined the use of Landsat TM data to detect land use change in Kigali between 1987 and 2009 using remote sensing techniques, with data analysis in ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999 and 2009, identified the changed areas, and characterized the dynamics of land use in Kigali city over the 22 years studied. Based on the relevant Landsat data, the research focused on land use change and the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period 1999 to 2009 there was a reduction in built-up land and vegetation after the Kigali city authority established a Master Plan, under which all constructions outside the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing internal employment and attracting business investors and the service sector to improve the economy, which will increase population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and the specification of constraints. To achieve this purpose, the Government has set out, within the overall planning of Kigali city, different stages with detailed descriptions of the design, strategy and action plan that will guide Kigali planners and members of the public towards more detailed regional plans and practical measures in the future. Thus, land use change is a significant indicator of human activity in Kigali and plays an important role in national decision-making. Another area to take into account is the natural situation of Kigali city: agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the constructed area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda's population continues to grow, the rate of land utilization is low, and urbanization remains limited. As mentioned earlier, the 1994 genocide massacres, population growth and urbanization processes have been the factors driving the dramatic changes in land use. Further research would focus on the analysis of Rwanda's natural resources and the social and economic factors that could be the driving forces of land use change.
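
The change-detection statistics behind statements such as "built-up land increased" usually rest on a from-to cross-tabulation of two classified rasters. The sketch below builds such a change matrix; the tiny arrays stand in for the 1987 and 2009 Kigali classifications and are purely illustrative.

```python
# Cross-tabulate two classified land-use rasters into a from-to change matrix.
import numpy as np

classes = ["bare soil", "built-up", "wetland", "water", "vegetation", "other"]
lu_1987 = np.array([[4, 4, 1], [4, 1, 1], [0, 1, 3]])   # class index per pixel, year 1
lu_2009 = np.array([[1, 1, 1], [4, 1, 1], [0, 1, 3]])   # class index per pixel, year 2

n = len(classes)
change = np.zeros((n, n), dtype=int)
for a, b in zip(lu_1987.ravel(), lu_2009.ravel()):
    change[a, b] += 1              # rows: 1987 class, columns: 2009 class
print(change)                      # off-diagonal cells count changed pixels
```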

Keywords: land use change, urbanization, Kigali City, Landsat

Procedia PDF Downloads 307
34 Navigating Top Management Team Characteristics for Ambidexterity in Small and Medium-Sized African Businesses: The Key to Unlocking Success

Authors: Rumbidzai Sipiwe Zimvumi

Abstract:

The study aimed to identify the top management team attributes for ambidexterity in small and medium-sized enterprises by utilizing the upper echelons theory. The conventional opinion holds that an organization's ability to pursue both exploitative and explorative innovation methods at the same time is reflected in its ambidexterity. Top-level managers are critical to this matrix because they forecast and explain strategic choices that guarantee success by improving organizational performance. Since the focus of the study was on the unique characteristics of TMTs that can facilitate ambidexterity, the primary goal was to comprehend how TMTs in SMEs can better manage ambidexterity. The study used document analysis to collect information on ambidexterity and TMT traits. Finding, choosing, assessing, and synthesizing data from peer-reviewed publications allowed for the review and evaluation of papers. The fact that SMEs will perform better if they can achieve a balance between exploration and exploitation cannot be overstated. Unfortunately, exploitation is the main priority for most SMEs. The results showed that some of the noteworthy TMT traits that support ambidexterity in SMEs are age diversity, shared responsibility, leadership impact, psychological safety, and self-confidence. It has been shown that most SMEs confront significant obstacles in recruiting people, including formalizing their management and assembling executive teams with seniority. Small and medium-sized enterprises (SMEs) are often held by families or people who neglect to keep their personal lives apart from the firm, which eliminates the opportunity for management and staff to take the initiative. This helps to explain why exploitative strategies, which preserve present success, are used rather than explorative strategies, which open new economic opportunities and dimensions. It is evident that psychological safety deteriorates, and creativity is hindered in the process. The study makes the case that TMTs who are motivated to become ambidextrous can exist. According to the report, small- and medium-sized business owners should value the opinions of all parties involved and provide their managers and regular staff the freedom to think creatively and in a safe environment. TMTs who experience psychological safety are more likely to be inventive, creative, and productive. A team's collective perception that it is acceptable to take chances, voice opinions and concerns, ask questions, and own up to mistakes without fear of unfavorable outcomes is known as team psychological safety. Thus, traits like age diversity, leadership influence, learning agility, psychological safety, and self-assurance are critical to the success of SMEs. As a solution to ensuring ambidexterity is attained, the study suggests a clear separation of ownership and control, the adoption of technology to stimulate creativity, team spirit and excitement, shared accountability, and good management of diversity. Among the suggestions for the SME's success are resource allocation and important collaborations.

Keywords: navigating, ambidexterity, top management team, small and medium enterprises

Procedia PDF Downloads 57
33 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has not yet produced comprehensive studies of the problem: the literature deals with single aspects of the issue, and mostly at the level of individual countries, losing sight of the implications for the EU as a whole. The aim of the research is to fill these gaps: the technological, plant engineering, environmental, economic and employment aspects of this energy transition are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years, until the total disposal of the internal combustion engine vehicle fleet across the entire EU. The methodologies adopted consist of the analysis of the entire life cycle of electric vehicles and batteries, using specific databases, and the dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards. Energy balances are drawn up (to evaluate the net energy saved), plant balances (to determine the additional power and electrical energy demand and the sizing of new renewable-source plants required to cover electricity needs), economic balances (to determine the investment costs of this transition, the savings during the operation phase and the payback times of the initial investments), environmental balances (for the different energy mix scenarios expected in 2035, the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries are determined), and employment balances (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of roadside charging stations). New algorithms for forecast optimization are developed, tested and validated. Compared with other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of the different aspects and evaluating their magnitudes and possible improvements within an organic overall view of the topic. The results achieved make it possible to identify the strengths and weaknesses of the energy transition, to determine possible solutions for mitigating these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.
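
One of the economic balances mentioned above, the payback time of initial investments, can be illustrated with a deliberately simplified sketch: the extra purchase cost of an electric vehicle divided by the annual fuel-versus-electricity saving. All figures below are invented placeholders, not results of the study.

```python
# Simple payback time of an EV's extra purchase cost (illustrative figures only).
def simple_payback_years(extra_cost_eur, annual_km, fuel_eur_per_km, elec_eur_per_km):
    annual_saving = annual_km * (fuel_eur_per_km - elec_eur_per_km)
    return extra_cost_eur / annual_saving

print(round(simple_payback_years(8000, annual_km=15000,
                                 fuel_eur_per_km=0.11, elec_eur_per_km=0.05), 1))
```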

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 61