Search results for: simulation modeling
128 The Negative Effects of Controlled Motivation on Mathematics Achievement
Authors: John E. Boberg, Steven J. Bourgeois
Abstract:
The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. 
Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and to analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in the teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effect on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that leads to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.
Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust
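The decomposition of effects reported above can be illustrated with a minimal path-analysis sketch. The coefficients below are hypothetical placeholders, not the study's fitted SEM values; only the arithmetic (the indirect effect as the product of the two path coefficients) mirrors the analysis.

```python
# Minimal path-analysis sketch: total effect of parental incentives on
# achievement = direct effect + indirect effect through controlled motivation.
# Coefficients below are hypothetical, not the study's fitted SEM values.

def total_effect(direct: float, a: float, b: float) -> float:
    """a: incentives -> controlled motivation; b: controlled motivation -> achievement."""
    indirect = a * b
    return direct + indirect

# Example: both paths depress achievement, as the abstract reports.
direct = -0.10          # incentives -> achievement
a, b = 0.40, -0.35      # incentives -> motivation, motivation -> achievement
print(round(total_effect(direct, a, b), 3))  # -0.24
```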
Procedia PDF Downloads 219
127 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity
Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova
Abstract:
Biochar is an amorphous carbon-rich material generated from the pyrolysis of biomass, with multifarious properties and functionality. Biochar has proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil and water/wastewater as a result of its multiple surface functional groups and porous structures. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass for producing biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structures of biochar vary depending on the type of biomass and the highest heat treatment temperature (HHT). Biochars produced at HHTs between 400°C and 800°C generally have lower H/C and O/C ratios, and their porosities, pore sizes and surface areas increase with temperature. While all of this is known experimentally, there is little knowledge of the role that porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for the optimization of biochar for applications, especially the adsorption of gases. Atomistic simulation methods have shown the potential to generate such amorphous materials; however, most of the available models are composed only of carbon atoms or graphitic sheets, which are either very dense or have simple slit pores, and all of them ignore the important role of heteroatoms such as O, N and S, as well as pore morphology. Hence, developing realistic models that integrate these parameters is important for understanding their role in governing adsorption mechanisms, and it will aid in guiding the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models, taking into account experimentally determined H/C, O/C and N/C ratios, aromaticity, micropore size range, micropore volumes and true densities of biochars.
A pore-generation approach was developed using virtual atoms: a virtual atom is a Lennard-Jones sphere of varying van der Waals radius and softness. Its interaction with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius parameter gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5 - 2 nm in diameter and pore volumes in the range of 0.05 - 1 cm3/g, which corresponds to the experimental gas-adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, micropore size distributions and pore morphologies were developed, and they can aid in the study of adsorption processes in confined micropores.
Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms
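The soft-core interaction described above can be sketched as follows. This is a generic Beutler-type soft-core Lennard-Jones form with illustrative parameter values; the exact functional form and parameters used in the study may differ.

```python
import math

def soft_core_lj(r: float, sigma: float = 0.35, eps: float = 0.5,
                 lam: float = 0.5, alpha: float = 0.5) -> float:
    """Soft-core Lennard-Jones potential (Beutler-type form, a common choice).

    Unlike the standard LJ potential, this stays finite as r -> 0, which lets
    a growing 'virtual atom' push matrix atoms aside without numerical blow-up.
    sigma controls the effective pore radius; lam and alpha control softness.
    All parameter values here are illustrative, not those of the study.
    """
    s6 = alpha * (1.0 - lam) + (r / sigma) ** 6
    return 4.0 * eps * lam * (1.0 / s6 ** 2 - 1.0 / s6)

# Finite at the origin (a hard-core LJ would diverge there):
print(soft_core_lj(0.0) < float("inf"))  # True
```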
Procedia PDF Downloads 71
126 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers
Authors: Catherine Vasnetsov, Victor Vasnetsov
Abstract:
Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature changes. These polymers have gained significant attention in research due to their potential applications in various industries and in medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and advanced machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamics simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to higher radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations.
The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study were collected through Monte Carlo computer simulations and molecular dynamics simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data were then analyzed to determine the impact of cosolvent volume fractions on the radii of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study aimed to investigate the interactions between the polymer, solvent, and cosolvent under different volume fractions and to understand the resulting changes in the radii of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by higher radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
Keywords: molecular modelling, Flory-Huggins, cosolvency, stimuli-responsive polymers
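The Flory-Huggins framework underlying the simulations can be made concrete with the standard binary free energy of mixing and its spinodal condition. This is the textbook polymer-solvent form; the ternary polymer/solvent/cosolvent version used in the study adds further interaction terms.

```python
import math

def fh_free_energy(phi: float, N: int, chi: float) -> float:
    """Flory-Huggins mixing free energy per lattice site (units of kT).

    phi: polymer volume fraction, N: degree of polymerization,
    chi: polymer-solvent interaction parameter.
    """
    return (phi / N) * math.log(phi) + (1 - phi) * math.log(1 - phi) + chi * phi * (1 - phi)

def chi_spinodal(phi: float, N: int) -> float:
    """chi on the spinodal, from d2f/dphi2 = 0: chi_s = 0.5*(1/(N*phi) + 1/(1-phi))."""
    return 0.5 * (1.0 / (N * phi) + 1.0 / (1.0 - phi))

# The critical point for a long chain sits at small phi_c = 1/(1+sqrt(N)),
# with chi_c approaching 1/2 as N grows:
N = 1000
phi_c = 1.0 / (1.0 + math.sqrt(N))
print(round(chi_spinodal(phi_c, N), 3))  # 0.532
```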
Procedia PDF Downloads 70
125 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex-shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes in operations in computer-aided process planning (CAPP) to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g., process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known method for designing standardized processes. Especially in fields like the aerospace industry, standardization and certification of processes are an important aspect. Function blocks, which provide a standardized, event-driven abstraction of algorithms and data exchange, will be used for modeling and executing inspection workflows.
Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility in designing workflows and adapting algorithms to the specific application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g., design data. Furthermore, appropriate logics and decision criteria have to be considered for different product lifecycle phases. For example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support the exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, which encompasses both new-part production and maintenance processes. In both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
Keywords: adaptive, CAx, function blocks, turbomachinery
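The function-block idea can be sketched as a chain of steps, each consuming and enriching a shared data record. Class names, step names, and the toy tolerance check below are illustrative only, not the IEC-style function blocks of the actual system.

```python
# Sketch of the event-driven function-block idea: each inspection step
# (registration, criterion checks) is a block with a defined interface,
# chained into a workflow. All names and values here are illustrative.

from typing import Callable

class FunctionBlock:
    def __init__(self, name: str, func: Callable[[dict], dict]):
        self.name = name
        self.func = func

    def run(self, data: dict) -> dict:
        return self.func(data)

class InspectionWorkflow:
    def __init__(self, blocks: list[FunctionBlock]):
        self.blocks = blocks

    def execute(self, data: dict) -> dict:
        # Each block consumes the shared data record and enriches it,
        # mimicking event-driven data exchange between blocks.
        for block in self.blocks:
            data = block.run(data)
        return data

# Example: decide adaptive manufacturability from a measured deviation.
register = FunctionBlock("position", lambda d: {**d, "aligned": True})
check = FunctionBlock("tolerance", lambda d: {**d, "ok": d["deviation_mm"] <= d["tol_mm"]})
wf = InspectionWorkflow([register, check])
print(wf.execute({"deviation_mm": 0.08, "tol_mm": 0.1})["ok"])  # True
```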
Procedia PDF Downloads 297
124 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. In addition, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to develop the best method for detecting normal signals from abnormal ones. The data come from both genders, and the recording time varies between several seconds and several minutes. All data are also labeled normal or abnormal. Due to the limited duration and positional accuracy of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. In addition, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, because of limited time and because some of the information in this signal is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and it can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
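The feature-extraction and evaluation steps described above can be sketched as follows: R-R intervals are reduced to simple time-domain HRV statistics, and a classifier's scores are summarized by a rank-based AUC. The specific features shown are generic examples, not the paper's exact feature set.

```python
# Sketch of the pipeline: R-R intervals -> simple time-domain HRV features,
# plus a rank-based AUC to score a classifier. Feature choices are generic
# illustrations, not the paper's exact linear/nonlinear feature set.
import math

def hrv_features(rr_ms: list[float]) -> dict:
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    mean_rr = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (len(rr_ms) - 1))
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

def auc(scores_pos: list[float], scores_neg: list[float]) -> float:
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation of abnormal (positive) from normal scores gives AUC = 1:
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))  # 1.0
```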
Procedia PDF Downloads 262
123 Experimental and Simulation Results for the Removal of H2S from Biogas by Means of Sodium Hydroxide in Structured Packed Columns
Authors: Hamadi Cherif, Christophe Coquelet, Paolo Stringari, Denis Clodic, Laura Pellegrini, Stefania Moioli, Stefano Langè
Abstract:
Biogas is a promising renewable fuel which can be used as a vehicle fuel, for heat and electricity production, or injected into the national gas grid. It is storable, transportable, not intermittent and substitutable for fossil fuels. This gas, produced from wastewater treatment by the degradation of organic matter under anaerobic conditions, is mainly composed of methane and carbon dioxide. To be used as a renewable fuel, biogas, whose energy comes only from methane, must be purified of carbon dioxide and other impurities such as water vapor, siloxanes and hydrogen sulfide. Purification of biogas for this application particularly requires the removal of hydrogen sulfide, which negatively affects the operation and viability of equipment, especially pumps, heat exchangers and pipes, by causing their corrosion. Several methods are available to eliminate hydrogen sulfide from biogas. Herein, reactive absorption in a structured packed column by means of chemical absorption in aqueous sodium hydroxide solutions is considered. This study is based on simulations using Aspen Plus™ V8.0, and comparisons are made with data from an industrial pilot plant treating 85 Nm3/h of biogas containing about 30 ppm of hydrogen sulfide. The rate-based model approach has been used for the simulations in order to determine separation efficiencies for different operating conditions. To describe vapor-liquid equilibrium, a γ/ϕ approach has been considered: the Electrolyte NRTL model has been adopted to represent non-idealities in the liquid phase, while the Redlich-Kwong equation of state has been used for the vapor phase. In order to validate the thermodynamic model, Henry’s law constants of each compound in water have been verified against experimental data. Default values available in Aspen Plus™ V8.0 for pure-component properties such as heat capacity, density, viscosity and surface tension have also been verified.
The obtained results for physical and chemical properties are in good agreement with experimental data. The reactions involved in the process have been studied rigorously. Equilibrium constants for the equilibrium reactions and the rate constant for the kinetically controlled reaction between carbon dioxide and the hydroxide ion have been checked. Simulations of the pilot-plant purification section show the influence of low temperatures, sodium hydroxide concentration and hydrodynamic parameters on the selective absorption of hydrogen sulfide. These results show an acceptable degree of accuracy when compared with the experimental data obtained from the pilot plant. The results also show the great efficiency of sodium hydroxide for the removal of hydrogen sulfide: the content of this compound in the gas leaving the column is under 1 ppm.
Keywords: biogas, hydrogen sulfide, reactive absorption, sodium hydroxide, structured packed column
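The Henry's law verification step can be illustrated with a generic van't Hoff-type temperature extrapolation. This is a textbook correlation for sanity-checking property models, not the regression actually implemented in Aspen Plus™, and the H2S parameter values below are order-of-magnitude placeholders.

```python
import math

def henry_constant(T: float, H_ref: float, C: float, T_ref: float = 298.15) -> float:
    """Van't Hoff-type extrapolation of a Henry's law constant with temperature.

    H_ref: constant at T_ref (e.g. in Pa*m3/mol); C: enthalpy-derived
    parameter in K. A generic textbook correlation used to sanity-check
    property models, not the specific regression inside Aspen Plus.
    """
    return H_ref * math.exp(-C * (1.0 / T - 1.0 / T_ref))

# Illustrative values for H2S in water (order of magnitude only): a lower
# Henry constant at lower temperature means higher solubility, consistent
# with the favorable effect of low temperatures on absorption.
H25 = 1.0e3  # at 298.15 K
print(henry_constant(288.15, H25, C=2100.0) < H25)  # True
```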
Procedia PDF Downloads 354
122 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis
Authors: Elham Koohikerade, Silvio Jose Gumiere
Abstract:
In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices like Surface Soil Moisture (SSM), Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis.
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within Volumetric Soil Moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding both under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results. While the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring
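The remotely sensed indices used for calibration can be computed directly from band reflectances. The formulas below are the standard published definitions (EVI with its usual MODIS coefficients); the band values in the example are illustrative.

```python
# Standard definitions of the spectral indices named above, computed from
# surface reflectances. Band names follow common MODIS usage; the EVI
# coefficients (2.5, 6, 7.5, 1) are the standard published ones.

def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

def evi(nir: float, red: float, blue: float) -> float:
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def nmdi(nir: float, swir1: float, swir2: float) -> float:
    """Normalized Multi-band Drought Index: uses the SWIR band difference
    to separate soil and vegetation water content."""
    d = swir1 - swir2
    return (nir - d) / (nir + d)

# Dense healthy vegetation: high NIR, low red reflectance -> NDVI near 1.
print(round(ndvi(0.5, 0.05), 2))  # 0.82
```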
Procedia PDF Downloads 41
121 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations.
(ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms of appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
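The gravity-free part of the shape calculation can be sketched by integrating piecewise-constant, voltage-induced curvature along the sheet, as Euler-Bernoulli theory implies. The segment layout and curvature values below are illustrative, and the model's main contributions (gravity and ground contact) are deliberately omitted here.

```python
# Sketch of the gravity-free shape computation: each powered actuator imposes
# a local curvature proportional to its voltage, and the sheet profile follows
# by accumulating slope and height along the length (small-deflection
# approximation). Curvature values and segment count are illustrative.

def beam_shape(curvatures: list[float], seg_len: float):
    """Integrate piecewise-constant curvature -> list of (x, y) node positions."""
    theta, x, y = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for kappa in curvatures:
        theta += kappa * seg_len   # slope accumulates curvature
        x += seg_len               # small-deflection approximation for x
        y += theta * seg_len       # height accumulates slope
        pts.append((x, y))
    return pts

# 5 segments, only the middle actuator powered: the sheet bends upward
# past the powered segment.
pts = beam_shape([0.0, 0.0, 0.5, 0.0, 0.0], seg_len=0.01)
print(pts[-1][1] > 0)  # True
```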
Procedia PDF Downloads 180
120 The Strategic Role of Accommodation Providers in Encouraging Travelers to Adopt Environmentally-Friendly Modes of Transportation: An Experiment from France
Authors: Luc Beal
Abstract:
Introduction. Among the stakeholders involved in the tourist decision-making process, the accommodation provider has the potential to play a crucial role in raising awareness, disseminating information, and thus influencing the tourists’ choice of transportation. Since the early days of tourism, the accommodation provider has consistently served as the primary point of contact with the destination and, consequently, as the primary source of information for visitors. By offering accommodation and hospitality, the accommodation provider has evolved into a trusted third party, functioning as an 'ambassador' capable of recommending the finest attractions and activities available at the destination. In contemporary times, when tourists plan their trips, they make a series of consecutive decisions, the most important of which is to lock in the accommodation reservation for the earliest days, so as to secure a safe arrival. Consequently, tourists place their trust in the accommodation provider not only for lodging but also for recommendations regarding restaurants, activities, and more. Thus, the latter has the opportunity to inform and influence tourists well in advance of their arrival, particularly during the booking phase, namely when it comes to selecting their mode of transportation. The pressing need to reduce greenhouse gas emissions within the tourism sector presents an opportunity to underscore the influence that accommodation providers have historically exerted on tourist decision-making. Methodology. A participatory research project, currently ongoing in south-western France in collaboration with a nationwide hotel group and several destination management organizations, aims to examine the factors that determine the ability of accommodation providers to influence tourist transportation choices.
Additionally, the research seeks to identify the conditions that motivate accommodation providers to assume a proactive role, such as fostering customer loyalty, reduced distribution costs, and financial compensation mechanisms. A panel of hotels participated in a series of focus group sessions with tourists, with the objective of modeling the decision-making process of tourists regarding their choice of transportation mode and of identifying and quantifying the types and levels of incentives liable to encourage environmentally responsible choices. Individual interviews were also conducted with hotel staff, including receptionists and guest relations officers, to develop a framework for interactions with tourists during crucial decision-making moments related to transportation choices. The primary finding of this research indicates that financial incentives significantly outweigh symbolic incentives in motivating tourists to opt for eco-friendly modes of transportation. Another noteworthy result underscores the crucial impact of the organizational conditions governing interactions with tourists both before and during their stay. These conditions greatly influence the ability to raise awareness at key decision-making moments and the possibility of gathering data about the chosen transportation mode during the stay. In conclusion, this research has led to the formulation of practical recommendations for accommodation providers and Destination Marketing Organizations (DMOs). These recommendations pertain to communication protocols with tourists, the collection of evidence confirming chosen transportation modes, and the implementation of the necessary incentives. Through these measures, accommodation providers can assume a central role in guiding tourists towards making responsible choices in terms of transportation.
Keywords: accommodation provider, trusted third party, environmentally-friendly transportation, greenhouse gas, tourist decision-making process
Procedia PDF Downloads 58
119 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple units of unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for the EM tasks. To achieve this goal, a fully autonomous swarm of multicopter-based UAVs in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its compact size and ease of interfacing with the UAV’s onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform with a fully autonomous flight feature is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping along a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries the CZT sensor; the real-time radiation data are used for the calculation of a bulk heading vector that gives the swarm its source-seeking behavior. Also, a spinning formation is studied for both cases to improve gradient estimation near a radiation source.
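The gradient-based heading calculation described above can be sketched as a least-squares plane fit over the swarm's sensor readings. This is an illustrative 2D simplification, not the authors' implementation; the function names and data layout are assumptions.

```python
import math

def plane_fit_gradient(samples):
    """Least-squares fit of I(x, y) = a*x + b*y + c to sensor samples.

    samples: list of (x, y, intensity) tuples from the swarm's CZT sensors.
    Returns the gradient (a, b), which points toward increasing intensity.
    """
    n = len(samples)
    # Build the normal equations A^T A p = A^T I for p = (a, b, c).
    sxx = sum(x * x for x, y, i in samples)
    sxy = sum(x * y for x, y, i in samples)
    sx = sum(x for x, y, i in samples)
    syy = sum(y * y for x, y, i in samples)
    sy = sum(y for x, y, i in samples)
    sxi = sum(x * i for x, y, i in samples)
    syi = sum(y * i for x, y, i in samples)
    si = sum(i for x, y, i in samples)
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    m = [[sxx, sxy, sx, sxi],
         [sxy, syy, sy, syi],
         [sx,  sy,  n,  si]]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    p = [0.0, 0.0, 0.0]
    for r in range(2, -1, -1):
        p[r] = (m[r][3] - sum(m[r][c] * p[c] for c in range(r + 1, 3))) / m[r][r]
    return p[0], p[1]

def heading_angle(samples):
    """Swarm heading (radians) that ascends the radiation gradient."""
    a, b = plane_fit_gradient(samples)
    return math.atan2(b, a)
```

With at least three non-collinear UAVs, the fit is well determined; the spinning formation mentioned above improves conditioning of the same system near a source.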
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated a collective and coordinated movement for estimating a gradient vector for the radiation source and determining an optimal heading direction of the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms under field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be efficiently performed over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
Procedia PDF Downloads 99
118 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental in modern society and economy. However, they need modernizing, maintaining, and reinforcing interventions which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research which is now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services.
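As an illustration of how the three objectives interact, the toy sketch below enumerates start-year assignments for a few hypothetical sections and scores them with a weighted sum. The actual model is a linear program; the section data, weights, and brute-force search here are invented for illustration only.

```python
from itertools import product

# Hypothetical renewal sections: (renewal_cost, extra_maintenance_per_year, priority)
SECTIONS = [(10.0, 2.0, 3), (6.0, 1.0, 1), (8.0, 1.5, 2)]
HORIZON = 3  # candidate start years 0..2

def objectives(starts):
    """Return (yearly investment peak, total cost, priority-weighted delay)."""
    yearly = [0.0] * HORIZON
    total, delay = 0.0, 0.0
    for (cost, maint, prio), year in zip(SECTIONS, starts):
        yearly[year] += cost
        total += cost + maint * year   # extra maintenance accrues while waiting
        delay += prio * year
    return max(yearly), total, delay

def best_schedule(w_peak=1.0, w_cost=1.0, w_delay=1.0):
    """Brute-force weighted-sum optimum over all start-year assignments."""
    def score(starts):
        peak, total, delay = objectives(starts)
        return w_peak * peak + w_cost * total + w_delay * delay
    return min(product(range(HORIZON), repeat=len(SECTIONS)), key=score)
```

Raising the weight on the investment peak spreads the works over the horizon, at the cost of extra backlog maintenance and priority delay, mirroring the trade-off analysis described in the abstract.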
The model cannot, therefore, allow for an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest one in the world), so it is not expected that considerably larger problems appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time cost did not increase very significantly. It is thus expected that just about any real-life problem can be treated in a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in max yearly investment (i.e., degradation of objective i), solutions improve considerably in the remaining two objectives.
Procedia PDF Downloads 145
117 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks
Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci
Abstract:
The need to reduce fuel consumption and noise during taxi operations at airports, amid constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice-versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections.
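A minimal sketch of the conflict-detection step, assuming a time-expanded occupancy model on taxiway nodes; the data layout, identifiers, and separation threshold are illustrative assumptions, not the system's actual design.

```python
def detect_conflicts(routes, separation=1):
    """Flag pairs of vehicles scheduled on the same taxiway node at
    (nearly) the same time.

    routes: {vehicle_id: [(node, time_step), ...]}
    Two vehicles conflict if they occupy one node within `separation`
    time steps of each other.
    """
    occupancy = {}  # node -> list of (time, vehicle)
    for vehicle, path in routes.items():
        for node, t in path:
            occupancy.setdefault(node, []).append((t, vehicle))
    conflicts = set()
    for node, visits in occupancy.items():
        visits.sort()  # chronological order per node
        for i, (t1, v1) in enumerate(visits):
            for t2, v2 in visits[i + 1:]:
                if t2 - t1 > separation:
                    break  # later visits are farther apart; stop scanning
                if v1 != v2:
                    conflicts.add((node, min(v1, v2), max(v1, v2)))
    return conflicts
```

Any flagged pair would then be passed to the resolution step (rerouting or re-timing) and surfaced to ATC via the GUI alerts described above.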
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization
Procedia PDF Downloads 130
116 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent.
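The descriptive statistics of this kind of synthesis can be reproduced in a few lines. The eleven study-level scores are not listed in the abstract, so the values below are illustrative only, chosen to fall in the reported 67.1 to 88.6 percent range; the error margin is simply the complement of each score.

```python
import statistics

# Illustrative scores only; the abstract reports the range (67.1-88.6)
# but not the individual study-level values.
scores = [67.1, 78.0, 80.5, 82.0, 83.4, 84.0, 85.2, 86.0, 86.9, 88.0, 88.6]
pass_mark = 60.0  # assumed threshold for illustration

mean = statistics.mean(scores)
sd = statistics.stdev(scores)                 # sample standard deviation
error_margins = [100.0 - s for s in scores]   # share of incorrect answers

print(f"mean={mean:.2f}, sd={sd:.2f}, "
      f"all passed={all(s >= pass_mark for s in scores)}")
```

The error-margin computation makes explicit how a 67.1 to 88.6 percent score range translates into the 11.4 to 32.9 percent margins of error discussed in the conclusion.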
The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibited a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
Procedia PDF Downloads 32
115 Wound Healing Process Studied on DC Non-Homogeneous Electric Fields
Authors: Marisa Rio, Sharanya Bola, Richard H. W. Funk, Gerald Gerlach
Abstract:
Cell migration, wound healing and regeneration are some of the physiological phenomena in which electric fields (EFs) have proven to have an important function. Physiologically, cells experience electrical signals in the form of transmembrane potentials, ion fluxes through protein channels as well as electric fields at their surface. As soon as a wound is created, the disruption of the epithelial layers generates an electric field of ca. 40-200 mV/mm, directing cell migration towards the wound site and starting the healing process. In vitro electrotaxis experiments have shown that cells respond to DC EFs by polarizing and migrating towards one of the poles (cathode or anode). A standard electrotaxis experiment consists of an electrotaxis chamber where cells are cultured, a DC power source, and agar salt bridges that help prevent toxic products from the electrodes from reaching the cell surface. The electric field strengths used in such an experiment are uniform and homogeneous. In contrast, the endogenous electric fields around a wound tend to be multi-field and non-homogeneous. In this study, we present a custom device that enables electrotaxis experiments in non-homogeneous DC electric fields. Its main feature involves the replacement of conventional metallic electrodes, separated from the electrotaxis channel by agarose gel bridges, with electrolyte-filled microchannels. The connection to the DC source is made by Ag/AgCl electrodes, encased in agarose gel and placed at the end of each microfluidic channel. An SU-8 membrane closes the fluidic channels and simultaneously serves as the single connection from each of them to the central electrotaxis chamber. The electric field distribution and current density were numerically simulated with the steady-state electric conduction module from ANSYS 16.0. Simulation data confirm the application of non-homogeneous EFs of physiological strength.
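The study used ANSYS for the steady-state conduction simulation. As a toy analogue of the same idea, a finite-difference Jacobi iteration for the electric potential in a rectangular channel with fixed-potential ends can be sketched as follows (grid sizes, boundary treatment, and function names are assumptions for illustration):

```python
def solve_potential(nx, ny, v_left, v_right, iters=2000):
    """Jacobi iteration for Laplace's equation on an nx-by-ny grid.

    Left and right boundaries are held at fixed potentials (the ends of
    the electrolyte-filled channel); top and bottom are insulating
    (zero normal current), approximated here by mirroring neighbours.
    """
    v = [[0.0] * nx for _ in range(ny)]
    for row in v:
        row[0], row[-1] = v_left, v_right
    for _ in range(iters):
        new = [row[:] for row in v]
        for j in range(ny):
            for i in range(1, nx - 1):
                up = v[j - 1][i] if j > 0 else v[j + 1][i]
                dn = v[j + 1][i] if j < ny - 1 else v[j - 1][i]
                new[j][i] = 0.25 * (v[j][i - 1] + v[j][i + 1] + up + dn)
        v = new
    return v
```

In this simple geometry the potential converges to a linear ramp between the two electrode potentials; non-homogeneous fields of the kind studied here arise once the channel geometry is no longer a plain rectangle.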
To validate the biocompatibility of the device, the viability of the photoreceptor-derived 661W cell line was assessed. The cells did not show any signs of apoptosis, damage or detachment during stimulation. Furthermore, immunofluorescence staining, namely vinculin and actin labelling, allowed the assessment of adhesion efficiency and orientation of the cytoskeleton, respectively. Cellular motility in the presence and absence of applied DC EFs was verified. The movement of individual cells was tracked for the duration of the experiments, confirming the EF-induced, cathodal-directed motility of the studied cell line. The in vitro monolayer wound assay, or “scratch assay”, is a standard protocol to quantitatively assess cell migration in vitro. It encompasses the growth of a confluent cell monolayer followed by the mechanical creation of a scratch, representing a wound. Hence, wound dynamics were monitored over time and compared between control conditions and applied electric fields to quantify cell population motility.
Keywords: DC non-homogeneous electric fields, electrotaxis, microfluidic biochip, wound healing
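The scratch-assay closure metric described in the abstract above reduces to a simple mask-based computation: segment the cell-free scratch region in each frame and track how its area shrinks. The sketch below is an illustrative formulation, not the authors' analysis pipeline.

```python
def wound_area(mask):
    """Count wound pixels (1 = cell-free scratch region) in a binary mask."""
    return sum(sum(row) for row in mask)

def closure_percent(mask_t0, mask_t):
    """Percentage of the initial scratch re-covered by migrating cells."""
    a0, at = wound_area(mask_t0), wound_area(mask_t)
    return 100.0 * (a0 - at) / a0
```

Comparing closure curves between control and applied-field conditions then quantifies the EF-induced change in population motility.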
Procedia PDF Downloads 270
114 Social Implementation of Information Sharing Road Safety Measure in South-East Asia
Authors: Hiroki Kikuchi, Atsushi Fukuda, Hirokazu Akahane, Satoru Kobayakawa, Tuenjai Fukuda, Takeru Miyokawa
Abstract:
According to WHO reports, fatalities from road traffic accidents in many countries of the South-East Asia region, especially Thailand and Malaysia, are increasing year by year. In order to overcome these serious problems, both governments are focusing on road safety measures. In response, the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) of Japan and the Japan International Cooperation Agency (JICA) have begun active support based on the experience of reducing the number of road accident fatalities in Japan in the past. However, even if successful road safety measures from Japan are adopted in South-East Asian countries, it is not certain whether they will work well or not. So, it is necessary to clarify the issues and systematize the process for the implementation of road safety measures in South-East Asia. On the basis of the above, this study examined the applicability of the "information sharing traffic safety measure", one of the successful road safety measures in Japan, to the social implementation of road safety measures in South-East Asian countries. In an "information sharing traffic safety measure", stakeholders such as residents, the administration, and experts jointly carry out traffic safety measures. In this study, we first extracted the issues of implementing road safety measures in the local context, clarifying the particular issues with implementation in South-East Asian cities. Secondly, we considered how to implement road safety measures to solve these particular issues based on the method of the "information sharing traffic safety measure". In the implementation method, the locations of dangerous events were extracted based on the “HIYARI-HATTO” (near-miss) data obtained from the residents. This is because the implementation of the information sharing traffic safety measure, focusing on the locations where dangerous events occur, is considered to lead to a reduction in traffic accidents.
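One illustrative way to turn resident-reported "HIYARI-HATTO" (near-miss) locations into candidate sites for countermeasures is to bin report coordinates into grid cells and rank the cells by report count. This sketch is an assumption about such a workflow, not the study's actual method; the grid scale is arbitrary.

```python
from collections import Counter

def hotspot_ranking(reports, scale=1000):
    """Bin near-miss report coordinates into roughly 0.001-degree grid
    cells and rank cells by report count.

    reports: list of (lat, lon) tuples in decimal degrees.
    Returns [(cell, count), ...] sorted from most- to least-reported.
    """
    cells = Counter((int(lat * scale), int(lon * scale)) for lat, lon in reports)
    return cells.most_common()
```

The top-ranked cells, an intersection in Penang's downtown or a highway segment near Suphan Buri, for example, would then be the focus of the jointly selected measures.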
Also, the target locations for the implementation of measures differ for each city. In Penang, we targeted intersections in the downtown area, while in Suphan Buri, we targeted mainly traffic control on the intercity highway. Finally, we proposed a method for implementing traffic safety measures. For Penang, we proposed a measure to improve the signal phasing and showed the effect of the measure in a micro traffic simulation. For Suphan Buri, we proposed to the administration suitable measures for the danger points extracted from residents' “HIYARI-HATTO” data. In conclusion, in order to successfully implement road safety measures based on the "information sharing traffic safety measure", the process for social implementation of the road safety measures should be consistent and carried out repeatedly. In particular, by clarifying specific issues based on the local context in South-East Asian countries, the stakeholders, not only government sectors but also local citizens, can share information regarding road safety and select appropriate countermeasures. Finally, we were able to propose this approach to the administration holding the relevant authority.
Keywords: information sharing road safety measure, social implementation, South-East Asia, HIYARI-HATTO
Procedia PDF Downloads 149
113 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.) Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640, 0.4 mm voxels), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review.
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
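The core operation of the 3D convolutional U-Net mentioned in the abstract above is the 3D convolution itself. A naive, dependency-free sketch of that building block follows (real pipelines would use a deep learning framework; this valid-mode, single-channel version is for illustration only):

```python
def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, no kernel flip),
    the core operation applied layer-by-layer in a 3D U-Net.

    volume, kernel: nested lists indexed [z][y][x].
    """
    vz, vy, vx = len(volume), len(volume[0]), len(volume[0][0])
    kz, ky, kx = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(vz - kz + 1):
        plane = []
        for y in range(vy - ky + 1):
            row = []
            for x in range(vx - kx + 1):
                acc = 0.0
                for dz in range(kz):
                    for dy in range(ky):
                        for dx in range(kx):
                            acc += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out
```

The U-Net stacks many such convolutions with downsampling and upsampling paths so that each output voxel's iron estimate draws on both local and contextual structure.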
Procedia PDF Downloads 136
112 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy
Authors: Chenyi Ma
Abstract:
Background: The second costliest hurricane in U.S. history, Sandy landed in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in nine coastal counties that constituted 98% of the major and severely damaged housing units in NJ overall. The programs include Individuals and Households Assistance Program, Small Business Loan Program, National Flood Insurance Program, and the Federal Emergency Management Administration (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through Community Development Block Grant: Reconstruction, Rehabilitation, Elevation, and Mitigation Program, and Homeowner Resettlement Program. How these policies individually and as a whole impacted housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters. Nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against negative impacts from disasters, however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery looking at both community characteristics and policy interventions. 
Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators to changes in housing stock (operationally defined as the percentage of building permits relative to total occupied housing units/households) between 2010 and 2014, two years before and after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via multiple public programs, separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with households including an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of community received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed and Hispanic households were “vulnerable” to lower levels of housing recovery because they lacked sufficient public program support. Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery.
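The mediation logic reported here (vulnerability affecting recovery via public program support) can be illustrated with a product-of-coefficients sketch built from simple regressions. The study itself estimated a full SEM; this toy uses invented data and omits covariate adjustment, so it shows only the shape of the calculation.

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def indirect_effect(vulnerability, support, recovery):
    """Product-of-coefficients estimate of the mediated path
    vulnerability -> public program support -> housing recovery."""
    a = slope(vulnerability, support)   # vulnerability -> support
    b = slope(support, recovery)        # support -> recovery
    return a * b
```

A negative product (more vulnerable communities receiving less support, and less support predicting less recovery) corresponds to the "insufficient public program support" pathway described in the findings.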
Because housing is the "wobbly pillar" of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy.
Keywords: community social vulnerability, community resilience, hurricane, public policy
Procedia PDF Downloads 372
111 Influence of Surface Fault Rupture on Dynamic Behavior of Cantilever Retaining Wall: A Numerical Study
Authors: Partha Sarathi Nayek, Abhiparna Dasgupta, Maheshreddy Gade
Abstract:
Earth retaining structures play a vital role in stabilizing unstable road cuts and slopes in mountainous regions. The retaining structures located in seismically active regions like the Himalayas may experience moderate to severe earthquakes. An earthquake produces two kinds of ground motion: permanent quasi-static displacement (fault rupture) on the fault rupture plane, and transient vibration traveling a long distance. There has been extensive research work to understand the dynamic behavior of retaining structures subjected to transient ground motions. However, understanding of the effects caused by fault rupture phenomena on retaining structures is limited. The presence of shallow crustal active faults and natural slopes in the Himalayan region further highlights the need to study the response of retaining structures subjected to fault rupture phenomena. In this paper, an attempt has been made to understand the dynamic response of the cantilever retaining wall subjected to surface fault rupture. For this purpose, a 2D finite element model consisting of a retaining wall, backfill and foundation has been developed using Abaqus 6.14 software. The backfill and foundation material are modeled as per the Mohr-Coulomb failure criterion, and the wall is modeled as linear elastic. In this study, the interaction between backfill and wall is modeled as ‘surface-surface contact.’ The entire simulation process is divided into three steps, i.e., the initial step, gravity load step, and fault rupture step. The interaction property between wall and soil and fixed boundary conditions for all the boundary elements are applied in the initial step. In the next step, gravity load is applied, and the boundary elements are allowed to move in the vertical direction to incorporate the settlement of soil due to the gravity load. In the final step, surface fault rupture is applied to the wall-backfill system.
For this purpose, the foundation is divided into two blocks, namely, the hanging wall block and the footwall block. A finite fault rupture displacement is applied to the hanging wall part, while the footwall bottom boundary is kept fixed. Initially, a numerical analysis is performed considering the reverse fault mechanism with a dip angle of 45°. The simulated results are presented in terms of contour maps of permanent displacements of the wall-backfill system. These maps highlight that surface fault rupture can induce permanent displacement in both horizontal and vertical directions, which can significantly influence the dynamic behavior of the wall-backfill system. Further, the influence of fault mechanism, dip angle, and surface fault rupture position is also investigated in this work.
Keywords: surface fault rupture, retaining wall, dynamic response, finite element analysis
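For the dip-slip rupture described in the abstract above, the imposed hanging-wall displacement decomposes into horizontal and vertical components determined by the dip angle, which is why the contour maps show permanent displacement in both directions. A small helper illustrating the decomposition (function and parameter names are assumptions):

```python
import math

def rupture_components(slip, dip_deg=45.0):
    """Horizontal and vertical components of hanging-wall displacement
    for a pure dip-slip (e.g., reverse) fault with the given dip angle.

    slip: magnitude of the fault displacement along the rupture plane.
    Returns (horizontal, vertical).
    """
    dip = math.radians(dip_deg)
    return slip * math.cos(dip), slip * math.sin(dip)
```

At the 45° dip considered in the baseline analysis, the two components are equal; steeper dips shift the imposed motion toward the vertical.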
Procedia PDF Downloads 106
110 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was experimented with to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure.
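The LLM-assisted scripting step, generating a random surface with user-defined parameters, might look like the following dependency-free sketch. The smoothing pass, parameter names, and grid representation are assumptions for illustration, not the project's actual generated code.

```python
import random

def random_surface(nu, nv, amplitude=1.0, seed=0):
    """Generate an nu-by-nv grid of z-heights for a facade sub-frame:
    a random surface controlled by user-defined parameters, then lightly
    smoothed for fabricability.
    """
    rng = random.Random(seed)  # seeded for reproducible design iterations
    z = [[rng.uniform(-amplitude, amplitude) for _ in range(nv)] for _ in range(nu)]
    # One smoothing pass (simple neighbour averaging).
    smooth = [[0.0] * nv for _ in range(nu)]
    for i in range(nu):
        for j in range(nv):
            nbrs = [z[i][j]]
            if i > 0:
                nbrs.append(z[i - 1][j])
            if i < nu - 1:
                nbrs.append(z[i + 1][j])
            if j > 0:
                nbrs.append(z[i][j - 1])
            if j < nv - 1:
                nbrs.append(z[i][j + 1])
            smooth[i][j] = sum(nbrs) / len(nbrs)
    return smooth
```

In a Grasshopper workflow, a grid like this would be lofted into a NURBS surface and fed into the sub-frame layout and Karamba3D analysis.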
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work highlights the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
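The LLM-generated Python script for the random surface is not reproduced in the abstract. As a rough, hypothetical illustration of the kind of code involved, the sketch below builds a grid of control points with random heights; inside Grasshopper these points would feed a surface component, and the function name and parameters here are assumptions, not the project's actual script.

```python
import random

def random_surface_points(u_count, v_count, width, depth, max_height, seed=None):
    """Generate a u x v grid of (x, y, z) control points with random heights.

    A standalone stand-in for an LLM-generated Grasshopper script: x and y are
    spaced evenly over the footprint, z is drawn uniformly up to max_height.
    """
    rng = random.Random(seed)  # seeded for reproducible surfaces
    points = []
    for i in range(u_count):
        row = []
        for j in range(v_count):
            x = width * i / (u_count - 1)
            y = depth * j / (v_count - 1)
            z = rng.uniform(0.0, max_height)
            row.append((x, y, z))
        points.append(row)
    return points

# Example: a 4 x 3 grid over a 10 m x 6 m footprint with heights up to 1.5 m
grid = random_surface_points(4, 3, width=10.0, depth=6.0, max_height=1.5, seed=42)
```

In Grasshopper the nested point list would typically be flattened into a data tree and passed to a NURBS-surface-from-points component.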
Procedia PDF Downloads 31
109 Nursing Education in the Pandemic Time: Case Study
Authors: Jaana Sepp, Ulvi Kõrgemaa, Kristi Puusepp, Õie Tähtla
Abstract:
COVID-19 was officially recognized as a pandemic by the WHO in March 2020, and it has led to changes in the education sector. Educational institutions were closed, and most schools adopted distance learning. Estonia is known as a digitally well-developed country; on that basis, nursing education continued during the pandemic, and new technological solutions were implemented. To provide nursing education, special attention was paid to quality and flexibility. The aim of this paper is to present the administrative, digital, and technological solutions that supported Estonian nursing educators in continuing the study process during the pandemic, and to develop a sustainable solution for nursing education for the future. This paper includes the authors' analysis of the documents and decisions implemented in the institutions throughout the pandemic. It is a case study of Estonian nursing educators. Results of the analysis show that the implementation of distance learning principles required the development of innovative strategies and techniques for the assessment of student performance and educational outcomes, as well as new strategies to encourage student engagement in the virtual classroom. Additionally, hospital internships were canceled, and the simulation approach was widely implemented as a new opportunity to develop and assess students' practical skills. Many other technical and administrative changes were also carried out, such as student support and assessment systems and the design and conduct of hybrid and blended studies. All services were redesigned and made more available, individual, and flexible. The feedback system was also changed: information was collected in parallel with educational activities. Experiences of nursing education during the pandemic are widely presented in the scientific literature.
However, to conclude our study, the authors found evidence that the solutions implemented in Estonian nursing education allowed the students to graduate within the nominal study period without any decline in education quality. An operative information system and flexibility kept the distance between students, support staff, and academic staff to a minimum, and the changes were implemented quickly and efficiently. Institution members were kept updated with the appropriate information, which positively affected their satisfaction, motivation, and commitment. We recommend that the feedback process and system be permanently changed in the future to place all members in the same information space, that the hospital internship process be redefined, that hybrid learning be implemented, and that the communication system between stakeholders inside and outside the organization be improved. The main limitation of this study relates to the size of Estonia: nursing education is provided by only two institutions, and the number of students is correspondingly low. The results could be generalized to institutions of similar size and administrative structure. In the future, the relationship between nurses' performance and organizational outcomes should be investigated in depth, and the influence of pandemic-era education analyzed in workplaces.
Keywords: hybrid learning, nursing education, nursing, COVID-19
Procedia PDF Downloads 120
108 Ensuring Safety in Fire Evacuation by Facilitating Way-Finding in Complex Buildings
Authors: Atefeh Omidkhah, Mohammadreza Bemanian
Abstract:
The issue of way-finding occupies a wide range of literature in architecture, yet despite 50 years of way-finding studies, it still lacks a comprehensive theory for indoor settings. Way-finding also has a notable role in emergency evacuation. People in the panic of a fire emergency need to find the safe egress route correctly and in as little time as possible. The parameters of appropriate way-finding are mentioned in evacuation-related research, albeit in scattered form. This study reviews the fire-safety literature to extract a way-finding framework for the architectural design of a safe evacuation route. To this end, a review of research trends and of the applied methodological approaches is conducted. Then, by analyzing eight original studies of way-finding parameters in fire evacuation, the main parameters that affect way-finding in the emergency situation of a fire incident are extracted, and a framework is developed based on them. Results show that issues related to exit routes and emergency evacuation can be traced in task-oriented studies of way-finding. This research trend aims at a high-level framework and, in the best case, a theory with the explanatory capability to account for differences in way-finding between indoor and outdoor settings, complex and simple buildings, and different building types or transitional spaces. The methodological advances place evacuation way-finding research along three approaches, of which the last is the most up-to-date and precise for researching this subject: real actors and hypothetical stimuli, as in evacuation experiments; hypothetical actors and stimuli, as in agent-based simulations; and real actors and semi-real stimuli, as in virtual reality environments augmented with multi-sensory simulation.
Findings from data-mining the eight sampled original studies of way-finding in evacuation indicate that the emergency way-finding design of a building should consider two levels of space-cognition problems at the time of emergency and their performance consequences in the built environment. Four major classes of way-finding problems, namely visual information deficiency, confusing layout configuration, improper navigating signage, and demographic issues, are defined and discussed as the main parameters that should be addressed in the design and interior of a building. In the design phase of complex buildings, which account for more reported way-finding problems, it is important to consider the interior components with regard to the building's occupancy type and the behavior of its occupants, to determine which components tend to become landmarks, and to set the architectural features of the egress route in line with the directions in which they navigate people. Research on topological cognition of the environment and its effect on the way-finding task in emergency evacuation is proposed for future work.
Keywords: architectural design, egress route, way-finding, fire safety, evacuation
Procedia PDF Downloads 174
107 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System
Authors: Mozhdeh Shirinzadeh, Richard Stroetmann
Abstract:
An orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams, and the main longitudinal girders. Owing to their several advantages, Orthotropic Steel Deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity combined with a relatively low dead weight. In addition, cost efficiency and the possibility of rapid field erection have made the orthotropic steel deck a popular bridge type worldwide. However, OSD bridges are highly susceptible to fatigue damage: the large number of welded joints can be regarded as the main weakness of this system. This problem is particularly evident in bridges built before 1994, when fatigue design criteria had not yet been introduced in the bridge design codes. Recently, an Orthotropic-Composite Slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as part of AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. Building on the achievements of that project, the application of a modified ortho-composite deck to an existing typical OSD bridge is investigated here. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. With regard to the Eurocode criteria for the different fatigue detail categories of an OSD bridge, the effect of the proposed modification approach is assessed. Moreover, a numerical parametric study is carried out using finite element software to determine the impact of different variables, such as the size and arrangement of the dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads.
For verification of the simulation technique, experimental results from a segment of an OCS deck tested in project P1265 are used. Fatigue assessment is performed based on the latest draft of Eurocode 1993-2 (2024) for the most probable detail categories (hot spots) reported in previous statistical studies. An analytical comparison is then provided between the typical orthotropic steel deck and the modified ortho-composite deck bridge in terms of fatigue behavior and durability. The load-bearing capacity of the bridge, the critical deflections, and the composite behavior are also evaluated and compared. The results give a comprehensive overview of the efficiency of the rehabilitation method considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details, and practical aspects, as well as from an economic point of view.
Keywords: composite action, fatigue, finite element method, steel deck, bridge
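The abstract does not reproduce the assessment formulas behind the Eurocode detail categories. As a simplified, hedged sketch of the kind of S-N endurance and Palmgren-Miner damage calculation involved (single S-N slope m = 3, no cut-off limits, no partial safety factors; the stress spectrum below is purely illustrative, not data from the study):

```python
def cycles_to_failure(stress_range, detail_category, m=3.0):
    """Endurance from an EC3-style S-N relation: N = 2e6 * (dsigma_C / dsigma)^m,
    where detail_category is the characteristic stress range at 2 million cycles."""
    return 2.0e6 * (detail_category / stress_range) ** m

def miner_damage(spectrum, detail_category):
    """Palmgren-Miner linear damage sum for a list of (stress_range, n_cycles);
    a damage sum >= 1 indicates fatigue failure."""
    return sum(n / cycles_to_failure(s, detail_category) for s, n in spectrum)

# Illustrative spectrum: (stress range in MPa, applied cycles), detail category 71
spectrum = [(100.0, 1.0e5), (71.0, 5.0e5)]
damage = miner_damage(spectrum, detail_category=71.0)  # well below 1.0
```

A real Eurocode check would additionally apply the constant-amplitude fatigue limit, the cut-off limit, and the partial factors for equivalent stress ranges.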
Procedia PDF Downloads 84
106 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, including 1D structures such as beams and frames, 2D structures such as membranes, plates, and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components such as wings, empennage, and ailerons, while thin-walled composite beams (TWCB) are used to model slender structures such as stiffened panels and helicopter and wind turbine rotor blades. TWCBs exhibit many non-classical effects, such as torsional and constrained warping, transverse shear, coupling effects, and heterogeneity, which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBCs explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing even to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, there is currently a need for separate formulations (u/p) to model incompressible materials, and a single unified formulation is missing in the literature. Hence, the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods are presented in this paper.
Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
Procedia PDF Downloads 66
105 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning
Authors: Chia Wei Lim, Ning Yan
Abstract:
The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans towards the use of software for the design and daily monitoring of chemical plants. As such, a learning gap has widened as chemical engineering graduates transition from university to industry, since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology-enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TEL in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity to utilize the readily available, easy-to-use MATLAB App platform to create an educational tool that aids the learning of the fundamental concepts of reactor design and links these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without being overly focused on the tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers.
In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing the concepts taught, complementing homework assignments, and aiding exam revision. Accordingly, students are able to identify any lapses in understanding and clarify them. In terms of the topics covered, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app has "helped [them] with understanding the unit" and 87% agreeing or strongly agreeing that the app "offers learning flexibility" compared to the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and to demonstrate the industrial applications of the taught design concepts.
Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning
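The design equations the app solves are standard in reactor design texts. As a minimal sketch (in Python rather than the app's MATLAB source, which is not shown), the classic CSTR and PFR sizing equations for an isothermal first-order liquid-phase reaction illustrate the kind of calculation involved; all numerical values are illustrative:

```python
import math

def cstr_volume(v0, k, X):
    """CSTR volume for first-order A -> products at conversion X.
    Design equation: V = v0 * X / (k * (1 - X)), with v0 the volumetric flow rate
    and k the first-order rate constant."""
    return v0 * X / (k * (1.0 - X))

def pfr_volume(v0, k, X):
    """PFR volume for the same reaction: V = (v0 / k) * ln(1 / (1 - X))."""
    return (v0 / k) * math.log(1.0 / (1.0 - X))

# At 90% conversion the CSTR must be substantially larger than the PFR,
# since the CSTR operates entirely at the low exit reactant concentration.
v_cstr = cstr_volume(v0=1.0, k=0.5, X=0.9)  # = 18.0 volume units
v_pfr = pfr_volume(v0=1.0, k=0.5, X=0.9)    # ~ 4.6 volume units
```

Comparing such volumes across reactor configurations is precisely the kind of selection/optimization exercise the abstract argues TEL tools should emphasize.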
Procedia PDF Downloads 93
104 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling
Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather
Abstract:
New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form a so-called nano-heat-transfer fluid suitable for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10^-11 s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension.
The simulation results confirm the effectiveness of the technique: the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive Van der Waals interaction) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the nanofluids' heat transfer performance, thermal characteristics, and potential application in solar thermal energy plants.
Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling
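The abstract names fourth-order Runge-Kutta (RK4) integration of the particle equations of motion with a fixed Δts = 10^-11 s. The study's full force model (Brownian, collisions, DLVO) requires stochastic treatment, so the hedged sketch below shows only the deterministic core: a generic RK4 step applied to Stokes-drag velocity relaxation, dv/dt = (u - v)/τ, whose exact solution is known; the parameter values are illustrative.

```python
def rk4_step(f, t, y, dt):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Stokes-drag relaxation of particle velocity v toward fluid velocity u:
# dv/dt = (u - v) / tau, exact solution u + (v0 - u) * exp(-t / tau).
u, tau = 1.0, 1e-9          # fluid velocity and particle response time (illustrative)
f = lambda t, v: (u - v) / tau
v, t, dt = 0.0, 0.0, 1e-11  # fixed time step as quoted in the abstract
for _ in range(1000):       # integrate to t = 10 * tau
    v = rk4_step(f, t, v, dt)
    t += dt
```

After ten response times the particle velocity has essentially relaxed to the fluid velocity (1 - e^-10 of the way), which RK4 reproduces to high accuracy at this step size.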
Procedia PDF Downloads 191
103 Analysis of Electric Mobility in the European Union: Forecasting 2035
Authors: Domenico Carmelo Mongelli
Abstract:
The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has not produced comprehensive studies of the problem: the literature deals with single aspects of the issue and, moreover, addresses it at the level of individual countries, losing sight of the global implications for the entire EU. The aim of this research is to fill these gaps: the technological, plant engineering, environmental, economic, and employment aspects of this energy transition are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years, until the total disposal of the internal-combustion-engine vehicle fleet of the entire EU. The methodologies adopted consist of the analysis of the entire life cycle of electric vehicles and batteries, through the use of specific databases, and the dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards.
Energy balance sheets will be drawn up (to evaluate the net energy saved); plant balance sheets (to determine the surplus demand for power and electrical energy required and the sizing of new renewable-source plants to cover electricity needs); economic balance sheets (to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments); environmental balances (determining, under the different energy-mix scenarios anticipated for 2035, the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries); and employment balances (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of street charging columns). New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, this research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating their magnitudes and possible improvements within an organic overall picture of the topic. The results achieved make it possible to identify the strengths and weaknesses of the energy transition, to determine possible solutions to mitigate these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.
Keywords: engines, Europe, mobility, transition
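The economic balance sheets mentioned above hinge on payback-time calculations. As a minimal, hedged sketch of that step (the study's actual model and inputs are not given; the figures below are purely illustrative), a simple and a discounted payback can be computed as:

```python
def simple_payback_years(capex, annual_saving):
    """Undiscounted payback: years of constant savings needed to recover capex."""
    return capex / annual_saving

def discounted_payback_years(capex, annual_saving, rate):
    """First whole year in which cumulative discounted savings recover the
    investment; returns None if not recovered within 100 years."""
    cumulative = 0.0
    for year in range(1, 101):
        cumulative += annual_saving / (1.0 + rate) ** year
        if cumulative >= capex:
            return year
    return None

# Illustrative: 10,000 EUR extra investment, 2,500 EUR/year operating savings
years_simple = simple_payback_years(10000.0, 2500.0)            # 4.0 years
years_discounted = discounted_payback_years(10000.0, 2500.0, 0.05)  # 5 years
```

Discounting future savings lengthens the payback, which is why scenario analyses of the 2035 transition are sensitive to the assumed discount rate.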
Procedia PDF Downloads 62
102 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern aligned with theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs or bills. However, simply granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG compared to the SET, and which consumers in a given city would have increased subsidies, which are currently provided in Brazil both for solar energy and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of a particular distribution utility, the solar radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax.
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, although the behind-the-meter modality presents better results, adopting it is more complex because of issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). Considering the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative because it mitigates the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuation of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
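The core comparison behind the policy simulation, SET discount versus behind-the-meter PV, can be sketched as a toy annual-bill model. This is not the study's proprietary model: the function names, the volumetric-discount form of the SET, the self-consumption-only compensation rule, and all numbers are illustrative assumptions; the role of the simultaneity factor mirrors the sensitivity analysis described above.

```python
def annual_bill_with_tariff_discount(consumption_kwh, tariff, discount):
    """Annual bill under a social-tariff discount applied to the volumetric rate."""
    return consumption_kwh * tariff * (1.0 - discount)

def annual_bill_with_pv(consumption_kwh, tariff, pv_generation_kwh, simultaneity):
    """Annual bill with behind-the-meter PV where only self-consumed energy
    (generation * simultaneity factor) offsets purchases from the grid."""
    self_consumed = min(consumption_kwh, pv_generation_kwh * simultaneity)
    return (consumption_kwh - self_consumed) * tariff

# Illustrative household: 1800 kWh/yr, tariff 0.75 BRL/kWh, 65% SET discount,
# 1500 kWh/yr of PV generation
bill_set = annual_bill_with_tariff_discount(1800.0, 0.75, 0.65)       # 472.50
bill_pv_full = annual_bill_with_pv(1800.0, 0.75, 1500.0, 1.0)         # 225.00
bill_pv_partial = annual_bill_with_pv(1800.0, 0.75, 1500.0, 0.4)      # 900.00
```

With full simultaneity PV beats the SET in this toy case, while at 40% simultaneity it does not, which is consistent with the abstract's finding that the better policy varies by region and assumptions.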
Procedia PDF Downloads 112
101 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, up-to-date LC is an important parameter to be considered in atmospheric models, since it accounts for changes to the Earth's surface due to natural and anthropic actions and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and, as a case study, Portugal, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km, and 1 km horizontal resolution).
Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher spatial resolution of its LC data (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). Regarding air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, particularly during spring and summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations, with moderate correlation (0.4 – 0.7), bias (10 – 21 µg.m-3), and RMSE (20 – 30 µg.m-3), and where higher average ozone concentrations were estimated. Comparing both simulations, small differences, grounded in the Leaf Area Index and air temperature values, were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
Keywords: land use, spatial resolution, WRF-Chem, air quality assessment
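The validation statistics quoted above (correlation, bias, RMSE) are standard and easy to state precisely. A minimal sketch of how they are computed from paired observed/modelled concentrations (with bias taken as model minus observation, and illustrative values, not the study's data):

```python
import math

def bias(obs, mod):
    """Mean bias, model minus observation."""
    return sum(m - o for o, m in zip(obs, mod)) / len(obs)

def rmse(obs, mod):
    """Root mean square error of model vs. observation."""
    return math.sqrt(sum((m - o) ** 2 for o, m in zip(obs, mod)) / len(obs))

def pearson_r(obs, mod):
    """Pearson correlation coefficient between observed and modelled series."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod))
    return cov / (so * sm)

# Illustrative ozone concentrations in ug/m3 (not data from the study)
obs = [40.0, 55.0, 60.0, 75.0]
mod = [50.0, 60.0, 70.0, 90.0]
```

On these toy series the model overestimates by 10 µg.m-3 on average while correlating strongly, the same pattern of metrics reported per season and station type in the validation above.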
Procedia PDF Downloads 158
100 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change
Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo
Abstract:
With the changing climate, precipitation events are intensifying in tropical river basins. Since these basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses crop damage and the associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM, and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian Summer Monsoon rainfall. Based on their performance in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR, and MIROC-ESM-CHEM) are selected, with a higher Taylor Skill Score as the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069, and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5.
A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate flood inundation in the delta region. Historical and projected flood risk is defined from the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected time periods as compared to the historical episode. Further, in comparison to the 2010s (2010-2039), all three selected GCMs show an increased flood risk in the 2040s (2040-2069). However, the flood risk then declines in the 2070s as we move towards the end of the century (2070-2099). The flood risk assessment methodology adopted herein is one of its kind and may be implemented in any river basin worldwide. The results obtained from this study can help in future flood preparedness by informing suitable flood adaptation strategies.
Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation
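The step from simulated inundation to risk can be sketched by combining a depth-damage-duration lookup with the design flood's annual exceedance probability. The damage table below is a hypothetical placeholder for illustration only, not the rice damage data used in the study.

```python
# Hypothetical depth-damage-duration table for a paddy crop.
# Rows: inundation depth thresholds (m); columns: duration thresholds (days).
DEPTH_M = [0.5, 1.0, 2.0]
DURATION_DAYS = [3, 7, 14]
DAMAGE_FRACTION = [
    [0.10, 0.30, 0.60],  # depth <= 0.5 m
    [0.30, 0.60, 0.90],  # depth <= 1.0 m
    [0.60, 0.90, 1.00],  # depth <= 2.0 m (or deeper)
]

def damage_fraction(depth_m, duration_days):
    """Crop damage fraction looked up from the depth-damage-duration table."""
    i = next((k for k, d in enumerate(DEPTH_M) if depth_m <= d),
             len(DEPTH_M) - 1)
    j = next((k for k, t in enumerate(DURATION_DAYS) if duration_days <= t),
             len(DURATION_DAYS) - 1)
    return DAMAGE_FRACTION[i][j]

def flood_risk(depth_m, duration_days, crop_value, return_period):
    """Risk = annual exceedance probability x expected crop damage."""
    return (1.0 / return_period) * damage_fraction(depth_m, duration_days) * crop_value
```

Evaluating this cell by cell over the MIKE FLOOD inundation grid, for the historical and each projected 10-year flood, gives risk maps that can be compared across periods.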
Procedia PDF Downloads 75
99 Sensitivity Improvement of Optical Ring Resonator for Strain Analysis with the Direction of Strain Recognition Possibility
Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah
Abstract:
Optical sensors are attractive for their precision, low power consumption, and intrinsic immunity to electromagnetic interference. Among waveguide optical sensors, cavity-based ones have attracted attention for their high Q-factor. Micro-ring resonators have been investigated as a potential platform for applications ranging from biosensors to pressure sensors, thanks to their sensitive ring structure, which responds to any small change in the refractive index. Furthermore, these micron-sized structures can be arranged in arrays, so that each ring resonates at a specific wavelength and can be addressed in this way. Another exciting application is applying a strain to the ring, turning it into an optical strain gauge, whereas traditional strain gauges are based on piezoelectric material, need electrical wiring when arrayed, and are about fifty times bigger. Any physical effect that alters the waveguide cross-section, the waveguide's elasto-optic properties, or the ring circumference can play a role; of these, the change in ring size has the largest effect. Here, an engineered ring structure is investigated to study the effect of strain on the ring's resonance wavelength shift and its potential for more sensitive strain devices. These devices can measure strain when mounted on the surface of interest. The idea is to change the "O"-shaped ring to a "C"-shaped ring with a small opening starting from 2π/360, or one degree. We used the MODE solver of Lumerical software to investigate the effect of changing the ring's opening and the shift induced by applied strain. The designed ring is a three-micron-radius silicon-on-insulator ring, which can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The simulated wavelength shifts from a 1-degree opening of the ring to a 6-degree opening have been investigated.
Opening the ring by 1 degree reduces the ring's quality factor from 3000 to 300, an order-of-magnitude reduction. Assuming a strain widens the ring opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction, from 300 to 280. A ring resonator's quality factor can reach up to 10⁸, for which an order-of-magnitude reduction is negligible. The resonance wavelength showed a blue shift and was obtained to be 1581, 1579, 1578, and 1575 nm for 1-, 2-, 4-, and 6-degree ring openings, respectively. This design can find the direction of the strain by placing the opening on different parts of the ring; moreover, by addressing the specified wavelength, we can precisely determine the direction. This opens a significant opportunity to locate cracks and characterize surface mechanical properties very specifically and precisely. The idea can also be implemented in polymer ring resonators, which can come on a flexible substrate and can be very sensitive to any strain that makes the two ends of the ring at the slit come closer together or further apart.
Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis
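The blue shift with increasing opening angle is qualitatively consistent with a shorter optical path: a ring resonates when m·λ = n_eff·L. A minimal sketch, treating the "C" ring's optical path as the remaining arc length, with an assumed effective index and mode order (both hypothetical, so the absolute wavelengths differ from the reported 1575-1581 nm):

```python
import math

# Hypothetical parameters -- assumed for illustration, not from the study.
N_EFF = 2.4        # assumed effective index of the SOI waveguide
RADIUS_UM = 3.0    # ring radius from the study (3 micron)
MODE_ORDER = 28    # assumed azimuthal mode number m

def resonance_wavelength_nm(opening_deg):
    """Resonance wavelength of a 'C'-shaped ring, approximating the
    optical path as the remaining arc: m * lambda = n_eff * L."""
    arc_um = 2 * math.pi * RADIUS_UM * (360 - opening_deg) / 360
    return N_EFF * arc_um * 1e3 / MODE_ORDER  # micron -> nm
```

Under this crude approximation the resonance wavelength decreases monotonically as the opening widens, reproducing the blue-shift trend; a full-vectorial mode solver such as Lumerical MODE is of course needed for quantitative values.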
Procedia PDF Downloads 126