Search results for: classroom simulations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3147

177 Oscillating Water Column Wave Energy Converter with Deep Water Reactance

Authors: William C. Alexander

Abstract:

The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the ‘stabilizer’; a hollow cylinder at the top of the device, open to the sea at the bottom and sealed at the top save for an orifice that leads to an air turbine; and a long, narrow rod connecting said stabilizer with said cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of flotation in the cylinder keep the device upright in the sea. The flotation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater moves up and down within the cylinder, producing the ‘oscillating water column’. This causes the air pressure within the cylinder to alternate between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through the orifice in the top cover. An air turbine situated within or immediately adjacent to said orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. Said oscillating air pressure produces large up and down forces on the cylinder. These large forces are opposed, through the rod, by the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep water reactance. The cylinder and stabilizer form a spring-mass system with a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size (and water mass within) of the stabilizer determines said resonant frequency. Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a long span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on said rod by shaping the stabilizer as a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions: the cylinder 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12-second period, with a relatively flat power curve between 5- and 16-second wave periods, as is suitable for an open-ocean location. This is nominally 10 times the power of similar-sized WEC spar buoys reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters.
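
As a rough cross-check of the spring-mass reasoning above, the following Python sketch estimates the heave resonant period from the quoted dimensions, using the hydrostatic stiffness of the cylinder's waterplane and the water mass inside the spherical stabilizer. Added mass and the internal water column are neglected, so this is an order-of-magnitude illustration rather than the authors' simulation model; under these assumptions it lands near the quoted 12-second design period.

```python
import math

# Heave-resonance estimate for the cylinder/stabilizer spring-mass system.
# Dimensions are taken from the abstract; entrained (added) water mass and the
# water column inside the cylinder are ignored, so this is only a sketch.
rho = 1025.0          # seawater density, kg/m^3
g = 9.81              # gravity, m/s^2

D_cyl = 16.0          # cylinder diameter, m
D_stab = 25.0         # stabilizer (sphere) diameter, m

# Hydrostatic restoring stiffness from the cylinder's waterplane area
A_w = math.pi * D_cyl**2 / 4.0        # waterplane area, m^2
k = rho * g * A_w                     # stiffness, N/m

# Reacting mass: seawater retained inside the spherical stabilizer
m = rho * (4.0 / 3.0) * math.pi * (D_stab / 2.0)**3   # kg

omega = math.sqrt(k / m)              # natural frequency, rad/s
T = 2.0 * math.pi / omega             # heave resonant period, s
print(f"k = {k:.3e} N/m, m = {m:.3e} kg, T = {T:.1f} s")  # T is roughly 13 s
```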

Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer

Procedia PDF Downloads 106
176 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners’ engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength, or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. In the past, many students struggled to achieve this requirement. It was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with the lecturer's voice-over were provided in advance of a total of four teaching sessions (two hours/session), incorporating the core content of each session and talking through how the lecturer approached the calculations, to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to determine that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students/group) and to discuss the questions created by their peers in their groups, to promote deep conceptual learning. Students were also given time for a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets (the test grades of students who received blended learning instruction and those of students who received instruction in a standard lecture format in class) to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that the blended learning instructional approach led to higher levels of student engagement, satisfaction, and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable method of incorporating active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
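
The binomial comparison of the two cohorts' results could, for instance, take the form of a two-proportion z-test on pass rates, as in the sketch below. The counts are placeholders rather than the study's data, and the z-test is only one plausible reading of the "binomial statistics" mentioned above.

```python
import math
from scipy.stats import norm

# Hypothetical two-proportion z-test comparing pass rates (>= 70%) on the
# calculations test between a blended-learning cohort and a lecture-format
# cohort. The counts below are placeholders, not the study's data.
pass_blended, n_blended = 118, 130
pass_lecture, n_lecture = 96, 130

p1 = pass_blended / n_blended
p2 = pass_lecture / n_lecture
p_pool = (pass_blended + pass_lecture) / (n_blended + n_lecture)

# Pooled standard error under the null hypothesis of equal proportions
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_blended + 1 / n_lecture))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))   # two-sided p-value
print(f"pass rates: {p1:.2f} vs {p2:.2f}, z = {z:.2f}, p = {p_value:.4f}")
```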

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 172
175 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers through different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in Southern Edmonton was chosen as a case study because the well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational areas. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, which means that only the main drainage features were represented. Additionally, this model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which does not suit the study of water quality. The first goal was therefore to complete the modelling and updating of all stormwater network components. Then, available GIS data were used to calculate different catchment properties such as slope, length, and imperviousness. In order to calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the preceding summer, were used along with records of two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested, because it is an approximate representation of the catchment shape. Surface roughness coefficients were also calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all found to be in acceptable ranges.
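
The goodness-of-fit measures quoted above (correlation coefficient, peak error, volume error, maximum differences) can be computed from paired observed and simulated flow series as in the following sketch; the synthetic series here merely stand in for the monitoring-station records and MIKE URBAN output.

```python
import numpy as np

# Sketch of the validation metrics reported above, computed from paired
# observed and simulated flow series at one monitoring station (synthetic
# data; variable names are illustrative, not from the MIKE URBAN model).
rng = np.random.default_rng(1)
observed = np.abs(rng.normal(1.0, 0.4, 200))
simulated = observed * 0.95 + rng.normal(0.0, 0.05, 200)

r = np.corrcoef(observed, simulated)[0, 1]                  # correlation coefficient
peak_error = (simulated.max() - observed.max()) / observed.max() * 100
volume_error = (simulated.sum() - observed.sum()) / observed.sum() * 100
max_pos = (simulated - observed).max()                      # max positive difference
max_neg = (simulated - observed).min()                      # max negative difference

print(f"r = {r:.3f}, peak error = {peak_error:.2f}%, volume error = {volume_error:.2f}%")
print(f"max +diff = {max_pos:.2f}, max -diff = {max_neg:.2f}")
```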

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 296
174 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating whose parameters are perfectly matched to a given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation depending on the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave, with the electric field perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced on the grating by every single slit due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation toward the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex. One finds this value by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the performed theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
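
The angular spectrum step can be illustrated numerically: a grating of period Λ phase-modulates the incident field, and its Fourier transform exposes spatial-frequency components at k_x = k0·sinθ + 2πm/Λ, one of which should match Re(k_spp). The sketch below uses illustrative values (a HeNe wavelength, an assumed effective SPP index, an assumed modulation depth), not the paper's FDTD parameters.

```python
import numpy as np

# Minimal angular-spectrum check of grating-assisted phase matching: the
# grating's periodic phase modulation adds spatial-frequency components to
# the incident field, and a surface plasmon is launched when one component
# matches Re(k_spp). All values below are illustrative assumptions.
wavelength = 632.8e-9                  # HeNe line, m (assumed)
k0 = 2 * np.pi / wavelength
n_spp = 1.05                           # assumed effective SPP index
k_spp = n_spp * k0

period = wavelength / n_spp            # first-order matching, normal incidence
x = np.linspace(0, 50 * period, 4096)
field = np.exp(1j * 0.3 * np.cos(2 * np.pi * x / period))  # phase-modulated field

spectrum = np.fft.fftshift(np.fft.fft(field))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))

# Fraction of power in the component closest to k_spp: a crude figure of merit
idx = np.argmin(np.abs(kx - k_spp))
fom = np.abs(spectrum[idx])**2 / np.sum(np.abs(spectrum)**2)
print(f"power fraction coupled near k_spp: {fom:.3f}")
```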

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283
173 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience

Authors: Eugenia Vomvoridi-Ivanovic

Abstract:

It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case, since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (hereafter, culturally diverse students, or CDS) usually observe ineffective mathematics teaching practices that stand in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and Mathematics (STEM) education and to their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom, since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community-based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field notes, (c) audio recordings of weekly methods course meetings, and (d) other document data (e.g., PST- and student-generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefited greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course, and the PSTs gained a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular. Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable context for pre-service teachers' learning to teach mathematics to CDS.

Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students

Procedia PDF Downloads 130
172 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok

Authors: Pratima Pokharel

Abstract:

When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which the resulting infections pose a risk to public health remains unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok, the probability of infection with diarrheal diseases in the wet season is higher for the age group between 15 and 44. The probability of infection is highest for children under 5 years, but this group is not influenced by wet weather. Further, this study examined the vulnerability factors that lead to health risks from urban flooding. Several vulnerability variables were found to contribute to health risks from flooding; for the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the types of houses they live in, the study shows the spatial distribution of economic status in the vulnerability maps. The vulnerability map result shows that people living in Sukhumvit have low vulnerability to health risks with respect to the types of houses they live in. In addition, the probability of infection with diarrhea was analyzed by age. Moreover, a field survey was carried out to validate the vulnerability of people. It showed that health vulnerability depends on economic status, income level, and education. The result depicts that people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with a 2-year rainfall event to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow. The 1D results show higher concentrations for dry-weather flows and a large dilution at the commencement of a rainfall event, the concentration dropping due to the runoff generated after rainfall. The model also produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study ran 5-year and 10-year rainfall simulations to show the variation in health hazards and risks. It was found that even though hazard coverage is highest with the 10-year rainfall event among the three rainfall events, the risk was observed to be the same for the 5-year and 10-year rainfall events.
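
As an illustration of the 1D advection-dispersion transport used above for fecal coliform, the following sketch integrates dC/dt + u dC/dx = D d²C/dx² - kC with an explicit upwind scheme; the velocity, dispersion, and decay values are assumptions for demonstration, not the study's calibrated parameters.

```python
import numpy as np

# Explicit finite-difference sketch of 1D advection-dispersion with first-order
# decay, the transport process used for fecal coliform in the storm sewers.
# Parameter values are assumptions for illustration only.
L, nx = 500.0, 251                 # sewer reach length (m), grid points
dx = L / (nx - 1)
u, D, k = 0.5, 1.0, 1e-4           # velocity m/s, dispersion m^2/s, decay 1/s
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # respect advective and diffusive limits

C = np.zeros(nx)
C[0] = 1000.0                      # upstream coliform boundary (arbitrary units)

for _ in range(2000):
    dCdx = (C[1:-1] - C[:-2]) / dx                    # upwind advection (u > 0)
    d2Cdx2 = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # central dispersion
    C[1:-1] += dt * (-u * dCdx + D * d2Cdx2 - k * C[1:-1])
    C[-1] = C[-2]                                     # outflow boundary

print(f"peak concentration after {2000 * dt:.0f} s: {C.max():.1f}")
```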

Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework

Procedia PDF Downloads 75
171 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure is modelled as a Markov chain over the transition states, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
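
A minimal sketch of the Markov-chain deterioration and Monte Carlo evaluation described above follows; the five-grade annual transition matrix is an illustrative assumption for a no-maintenance scenario, not the matrix estimated from the UK condition assessment data.

```python
import numpy as np

# Monte Carlo sketch of condition-grade deterioration as a Markov chain.
# Five grades (1 = very good ... 5 = failed); the annual transition matrix
# below is an illustrative assumption, not the paper's estimate.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # failed is absorbing (no maintenance)
])

rng = np.random.default_rng(0)
n_sims, horizon = 10_000, 50
failures_by_year = np.zeros(horizon)

for _ in range(n_sims):
    grade = 0                               # start in grade 1 (index 0)
    for year in range(horizon):
        grade = rng.choice(5, p=P[grade])   # one annual transition
        if grade == 4:
            failures_by_year[year:] += 1    # record time-dependent failure
            break

prob_failure = failures_by_year / n_sims    # cumulative probability of failure
print(f"P(failure) by year 20: {prob_failure[19]:.3f}, by year 50: {prob_failure[49]:.3f}")
```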

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 233
170 Buddhism and Education for Children: Cultivating Wisdom and Compassion

Authors: Harry Einhorn

Abstract:

This paper aims to explore the integration of Buddhism into educational settings with the goal of fostering the holistic development of children. By incorporating Buddhist principles and practices, educators can create a nurturing environment that cultivates wisdom, compassion, and ethical values in children. The teachings of Buddhism provide valuable insights into mindfulness, compassion, and critical thinking, which can be adapted and applied to educational curricula to enhance children's intellectual, emotional, and moral growth. One of the fundamental aspects of Buddhist philosophy that is particularly relevant to education is the concept of mindfulness. By introducing mindfulness practices, such as meditation and breathing exercises, children can learn to cultivate present-moment awareness, develop emotional resilience, and enhance their ability to concentrate and focus. These skills are essential for effective learning and can contribute to reducing stress and promoting overall well-being in children. Mindfulness practices can also teach children how to manage their emotions and thoughts, promoting self-regulation and creating a positive classroom environment. In addition to mindfulness, Buddhism emphasizes the cultivation of compassion and empathy toward all living beings. Integrating teachings on kindness, empathy, and ethical behavior into the educational framework can help children develop a deep sense of interconnectedness and social responsibility. By engaging children in activities that promote empathy and encourage acts of kindness, such as community service projects and cooperative learning, educators can foster the development of compassionate individuals who are actively engaged in creating a more harmonious and compassionate society. Moreover, Buddhist teachings encourage critical thinking and inquiry, which are crucial skills for intellectual development. By introducing children to fundamental Buddhist concepts such as impermanence, interdependence, and the nature of suffering, educators can engage them in philosophical reflections and broaden their perspectives on life. These teachings promote open-mindedness, curiosity, and a deeper understanding of the interconnectedness of all things. Through the exploration of these concepts, children can develop critical thinking skills and gain insights into the complexities of the world, enabling them to navigate challenges with wisdom and discernment. While integrating Buddhism into education requires sensitivity, cultural awareness, and respect for diverse beliefs and backgrounds, it holds great potential for nurturing the holistic development of children. By incorporating mindfulness practices, fostering compassion and empathy, and promoting critical thinking, Buddhism can contribute to the creation of a more compassionate, inclusive, and harmonious educational environment. This integration can shape well-rounded individuals who are equipped with the necessary skills and qualities to navigate the complexities of the modern world with wisdom, compassion, and resilience. In conclusion, the integration of Buddhism into education offers a valuable framework for cultivating wisdom, compassion, and ethical values in children. By incorporating mindfulness, compassion, and critical thinking into educational practices, educators can create a supportive environment that promotes children's holistic development. 
By nurturing these qualities, Buddhism can help shape individuals who are not only academically proficient but also morally and ethically responsible, contributing to a more compassionate and harmonious society.

Keywords: Buddhism, education, children, mindfulness

Procedia PDF Downloads 63
169 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and slight deviations remain in terms of scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried into a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study of geometric similarity of airfoil parameters and surface mesh quality in CFD calculations is conducted to establish how well different parameterization methods apply at different airfoil scales. The research objects are three airfoil scales, namely the wing root and wingtip of a conventional civil aircraft and the wing root of the giant blended wing, each represented by three parameterization methods, to compare the calculation differences between different sizes of airfoils. In this study, the constants are the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to improve the accuracy of the aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which will face the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, the quantitative balance cannot be defined directly, and the decisions must be made iteratively by adding and subtracting. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
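
For reference, the CST method mentioned above builds an airfoil surface as a class function multiplied by a Bernstein-polynomial shape function (Kulfan's formulation). The sketch below shows the construction with illustrative weights, not the values used in this study.

```python
import numpy as np
from math import comb

# Kulfan CST parameterization sketch: surface = class function C(x) = x^N1 *
# (1-x)^N2 (N1 = 0.5, N2 = 1.0 for a round nose and sharp trailing edge)
# times a Bernstein-polynomial shape function. Weights are illustrative.
def cst_surface(weights, x, n1=0.5, n2=1.0, dz_te=0.0):
    """Upper- or lower-surface ordinates for CST weights at stations x in [0, 1]."""
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(
        w * comb(n, i) * x**i * (1.0 - x)**(n - i)   # Bernstein basis term i
        for i, w in enumerate(weights)
    )
    return class_fn * shape_fn + x * dz_te           # optional trailing-edge offset

x = np.linspace(0.0, 1.0, 101)
upper = cst_surface([0.17, 0.16, 0.15, 0.14], x)     # 4 control weights per surface
lower = cst_surface([-0.17, -0.16, -0.15, -0.14], x)
print(f"max thickness ~ {np.max(upper - lower):.4f} chord")
```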

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
168 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact a model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect using mathematical modeling and then validate our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect a model’s dynamics even at the qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions in the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how even a relatively small scale transformation introduces changes both at the qualitative level (i.e., in the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
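
The effect can be reproduced in a few lines: run a standard bounded-confidence (Deffuant) model on the same initial opinions measured on two monotonically related scales. The sketch below is an illustration of the general point, not the authors' exact models or renormalization procedure.

```python
import numpy as np

# Sketch of the scale-transformation effect on a bounded-confidence (Deffuant)
# model: the same initial population is simulated on the original opinion scale
# and on a monotonically transformed scale (x -> x^2). Both scales are equally
# valid measurements of the same opinions, yet the dynamics can end differently.
rng = np.random.default_rng(42)

def deffuant(opinions, eps=0.2, mu=0.5, steps=200_000):
    x = opinions.copy()
    for _ in range(steps):
        i, j = rng.integers(0, x.size, 2)
        if abs(x[i] - x[j]) < eps:        # interact only within the confidence bound
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

initial = rng.uniform(0, 1, 500)
final_raw = deffuant(initial)             # dynamics on the original scale
final_transformed = deffuant(initial**2)  # same opinions, rescaled first

def n_clusters(x, bins=20, min_size=5):
    """Count opinion clusters by binning the final opinions."""
    return int(np.count_nonzero(np.histogram(x, bins=bins)[0] > min_size))

print(f"clusters (raw scale): {n_clusters(final_raw)}, "
      f"clusters (x^2 scale): {n_clusters(final_transformed)}")
```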

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 119
167 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection in a closed rectangular cavity representing the refrigerating compartment is considered. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy equations (Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) are studied on the energy consumption, pull-down time, and temperature distribution in the cabinet. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined based on the CFD simulations, and the simulation results are supplied to Design of Experiments (DOE) as input data for optimization. The best configuration with minimum energy consumption that provides the minimum temperature difference between the shelves inside the cabinet is determined.

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 109
166 Material Chemistry Level Deformation and Failure in Cementitious Materials

Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo

Abstract:

Cementitious materials are an excellent example of highly complex, heterogeneous material systems. These cement-based systems include cement paste, mortar, and concrete, which are heavily used in civil infrastructure; yet, although commonly used, they are among the most complex materials in terms of morphology and structure, far more so than, for example, crystalline metals. Processes and features occurring at nanometer-sized morphological structures affect the performance and deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating complex microstructures and associated molecular structures that vary with hydration. A fundamental understanding can be gained through multi-scale modeling of the behavior and properties of cementitious materials, starting from the material chemistry level at the atomistic scale, to explore their role and the manifested effects at larger length and engineering scales. This predictive modeling enables understanding and studying the influence of material chemistry level changes and nanomaterial additives on the expected resultant material characteristics and deformation behavior. Atomistic molecular dynamics modeling is required to couple material science to engineering mechanics. Starting at the molecular level, a comprehensive description of the material’s chemistry is required to understand the fundamental properties that govern behavior occurring across each relevant length scale. Material chemistry level models and molecular dynamics simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and their associated deformation and failure. The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper focuses on our recent work on the shear and compressive deformation and failure behavior of CSH represented by Jennite. The deformation and failure behavior of traditional hydrated CSH under shear and compressive loading will be discussed, including the effect of material chemistry changes on the predicted stress-strain behavior, the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes to it.

Keywords: cementitious materials, deformation, failure, material chemistry modeling

Procedia PDF Downloads 286
165 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can manipulate an expressive number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrog time stepping of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, we seek an ideal cell size so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of this method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. Mur's absorbing boundary condition is used as the absorbing boundary of the FDTD method. The validation of the model is done through comparison between the results obtained by the FDTD method, through the electric field values and the currents in the components, and the analytical results using circuit parameters.
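
For reference, the Courant criterion mentioned above fixes the maximum stable time step once the Yee cells are chosen. The sketch below sweeps example cell sizes, standing in for the parametric analysis (the discretizations actually used in the paper are not reproduced here).

```python
import math

# Courant stability limit for the 3D Yee grid: the time step must satisfy
# dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)). Cell sizes are example
# values for a parametric sweep, not the paper's final discretization.
c0 = 299_792_458.0                       # speed of light in vacuum, m/s

def courant_dt(dx, dy, dz, c=c0):
    """Maximum stable FDTD time step for a uniform Yee cell (dx, dy, dz)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

for scale in (1.0, 0.5, 0.25):           # sweep the Yee cell size
    d = 1e-3 * scale                     # cubic cell edge, m
    print(f"cell = {d*1e3:.2f} mm  ->  dt_max = {courant_dt(d, d, d)*1e12:.3f} ps")
```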

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 153
164 Near-Peer Mentoring/Curriculum and Community Enterprise for Environmental Restoration Science

Authors: Lauren B. Birney

Abstract:

The BOP-CCERS (Billion Oyster Project - Curriculum and Community Enterprise for Restoration Science) Near-Peer Mentoring Program provides the long-term (five-year) support network to motivate and guide students toward restoration science-based CTE pathways. Students are selected from middle schools with actively participating BOP-CCERS teachers. Teachers will nominate students from grades 6-8 to join cohorts of between 10 and 15 students each. Cohorts are comprised primarily of students from the same school in order to facilitate mentors' travel logistics as well as to sustain connections with students and their families. Each cohort is matched with an exceptional undergraduate or graduate student, either a BOP research associate or a STEM mentor recruited from collaborating City University of New York (CUNY) partner programs. In rare cases, an exceptional high school junior or senior may be matched with a cohort in addition to a research associate or graduate student. In no case is a high school student or minor placed individually with a cohort. Mentors meet with students at least once per month and provide at least one offsite field visit per month, either to a local STEM Hub or a research lab. In keeping with its five-year trajectory, the near-peer mentoring program will seek to retain students in the same cohort with the same mentor for the full duration of middle school and for at least two additional years of high school. Upon a student reaching the final quarter of 8th grade, the mentor will develop a meeting plan for each individual mentee. The mentee and the mentor will be required to meet individually or in small groups once per month. Once per quarter, individual meetings will be substituted with full-cohort professional outings, in which the mentor will organize the entire cohort for a field visit or an educational workshop with a museum or aquarium partner. In addition to the mentor-mentee relationship, each participating student will also be asked to conduct and present his or her own BOP field research. This research is ideally carried out with the support of the student's regular high school STEM subject teacher; however, in cases where the teacher or school does not permit independent study, the student will be asked to conduct the research on an extracurricular basis. Near-peer mentoring affects students' social identities and helps them to connect to role models from similar groups, ultimately giving them a sense of belonging. Qualitative and quantitative analyses were performed throughout the study; interviews and focus groups were also conducted. Additionally, an external evaluator was utilized to ensure project efficacy, efficiency, and effectiveness throughout the entire project. The BOP-CCERS Near-Peer Mentoring program is a peer support network in which high school students with interest or experience in BOP (Billion Oyster Project) topics and activities (such as classroom oyster tanks, STEM Hubs, or digital platform research) provide mentorship and support for middle school or high school freshman mentees. Peer mentoring not only empowers the students being taught but also increases the content knowledge and engagement of the mentors. This support provides the necessary resources, structure, and tools to assist students in finding success.

Keywords: STEM education, environmental science, citizen science, near peer mentoring

Procedia PDF Downloads 91
163 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root cause analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations. Best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were: improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: Suggested interventions from phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how the interventions could be improved. Interventions were modified with subsequent simulations until the recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In Scenario A, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In Scenario B, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial for identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (the epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes in memory aids). Given that even the most successful interventions needed modifications and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 152
162 Evolutionary Advantages of Loneliness with an Agent-Based Model

Authors: David Gottlieb, Jason Yoder

Abstract:

The feeling of loneliness is not uncommon in modern society, and yet there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance resulting in social withdrawal, which may appear maladaptive in modern society. So far, no computational model of loneliness’ effect during evolution exists; however, agent-based models (ABMs) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward their social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and the presence of disease. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there was evidence of a richness of social behavior when loneliness was present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move toward another that was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
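
A minimal sketch of the agent rules described above follows: boid-style cohesion, separation, and alignment, a PSO-style personal-best goal term, and a loneliness state that damps social drives when an agent's neighbor count falls below an expected level (a stand-in for hypervigilant withdrawal). All weights and thresholds are assumptions, not the evolved values from the study.

```python
import numpy as np

# Boid-style agents with a PSO-inspired goal term and a simple loneliness
# mechanism. Weights, radii, and thresholds are illustrative assumptions.
rng = np.random.default_rng(7)
N, radius, expected_neighbors = 50, 5.0, 3
pos = rng.uniform(0, 50, (N, 2))
vel = rng.normal(0, 0.5, (N, 2))
best = pos.copy()                      # personal best (resource-wise) positions

def step(pos, vel, best, w_social=0.05, w_goal=0.02):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)
        lonely = nbrs.sum() < expected_neighbors
        # hypervigilant withdrawal: a lonely agent damps its social drives
        w = w_social * (0.2 if lonely else 1.0)
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]
            separation = (pos[i] - pos[nbrs]).sum(axis=0)
            alignment = vel[nbrs].mean(axis=0) - vel[i]
            new_vel[i] += w * (cohesion + 0.5 * separation + alignment)
        new_vel[i] += w_goal * (best[i] - pos[i])    # PSO-style goal seeking
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel, best)
print(f"mean speed after 100 steps: {np.linalg.norm(vel, axis=1).mean():.3f}")
```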

Keywords: agent-based, behavior, evolution, loneliness, social

Procedia PDF Downloads 96
161 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by David Wood's work on small wind turbine design. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
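
As an example of the wind-modeling step, a two-parameter Weibull distribution (a common parametric model for wind speed, and one candidate among the several estimation methods mentioned) can be fitted to the measured speeds; the sample below is synthetic and stands in for the correlated on-site/airport data.

```python
from scipy.stats import weibull_min

# Fit a two-parameter Weibull distribution to wind speed measurements.
# The synthetic sample below stands in for the rooftop/airport-correlated data.
wind_speeds = weibull_min.rvs(2.0, scale=6.0, size=2000, random_state=3)

shape, loc, scale = weibull_min.fit(wind_speeds, floc=0)   # fix location at 0
print(f"Weibull shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# Mean wind speed implied by the fitted model, for cross-checking the sample
print(f"fitted mean = {weibull_min.mean(shape, loc, scale):.2f} m/s, "
      f"sample mean = {wind_speeds.mean():.2f} m/s")
```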

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 231
160 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximated input data on the project surroundings produce imprecise results for this type of analysis. They can only be used to capture the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by Wi-Fi with central equipment, which shares data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and then the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases the connected ultrasonic anemometers, which were deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows considering different ways of capturing wind energy. The objective of this approach is to categorize the different types of wind in urban areas. In particular, the identification of the minimum and maximum wind spectrum helps define the choice and performance of wind energy capturing devices that could be installed there: the location on the roof of a building, the type of wind, the altimetry of the device in relation to roof levels, and the potential nuisances generated. The method allows identifying the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind.

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 101
159 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of the person's latent trait on the construct statement A measures and statement A's utility, versus the sum of the person's latent trait on the construct statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To advance these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is the within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that, among the four methods, the last performed the best.
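
A minimal sketch of the dichotomous RIM response probability and one Fisher-information selection step with the first (freeze-based) exposure control; the item structure, parameter values, freeze threshold, and the scalar information proxy are illustrative simplifications of the model described above:

    import numpy as np

    def rim_prob_A(theta, item):
        """P(prefer statement A over B) under the dichotomous RIM:
        log-odds = (theta_a + delta_a) - (theta_b + delta_b), where
        theta_* are the person's traits on the two constructs and
        delta_* are the statement utilities."""
        a, b = item["dims"]          # construct indices of statements A, B
        da, db = item["utilities"]   # statement utilities
        logit = (theta[a] + da) - (theta[b] + db)
        return 1.0 / (1.0 + np.exp(-logit))

    def select_item(theta, items, exposure, max_exposure=3):
        """Pick the unfrozen item with maximum Fisher information.
        An item is frozen once either of its statements has been shown
        more than max_exposure times (illustrative threshold). The scalar
        p(1-p) is a proxy; full MCAT uses the multidimensional
        information matrix."""
        best, best_info = None, -np.inf
        for it in items:
            if any(exposure[s] > max_exposure for s in it["statements"]):
                continue  # within-person statement exposure control
            p = rim_prob_A(theta, it)
            info = p * (1.0 - p)
            if info > best_info:
                best, best_info = it, info
        return best

    theta = np.array([0.5, -0.3, 1.1])  # traits on three constructs
    items = [
        {"statements": ("s1", "s2"), "dims": (0, 1), "utilities": (0.2, -0.1)},
        {"statements": ("s3", "s4"), "dims": (1, 2), "utilities": (0.0, 0.4)},
    ]
    exposure = {"s1": 0, "s2": 0, "s3": 4, "s4": 0}  # s3 is over-exposed
    print(select_item(theta, items, exposure)["statements"])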

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 246
158 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes, particularly in the food industry, is the double-pipe heat exchanger (DPHx). To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area, thereby increasing the convective surface area. This enhances heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchanger efficiency is computational fluid dynamics (CFD), a complement to experimental studies as well as a preliminary step in the design of heat exchangers. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, a smooth tube and a spirally corrugated tube, has been analysed. Experimental analyses and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 were carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes in turbulent flow. To validate the numerical results, an experimental setup was used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests were carried out for the smooth tube and for the corrugated one. In all the tests, the hot fluid had a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rate was 25 l/min (Test 1) or 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes are stainless steel with an external diameter of 24 mm, a wall thickness of 1 mm, and a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of corrugation height to diameter (H/D), the dimensionless helical pitch (P/D), and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and experimental results, with the smallest differences found for the fluid temperatures. In all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental value, with deviations ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than that of the smooth tube.
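
For reference, the non-dimensional corrugation parameters quoted above, and the cold-side heat duty they feed into, can be computed directly; in this sketch the outlet temperature and the water properties are illustrative assumptions, not values from the study:

    # Corrugation parameters for the geometry quoted above.
    D = 24e-3        # inner tube external diameter, m
    H = 1.1e-3       # corrugation height, m
    P = 25e-3        # helical pitch, m

    print(f"H/D = {H / D:.4f}")             # corrugation-to-diameter ratio
    print(f"P/D = {P / D:.4f}")             # dimensionless helical pitch
    print(f"SI  = {H**2 / (P * D):.5f}")    # severity index H^2/(P*D)

    # Cold-side heat duty Q = m_dot * cp * (T_out - T_in) for Test 1.
    rho, cp = 997.0, 4180.0                 # assumed water properties
    m_dot = 25 / 60000 * rho                # 25 l/min converted to kg/s
    T_in, T_out = 22.1, 30.0                # T_out is illustrative only
    Q = m_dot * cp * (T_out - T_in)
    print(f"Q = {Q / 1000:.1f} kW (illustrative)")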

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 147
157 A Triple Win: Linking Students, Academics, and External Organisations to Provide Real-World Learning Experiences with Real-World Benefits

Authors: Anne E. Goodenough

Abstract:

Students often learn best ‘on the job’ through holistic real-world projects. They need real-world experiences to make classroom learning applicable and to increase their employability. Academics typically value working on projects where new knowledge is created and have a genuine desire to help students engage with learning and develop new skills. They might also be under institutional pressure to enhance student engagement, retention, and satisfaction. External organizations - especially non-governmental bodies, charities, and small enterprises - often have fundamental and pressing questions, but lack the manpower and academic expertise to answer them effectively. They might also be on the lookout for talented potential employees. This study examines ways in which these diverse requirements can be met simultaneously by creating three-way projects that provide excellent academic and real-world outcomes for all involved. It studied a range of innovative projects across the natural sciences (biology, ecology, physical geography) and social sciences (human geography, sociology, criminology, and community engagement) to establish how best to harness the potential of this powerful approach. Focal collaborations included: (1) development of practitioner-linked modules; (2) frameworks where students collected/analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; and (4) immersive fieldwork projects in novel locations to allow students to engage first-hand with contemporary issues as diverse as rhino poaching in South Africa, segregation in Ireland, and gun crime in Florida. Although there was no ‘magic formula’ for success, the approach was found to work best when small projects were developed that were achievable in a short time-frame, both to tie into modular curricula and to meet the immediacy expectations of many link organizations. Bigger projects worked well in some cases, especially when they were essentially a series of linked smaller projects, either running concurrently or successively, with each building on previous work. Opportunities were maximized when there were tangible benefits to the link organization, as this generally increased the organization's investment in the project and motivated students too. Finding the right approach for a given project was key: it was vital to ensure that something that could work effectively as an independent research project for one student, for example, was not shoehorned into being a project for multiple students within a taught module. In general, students were very positive about collaboration projects. They identified benefits to confidence, time-keeping, and communication, and conveyed their enthusiasm when their work was of benefit to the wider community. Several students have gone on to do further work with the link organization in a voluntary capacity or as paid staff, or have used the experiences to help them break into the ever-more competitive job market in other ways. Although this approach involves a substantial time investment, especially from academics, the benefits can be profound. The approach has strong potential to engage students, help retention, improve student satisfaction, and teach new skills; keep the knowledge of academics fresh and current; and provide valuable tangible benefits for link organizations: a real triple win.

Keywords: authentic learning, curriculum development, effective education, employability, higher education, innovative pedagogy, link organizations, student experience

Procedia PDF Downloads 219
156 Phytochemical and Antimicrobial Properties of Zinc Oxide Nanocomposites on Multidrug-Resistant E. coli Enzyme: In-vitro and in-silico Studies

Authors: Callistus I. Iheme, Kenneth E. Asika, Emmanuel I. Ugwor, Chukwuka U. Ogbonna, Ugonna H. Uzoka, Nneamaka A. Chiegboka, Chinwe S. Alisi, Obinna S. Nwabueze, Amanda U. Ezirim, Judeanthony N. Ogbulie

Abstract:

Antimicrobial resistance (AMR) is a major threat to the global health sector. Zinc oxide nanocomposites (ZnONCs), composed of zinc oxide nanoparticles and phytochemicals from an Azadirachta indica aqueous leaf extract, were assessed for their physico-chemical properties and for their in silico and in vitro antimicrobial activity against multidrug-resistant Escherichia coli enzymes. Gas chromatography coupled with mass spectrometry (GC-MS) analysis of the ZnONCs revealed the presence of twenty volatile phytochemical compounds, among which is scoparone. Characterization of the ZnONCs was done using ultraviolet-visible spectroscopy (UV-vis), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). The dehydrogenase enzyme converts colorless 2,3,5-triphenyltetrazolium chloride to red triphenyl formazan (TPF), and the rate of formazan formation in the presence of ZnONCs is proportional to the enzyme activity; the colored product is extracted, its absorbance is measured at 500 nm, and the percentage of enzyme activity is calculated. Density functional theory (DFT) analysis, molecular docking, and molecular dynamics simulations were employed to determine the bioactive components of the ZnONCs, characterize their binding to the enzymes, and evaluate the stability of the enzyme-ligand complexes, respectively. The results showed arrays of ZnONC nanorods with maximal absorption wavelengths of 320 nm and 350 nm, thermally stable over the temperature range of 423.77 to 889.69°C. The in vitro study assessed the dehydrogenase-inhibitory properties of the ZnONCs, of a conjugate of ZnONCs and ampicillin (ZnONCs-amp), of the aqueous leaf extract of A. indica, and of ampicillin (the standard drug). The findings revealed that at a concentration of 500 μg/mL, ZnONCs inhibited 57.89% of the enzyme activity, compared to 33.33% for the standard drug (ampicillin) and 21.05% for the aqueous leaf extract of A. indica. The inhibition of enzyme activity by the ZnONCs at 500 μg/mL was further enhanced to 89.74% by conjugation with ampicillin. The in silico study on the ZnONCs revealed scoparone as the most viable competitor of nicotinamide adenine dinucleotide (NAD⁺) for the coenzyme binding pocket on E. coli malate and histidinol dehydrogenase. From these findings, it can be concluded that the scoparone component of the nanocomposites, in synergy with the zinc oxide nanoparticles, inhibited E. coli malate and histidinol dehydrogenase by competitively binding to the NAD⁺ pocket, and that conjugation of the ZnONCs with ampicillin further enhanced the antimicrobial efficiency of the nanocomposite against multidrug-resistant E. coli.
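
The inhibition percentages above follow from simple assay arithmetic; a minimal sketch with hypothetical absorbance readings (formazan absorbance at 500 nm is proportional to residual enzyme activity):

    # Percent dehydrogenase inhibition from formazan absorbance at 500 nm,
    # as described above; the absorbance values themselves are hypothetical.
    def percent_inhibition(a_control: float, a_treated: float) -> float:
        """Formazan formation is proportional to enzyme activity, so
        inhibition = (A_control - A_treated) / A_control * 100."""
        return (a_control - a_treated) / a_control * 100.0

    a_control = 0.76                  # untreated E. coli suspension
    treatments = {
        "ZnONCs": 0.32,               # illustrative readings only
        "ZnONCs-amp": 0.08,
        "ampicillin": 0.51,
        "A. indica extract": 0.60,
    }
    for name, a in treatments.items():
        print(f"{name}: {percent_inhibition(a_control, a):.1f}% inhibition")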

Keywords: antimicrobial resistance, dehydrogenase activities, E. coli, zinc oxide nanocomposites

Procedia PDF Downloads 49
155 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach

Authors: Zhuoran Li, Guan Qin

Abstract:

A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization formulation with the cubic-plus-association (CPA) equation of state (EoS), is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), whose hydrogen bonding influences phase behavior; thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial). Hundreds of published equilibrium data points for hydrate-containing systems were collected to act as the reference set for the accuracy test. The comparison shows that our model outperforms two of these models and matches the calculation accuracy of CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the direction and size of the optimization step simultaneously, fast solution progress can be obtained. The comparison shows that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods. The non-iterative eigenvalue approach also computes faster than the conventional iterative algorithm for the trust-region sub-problem, further improving calculation efficiency. A new thermodynamic framework for the multiphase flash model of hydrate-containing systems has been constructed in this work. Sensitivity analyses and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, the model is simple to implement on top of the thermodynamic frameworks currently used in the oil and gas industry.
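
For orientation, here is a minimal sketch of solving the trust-region sub-problem min gᵀp + ½pᵀBp subject to ||p|| ≤ Δ via a single eigendecomposition plus a scalar secular-equation solve. This illustrates the eigenvalue viewpoint, but it is not the specific non-iterative formulation used in the paper, and the degenerate 'hard case' is ignored for brevity:

    import numpy as np
    from scipy.optimize import brentq

    def trust_region_step(B, g, delta):
        """Solve min g.T @ p + 0.5 * p.T @ B @ p  s.t.  ||p|| <= delta."""
        lam, Q = np.linalg.eigh(B)       # eigenvalues ascending
        gt = Q.T @ g
        # Interior Newton step, valid only if B is positive definite.
        if lam[0] > 0:
            p = Q @ (-gt / lam)
            if np.linalg.norm(p) <= delta:
                return p
        # Boundary solution: find mu >= max(0, -lam_min) such that
        # ||p(mu)|| = delta, with p(mu) = -(B + mu*I)^{-1} g.
        def norm_gap(mu):
            return np.linalg.norm(gt / (lam + mu)) - delta
        lo = max(0.0, -lam[0]) + 1e-10   # just right of the pole
        hi = lo + 1.0
        while norm_gap(hi) > 0:          # expand until the norm drops below delta
            hi *= 2.0
        mu = brentq(norm_gap, lo, hi)    # scalar secular-equation solve
        return Q @ (-gt / (lam + mu))

    B = np.array([[2.0, 0.5], [0.5, -1.0]])   # indefinite Hessian
    g = np.array([1.0, 1.0])
    p = trust_region_step(B, g, delta=0.8)
    print(p, np.linalg.norm(p))               # step of length ~0.8 on the boundary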

Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method

Procedia PDF Downloads 172
154 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature within the material before loading is not uniform, yet a constant temperature is commonly imposed in the simulation on the assumption that the temperature homogenizes after some holding time. To be closer to the experiment, the real temperature distribution through the specimen is therefore needed before mechanical loading. We present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a realistic temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the greatest deformations occur, and many microstructural phenomena, such as recrystallization, can be observed there, requiring in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, identifying the conditions for the initiation of dynamic recrystallization remains relevant; and since the temperature distribution within the sample and the strain rate both influence this initiation, developing a technique to predict it remains challenging. In this perspective, we propose here, in addition to the algorithm for obtaining the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines, for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
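
The abstract's gradient-calculation algorithm is implemented as an Abaqus UAMP subroutine; the stand-alone sketch below only illustrates the underlying idea with a 1-D explicit finite-difference solution of radial heat conduction in a cylindrical block, where the dimensions, material properties, temperatures, and step counts are assumed for illustration:

    import numpy as np

    R, n = 0.05, 51                 # block radius (m), radial nodes -- assumed
    alpha = 1.2e-5                  # thermal diffusivity of steel (m^2/s) -- assumed
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    dt = 0.2 * dr**2 / alpha        # stable explicit time step
    T = np.full(n, 1100.0)          # uniform initial temperature (deg C) -- assumed
    T_surf = 900.0                  # cooler surface temperature (deg C) -- assumed

    for _ in range(5000):           # march the radial heat equation in time
        lap = np.zeros_like(T)
        # interior nodes: (1/r) d/dr (r dT/dr) in central differences
        lap[1:-1] = ((T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
                     + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1]))
        lap[0] = 4.0 * (T[1] - T[0]) / dr**2   # symmetry condition at the axis
        T = T + alpha * dt * lap
        T[-1] = T_surf                          # Dirichlet condition at the surface

    print(f"core {T[0]:.1f} C, surface {T[-1]:.1f} C, "
          f"gradient {T[0] - T[-1]:.1f} C")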

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 80
153 Advancing Sustainable Seawater Desalination Technologies: Exploring the Sub-Atmospheric Vapor Pipeline (SAVP) and Energy-Efficient Solution for Urban and Industrial Water Management in Smart, Eco-Friendly, and Green Building Infrastructure

Authors: Mona Shojaei

Abstract:

The Sub-Atmospheric Vapor Pipeline (SAVP) introduces a distinct approach to seawater desalination with promising applications in both land-based and industrial sectors. SAVP systems exploit the temperature difference between a hot source and a cold environment to facilitate efficient vapor transfer, offering substantial benefits in diverse industrial and field applications. The approach incorporates dynamic boundary conditions, where the temperatures of the hot and cold sources vary over time, particularly in natural and industrial environments. Such variations critically influence the convection and diffusion processes, introducing challenges that require the refinement of the convection-diffusion equation and the derivation of temperature profiles along the pipeline through advanced engineering mathematics. This study formulates the vapor temperature as a function of time and length using two mathematical approaches: eigenfunction expansions and Green's functions. Combining detailed theoretical modeling, mathematical simulations, and extensive field and industrial tests, this research underscores the SAVP system's scalability for real-world applications. Results reveal a high degree of accuracy, highlighting SAVP's significant potential for energy conservation and environmental sustainability. Furthermore, the integration of SAVP technology within smart and green building systems creates new opportunities for sustainable urban water management. By capturing and repurposing vapor for non-potable uses such as irrigation, greywater recycling, and ecosystem support in green spaces, SAVP aligns with the principles of smart and green buildings. Smart buildings emphasize efficient resource management, enhanced system control, and automation for optimal energy and water use, while green buildings prioritize environmental impact reduction and resource conservation. SAVP technology bridges both paradigms, enhancing water self-sufficiency and reducing reliance on external water supplies. The sustainable and energy-efficient properties of SAVP make it a vital component in resilient infrastructure development, addressing urban water scarcity while promoting eco-friendly living. This dual alignment with smart and green building goals positions SAVP as a transformative solution in the pursuit of sustainable urban resource management.
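
The abstract does not state its refined equation explicitly; as a sketch, assuming the generic one-dimensional form of the convection-diffusion problem described, the vapor temperature T(x, t) along a pipeline of length L satisfies

    \[
    \frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x}
      = \alpha\,\frac{\partial^{2} T}{\partial x^{2}},
    \qquad 0 < x < L,\ t > 0,
    \]

with time-varying boundary temperatures T(0, t) = T_h(t) and T(L, t) = T_c(t). After subtracting a lift function that absorbs the boundary data, the remainder admits an eigenfunction series of the form

    \[
    T(x,t) \approx \sum_{n=1}^{N} c_{n}(t)\,
      e^{u x / 2\alpha}\,\sin\!\Big(\frac{n\pi x}{L}\Big),
    \]

where the coefficients c_n(t) follow from the initial data or, equivalently, from a convolution of the boundary and source terms with the problem's Green's function.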

Keywords: sub-atmospheric vapor pipeline, seawater desalination, energy efficiency, vapor transfer dynamics, mathematical modeling, sustainable water solutions, smart buildings

Procedia PDF Downloads 12
152 Synthesis of Functionalized-2-Aryl-2, 3-Dihydroquinoline-4(1H)-Ones via Fries Rearrangement of Azetidin-2-Ones

Authors: Parvesh Singh, Vipan Kumar, Vishu Mehra

Abstract:

Quinolin-4-ones represent an important class of heterocyclic scaffolds that have attracted significant interest due to their various biological and pharmacological activities. This heterocyclic unit also constitutes an integral component of drugs used for the treatment of neurodegenerative diseases and sleep disorders, and of antibiotics viz. norfloxacin and ciprofloxacin. The synthetic accessibility of quinolin-4-ones and the possibility of functionalization at varied positions make them an elegant platform for the design of combinatorial libraries of functionally enriched scaffolds with a range of pharmacological profiles. They are also considered to be attractive precursors for the synthesis of medicinally important molecules such as non-steroidal androgen receptor antagonists, the antimalarial drug chloroquine, and the martinellines with antibacterial activity. 2-Aryl-2,3-dihydroquinolin-4(1H)-ones are present in many natural and non-natural compounds and are considered to be the aza-analogs of flavanones. The β-lactam class of antibiotics is generally recognized to be a cornerstone of human health care due to the unparalleled clinical efficacy and safety of this type of antibacterial compound. In addition to their biological relevance as potential antibiotics, β-lactams have also acquired a prominent place in organic chemistry as synthons and provide highly efficient routes to a variety of compounds such as non-protein amino acids, oligopeptides, peptidomimetics, and nitrogen heterocycles, as well as biologically active natural and unnatural products of medicinal interest such as indolizidine alkaloids, paclitaxel, docetaxel, taxoids, cryptophycins, lankacidins, etc. A straightforward route toward the synthesis of quinolin-4-ones via the triflic acid assisted Fries rearrangement of N-aryl-β-lactams has been reported by Tepe and co-workers. The ring expansion observed in this case was attributed solely to the inherent ring strain of the β-lactam ring, because the corresponding γ-lactam failed to undergo rearrangement under the reaction conditions. The above-mentioned protocol has recently been extended by our group to the synthesis of benzo[b]-azocinon-6-ones via a tandem Michael addition-Fries rearrangement of sorbyl anilides, as well as to the single-pot synthesis of 2-aryl-quinolin-4(3H)-ones through the Fries rearrangement of 3-dienyl-β-lactams. In continuation of our synthetic endeavours with the β-lactam ring, and in view of the lack of convenient approaches for the synthesis of C-3 functionalized quinolin-4(1H)-ones, the present work describes the single-pot synthesis of C-3 functionalized quinolin-4(1H)-ones via the triflic acid promoted Fries rearrangement of C-3 vinyl/isopropenyl substituted β-lactams. In addition, DFT calculations and MD simulations were performed to investigate the stability profiles of the synthetic compounds.

Keywords: dihydroquinoline, fries rearrangement, azetidin-2-ones, quinoline-4-ones

Procedia PDF Downloads 250
151 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling

Authors: Mohammed El Raey, Moustafa Osman Mohammed

Abstract:

Spatial-ecological modeling relates sustainable dispersion to social development. Coupling sustainability with a spatial-ecological model draws attention to urban environments in design review management, so that they comply with the Earth system. Natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. Plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas in the industrial region is postulated, analyzed, and assessed. The IAEA safety assessment procedure is used to estimate the duration and rate of discharge of liquid chlorine. The plume dispersion width and the concentration of chlorine gas in the downwind direction are determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. Predicted accident consequences are traced as risk contours of concentration. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate spillover effects, supported by Monte Carlo simulations and sensitivity analysis. Their unique structure is balanced within 'equilibrium patterns', such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures retain their physical and chemical properties and enable a gradual, prolonged incremental pattern. While this spatial model structure is argued from ecology, resource savings, static load design, financial, and other pragmatic considerations, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis attempts to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure in which the process is based on engineering topology as an optimization approach to systems ecology.
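
A minimal sketch of the Gaussian plume calculation named above, with ground reflection; the release rate, stack height, wind speed, and the power-law dispersion coefficients are illustrative assumptions (roughly neutral-stability, rural values), not the study's inputs:

    import numpy as np

    def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.92, c=0.06, d=0.92):
        """Ground-reflected Gaussian plume concentration (kg/m^3).
        Q: release rate (kg/s), u: wind speed (m/s), H: release height (m).
        sigma_y = a*x**b and sigma_z = c*x**d are illustrative power-law
        dispersion coefficients; site-specific values would be used in
        the actual study."""
        sy, sz = a * x**b, c * x**d
        return (Q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * (np.exp(-(z - H)**2 / (2 * sz**2))
                   + np.exp(-(z + H)**2 / (2 * sz**2))))

    # Illustrative chlorine release: 5 kg/s at 10 m height, 4 m/s wind,
    # evaluated on the plume centerline at breathing height.
    x = np.array([200.0, 500.0, 1000.0])   # downwind distance (m)
    conc = gaussian_plume(x, y=0.0, z=1.5, Q=5.0, u=4.0, H=10.0)
    for xi, ci in zip(x, conc):
        print(f"x = {xi:6.0f} m : C = {ci * 1e6:8.2f} mg/m^3")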

Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology

Procedia PDF Downloads 80
150 Representational Issues in Learning Solution Chemistry at Secondary School

Authors: Lam Pham, Peter Hubber, Russell Tytler

Abstract:

Students' conceptual understanding of chemical concepts and phenomena involves the capability to coordinate across the three levels of Johnstone's triangle model. This triplet model is based on reasoning about chemical phenomena across the macro, sub-micro, and symbolic levels. In chemistry education, there is a need for further examination of inquiry-based approaches that enhance students' conceptual learning and problem-solving skills. This research adopted a directed inquiry pedagogy, based on students constructing and coordinating representations, to investigate senior school students' capabilities to move flexibly across Johnstone's levels when learning dilution and molar concentration concepts. The participants comprised 50 grade 11 students, 20 grade 10 students, and four chemistry teachers, selected from four secondary schools located in metropolitan Melbourne, Victoria. This research into classroom practices used ethnographic methodology and involved teachers working collaboratively with the research team to develop representational activities and lesson sequences for the instruction of a unit on solution chemistry. The representational activities included challenges (Representational Challenges, RCs) that used 'representational tools' to assist students to move across Johnstone's three levels for dilution phenomena. In this report, the representational tool called the 'cross and portion' model was developed and used in teaching and learning the molar concentration concept. Students' conceptual understanding and problem-solving skills when learning with this model are analysed through group case studies of year 10 and year 11 chemistry students. In learning dilution concepts, students in both group case studies actively conducted a practical experiment and used their own language and visualisation skills to represent dilution phenomena at the macroscopic level (RC1). At the sub-microscopic level, students generated and negotiated representations of the chemical interactions between solute and solvent underpinning the dilution process. At the symbolic level, students demonstrated their understanding of dilution concepts by drawing chemical structures and performing mathematical calculations. When learning molar concentration with the 'cross and portion' model (RC2), students coordinated across visual and symbolic representational forms and Johnstone's levels to construct representations. The analysis showed that in RC1, year 10 students needed more scaffolding when being introduced to representations, to make the form and function of sub-microscopic representations explicit. In RC2, year 11 students showed clarity in using visual representations (drawings) to link to mathematics in solving representational challenges about molar concentration. In contrast, year 10 students struggled to match up the two systems: the symbolic system of moles per litre ('cross and portion') and the visual representation (drawing). These conceptual problems do not lie in the students' mathematical calculation capability but rather in their capability to align visual representations with the symbolic mathematical formulations. This research also found that students in both group case studies were able to coordinate representations when probed about the use of the 'cross and portion' model (in RC2) to demonstrate the molar concentration of diluted solutions (in RC1). Students mostly succeeded in constructing 'cross and portion' models to represent the reduction of molar concentration across the concentration gradients.
In conclusion, this research demonstrated how the strategic introduction and coordination of chemical representations across modes and across the macro, sub-micro, and symbolic levels supported student reasoning and problem solving in chemistry.

Keywords: cross and portion, dilution, Johnstone's triangle, molar concentration, representations

Procedia PDF Downloads 137
149 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime

Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo

Abstract:

The present paper investigates the hydrodynamic and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source CFD solver ‘interFoam’ of the OpenFOAM CFD Toolbox. The proposed user-defined solver is based on the Volume of Fluid (VOF) method for interface advection, and the enhancements include the implementation of a smoothing process for spurious-current reduction, coupling with heat transfer and phase change, and the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, successive vapour bubbles are patched upstream at a certain distance from the channel inlet, at a predefined and constant frequency. The proposed numerical simulation set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation focuses on exploring the interaction between subsequent vapour slugs with respect to their generation frequency, the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, and those of the liquid plug between two subsequent vapour slugs. The investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. The post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident. In most cases, interfacial waves appear close to the bubble tail that significantly reduce the liquid film thickness. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance the convective heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation can be used to enhance the present understanding by providing better insight into the complex heat transfer mechanisms underpinning saturated boiling within micro-channels in the slug-plug flow regime.
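
For context, the interface-advection step in interFoam-type VOF solvers is commonly written with an interface-compression term; a generic form (the enhanced solver described above adds phase change, which contributes a source term on the right-hand side) is

    \[
    \frac{\partial \alpha}{\partial t}
      + \nabla \cdot (\alpha\,\mathbf{U})
      + \nabla \cdot \big(\alpha\,(1-\alpha)\,\mathbf{U}_{r}\big) = 0,
    \]

where α is the liquid volume fraction, U is the mixture velocity, and U_r is an artificial compression velocity that acts only in the interface region (0 < α < 1) to keep the interface sharp.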

Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM

Procedia PDF Downloads 267
148 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, which gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers is optimized for a lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements, and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers sampling at a rate of 131 kHz. The frequency-domain vibration data (up to 20 kHz) and the time-domain data are analyzed, as well as the root-mean-square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not yet affected. It is also observed that below a certain NPSHA value, the cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations, and the simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings, and simulation results. The data indicate that a criterion for cavitation detection could be derived from the time-domain vibration measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
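
A minimal sketch of the time-domain criterion suggested above: compute the RMS level of the accelerometer signal and flag abrupt peaks well above it. The sample rate is the 131 kHz quoted in the abstract, but the signal here is synthetic, and the burst shapes and thresholds are illustrative assumptions:

    import numpy as np
    from scipy.signal import find_peaks, welch

    fs = 131_000                        # accelerometer sample rate (Hz)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(0)

    # Synthetic stand-in for a measured signal: broadband noise plus sharp
    # bursts mimicking cavitation-cloud collapses at illustrative instants.
    x = 0.1 * rng.standard_normal(t.size)
    for t0 in (0.21, 0.48, 0.83):
        sel = np.abs(t - t0) < 2e-4
        x[sel] += 3.0 * np.exp(-(((t[sel] - t0) / 5e-5) ** 2))

    rms = np.sqrt(np.mean(x**2))        # overall vibration level
    # Abrupt peaks well above the RMS level: the candidate time-domain criterion.
    peaks, _ = find_peaks(np.abs(x), height=8.0 * rms, distance=int(1e-3 * fs))
    # Spectral view for comparison with the conventional PSD-based analysis.
    f, psd = welch(x, fs=fs, nperseg=8192)

    print(f"RMS = {rms:.3f}, collapse-like peaks at t = {t[peaks].round(3)} s, "
          f"dominant band near {f[np.argmax(psd)]:.0f} Hz")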

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 179