Search results for: building energy simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15342

1122 Dimensional-Controlled Functional Gold Nanoparticles and Zinc Oxide Nanorods for Solar Water Splitting

Authors: Kok Hong Tan, Hing Wah Lee, Jhih-Wei Chen, Chang Fu Dee, Chung-Lin Wu, Siang-Piao Chai, Wei Sea Chang

Abstract:

Semiconductor photocatalysts play a key role in developing clean and sustainable energy. However, most semiconductors are photoactive only within the UV region, which lowers the overall photocatalytic efficiency. Generally, the overall effectiveness of photocatalytic activity is determined by three critical steps: (i) light absorption efficiency and photoexcited electron-hole pair generation, (ii) separation and migration of charge carriers to the surface of the photocatalyst, and (iii) surface reaction of the carriers with their environment. Much effort has been invested in optimizing hierarchical nanostructures of semiconductors for efficient photoactivity, because the visible light absorption capability and the occurrence of the chemical reactions depend largely on the dimensions of the photocatalysts. In this work, we incorporated zero-dimensional (0D) gold nanoparticles (AuNPs) and one-dimensional (1D) zinc oxide (ZnO) nanorods (NRs) onto strontium titanate (STO) for efficient visible light absorption, charge transfer, and separation. We demonstrate that the electrical and optical properties of the photocatalyst can be tuned by controlling the dimensional structures of the AuNPs and ZnO NRs. We found that smaller AuNP sizes exhibited higher photoactivity because of Fermi level shifting toward the conduction band of STO, STO band gap narrowing, and broadening of the absorption spectrum into the visible light region. For the ZnO NRs, we found that the average c-axis length must reach a certain value to induce multiphoton absorption, a result of light reflection and trapping in the free space between adjacent NRs, thereby broadening the absorption spectrum of ZnO from the UV into the visible region. This work opens up a new way of broadening the absorption spectrum by incorporating controllable nanostructures of semiconductors, which is important in optimizing the solar water splitting process.

Keywords: gold nanoparticles, photoelectrochemical, PEC, semiconductor photocatalyst, zinc oxide nanorods

Procedia PDF Downloads 161
1121 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform versus pulsatile flow at the OG. We have hypothesized that the optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aortic artery from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echocardiographic ultrasound image of the same patient, into the computational model to quantify the 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. It thus has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomoses (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five random patient cases will be evaluated.
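
The scenario space described above (3 × 3 × 3 × 2 = 54 runs per patient) lends itself to a simple parameter sweep. Below is a minimal Python sketch of how such a sweep might be organized; the function run_invascular_case is a hypothetical placeholder for the actual InVascular computation, which is not publicly documented here.

```python
from itertools import product

# Implantation parameters enumerated in the abstract.
og_sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratios = ["1:1", "1:2", "1:3"]      # LVAD : native heart pumping
og_angles = ["inclined upward", "perpendicular", "inclined downward"]
inflow_modes = ["uniform", "pulsatile"]

def run_invascular_case(site, ratio, angle, inflow):
    """Hypothetical stand-in for one InVascular simulation.

    Would return the cardiac output (L/min) at the aortic arch
    for the given implantation scenario."""
    raise NotImplementedError

scenarios = list(product(og_sites, pump_ratios, og_angles, inflow_modes))
assert len(scenarios) == 54  # 3 * 3 * 3 * 2

# Sweep all scenarios and keep the one maximizing cardiac output:
# best = max(scenarios, key=lambda s: run_invascular_case(*s))
```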

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 133
1120 Improvement of Environment and Climate Change Canada’s GEM-Hydro Streamflow Forecasting System

Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin

Abstract:

A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada's (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables, and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) land-surface scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. The streamflow performance of this new system over 2019 is presented. Performance is compared to that of ECCC's current operational streamflow forecasting system, which differs from the new experimental system in many respects. The new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system shows promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here, including the assimilation of snow cover data from remote sensing, a backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.
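
Persistence, used above as a reference forecast, simply carries the last observed flow forward. A common way to summarize skill against such a baseline is the Nash-Sutcliffe efficiency; the short Python sketch below illustrates the comparison on synthetic arrays (the variable names and numbers are illustrative, not part of the GEM-Hydro system).

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative daily streamflow series (m^3/s).
obs = np.array([102.0, 110.0, 125.0, 140.0, 131.0, 118.0])
forecast = np.array([100.0, 108.0, 121.0, 137.0, 133.0, 120.0])

# Persistence forecast: each day repeats the previous observation.
persistence = np.concatenate(([obs[0]], obs[:-1]))

print(nash_sutcliffe(obs[1:], forecast[1:]))     # skill of the model
print(nash_sutcliffe(obs[1:], persistence[1:]))  # skill of persistence
```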

Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts

Procedia PDF Downloads 130
1119 Verification of a Simple Model for Rolling Isolation System Response

Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly

Abstract:

Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor accelerations, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Previously, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model was developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to prevent numerical integration from driving the system unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
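
For reference, Gauss's Principle of Least Constraint, in its standard textbook form (quoted from the general mechanics literature, not from this abstract), selects the constrained accelerations that minimize a mass-weighted deviation from the unconstrained accelerations:

```latex
% Among all accelerations \ddot{q} consistent with the (possibly
% non-holonomic) acceleration-level constraints A(q,\dot{q})\,\ddot{q} = b(q,\dot{q}),
% the realized motion minimizes the "constraint" function Z:
\min_{\ddot{q}}\; Z(\ddot{q})
  = \tfrac{1}{2}\,\bigl(\ddot{q} - M^{-1}F\bigr)^{\!\top} M \,\bigl(\ddot{q} - M^{-1}F\bigr)
  \quad \text{subject to} \quad A\,\ddot{q} = b
```

Here M is the mass matrix and F the applied generalized force. Because the constraint is enforced at the acceleration level, drift must be controlled by constraint stabilization, as the abstract notes.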

Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system

Procedia PDF Downloads 250
1118 LHCII Proteins Phosphorylation Changes Involved in the Dark-Chilling Response in Plant Species with Different Chilling Tolerance

Authors: Malgorzata Krysiak, Anna Wegrzyn, Maciej Garstka, Radoslaw Mazur

Abstract:

Under constantly fluctuating environmental conditions, the thylakoid membrane protein network evolved the ability to respond dynamically to changing biotic and abiotic factors. One of the most important protective mechanisms is the rearrangement of the chlorophyll-protein (CP) complexes, induced by protein phosphorylation. In a temperate climate, low temperature is one of the abiotic stresses that heavily affect plant growth and productivity. The aim of this study was to determine the role of LHCII antenna complex phosphorylation in the dark-chilling response. The study used an experimental model based on dark-chilling at 4 °C of detached chilling-sensitive (CS) runner bean (Phaseolus coccineus L.) and chilling-tolerant (CT) garden pea (Pisum sativum L.) leaves. This model is well described in the literature and is used to analyze the impact of chilling without any additional effects caused by light. We examined changes in thylakoid membrane protein phosphorylation, interactions between phosphorylated LHCII (P-LHCII) and CP complexes, and their impact on the dynamics of photosystem II (PSII) under dark-chilling conditions. Our results showed that the dark-chilling treatment of CS bean leaves induced a substantial increase in the phosphorylation of LHCII proteins, as well as changes in CP complex composition and their interaction with P-LHCII. PSII photochemical efficiency measurements showed that in bean, PSII is overloaded with light energy, which is not compensated by CP complex rearrangements. On the contrary, no significant changes in PSII photochemical efficiency, phosphorylation pattern, or CP complex interactions were observed in CT pea. In conclusion, our results indicate that different responses of LHCII phosphorylation to chilling stress take place in CT and CS plants, and that the kinetics of LHCII phosphorylation and the interactions of P-LHCII with photosynthetic complexes may be crucial to the chilling stress response. Acknowledgments: The presented work was financed by the National Science Centre, Poland, grant No. 2016/23/D/NZ3/01276.

Keywords: LHCII, phosphorylation, chilling stress, pea, runner bean

Procedia PDF Downloads 140
1117 Adaptive Programming for Indigenous Early Learning: The Early Years Model

Authors: Rachel Buchanan, Rebecca LaRiviere

Abstract:

Context: The ongoing effects of colonialism continue to be experienced through paternalistic policies and funding processes that cause disjuncture between and across Indigenous early childhood programming on-reserve and in urban and Northern settings in Canada. While various educational organizations and social service providers have risen to address these challenges in the short, medium, and long term, there continues to be a lack of nation-wide, cohesive, culturally grounded, and meaningful early learning programming for Indigenous children in Canada. Indigenous-centered early learning programs tend to face one of two scaling dilemmas: their program goals are too prescriptive to enable the program to be meaningfully replicated in different cultural/community settings, or their program goals are too broad to be meaningfully adapted to the unique cultural and contextual needs and desires of Indigenous communities (the “franchise approach”). There are over 600 First Nations communities in Canada representing more than 50 Nations and languages. Consequently, Indigenous early learning programming cannot be applied with a universal or “one size fits all” approach. Sustainable and comprehensive programming must be responsive to each community context, building upon existing strengths and assets to avoid program duplication and irrelevance. Thesis: Community-driven and culturally adapted early childhood programming is critical but cannot be achieved on a large scale within traditional program models that are constrained by prescriptive overarching program goals. Principles, rather than goals, are an effective way to navigate and evaluate complex and dynamic systems. Principles guide an intervention to be adaptable, flexible, and scalable. The Martin Family Initiative's (MFI) Early Years program engages a principles-based approach to programming. As will be discussed in this paper, this approach enables the program to catalyze existing community-based strengths and organizational assets toward bridging gaps across, and disjuncture between, Indigenous early learning programs, as well as to scale programming in sustainable, context-responsive, and dynamic ways. This paper argues that by using a principles-driven and adaptive scaling approach, the Early Years model establishes important learnings for culturally adapted Indigenous early learning programming in Canada. Methodology: The Early Years program has leveraged this approach to develop an array of programming with partner organizations and communities across the country. It began as a singular pilot project in one First Nation; in just three years, it has expanded to five different regions and community organizations. In each context, the program supports the partner organization through different means and to different ends, the extent of which is determined in partnership with each community-based organization: in some cases, this means supporting the organization to build home visiting programming from the ground up; in others, it means offering organization-specific, culturally adapted early learning resources to support the programming that already exists in communities. Principles underpin but do not define the practices of the program in each of these relationships.
This paper will explore numerous examples of principles-based adaptability within the context of the Early Years program, concluding that the program model offers the adaptability and dynamism necessary to respond to the unique and ever-evolving community contexts and needs of Indigenous children today.

Keywords: culturally adapted programming, indigenous early learning, principles-based approach, program scaling

Procedia PDF Downloads 186
1116 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government in order to lessen dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods are a promising source of bioethanol since they contain a significant amount of fermentable sugars. This study was conducted to establish the complete procedure for processing rain tree pods for village-level hydrous bioethanol production. The production process covered collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and its moisture content was determined to be in the range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods to 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours, after which the juice was extracted using an extractor with a capacity of 64.10 L/hour. 150 L of rain tree pod juice was extracted and subjected to fermentation in a village-level anaerobic bioreactor. Fermentation with yeast (Saccharomyces cerevisiae) speeds up the process, producing more ethanol in a shorter period of time; without yeast, fermentation still produces ethanol, but in lower volume and more slowly. Distillation of 150 L of fermented broth was done for six hours at a feedstock temperature of 85 °C to 95 °C and a column head temperature (ethanol in the vapor state) of 74 °C to 95 °C. The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation over a five-day duration; the lowest, 11.63 L, was obtained without yeast over a three-day duration. In general, the results suggest that rain tree pods have very good potential as a feedstock for bioethanol production, and that fermentation of rain tree pod juice can be done either with or without yeast.
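
As a quick consistency check on the figures above, a few lines of Python reproduce the implied processing time and recovery fractions (these derived numbers are our own arithmetic, not values reported in the abstract):

```python
# Figures quoted in the abstract.
juice_volume_l = 150.0        # extracted juice / fermented broth (L)
extractor_rate_l_per_h = 64.10
ethanol_with_yeast_l = 14.89  # 5-day fermentation with yeast
ethanol_no_yeast_l = 11.63    # 3-day fermentation without yeast

# Derived quantities (our arithmetic, not stated in the abstract).
extraction_time_h = juice_volume_l / extractor_rate_l_per_h
print(f"extraction time: {extraction_time_h:.2f} h")          # ~2.34 h

for label, v in [("with yeast", ethanol_with_yeast_l),
                 ("without yeast", ethanol_no_yeast_l)]:
    print(f"recovery {label}: {100 * v / juice_volume_l:.1f}% v/v of broth")
# ~9.9% and ~7.8% of the broth volume recovered as hydrous ethanol
```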

Keywords: fermentation, hydrous bioethanol, rain tree pods, village level

Procedia PDF Downloads 295
1115 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics of the battery, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. If the model order is increased, the model accuracy improves. However, a higher-order model faces a tendency toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out by first estimating the frequencies of the input and output by subspace decomposition, then choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
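
As an illustration of the recursive least-squares step mentioned above, the sketch below fits a discretized first-order ECM (one series resistor plus one RC pair, a simpler structure than the paper's second-order model) in ARX form. It is a generic RLS implementation under assumed notation, not the authors' code.

```python
import numpy as np

def rls_fit(y, u, lam=0.99):
    """Recursive least squares for y[k] = a1*y[k-1] + b0*u[k] + b1*u[k-1].

    y : measured overpotential (OCV minus terminal voltage), u : current.
    Returns theta = [a1, b0, b1]; for a known sample time the ECM values
    R0, R1, C1 can be recovered from these coefficients."""
    theta = np.zeros(3)
    P = np.eye(3) * 1e3          # large initial covariance
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k], u[k - 1]])  # regressor vector
        e = y[k] - phi @ theta                      # prediction error
        gain = P @ phi / (lam + phi @ P @ phi)      # RLS gain vector
        theta += gain * e
        P = (P - np.outer(gain, phi) @ P) / lam     # covariance update
    return theta

# Usage: feed HPPC-style voltage/current samples, e.g.
# theta = rls_fit(ocv - v_terminal, current)
```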

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 150
1114 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model was modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until the steady-state condition was reached (emission period). Generation was then stopped, and concentration measurements continued until the background concentration was reached (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 h⁻¹ and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities (and their standard deviations) for the lowest and highest ACH were (8.87 ± 0.36)×10⁻² m/s and (8.88 ± 0.38)×10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission and decay periods. In both periods, the prediction accuracy of the modified model improved compared with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data; however, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to measurement devices and particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, and that the deposition rate predicted by the mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant and will affect the airborne concentration in occupational settings, and it should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
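
The balance underlying the modified WMR model can be written as dC/dt = G/V − (λ + β)C, where G is the particle emission rate, V the room volume, λ the air-change rate, and β the first-order deposition loss rate. A minimal sketch using this analytical solution follows (our notation; G and β values are illustrative assumptions, while V and ACH match the chamber described above):

```python
import numpy as np

def wmr_concentration(t, G, V, ach, beta, C0=0.0):
    """Analytical solution of the well-mixed room model with deposition:
       dC/dt = G/V - (lambda + beta) * C
    t in hours, G in particles/h, V in m^3, ach and beta in 1/h."""
    k = ach + beta   # total first-order loss rate
    return G / (V * k) + (C0 - G / (V * k)) * np.exp(-k * t)

# Emission period in the 0.512 m^3 chamber at 1.4 ACH
# (G and beta are illustrative assumptions).
t = np.linspace(0.0, 3.0, 50)
emission = wmr_concentration(t, G=1e8, V=0.512, ach=1.4, beta=0.3)

# Decay period: set G = 0 and start from the steady-state concentration.
decay = wmr_concentration(t, G=0.0, V=0.512, ach=1.4, beta=0.3,
                          C0=emission[-1])
```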

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
1113 The Relevance of Community Involvement in Flood Risk Governance towards Resilience to Groundwater Flooding: A Case Study of Project Groundwater Buckinghamshire, UK

Authors: Claude Nsobya, Alice Moncaster, Karen Potter, Jed Ramsay

Abstract:

The shift in Flood Risk Governance (FRG) has moved away from traditional approaches that relied solely on centralized decision-making and structural flood defenses, toward integrated flood risk management measures that involve various actors and stakeholders. This new approach emphasizes people-centered approaches, including adaptation and learning. The shift to a diversity of FRG approaches has been identified as a significant factor in enhancing resilience. Resilience here refers to a community's ability to withstand, absorb, recover, adapt, and potentially transform in the face of flood events. It is argued that if FRG merely focused on conventional 'fighting the water' flood defenses, communities would not be resilient. The move to people-centered approaches also implies that communities will be more involved in FRG. It is suggested that effective flood risk governance influences resilience through meaningful community involvement, and effective community engagement is vital in shaping community resilience to floods. Successful community participation not only uses context-specific indigenous knowledge but also develops a sense of ownership and responsibility. Through capacity development initiatives, it can also raise awareness, and all of these help in building resilience. Recent Flood Risk Management (FRM) projects have thus had increasing community involvement, with varied conceptualizations of such community engagement in the academic literature on FRM. In the context of overland floods, there is a substantial body of literature on flood risk governance and management. Yet groundwater flooding has received little attention despite its unique qualities, such as its persistence for weeks or months, slow onset, and near-invisibility. There has been little study of how successful community involvement in FRG may improve community resilience to groundwater flooding in particular. This paper focuses on a case study of a flood risk management project in the United Kingdom. Buckinghamshire Council is leading Project Groundwater, one of 25 significant initiatives sponsored by England's Department for Environment, Food and Rural Affairs (DEFRA) Flood and Coastal Resilience Innovation Programme. DEFRA awarded Buckinghamshire Council and other councils £150 million to collaborate with communities and implement innovative methods to increase resilience to groundwater flooding. Based on a literature review, this paper proposes a new paradigm for effective community engagement in FRG. The study contends that effective community participation can affect various resilience capacities identified in the literature, including social capital, institutional capital, physical capital, natural capital, human capital, and economic capital. In the case of social capital, for example, successful community engagement can influence social capital through the process of social learning as well as through developing social networks and trust values, which are vital in influencing communities' capacity to resist, absorb, recover, and adapt. The study examines community engagement in Project Groundwater using surveys with local communities and documentary analysis to test this notion. The outcomes of the study will inform community involvement activities in Project Groundwater and may shape DEFRA policies and guidelines for community engagement in FRM.

Keywords: flood risk governance, community, resilience, groundwater flooding

Procedia PDF Downloads 70
1112 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as 'direct numerical simulation' (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error behavior will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
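
The inviscid Burgers equation mentioned above, u_t + (u²/2)_x = 0, is a standard benchmark for shock-capturing behavior. The IDS itself is not reproduced here; as a point of comparison, a minimal first-order Lax-Friedrichs finite-volume sketch looks like this:

```python
import numpy as np

def burgers_lax_friedrichs(u0, dx, t_end, cfl=0.9):
    """First-order Lax-Friedrichs solver for u_t + (u^2/2)_x = 0
    with periodic boundaries. Not the IDS; a baseline for comparison."""
    u = u0.copy()
    t = 0.0
    while t < t_end:
        dt = cfl * dx / max(np.abs(u).max(), 1e-12)  # CFL time step
        dt = min(dt, t_end - t)
        f = 0.5 * u ** 2                      # physical flux
        fl = np.roll(f, 1)                    # f_{i-1}
        ul = np.roll(u, 1)                    # u_{i-1}
        # Lax-Friedrichs numerical flux at interface i-1/2
        flux = 0.5 * (fl + f) - 0.5 * (dx / dt) * (u - ul)
        u -= (dt / dx) * (np.roll(flux, -1) - flux)
        t += dt
    return u

x = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
u = burgers_lax_friedrichs(np.sin(x), x[1] - x[0], t_end=1.5)
# A smooth sine wave steepens into a shock near x = pi by t ~ 1.
```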

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 137
1111 An Alternative to Problem-Based Learning in a Post-Graduate Healthcare Professional Programme

Authors: Brogan Guest, Amy Donaldson-Perrott

Abstract:

The Master of Physician Associate Studies (MPAS) programme at St George's, University of London (SGUL), is an intensive two-year course that trains students to become physician associates (PAs). PAs are generalist healthcare providers who work in primary and secondary care across the UK. PA programmes face the difficult task of preparing students to become safe medical providers in two short years. Our goal is to teach students to develop clinical reasoning early in their studies, and historically this has been done predominantly through problem-based learning (PBL). We have had increasing concern about student engagement in PBL and difficulty recruiting facilitators to maintain the low student-to-facilitator ratio required by PBL. To address this issue, we created 'Clinical Application of Anatomy and Physiology' (CAAP). These peer-led, interactive, problem-based, small-group sessions were designed to develop students' clinical reasoning skills. The sessions were designed using the concept of Team-Based Learning (TBL). Students were divided into small groups, and each group completed a pre-session quiz consisting of difficult questions devised to assess the students' application of medical knowledge. The quiz was completed in small groups without access to external resources. After the quiz, students worked through a series of open-ended clinical tasks using all available resources. They worked at their own pace, and the session was peer-led rather than facilitator-driven. For a group of 35 students, two facilitators observed the sessions. The sessions utilised infinite-canvas whiteboard software. Each group member was encouraged to participate actively and work together to complete the 15-20 tasks. The session ran for 2 hours and concluded with a post-session quiz identical to the pre-session quiz. We obtained subjective feedback from students on their experience with CAAP and evaluated the objective benefit of the sessions through the quiz results. Qualitative feedback from students was generally positive, with students feeling that the sessions increased engagement, clinical understanding, and confidence. They found the small-group aspect beneficial and the technology easy to use and intuitive. They also liked building a resource for their future revision, something unique to CAAP compared with PBL, in which our students participate weekly. Preliminary quiz results showed improvement from pre- to post-session; however, further statistical analysis will occur once all sessions are complete (final session to run December 2022) to determine significance. As a post-graduate healthcare professional programme, we have a strong focus on self-directed learning. Whilst PBL has been a mainstay of our curriculum since its inception, there are limitations and concerns about its future in view of student engagement and facilitator availability. Whilst CAAP is not TBL, it draws on the benefits of peer-led, small-group work with pre- and post-session team-based quizzes. The pilot of these sessions has shown that students are engaged by CAAP and that they can make significant progress in clinical reasoning in a short amount of time. This can be achieved with a high student-to-facilitator ratio.

Keywords: problem based learning, team based learning, active learning, peer-to-peer teaching, engagement

Procedia PDF Downloads 80
1110 Active Learning Methods in Mathematics

Authors: Daniela Velichová

Abstract:

Plenty of ideas on how to adopt active learning methods in education are available nowadays. Mathematics is a subject where the active involvement of students is particularly necessary in order to achieve desirable results in terms of sustainable knowledge and deep understanding. The present article is based on the outcomes of the Erasmus+ project DrIVE-MATH, which aimed at developing a novel and integrated framework for teaching maths classes in engineering courses at the university level. It is fundamental for students to have agile minds from the early years of their academic life. They must be prepared to adapt to their future working environments, where enterprises' views are constantly evolving, where all collaborate in teams, and where relations between peers serve the well-being of the whole: workers and company profit alike. This reality imposes new requirements on higher education in terms of the adoption of different pedagogical methods, such as project-based and active-learning methods, within course curricula. Active learning methodologies are regarded as an effective way to prepare students to meet the challenges posed by enterprises and to help them build critical thinking, analytic reasoning, and insight into complex problems from different perspectives. Fostering learning-by-doing activities in the pedagogical process can help students achieve learning independence, as they can acquire deeper conceptual understanding by experimenting with abstract concepts in a more interesting, useful, and meaningful way. Clear information about learning outcomes and goals can help students take more responsibility for their learning results. The active learning methods implemented by the project team members in their teaching practice, eduScrum and Jigsaw in particular, proved to provide better scientific and soft-skills support to students than classical teaching methods. The eduScrum method enables teachers to create a working environment that stimulates students' working habits and self-initiative, as they become aware of their responsibilities within the team, their own acquired knowledge, and their ability to solve problems independently, though in collaboration with other team members. This method enhances collaborative learning, as students work in teams toward a common goal, knowledge acquisition, while interacting with each other and being evaluated individually. Teams of 4-5 students work together on a list of problems, a 'sprint'; each member is responsible for solving one of them, while the group leader, the 'master', is responsible for the whole team. A similar principle is behind the Jigsaw technique, where the classroom activity makes students dependent on each other to succeed. Students are divided into groups, and assignments are split into pieces, which must be assembled by the whole group to complete the (Jigsaw) puzzle. In this paper, an analysis of students' perceptions concerning the achievement of deeper conceptual understanding in mathematics and the development of soft skills, such as self-motivation, critical thinking, flexibility, leadership, responsibility, teamwork, negotiation, and conflict management, is presented. Some new challenges brought by introducing active learning methods into basic mathematics courses are discussed. In addition, a few examples of sprints developed and used in teaching basic maths courses at technical universities are presented.

Keywords: active learning methods, collaborative learning, conceptual understanding, eduScrum, Jigsaw, soft skills

Procedia PDF Downloads 54
1109 Water Security and Transboundary Issues for Food Security of Ethiopia: The Case of the Nile River

Authors: Kebron Asnake

Abstract:

Water security and transboundary issues are critical concerns for countries, particularly in regions where shared water resources are significant. This research explores the challenges and opportunities related to water security and transboundary issues in Ethiopia, using the case of the Nile River. Ethiopia, as a riparian country of the Nile River, faces complex water security issues due to its dependence on this transboundary water resource. This paper analyzes the various factors that affect water security in Ethiopia, including population growth, climate change, and competing water demands. The study examines the challenges linked to transboundary water management of the Nile River. It delves into the complexities of negotiating water allocations and addressing potential conflicts among the downstream riparian countries. The paper also discusses the role of international agreements and cooperation in promoting sustainable water resource management. Additionally, the paper highlights the opportunities for collaboration and sustainable development that arise from transboundary water management. It explores the potential for joint investments in water infrastructure, hydropower generation, and irrigation systems that can contribute to regional economic growth and water security. Furthermore, the study emphasizes the need for integrated water management approaches in Ethiopia to ensure the equitable and sustainable use of the Nile River's waters. It highlights the importance of involving stakeholders from diverse sectors, including agriculture, energy, and environmental conservation, in decision-making processes. By presenting the case of the Nile River in Ethiopia, this paper contributes to the understanding of water security and transboundary issues. It underscores the significance of regional cooperation and informed policy-making in addressing the challenges and opportunities presented by transboundary water resources, and serves as a foundation for further research and policy work on water management in Ethiopia and other regions facing similar challenges.

Keywords: water, health, agriculture, medicine

Procedia PDF Downloads 85
1108 Carbon, Nitrogen Doped TiO2 Macro/Mesoporous Monoliths with High Visible Light Absorption for Photocatalytic Wastewater Treatment

Authors: Paolo Boscaro, Vasile Hulea, François Fajula, Francis Luck, Anne Galarneau

Abstract:

TiO2-based monoliths with hierarchical macropores and mesopores have been synthesized following a novel one-pot sol-gel synthesis method. Taking advantage of the spinodal separation that occurs between titanium isopropoxide and an acidic solution in the presence of polyethylene oxide polymer, monoliths with homogeneous interconnected macropores 3 μm in diameter and mesopores of ca. 6 nm (surface area 150 m²/g) are obtained. Furthermore, these monoliths contain some carbon and nitrogen (as shown by XPS and elemental analysis), which considerably reduce the titanium oxide energy gap and enable light absorption up to 700 nm wavelength. XRD shows that anatase is the dominant phase, with a small amount of brookite. The enhanced light absorption and high porosity of the monoliths are responsible for a remarkable photocatalytic activity. Wastewater treatment was performed in a closed reactor under sunlight using orange G dye as the target molecule. Glass reactors guarantee that most of the UV radiation of the solar spectrum (up to almost 300 nm) is excluded. TiO2 nanoparticles (P25, usually used in photocatalysis under UV) and un-doped TiO2 monoliths with similar porosity were used for comparison. The C,N-doped TiO2 monolith achieved complete colorant degradation in less than 1 hour, whereas 10 h were necessary for 40% colorant degradation with P25 and the un-doped monolith. An experiment performed in the dark showed that only 3% of the molecules were adsorbed in the C,N-doped TiO2 monolith within 1 hour. The much higher efficiency of the C,N-doped TiO2 monolith in comparison with P25 and the un-doped monolith proves that doping TiO2 is an essential issue and that nitrogen and carbon are effective dopants. Monoliths offer multiple advantages with respect to nanometric powders: the sample can easily be removed from the batch (no need to filter or centrifuge). Moreover, flow reactions can be set up with cylindrical or flat monoliths by simple sheathing or by locking them with O-rings.
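
Assuming pseudo-first-order degradation kinetics (our assumption, common for dilute dye photocatalysis but not stated in the abstract), the reported conversions translate into rate constants as follows:

```python
import math

def first_order_k(conversion, hours):
    """Pseudo-first-order rate constant k from C/C0 = exp(-k t)."""
    return -math.log(1.0 - conversion) / hours

k_p25 = first_order_k(0.40, 10.0)          # 40% of orange G in 10 h
k_doped = first_order_k(0.99, 1.0)         # "complete" taken as ~99% in 1 h

print(f"P25 / un-doped monolith: k ~ {k_p25:.3f} 1/h")   # ~0.051 1/h
print(f"C,N-doped monolith:      k ~ {k_doped:.2f} 1/h") # ~4.6 1/h
print(f"ratio: ~{k_doped / k_p25:.0f}x faster")          # ~90x
```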

Keywords: C-N doped, sunlight photocatalytic activity, TiO2 monolith, visible absorbance

Procedia PDF Downloads 231
1107 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition

Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang

Abstract:

Hybridization of the directed laser deposition (DLD) process, using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures, can relieve the tensile residual stresses incurred in the process. To investigate this stress-relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single-bead deposition of Ti-6Al-4V alloy, with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead's surface using the VDFLUX Fortran subroutine. The bead's cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. In addition, the roller is cooled internally by axial water flow, represented in the model through convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on the Johnson-Cook model. These aspects of the material behavior are captured in the FE software using the subroutines VUMAT (elastoplastic behavior), VUHARD (yield stress), and VUEXPAN (thermal strain). The roller is assumed to be elastic and does not undergo any plastic deformation. Contact friction at the roller-bead interface is also considered. Based on the thermal results for the bead, the distance between the roller and the deposition nozzle (the roller offset) can be determined to ensure that rolling occurs around the beta-transus temperature of the Ti-6Al-4V alloy. The roller offset and the nominal bead height reduction are identified as crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease to about 52% of their initial level throughout the deposited bead due to in-situ rolling. This model can be used to optimize the rolling parameters to minimize residual stresses in the hybrid DLD process with in-situ micro-rolling.
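
The Johnson-Cook flow stress model referenced above has the standard literature form (the calibrated Ti-6Al-4V constants used in the study are not given in the abstract):

```latex
\sigma_y \;=\; \bigl(A + B\,\varepsilon_p^{\,n}\bigr)\,
               \bigl(1 + C \ln \dot{\varepsilon}^{\,*}\bigr)\,
               \bigl(1 - T^{*\,m}\bigr),
\qquad
\dot{\varepsilon}^{\,*} = \frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0},
\qquad
T^{*} = \frac{T - T_{\text{room}}}{T_{\text{melt}} - T_{\text{room}}}
```

where A is the quasi-static yield stress, B and n describe strain hardening, C the strain-rate sensitivity relative to the reference rate, and m the thermal softening up to the melting temperature.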

Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model

Procedia PDF Downloads 109
1106 China-Pakistan Nexus and Its Implication for India

Authors: Riddhi Chopra

Abstract:

While China's friendship with a number of countries has waxed and waned over the decades, the Sino-Pak relationship is said to have withstood the vicissitudes of larger international politics as well as changes in regional and domestic currents. Pakistan was one of the first countries to recognize the People's Republic of China, thus providing China with a corridor into the energy-rich Muslim states; this was reciprocated with a continual stream of no-strings-attached military hardware and defense-related assistance from Beijing. The joint enmity towards India also provided the initial thrust to a burgeoning Sino-Pak friendship. This paper provides a profound analysis of the strategic relationship between China and Pakistan and examines India as a determining factor. The Pakistan-China strategic relationship has conventionally been viewed by India as a zero-sum game, wherein any gains accrued by Pakistan or China through their partnership are seen as a direct detriment to the evolution of India-Pakistan or India-China relations. The paper evaluates the various factors that were crucial for the synthesis of such a strong relationship and presents a comprehensive study of the various policies and programs that have been undertaken by the two countries to tie India to South Asia and reduce its sphere of influence. The geographic dynamics are said to breed a natural coalition, dominating the strategic ambitions of both Beijing and Islamabad and hence directing their relationship. In addition to the obvious geopolitical factors, there are several dense collaborations between the two nations, knitting a relatively close partnership. Moreover, an attempt has been made to assess the irritants in China-Pak relations and the initiatives taken by the two countries to further strengthen them. Current trends in diplomatic, economic, and defense cooperation, along with the staunch affinity rooted in history and consistent geo-strategic interests, point to a strong and strengthening relationship, significant in directing India's foreign and security policies. This paper seeks to analyze the changing power dynamics of the China-Pak nexus with external actors such as the US and India, each with ulterior motives of their own, and to predict the change in power dynamics among the four countries.

Keywords: China, Pakistan, India, strategy

Procedia PDF Downloads 268
1105 Analyzing the Emergence of Conscious Phenomena by the Process-Based Metaphysics

Authors: Chia-Lin Tu

Abstract:

Towards the end of the 20th century, a reductive picture came to dominate philosophy of science and philosophy of mind. Reductive physicalism claims that all entities and properties in this world can eventually be reduced to the physical level, meaning that all phenomena in the world can be explained by the laws of physics. However, quantum physics provides another picture. It says that the world is undergoing change and that the energy of change is, in fact, the most important constituent of world phenomena. Quantum physics thus gives us another point of view from which to reconsider the reality of the world. Throughout the history of philosophy of mind, reductive physicalism has likewise tried to reduce conscious phenomena to physical particles, claiming that the reality of consciousness is composed of physical particles. However, reductive physicalism is unable to explain conscious phenomena and mind-body causation: conscious phenomena, e.g., qualia, are not composed of physical particles. The currently popular theory of consciousness is emergentism. Emergentism remains an ambiguous concept, with no clear account of how conscious phenomena emerge from physical particles. In order to understand the emergence of conscious phenomena, quantum physics seems an appropriate analogy. Quantum physics claims that physical particles and processes together construct the most fundamental field of world phenomena, and thus all natural processes, i.e., wave functions, occur within it. The traditional space-time description of classical physics is overtaken by the wave-function story. If this methodology of quantum physics works well to explain world phenomena, then it is not necessary to describe the world in terms of physical particles, as classical physics did. Conscious phenomena are one kind of world phenomenon. Scientists and philosophers have tried to explain their reality, but no conclusion has been reached. Quantum physics tells us that the fundamental field of the natural world is a process-based metaphysics. The emergence of conscious phenomena is only possible within this process metaphysics, and it has clearly occurred. Within the framework of quantum physics, we are able to take emergence more seriously, and thus we can account for such emergent phenomena as consciousness. By questioning the particle-mechanistic concept of the world, the new metaphysics offers an opportunity to reconsider the reality of conscious phenomena.

Keywords: quantum physics, reduction, emergence, qualia

Procedia PDF Downloads 164
1104 Predicting Long-Term Performance of Concrete under Sulfate Attack

Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki

Abstract:

Cement-based materials are used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack is an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling, and cracking of the cement matrix in concrete. These processes induce a reduction of mechanical properties and a decrease in the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation through crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), and their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, do not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanism of damage to concrete in sulfate environments. In this study, modelling work was conducted to investigate the chemical degradation of cementitious materials in various sulfate environments; both internal and external sulfate attack are considered in the simulations. For internal sulfate attack, the hydrate assemblage and pore solution chemistry of Portland cement (PC) and slag co-hydrating with sodium sulfate solution are calculated to determine the degradation of the PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength. The deterioration mechanism of cementitious materials co-hydrating with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite; their formation strongly depends on sodium sulfate concentration and temperature. For external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated with a reactive transport model. The reactive transport model is verified against experimental data in terms of the phase assemblage of various cementitious materials, with spatial distribution, for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.
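
As a schematic of the reactive-transport idea (the actual calculations couple PHREEQC-style thermodynamics to transport; the sketch below reduces this to a single diffusing sulfate species with a first-order sink standing in for ettringite precipitation, and all parameter values are illustrative assumptions):

```python
import numpy as np

def sulfate_ingress(n=100, L=0.05, D=1e-11, k=1e-8, c_bound=100.0,
                    years=10.0):
    """Explicit FD solution of dc/dt = D d2c/dx2 - k*c on [0, L] (m).
    c: sulfate concentration (mol/m^3); k: first-order precipitation sink."""
    dx = L / n
    dt = 0.4 * dx ** 2 / D                 # explicit stability limit
    steps = int(years * 365.25 * 24 * 3600 / dt)
    c = np.zeros(n)
    for _ in range(steps):
        c[0] = c_bound                     # exposed surface concentration
        lap = np.zeros(n)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        c += dt * (D * lap - k * c)
        c[-1] = c[-2]                      # zero-flux inner boundary
    return c

profile = sulfate_ingress()
# profile approximates the sulfate penetration profile into the concrete.
```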

Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC

Procedia PDF Downloads 163
1103 Urea and Starch Detection on a Paper-Based Microfluidic Device Enabled on a Smartphone

Authors: Shashank Kumar, Mansi Chandra, Ujjawal Singh, Parth Gupta, Rishi Ram, Arnab Sarkar

Abstract:

Milk is one of our most basic and primary sources of food and energy, as we start consuming it from birth. Checking the quality and purity of milk, and the concentrations of its constituents, is therefore a necessary step. Considering the importance of milk purity for human health, the following study was carried out to simultaneously detect and quantify different adulterants, such as urea and starch, in milk with the help of a paper-based microfluidic device integrated with a smartphone. The detection of the urea and starch concentrations is based on the principle of colorimetry, while the fluid flow in the device is based on capillary action in porous media. The microfluidic channel proposed in the study is equipped with a specialized detection zone and employs colorimetric indicators that undergo a visible color change when the milk comes into contact and reacts with a set of reagents, confirming the presence of the adulterants. In our proposed work, we used iodine to detect the percentage of starch in the milk, whereas for urea we used p-DMAB. A direct correlation was found between the color change intensity and the concentration of the adulterants. A calibration curve was constructed relating color intensity to the corresponding starch and urea concentrations. The device's low-cost production and easy disposability make it highly suitable for widespread adoption, especially in resource-constrained settings. Moreover, a smartphone application was developed to detect, capture, and analyze the change in color intensity due to the presence of adulterants in the milk. The low-cost nature of the smartphone-integrated paper-based sensor makes it an attractive solution for widespread use: it is affordable, simple to use, and does not require specialized training, making it an ideal tool for regulatory bodies and concerned consumers.
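
A colorimetric calibration of this kind is typically a linear fit of measured color intensity against known standard concentrations, which is then inverted for unknown samples. A minimal sketch follows (the calibration numbers are made up, purely illustrative):

```python
import numpy as np

# Hypothetical calibration standards: urea concentration (% w/v) vs.
# mean color-channel intensity extracted from smartphone images.
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
intensity = np.array([12.0, 34.0, 55.0, 78.0, 99.0])

slope, intercept = np.polyfit(conc, intensity, 1)  # linear calibration

def predict_concentration(i_sample):
    """Invert the calibration curve for an unknown sample."""
    return (i_sample - intercept) / slope

print(predict_concentration(60.0))  # ~1.1 % w/v for this fake curve
```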

Keywords: paper based microfluidic device, milk adulteration, urea detection, starch detection, smartphone application

Procedia PDF Downloads 65
1102 Study of Electro-Chemical Properties of ZnO Nanowires for Various Applications

Authors: Meera A. Albloushi, Adel B. Gougam

Abstract:

The development in the field of piezoelectrics has led to renewed interest in ZnO nanowires (NWs) as a promising material for nanogenerator devices. They can be used as a power source for self-powered electronic systems offering higher density, higher efficiency, longer lifetime, and lower fabrication cost. Highly aligned ZnO nanowires appear to exhibit higher performance than non-aligned ones. The purpose of this study was to grow ZnO nanowires and to investigate their electrical and chemical properties for various applications. They were grown on silicon (100) and glass substrates using a low-temperature, non-hazardous method: aqueous chemical growth (ACG). ZnO (non-doped) and AZO (aluminum-doped) seed layers were deposited by RF magnetron sputtering under an argon pressure of 3 mTorr at a deposition power of 180 W; deposition times were selected to obtain thicknesses in the range of 30 to 125 nm. Some of the films were subsequently annealed. The substrates were immersed, tilted, in an equimolar solution of zinc nitrate and hexamine (HMTA) at 0.02 M and 0.05 M, in the temperature range of 80 to 90 °C for 1.5 to 2 hours. X-ray diffraction shows strong peaks at 2θ = 34.2° for the ZnO films, indicating a preferred c-axis wurtzite hexagonal (002) orientation. The surface morphology of the films was investigated by atomic force microscopy (AFM), which confirmed the uniformity of the films, with roughness within a 5 nm range. Scanning electron microscopes (SEM; Quanta FEG 250, Quanta 3D FEG, Nova NanoSEM 650) were used to characterize both the ZnO films and the NWs. SEM images show a forest of ZnO NWs grown vertically, with lengths of up to 2000 nm and diameters of 20-300 nm, and confirm that the role of the seed layer is to enhance the vertical alignment of the ZnO NWs at a solution pH of 5-6. Electrical and optical properties of the NWs were also examined using electric force microscopy (EFM). After growing the ZnO NWs, the second step of this study is to develop the nanogenerator in order to determine its energy conversion efficiency and power output.
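The (002) assignment of the 34.2° peak can be cross-checked with Bragg's law; the short sketch below assumes a Cu Kα source (λ ≈ 1.5406 Å), which the abstract does not state, and compares the result with the bulk ZnO lattice constant c ≈ 5.21 Å.

```python
# Bragg's law check of the (002) peak, assuming Cu Ka radiation.
# For wurtzite ZnO, d(002) = c/2, so the computed spacing should be ~2.60 A.
import math

wavelength = 1.5406            # Cu Ka wavelength, angstroms (assumed source)
two_theta = 34.2               # measured peak position, degrees

theta = math.radians(two_theta / 2)
d = wavelength / (2 * math.sin(theta))   # Bragg's law: n*lambda = 2*d*sin(theta), n = 1

print(f"d(002) = {d:.3f} A  ->  c = {2 * d:.3f} A")
# d ~ 2.62 A, c ~ 5.24 A, consistent with c-axis-oriented wurtzite ZnO
```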

Keywords: ZnO nanowires (NWs), aqueous chemical growth (ACG), piezoelectric NWs, energy harvesting

Procedia PDF Downloads 322
1101 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced per year in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per inhabitant per year. Because sewage sludge contains a large number of substances harmful to human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of alternative fuel production from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology covers the entire logistics chain, from sludge extraction, through mechanical moisture reduction to about 40% and transport to the pelletizing line, to drying for pelleting and the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years is chosen, corresponding to the expected lifetime of the critical components of the pelletizing line. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge. The analysis of the economic efficiency of the pellets is based on simulating the cash flows associated with the project over its lifetime. For a given required return on invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project is zero over the project lifetime; the investor then earns a return equal to the discount rate used to calculate the net present value. The calculations reflect a real business environment (taxes, tax depreciation, inflation, etc.), and the inputs use market prices. At the same time, the opportunity cost principle is respected: waste disposal via alternative fuels is credited with the avoided costs of waste disposal. The methodology also accounts for the emission allowances saved when coal is displaced by alternative (bio)fuel. Preliminary results of pellet production tests show that, after suitable modifications of the pelletizer, sufficiently high-quality pellets can be produced from sludge; a mixture of sludge and paper waste has proved to be an even more suitable pelleting material. Preliminary results of the economic analysis show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.
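The pricing logic described, searching for the product price at which the project NPV is zero, reduces in the simplest case (constant annual cash flows, no taxes or depreciation) to the closed form sketched below; all cash-flow figures are invented placeholders, and the full model would add taxes, depreciation, and inflation and solve for the price numerically.

```python
# Break-even pellet price sketch: the price at which NPV over the project
# lifetime is zero, so the investor earns exactly the discount rate.
# All cash-flow figures are illustrative placeholders, not project data.
investment = 2_000_000.0     # pelletizing line capex, EUR (assumed)
opex = 450_000.0             # annual operating cost, EUR (assumed)
avoided = 150_000.0          # avoided sludge-disposal cost per year (assumed)
output_gj = 60_000.0         # annual fuel output, GJ (assumed)
rate, years = 0.08, 10       # required return and project lifetime

# Present value of 1 EUR/year over the horizon (annuity factor)
annuity = sum(1 / (1 + rate) ** t for t in range(1, years + 1))

# NPV = -investment + annuity * (price * output_gj + avoided - opex) = 0
price = (investment / annuity + opex - avoided) / output_gj
print(f"break-even pellet price: {price:.2f} EUR/GJ")   # ~10 EUR/GJ here
```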

Keywords: alternative fuel, economic analysis, pelleting, sewage sludge

Procedia PDF Downloads 135
1100 Quest for an Efficient Green Multifunctional Agent for the Synthesis of Metal Nanoparticles with Highly Specified Structural Properties

Authors: Niharul Alam

Abstract:

The development of energy-efficient, economical, and eco-friendly synthetic protocols for metal nanoparticles (NPs) with tailor-made structural properties and biocompatibility is a highly cherished goal for researchers working in nanoscience and nanotechnology. In this context, green chemistry is highly relevant, and the 12 principles of Green Chemistry can be explored to develop practically implementable synthetic protocols. One of the most promising green chemical approaches to this end is the biogenic synthetic protocol, which utilizes non-toxic multifunctional reactants derived from natural, biological sources ranging from unicellular organisms to higher plants, often characterized as "medicinal plants". Over the past few years, a plethora of medicinal plants have been explored as sources of such multifunctional green chemical agents. In this presentation, we focus on the syntheses of stable monometallic Au and Ag NPs, as well as bimetallic Au/Ag alloy NPs with highly efficient catalytic properties, using an aqueous extract of the leaves of the Indian curry leaf plant (Murraya koenigii Spreng.; Fam. Rutaceae), which is extensively used in Indian traditional medicine and cuisine, as the green multifunctional agent. We have also studied the interaction between the synthesized metal NPs and surface-adsorbed fluorescent moieties, quercetin and quercetin glycoside, which are chemical constituents of the extract. This helped us to understand the surface properties of the metal NPs synthesized by this plant-based biogenic route and to propose a plausible mechanistic pathway, which may help in fine-tuning green chemical methods for the controlled synthesis of various metal NPs in the future. We observed that simple experimental parameters, e.g., the pH and temperature of the reaction medium and the concentrations of the multifunctional agent and precursor metal ions, play important roles in the biogenic synthesis of Au NPs with finely tuned structures.

Keywords: green multifunctional agent, metal nanoparticles, biogenic synthesis

Procedia PDF Downloads 431
1099 Intelligent Crop Circle: A Blockchain-Driven, IoT-Based, AI-Powered Sustainable Agriculture System

Authors: Mishak Rahul, Naveen Kumar, Bharath Kumar

Abstract:

Conceived as a high-end engine to revolutionise sustainable agri-food production, the intelligent crop circle (ICC) incorporates the Internet of Things (IoT), blockchain technology, and artificial intelligence (AI) to bolster resource efficiency, prevent waste, increase production volumes, and deliver sustainable solutions guided by long-term ecosystem conservation. The operating principle of the ICC relies on bringing together multidisciplinary, bottom-up collaborations between producers, researchers, and consumers. Key elements of the framework include IoT-based smart sensors for soil moisture, temperature, humidity, nutrients, and air quality, which provide timely data at short intervals; blockchain technology for data storage on a private chain, which maintains data integrity, traceability, and transparency; and AI-based predictive analysis, which forecasts resource utilisation, plant growth, and environmental conditions. These data and AI insights feed the ICC platform, whose decision support system (DSS) delivers recommendations through an easy-to-use mobile app or web-based interface, so that farmers can base their decisions on the pooled data. Building on data already available in farm management systems, the ICC platform interoperates readily with other IoT devices. ICC facilitates real-time connections and information sharing between users, including farmers, researchers, and industrial partners, enabling them to cooperate in farming innovation and knowledge exchange. Moreover, ICC supports sustainable agricultural practice by integrating gamification techniques to motivate adopters, deploying VR technologies to model and visualise 3D farm environments and conditions, framing field scenarios with VR headsets and real-time 3D engines, and leveraging edge technologies for secure and fast communication and collaboration between the users involved. Through blockchain-based marketplaces, ICC offers traceability from farm to fork, that is, from producer to consumer. It empowers informed decision-making through tailor-made recommendations generated by AI-driven analysis, and it democratises technology, enabling small-scale and resource-limited farmers to make their voices heard. It connects with traditional knowledge, brings together multi-stakeholder interactions, and establishes a participatory ecosystem that incentivises continuous development towards more sustainable agro-ecological food systems. This integrated approach leverages the power of emerging technologies to provide sustainable solutions for a resilient food system, supporting sustainable agriculture worldwide.
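As a toy illustration of the private-chain storage idea, the sketch below, an assumption rather than the ICC implementation, hash-links successive IoT sensor readings so that any later edit to a stored reading is detectable; field names and values are hypothetical.

```python
# Hash-chained sensor log sketch: each block's hash covers the previous block,
# so tampering with a stored reading breaks chain verification.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "reading": None, "ts": time.time()}]

def append_reading(reading):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "prev": block_hash(prev),
                  "reading": reading, "ts": time.time()})

def verify():
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

append_reading({"soil_moisture": 0.31, "temp_c": 24.8, "field": "plot-7"})
append_reading({"soil_moisture": 0.29, "temp_c": 25.1, "field": "plot-7"})
print("chain valid:", verify())          # True
chain[1]["reading"]["temp_c"] = 19.0     # tamper with a stored reading
print("chain valid:", verify())          # False: tampering detected
```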

Keywords: blockchain, internet of things, artificial intelligence, decision support system, virtual reality, gamification, traceability, sustainable agriculture

Procedia PDF Downloads 42
1098 The Importance of School Culture in Supporting Student Mental Health Following the COVID-19 Pandemic: Insights from a Qualitative Study

Authors: Rhiannon Barker, Gregory Hartwell, Matt Egan, Karen Lock

Abstract:

Background: Evidence suggests that mental health (MH) issues among children and young people (CYP) in the UK are on the rise. Of particular concern are data indicating that the pandemic, together with the impact of school closures, has accentuated already pronounced inequalities; children from families on low incomes or from black and minority ethnic groups are reportedly more likely to have been adversely affected. This study aimed to identify specific support that may facilitate the building of a positive school climate and protect student mental health, particularly in the wake of school closures following the pandemic. It has important implications for integrated working between schools and statutory health services. Methods: The research comprised three parts: scoping, case studies, and a stakeholder workshop to explore and consolidate results. The scoping phase included a literature review alongside interviews with a range of stakeholders from government, academia, and the third sector. Case studies were then conducted in two London state schools. Results: Our research identified how student MH was being affected by a range of factors located at different system levels, both internal to the school and in the wider community. School climate, relating both to a shared system of beliefs and values and to broader factors including style of leadership, teaching, discipline, safety, and relationships, played a role in the experience of school life and, consequently, in the MH of both students and staff. Participants highlighted the importance of a whole-school approach, of ensuring that support for student MH is not separated from academic achievement, and of identifying and applying universal measurement systems to establish levels of MH need. Our findings suggest that a school's climate is influenced by the style and strength of its leadership, and that this climate, together with the mechanisms put in place to respond to MH needs (both statutory and non-statutory), plays a key role in supporting student MH. Implications: Schools in England have a responsibility to decide on the nature of the MH support provided for their students, and there is no requirement for them to report centrally on the form this provision takes. The reality on the ground, as our study suggests, is that MH provision varies significantly between schools, particularly in relation to 'lower' levels of need not covered by statutory requirements. A valid concern is that, among the huge raft of options schools have for supporting CYP wellbeing, too much is left to chance. Work to support schools in rebuilding their cultures after the lockdowns must include the means to identify and promote appropriate tools and techniques for the regular measurement of student MH. This will help establish both the scale of the problem and the effectiveness of the response. A strong vision from a school's leadership team that emphasises the importance of student wellbeing, running alongside (but not overshadowed by) academic attainment, should help shape a school climate that promotes beneficial MH outcomes. The sector should also be provided with support to improve the consistency and efficacy of MH provision in schools across the country.

Keywords: mental health, schools, young people, whole-school culture

Procedia PDF Downloads 63
1097 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections entail high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justifying the resources required for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52,000 posters have been digitised. In the process of building a digital installation to valorise the collection, one objective was to develop an algorithm capable of predicting the next poster to show given the ones already displayed. The work presented here describes the steps taken to build an algorithm able to automatically create sequences of posters reflecting the associations performed by curators and professional designers. This challenge has similarities with the domain of song playlist algorithms, where artificial intelligence techniques, and more specifically deep learning, have recently been used to facilitate playlist generation; promising results have been obtained with Recurrent Neural Networks (RNNs) trained on manually generated playlists and paired with clusters of features extracted from songs. We applied the same principles to a more challenging medium: posters. First, a convolutional autoencoder was trained to extract poster features, using the 52,000 digital posters as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster from the previous ones; its training set was composed of poster sequences extracted from a collection of books, dedicated to displaying posters, from the Gestaltung Museum of Zurich. Finally, within the predicted cluster, the poster closest to the previous poster is selected, with proximity computed as the mean squared distance between poster features. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences; the manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the strongest graphic line to the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The randomly produced sequences were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer's sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to these results, the proposed model generates better poster sequencing than random sampling and is sometimes able to outperform a professional designer. As a next step, the algorithm should offer the possibility of creating sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and to provide a tool for curating large data sets, with a permanent renewal of the presented content.
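A condensed sketch of the selection pipeline is given below under stated assumptions: random vectors stand in for the trained autoencoder features, the RNN's cluster prediction is stubbed, and only the clustering and nearest-poster steps are shown.

```python
# Cluster-then-select sketch: k-means on (stand-in) poster features, then,
# within the predicted cluster, pick the poster with the smallest mean
# squared distance to the previously shown poster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(5_000, 64))   # stand-in for autoencoder features

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_

def next_poster(prev_idx, predicted_cluster):
    """Within the RNN-predicted cluster, pick the poster nearest the previous one."""
    candidates = np.flatnonzero(labels == predicted_cluster)
    candidates = candidates[candidates != prev_idx]
    dists = ((features[candidates] - features[prev_idx]) ** 2).mean(axis=1)
    return int(candidates[dists.argmin()])

# In the full system the cluster comes from the trained RNN; here we stub it
# with the previous poster's own cluster.
prev = 1234
print(next_poster(prev, predicted_cluster=labels[prev]))
```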

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 184
1096 Environmental Radioactivity Analysis by a Sequential Approach

Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab

Abstract:

Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and to assess the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples can help to get around this difficulty, but, unfortunately, new issues are raised by gamma-ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, without quantification and in a short time, the gamma rays of a low-count source. This method does not require a pulse-height spectrum acquisition, as is usually adopted in gamma spectrometry measurements. It is based on a chronological record of each detected photon, through simultaneous measurement of its energy ε and its arrival time τ at the detector, the parameter pair [ε, τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main object of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for appropriate health oversight of the public and of the workers concerned, the analysis has been extended to obtain a reliable quantification of the radionuclides present in environmental samples. For illustration, we consider as an example the problem of detection and quantification of ²³⁸U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by a Ge(HP) (high-purity germanium) semiconductor junction, of the 63 keV gamma rays emitted by ²³⁴Th (progeny of ²³⁸U). The generated EMS series are analyzed by Bayesian inference. The application of the sequential Bayesian approach to environmental radioactivity analysis offers the possibility of reducing the measurement time without requiring large environmental samples, and consequently avoids the inconveniences attached to them. The work is still in progress.
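A minimal sketch of the sequential analysis, under assumed models rather than the authors' exact likelihoods, accumulates the log Bayes factor for a 63 keV line plus flat background (H1) against flat background alone (H0) as each EMS event arrives; the line fraction, detector resolution, and simulated event stream are all illustrative.

```python
# Sequential Bayesian detection sketch on an event-mode sequence: each photon
# energy updates the log Bayes factor H1 (line + background) vs H0 (background).
import numpy as np

rng = np.random.default_rng(1)
e_line, sigma = 63.0, 0.5        # 234Th line energy and assumed resolution, keV
e_lo, e_hi = 20.0, 120.0         # analysed energy window, keV
line_fraction = 0.10             # assumed line fraction of events under H1

def density_h0(e):               # flat background
    return np.full_like(e, 1.0 / (e_hi - e_lo))

def density_h1(e):               # background plus Gaussian line
    gauss = np.exp(-0.5 * ((e - e_line) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (1 - line_fraction) * density_h0(e) + line_fraction * gauss

# Simulated EMS energies: mostly background with a weak 63 keV component
n = 2_000
energies = np.where(rng.random(n) < 0.08,
                    rng.normal(e_line, sigma, n),
                    rng.uniform(e_lo, e_hi, n))

log_bf = np.cumsum(np.log(density_h1(energies) / density_h0(energies)))
decision = int(np.argmax(log_bf > np.log(100.0)))   # stop at Bayes factor 100
print(f"source declared after {decision} events (log BF = {log_bf[decision]:.1f})")
```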

Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method

Procedia PDF Downloads 495
1095 Prospective Analysis of Micromobility in the City of Medellín

Authors: Saúl Rivero, Estefanya Marín, Katherine Bolaño, Elena Urán, Juan Yepes, Andrés Cossio

Abstract:

Medellín is the Colombian city with the best public transport system in the country, made up of two metro lines, five Metrocable lines, two BRT-type bus lines, and a tram. Despite this, the Aburrá Valley, the area in which the city is located, has about 3,000 km of roads, which, for its population of 3.2 million inhabitants, gives an indicator of roughly 900 meters of road per 1,000 inhabitants, lower than the country's average of approximately 3,900 meters. In addition, given that Medellín has approximately one vehicle for every three inhabitants, problems of congestion and environmental pollution have worsened over the years. Owing to the limitations of physical space and the low public investment in road infrastructure, it is therefore necessary to opt for mobility alternatives suited to these conditions. Among the options for the city is what is known as micromobility: small and light means of transport, used for short distances and powered by electrical energy, such as skateboards and bicycles. Taking the above into account, this work analyzed the current and future state of micromobility in the city of Medellín through a prospective analysis supported by a PEST analysis, cross-impact matrices, influence-dependence analysis, and the actors' game technique. The analysis was carried out for two future scenarios: a baseline one and an optimistic one. As a result, it was determined that micromobility, as an alternative social practice for mobility in the city of Medellín, has favorable conditions, since the local government has adopted strategies such as the Metropolitan Bicycle Master Plan of the Valle de Aburrá and the plan "Bicycle paths in the city: more public spaces for life", under which 400 kilometers of bicycle paths are projected by the year 2030, when 10% of all trips in the region are expected to be made by bicycle. The total-trips target appears achievable, while the target for kilometers of bike paths would be close to being met.
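For readers unfamiliar with cross-impact matrices, the sketch below shows the influence/dependence reading used in such prospective studies: row sums measure how strongly a variable drives the system, column sums how strongly it is driven. The variables and scores are hypothetical, not those of the Medellín study.

```python
# Influence/dependence reading of a cross-impact matrix (MICMAC-style).
# impact[i, j] = influence of variable i on variable j (0 none ... 3 strong).
import numpy as np

variables = ["bike lanes", "public investment", "e-scooter adoption",
             "congestion", "air quality"]
impact = np.array([[0, 1, 3, 2, 2],
                   [3, 0, 2, 1, 1],
                   [1, 0, 0, 2, 2],
                   [1, 2, 1, 0, 3],
                   [0, 2, 0, 0, 0]])

influence = impact.sum(axis=1)    # how much each variable drives the system
dependence = impact.sum(axis=0)   # how much it is driven by the others

for name, inf, dep in zip(variables, influence, dependence):
    role = "driver" if inf > dep else "dependent"
    print(f"{name:18s} influence={inf} dependence={dep} -> {role}")
```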

Keywords: electric vehicles, micromobility, public transport, sustainable transport

Procedia PDF Downloads 201
1094 Quantification of River Ravi Pollution and Oxidation Pond Treatment to Improve the Drain Water Quality

Authors: Yusra Mahfooz, Saleha Mehmood

Abstract:

With increasing industrialization and urbanization in developing countries, effluents laden with diverse chemicals are contaminating rivers. The study examined the wastewater quality of the four drains (Outfall, Gulshan-e-Ravi, Hudiara, and Babu Sabu) that enter the river Ravi in Lahore, Pakistan. Different pollution parameters were analyzed, including pH, DO, BOD, COD, turbidity, EC, TSS, nitrates, phosphates, sulfates, and fecal coliform. Nearly all the measured parameters of the drains exceeded the permissible wastewater standards. In terms of pollution load, the Hudiara drain showed the highest load in terms of COD, i.e., 429.86 tons/day, while the Babu Sabu drain carried the highest load in terms of BOD, i.e., 162.82 tons/day (due to the industrial and sewage discharge into it). A lab-scale treatment (oxidation ponds) was designed to treat the wastewater of the Babu Sabu drain using a combination of different algae species, i.e., Chaetomorpha sutoria, Sirogonium sticticum, and Zygnema sp. Two pond geometries (horizontal and vertical) and three algal sample concentrations (25 g/3 L, 50 g/3 L, and 75 g/3 L) were tested. After 6 days of treatment, removal efficiencies of 80 to 97% were found across the pollution parameters. In the vertical pond, the maximum reductions achieved were: turbidity 62.12%, EC 79.3%, BOD 86.6%, COD 79.72%, FC 100%, nitrates 89.6%, sulfates 96.9%, and phosphates 85.3%. In the horizontal pond, the maximum reductions were: turbidity 69.79%, EC 83%, BOD 88.5%, COD 83.01%, FC 100%, nitrates 89.8%, sulfates 97%, and phosphates 86.3%. Overall, the greatest reduction after 6 days of treatment was obtained with the 50 g algae setup in the horizontal pond, owing to its larger surface area. The results indicate that algae-based treatment is highly energy-efficient and can improve drain water quality in a cost-effective manner.
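The percentage reductions quoted above follow the standard removal-efficiency formula, removal % = (C_in − C_out)/C_in × 100; a small helper applied to illustrative influent/effluent pairs (placeholders, not the study's raw data) is sketched below.

```python
# Removal-efficiency helper: percent reduction from influent to effluent.
def removal_percent(c_in, c_out):
    return (c_in - c_out) / c_in * 100.0

samples = {                      # parameter: (influent, effluent), mg/L (illustrative)
    "BOD": (350.0, 40.0),
    "COD": (530.0, 90.0),
    "phosphates": (9.5, 1.3),
}
for param, (c_in, c_out) in samples.items():
    print(f"{param}: {removal_percent(c_in, c_out):.1f} % removal")
```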

Keywords: oxidation pond, River Ravi pollution, river water quality, wastewater treatment

Procedia PDF Downloads 298
1093 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists in intensifying the use of an asset, while the latter is based on the use of waste streams and otherwise-lost ("fatal") energy (and utilities) as alternatives to standard inputs. Both modes in fact rely on a shift from business-as-usual functioning towards an alternative production structure, so that from a business point of view the distinction is not clear. To investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing, following the economic engineering literature, the capacity cost function exhibits economies of scale, so that demand pooling reduces overall expenses. Grassroots investment sizing decisions and ex-post pricing depend strongly on the design optimization phase for capacity sizing, whereas ex-post operational cost-sharing arrangements that minimize budgets are less dependent on production rates; value is then mainly design-driven. For resource substitution, the value of synergies stems from availability and is at risk with regard to both supplier and user load profiles and the market price of the standard input. The reduction of baseline input purchasing costs is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow the production of non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.
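The sub-additivity argument behind infrastructure sharing can be made concrete with a power-law capacity cost function C(q) = a·q^b with b < 1 (the classic "six-tenths rule" of plant costing); the sketch below, with illustrative parameters, shows that one asset sized for pooled demand costs less than two separately sized ones.

```python
# Economies-of-scale sketch: with C(q) = a * q**b and b < 1, the cost function
# is sub-additive, so pooling two demands into one shared asset saves money.
def capacity_cost(q, a=1_000.0, b=0.6):
    """Installed cost of an asset sized for capacity q (illustrative parameters)."""
    return a * q ** b

q1, q2 = 40.0, 60.0                       # two partners' capacity needs
separate = capacity_cost(q1) + capacity_cost(q2)
shared = capacity_cost(q1 + q2)
print(f"separate: {separate:,.0f}  shared: {shared:,.0f}  "
      f"saving: {1 - shared / separate:.0%}")   # ~24% saving here
```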

Keywords: business model, capacity, sourcing, synergies

Procedia PDF Downloads 174