Search results for: landscape approach
19 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework
Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard
Abstract:
Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention.
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem tree approach to suggest tools to address five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, including significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic as data volumes, instructions, and activities could overwhelm managers, and tools are not always applied in suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. Authors' initial framework can be tested and developed further.
Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health
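The problem-tree idea described above, mapping five common challenges to candidate tools based on context, can be sketched as a simple lookup. The specific challenge-to-tool pairings and the context filter below are hypothetical pairings assembled from the tool categories the review names, not the authors' actual framework.

```python
# Illustrative sketch of a problem-tree tool selector: each of the five
# common challenges named in the abstract maps to candidate diagnostic or
# programme tools. The pairings and the routine-data filter are
# hypothetical, drawn from the tools the review lists.

TOOLBOX = {
    "identifying populations": ["secondary analysis of routine data",
                                "geographical mapping",
                                "cross-sectional survey"],
    "understanding communities": ["key informant interviews",
                                  "focus group discussions"],
    "issues with service access and use": ["cross-sectional survey",
                                           "geographical mapping of services"],
    "improving services": ["programme guidance (WHO/UNICEF/USAID)",
                           "service improvement interventions"],
    "improving coverage": ["outreach", "reminder/recall",
                           "multi-component interventions"],
}

def suggest_tools(challenge, has_routine_data=True):
    """Return candidate tools for a challenge; drop routine-data analysis
    when no usable routine data exist (a context filter, for illustration)."""
    tools = TOOLBOX.get(challenge.lower(), [])
    if not has_routine_data:
        tools = [t for t in tools if "routine data" not in t]
    return tools

print(suggest_tools("identifying populations", has_routine_data=False))
```

A programme manager's context (here, only the presence of routine data) narrows the suggestion list; a fuller version would filter on more of the contextual factors the framework considers.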
Procedia PDF Downloads 139

18 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-yne Click Reaction
Authors: Leila Safazadeh, Brad Berron
Abstract:
Self-assembled monolayers have tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities in tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold surfaces (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction. Orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for preparation of low-density monolayers with a variety of functional groups. To date, carboxyl, amine, alcohol, and alkyl terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups.
Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure which allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect these data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: Molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is not a bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting of not only physical cues (seen in current imprinted monolayer techniques), but also the incorporation of complementary chemical cues. This is accomplished through a photo-click grafting of preassembled ligands around a protein template.
This conference is important for my development as a graduate student to broaden my appreciation of sensor development beyond surface chemistry.
Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting
Procedia PDF Downloads 226

17 Gamification Beyond Competition: The Case of DPG Lab Collaborative Learning Program for High-School Girls by GameLab KBTU and UNICEF in Kazakhstan
Authors: Nazym Zhumabayeva, Aleksandr Mezin, Alexandra Knysheva
Abstract:
Women's underrepresentation in STEM is critical, worsened by ineffective engagement in educational practices. UNICEF Kazakhstan and GameLab KBTU's collaborative initiatives aim to enhance female STEM participation by fostering an inclusive environment. Drawing on lessons from LEVEL UP's 2023 program, which featured a hackathon, the 2024 strategy pivots towards non-competitive gamification. Although the data from last year's project showed higher-than-average student engagement, observations and in-depth interviews with participants showed that the format was stressful for the girls, making them focus on points rather than on other values. This study presents a gamified educational system, the DPG Lab, aimed at incentivizing young women's participation in STEM through the development of digital public goods (DPGs). By prioritizing collaborative gamification elements, the project seeks to create an inclusive learning environment that increases engagement and interest in STEM among young women. The DPG Lab aims to find a solution that minimizes competition and supports collaboration. The project is designed to motivate female participants towards the development of digital solutions through an introduction to the concept of DPGs. It consists of a short online course, a simulation videogame, and a real-time online quest with an offline finale at the KBTU campus. The online course offers short video lectures on open-source development and DPG standards. The game facilitates the practical application of theoretical knowledge, enriching the learning experience. Learners can also participate in a quest that encourages participants to develop DPG ideas in teams by choosing missions throughout the quest path. At the offline quest finale, the participants will meet in person to exchange experiences and accomplishments without engaging in comparative assessments: the quest ensures that each team’s trajectory is distinct by design.
This marks a shift from competitive hackathons to a collaborative format, recognizing the unique contributions and achievements of each participant. The pilot batch of students is scheduled to commence in April 2024, with the finale anticipated in June. It is projected that this group will comprise 50 female high-school students from various regions across Kazakhstan. Expected outcomes include increased engagement and interest in STEM fields among young female participants, positive emotional and psychological impact through an emphasis on collaborative learning environments, and improved understanding and skills in DPG development. GameLab KBTU intends to undertake a hypothesis evaluation, employing a methodology similar to that utilized in the preceding LEVEL UP project. This approach will encompass the compilation of quantitative metrics (conversion funnels, test results, and surveys) and qualitative data from in-depth interviews and observational studies. For comparative analysis, a select group of participants from the previous year's project will be recruited to engage in the DPG Lab. By developing and implementing a gamified framework that emphasizes inclusion, engagement, and collaboration, the study seeks to provide practical knowledge about effective gamification strategies for promoting gender diversity in STEM. The expected outcomes of this initiative can contribute to the broader discussion on gamification in education and gender equality in STEM by offering a replicable and scalable model for similar interventions around the world.
Keywords: collaborative learning, competitive learning, digital public goods, educational gamification, emerging regions, STEM, underprivileged groups
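Conversion-funnel metrics of the kind mentioned for the hypothesis evaluation can be computed in a few lines. The stage names and participant counts below are hypothetical examples, not LEVEL UP or DPG Lab data.

```python
# Minimal sketch of a conversion-funnel computation: step-to-step and
# overall conversion rates over ordered (stage, count) pairs.
# Stage names and counts are hypothetical illustrations.

def funnel_conversion(stages):
    """Return per-step conversion rates and the overall rate."""
    report = []
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        report.append((prev_name + " -> " + name, n / prev_n))
    overall = stages[-1][1] / stages[0][1]
    return report, overall

stages = [
    ("registered", 400),
    ("started online course", 260),
    ("finished course and game", 180),
    ("joined team quest", 120),
    ("attended offline finale", 90),
]
report, overall = funnel_conversion(stages)
for step, rate in report:
    print("%-45s %.0f%%" % (step, 100 * rate))
print("overall conversion: %.1f%%" % (100 * overall))
```

Comparing such per-step rates between the 2023 hackathon cohort and the DPG Lab cohort is one way the planned quantitative comparison could be operationalised.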
Procedia PDF Downloads 62

16 Employee Engagement
Authors: Jai Bakliya, Palak Dhamecha
Abstract:
Today, customer satisfaction is given utmost priority in every industry, but in the hospitality industry this applies even more, as employees come into direct contact with customers while providing services. Employee engagement is a relatively new concept adopted by human resource departments that influences customer satisfaction. To satisfy customers, it is necessary to ensure that the employees in the organisation are satisfied and engaged enough in their work to meet the company's expectations and contribute to achieving the company's goals and objectives. After all, employees are the human capital of the organisation. Employee engagement has become a top business priority for every organisation. In this fast-moving economy, business leaders know that having a high-potential, high-performing workforce is important for growth and survival. They recognise that a highly engaged workforce can increase innovation, productivity, and performance, while reducing costs related to retention and hiring in highly competitive talent markets. But while most executives see a clear need to improve employee engagement, many have yet to develop tangible ways to measure and tackle this goal. Employee engagement is an approach applied to establish an emotional connection between an employee and the organisation, ensuring the employee's commitment to their work, which affects the productivity and overall performance of the organisation. The study was conducted in the hospitality industry, with a popular branded hotel chosen as the sample unit. Both qualitative and quantitative data were collected from respondents. It was found that the employee engagement level of the organisation (hotel) is quite low.
This means that employees are not emotionally connected with the organisation, which may in turn affect their performance. It is important to note that in the hospitality industry, individual performance, specifically emotional engagement, is critical; a low engagement level may therefore contribute to low organisational performance. One objective of this study was to identify the employee engagement level; further objectives were to explore the factors impeding employee engagement and the factors facilitating it. In the hospitality industry, where people tend to work for as long as 16 to 18 hours, concepts like employee engagement are essential: employees tire of their routine jobs, and where job rotation is not possible, employee engagement acts as a solution. The study was conducted at the Trident Hotel, Udaipur, on a sample of 30 in-house employees from six departments: Accounts and General, Front Office, Food & Beverage Service, Housekeeping, Food & Beverage Production, and Engineering. The research instrument was a questionnaire, and the data collection source was primary. Trident Udaipur is one of the busiest hotels in Udaipur, with a guest occupancy rate of nearly 80%. Due to this high occupancy rate, the hotel staff remain busy and occupied in their work at all times, working for their remuneration only. As a result, they have no encouragement for their work, nor are they interested in going the extra mile for the organisation. The study results show that working environment factors, including recognition and appreciation, employees' opinions, counselling, feedback from superiors, treatment by managers, and respect from the organisation, are capable of increasing the employee engagement level in the hotel.
The above results encouraged us to explore the factors contributing to low employee engagement. It was found that factors such as recognition and appreciation, feedback from supervisors, employees' opinions, counselling, and treatment by managers contributed negatively to the employee engagement level: many employees gave negative feedback on these factors, suggesting that the structure of the organisation itself is responsible for the low engagement. The scope of this study is limited to the Trident Hotel, Udaipur; the findings are based only on the responses of its respondents, so the recommendations apply to Trident, Udaipur, and not to all similar organisations across the country. The data collected were further analysed, interpreted, and concluded upon, and on the basis of the findings, suggestions were provided to the hotel for improvement.
Keywords: human resource, employee engagement, research, study
Procedia PDF Downloads 308

15 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context
Authors: Anjula Negi, Sanjay Gupta
Abstract:
The paradigms studied reveal inadequacies of financial resources, whether to finance metro rail construction, to meet operational revenues, or to derive profits in the long term. Funding sustainability remains elusive for much-needed public transport modes, like urban rail or metro rails, to be operated successfully. India embarks upon a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India's metro rail policy promotes multiple methods of financing, including private-sector investment and public-private partnership. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, many researchers regard urban rail as the future of mobility. In this study, researchers deep-dive, by way of literature review and empirical assessment, into the factors that can lead to the adoption of LVC mechanisms. The adoption of LVC methods is understood to be in its nascent stages in India. Research posits numerous challenges faced by metro rail agencies in raising funding and capturing incremental value. Issues pertaining to land-based financing include, inter alia, long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, and property development legal frameworks. The question under probe is what parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. Researchers have studied current forms of LVC mechanisms at various metro rails of the country.
This study is significant as little research is available on the adoption of LVC applicable to the Indian context. Transit agencies, state governments, urban local bodies, policymakers and think tanks, academia, developers, funders, researchers, and multilateral agencies may benefit from this research in advancing LVC mechanisms in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review, ratified by experts working in the metro rail arena, was undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in an exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialised experts, representing various types of stakeholders, rated each parameter through a semi-structured questionnaire, with maximum-likelihood extraction applied. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for output descriptives and inferential statistics. Study findings reveal the principal components to be institutional governance framework, spatial planning features, legal frameworks, funding sustainability features, and fiscal policy measures. In particular, funding sustainability features highlight the sub-variables of beneficiaries paying and the use of multiple revenue options as keys to success in LVC adoption. Researchers recommend incorporating these variables at an early stage in design and project structuring for successful adoption of LVC, in turn improving the revenue sustainability of a public transport asset and helping in making informed transport policy decisions.
Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy
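The component-extraction step behind an EFA of expert ratings can be illustrated with a minimal sketch: Likert-style ratings are turned into a correlation matrix whose leading component is found by power iteration. The synthetic ratings, the three-parameter setup, and the use of plain power iteration (rather than maximum-likelihood EFA software) are simplifying assumptions for illustration, not the study's data or method.

```python
# Sketch of principal-component extraction from expert ratings:
# ratings -> Pearson correlation matrix -> dominant eigenpair by power
# iteration. Synthetic data; hypothetical stand-in for a full EFA.
import random

def correlation_matrix(rows):
    """Pearson correlation matrix of the columns of `rows`."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    sds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5 for j in range(p)]
    corr = [[0.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            cov = sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / n
            corr[i][j] = cov / (sds[i] * sds[j])
    return corr

def leading_component(corr, iters=200):
    """Dominant eigenvalue and loadings of `corr` via power iteration."""
    p = len(corr)
    v = [1.0 / p ** 0.5] * p
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(corr[i][j] * v[j] for j in range(p)) for i in range(p))
    return eigval, v

# Hypothetical 1-5 Likert ratings from 43 experts on 3 parameters, where
# two funding-related items co-vary and the third is independent noise.
random.seed(0)
ratings = []
for _ in range(43):
    base = random.randint(1, 5)
    ratings.append([base,
                    min(5, max(1, base + random.choice([-1, 0, 1]))),
                    random.randint(1, 5)])

corr = correlation_matrix(ratings)
eigval, loadings = leading_component(corr)
print("leading eigenvalue: %.2f" % eigval)
```

The two correlated items load together on the leading component, which is the mechanic by which the study's five components (e.g., funding sustainability features) group related parameters.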
Procedia PDF Downloads 81

14 XAI-Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of Remaining Useful Life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a barrier to the critical diagnostic decisions required in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency into the decision-making mechanisms of machine learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and RUL is predicted by a Random Forest model, through steps of data gathering, feature engineering, model training, and evaluation. These critical components' datasets are independently trained and evaluated by the models. Although the predictions served are suitable and their performance metrics reasonably good, such complex models obscure the reasoning behind their predictions and may undermine the confidence of decision-makers or maintenance teams. This is followed, in the second phase, by global explanations using SHAP and local explanations using LIME to bridge the reliability gap in industrial contexts. These tools analyse model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of overall model behaviour and detailed insight into specific predictions. The proposed framework, in its third component, incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation.
This will allow the model not only to predict failures but also to present reasons, from the key sensor features linked to possible failure mechanisms, to relevant personnel. Establishing causality between sensor behaviours and equipment failures creates considerable value for maintenance teams through better root cause identification and more effective preventive measures, and contributes to making the system more explainable. In yet another stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximately represent the complex Random Forest model. These simpler models act as backups, replicating key aspects of the original model's behaviour. If the feature explanations obtained from the surrogate model are cross-validated with the primary model, the derived insights become more reliable and provide an intuitive sense of how the input variables affect the predictions. An iterative explainable feedback loop is then created, in which knowledge learned from the explainability methods feeds back into the training of the models, driving a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through the development of a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms.
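The surrogate-model stage can be sketched in a few lines: an interpretable linear model is fitted to the predictions of a complex "black box" RUL predictor, so that its coefficients can cross-check feature explanations. The black-box function, feature names, and value ranges below are hypothetical stand-ins, not the study's Random Forest or its C-MAPSS features.

```python
# Sketch of a linear surrogate for a black-box RUL predictor: probe the
# black box on a grid of inputs, then fit y ~ b0 + b1*temp + b2*vib by
# ordinary least squares. All functions and ranges are hypothetical.

def black_box_rul(temp, vib):
    """Stand-in for a complex RUL predictor (mildly nonlinear)."""
    return 500.0 - 3.0 * temp - 20.0 * vib + 0.01 * temp * vib

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_linear_surrogate(x1s, x2s, ys):
    """OLS for y ~ b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(ys)
    cols = [[1.0] * n, x1s, x2s]
    ata = [[sum(ci * cj for ci, cj in zip(ca, cb)) for cb in cols] for ca in cols]
    atb = [sum(c * y for c, y in zip(ca, ys)) for ca in cols]
    return solve3(ata, atb)

# Probe the black box over a hypothetical operating grid.
temps = [50 + 5 * i for i in range(11)]      # degrees C
vibs = [1.0 + 0.5 * j for j in range(9)]     # vibration units
x1s, x2s, ys = [], [], []
for t in temps:
    for v in vibs:
        x1s.append(t)
        x2s.append(v)
        ys.append(black_box_rul(t, v))

b0, b1, b2 = fit_linear_surrogate(x1s, x2s, ys)
print("surrogate: RUL ~= %.1f + (%.2f)*temp + (%.2f)*vib" % (b0, b1, b2))
```

The fitted coefficients (strongly negative for both inputs here) are what would be compared against SHAP or LIME attributions from the primary model in the cross-validation step the framework describes.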
Since the system will provide explanations for the predictions given, along with active alerts, maintenance personnel can make informed decisions on the correct interventions to extend the life of critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset
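The Granger-causality step rests on a simple mechanic: series X "Granger-causes" Y when lagged values of X reduce Y's one-step prediction error beyond what lagged Y alone achieves. A minimal sketch of that mechanic, with synthetic series and a hypothetical 0.8 coupling (a full test would also report an F-statistic):

```python
# Minimal lag-regression illustration of the Granger-causality idea:
# compare the prediction error of y from its own lags (restricted model)
# against y from its own lags plus lags of x (unrestricted model).
# Synthetic data; not the study's sensor series.
import random

def ols(rows, ys):
    """Least squares via normal equations + Gauss-Jordan elimination."""
    p = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    m = [ata[i] + [atb[i]] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(p):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][p] / m[i][i] for i in range(p)]

def sse(rows, ys, beta):
    """Sum of squared one-step prediction errors."""
    return sum((y - sum(b * x for b, x in zip(beta, r))) ** 2
               for r, y in zip(rows, ys))

random.seed(1)
n, lags = 400, 2
x = [random.gauss(0, 1) for _ in range(n)]          # e.g., a vibration sensor
y = [0.0, 0.0]                                      # e.g., a degradation index
for k in range(2, n):
    y.append(0.5 * y[k - 1] + 0.8 * x[k - 2] + random.gauss(0, 0.3))

restricted, unrestricted, target = [], [], []
for k in range(lags, n):
    ylags = [1.0] + [y[k - i] for i in range(1, lags + 1)]
    restricted.append(ylags)
    unrestricted.append(ylags + [x[k - i] for i in range(1, lags + 1)])
    target.append(y[k])

sse_r = sse(restricted, target, ols(restricted, target))
sse_u = sse(unrestricted, target, ols(unrestricted, target))
print("SSE without x lags: %.1f, with x lags: %.1f" % (sse_r, sse_u))
```

The large drop in error when x's lags are added is the signal a Granger test formalises; in the framework, such links between sensor features and failure indicators support root cause identification.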
Procedia PDF Downloads 6

13 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses.
Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from four flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was considerably higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach, enhanced with entropy calculation and spectral information, can effectively detect and predict a machine's behavioural changes.
The model can interface with a plant's 'process control system' in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries.
Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
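The two signal descriptors the hybrid model is described as using, entropy and spectral change of sampled sensor data, can be sketched directly. The window length, bin count, and synthetic signals below are hypothetical choices, not the study's configuration.

```python
# Sketch of two sensor-window descriptors: Shannon entropy of binned
# values, and a spectral-change score between windows (direct DFT,
# fine for short windows). Synthetic signals; illustrative only.
import math, cmath

def shannon_entropy(window, bins=8):
    """Shannon entropy (bits) of a histogram of the window's values."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0          # constant window -> one bin
    counts = [0] * bins
    for v in window:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def magnitude_spectrum(window):
    """Magnitudes of a direct DFT, up to the Nyquist bin."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]

def spectral_change(w1, w2):
    """Euclidean distance between the magnitude spectra of two windows."""
    return sum((a - b) ** 2
               for a, b in zip(magnitude_spectrum(w1),
                               magnitude_spectrum(w2))) ** 0.5

n = 64
steady = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]    # 4 cycles
shifted = [math.sin(2 * math.pi * 12 * t / n) for t in range(n)]  # 12 cycles
flat = [1.0] * n

print("entropy steady: %.2f, flat: %.2f"
      % (shannon_entropy(steady), shannon_entropy(flat)))
print("spectral change steady->shifted: %.3f"
      % spectral_change(steady, shifted))
```

A jump in either descriptor between consecutive windows is the kind of subtle behavioural change that plain sampled values can hide, which is the motivation given for enriching the GNN's inputs.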
Procedia PDF Downloads 150

12 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi-Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers, mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognise that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the East Catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analysing, investigating, and developing a relationship between arid watershed characteristics (including urbanisation) and flood flow frequency in the nearby villages of Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are carried out. The study area corresponds to the East Basin (Ras Baalbeck & Fekha), comprising nearly 350 km² and situated in the Bekaa Valley of Lebanon.
The case study presented in this paper draws on a database derived from Lebanese Army topographic maps of the region. ArcMap was used to digitise the contour lines, streams, and other features from the topographic maps, and the digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Aarsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and SCS methods were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds; the strongest previously reported flood lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungauged watershed: for some storm events it delivers good results, while for others no parameter vectors can be found. In order to derive a general methodology based on these ideas, further calibration and reconciliation of results across many flood event parameters and catchment properties are required.
Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps
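The SCS method named above relates storm rainfall to direct runoff through a curve number. A minimal sketch of the standard SCS-CN equations in millimetres follows; the CN values are illustrative, not values calibrated for Ras Baalbeck or Fekha.

```python
# Minimal sketch of the SCS curve-number (SCS-CN) rainfall-runoff
# relation underlying the HEC-HMS hydrologic model described above.
# Units are millimetres; CN values below are illustrative only.

def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q for storm rainfall P and curve number CN.

    S  = 25400/CN - 254              (potential maximum retention, mm)
    Ia = ia_ratio * S                (initial abstraction, commonly 0.2*S)
    Q  = (P - Ia)^2 / (P - Ia + S)   if P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative semi-arid scenario: a 60 mm storm over soils of varying CN.
for cn in (70, 85, 95):
    print("CN=%d -> Q=%.1f mm" % (cn, scs_runoff(60.0, cn)))
```

Higher curve numbers (thin soils, sparse cover, urbanisation) convert more of the storm into runoff, which is the mechanism linking the watershed characteristics studied here to flash-flood flow frequency.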
11 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model of a three-bladed horizontal-axis wind turbine (HAWT) suited for control applications. It is based on an 8-DOF, lumped-parameter structural dynamics model coupled with quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and subsequently validate, this model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restrict that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m and mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. 
In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation, and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s), usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures; in fact, using the phase shift, the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s), in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller. In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). 
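The structure described here — a period-long delay line in positive feedback, smoothed by a low-pass filter q and offset by a phase shift of k samples — can be sketched in discrete time as follows. The class interface, gains, filter pole, and period length are illustrative assumptions, not the authors' implementation:

```python
from collections import deque

class RepetitiveController:
    """Discrete-time sketch of a repetitive controller with phase shift.

    buf[0] holds u[n-N] (one full period ago); reading buf[k_shift] instead
    gives an effective delay of N - k_shift samples, mimicking e^(-(T - gamma_k)s).
    """

    def __init__(self, period_samples: int, k_shift: int = 0,
                 q_pole: float = 0.5, k_rc: float = 0.5):
        self.n = period_samples
        self.k_shift = k_shift
        self.q_pole = q_pole    # first-order low-pass pole, added for stability
        self.k_rc = k_rc        # repetitive-learning gain (illustrative)
        self.buf = deque([0.0] * period_samples, maxlen=period_samples)
        self.lp = 0.0           # low-pass filter state

    def step(self, error: float) -> float:
        # phase-shifted, period-delayed feedback term
        delayed = self.buf[self.k_shift % self.n]
        # low-pass filter q(s) applied to the fed-back signal
        self.lp = self.q_pole * self.lp + (1.0 - self.q_pole) * delayed
        u = self.lp + self.k_rc * error
        self.buf.append(u)      # store for use one period later
        return u
```

For a persistent periodic error, the fed-back term accumulates period over period, which is the mechanism that rejects a repeating disturbance; a PD block C(s) would be summed with this output in the closed loop.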
Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type. Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems
10 Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19
Authors: Lan Cheng, Harry Qin, Yang Wang
Abstract:
Background: COVID-19 has induced the largest digital learning experiment in history. There is also emerging research evidence that students have paid a high cost of learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly utilized for digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (problem-based learning, community and belonging, technology skills, personalization of learning, and connection to the external community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology that contains a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting an assessment to reveal the virtual learning mechanism and establish strategies to facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities under the STEM learning context, which is being transformed by both digital technology and student behavior change, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate the loss of learning, enhance the learning experience, and scale up inclusive participation. 
Methods: This study applied a case study strategy and a methodology informed by social network analysis theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of reinforcement learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytic tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students’ confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity and empowerment, (2) the co-creation learning phenomenon, examined through the lens of the campaign, reveals that an interactive virtual learning environment helps students navigate scientific challenges independently and collaboratively, and (3) the deliverables of the STEM educational campaign provide a methodological framework both within the context of curriculum design and in external community engagement applications. Conclusion: This study brings a holistic and coherent pedagogy to cultivate students’ interest in STEM and to develop a knowledge base they can integrate and apply across different STEM disciplines. Through the co-design of cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice, while also piloting and tracking the scholastic value of co-creation in a dynamic learning environment. The data nested under the knowledge transfer technology situate learners’ scientific journeys and could pave the way for theoretical advancement and broader scientific endeavours within larger datasets, projects, and communities. Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis
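Learning-network mapping of the kind invoked in this abstract is often operationalized with basic social-network-analysis measures. A minimal sketch using degree centrality follows; the participant names and interaction edges are invented for illustration and do not come from the study:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Fraction of other nodes each node is directly connected to."""
    adj = defaultdict(set)
    for a, b in edges:          # treat interactions as undirected ties
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# hypothetical co-creation interactions among students, peers, and educators
interactions = [("student_A", "peer_B"), ("student_A", "educator_C"),
                ("peer_B", "educator_C"), ("student_A", "student_D")]
central = degree_centrality(interactions)
```

Ranking participants by such a score is one simple way a learning analytic tool could flag isolated learners (low centrality) for targeted support.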
9 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District
Authors: Veenapani Rajeev Verma
Abstract:
Context: Assessing facility readiness is paramount, as it can indicate the capacity of facilities to provide essential care and their resilience to health challenges. In the context of decentralization, estimating supply-side readiness indices at the subnational level is imperative for effective evidence-based policy, but it remains a colossal challenge due to the lack of dependable and representative data sources. Setting: The district of Poonch, Jammu and Kashmir, was selected for this study. It is a remote, rural district with unprecedented topographical barriers and is identified as high priority by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes, and sporadic militant attacks. Hilly geographical terrain, a rudimentary or absent road network, and impoverishment are quintessential to this area. Objectives: The objectives of the study are to (a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators, and (b) ascertain supply-side barriers in service provisioning via stakeholder analysis. The study also strives to expand the analytical domain by unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-methods approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 subcentres, 44 primary health centres, 3 community health centres, and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium checklist was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory capacity, and infection control protocols, as proposed in WHO’s Service Availability and Readiness Assessment (SARA). 
A two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators, and OLS regression was used to determine the factors explaining the composite index generated from the PCA. Stakeholder analysis was conducted to discern qualitative information: a myriad of techniques, such as observations, key informant interviews, and focus group discussions using semi-structured questionnaires, was administered to both leaders and laggards for a critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for subcentres and PHCs (the first point of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, a walk of more than 1 hour to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. A nonchalant staff attitude, unavailability of staff quarters, leakages, and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting. Complex dimensions such as geographic barriers and user and provider behavior are not within the purview of this methodology. Keywords: effective coverage, principal component analysis, readiness index, universal health coverage
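The index construction described in this abstract — weighting tracer indicators by their loadings on the first principal component, then scaling the composite to [0, 1] — can be sketched as follows. Note that this toy version runs an ordinary PCA on synthetic random data, whereas the study used a two-stage *polychoric* PCA suited to ordinal indicators; the dimensions and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
indicators = rng.random((90, 5))   # 90 facilities x 5 tracer indicators (synthetic)

# standardize, then take loadings of the leading principal component as weights
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: eigenvalues in ascending order
weights = np.abs(eigvecs[:, -1])           # leading component's loadings
weights /= weights.sum()                   # normalize weights to sum to 1

raw = z @ weights                          # composite score per facility
index = (raw - raw.min()) / (raw.max() - raw.min())  # readiness index in [0, 1]
```

A second-stage OLS regression of `index` on facility covariates (staffing, travel time, shelling-zone dummies) would then mirror the explanatory analysis reported.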
8 Mapping the Neurotoxic Effects of Sub-Toxic Manganese Exposure: Behavioral Outcomes, Imaging Biomarkers, and Dopaminergic System Alterations
Authors: Katie M. Clark, Adriana A. Tienda, Krista C. Paffenroth, Lindsey N. Brigante, Daniel C. Colvin, Jose Maldonado, Erin S. Calipari, Fiona E. Harrison
Abstract:
Manganese (Mn) is an essential trace element required for human health; it is important in antioxidant defenses as well as in the development and function of dopaminergic neurons. However, chronic low-level Mn exposure, such as through contaminated drinking water, poses risks that may contribute to neurodevelopmental and neurodegenerative conditions, including attention deficit hyperactivity disorder (ADHD). Pharmacological inhibition of the dopamine transporter (DAT) blocks reuptake, elevates synaptic dopamine, and alleviates ADHD symptoms. This study aimed to determine whether Mn exposure in juvenile mice modifies their response to the DAT blockers amphetamine and methylphenidate, and to utilize neuroimaging methods to visualize and quantify Mn distribution across dopaminergic brain regions. Male and female heterozygous DATᵀ³⁵⁶ᴹ mice and wild-type littermates were randomly assigned to receive control (2.5% stevia) or high-manganese (2.5 mg/ml Mn + 2.5% stevia) water ad libitum from weaning (21-28 days) for 4-5 weeks. Mice underwent repeated testing in locomotor activity chambers for three consecutive days (60 min) to ensure that they were fully habituated to the environment. On the fourth day, a 3-hour activity session was conducted following treatment with amphetamine (3 mg/kg) or methylphenidate (5 mg/kg). The second drug was administered in a second 3-hour activity session following a 1-week washout period. Following the washout, the mice were given one last injection of amphetamine and euthanized one hour later. Using the ex-vivo brains, magnetic resonance relaxometry (MRR) was performed on a 7-Tesla imaging system to map T1- and T2-weighted (T1W, T2W) relaxation times. Mn's inherent paramagnetic properties shorten both T1W and T2W times, which enhances signal intensity and contrast, enabling effective visualization of Mn accumulation across the entire brain. A subset of mice was treated with amphetamine 1 hour before euthanasia. 
SmartSPIM light sheet microscopy on cleared whole brains, with c-Fos and tyrosine hydroxylase (TH) labeling, enabled unbiased automated counting and densitometric analysis of TH- and c-Fos-positive cells. Immunohistochemistry was conducted to measure synaptic protein markers and quantify changes in neurotransmitter regulation. Mn exposure elevated brain Mn levels and potentiated stimulant effects in males. The globus pallidus, substantia nigra, thalamus, and striatum exhibited more pronounced T1W shortening, indicating regional susceptibility to Mn accumulation (p<0.0001, 2-way ANOVA). In the cleared whole brains, initial analyses suggest that TH and c-Fos co-staining mirrors the behavioral data, with decreased co-staining in DATᵀ³⁵⁶ᴹ⁺/⁻ mice. Ongoing studies will identify the molecular basis of the effect of Mn, including changes to DAergic metabolism and transport and post-translational modifications to the DAT. These findings demonstrate that alterations in T1W relaxation times, as measured by MRR, may serve as an early biomarker for Mn neurotoxicity. This neuroimaging approach exhibits remarkable accuracy in identifying Mn-susceptible brain regions, with a spatial resolution and sensitivity that surpass current conventional dissection and mass spectrometry approaches. The capability to label and map TH and c-Fos expression across the entire brain provides insights into whole-brain neuronal activation and its connections to functional neural circuits and behavior following amphetamine and methylphenidate administration. Keywords: manganese, environmental toxicology, dopamine dysfunction, biomarkers, drinking water, light sheet microscopy, magnetic resonance relaxometry (MRR)
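The T1 shortening reported here is typically quantified by fitting a relaxation model to signals acquired at multiple inversion times. A minimal sketch with a brute-force least-squares fit on synthetic, noiseless inversion-recovery data follows; the T1 value, inversion times, and search grid are illustrative assumptions, not parameters from the study's MRR pipeline:

```python
import math

def ir_signal(ti_ms: float, t1_ms: float, s0: float = 1.0) -> float:
    """Idealized inversion-recovery signal: S(TI) = S0 * (1 - 2*exp(-TI/T1))."""
    return s0 * (1.0 - 2.0 * math.exp(-ti_ms / t1_ms))

true_t1 = 900.0                              # ms; Mn accumulation shortens T1
tis = [50, 150, 400, 800, 1600, 3200]        # inversion times (ms), illustrative
signal = [ir_signal(ti, true_t1) for ti in tis]

# brute-force grid search for the T1 minimizing the sum of squared residuals
best_t1 = min(
    range(100, 3001, 10),
    key=lambda t1: sum((ir_signal(ti, t1) - s) ** 2 for ti, s in zip(tis, signal)),
)
```

A voxel-wise map of fitted T1 values is what allows regions such as the globus pallidus to be flagged as Mn-susceptible when their T1 falls below control values.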
7 Design and Construction of a Solar Dehydration System as a Technological Strategy for Food Sustainability in Difficult-to-Access Territories
Authors: Erika T. Fajardo-Ariza, Luis A. Castillo-Sanabria, Andrea Nieto-Veloza, Carlos M. Zuluaga-Domínguez
Abstract:
The growing emphasis on sustainable food production and preservation has driven the development of innovative solutions to minimize postharvest losses and improve market access for small-scale farmers. This project focuses on designing, constructing, and selecting materials for solar dryers in certain regions of Colombia where inadequate infrastructure limits access to major commercial hubs. Postharvest losses pose a significant challenge, impacting food security and farmer income; addressing these losses is crucial for enhancing the value of agricultural products and supporting local economies. A comprehensive survey of local farmers revealed substantial challenges, including limited market access, inefficient transportation, and significant postharvest losses. For crops such as coffee, bananas, and citrus fruits, losses range from 0% to 50%, driven by factors such as labor shortages, adverse climatic conditions, and transportation difficulties. To address these issues, the project prioritized selecting effective materials for the solar dryer. Various materials (recovered acrylic, original acrylic, glass, and polystyrene) were tested for their performance; the tests showed that recovered acrylic and glass were the most effective in increasing the temperature difference between the interior and the external environment. The solar dryer was designed using Fusion 360® software (Autodesk, USA) and adhered to architectural guidelines from Architectural Graphic Standards. It features up to sixteen aluminum trays, each with a maximum load capacity of 3.5 kg, arranged in two levels to optimize drying efficiency. The constructed dryer was then tested with two locally available plant materials: green plantains (Musa paradisiaca L.) and snack bananas (Musa AA Simonds). To monitor performance, thermo-hygrometers and an Arduino system recorded internal and external temperature and humidity at one-minute intervals. 
Despite challenges such as adverse weather conditions and delays in local government funding, the active involvement of local producers was a significant advantage, fostering ownership and understanding of the project. The solar dryer operated under conditions of 31°C dry-bulb temperature (Tbs), 55% relative humidity, and 21°C wet-bulb temperature (Tbh). The drying curves showed a consistent drying period, with critical moisture content observed between 200 and 300 minutes, followed by a sharp decrease in moisture loss, reaching an equilibrium point after 3,400 minutes. Although the solar dryer requires more time and is highly dependent on atmospheric conditions, it can approach the efficiency of an electric dryer when properly optimized. The successful design and construction of solar dryer systems in difficult-to-access areas represent a significant advancement in agricultural sustainability and postharvest loss reduction. By choosing effective materials such as recovered acrylic and implementing a carefully planned design, the project provides a valuable tool for local farmers. The initiative not only improves the quality and marketability of agricultural products but also offers broader environmental benefits, such as reduced reliance on fossil fuels and decreased waste. Additionally, it supports economic growth by enhancing the value of crops and potentially increasing farmer income. The successful implementation and testing of the dryer, combined with the engagement of local stakeholders, highlight its potential for replication and positive impact in similar contexts. Keywords: drying technology, postharvest loss reduction, solar dryers, sustainable agriculture
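Drying curves like those described are commonly summarized with the single-parameter Lewis (Newton) thin-layer model. The sketch below uses an assumed drying constant k chosen so the curve approaches equilibrium near the reported 3,400 minutes; it is not a value fitted to the plantain or banana data:

```python
import math

def moisture_ratio(t_min: float, k_per_min: float) -> float:
    """Lewis model: MR = (M - Me) / (M0 - Me) = exp(-k * t)."""
    return math.exp(-k_per_min * t_min)

k = 0.0015   # 1/min, illustrative drying constant
# moisture ratio at the start, around the reported critical period, and near equilibrium
checkpoints = {t: moisture_ratio(t, k) for t in (0, 200, 300, 3400)}
```

Fitting k to logged thermo-hygrometer and mass data (e.g., by linear regression of ln(MR) against time) would let the same model compare the solar dryer against an electric reference dryer.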
6 Blue Economy and Marine Mining
Authors: Fani Sakellariadou
Abstract:
The Blue Economy includes all marine-based and marine-related activities, corresponding to established, emerging, and as-yet-unborn ocean-based industries. Seabed mining is an emerging marine-based activity whose operations depend particularly on cutting-edge science and technology. The 21st century will face a crisis in resources as a consequence of the world’s population growth and the rising standard of living. The natural capital stored in the global ocean is decisive for its ability to provide a wide range of sustainable ecosystem services. Seabed mineral deposits have been identified as having high potential for critical elements and base metals, which have a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within the continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors, covering polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid-Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, release of toxic compounds, light and noise generation, and air emissions. 
They could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthesis, behavioral change and masking of acoustic communication in mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences that will be caused in the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as development spaces supplying socio-economic benefits for current and future generations while also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, people should apply holistic management and assess the impacts of marine mining on ecosystem services, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries cause the impact of marine mining on the ability of ecosystems to support people and nature to vary widely. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable, and eco-friendly approaches according to the principles of the circular economy, a substantial amount of natural resources can be saved. Acknowledgement: This work is part of the MAREE project, financially supported by Division VI of IUPAC. 
This work has been partly supported by the University of Piraeus Research Center. Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts
5 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science concerns the understanding and application of various theories and practices in open science philosophy, systems, paradigms, and epistemology. Open Science originates with the premise that universal scientific knowledge is the product of a collective scholarly and social collaboration involving all stakeholders, and that knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact, and benefits of science and to accelerate the advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society, and more responsive to societal challenges. It also has the potential to enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society, and ultimately to contribute to the growth and competitiveness of the global society. Open Science is a global movement to improve the accessibility and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes, and other research processes are freely available, under terms that enable reuse, redistribution, and reproduction of the research and its underlying data and methods. 
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage of the research process, based on cooperative work and the diffusion of scholarly knowledge with no barriers or restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. It is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how research is carried out, disseminated, deployed, and transformed to make scholarly work more open, global, collaborative, creative, and closer to society. It involves various movements aiming to remove the barriers to sharing any kind of output, resource, method, or tool at any stage of the research process, embracing open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including policies that increase open access to scientific literature and encourage data and code sharing, is increasing. Open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access digital research literature, the accumulation of open research data, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices. 
Researchers use open science applications to their own advantage in order to receive more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations, and funding. In the open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical realities such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions. Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
4 Regenerative Agriculture Standing at the Intersection of Design, Mycology, and Soil Fertility
Authors: Andrew Gennett
Abstract:
Designing for fungal development means embracing the symbiotic relationship between the living system and the built environment. The potential of mycelium post-colonization is explored for the fabrication of advanced pure-mycelium products, going beyond the conventional methods of aggregating materials. Fruiting induction imparts desired material properties such as enhanced environmental resistance, and this production approach allows for the simultaneous generation of multiple products while scaling up the raw material supply to quantities suitable for architectural applications. The following work explores the integration of fungal environmental perception with the computational design of built fruiting chambers. Polyporales are classified by their porous reproductive tissues, supported by a wood-like context tissue covered by a hard waterproofing coat of hydrophobins. Persisting for years in the wild, these species exhibit material properties that would be highly desirable in moving beyond the flat sheets of aerial mycelium used in leather or bacon applications. Understanding the inherent environmental perception of fungi has become the basis for working with, and inducing, desired hyphal differentiation. Working within the native signal interpretation of a mycelium mass during fruiting induction provides the means to apply textures and color to the final finishing coat. The delicate interplay between meeting human-centered goals and designing around the natural processes of living systems represents a blend of art and science. Architecturally, physical simulations inform the model design of simple modular fruiting chambers that change as fungal growth progresses, while biological life-science principles describe the internal computations occurring within the fungal hyphae. First, a form-filling phase of growth is controlled by the growth chamber environment. Second, an initiation phase of growth forms the final exterior finishing texture. 
Hyphal densification induces cellular cascades, in turn producing the classical hardened cuticle, UV-protective molecule production, as well as a waterproofing finish. Upon completion of the fruiting process, the fully colonized spent substrate holds considerable value and is not considered waste; instead, it becomes a valuable resource in the next cycle of production scale-up. However, the acquisition of new substrate resources poses a critical question, particularly as these resources become increasingly scarce. From the environmental perspective of a regenerative design paradigm, the use of 'agricultural waste' for architectural materials would be a continuation of the destructive practices established by the previous industrial regime, for these residues from fields and forests serve a vital ecological role: protecting the soil surface, combating erosion, reducing evaporation, and fostering a biologically diverse food web. Instead, urban centers have been identified as abundant sources of new substrate material. Diverting waste from secondary locations such as food processing centers, paper mills, and recycling facilities not only reduces landfill burden but also leverages the latent value of these waste streams as precious resources for mycelium cultivation. In conclusion, working with living systems through innovative built environments for fungal development provides the needed gain of function and resilience of mycelium products. The next generation of sustainable fungal products will go beyond the current binding process, with a focus on reducing the landfill burden of urban centers. In final consideration, biophilic material production builds toward an ecologically regenerative recycling production cycle. Keywords: regenerative agriculture, mycelium fabrication, growth chamber design, sustainable resource acquisition, fungal morphogenesis, soil fertility
Procedia PDF Downloads 663
The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education
Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim
Abstract:
Undergraduate medical education (UME) curriculum notably addresses micro-level communications (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examines the role and impact of organizational communication, a more macro-level concern. Organizational communication, however, functions as a foundation and operates through the systemic structures of an organization; it thereby serves as a hidden curriculum and influences learning experiences and outcomes. Yet little research exists that fully examines how students experience organizational communication while in medical school. Extant literature and best practices provide insufficient guidance for UME programs in particular. The purpose of this study was to map and examine current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. This research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of focus group questions, the other requesting students to submit ‘examples’ of communications). To provide context for student responses, nonstudent participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 nonstudents participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal program-impactful strengths, weaknesses, gaps, and tensions, and speak to the role of organizational communication practices in influencing both climate and culture. 
With regard to communications, students receive multiple, simultaneous communications from multiple sources and channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths, including the desire to improve student voice and message frequency. They also identified weaknesses related to over-reliance on emails, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student ‘voice-only’, open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, they identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently ‘close the loop’, regularly sharing how student input effects change, employing strategies of crisis communication more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills or of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders. 
As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiries into how students navigate their educational experience extend research-based knowledge and provide actionable knowledge for the improvement of organizational operations in UME. Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods
Procedia PDF Downloads 1152
Enhancing Disaster Resilience: Advanced Natural Hazard Assessment and Monitoring
Authors: Mariza Kaskara, Stella Girtsou, Maria Prodromou, Alexia Tsouni, Christodoulos Mettas, Stavroula Alatza, Kyriaki Fotiou, Marios Tzouvaras, Charalampos Kontoes, Diofantos Hadjimitsis
Abstract:
Natural hazard assessment and monitoring are crucial in managing the risks associated with fires, floods, and geohazards, particularly in regions prone to these natural disasters, such as Greece and Cyprus. Recent advancements in technology, developed by the BEYOND Center of Excellence of the National Observatory of Athens, have been successfully applied in Greece and are now set to be transferred to Cyprus. The implementation of these advanced technologies in Greece has significantly improved the country's ability to respond to these natural hazards. For wildfire risk assessment, a scalar wildfire occurrence risk index is created based on the predictions of machine learning models. Predicting fire danger is crucial for the sustainable management of forest fires, as it provides essential information for designing effective prevention measures and facilitating response planning for potential fire incidents. A reliable forecast of fire danger is a key component of integrated forest fire management and is heavily influenced by various factors that affect fire ignition and spread. The fire risk model is validated using sensitivity and specificity metrics. For flood risk assessment, a multi-faceted approach is employed, including the application of remote sensing techniques, the collection and processing of data from the most recent population and building census, technical studies and field visits, as well as hydrological and hydraulic simulations. All input data are used to create precise flood hazard maps according to various flooding scenarios, together with detailed flood vulnerability and flood exposure maps, which finally produce the flood risk map. Critical points are identified, and mitigation measures are proposed for the worst-case scenario; namely, refuge areas are defined, and escape routes are designed. Flood risk maps can assist in raising awareness and saving lives. 
Validation is carried out through historical flood events using remote sensing data and records from the civil protection authorities. For geohazard monitoring (e.g., landslides, subsidence), Synthetic Aperture Radar (SAR) and optical satellite imagery are combined with geomorphological and meteorological data and other landslide/ground deformation contributing factors. To monitor critical infrastructure, including dams, advanced InSAR methodologies are used to identify surface movements through time. Monitoring these hazards provides valuable information for understanding processes and could lead to early warning systems to protect people and infrastructure. Validation is carried out through both geotechnical expert evaluations and visual inspections. The success of these systems in Greece has paved the way for their transfer to Cyprus to enhance Cyprus's capabilities in natural hazard assessment and monitoring. This transfer is being made through capacity-building activities, fostering continuous collaboration between Greek and Cypriot experts. Apart from the knowledge transfer, small demonstration actions are implemented to showcase the effectiveness of these technologies in real-world scenarios. In conclusion, the transfer of advanced natural hazard assessment technologies from Greece to Cyprus represents a significant step forward in enhancing the region's resilience to disasters. The EXCELSIOR project funds knowledge exchange, demonstration actions, and capacity-building activities, and is committed to empowering Cyprus with the tools and expertise to effectively manage and mitigate the risks associated with these natural hazards. Acknowledgement: The authors acknowledge the 'EXCELSIOR' (ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment) H2020 Widespread Teaming project. Keywords: earth observation, monitoring, natural hazards, remote sensing
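The abstract above validates its fire risk model with sensitivity and specificity. As a minimal sketch of what that validation step involves (this is illustrative only, not the BEYOND implementation; the labels and predictions below are hypothetical):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    for binary labels: 1 = fire occurred, 0 = no fire."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Hypothetical validation set of daily fire/no-fire observations vs. model output.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Reporting both metrics matters for rare events such as wildfire ignitions, where a model could score high accuracy by never predicting fire at all.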
Procedia PDF Downloads 381
Recent Developments in E-waste Management in India
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya
Abstract:
This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations utilize India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets to address poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveals that e-waste generation surpasses that of plastic waste, increasing annually at a rate of 31%. However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations. In Europe, efficient e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. 
The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to Africa, including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste represents both hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation. The United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion Euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, lack of incentive schemes, limited awareness, e-waste imports, high costs associated with recycling plant establishment, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. One proposed mechanism, buy-back programs, would compensate customers in cash when they deposit unwanted digital products; such e-waste can include any portable electronic device, such as cell phones, computers, and tablets. 
Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management. Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy
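The 31% annual growth rate cited from CPCB data compounds quickly. A small illustration of that arithmetic (the base-year volume below is hypothetical, chosen only to show the compounding):

```python
def project_growth(base, rate, years):
    """Project a quantity growing at a fixed annual rate (compound growth)."""
    return base * (1 + rate) ** years

# Hypothetical base-year e-waste volume of 1000 kilotonnes, growing 31%/year.
base_kt = 1000.0
for year in range(1, 4):
    print(f"year {year}: {project_growth(base_kt, 0.31, year):.0f} kt")
```

At this rate the volume roughly doubles every two and a half years, which underlines why the abstract treats recycling capacity (only 20% handled through regulated channels) as the binding constraint.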
Procedia PDF Downloads 85