Search results for: 3D models

59 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient’s probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient’s vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as variable selection becomes difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient’s vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) the agitation-sedation profiles of 37 ICU patients collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
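
As a minimal sketch of the dependence-modelling idea described above, the following Python snippet fits a Gaussian copula to two synthetic, skewed vital signs and derives a joint-risk probability from the estimated dependence parameter. The variable names, sample size, marginals, and thresholds are illustrative assumptions, not the study's data or final model.

```python
# Hedged sketch: not the authors' model. Illustrates how a Gaussian copula can
# capture dependence between two skewed vital signs, and how the fitted
# dependence parameter could feed a joint-risk probability.
import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

rng = np.random.default_rng(0)
n = 250                                     # e.g., one ICU cohort size
sbp = rng.gamma(9.0, 12.0, n)               # skewed systolic BP surrogate (mmHg)
hr = 60 + 0.4 * (sbp - sbp.mean()) + rng.normal(0, 8, n)   # correlated heart rate

# Pseudo-observations: map each margin to (0, 1) by ranks, avoiding any
# parametric assumption about the skewed marginal distributions.
u = rankdata(sbp) / (n + 1)
v = rankdata(hr) / (n + 1)

# Gaussian copula: the dependence parameter is the correlation of normal scores.
z = norm.ppf(np.column_stack([u, v]))
rho = np.corrcoef(z.T)[0, 1]

# Joint probability that both vital signs sit above their own 90th percentile,
# i.e., the co-occurrence risk an additive score would miss.
q = norm.ppf(0.9)
joint_cdf = multivariate_normal(mean=[0, 0], cov=[[1.0, rho], [rho, 1.0]]).cdf([q, q])
p_both_high = 1.0 - 2 * 0.9 + joint_cdf     # P(U>0.9, V>0.9) by inclusion-exclusion
print(f"rho = {rho:.2f}, P(both vitals in top decile) = {p_both_high:.3f}")
```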

Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 152
58 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Integrating photovoltaic (PV) panels as kinetic elements offers the potential benefits of increased energy generation and regulation of energy flow within buildings. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
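
The abstract mentions using an LLM to generate Python code for a random surface with user-defined parameters. The snippet below is a hedged stand-in for that kind of script: a pure-NumPy generator of a control point grid for a wavy facade patch. The function name, parameters, and default values are hypothetical; inside Grasshopper, equivalent logic would run as a GhPython component.

```python
# Hedged sketch of the parametric task delegated to an LLM: generate a random
# surface as a grid of XYZ control points with user-defined parameters.
import numpy as np

def random_surface(nu=10, nv=8, width=6.0, height=4.0, amplitude=0.35, seed=42):
    """Return an (nu, nv, 3) grid of XYZ control points for a wavy facade patch."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, width, nu)
    y = np.linspace(0.0, height, nv)
    xx, yy = np.meshgrid(x, y, indexing="ij")
    # Smooth base undulation plus bounded random relief, so panels stay buildable.
    zz = 0.2 * np.sin(2 * np.pi * xx / width) * np.cos(np.pi * yy / height)
    zz += amplitude * rng.uniform(-1.0, 1.0, size=zz.shape)
    return np.stack([xx, yy, zz], axis=-1)

pts = random_surface()
print(pts.shape, float(np.ptp(pts[..., 2])))   # grid size and total relief depth
```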

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 32
57 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace and army industries have paid increasing attention to Finite Element (FE) numerical simulations of the fracture process of their structures. Numerical simulation now makes it possible to analyze safely, and at reduced cost, several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact...). In this work, the investigations concern the normal impact of a conical-head projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in simulations of the fracture process of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale, and according to the following fracture models, the variables on which fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model have been identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study under a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three temperatures ranging from 297 K to 500 K, making it possible to assess the influence of temperature on the fracture process. Intermediate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, making it possible to capture the effect of high strain rate on damage evolution and crack propagation. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, are detailed in this work.
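
For concreteness, a minimal sketch of one of the four criteria follows: the Johnson-Cook fracture strain as a function of stress triaxiality η, strain rate, and temperature, in the form typically evaluated inside a VUMAT-style damage update. The D1-D5 constants, reference strain rate, and melting temperature below are placeholder values, not the calibrated parameters reported for the three armor steels.

```python
# Hedged sketch of the Johnson-Cook failure criterion; constants are placeholders.
import math

def johnson_cook_failure_strain(eta, eps_rate, T, D=(0.05, 3.44, -2.12, 0.002, 0.61),
                                eps0=1.0, T_room=297.0, T_melt=1800.0):
    """Equivalent plastic strain at failure for stress triaxiality eta,
    plastic strain rate (1/s) and temperature (K)."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    triax = D1 + D2 * math.exp(D3 * eta)               # triaxiality term
    rate = 1.0 + D4 * math.log(max(eps_rate / eps0, 1e-12))
    thermal = 1.0 + D5 * max(T_star, 0.0)
    return triax * rate * thermal

# Damage accumulates as sum(d_eps_p / eps_f); the element fails when it reaches 1.
print(johnson_cook_failure_strain(eta=1.0 / 3.0, eps_rate=1e3, T=400.0))
```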

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 313
56 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information within this data challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative design (a questionnaire survey). Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers report strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors.
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
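
A minimal sketch of the analysis pipeline described above (per-tier Pearson correlations plus a one-way ANOVA across tiers) is shown below. The column names, Likert-style scores, and synthetic data are hypothetical stand-ins for the questionnaire items.

```python
# Hedged sketch of the survey analysis: per-tier correlations and one-way ANOVA.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
tiers = ["OEM", "Tier 0.5", "Tier 1", "Tier 2"]
df = pd.DataFrame({
    "tier": rng.choice(tiers, 200),
    "distortion": rng.normal(3.0, 1.0, 200),          # 5-point Likert-style score
})
df["accuracy"] = 0.4 * df["distortion"] + rng.normal(0, 1, 200)
df["timeliness"] = -0.2 * df["distortion"] + rng.normal(0, 1, 200)

# Per-tier Pearson correlation between distortion and one quality dimension.
for tier, g in df.groupby("tier"):
    r, p = stats.pearsonr(g["distortion"], g["accuracy"])
    print(f"{tier:9s} distortion vs accuracy: r={r:+.2f} (p={p:.3f})")

# One-way ANOVA: does perceived distortion differ across supply chain tiers?
groups = [g["distortion"].to_numpy() for _, g in df.groupby("tier")]
print("ANOVA:", stats.f_oneway(*groups))
```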

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 64
55 A Multimodal Discourse Analysis of Gender Representation on Health and Fitness Magazine Cover Pages

Authors: Nashwa Elyamany

Abstract:

In visual cultures, namely that of the United States, media representations are such influential and pervasive reflections of societal norms and expectations that they impact the manner in which both genders view themselves. Health and fitness magazines fall within the realm of visual culture. Since the main goal of communication is to ensure proper dissemination of information so that the target audience grasps the intended messages, it becomes imperative that magazine publishers, editors, advertisers and image producers use the different modes of communication within their reach to convey messages to their readers and viewers. A rapidly growing flow of multimodality floods popular discourse, particularly health and fitness magazine cover pages. The use of well-crafted cover lines and visual images is imbued with agendas, consumerist ideologies and properties capable of effectively conveying implicit and explicit meaning to potential readers and viewers. In essence, the primary goal of this thesis is to interrogate the multi-semiotic operations and manifestations of hegemonic masculinity and femininity in male and female body culture, particularly on the cover pages of the twin American magazines Men's Health and Women's Health, using corpora spanning from 2011 to mid-2016. The researcher explores the semiotic resources that contribute to shaping and legitimizing a new form of postmodern, consumerist, gendered discourse that positions the reader-viewer ideologically. Methodologically, the researcher carries out analysis on the macro and micro levels. On the macro level, the researcher takes a critical stance to illuminate the ideological nature of the multimodal ensemble of the cover pages, and, on the micro level, seeks to put forward new theoretical and methodological routes through which the semiotic choices invested in the media texts can be more objectively scrutinized. On the macro level, a 'themes' analysis is initially conducted to isolate the overarching themes that dominate the fitness discourse on the cover pages under study. It is argued that variation in the frequencies of such themes indicates, broadly speaking, which facets of hegemonic masculinity and femininity are infused in the fitness discourse on the cover pages. On the micro level, this research work encompasses three sub-levels of analysis. The researcher follows an SF-MMDA approach, drawing on a trio of analytical frameworks: Halliday's SFG for the verbal analysis; Kress & van Leeuwen's VG for the visual analysis; and CMT in relation to Sperber & Wilson's RT for the pragma-cognitive analysis of multimodal metaphors and metonymies. The data is presented in detailed descriptions in conjunction with frequency tables, ANOVA with alpha = 0.05, and MANOVA in the multiple phases of analysis. Insights and findings from this multi-faceted, social-semiotic analysis are interpreted in light of Cultivation Theory, Self-objectification Theory and the literature to date. Implications for future research include the implementation of a multi-dimensional approach whereby linguistic and visual analytical models are deployed with special regard to cultural variation.

Keywords: gender, hegemony, magazine cover page, multimodal discourse analysis, multimodal metaphor, multimodal metonymy, systemic functional grammar, visual grammar

Procedia PDF Downloads 349
54 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests covered laminar, transitional and turbulent flow. The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section reactor configurations and one multiple concentric annular cross section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some tests and lower in others, depending also on the value assumed for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
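
As a hedged illustration of the kind of theoretical prediction the measurements were compared against, the snippet below computes Darcy-Weisbach friction head loss in a single concentric annulus using the hydraulic diameter D_h = D_out - D_in and the Swamee-Jain explicit friction factor. The geometry, flow rate, and roughness are illustrative values, not those of the semi-pilot reactor.

```python
# Hedged sketch: Darcy-Weisbach head loss in a concentric annulus.
import math

def annulus_head_loss(Q, D_out, D_in, L, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Friction head loss (m) for flow rate Q (m^3/s) through an annular pipe."""
    area = math.pi / 4.0 * (D_out**2 - D_in**2)
    Dh = D_out - D_in                       # hydraulic diameter of an annulus
    v = Q / area
    Re = v * Dh / nu
    if Re < 2300.0:
        f = 64.0 / Re                       # laminar (circular-pipe approximation)
    else:                                   # Swamee-Jain explicit Colebrook fit
        f = 0.25 / math.log10(eps / (3.7 * Dh) + 5.74 / Re**0.9) ** 2
    return f * (L / Dh) * v**2 / (2.0 * g)

print(f"{annulus_head_loss(Q=2.0e-4, D_out=0.05, D_in=0.03, L=1.2):.4f} m")
```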

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 218
53 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an attractive solution for harbor and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study particularly focuses on the definition of hydrodynamic water pressure and the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Currently, design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations in Plaxis 2D, a well-known geotechnical software package, and in CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. This provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile, due to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of its relevant factors is the selection of the seismic reduction factor. Many studies discuss its importance as well as its uncertainties. Moreover, current European standards do not propose a clear statement on this, and recommend using a reduction factor equal to 1. This leads to conservative requirements when compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improving its methodologies and approaches. Consequently, designs could propose better seismic solutions thanks to advanced methods such as the findings of this study.
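
A minimal sketch of the Westergaard pressure profile discussed above follows: the constant design-level load, and a time-varying variant in which the same profile is rescaled by the ground acceleration at each instant, as in the study's comparison. The depth, accelerogram, and density are illustrative assumptions.

```python
# Hedged sketch: Westergaard hydrodynamic pressure, p = (7/8) * rho * a_h * sqrt(H*z).
import numpy as np

RHO_W, G = 1000.0, 9.81            # fresh-water density; ~1025 kg/m^3 for seawater

def westergaard_pressure(z, H, a_h):
    """Hydrodynamic pressure (Pa) at depth z (m) for water depth H (m) and
    horizontal ground acceleration a_h (m/s^2)."""
    return 7.0 / 8.0 * RHO_W * a_h * np.sqrt(H * np.asarray(z))

H = 12.0                                    # water depth in front of the wall
z = np.linspace(0.0, H, 5)
print(westergaard_pressure(z, H, a_h=0.2 * G))        # constant design-level PGA

# Time-varying variant: rescale the same profile by a(t) sample by sample, so the
# load fluctuates during the earthquake instead of remaining constant.
t = np.linspace(0.0, 10.0, 6)
a_t = 0.2 * G * np.sin(2.0 * np.pi * 1.5 * t)         # toy accelerogram
p_toe = westergaard_pressure(H, H, np.abs(a_t))       # pressure at the toe over time
print(p_toe.round(1))
```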

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 142
52 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 33
51 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined the social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in an ethnically diverse large city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but serious problems often accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems. Furthermore, they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development in the areas of cognitive-developmental, psychoanalytic, and behavioral learning; however, in almost all cases, these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research is inconclusive on the effectiveness of these programs. Most often studied is the socio-affective domain, which includes development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores, particularly whether the emotional, social and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities, and how these capabilities are related to each other. This mixed-methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. The research questions guiding the study were: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in developmental profiles? Students’ performances on the psychometric instruments were compared over time and by program type. To assess moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-i: YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individual and school-wide) were examined. If a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight into environmental correlation.

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 193
50 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from single-axis design to multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The system is modelled with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the ordering of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
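
A hedged sketch of the component-selection loop follows: each candidate (motor, gear box, ball screw) combination parameterizes a crude second-order axis surrogate, scored by the integral of the absolute step-response deviation. The toy catalogues and the surrogate dynamics are stand-ins for the Modelica model, and the exhaustive loop stands in for the surrogate-based, gradient-based, and evolutionary optimizers tested in the paper.

```python
# Hedged sketch: discrete component selection scored on step-response deviation.
import itertools
import numpy as np

motors = {"M1": 0.8, "M2": 1.5, "M3": 2.4}        # torque constants (illustrative)
gears = {"G5": 5.0, "G10": 10.0}                  # gear ratios
screws = {"S10": 0.010, "S20": 0.020}             # ball screw leads (m/rev)

def step_deviation_integral(kt, ratio, lead, t_end=0.5, n=2000):
    """Integrate |1 - x(t)| for a unit position step on a 2nd-order axis surrogate."""
    wn = 40.0 * np.sqrt(kt * ratio * lead / 0.02)  # crude stiffness surrogate
    zeta = 0.83                                    # damping giving roughly 1% overshoot
    t = np.linspace(0.0, t_end, n)
    wd = wn * np.sqrt(1.0 - zeta**2)
    x = 1.0 - np.exp(-zeta * wn * t) * (
        np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))
    return float(np.sum(np.abs(1.0 - x)) * (t[1] - t[0]))

# Exhaustive baseline over the toy catalogue; real optimizers replace this loop.
best = min(itertools.product(motors, gears, screws),
           key=lambda c: step_deviation_integral(motors[c[0]], gears[c[1]], screws[c[2]]))
print("best component set:", best)
```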

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 286
49 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, the proposed model, integrating combined CNNs and RNNs, demonstrates high precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software particularly enhances the precision of vowel formant estimation, yielding a substantial improvement in landmark detection accuracy compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the domain of distinctive feature-based speech recognition systems. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This allows the model to excel at capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, the advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on the LaMIT database, consisting of speech recorded in a quiet room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as the front end of a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding signifies a significant advancement in Italian speech vowel landmark detection, offering accuracy, adaptability, and sophistication at the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of these techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
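
A minimal PyTorch sketch of the combined CNN + RNN architecture class described above is given below: a small convolutional front end over spectrogram frames feeding a bidirectional recurrent layer that emits a per-frame landmark probability. Layer sizes, the mel-spectrogram input, and the sigmoid head are illustrative assumptions, not the authors' exact architecture (which additionally uses self-attention, positional embeddings, and Bayesian temporal encoding).

```python
# Hedged sketch of a CNN + RNN frame-wise landmark detector; sizes are illustrative.
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    def __init__(self, n_mels=80, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # local spectro-temporal features
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                      # pool over frequency only
        )
        self.rnn = nn.GRU(32 * (n_mels // 2), hidden, batch_first=True,
                          bidirectional=True)          # temporal context
        self.head = nn.Linear(2 * hidden, 1)           # per-frame landmark logit

    def forward(self, spec):                           # spec: (batch, n_mels, frames)
        z = self.cnn(spec.unsqueeze(1))                # (batch, 32, n_mels//2, frames)
        z = z.flatten(1, 2).transpose(1, 2)            # (batch, frames, features)
        out, _ = self.rnn(z)
        return torch.sigmoid(self.head(out)).squeeze(-1)   # landmark prob per frame

probs = CnnRnnLandmarkDetector()(torch.randn(2, 80, 300))
print(probs.shape)                                     # torch.Size([2, 300])
```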

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63
48 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School

Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom

Abstract:

The requirement for an organization implementing a quality management system to sustain compliance and its commitment to continuous improvement is ever higher: offices and units are expected to comply highly and consistently with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under a project management system for which it holds quality management certification. To further realize its mandate as a think-tank and capacity builder of the government, DAP expanded its operations and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs on public management and productivity & quality aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which directly increased the number of students. There is an apparent necessity to align the project management system with a more educational system; otherwise, it will no longer be responsive to the developments taking place. The school strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country. If innovation does not take root in the grounds of the GSPDM, how will it serve the purpose of 'walking the talk'? This research was conducted to assess the diverse flow of the existing internal operations and processes of DAP's project management and GSPDM's school management, to serve as the basis for developing a system that harmonizes the two into one: the Learning Management System. The study documented the existing processes of the GSPDM, following the project management phases of conceptualization & development, negotiation & contracting, mobilization, implementation, and closure, in flow charts of the key activities. The primary sources of information, as respondents, were the different groups involved in the delivery of graduate programs: the executive, the learning management team and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all programs within the GSPDM. The unique processes cover the three important areas of school management: students, curriculum, and faculty. The required processes of these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research identifies critical reports and generates manageable documents and records to ensure accurate, consistent and reliable information. The researchers carried out an in-depth review of the DAP-GSPDM mandate, analyzed the various documents, and conducted a series of focus group discussions. A comprehensive review of the existing flow chart system and of various models of school management systems was also made. The final output of the research is a work instructions manual that will be presented to the Academy's Quality Management Council and will eventually be an additional scope for ISO certification. The manual shall include documented forms, iterative flow charts and a program Gantt chart, with a parallel development of automated systems.

Keywords: productivity, quality, learning management system, agency-based graduate school

Procedia PDF Downloads 319
47 Digital Geological Map of the Loki Crystalline Massif (The Caucasus) and Its Multi-Informative Explanatory Note

Authors: Irakli Gamkrelidze, David Shengelia, Giorgi Chichinadze, Tamara Tsutsunava, Giorgi Beridze, Tamara Tsamalashvili, Ketevan Tedliashvili, Irakli Javakhishvili

Abstract:

The Caucasus is situated between the Eurasian and Africa-Arabian plates and represents a component of the Mediterranean (Alpine-Himalayan) collision belt. The Loki crystalline massif crops out within one of the terranes of the Caucasus, the Baiburt-Sevanian terrane. By the end of 2018, a digital geological map (1:50 000) of the Loki massif had been compiled. The presented map is of great importance for the region, since until now there has been no large-scale geological map reflecting the present standards of geological study of the massif. The existing State Geological Map of the Loki massif is very outdated. The new map, drawn using GIS (Geographic Information System) technology, is loaded with multi-informative details that include: specified contours of geological units and separate tectonic scales, key mineral assemblages and facies of metamorphism, temperature conditions of metamorphism, ages of metamorphic events and of the massif rocks, and genetic-geodynamic types of magmatic rocks. The explanatory note attached to the map includes a large spectrum of scientific information. It contains a characterization of the geological setting and composition, and petrogenetic and geodynamic models of the massif's formation. To create the geological map of the Loki crystalline massif, appropriate methodologies were applied: sampling of rocks, GIS technology-based mapping of geological units, microscopic description of the material, compositional analysis of rocks, microprobe analysis of minerals and a new interpretation of the obtained data. To prepare the digital version of the map, the appropriate activities were carried out, including the creation of a common database. Finally, the design was created, including the elaboration of the legend and the final visualization of the map. The results of the study presented in the explanatory note are given below. The autochthonous gneissose quartz diorites of normal alkalinity and the sub-alkaline gabbro-diorites included in them belong to different phases of magmatism. They represent “igneous” granites corresponding to mixed mantle-crustal type granites. Four tectonic plates of the allochthonous metamorphic complex–Lower Gorastskali, Sapharlo–Lok-Jandari, Moshevani, and Lower Gorastskali–differ from each other in structure and degree of metamorphism. The initial rocks of these plates were formed in different geodynamic conditions, and during the Early Bretonian orogeny, while overthrusting due to tectonic compression, they formed a thick tectonic sheet. The Lower Gorastskali overthrust sheet is a fragment of an ophiolitic association corresponding to the Paleotethys oceanic crust. The protolith of the ophiolitic complex basites corresponds to the tholeiitic series of basalts. The Sapharlo–Lok-Jandari overthrust sheet consists of metapelites metamorphosed under greenschist facies conditions of regional metamorphism. The regional metamorphism of the Moshevani overthrust sheet crystalline schists and quartzites corresponds to a range from greenschist to hornfels facies. The “mélange” is built of rock fragments and blocks of the above-mentioned overthrust sheets. The sub-alkaline and normal-alkalinity post-metamorphic granites of the Loki crystalline massif belong to “igneous” and, more rarely, to “sialic” and “anorogenic” types of granites.

Keywords: digital geological map, 1:50 000 scale, crystalline massif, the Caucasus

Procedia PDF Downloads 172
46 WASH Governance Opportunity for Inspiring Innovation and a Circular Economy in Karnali Province of Nepal

Authors: Nirajan Shrestha

Abstract:

Karnali is one of the most vulnerable provinces in Nepal, facing challenges from climate change, poverty, and natural calamities across its different regions. In recent years, the province has been severely impacted by climate change stresses such as temperature rises in the glacier lakes of the mountainous region and spring-source water shortages, particularly in hilly areas where settlements are located and water sources have depleted from their original ground levels. As a result, Karnali could face a future without enough water for all. The deeper causes of unsustainable safe water supply have long been neglected in rural areas of Nepal, and communities are unfairly burdened with the challenge of keeping water facilities functioning in areas affected by frequent natural disasters, where there is a substantial, well-documented funding gap between the revenues from user payments and the full cost of sustained services. The key importance of a permanent system to support communities in service delivery has so far been underrated. The complexity of water service sustainability as a topic should be simplified to one clear indicator: the functionality rate, which can be expressed as uptime, or the percentage of time that the service is delivered over the total time. For example, a functionality rate of 80% means that the water service is operational 80% of the time, while 20% of the time the system is not functioning. This represents 0.2 multiplied by 365, which equals 73 days every year, or roughly two and a half months without water. This percentage should be widely understood and used in Karnali. All local governments should report their targets and performance in improving it, and there should be a broader discussion about what target is acceptable and what can realistically be achieved. In response to these challenges, the Sustainable WASH for All (SUSWA) project has introduced innovative models and policy formulation strategies in its various partner local governments. SUSWA’s approach, which delegates rural water supply and sanitation responsibilities to local governments, has been instrumental in addressing these issues. To keep pace with the growing demand, the province has adopted a service support center model, linking local governments with federal authorities to ensure effective service delivery to the communities. By enhancing WASH governance through local government engagement, capacity building and inclusive WASH policy frameworks, there is potential to address WASH gaps while fostering a circular economy. This strategy emphasizes resource recovery, waste minimization and the creation of local employment opportunities. The research highlights key governance mechanisms, innovative practices and policy interventions that can be scaled up across other regions. It also provides recommendations on how to leverage Karnali’s unique socio-economic and environmental context and nature-based solutions to inspire innovation and drive sustainable WASH solutions. Key findings suggest that with strong ownership and leadership by local governments, community engagement and appropriate technology, Karnali Province can become a model for integrating WASH governance with the circular economy concept, providing broader lessons for other regions in Nepal.
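
The functionality-rate arithmetic above reduces to a one-line calculation; a minimal sketch (with illustrative target rates) is given below.

```python
# Minimal sketch of the functionality-rate indicator: uptime as a share of total
# time, reported alongside the days per year without water.
def downtime_days(functionality_rate: float) -> float:
    """Days per year a water service is not delivering, given uptime in [0, 1]."""
    return (1.0 - functionality_rate) * 365.0

for rate in (0.80, 0.90, 0.95):
    print(f"functionality {rate:.0%} -> {downtime_days(rate):.0f} days/year without water")
```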

Keywords: vulnerable provinces, natural calamities, climate change stress, spring source depletion, resource recovery, governance mechanisms, appropriate technology, community engagement, innovation

Procedia PDF Downloads 14
45 Obesity and Lifestyle of Students in the Romanian Southeastern Region

Authors: Mariana Stuparu-Cretu, Doina-Carina Voinescu, Rodica-Mihaela Dinica, Daniela Borda, Camelia Vizireanu, Gabriela Iordachescu, Camelia Busila

Abstract:

Obesity is involved in the etiology or the acceleration of progression of important non-communicable diseases, such as metabolic, cardiovascular, rheumatological and oncological diseases, and depression. There is a need to prevent the occurrence of obesity as a key link in disease management. From this point of view, the best approach is to educate youngsters early on the need for a healthy nutritional lifestyle associated with constant physical activity. The objective of the study was to assess correlations between weight condition, physical activity and food preferences of students from South East Romania. Questionnaires were applied to high school students in Galati: 1006 girls and 880 boys, aged between 14 and 19 years (the study being approved by the Local School Inspectorate and the Ethics Committee of the 'Dunarea de Jos' University of Galati). The collected answers were statistically processed using the multivariate regression method (PLS2) in the Unscrambler X program (Camo, Norway). Multiple variables such as age group, body mass index, nutritional habits and physical activities were analysed separately, depending on gender, and general mathematical models were proposed to explain the obesity trend at an early age. The study results show that overweight and obesity are present in less than a fifth of the adolescents surveyed. With very small variation and a strong correlation of over 86% for 99% of the cases, a general preference for sweet foods, nocturnal eating associated with computer work, and a reduced period of physical activity are noticed for girls. In addition, the overweight girls consume sweet juices and alcohol, although a percentage of them also go to the gym. There is also a percentage of normal-weight girls who consume high-calorie foods, which predisposes this group to become overweight in time. Within the studied group, statistics for the boys show a positive correlation of almost 87% for over 96% of cases. They prefer high-calorie foods, fast food, and sweet juices, and perform moderate physical activity. Both overweight and underweight boys are more sedentary. Over 15% of girls and over a quarter of boys consume alcohol. All these bad eating habits seem to increase with age, for both sexes. To conclude, obesity and overweight assessed in adolescents in S-E Romania reveal nonsignificant percentage differences between boys and girls. However, young people in this area of the country are sedentary in general, and a significant percentage prefer sweets/sweet juices/fast food and eat in front of the computer. The authors consider that at this age it is very useful to adapt nutritional education to new methods of food processing and market supply. This would require an early understanding of the differences among foods and nutrients and of the benefits of physical activity integrated into a healthy lifestyle, as a measure for preventing and managing non-communicable chronic diseases related to nutritional errors and sedentarism. Acknowledgment: This study has been partially funded by the Francophone University Agency, Project Réseau régional dans le domaine de la santé, la nutrition et la sécurité alimentaire (SaIN), no. 21899/06.09.2017.

Keywords: adolescents, body mass index, nutritional habits, obesity, physical activity

Procedia PDF Downloads 258
44 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant

Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro

Abstract:

The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR) in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion systems, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only come through extensive test campaigns. Following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion, characterized by adequate sizes and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximate method based on experimental data by Baumann and Coney, included in “Noise and Vibration Control Engineering” edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustical medium, analyzes the jet noise in detail and assumes all sources acting at the same time. It considers as the principal radiation source the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium. Empirical models allowing a direct calculation of the Sound Pressure Level are commonly used for rocket noise simulation. The model named after K. Eldred is probably one of the most exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustic impact of the Hyprob facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of source correlation effects and their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering, and a commercial code for industrial environmental noise impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factors. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.
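
As a hedged illustration of the Eldred-type starting point the revised model builds on, the snippet below converts jet mechanical power to an overall sound power level via an assumed acoustic efficiency. The thrust, exhaust velocity, and efficiency are illustrative values, not HIMP/IMP operating data.

```python
# Hedged sketch of the classic Eldred-style source term: acoustic power as a
# fixed fraction (acoustic efficiency, typically ~0.5-1%) of jet mechanical power.
import math

def overall_sound_power_level(thrust_N, v_exhaust, eta=0.005, W_ref=1e-12):
    """Sound power level L_W = 10*log10(eta * 0.5 * F * Ve / W_ref) in dB re 1 pW."""
    W_mech = 0.5 * thrust_N * v_exhaust       # mechanical power of the exhaust jet
    return 10.0 * math.log10(eta * W_mech / W_ref)

print(f"{overall_sound_power_level(thrust_N=30e3, v_exhaust=2500.0):.1f} dB re 1 pW")
```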

Keywords: acoustic impact, industrial noise, mitigation, rocket noise

Procedia PDF Downloads 146
43 Targeting Matrix Metalloprotease-9 to Reduce Coronary Artery Manifestations of Kawasaki’s Disease

Authors: Mohammadjavad Sotoudeheian, Navid Farahmandian

Abstract:

Kawasaki disease (KD), an acute vasculitis, is the primary cause of acquired pediatric heart disease. In children with prolonged fever, rash, and inflammation of the mucosa, KD must be considered as a clinical diagnosis. There is persuasive evidence of immune-mediated damage in the pathophysiologic cascade of KD. For example, the invasion of cytotoxic T-cells supports a viral etiology, and the inflammasome of the innate immune system is a critical component in the vasculitis formation in KD. Animal models of KD point to cytokine profiles, such as increased IL-1 and GM-CSF, that cause vascular damage. Elevated expression of CRP and IFN-γ and the upregulation of IL-6 and IL-10 production are also described in previous studies. Untreated KD is a critical risk factor for coronary artery disease and myocardial infarction. Vascular damage may encompass amplified T-cell activity. SMAD3 is an essential molecule in down-regulating T-cells and increasing expression of FoxP3, and it has a critical effect on the differentiation of regulatory T-cells. The imbalance of regulatory T-cells and pro-inflammatory Th17 cells has been studied in acute coronary syndrome during KD. Although lymphocytes and IgA plasma cells are seen at the lesion sites in the damaged coronary artery, the major immune cells in the coronary lesions are monocytes/macrophages and neutrophils. These cells secrete TNF-α and activate matrix metalloprotease (MMP)-9, reducing the integrity of vessels and predisposing patients to aneurysm formation. MMPs can break down components of the extracellular matrix and assist immune cell movement. IVIG, an effective form of treatment, clarified the role of the immune system, which may target pathogenic antigens and regulate cytokine production. Several reports have revealed that high expression of MMP-9 in monocytes/macrophages in the coronary arteries results in pathologic cascades. Curcumin is a potent antioxidant and anti-inflammatory molecule. Curcumin decreases the production of reactive oxygen and nitrogen species and inhibits transcription factors like AP-1 and NF-κB. Curcumin also has inhibitory effects on MMPs, especially MMP-9. The upregulation of MMP-9 is an important cellular response. Curcumin treatment causes a reverse effect and down-regulates MMP-9 gene expression, which may underlie its anti-inflammatory effect. Curcumin inhibits MMP-9 expression via PKC- and AMPK-dependent pathways in human monocyte cells. Elevated expression and activity of MMP-9 are correlated with advanced vascular lesions. AMPK controls lipid metabolism, oxidation, and protein synthesis. AMPK is also necessary for MMP-9 activity and THP-1 cell adhesion to endothelial cells. Curcumin was shown to inhibit the activation of AMPKα, and Compound C (an AMPK inhibitor) reduces the MMP-9 expression level. Therefore, through inactivating AMPKs and PKC, curcumin decreases the MMP-9 level, which results in inhibiting monocyte/macrophage differentiation. Compound C also suppresses the phosphorylation of three major classes of MAP kinase signaling, suggesting that curcumin may suppress the MMP-9 level by inactivation of MAPK pathways. MAPK cascades are activated to induce the expression of MMP-9, and curcumin inhibits MAPK phosphorylation, which contributes to the down-regulation of MMP-9. This study demonstrates that the potential inhibitory properties of curcumin over MMP-9 point to a therapeutic strategy to reduce the risk of coronary artery involvement during KD.

Keywords: MMP-9, coronary artery aneurysm, Kawasaki’s disease, curcumin, AMPK, immune system, NF-κB, MAPK

Procedia PDF Downloads 304
42 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, especially when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
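
As a minimal illustration of why low-discrepancy sequences reduce estimator variance, the sketch below compares a pseudo-random and a scrambled-Sobol estimate of a toy spectral integral; the two-band "spectrum" is an illustrative stand-in, not the LBL gas data used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Minimal sketch comparing pseudo-random MC with Sobol-based quasi-MC for a toy
# wavelength-sampling problem; the "spectrum" below is an assumed shape, not
# the line-by-line gas spectra used in the paper.

rng = np.random.default_rng(0)

def emissive_weight(lam):
    """Toy spectral weighting with two 'bands' (illustrative assumption)."""
    return np.exp(-((lam - 2.7) / 0.2) ** 2) + 0.6 * np.exp(-((lam - 4.3) / 0.3) ** 2)

LAM_MIN, LAM_MAX = 1.0, 6.0  # wavelength window, micrometres

def estimate(u):
    """Turn uniform samples u in [0,1) into an integral estimate of the weight."""
    lam = LAM_MIN + (LAM_MAX - LAM_MIN) * u
    return (LAM_MAX - LAM_MIN) * emissive_weight(lam).mean()

mc_estimates = [estimate(rng.random(2 ** 12)) for _ in range(32)]
qmc_estimates = [estimate(qmc.Sobol(d=1, scramble=True, seed=s)
                          .random_base2(m=12).ravel()) for s in range(32)]

# The scrambled-Sobol estimator typically shows a much smaller spread (std)
print(f"MC  mean={np.mean(mc_estimates):.5f} std={np.std(mc_estimates):.2e}")
print(f"QMC mean={np.mean(qmc_estimates):.5f} std={np.std(qmc_estimates):.2e}")
```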

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
41 Smart Laboratory for Clean Rivers in India - An Indo-Danish Collaboration

Authors: Nikhilesh Singh, Shishir Gaur, Anitha K. Sharma

Abstract:

Climate change and anthropogenic stress have severely affected ecosystems all over the globe. Indian rivers are under immense pressure, facing challenges like pollution, encroachment, extreme fluctuations in the flow regime, limited local awareness, and lack of coordination between stakeholders. To counter all these issues, a holistic river rejuvenation plan is needed that tests, innovates, and implements sustainable solutions in the river space for long-term river management. The Smart Laboratory for Clean Rivers (SLCR), an Indo-Danish collaboration project, provides a living-lab setup that brings all the stakeholders (government agencies, academic and industrial partners, and locals) together to engage, learn, co-create, and experiment for a clean and sustainable river that lasts for ages. Just as every mega project requires piloting, SLCR has opted for a small catchment of the Varuna River, located in the Middle Ganga Basin in India. Following an integrated approach to river rejuvenation, SLCR embraces various techniques and upgrades. For instance, for maintaining flow in the channel during the lean period, Managed Aquifer Recharge (MAR) is a proven technology. In SLCR, high-resolution Floa-TEM lithological data are used in MAR models for better decision-making about MAR structures near the river to enhance river-aquifer exchanges. Furthermore, water quality in the river is a major concern. A city like Varanasi, located in the last stretch of the river, generates almost 260 MLD of domestic wastewater in the catchment. The existing sewage treatment plant (STP) system is already working at full capacity. Instead of installing a new STP, SLCR is upgrading the existing STPs with an IoT-based system that optimizes operation according to nutrient load and energy consumption. SLCR also advocates nature-based solutions, such as reed beds, for drains with low flow. In the search for micropollutants, SLCR uses fingerprint analysis, which employs advanced techniques like chromatography and mass spectrometry to create unique chemical profiles. However, rejuvenation attempts cannot succeed without involving the entire catchment. A holistic water management plan therefore includes stormwater management, water harvesting structures to efficiently manage the flow of water in the catchment, and the installation of several buffer zones to restrict pollutants from entering the river. Similarly, carbon (emission and sequestration) is an important parameter for the catchment: adopting eco-friendly practices creates a ripple effect that positively influences the catchment's water dynamics and aids the revival of the river system. SLCR has adopted four villages to make them carbon-neutral and water-positive. Moreover, for 24×7 monitoring of the river and the catchment, robust IoT devices will be installed to observe river and groundwater quality, groundwater levels, river discharge, and carbon emissions in the catchment, ultimately feeding the data analytics. On completion, SLCR will provide a river restoration manual that sets out the detailed plan and implementation pathway for stakeholders. Lastly, the entire process is planned so that it can be managed by local administrations and stakeholders, supported by capacity-building activities. This holistic approach makes SLCR unique in the field of river rejuvenation.

Keywords: sustainable management, holistic approach, living lab, integrated river management

Procedia PDF Downloads 60
40 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the data measured at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in terms of predicting the source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurement. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem-solving method for characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at the observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for a preliminary identification of the source location, magnitude, and duration of source activity, and these results are then utilized for the monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for the flow and transport processes with multiple chemically reactive species, and Adaptive Simulated Annealing (ASA) as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
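
A minimal sketch of the linked simulation-optimization loop is given below: an annealing-type optimizer searches for the source parameters that minimize the misfit between simulated and measured concentrations at monitoring wells. A one-dimensional analytical plume stands in for the HYDROGEOCHEM flow and reactive-transport model, and SciPy's dual_annealing stands in for ASA; the well locations, decay constant, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing

WELLS = np.array([50.0, 120.0, 200.0])   # monitoring-well locations, m (assumed)

def simulate(source_x, flux, decay=0.01):
    """Stand-in transport model: exponentially attenuated plume downstream of source."""
    d = WELLS - source_x
    return np.where(d >= 0.0, flux * np.exp(-decay * d), 0.0)

# Synthetic "measured" data from a known source, with observation noise
true_obs = simulate(source_x=30.0, flux=80.0)
obs = true_obs + np.random.default_rng(1).normal(0.0, 1.0, true_obs.shape)

def misfit(params):
    """Sum of squared differences between simulated and measured concentrations."""
    source_x, flux = params
    return float(np.sum((simulate(source_x, flux) - obs) ** 2))

# Annealing search over source location (m) and flux (concentration units)
result = dual_annealing(misfit, bounds=[(0.0, 200.0), (0.0, 200.0)], seed=42)
print("estimated source location, flux:", np.round(result.x, 1))
```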

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 231
39 Impacts of School-Wide Positive Behavioral Interventions and Supports on Student Academics, Behavior and Mental Health

Authors: Catherine Bradshaw

Abstract:

Educators often report difficulty managing behavior problems and other mental health concerns that students display at school. These concerns also interfere with the learning process and can create distractions for teachers and other students. As such, schools play an important role in both preventing and intervening with students who experience these types of challenges. A number of models have been proposed to serve as a framework for delivering prevention and early intervention services in schools. One such model is called Positive Behavioral Interventions and Supports (PBIS), which has been scaled up to over 26,000 schools in the U.S. and many other countries worldwide. PBIS aims to improve a range of student outcomes through early detection of and intervention related to behavioral and mental health symptoms. PBIS blends and applies social learning, behavioral, and organizational theories to prevent disruptive behavior and enhance the school's organizational health. PBIS focuses on creating and sustaining tier 1 (universal), tier 2 (selective), and tier 3 (individual) systems of support. Most schools using PBIS have focused on the core elements of the tier 1 supports, which include the following critical features: the formation of a PBIS team within the school to lead implementation, and the identification and training of a behavioral support 'coach', who serves as an on-site technical assistance provider. Many of the individuals identified to serve as PBIS coaches are also trained as school psychologists or guidance counselors; coaches typically have prior PBIS experience and are trained to conduct functional behavioral assessments. The PBIS team also identifies a set of three to five positive behavioral expectations that are implemented for all students and by all staff school-wide (e.g., 'be respectful, responsible, and ready to learn'); these expectations are posted in all settings across the school, including the classroom, cafeteria, playground, etc. All school staff define and teach the school-wide behavioral expectations to all students and review them regularly. Further, PBIS schools develop or adopt a school-wide system to reward or reinforce students who demonstrate those three to five positive behavioral expectations. Staff and administrators create an agreed-upon system for responding to behavioral violations that includes definitions of what constitutes a classroom-managed vs. an office-managed discipline problem. Finally, a formal system is developed to collect, analyze, and use disciplinary data (e.g., office discipline referrals) to inform decision-making. This presentation provides a brief overview of PBIS and reports findings from a series of four U.S.-based longitudinal randomized controlled trials (RCTs) documenting the impacts of PBIS on school climate, discipline problems, bullying, and academic achievement. The four RCTs include 80 elementary, 40 middle, and 58 high schools, and the results indicate a broad range of impacts on multiple student and school-wide outcomes. The session will highlight lessons learned regarding PBIS implementation and scale-up. We also review the ways in which PBIS can help educators and school leaders engage in data-based decision-making and share data with other decision-makers and stakeholders (e.g., students, parents, community members), with the overarching goal of increasing the use of evidence-based programs in schools.

Keywords: positive behavioral interventions and supports, mental health, randomized trials, school-based prevention

Procedia PDF Downloads 230
38 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among other modalities. This highly integrated multimodal behaviour is analysed based on video data, with the aim of uncovering so-called "multimodal gestalts": patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive manual transcription of each component of the video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable the parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented in combination with the qualitative approach. It will facilitate the investigation of correlations of linguistic patterns (lexical or grammatical) with conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
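
To make the parallel-search idea concrete, the sketch below models two time-aligned annotation layers as interval lists and queries them for temporal overlap, the basic operation behind finding a "multimodal gestalt" such as a demonstrative co-occurring with a pointing gesture. The layer contents and labels are illustrative assumptions, not the VIAN-DH data model.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start: float  # seconds
    end: float
    label: str

# Two illustrative annotation layers: ASR tokens and gesture annotations
tokens = [Span(0.2, 0.5, "look"), Span(0.6, 0.9, "that"), Span(1.0, 1.4, "house")]
gestures = [Span(0.55, 1.1, "pointing")]

def overlapping(layer_a, layer_b):
    """Yield pairs of spans from two layers that overlap in time."""
    for a in layer_a:
        for b in layer_b:
            if a.start < b.end and b.start < a.end:
                yield a, b

# Find demonstratives co-occurring with pointing gestures
for tok, ges in overlapping(tokens, gestures):
    if tok.label == "that" and ges.label == "pointing":
        print(f"'{tok.label}' at {tok.start}-{tok.end}s overlaps a {ges.label} gesture")
```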

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 108
37 Determination of Aquifer Geometry Using Geophysical Methods: A Case Study from Sidi Bouzid Basin, Central Tunisia

Authors: Dhekra Khazri, Hakim Gabtni

Abstract:

Because of the overexploitation of the Sidi Bouzid water table, this study integrates geophysical methods to determine aquifer geometry, assessing the aquifers' geological situation and geophysical characteristics. However, in highly tectonized zones controlled by Atlassic structural features with major NE-SW directions (central Tunisia), the Bouguer gravimetric responses of some areas can be so dominated by the regional structural trend that they remain unidentified or are defectively interpreted, as in the case of the Sidi Bouzid basin. This issue required the elaboration of a residual gravity anomaly isolating the Sidi Bouzid basin gravity response, which ranges between -8 and -14 mGal and is crucial for characterizing its aquifer geometry. Several gravity techniques helped construct the residual gravity anomaly of the Sidi Bouzid basin, such as upward continuation, compared with polynomial regression trends, and power spectrum analysis, which detected deep basement sources at 3 km, intermediate sources at 2 km, and shallow sources at 1 km. A 3D Euler deconvolution was also performed, detecting the deepest accidents, trending NE-SW, N-S, and E-W, with depth values reaching 5500 m, and delineating the main outcropping structures of the study area. Further gravity treatments highlighted the subsurface geometry and structural features of the Sidi Bouzid basin through horizontal and vertical gradients, as well as filters based on them, such as the tilt angle and the source edge detector, which locate rooted edges or peaks in potential field data; these detected a new E-W lineament compartmentalizing the Sidi Bouzid gutter into two unequally subsiding residual-anomaly domains. This subsurface morphology is also detected by the available 2D seismic reflection sections, which define the Sidi Bouzid basin as a deep gutter within a tectonic set of negative flower structures and collapsed, tilted blocks. Furthermore, these structural features were confirmed by a forward gravity modeling process over several modeled residual gravity profiles crossing the main area. The Sidi Bouzid basin (central Tunisia) is also of great interest because of the unknown total thickness and undefined substratum of its siliciclastic Tertiary package, and the unbounded structural subsurface features and deep accidents of its aquifers. A combination of geological, hydrogeological, and geophysical methods is therefore an ultimate need. The integration of geophysical methods, based on a gravity survey supporting the available seismic data through forward gravity modeling, enhanced the definition of the lateral and vertical extent of the basin's complex sedimentary fill via 3D gravity models, improved depth estimation through a depth-to-basement modeling approach, and provided a 3D isochronous seismic mapping visualization of the basin's Tertiary complex, refining its geostructural schema. A subsurface basin geomorphology map, based on an ultimate match between the basin's residual gravity map and the calculated theoretical signature map, was also displayed over the modeled residual gravity profiles. An ultimate multidisciplinary geophysical study of the Sidi Bouzid basin aquifers can be accomplished via an aeromagnetic survey and 4D microgravity reservoir monitoring, offering temporal tracking of the target aquifer's subsurface fluid dynamics and enhancing and rationalizing future groundwater exploitation in this arid area of central Tunisia.
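
The regional/residual separation step can be sketched as follows: fit a low-order polynomial surface (the regional trend) to the Bouguer anomaly grid and subtract it to expose the local basin response. The grid geometry, trend coefficients, and anomaly amplitudes below are illustrative assumptions, not the Sidi Bouzid data.

```python
import numpy as np

# Minimal sketch of regional/residual gravity separation by polynomial trend
# removal; synthetic Bouguer data stand in for the survey measurements.

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 40, 60), np.linspace(0, 40, 60))  # km
regional = -30.0 + 0.5 * x - 0.3 * y            # assumed regional trend, mGal
basin = -10.0 * np.exp(-(((x - 20) / 6) ** 2 + ((y - 20) / 8) ** 2))  # local low
bouguer = regional + basin + rng.normal(0.0, 0.2, x.shape)

def polynomial_trend(x, y, z, order=1):
    """Least-squares fit of a polynomial surface (the 'regional') to anomaly z."""
    cols = [x.ravel() ** i * y.ravel() ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return (A @ coeffs).reshape(z.shape)

residual = bouguer - polynomial_trend(x, y, bouguer, order=1)
print(f"residual anomaly range: {residual.min():.1f} to {residual.max():.1f} mGal")
```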

Keywords: aquifer geometry, geophysics, 3D gravity modeling, improved depths, source edge detector

Procedia PDF Downloads 283
36 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired

Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo

Abstract:

Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIP's ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules, the sensing and the actuation module, which are fitted to a chest mount and a belt and communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized into three levels of proximity covering the range from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data are used to estimate the distance between the stairs and the user. The information gathered from this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), varying in both amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS), which is considered an adequate processing speed, taking into account that the average walking speed of VIP is 1.439 m/s. In order to test the HMI, we conducted a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages over five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and locate them within the tactile display. Both were calculated as the mean across all subjects. Results show that the focus group achieved a message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the users' awareness of their surrounding space.
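
A minimal sketch of the proximity-to-tactone mapping is shown below: each of three proximity bands is assigned a vibration amplitude and pulse frequency, and one sensor frame is translated into per-motor commands. The band boundaries and amplitude/frequency values are illustrative assumptions, not the calibrated AVSS parameters.

```python
# proximity band (metres) -> (duty cycle %, pulse frequency Hz); assumed values
TACTONE_TABLE = [
    ((0.0, 2.0), (100, 8.0)),   # near: strong, fast pulsing
    ((2.0, 4.0), (60, 4.0)),    # mid: moderate
    ((4.0, 6.0), (30, 1.5)),    # far: gentle, slow
]

def tactone_for_distance(distance_m):
    """Return (amplitude, frequency) for an obstacle distance, or None if out of range."""
    for (lo, hi), tactone in TACTONE_TABLE:
        if lo <= distance_m < hi:
            return tactone
    return None  # beyond the 6 m sensing envelope: motor stays off

def frame_to_commands(distances):
    """Map one sensor frame (6 ultrasonic readings, row-major 3x2) to motor commands."""
    return [tactone_for_distance(d) for d in distances]

# Example frame: obstacles at mixed distances; the 7.0 m reading yields None (off)
print(frame_to_commands([1.2, 3.5, 5.9, 7.0, 0.4, 2.1]))
```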

Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems

Procedia PDF Downloads 81
35 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g., ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and the induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress. As a result, once cracking begins, shear forces can only be transmitted to a limited extent by friction, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a chance to determine the distribution of the PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
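
The post-processing step behind a PHFA profile can be sketched as follows: take the absolute peak of each floor's horizontal acceleration history and relate it to normalized height, repeating over the ground-motion set to obtain the distribution. The synthetic floor histories below are placeholders for the FE time-history output, not results of the study.

```python
import numpy as np

# Minimal sketch of extracting a PHFA profile from floor acceleration histories.
rng = np.random.default_rng(0)
n_floors, n_steps = 5, 4000  # assumed discretization of one analysis run

# Placeholder "floor responses": amplified noise per floor, floor 0 = ground
floor_acc = np.stack([
    (1.0 + 0.3 * k) * rng.normal(0.0, 0.8, n_steps) for k in range(n_floors)
])  # m/s^2

phfa = np.abs(floor_acc).max(axis=1)            # peak |acceleration| per floor
normalized_height = np.arange(n_floors) / (n_floors - 1)

for z, a in zip(normalized_height, phfa):
    print(f"z/H = {z:.2f}  PHFA = {a:.2f} m/s^2")
```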

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 168
34 Phenotype and Psychometric Characterization of Phelan-Mcdermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: The Phelan-McDermid syndrome (PMS) is a genetic disorder caused by the deletion of the terminal region of chromosome 22 or by mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delayed or absent speech, autism spectrum disorder symptoms, low muscle tone, motor delays, and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and with data supporting the restoration of memory and cognition defects as well as the reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision medicine approach to treat PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects older than 16 years with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Autism Diagnostic Observation Schedule (ADOS-2), the Battelle Developmental Inventory, and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies three groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by a low Battelle age, high ADOS scores, and low Vineland, RBQ, and BPI scores. Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has a high Battelle age and low ADOS scores. The third cluster is intermediate for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS scores and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat central nervous system disorders. Epigenetic dysregulation has been proposed to be an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study to explore vafidemstat's ability to treat SHANK3-associated psychiatric disorders.
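
The grouping step can be sketched with standard tooling: Ward-linkage agglomerative clustering on z-scored psychometric profiles, cut at three clusters. The score matrix below is synthetic, shaped only to mimic the three reported profiles; it is not the study data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# columns: Battelle age-equivalent, ADOS, Vineland, RBQ, BPI (z-scored, assumed)
profiles = np.vstack([
    rng.normal([-1.0,  1.0, -1.0, -0.5, -0.5], 0.3, (6, 5)),  # cluster-1-like
    rng.normal([ 1.0, -1.0, -1.0, -0.5, -0.5], 0.3, (6, 5)),  # cluster-2-like
    rng.normal([ 0.0,  0.0,  0.0,  1.0,  1.0], 0.3, (7, 5)),  # cluster-3-like
])

Z = linkage(profiles, method="ward")        # agglomerative clustering, Ward linkage
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```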

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 165
33 Digital Mapping of First-Order Drainages and Springs of the Guajiru River, Northeast of Brazil, Based on Satellite and Drone Images

Authors: Sebastião Milton Pinheiro da Silva, Michele Barbosa da Rocha, Ana Lúcia Fernandes Campos, Miquéias Rildo de Souza Silva

Abstract:

Water is an essential natural resource for life on Earth. Rivers, lakes, lagoons, and dams are the main sources of water storage for human consumption. The costs of extracting and using these water sources are lower than those of exploiting groundwater in transition zones to semi-arid terrains. However, the volume of surface water has decreased over time, with the depletion of first-order drainages and the disappearance of springs, phenomena that are easily observed in the field. Climate change worsens water scarcity, compromising supply and water security for rural populations. To minimize the expected impacts, producing and storing water through watershed management planning requires detailed cartographic information on the relief and topography, and updated data on the stage and intensity of environmental degradation problems in the catchment basin. The available cartography of the Brazilian northeastern territory dates to the 1970s, with printed topographic maps at a scale of 1:100,000, which does not meet the requirements of this project. Exceptionally, there are topographic maps at scales of 1:50,000 and 1:25,000 for some coastal regions of northeastern Brazil; still, due to scale limitations and outdatedness, they are of little utility for mapping low-order drainages and springs. Remote sensing data and geographic information systems can contribute to guiding the process of mapping and environmental recovery by integrating detailed relief and topographic data, as well as social and other environmental information, in the Guajiru River basin, located on the east coast of Rio Grande do Norte, in the Northeast region of Brazil. This study aimed to recognize and map the catchment basin, springs, and low-order drainage features while estimating morphometric parameters. The Alos PALSAR and Copernicus digital elevation models were evaluated and provided regional drainage features, and the watershed limits were extracted with the TerraView/TerraHidro 5.0 software. CBERS 4A satellite images with 2 m spatial resolution, processed with the ESA SNAP Toolbox, allowed generating a land use/land cover map of the Guajiru River basin. A MAPIR Survey3 multispectral camera onboard a DJI Phantom 4, a Mavic 2 Pro PPK drone, and an X91 GNSS receiver for collecting the precise positions of selected points were employed for the detailed mapping. Satellite images enabled a first, very current regional-scale view of the watershed areas, and drone images were essential for mapping the details of the catchment basins. The drone multispectral image mosaics, the digital elevation model, the contour lines, and the geomorphometric parameters were generated using the OpenDroneMap/ODM and QGIS software. The drone images facilitated the location, understanding, and mapping of watersheds, recharge areas, and first-order ephemeral watercourses at an adequate scale, and they will be used in the following project phases: watershed management planning, and the recovery and environmental protection of the Guajiru River's springs. Environmental degradation is being analyzed from the perspective of the availability and quality of the surface water supply.
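
Drainage delineation from a DEM starts with a flow-direction step; the sketch below implements the basic D8 rule (steepest downslope neighbour) on a toy grid. It is a didactic stand-in for what TerraHidro and similar tools do at scale, and the tiny DEM values are illustrative assumptions.

```python
import numpy as np

# Toy 4x4 DEM (elevations in metres), sloping toward the lower-right corner
dem = np.array([
    [10.0, 9.5, 9.0, 8.5],
    [ 9.8, 9.0, 8.2, 7.9],
    [ 9.6, 8.4, 7.5, 7.0],
    [ 9.4, 8.2, 7.1, 6.0],
])

# 8 neighbours: (row offset, col offset, distance in cell units)
NEIGHBOURS = [(-1, -1, 2**0.5), (-1, 0, 1.0), (-1, 1, 2**0.5), (0, -1, 1.0),
              (0, 1, 1.0), (1, -1, 2**0.5), (1, 0, 1.0), (1, 1, 2**0.5)]

def d8_direction(dem, r, c):
    """Return the (dr, dc) of the steepest downslope neighbour, or None for a pit."""
    best, steepest = None, 0.0
    for dr, dc, dist in NEIGHBOURS:
        rr, cc = r + dr, c + dc
        if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
            slope = (dem[r, c] - dem[rr, cc]) / dist
            if slope > steepest:
                best, steepest = (dr, dc), slope
    return best

print(d8_direction(dem, 1, 1))  # flows diagonally toward the lower right: (1, 1)
```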

Keywords: imaging, relief, UAV, water

Procedia PDF Downloads 31
32 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology

Authors: Ashley L. Freeman, Jessica D. Watkins

Abstract:

TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, giving healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine across geographical boundaries. Qualitative research has shown that this extension of timely, high-quality care addresses the critical needs of patients in remote and underserved areas. TeleEM's service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system routing charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. TeleCTAs collaborate closely with TeleEM physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Not only has TeleEM advanced the virtual delivery of emergency medicine care through focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, but it has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.

Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine

Procedia PDF Downloads 83
31 Increasing Student Engagement through Culturally-Responsive Classroom Management

Authors: Catherine P. Bradshaw, Elise T. Pas, Katrina J. Debnam, Jessika H. Bottiani, Michael Rosenberg

Abstract:

Worldwide, ethnically and culturally diverse students are at increased risk for school failure, discipline problems, and dropout. Despite decades of concern about this issue of disparities in education and other fields (e.g., the 'school-to-prison pipeline'), there has been limited empirical examination of models that can actually reduce these gaps in schools. Moreover, few studies have examined the effectiveness of in-service teacher interventions and supports specifically designed to reduce discipline disparities and improve student engagement. This session provides an overview of the evidence-based Double Check model, which serves as a framework for teachers to use culturally-responsive strategies to engage ethnically and culturally diverse students in the classroom and reduce discipline problems. Specifically, Double Check is a school-based prevention program which includes three core components: (a) enhancements to the school-wide Positive Behavioral Interventions and Supports (PBIS) tier-1 level of support; (b) five one-hour professional development training sessions, each addressing one of five domains of cultural competence (i.e., connection to the curriculum, authentic relationships, reflective thinking, effective communication, and sensitivity to students' culture); and (c) coaching of classroom teachers using an adapted version of the Classroom Check-Up, which aims to increase teachers' use of effective classroom management and culturally-responsive strategies through research-based motivational interviewing and data-informed problem-solving approaches. This paper presents findings from a randomized controlled trial (RCT) testing the impact of Double Check on office discipline referrals (disaggregated by race) and on independently observed and self-reported culturally-responsive practices and classroom behavior management. The RCT included 12 elementary and middle schools; 159 classroom teachers were randomized either to receive coaching or to serve as comparisons. Multilevel analyses indicated that teacher self-reported culturally responsive behavior management improved over the course of the school year for teachers who received the coaching and professional development. Moreover, the average annual office discipline referrals issued to Black students were reduced among teachers randomly assigned to receive coaching relative to comparison teachers. Similarly, observations conducted by trained external raters indicated significantly more proactive behavior management and anticipation of student problems by teachers, higher student compliance, less student non-compliance, and fewer socially disruptive behaviors in classrooms led by coached teachers than in classrooms led by teachers assigned to the non-coached condition. These findings indicate promising effects of the Double Check model on a range of teacher and student outcomes, including disproportionality in office discipline referrals among Black students. These results also suggest that the Double Check model is one of only a few systematic approaches to promoting culturally-responsive behavior management that have been rigorously tested and shown to be associated with improvements in student and staff outcomes, including significant reductions in discipline problems and improvements in behavior management. Implications of these findings are considered within the broader context of globalization and demographic shifts and their impacts on schools. These issues are particularly timely, given growing concerns about immigration policies in the U.S. and abroad.

Keywords: ethnically and culturally diverse students, student engagement, school-based prevention, academic achievement

Procedia PDF Downloads 282
30 Working at the Interface of Health and Criminal Justice: An Interpretative Phenomenological Analysis Exploration of the Experiences of Liaison and Diversion Nurses – Emerging Findings

Authors: Sithandazile Masuku

Abstract:

Introduction: Public health approaches to offender mental health are driven by international policies and frameworks responding to the disproportionately large representation of people with mental health problems within the offender pathway compared to the general population. Public health service innovations include mental health courts in the US, restorative models in Singapore, and liaison and diversion services in Australia, the UK, and some other European countries. Mental health nurses are at the forefront of offender health service innovations. In the UK context, police custody has been identified as an early point within the offender pathway where nurses can improve outcomes by offering assessments and sharing information with criminal justice partners. This scope of nursing practice has introduced challenges related to the skills and support required for nurses working at the interface of health and the criminal justice system. Parallel literature exploring the experiences of nurses working in forensic settings suggests the presence of compassion fatigue, burnout, and vicarious trauma, which may pose a risk of harm to the nurses in these settings. Published research explores mainly service-level outcomes, including the monitoring of figures indicative of a reduction in offending behavior. There is minimal research exploring the experiences of liaison and diversion nurses, who are situated away from a supportive clinical environment and engaged in complex autonomous decision-making. Aim: This paper will share qualitative findings (in progress) from a PhD study that aims to explore the experiences of liaison and diversion nurses in one service in the UK. Methodology: This is a qualitative interview study conducted using Interpretative Phenomenological Analysis to gain an in-depth account of lived experiences. Methods: A purposive sampling technique was used to recruit n=8 mental health nurses, registered with the UK professional body (the Nursing and Midwifery Council), from one UK liaison and diversion service. All participants were interviewed online via video call using a semi-structured interview topic guide. Data were recorded, transcribed verbatim, and analysed using the seven steps of the Interpretative Phenomenological Analysis method. Emerging Findings: Analysis to date has identified pertinent themes: • difficulties of meaning-making for nurses because of the complexity of their boundary-spanning role; • the emotional burden experienced in a highly emotive and fast-changing environment; • stress and difficulties with role identity impacting individual nurses' ability to be resilient; • challenges to wellbeing related to a sense of isolation when making complex decisions. Conclusion: Emerging findings highlight the lived experiences of nurses working in liaison and diversion as challenging. The nature of the custody environment has an impact on role identity and decision-making. Nurses left feeling isolated and unsupported are less resilient and may go on to experience compassion fatigue. The findings thus far point to a need to connect nurses working in these boundary-spanning roles with a supportive infrastructure in which the complexity of their role is acknowledged and they are connected with a health agenda. In doing this, the nurses would be protected from harm, and the likelihood of sustained positive outcomes for service users would be optimised.

Keywords: liaison and diversion, nurse experiences, offender health, staff wellbeing

Procedia PDF Downloads 135