62. Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel
Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon
Abstract:
The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. 
Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.
Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program
Procedia PDF Downloads: 81

61. A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students
Authors: Kavita Goel, Donald Winchester
Abstract:
In a world of information overload and work complexities, academics often struggle to create an online instructional environment enabling efficient and effective student learning. Research has established that students’ learning styles differ; some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. Cognitive load theory posits that learners have limited processing capacity. Cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of the human learning processes and the various learning styles of students, and accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially lead to a reduction of cognitive load (effort) and increased facilitation of learning when compared to conventional sequential text problem solving. This helps learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video recorded lectures and key concept webinars were delivered to students. Videos were prepared to free students’ limited working memory from irrelevant mental effort, as all elements on a visual screen can be viewed simultaneously and processed quickly, which facilitates greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels and studying part-time. Their learning styles and needs are different from those of other tertiary students. 
The purpose of the audio and visual interventions was to lower the students’ cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to favourably impact the students’ learning experience, academic performance, and retention. This paper posits that these changes to instructional design facilitate students’ integration of new knowledge into their long-term memory. A mixed methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics. Secondary data were collected from the organisation’s databases and reports. Some evidence was found that the academic performance of students improves when new instructional design changes are introduced, although the improvement was not statistically significant. However, the overall grade distribution of students’ academic performance has shifted higher, which suggests deeper understanding of the content. Feedback received from students indicated that recorded webinars served as better learning aids than text-only material, especially for more complex content. The recorded webinars on the subject content and assessments give students the flexibility to access this material at any time, and as often as needed, which suits their learning styles. Visual and audio information enters students’ working memory more effectively. Also, as each assessment required applying the concepts, conceptual knowledge interacted with the pre-existing schema in long-term memory and lowered students’ cognitive load.
Keywords: cognitive load theory, learning style, instructional environment, working memory
Procedia PDF Downloads: 146

60. The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil
Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes
Abstract:
Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and couples a hydrological model, based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State in the Mantiqueira Mountains and has a historical landslide record. During the fieldwork, soil samples were collected, and the VES method applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model. 
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope gradients. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculates the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its effectiveness in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides in urban and natural slopes. The geotechnical and geophysical methods proved very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with TRIGRS modeling of landslide-susceptible areas is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey
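TRIGRS couples a Richards-equation infiltration model with an infinite-slope limit equilibrium stability model. As a rough illustration of the stability half (this is not the TRIGRS code itself, and the soil values below are invented for the example), the infinite-slope Factor of Safety with a transient pressure head can be sketched in Python:

```python
import math

def factor_of_safety(slope_deg, c, phi_deg, psi, z, gamma_s=20.0, gamma_w=9.81):
    """Infinite-slope Factor of Safety with transient pore pressure,
    the form of stability model underlying TRIGRS.
    slope_deg: slope angle (degrees); c: effective cohesion (kPa);
    phi_deg: internal friction angle (degrees); psi: pressure head (m) at depth z (m);
    gamma_s: soil unit weight (kN/m^3); gamma_w: unit weight of water (kN/m^3)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (gamma_s * z * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Illustrative values: a dry slope versus the same slope after a
# rainfall-driven rise in pressure head at 2 m depth
fs_dry = factor_of_safety(35.0, c=10.0, phi_deg=30.0, psi=0.0, z=2.0)
fs_wet = factor_of_safety(35.0, c=10.0, phi_deg=30.0, psi=1.5, z=2.0)
```

As rainfall raises the pressure head psi, the cohesive term shrinks and the Factor of Safety falls below 1, which is the behaviour the model exploits to flag susceptible grid cells.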
Procedia PDF Downloads: 174

59. Nurturing Minds, Shaping Futures: A Reflective Journey of 32 Years as a Teacher Educator
Authors: Mary Isobelle Mullaney
Abstract:
The maxim "an unexamined life is not worth living," attributed to Socrates, prompts a contemplative reflection spanning over 32 years as a teacher educator in the Republic of Ireland. Taking time to contemplate the changes that have occurred and the current landscape provides valuable insights into the dynamic terrain of teacher preparation. The reflective journey traverses the impacts of global and societal shifts, responding to challenges, embracing advancements, and navigating the delicate balance between responsiveness to the world and the active shaping of it. The transformative events of the COVID-19 pandemic spotlighted the indispensable role of teachers in Ireland, reinforcing the critical nature of education for the well-being of pupils. Research solidifies the understanding that teachers matter and so it is worth exploring the pivotal role of the teacher educator. This reflective piece examines the changes in teacher education and explores the juxtapositions that have emerged in response to three decades of profound change. The attractiveness of teaching as a career is juxtaposed against the reality of the demands of the job, with conditions for public servants in Ireland undergoing a shift. High-level strategic discussions about increasing teacher numbers now contrast with a previous oversupply. The delicate balance between the imperative to increase enrolment (getting "bums on seats") and the gatekeeper role of teacher educators is explored, raising questions about maintaining high standards amid changing student profiles. Another poignant dichotomy involves the high demand for teachers versus the hurdles candidates face in becoming teachers. The rising cost and duration of teacher education courses raise concerns about attracting quality candidates. The perceived attractiveness of teaching as a career contends with the reality of increased demands on educators. 
One notable juxtaposition centres around the rapid evolution of Irish initial teacher education versus the potential risk of change overload. The Teaching Council of Ireland has spearheaded considerable changes, raising questions about the timing and evaluation of these changes. This reflection contemplates the vision of a professional teaching council versus its evolving reality and the challenges posed by the value placed on school placement in teacher preparation. The juxtapositions extend to the classroom, where theory may not seamlessly align with the lived experience. Inconsistencies between college expectations and the classroom reality prompt reflection on the effectiveness of teacher preparation programs. Addressing the changing demographic landscape of society and schools, there is a persistent incongruity between the diversity of Irish society and the profile of second-level teachers. As education undergoes a digital revolution, the enduring philosophies of education confront technological advances. This reflection highlights the tension between established practices and contemporary demands, acknowledging the irreplaceable value of face-to-face interaction while integrating technology into teacher training programs. In conclusion, this reflective journey encapsulates the intricate web of juxtapositions in Irish Initial Teacher Education. It emphasises the enduring commitment to fostering education, recognising the profound influence educators wield, and acknowledging the challenges and gratifications inherent in shaping the minds and futures of generations to come.
Keywords: Irish post primary teaching, juxtapositions, reflection, teacher education
Procedia PDF Downloads: 57

58. Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces
Authors: Martin Alexander Eder, Sergei Semenov
Abstract:
Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while simultaneously maintaining high stiffness and low mass entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable amplitude stress histories in the material. Empirical evidence shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which over time has developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. 
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two constant life diagrams, one in shear and one in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
Keywords: adhesive, fatigue, interface, multiaxial stress
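The damage bookkeeping described above, a linear Palmgren-Miner sum over rainflow-counted equivalent stress cycles, can be sketched as follows. The Basquin S-N parameters used here are illustrative placeholders, not values calibrated for the adhesive in the study:

```python
def cycles_to_failure(stress_amp, sigma_f=80.0, b=-0.1):
    """Basquin S-N relation: stress_amp = sigma_f * N**b, solved for N.
    sigma_f (fatigue strength coefficient, MPa) and b (fatigue exponent)
    are assumed example values."""
    return (stress_amp / sigma_f) ** (1.0 / b)

def miner_damage(counted_cycles):
    """Linear Palmgren-Miner damage: D = sum(n_i / N_i) over rainflow-counted bins.
    counted_cycles: list of (stress_amplitude_mpa, n_cycles) pairs; D >= 1 implies failure."""
    return sum(n / cycles_to_failure(s) for s, n in counted_cycles)

# Hypothetical rainflow-counted equivalent stress history
damage_low = miner_damage([(40.0, 1000)])   # below the failure threshold
damage_high = miner_damage([(40.0, 2000)])  # above the failure threshold
```

In the model described above, this summation is carried out separately for the equivalent shear and equivalent normal stress histories, and the governing index is the maximum of the resulting damage values.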
Procedia PDF Downloads: 170

57. Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development
Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng
Abstract:
Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach is implemented: 1) identification of the optimal sample preparation reagents using pre-mixed bacteria and healthy donor stool samples coupled with a proprietary Sigma-Aldrich® bioinformatics solution; 2) exploratory analysis of patient samples for enabling precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA 2002 and MSA 2003) across five different extraction kits (Kits A, B, C, D, and E). Both microbiome standards were extracted in triplicate across all extraction kits. Following isolation, DNA quantity was determined by Qubit assay. DNA quality was assessed to determine purity and to confirm that the extracted DNA is of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated via amplification of the V3/V4 hypervariable region of the 16S rRNA. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. Fastq files were analyzed using the Sigma-Aldrich® Microbiome Platform, a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by other external providers. Data Summary: The DNA yield using extraction kits D and E is below the limit of detection (100 pg/µl) of the Qubit assay, as both extraction kits are intended for samples with low bacterial counts. The pre-mixed bacterial pellets at high concentrations, with an input of 2 × 10⁶ cells for MSA-2002 and 1 × 10⁶ cells for MSA-2003, were not compatible with these kits. 
Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B provided the least (Kit-A/MSA-2002: 174.25 ± 34.98; Kit-A/MSA-2003: 179.89 ± 30.18; Kit-B/MSA-2002: 27.86 ± 9.35; Kit-B/MSA-2003: 23.14 ± 6.39; Kit-C/MSA-2002: 55.19 ± 10.18; Kit-C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The PCoA 3D visualization of the weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier. The kit A sequencing samples cluster more closely together than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrate that the DNA extraction method impacts DNA concentration, purity, and the microbial communities detected by next-generation sequencing analysis. A further performance comparison of microbiome analysis using healthy stool samples is underway, and colorectal cancer patients' samples will be acquired to further explore the clinical utility. Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust, qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix for deciphering the deep biology and enabling precision medicine.
Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics
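The recall comparison mentioned above, i.e. the fraction of taxa in the known mock-community profile that a kit's sequencing recovers, reduces to a simple set operation. A minimal sketch with hypothetical taxon lists (not the study's data):

```python
def taxonomic_recall(expected_taxa, detected_taxa):
    """Fraction of taxa in the known mixture profile recovered by a kit's 16S results."""
    expected, detected = set(expected_taxa), set(detected_taxa)
    return len(expected & detected) / len(expected)

# Hypothetical mock-community profile and per-kit detections, for illustration only
known_mixture = ["E. coli", "S. aureus", "B. subtilis", "L. monocytogenes"]
kit_a_detected = ["E. coli", "S. aureus", "B. subtilis", "L. monocytogenes"]
kit_b_detected = ["E. coli", "S. aureus"]

recall_a = taxonomic_recall(known_mixture, kit_a_detected)  # full recovery
recall_b = taxonomic_recall(known_mixture, kit_b_detected)  # partial recovery
```

A kit with lower recall against the standard, like kit B in the study, is failing to detect some members of the known community.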
Procedia PDF Downloads: 170

56. Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications
Authors: Annika J. Meyer, Tom Piechotta
Abstract:
Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensuring redundant communication systems when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight, and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology, a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of the protocol’s low power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg), specifically designed for search and rescue and area monitoring missions, is explored. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights, as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios: short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This use case also dramatically reduces one of the major drawbacks of LoRa communication systems, namely that a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm. 
Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. Such a staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, due to the close partnership with the Fire Department, the real-world applicability of the communication system is assured. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. The success of this paper could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations
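The short-range versus long-range spreading factor choice described above has a simple quantitative basis: each increment of the LoRa spreading factor doubles the symbol duration (T_sym = 2^SF / BW), trading data rate for range and robustness. A minimal sketch at the common 125 kHz bandwidth:

```python
def lora_symbol_time_ms(spreading_factor, bandwidth_hz=125_000):
    """LoRa symbol duration in milliseconds: T_sym = 2**SF / BW."""
    return (2 ** spreading_factor) / bandwidth_hz * 1000.0

# Illustrative mapping onto the paper's two scenarios
t_sf7 = lora_symbol_time_ms(7)    # low SF: short-range, fast UAV-to-UAV links
t_sf12 = lora_symbol_time_ms(12)  # high SF: long-range, slow UAV-to-ground commands
```

Moving from SF7 to SF12 stretches each symbol 32-fold (about 1 ms to about 33 ms at 125 kHz), which is why the long-range setting suits infrequent ground commands rather than high-rate inter-UAV traffic.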
Procedia PDF Downloads: 44

55. Delivering Safer Clinical Trials: Using Electronic Healthcare Records (EHR) to Monitor, Detect and Report Adverse Events in Clinical Trials
Authors: Claire Williams
Abstract:
Randomised controlled trials (RCTs) of efficacy are still perceived as the gold standard for the generation of evidence, and whilst advances in data collection methods are well developed, this progress has not been matched for the reporting of adverse events (AEs). Assessment and reporting of AEs in clinical trials are fraught with human error and inefficiency and are extremely time and resource intensive. Recent research into the quality of AE reporting during clinical trials concluded that it is substandard and inconsistent. Investigators commonly send sponsors reports that are incorrectly categorised and lacking in critical information, which can complicate the detection of valid safety signals. In our presentation, we will describe an electronic data capture system which has been designed to support clinical trial processes by reducing the resource burden on investigators, improving overall trial efficiencies, and making trials safer for patients. This proprietary technology was developed using expertise proven in the delivery of the world’s first prospective, phase 3b real-world trial, ‘The Salford Lung Study’, which enabled robust safety monitoring and reporting processes to be accomplished by the remote monitoring of patients’ EHRs. This technology enables safety alerts that are pre-defined by the protocol to be detected from the data extracted directly from the patient’s EHR. Based on study-specific criteria, which are created from the standard definition of a serious adverse event (SAE) and the safety profile of the medicinal product, the system raises the safety alert to the investigator or study team. Each safety alert requires a clinical review by the investigator or delegate; examples of the types of alerts include hospital admission, death, hepatotoxicity, neutropenia, and acute renal failure. 
This is achieved in near real-time; safety alerts can be reviewed along with any additional information available to determine whether they meet the protocol-defined criteria for reporting or withdrawal. This active surveillance technology helps reduce the resource burden of the more traditional methods of AE detection for investigators and study teams and can help eliminate reporting bias. Integration of multiple healthcare data sources enables much more complete and accurate safety data to be collected as part of a trial and can also provide an opportunity to evaluate a drug’s safety profile long-term, in post-trial follow-up. By utilising this robust and proven method for safety monitoring and reporting, much higher-risk patient cohorts can be enrolled in trials, thus promoting inclusivity and diversity. Broadening eligibility criteria and adopting more inclusive recruitment practices in the later stages of drug development will increase the ability to understand the medicinal product’s risk-benefit profile across the patient population that is likely to use the product in clinical practice. Furthermore, this ground-breaking approach to AE detection not only provides sponsors with better-quality safety data for their products but also reduces the resource burden on the investigator and study teams. With the data taken directly from the source, trial costs are reduced, minimal data validation is required, and near real-time reporting enables safety concerns and signals to be detected more quickly than in a traditional RCT.
Keywords: more comprehensive and accurate safety data, near real-time safety alerts, reduced resource burden, safer trials
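A protocol-defined safety alert of the kind described above amounts to a rule evaluated against coded EHR events. The event names and laboratory thresholds below are hypothetical illustrations, not the actual criteria used by the system:

```python
# Hypothetical protocol-defined alert events (illustrative only)
ALERT_EVENTS = {"hospital_admission", "death", "acute_renal_failure"}

def detect_safety_alerts(ehr_events):
    """Flag EHR events matching protocol-defined alert criteria.
    ehr_events: list of dicts like {"patient": str, "event": str, "value": float or None}."""
    alerts = []
    for e in ehr_events:
        if e["event"] in ALERT_EVENTS:
            alerts.append(e)
        # Illustrative lab-based rules; the thresholds are assumptions, not protocol values:
        elif e["event"] == "alt_iu_per_l" and e["value"] is not None and e["value"] > 120:
            alerts.append(e)  # possible hepatotoxicity
        elif e["event"] == "neutrophils_1e9_per_l" and e["value"] is not None and e["value"] < 1.0:
            alerts.append(e)  # possible neutropenia
    return alerts

events = [
    {"patient": "001", "event": "hospital_admission", "value": None},
    {"patient": "002", "event": "alt_iu_per_l", "value": 150.0},
    {"patient": "003", "event": "alt_iu_per_l", "value": 30.0},
]
flagged = detect_safety_alerts(events)
```

Each flagged event would then go to the investigator or delegate for the clinical review step described above, rather than being reported automatically.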
Procedia PDF Downloads: 86

54. In-situ Mental Health Simulation with Airline Pilot Observation of Human Factors
Authors: Mumtaz Mooncey, Alexander Jolly, Megan Fisher, Kerry Robinson, Robert Lloyd, Dave Fielding
Abstract:
Introduction: The integration of the WingFactors in-situ simulation programme has transformed the education landscape at the Whittington Health NHS Trust. To date, there have been a total of 90 simulations - 19 aimed at Paediatric trainees, including 2 Child and Adolescent Mental Health (CAMHS) scenarios. The opportunity for joint debriefs provided by clinical faculty and airline pilots has created an exciting new avenue to explore human factors within psychiatry. Through the use of real clinical environments and primed actors, the benefits of high-fidelity simulation and interdisciplinary, interprofessional learning have been highlighted. The use of in-situ simulation within psychiatry is a newly emerging concept, and its success here has been recognised by unanimously positive feedback from participants and acknowledgement through nomination for the Health Service Journal (HSJ) Award (Best Education Programme 2021). Methodology: The first CAMHS simulation featured a collapsed patient in the toilet with a ligature tied around her neck, accompanied by a distressed parent. This required participants to consider the emergency physical management of the case, alongside helping to contain the mother and maintaining situational awareness when transferring the patient to an appropriate clinical area. The second simulation was based on a 17-year-old girl attempting to leave the ward after presenting with an overdose, posing potential risk to herself. The safe learning environment enabled participants to explore techniques to engage the young person, understand their concerns, and consider the involvement of other members of the multidisciplinary team. The scenarios were followed by an immediate ‘hot’ debrief, combining technical feedback with Human Factors feedback from uniformed airline pilots and clinicians. The importance of psychological safety was paramount, encouraging open and honest contributions from all participants. 
Key learning points were summarized into written documents and circulated. Findings: The in-situ simulations demonstrated the need for practical changes both in the Emergency Department and on the Paediatric ward. The presence of airline pilots provided a novel way to debrief on Human Factors. The following key themes were identified:
- Team briefing ('Golden 5 minutes'): taking a few moments to establish experience, initial roles and strategies amongst the team can reduce the need for conversations in front of a distressed patient or anxious relative.
- Use of checklists/guidelines: principles associated with checklist usage (control of pace, rigor, team situational awareness), instead of reliance on accurate memory recall when under pressure.
- Read-back: immediate repetition of safety-critical instructions (e.g. drug/dosage) to mitigate the risks associated with miscommunication.
- Distraction management: balancing the risk of losing a team member to manage a distressed relative against the impact on the care of the young person.
- Task allocation: the value of implementing 'The 5 A's' (Availability, Address, Allocate, Ask, Advise) for effective task allocation.
Conclusion: 100% of participants have requested more simulation training. Involvement of airline pilots has led to a shift in hospital culture, bringing to the forefront the value of Human Factors focused training and multidisciplinary simulation. This has been of significant value not only in physical health but also in mental health simulation.
Keywords: human factors, in-situ simulation, inter-professional, multidisciplinary
Procedia PDF Downloads 109
53 Development of Portable Hybrid Renewable Energy System for Sustainable Electricity Supply to Rural Communities in Nigeria
Authors: Abdulkarim Nasir, Alhassan T. Yahaya, Hauwa T. Abdulkarim, Abdussalam El-Suleiman, Yakubu K. Abubakar
Abstract:
The need for sustainable and reliable electricity supply in rural communities of Nigeria remains a pressing issue, given the country's vast energy deficit and the significant number of inhabitants lacking access to electricity. This research focuses on the development of a portable hybrid renewable energy system designed to provide a sustainable and efficient electricity supply to these underserved regions. The proposed system integrates multiple renewable energy sources, specifically solar and wind, to harness the abundant natural resources available in Nigeria. The design and development process involves the selection and optimization of components such as photovoltaic panels, wind turbines, energy storage units (batteries), and power management systems. These components are chosen based on their suitability for rural environments, cost-effectiveness, and ease of maintenance. The hybrid system is designed to be portable, allowing for easy transportation and deployment in remote locations with limited infrastructure. Key to the system's effectiveness is its hybrid nature, which ensures continuous power supply by compensating for the intermittent nature of individual renewable sources. Solar energy is harnessed during the day, while wind energy is captured whenever wind conditions are favourable, thus ensuring a more stable and reliable energy output. Energy storage units are critical in this setup, storing excess energy generated during peak production times and supplying power during periods of low renewable generation. Feasibility and simulation studies were carried out, assessing the solar irradiance, wind speed patterns, and energy consumption needs of rural communities. The simulation results inform the optimization of the system's design to maximize energy efficiency and reliability. This paper presents the development and evaluation of a 4 kW standalone hybrid system combining wind and solar power.
The portable device measures approximately 8 feet 5 inches in width, 8 feet 4 inches in depth, and around 38 feet in height. It includes four solar panels with a capacity of 120 watts each, a 1.5 kW wind turbine, a solar charge controller, remote power storage, batteries, and battery control mechanisms. Designed to operate independently of the grid, this hybrid device offers versatility for use on highways and in various other applications. The paper also presents a summary and characterization of the device, along with photovoltaic data collected in Nigeria during the month of April. The construction plan for the hybrid energy tower is outlined, which involves combining a vertical-axis wind turbine with solar panels to harness both wind and solar energy. Positioned between the roadway divider and automobiles, the tower takes advantage of the air velocity generated by passing vehicles. The solar panels are strategically mounted to deflect air toward the turbine while generating energy. Generators and gear systems attached to the turbine shaft enable power generation, offering a portable solution to energy challenges in Nigerian communities. The study also addresses the economic feasibility of the system, considering the initial investment costs, maintenance, and potential savings from reduced fossil fuel use. A comparative analysis with traditional energy supply methods highlights the long-term benefits and sustainability of the hybrid system.
Keywords: renewable energy, solar panel, wind turbine, hybrid system, generator
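As a back-of-envelope check on the sizing described above, the following sketch estimates the daily energy contribution of the four 120 W panels and the 1.5 kW turbine. The peak-sun hours and wind capacity factor are illustrative assumptions of ours, not values measured in the study:

```python
# Rough daily energy estimate for the hybrid system described above.
# Panel and turbine ratings come from the abstract; the peak-sun hours
# and wind capacity factor are illustrative assumptions only.

PANEL_RATING_W = 120       # each of the four solar panels
NUM_PANELS = 4
TURBINE_RATING_W = 1500    # 1.5 kW wind turbine

PEAK_SUN_HOURS = 5.5       # assumed daily average insolation
WIND_CAPACITY_FACTOR = 0.20  # assumed fraction of rated output over 24 h

def daily_energy_wh(sun_hours=PEAK_SUN_HOURS, wind_cf=WIND_CAPACITY_FACTOR):
    """Return (solar_wh, wind_wh, total_wh) for one day."""
    solar_wh = NUM_PANELS * PANEL_RATING_W * sun_hours
    wind_wh = TURBINE_RATING_W * wind_cf * 24
    return solar_wh, wind_wh, solar_wh + wind_wh

solar, wind, total = daily_energy_wh()
print(f"solar: {solar:.0f} Wh, wind: {wind:.0f} Wh, total: {total:.0f} Wh/day")
```

Under these assumptions the turbine dominates the daily budget, which is consistent with the abstract's emphasis on exploiting vehicle-induced air velocity.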
Procedia PDF Downloads 44
52 Burkholderia Cepacia ST 767 Causing a Three Years Nosocomial Outbreak in a Hemodialysis Unit
Authors: Gousilin Leandra Rocha Da Silva, Stéfani T. A. Dantas, Bruna F. Rossi, Erika R. Bonsaglia, Ivana G. Castilho, Terue Sadatsune, Ary Fernandes Júnior, Vera l. M. Rall
Abstract:
Kidney failure causes decreased diuresis and the accumulation of nitrogenous substances in the body. To increase patient survival, hemodialysis is used as a partial substitute for renal function. However, contamination of the water used in this treatment, causing bacteremia in patients, is a worldwide concern. The Burkholderia cepacia complex (Bcc), a group of bacteria with more than 20 species, is frequently isolated from hemodialysis water samples and comprises opportunistic bacteria that affect immunosuppressed patients through a wide variety of virulence factors; innate resistance to several antimicrobial agents further contributes to their persistence in the hospital environment and to pathogenesis in the host. The objective of the present work was to molecularly and phenotypically characterize Bcc isolates collected from the water and dialysate of the Hemodialysis Unit and from the blood of patients at a Public Hospital in Botucatu, São Paulo, Brazil, between 2019 and 2021. We used 33 Bcc isolates previously obtained from blood cultures of patients with bacteremia undergoing hemodialysis treatment (2019-2021) and 24 isolates obtained from water and dialysate samples in the Hemodialysis Unit over the same period. The recA gene was sequenced to identify the specific species within the Bcc group. All isolates were tested for the presence of genes that encode virulence factors, namely cblA, esmR, zmpA and zmpB. Considering the epidemiology of the outbreak, the Bcc isolates were molecularly characterized by Multilocus Sequence Typing (MLST) and by pulsed-field gel electrophoresis (PFGE). The verification and quantification of biofilm in a polystyrene microplate were performed by submitting the isolates to different incubation temperatures (20°C, the average water temperature, and 35°C, the optimal temperature for growth of the group).
The antibiogram was performed with disc diffusion tests on agar, using discs impregnated with cefepime (30 µg), ceftazidime (30 µg), ciprofloxacin (5 µg), gentamicin (10 µg), imipenem (10 µg), amikacin (30 µg), sulfamethoxazole/trimethoprim (23.75/1.25 µg) and ampicillin/sulbactam (10/10 µg). The zmpB gene was identified in all isolates and zmpA in 96.5% of them, while none presented the cblA or esmR genes. The antibiogram of the 33 human isolates indicated that all were resistant to gentamicin, colistin, ampicillin/sulbactam and imipenem. Sixteen (48.5%) isolates were resistant to amikacin, and lower rates of resistance were observed for meropenem, ceftazidime, cefepime, ciprofloxacin and piperacillin/tazobactam (6.1%). All isolates were sensitive to sulfamethoxazole/trimethoprim, levofloxacin and tigecycline. As for the water isolates, resistance was observed only to gentamicin (34.8%) and imipenem (17.4%). According to the PFGE results, all isolates obtained from humans and water belonged to the same pulsotype (1), which was identified by recA sequencing as B. cepacia, belonging to sequence type ST-767. The observation of a single pulsotype over three years shows the persistence of this isolate in the pipeline, contaminating patients undergoing hemodialysis despite the routine disinfection of water with peracetic acid. This persistence is probably due to the production of biofilm, which protects bacteria from disinfectants. Making this scenario more critical, several isolates proved to be multidrug-resistant (resistant to at least three groups of antimicrobials), making patient care even more difficult.
Keywords: hemodialysis, Burkholderia cepacia, PFGE, MLST, multidrug resistance
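The multidrug-resistance criterion used above (resistance to at least three groups of antimicrobials) can be sketched as a simple classification. The class groupings and the sample profile below are illustrative, not the study's data:

```python
# Sketch of the multidrug-resistance (MDR) call described above: an
# isolate is MDR if it is resistant to agents from at least three
# antimicrobial classes. Groupings here are illustrative assumptions.

ANTIBIOTIC_CLASS = {
    "gentamicin": "aminoglycoside",
    "amikacin": "aminoglycoside",
    "imipenem": "carbapenem",
    "meropenem": "carbapenem",
    "ceftazidime": "cephalosporin",
    "cefepime": "cephalosporin",
    "ciprofloxacin": "fluoroquinolone",
    "colistin": "polymyxin",
    "ampicillin/sulbactam": "penicillin combination",
}

def is_mdr(resistant_to):
    """True if the resistance profile spans >= 3 antimicrobial classes."""
    classes = {ANTIBIOTIC_CLASS[agent] for agent in resistant_to}
    return len(classes) >= 3

# Hypothetical profile matching the pattern reported for the human isolates:
profile = ["gentamicin", "colistin", "ampicillin/sulbactam", "imipenem"]
print(is_mdr(profile))
```

Note that two resistances within the same class (e.g. imipenem and meropenem) count only once toward the threshold.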
Procedia PDF Downloads 101
51 Case Report: Peripartum Cardiomyopathy, a Rare but Fatal Condition in Pregnancy and Puerperium
Authors: Sadaf Abbas, HimGauri Sabnis
Abstract:
Introduction: Peripartum cardiomyopathy is a rare but potentially life-threatening condition that presents as heart failure during the last month of pregnancy or within five months postpartum. The incidence of peripartum cardiomyopathy ranges from 1 in 1300 to 1 in 15,000 pregnancies. Risk factors include multiparity, advanced maternal age, multiple pregnancies, pre-eclampsia, and chronic hypertension. Study: A 30-year-old Para 3+0 presented to the Emergency Department of St Mary's Hospital, Isle of Wight, on the seventh day postpartum with acute shortness of breath (SOB), chest pain, cough, and a temperature of 38°C. Her risk factors were smoking and class II obesity (BMI of 40.62). The patient had mild pre-eclampsia in the last pregnancy and was on labetalol and aspirin during the antenatal period, which were stopped postnatally. There was also a history of pre-eclampsia and haemolysis, elevated liver enzymes, and low platelets (HELLP) syndrome in previous pregnancies, which led to preterm delivery at 35 weeks in the second pregnancy; the first baby was stillborn at 24 weeks. On assessment, there was a National Early Warning Score (NEWS) of 3, persistent tachycardia, and mild crepitations in the lungs. Initial investigations revealed an enlarged heart on chest X-ray, and a CT pulmonary angiogram indicated bilateral basal pulmonary congestion without pulmonary embolism, suggesting fluid overload. Laboratory results showed elevated CRP and initially normal troponin levels, which later increased, indicating myocardial involvement. Echocardiography revealed a severely dilated left ventricle with an ejection fraction (EF) of 31%, consistent with severely impaired systolic function. The cardiology team reviewed the patient and admitted her to the Coronary Care Unit.
As signs and symptoms were suggestive of fluid overload and congestive cardiac failure, management consisted of diuretics, beta-blockers, angiotensin-converting enzyme (ACE) inhibitors, proton pump inhibitors, and supportive care. During admission, there were complications such as acute kidney injury, from which the patient recovered well. Chest pain resolved following the treatment. After eight days of admission, there was an improvement in symptoms, and the patient was discharged home with a plan for cardiac MRI and genetic testing due to a family history of sudden cardiac death. Regular appointments have been made with the cardiology team to follow up on the symptoms. Since discharge, the patient has made a good recovery. A cardiac MRI was done, which showed severely impaired left ventricular function with an ejection fraction (EF) of 38%, mild left ventricular dilatation, and no evidence of previous infarction; the overall appearance was of non-ischemic dilated cardiomyopathy. The main challenge at the time of admission was the non-availability of a cardiac radiology team, so the definitive diagnosis was delayed. The long-term implications include the risk of recurrence, chronic heart failure, and, consequently, an effect on quality of life. Therefore, regular follow-up is critical in the patient's management. Conclusions: Peripartum cardiomyopathy is a cardiovascular disease whose causes remain unknown and which, in some cases, is uncontrolled. Raising awareness of the symptoms and management of this complication will reduce morbidity and mortality rates as well as the length of hospital stay.
Keywords: cardiomyopathy, cardiomegaly, pregnancy, puerperium
Procedia PDF Downloads 36
50 Health and Climate Changes: "Ippocrate" a New Alert System to Monitor and Identify High Risk
Authors: A. Calabrese, V. F. Uricchio, D. di Noia, S. Favale, C. Caiati, G. P. Maggi, G. Donvito, D. Diacono, S. Tangaro, A. Italiano, E. Riezzo, M. Zippitelli, M. Toriello, E. Celiberti, D. Festa, A. Colaianni
Abstract:
Climate change has a severe impact on human health. There is a vast literature demonstrating that temperature increase is causally related to cardiovascular problems and represents a high risk for human health, but few studies propose a solution. In this work, we study how climate influences human health parameters through the analysis of climatic conditions in an area of the Apulia Region: the Municipality of Capurso. At the same time, the medical personnel involved identified a set of variables useful for defining an index describing health condition. These scientific studies are the basis of an innovative alert system, IPPOCRATE, whose aim is to assess climate risk and share information with the population at risk to support prevention and mitigation actions. IPPOCRATE is an e-health system designed to provide technological support to the analysis of health risk related to climate and to provide tools for the prevention and management of critical events. It is the first integrated system for the prevention of human risk caused by climate change. IPPOCRATE calculates risk by weighting meteorological data with the vulnerability of monitored subjects, and uses mobile and cloud technologies to acquire and share information over different data channels. It is composed of four components. The first, the Multichannel Hub, is the ICT infrastructure used to feed the IPPOCRATE cloud with different types of data coming from remote monitoring devices or imported from meteorological databases. Such data are ingested, transformed and elaborated in order to be dispatched towards the mobile app and VoIP phone systems. The IPPOCRATE Multichannel Hub uses open communication protocols to create a set of APIs useful for interfacing IPPOCRATE with third-party applications. Internally, it uses a non-relational paradigm to create a flexible and highly scalable database.
The second component comprises the WeHeart wearable device and its Smart Application. WeHeart is equipped with sensors designed to measure the following biometric variables: heart rate, systolic and diastolic blood pressure, blood oxygen saturation, body temperature, and blood glucose for diabetic subjects. WeHeart is designed to be easy to use and non-invasive. For data acquisition, users need only wear it and connect it to the Smart Application via the Bluetooth protocol. The third component, EasyBox, was designed to take advantage of new technologies related to e-health care and allows users to fully exploit all IPPOCRATE features. Its name reveals its purpose: a container for the various devices that may be included depending on user needs. The fourth component, the Territorial Registry, is the IPPOCRATE web module reserved for medical personnel for monitoring, research and analysis activities. The Territorial Registry gives access to all information gathered by IPPOCRATE through a GIS, in order to execute spatial analyses combining geographical data (climatological information and monitored data) with information regarding the clinical history of users and their personal details. The Territorial Registry was designed for different types of users: control rooms managed by wide-area health facilities, single health care centers, or single doctors. It manages this hierarchy by diversifying access to system functionalities. IPPOCRATE is the first e-health system focused on climate risk prevention.
Keywords: climate change, health risk, new technological system
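The abstract says IPPOCRATE calculates risk by weighting meteorological data with each subject's vulnerability. A minimal sketch of that idea follows; the scaling, the humidity correction, and the alert thresholds are all our assumptions, not the system's actual model:

```python
# Illustrative sketch of vulnerability-weighted climate risk, as the
# abstract describes. Every numeric constant here is an assumption.

def climate_risk(temperature_c: float, humidity_pct: float,
                 vulnerability: float) -> float:
    """Return a 0-1 risk score for one monitored subject.

    vulnerability: 0 (healthy) to 1 (highly vulnerable), assumed to be
    set by medical staff from the subject's clinical history.
    """
    # Simple heat-stress proxy: normalised excess temperature over 20°C,
    # inflated by up to 50% at full humidity (assumed scalings).
    heat_load = max(0.0, (temperature_c - 20.0) / 20.0)
    heat_load *= 1.0 + 0.5 * (humidity_pct / 100.0)
    return min(1.0, heat_load * vulnerability)

def alert_level(risk: float) -> str:
    # Assumed thresholds for dispatching alerts via the mobile app / VoIP.
    if risk >= 0.7:
        return "red"
    if risk >= 0.4:
        return "yellow"
    return "green"

# A hot, humid day for a highly vulnerable cardiovascular patient:
print(alert_level(climate_risk(38.0, 60.0, 0.9)))
```

The key design point this illustrates is that the same weather produces different alerts for different subjects, which is what distinguishes the system from a plain meteorological warning service.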
Procedia PDF Downloads 869
49 Sinhala Sign Language to Grammatically Correct Sentences using NLP
Authors: Anjalika Fernando, Banuka Athuraliya
Abstract:
This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction.
Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT
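The two-stage hand-off described above, sign recognition followed by grammar correction, can be shown structurally with stubs standing in for the trained networks. The vocabularies and mappings below are entirely invented for illustration:

```python
# Structural sketch of the two-stage SSL pipeline described above.
# The real system uses an LSTM sign classifier and an LSTM-based NMT
# grammar corrector; here both stages are stubbed with lookup tables
# purely to show the data flow. All sign IDs and glosses are invented.

# Stage 1: sign recognition -- sequence of gesture IDs -> gloss tokens.
SIGN_TO_GLOSS = {"sign_001": "I", "sign_042": "school", "sign_007": "go"}

def recognise_signs(sign_ids):
    # Stand-in for the LSTM classifier (reported accuracy: 94%).
    return [SIGN_TO_GLOSS[s] for s in sign_ids]

# Stage 2: grammar correction -- gloss sequence -> grammatical sentence.
GLOSS_TO_SENTENCE = {
    ("I", "school", "go"): "I go to school.",
}

def correct_grammar(glosses):
    # Stand-in for the encoder-decoder NMT model (reported accuracy: 98%),
    # which reorders glosses and inserts function words.
    return GLOSS_TO_SENTENCE[tuple(glosses)]

def translate(sign_ids):
    return correct_grammar(recognise_signs(sign_ids))

print(translate(["sign_001", "sign_042", "sign_007"]))
```

In the actual system each stub would be a trained network; the point here is only the hand-off from recognised glosses to the grammar-correcting stage, which is the pipeline structure the paper proposes.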
Procedia PDF Downloads 107
48 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis
Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos
Abstract:
The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. 
This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the outbreak of the Russia-Ukraine conflict, which initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters need to apply all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.
Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy
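The three evaluation measures named above have standard definitions, sketched below in plain Python. The definitions follow the usual textbook forms; the sample price series is invented:

```python
# The evaluation metrics named above (MAPE, sMAPE, DoC) in their usual
# forms. The sample day-ahead prices are hypothetical.

def mape(actual, forecast):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    """Symmetric MAPE (%): bounded, and better behaved near zero prices."""
    return 100.0 * sum(2.0 * abs(f - a) / (abs(a) + abs(f))
                       for a, f in zip(actual, forecast)) / len(actual)

def direction_of_change(actual, forecast):
    """Share of steps where the forecast predicts the correct movement
    relative to the previous actual price."""
    hits = sum((a1 - a0) * (f1 - a0) > 0
               for a0, a1, f1 in zip(actual, actual[1:], forecast[1:]))
    return hits / (len(actual) - 1)

prices   = [100.0, 110.0, 105.0, 120.0]   # hypothetical prices (EUR/MWh)
forecast = [102.0, 108.0, 107.0, 118.0]
print(round(mape(prices, forecast), 2),
      round(smape(prices, forecast), 2),
      round(direction_of_change(prices, forecast), 2))
```

Note that DoC measures something MAPE and sMAPE cannot: a forecast can have low percentage error yet consistently miss the direction of price movements, which matters for trading decisions in the DAM.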
Procedia PDF Downloads 11
47 The Ecuador Healthy Food Environment Policy Index (Food-EPI)
Authors: Samuel Escandón, María J. Peñaherrera-Vélez, Signe Vargas-Rosvik, Carlos Jerves Córdova, Ximena Vélez-Calvo, Angélica Ochoa-Avilés
Abstract:
Overweight and obesity are considered risk factors in childhood for developing nutrition-related non-communicable diseases (NCDs), such as diabetes, cardiovascular diseases, and cancer. In Ecuador, 35.4% of 5- to 11-year-olds and 29.6% of 12- to 19-year-olds are overweight or obese. Globally, unhealthy food environments characterized by high consumption of processed/ultra-processed food and rapid urbanization are highly related to the increasing nutrition-related non-communicable diseases. The evidence shows that in low- and middle-income countries (LMICs), fiscal policies and regulatory measures significantly reduce unhealthy food environments, achieving substantial advances in health. However, in some LMICs, little is known about the impact of governments' action to implement healthy food-environment policies. This study aimed to generate evidence on the state of implementation of public policy focused on food environments for the prevention of overweight and obesity in children and adolescents in Ecuador compared to global best practices and to target key recommendations for reinforcing the current strategies. After adapting the INFORMAS' Healthy Food Environment Policy Index (Food‐EPI) to the Ecuadorian context, the Policy and Infrastructure support components were assessed. Individual online interviews were performed using fifty-one indicators to analyze the level of implementation of policies directly or indirectly related to preventing overweight and obesity in children and adolescents compared to international best practices. Additionally, a participatory workshop was conducted to identify the critical indicators and generate recommendations to reinforce or improve the political action around them. In total, 17 government and non-government experts were consulted. 
Of the 51 assessed indicators, only the indicator corresponding to nutritional information and ingredient labelling registered an implementation level higher than 60% (67%) compared to the best international practices. Among the 17 indicators determined as priorities by the participants, those corresponding to the provision of local products in school meals and the limitation of the promotion of unhealthy products in traditional and digital media had the lowest levels of implementation (34% and 11%, respectively) compared to global best practices. The participants identified more barriers (e.g., lack of continuity of effective policies across government administrations) than facilitators (e.g., growing interest from the Ministry of Environment because of the environmental impact of eating behavior) for Ecuador to move closer to the best international practices. Finally, within the participants' recommendations, we highlight the need for policy-evaluation systems, information transparency on the impact of the policies, the transformation of successful strategies into laws or regulations to make them mandatory, and the regulation of power and influence from the food industry (conflicts of interest). Actions focused on promoting a more active role for society in the stages of policy formation, and on achieving more articulated actions between the different government levels and institutions for implementing policy, are necessary to generate a noteworthy impact on preventing overweight and obesity in children and adolescents. Establishing systems for the internal evaluation of existing strategies to strengthen successful actions, creating policies to fill existing gaps, and reforming policies that do not generate significant impact should be priorities for the Ecuadorian government to improve the country's food environments.
Keywords: children and adolescents, Food-EPI, food policies, healthy food environment
Procedia PDF Downloads 65
46 Pulmonary Complication of Chronic Liver Disease and the Challenges Identifying and Managing Three Patients
Authors: Aidan Ryan, Nahima Miah, Sahaj Kaur, Imogen Sutherland, Mohamed Saleh
Abstract:
Pulmonary symptoms are a common presentation to the emergency department. Due to a lack of understanding of the underlying pathophysiology, chronic liver disease is not often considered a cause of dyspnea. We present three patients who were admitted with significant respiratory distress secondary to hepatopulmonary syndrome, portopulmonary hypertension, and hepatic hydrothorax. The first is a 27-year-old male with a 6-month history of progressive dyspnea. The patient developed severe type 1 respiratory failure with a PaO₂ of 6.3 kPa and was escalated to critical care, where he was managed with non-invasive ventilation to maintain oxygen saturation. He had an agitated saline contrast echocardiogram, which showed the presence of a possible shunt. A CT angiogram revealed significant liver cirrhosis, portal hypertension, and large para-esophageal varices. Ultrasound of the abdomen showed a coarse liver echo pattern and an enlarged spleen. Along with these imaging findings, his biochemistry demonstrated impaired synthetic liver function, with an elevated international normalized ratio (INR) of 1.4 and hypoalbuminaemia of 28 g/L. The patient was then transferred to a tertiary center for further management. Further investigations confirmed a shunt of 56%, and liver biopsy confirmed cirrhosis suggestive of alpha-1-antitrypsin deficiency. The findings were consistent with a diagnosis of hepatopulmonary syndrome, and the patient is awaiting a liver transplant. The second patient is a 56-year-old male with a 12-month history of worsening dyspnoea, jaundice, and confusion. His medical history included liver cirrhosis, portal hypertension, and grade 1 oesophageal varices secondary to significant alcohol excess. On admission, he developed type 1 respiratory failure with a PaO₂ of 6.8 kPa requiring 10 L of oxygen. CT pulmonary angiogram was negative for pulmonary embolism but showed evidence of chronic pulmonary hypertension, liver cirrhosis, and portal hypertension.
An echocardiogram revealed a grossly dilated right heart with reduced function, pulmonary and tricuspid regurgitation, and pulmonary artery pressures estimated at 78 mmHg. His biochemical markers showed impaired synthetic liver function, with an INR of 3.2 and albumin of 29 g/L, along with a raised bilirubin of 148 mg/dL. During his long admission, he was managed with diuretics with little improvement. After three weeks, he was diagnosed with portopulmonary hypertension and was commenced on terlipressin. This resulted in successful weaning off oxygen, and he was discharged home. The third patient is a 61-year-old male who presented to the local ambulatory care unit for therapeutic paracentesis on a background of decompensated liver cirrhosis. On presenting, he complained of a 2-day history of worsening dyspnoea and a productive cough. Chest X-ray showed a large pleural effusion, increasing in size over the previous eight months, and his abdomen was visibly distended with ascitic fluid. Unfortunately, the patient deteriorated, developing a larger effusion along with an increase in oxygen demand, and passed away. In the absence of underlying cardiorespiratory disease, and given a persistent pleural effusion on a background of decompensated cirrhosis, he was diagnosed with hepatic hydrothorax. While each patient presented with dyspnoea, the cause and underlying pathophysiology differed significantly from case to case. By describing these complications, we hope to improve awareness and aid prompt and accurate diagnosis, which is vital for improving outcomes.
Keywords: dyspnea, hepatic hydrothorax, hepatopulmonary syndrome, portopulmonary syndrome
45 3D Non-Linear Analyses by Using Finite Element Method about the Prediction of the Cracking in Post-Tensioned Dapped-End Beams
Authors: Jatziri Y. Moreno-Martínez, Arturo Galván, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado
Abstract:
In recent years, for the elevated viaducts in Mexico City, a construction system based on precast/pre-stressed concrete elements has been used, in which the bridge girders are divided into two parts by imposing a hinged support in sections where the bending moments originated by the gravity loads in a continuous beam are minimal. Precast concrete girders with dapped ends are a representative example of behavior with complex stress configurations that make them more vulnerable to cracking due to flexure-shear interaction. The design procedures for the ends of dapped girders are well established and are based primarily on experimental tests performed for different configurations of reinforcement. The critical failure modes that can govern the design have been identified, and for each of them, methods for computing the reinforcing steel needed to achieve adequate safety against failure have been proposed. Nevertheless, the design recommendations do not include procedures for controlling diagonal cracking at the re-entrant corner under service loading. These cracks could cause water penetration and degradation through corrosion of the steel reinforcement. The lack of visual access to the area makes it difficult to detect this damage and take timely corrective action. Three-dimensional non-linear numerical models based on the finite element method were developed to study the cracking at the re-entrant corner of dapped-end beams, using the software package ANSYS v. 11.0. The cracking was numerically simulated using the smeared crack approach. The concrete structure was modeled with three-dimensional SOLID65 solid elements, capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The longitudinal post-tension was modeled using LINK8 elements with multilinear isotropic hardening behavior based on von Mises plasticity.
The reinforcement was introduced with a smeared approach. The numerical models were calibrated using experimental tests carried out at the Instituto de Ingeniería, Universidad Nacional Autónoma de México. These numerical models reproduced the characteristics of the specimens: a typical solution based on vertical stirrups (hangers) and on vertical and horizontal hoops, with post-tensioned steel that contributed 74% of the flexural resistance. The post-tension is provided by four steel wires of 5/8'' (16 mm) diameter. Each wire was tensioned to 147 kN and induced an average compressive stress of 4.90 MPa on the concrete section of the dapped end. The loading protocol consisted of applying symmetrical loading to reach the service load (180 kN). Given the good correlation between the experimental and numerical models, additional numerical models with different percentages of post-tension were proposed in order to find out how much the post-tension influences the appearance of cracking at the re-entrant corner of dapped-end beams. It was concluded that increasing the percentage of post-tension decreases the displacements, and the cracking at the re-entrant corner takes longer to appear. The authors thank the Universidad de Guanajuato, Campus Celaya-Salvatierra, and acknowledge the financial support of PRODEP-SEP (UGTO-PTC-460) of the Mexican government. The first author also thanks the Instituto de Ingeniería, Universidad Nacional Autónoma de México.
Keywords: concrete dapped-end beams, cracking control, finite element analysis, post-tension
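As a quick sanity check on the post-tensioning figures quoted above (four wires each tensioned to 147 kN, producing an average compressive stress of 4.90 MPa), a short sketch can back-calculate the implied concrete section area; note the area is inferred here, not stated in the abstract:

```python
# Back-of-envelope check of the post-tensioning figures quoted above.
# The dapped-end concrete section area is inferred, not given in the abstract.
n_wires = 4
force_per_wire_n = 147_000        # 147 kN per 5/8" (16 mm) wire
sigma_pa = 4.90e6                 # reported average compressive stress

total_force_n = n_wires * force_per_wire_n    # total post-tension force
implied_area_m2 = total_force_n / sigma_pa    # A = F / sigma
print(f"Total post-tension force: {total_force_n / 1e3:.0f} kN")
print(f"Implied dapped-end section area: {implied_area_m2:.3f} m^2")
```

The numbers are internally consistent: 588 kN over roughly 0.12 m² of concrete gives the reported 4.90 MPa.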
44 Towards Better Integration: Qualitative Study on Perceptions of Russian-Speaking Immigrants in Australia
Authors: Oleg Shovkovyy
Abstract:
This research was conducted in response to one of the most pressing questions on the agenda of many public administration offices around the world: "What could be done for better integration and assimilation of immigrants into hosting communities?" In the author's view, the answer could be suggested by immigrants themselves. Immigrants, often 'bogged down in the past' and snared by their own idols and demons, perceive things differently, which, in turn, may result in their inability to integrate smoothly into hosting communities. A brief literature review suggests that the perceptions of immigrants are largely neglected in current research on migrants, which is often based on opinion polls of members of the hosting communities themselves or on superficial data from various research organizations. Even those studies that do include the voices of immigrants are unlikely to shed additional light on the problem, simply because certain things are not said out loud, especially to the authorities in whose hands immigrants' fate lies. In this regard, this qualitative study, conducted by an insider to several Russian-speaking communities, represents a unique opportunity for all stakeholders to look at the question of integration through the eyes of immigrants, from a different perspective, which makes the research findings especially valuable for a better understanding of the problem. The case study employed ethnographic methods of data gathering: approximately 200 Russian-speaking immigrants of the first and second generations were closely observed by the Russian-speaking researcher in their usual settings, over eight months, and at different venues. A number of informal interviews were conducted with 27 key informants with whom the researcher managed to establish a good rapport and who were keen to share their experiences voluntarily. Field notes were taken at 14 locations (study sites) within the Brisbane region of Queensland, Australia.
Moreover, throughout this time the researcher lived in the dwelling of one of the immigrants and was an active participant in the social life (worship, picnics, dinners, weekend schools, concerts, cultural events, social gatherings, etc.) of the observed communities, whose members, to a large extent, belong to various religious lines of the Russian and Protestant Churches. It was found that the majority of immigrants had experienced some discrimination in matters of hiring, employment, and recognition of educational qualifications from their home countries, and simply felt a sort of dislike from society in various everyday situations. Many noted a complete absence of, or very limited, state assistance in terms of employment, training, education, and housing. For instance, the Australian Government Department of Human Services not only fails to stimulate job searching but, on the contrary, encourages immigrants to refuse short-term work and employment. On the other hand, the free courses offered on adaptation and the English language proved to be ineffective and unpopular amongst immigrants. Many interviewees reported overstated requirements for English proficiency and local work experience where these were not critical for the given task or job. Based on the results of this long-term observation, the researcher also ventures to assert the negative, decelerating role of immigrants' communities, particularly religious communities, in the processes of integration and assimilation. The findings suggest that governments should either change current immigration policies in the direction of toughening them or take a more proactive and responsible role in dealing with immigrant-related issues: for instance, increasing assistance and support to all immigrants and paying more attention to, and taking a stake in, managing and organizing the life of immigrants' communities rather than simply leaving it all to chance.
Keywords: Australia, immigration, integration, perceptions
43 Artificial Intelligence Impact on the Australian Government Public Sector
Authors: Jessica Ho
Abstract:
AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks, improving decision-making and efficiency, and is embedded in sensors and automation to save time and eliminate human error in repetitive tasks. Today, we see AI using vast collections of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can also help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as we are unable to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property questions, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility, however, comes with a higher risk and exposure to the health and safety of citizens.
On the other side, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps: "There is an AI for That" in government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, as well as regulatory and legislative frameworks, will need to be re-evaluated against the unique challenges of AI, owing to its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny affecting consumer safety and increasing risks for government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage; if such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several forward-looking and agile AI regulatory approaches for consideration that simultaneously foster innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriately balanced innovation in AI governance.
Keywords: artificial intelligence, machine learning, rules, governance, government
42 Closing down the Loop Holes: How North Korea and Other Bad Actors Manipulate Global Trade in Their Favor
Authors: Leo Byrne, Neil Watts
Abstract:
In the complex and evolving landscape of global trade, maritime sanctions emerge as a critical tool wielded by the international community to curb illegal activities and alter the behavior of non-compliant states and entities. These sanctions, designed to restrict or prohibit trade by sea with sanctioned jurisdictions, entities, or individuals, face continuous challenges due to the sophisticated evasion tactics employed by countries like North Korea. As the Democratic People's Republic of Korea (DPRK) diverts significant resources to circumvent these measures, understanding the nuances of its methodologies becomes imperative for maintaining the integrity of global trade systems. The DPRK, one of the most sanctioned nations globally, has developed an intricate network to facilitate its trade in illicit goods, ensuring that the flow of revenue from designated activities continues unabated. Given its geographic and economic conditions, North Korea predominantly relies on maritime routes, utilizing foreign ports to route its illicit trade. This reliance on the sea is exploited through various sophisticated methods, including the use of front companies, falsification of documentation, commingling of bulk cargoes, and physical alterations to vessels. These tactics enable the DPRK to navigate through the gaps in regulatory frameworks and lax oversight, effectively undermining international sanctions regimes. Maritime sanctions carry significant implications for global trade, imposing heightened risks in the maritime domain. The deceptive practices employed not only by the DPRK but also by other high-risk jurisdictions necessitate a comprehensive understanding of UN targeted sanctions. For stakeholders in the maritime sector, including maritime authorities, vessel owners, shipping companies, flag registries, and financial institutions serving the shipping industry, awareness and compliance are paramount.
Violations can lead to severe consequences, including reputational damage, sanctions, hefty fines, and even imprisonment. To mitigate the risks associated with these deceptive practices, it is crucial for maritime sector stakeholders to employ rigorous due diligence and regulatory compliance screening measures. Effective sanctions compliance serves as a protective shield against legal, financial, and reputational risks, preventing exploitation by international bad actors. This requires not only a deep understanding of the sanctions landscape but also the capability to identify and manage risks through informed decision-making and proactive risk management practices. As the DPRK and other sanctioned entities continue to evolve their sanctions evasion tactics, the international community must enhance its collective efforts to demystify and counter these practices. By leveraging more stringent compliance measures, stakeholders can safeguard against the illicit use of the maritime domain, reinforcing the effectiveness of maritime sanctions as a tool for global security. This paper seeks to dissect North Korea's adaptive strategies in the face of maritime sanctions. By examining up-to-date, geographically and temporally relevant case studies, it aims to shed light on the primary nodes through which Pyongyang evades sanctions and smuggles goods via third-party ports. The goal is to propose multi-level interaction strategies, ranging from governmental interventions to localized enforcement mechanisms, to counteract these evasion tactics.
Keywords: maritime, maritime sanctions, international sanctions, compliance, risk
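The compliance screening described above can be pictured, at its simplest, as matching a vessel's identifiers and recent port calls against designation and risk lists. The sketch below is purely illustrative, not a real compliance tool: the IMO numbers are invented placeholders, and the port names are only examples of the high-risk routing discussed above.

```python
# Illustrative sketch of list-based vessel screening (not a real compliance
# tool). The IMO numbers below are invented placeholders.
DESIGNATED_IMO = {"9001234", "9005678"}   # hypothetical designation list
HIGH_RISK_PORTS = {"Nampo", "Songnim"}    # example DPRK ports

def screen_vessel(imo, port_calls):
    """Return a list of red flags for a vessel's IMO number and port calls."""
    flags = []
    if imo in DESIGNATED_IMO:
        flags.append("vessel is on the designation list")
    risky = HIGH_RISK_PORTS.intersection(port_calls)
    if risky:
        flags.append(f"recent calls at high-risk ports: {sorted(risky)}")
    return flags

print(screen_vessel("9001234", ["Nampo", "Shanghai"]))
```

Real-world screening additionally has to handle the deceptive practices named above (front companies, falsified documents, ship-to-ship transfers), which simple list matching cannot catch on its own.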
41 Stabilizing Additively Manufactured Superalloys at High Temperatures
Authors: Keivan Davami, Michael Munther, Lloyd Hackel
Abstract:
The control of properties and material behavior by thermal-mechanical processing is based on mechanical deformation and annealing according to a precise schedule that produces a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures. However, the mechanism(s) controlling this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns pulse duration) post-processing technique used to extend performance levels and improve the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhance the material's resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax, and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover at high temperatures by mechanisms such as climb and rapid recombination.
Furthermore, precipitates coarsen and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using "cyclic" treatments with sequential LP and annealing steps, the compressive stresses survive, and the microstructure is stable after exposure to temperatures exceeding 0.5Tm for a long period of time. When the laser peening process is combined with annealing, the dislocations formed as a result of LP and the precipitates formed during annealing have a complex interaction that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physics, materials science, and mechanical engineering concepts. This research could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, a mechanistic understanding of the often complex interactions between dislocations, solute atoms, and precipitates during plastic deformation has largely remained scattered in the literature. In this research, the actual mechanisms involved in the novel cyclic LP/annealing processes are elucidated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help validate a novel laser processing technique for high-temperature applications. This will greatly expand the applications of laser peening technology, originally devised only for temperatures lower than half of the melting temperature.
Keywords: laser shock peening, mechanical properties, indentation, high temperature stability
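The 0.5Tm threshold discussed above is a homologous-temperature criterion on the absolute scale. A small sketch of that check follows, assuming a round melting temperature of ~1600 K for a nickel-based superalloy (an illustrative figure, not a value from the study):

```python
# Homologous-temperature sketch: stress relaxation becomes significant for
# T > 0.5 * Tm on the absolute (kelvin) scale. Tm below is an assumed round
# figure for a nickel-based superalloy, not a value from the study.
def homologous_temperature(t_k, tm_k):
    """Return T/Tm with both temperatures in kelvin."""
    return t_k / tm_k

TM_SUPERALLOY_K = 1600.0                 # assumed melting temperature
service_t_k = 900.0 + 273.15             # e.g. 900 degC service exposure
ratio = homologous_temperature(service_t_k, TM_SUPERALLOY_K)
print(f"T/Tm = {ratio:.2f} -> relaxation regime: {ratio > 0.5}")
```

Note the check must use kelvin throughout; a 900 °C exposure sits at roughly 0.73 Tm under this assumption, well inside the relaxation regime the cyclic LP/annealing treatment is designed to survive.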
40 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy
Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar
Abstract:
High incidences of protein-energy malnutrition, as a consequence of low protein intake, are quite prevalent among children in developing countries. Thus, prevention of under-nutrition has emerged as a critical challenge for India's development planners in recent times. The increase in population over the last decade has put greater pressure on existing animal protein sources, but these resources are currently declining due to persistent drought, disease, natural disasters, the high cost of feed, and the low productivity of local breeds; this decline in productivity is most evident in some developing countries. The need of the hour is therefore the efficient utilization of unconventional low-cost animal protein resources. Molluscs, as a group, are regarded as an under-exploited source of health-benefit molecules. Bivalvia is the second largest class of the phylum Mollusca, and annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large bodies of perennial water in the Indian subcontinent and is well accepted as food all over India. Moreover, the ethno-medicinal use of the flesh of Lamellidens among rural people to treat hypertension has been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. The shells were removed, and the flesh was preserved at -20°C until analysis. Tissue homogenate was prepared for proximate studies. Fatty acid and amino acid compositions were analyzed, and vitamin, mineral, and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties.
Ferric reducing antioxidant power (FRAP) and DPPH antioxidant assays were performed, and anti-hypertensive property was evaluated by an angiotensin-converting enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%), and reducing sugar (4.75±0.07%), but little fat (1.02±0.20%). Moisture content is quite high, but ash content is very low. Phospholipid content is significantly high (19.43%). The lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value, and trace elements are present in substantial amounts. A comparative study of proximate nutrients among Labeo rohita, Lamellidens, and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity, and protein solubility. Progressive antioxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate offers opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.
Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate
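The DPPH assay mentioned above is conventionally reported as percent radical-scavenging activity from control and sample absorbances. A generic sketch of that calculation follows; the absorbance values are invented placeholders, not data from this study:

```python
# Generic sketch of the DPPH radical-scavenging calculation; the absorbance
# values below are invented placeholders, not data from the study above.
def dpph_scavenging_pct(abs_control, abs_sample):
    """% inhibition = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# e.g. control absorbance 0.80, hydrolysate-treated sample 0.32 (at 517 nm)
print(f"{dpph_scavenging_pct(0.80, 0.32):.1f}% inhibition")
```

A larger drop in sample absorbance relative to the control indicates stronger radical scavenging by the hydrolysate.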
39 Older Consumer’s Willingness to Trust Social Media Advertising: An Australian Case
Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant
Abstract:
Social media networks have become the hotbed for advertising activities, due mainly to their increasing consumer/user base and to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world's population (4.8 billion people, or 60%) now uses social media, with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, the key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike other traditional media outlets, social media advertising is highly interactive and digital channel-specific. Social media advertisements are precisely targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. The purpose of this exploratory paper is to investigate the extent to which social media users trust social media advertising. Understanding this relationship will fundamentally assist marketers in better understanding social media interactions and their implications for society. Using a web-based quantitative survey instrument, survey participants were recruited via a reputable online panel survey site. Respondents to the survey represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents).
An adapted ADTRUST scale, using 20 items on a 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within different traditional media, such as broadcast and print media, and, more recently, the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named 'reliability', 'usefulness and affect', and 'willingness to rely on'. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded: Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7. Further statistical analysis (independent-samples t-tests) determined the difference in factor scores between the factors when age (Gen Z/Millennials vs. Gen X/Boomers) was used as the independent, categorical variable. The results showed the differences in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one's behavioural intent to act on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that they highlight a critical need to design 'authentic' advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioural responses from within this large demographic group, whose engagement with social media sites continues to increase year on year.
Keywords: social media advertising, trust, older consumers, online
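The independent-samples comparison reported above can be sketched with a hand-rolled Welch t statistic; the factor scores below are invented placeholders for illustration, not the study's actual respondent data:

```python
# Sketch of the independent-samples comparison reported above, using a
# hand-rolled Welch t statistic on invented factor scores (not study data).
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

gen_z = [4.9, 4.6, 5.1, 4.3, 4.8]     # placeholder "Willingness to Rely On"
boomers = [3.2, 2.8, 3.4, 2.9, 3.1]   # placeholder scores for comparison
print(f"t = {welch_t(gen_z, boomers):.2f}")
```

In practice the statistic would be compared against the t distribution with Welch-Satterthwaite degrees of freedom to obtain the p-values (p<0.05) reported above.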
38 Investigation on Pull-Out-Behavior and Interface Critical Parameters of Polymeric Fibers Embedded in Concrete and Their Correlation with Particular Fiber Characteristics
Authors: Michael Sigruener, Dirk Muscat, Nicole Struebbe
Abstract:
Fiber reinforcement is a state-of-the-art technique for enhancing the mechanical properties of plastics. In concrete and civil engineering, steel reinforcement is commonly used. Steel reinforcement has disadvantages in its chemical resistance and weight, whereas polymer fibers' major problems lie in fiber-matrix adhesion and mechanical properties. Despite these problems, longevity, easy handling, and chemical resistance motivate researchers to develop a polymeric material for fiber-reinforced concrete. Adhesion and interfacial mechanisms in fiber-polymer composites have already been studied thoroughly; for polymer fibers used as concrete reinforcement, the bonding behavior still requires deeper investigation. Therefore, several different polymers (e.g., polypropylene (PP), polyamide 6 (PA6), and polyetheretherketone (PEEK)) were spun into fibers via single-screw extrusion and monoaxial stretching. The fibers were then embedded in a concrete matrix, and single-fiber pull-out tests (SFPT) were conducted to investigate the bonding characteristics and microstructure of the interface. Differences in the maximum pull-out force, the displacement, and the slope of the linear part of the force-displacement curve, which reflect the adhesion strength and the ductility of the interfacial bond, were studied. In SFPT, fiber debonding is an inhomogeneous process in which interfacial bonding and friction mechanisms add up to a resulting value; therefore, correlations between polymer properties and pull-out mechanisms have to be examined. To investigate these correlations, all fibers were subjected to a series of analyses, including differential scanning calorimetry (DSC), contact angle measurement, surface roughness and hardness analysis, tensile testing, and scanning electron microscopy (SEM).
Of each polymer, smooth and abraded fibers were tested: first to simulate the abrasion and damage caused by a concrete mixing process, and second to estimate the influence of the mechanical anchoring of rough surfaces. In general, abraded fibers showed a significant increase in maximum pull-out force due to better mechanical anchoring; friction processes therefore play a major role in increasing the maximum pull-out force. Polymer hardness affects the tribological behavior, and polymers with high hardness exhibit lower surface roughness, as verified by SEM and surface roughness measurements. This results in a decreased maximum pull-out force for hard polymers. Polymers with high surface energy generally show better interfacial bonding strength, which coincides with the conducted SFPT investigation: polymers such as PEEK and PA6 show higher bonding strength for both smooth and roughened fibers, revealed through high pull-out forces and through concrete particles bonded to the fiber surface, as pictured in the SEM analysis. The surface energy divides into a dispersive and a polar part, with the slope of the SFPT curve correlating with the polar part. Only polar polymers increase their SFPT slope, owing to better wetting, when a rough surface provides a larger bonding area. Hence, the maximum force and bonding strength of an embedded fiber are a function of polarity, hardness, and, consequently, surface roughness. Other properties, such as crystallinity or tensile strength, do not affect the bonding behavior. Through the conducted analyses, it is now feasible to understand and resolve the different effects in pull-out behavior step by step, based on the polymer properties themselves. This investigation developed a roadmap for engineering highly adhesive polymeric materials for the fiber reinforcement of concrete.
Keywords: fiber-matrix interface, polymeric fibers, fiber reinforced concrete, single fiber pull-out test
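The slope of the linear part of the SFPT force-displacement curve, used above as a measure of interfacial bond stiffness, can be extracted with an ordinary least-squares fit. A minimal sketch follows; the data points are invented for illustration, not measurements from the study:

```python
# Sketch of extracting the interfacial-bond slope from the linear part of a
# single-fiber pull-out (SFPT) force-displacement curve; data are invented.
def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

disp_mm = [0.00, 0.05, 0.10, 0.15, 0.20]   # placeholder displacements (mm)
force_n = [0.0, 12.0, 24.5, 36.0, 48.5]    # placeholder forces (N)
print(f"bond stiffness ~ {linear_slope(disp_mm, force_n):.1f} N/mm")
```

In a real evaluation the fit would be restricted to the pre-debonding linear segment of the curve, since the post-peak branch is governed by friction rather than interfacial adhesion.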
37 Mining and Ecological Events and its Impact on the Genesis and Geo-Distribution of Ebola Outbreaks in Africa
Authors: E Tambo, O. O. Olalubi, E. C. Ugwu, J. Y. Ngogang
Abstract:
Despite the World Health Organization (WHO) declaration of a public health emergency of international concern, responses and efforts to stem the worst-recorded Ebola epidemic remain precariously inadequate in most of the affected countries in West Africa. The mining of natural resources has been shown to play a key role in both motivating and fuelling the ethnic, civil, and armed conflicts that have plagued a number of African countries over the last decade. Revenues from the exploitation of natural resources are used not only to sustain national economies but also to fund armies, personal enrichment, and the building of political support. Little is documented on the impact of mining and ecological events on the emergence and geographical distribution of Ebola in Africa over time and space. We aimed to provide a better understanding of the interconnectedness among the mining of natural resources, resource management, and mining-related conflict and post-conflict conditions in Ebola outbreaks, and of how wealth generated from abundant natural resources could be better managed to promote research and development towards strengthening environmental, socioeconomic, and health systems sustainability, covering surveillance and response systems for Ebola and other emerging diseases, prevention and control, early warning and alert, durable peace, and sustainable development, rather than fuelling conflicts and epidemics of resurgent and emerging diseases, from a community and national/regional perspective. Our results provide the first systematic assessment of the impact of major mineral-conflict events, diffusing over space and time, and of mining activities on the genesis and geo-distribution of Ebola in nine affected countries across Africa. We demonstrate how, where, and when mining activities in Africa increase ecological degradation and conflicts at the local level and then spread violence across territory and time by enhancing the financial capacities of fighting groups/ethnic factions and contributing to disease onset.
In addition, we examine country-led processes for developing minimum standards for natural resource governance; improving governmental and civil society capacity for natural resource management, including the strengthening of monitoring and enforcement mechanisms; and understanding how post-mining and post-conflict community or national reconstruction and rehabilitation programmes can strengthen or develop community health systems and regulatory mechanisms. Moreover, the quest for control over these resources and illegal mining across the landscape, with its incursions into forests, has increased environmental and ecological instability, displacement and disequilibrium, thereby affecting the intensity and duration of mining, conflicts/wars and episodes of Ebola outbreaks over time and space. We highlight the key findings and lessons learnt in promoting country- or community-led processes that transform natural resource wealth from a peace liability into a peace asset. There is an imperative need for advocacy, and for facilitating intergovernmental deliberations on the critical issues and challenges affecting African communities, to transform the exploitation of natural resources from a peace liability into an asset for outbreak prevention and control. The vital role of mining in increasing government revenues and expenditures, and in the equitable distribution of wealth and health to all stakeholders, in particular local communities, requires coordination, cooperative leadership and partnership to foster sustainable development initiatives spanning the mining context and the surveillance, response, prevention and control systems for outbreaks and other infectious diseases, together with judicious resource management. Keywords: mining, mining conflicts, mines, ecological, Ebola, outbreak, mining companies, miners, impact
Procedia PDF Downloads 302
36 The Study of Adsorption of RuP onto TiO₂ (110) Surface Using Photoemission Deposited by Electrospray
Authors: Tahani Mashikhi
Abstract:
Countries worldwide rely on electric power as a critical factor in economic growth and progress. Renewable energy sources, often referred to as alternative energy sources, such as wind, solar, geothermal, biomass, and hydropower, have garnered significant interest in response to the rising consumption of fossil fuels. Dye-sensitized solar cells (DSSCs) are a highly promising alternative for energy production, as they possess numerous advantages over traditional silicon solar cells and thin-film solar cells: low cost, high flexibility, a straightforward preparation methodology, ease of production, low toxicity, a range of colors, semi-transparency, and high power conversion efficiency. A solar cell, also known as a photovoltaic cell, is a device that converts the energy of sunlight into electrical energy through the photovoltaic effect. The Grätzel cell was the first dye-sensitized solar cell made from colloidal titanium dioxide. The operational mechanism of DSSCs relies on several key elements: a layer of a wide-band-gap semiconducting oxide material (e.g., titanium dioxide [TiO₂]); a photosensitizer, or dye, that absorbs sunlight and injects electrons into the conduction band; an electrolyte that uses the iodide/triiodide redox pair (I⁻/I₃⁻) to regenerate the dye molecules; and a counter electrode made of carbon or platinum that facilitates the movement of electrons around the circuit. Electrospray deposition permits the deposition of fragile, non-volatile molecules in a vacuum environment, including dye sensitizers, complex molecules, nanoparticles, and biomolecules. Surface science techniques, particularly X-ray photoelectron spectroscopy, are employed to examine dye-sensitized solar cells. This study investigates the possible application of electrospray deposition to build high-quality layers in situ in a vacuum.
Two distinct categories of dyes can be employed as sensitizers in DSSCs: organometallic sensitizers and purely organic dyes. Most organometallic dyes, including Ru533, RuC, and RuP, contain a ruthenium atom, a rare element that enhances the efficiency of DSSCs. These dyes are characterized by their high cost and typically appear as dark purple powders. Organic dyes, such as SQ2, RK1, D5, SC4, and R6, exhibit reduced efficacy due to the lack of a ruthenium atom; they appear as green, red, orange, and blue powders. This study concentrates specifically on metal-organic dyes. Dye molecules were adsorbed onto the rutile TiO₂ (110) surface in situ under ultra-high vacuum conditions by combining an electrospray deposition method with X-ray photoelectron spectroscopy. The X-ray photoelectron spectroscopy (XPS) technique examines chemical bonds and interactions between the molecules and the TiO₂ surface. The dyes were deposited for varying times, from 5 minutes to 40 minutes, to achieve distinct layers of coverage categorized as sub-monolayer, monolayer, few-layer, or multilayer. The O 1s photoelectron spectra show that the monolayer establishes a strong chemical bond with the Ti atoms of the oxide substrate by deprotonating the carboxylic acid groups through 2M-bidentate bridging anchors. The C 1s and N 1s photoelectron spectra indicate that the molecule remains intact at the surface, as evidenced by the presence of all functional groups and of the ruthenium atom, whose Ru 3d binding energy is consistent with Ru²⁺. Keywords: deposit, dye, electrospray, TiO₂, XPS
Procedia PDF Downloads 48
35 A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)
Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe
Abstract:
Fragmentation and changes in the structural composition of tropical forests – as a result of intensifying anthropogenic disturbance – are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impacts of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) were assessed in the South Caribbean of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding the home range of each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days.
All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis is ongoing for the data-loggers and the primary forest sloths). Data-logger analyses include the determination of activity budgets, circadian rhythms of activity and energy expenditure (using the vector of dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size significantly increased with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 hectares in urban forests. Sloths in urban forests also used nearly double the number of trees (median = 22.5) compared with sloths in secondary forests (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and low rates of energy acquisition. Energetic and behavioral analyses from the data-loggers will be considered in the context of the fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints. The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low-dispersal species such as sloths. Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths
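The VeDBA proxy mentioned above is conventionally computed as the vector magnitude of the dynamic (gravity-removed) components of tri-axial acceleration. A minimal sketch follows, assuming a running-mean separation of static and dynamic acceleration; the window length is an illustrative assumption, not a parameter reported by this study.

```python
import numpy as np

def vedba(ax, ay, az, window=25):
    """Vector of dynamic body acceleration (VeDBA) from tri-axial data.

    The static (gravitational) component of each axis is estimated with a
    running mean; the dynamic component is the residual. The window length
    (in samples) is an illustrative assumption.
    """
    kernel = np.ones(window) / window
    static = [np.convolve(a, kernel, mode="same") for a in (ax, ay, az)]
    dynamic = [a - s for a, s in zip((ax, ay, az), static)]
    # VeDBA is the Euclidean norm of the three dynamic components
    return np.sqrt(sum(d ** 2 for d in dynamic))
```

Summed or averaged over an interval, VeDBA rises with movement intensity, which is why it serves as a practical proxy for activity-specific energy expenditure.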
Procedia PDF Downloads 114
34 Applying Concept Mapping to Explore Temperature Abuse Factors in the Processes of Cold Chain Logistics Centers
Authors: Marco F. Benaglia, Mei H. Chen, Kune M. Tsai, Chia H. Hung
Abstract:
As societal and family structures, consumer dietary habits, and awareness of food safety and quality continue to evolve in most developed countries, the demand for refrigerated and frozen foods has been growing, and the issues related to their preservation have gained increasing attention. A well-established cold chain logistics system is essential to avoid any temperature abuse; therefore, assessing potential disruptions in the operational processes of cold chain logistics centers becomes pivotal. This study first employs HACCP to find disruption factors in cold chain logistics centers that may cause temperature abuse. Then, concept mapping is applied: selected experts engage in brainstorming sessions to identify any further factors. The panel consists of ten experts, including four from logistics and home delivery, two from retail distribution, one from the food industry, two from low-temperature logistics centers, and one from the freight industry. Disruptions include equipment-related aspects, human factors, management aspects, and process-related considerations. The areas of observation encompass freezer rooms, refrigerated storage areas, loading docks, sorting areas, and vehicle parking zones. The experts also categorize the disruption factors based on perceived similarities and build a similarity matrix. Each factor is evaluated for its impact, frequency, and investment importance. Next, multidimensional scaling, cluster analysis, and other methods are used to analyze these factors. Key disruption factors are identified based on their impact and frequency, and the factors that companies prioritize and are willing to invest in are then determined by assessing investors’ risk-aversion behavior. Finally, Cumulative Prospect Theory (CPT) is applied to verify the risk patterns.
Sixty-six disruption factors are found and categorized into six clusters: (1) "Inappropriate Use and Maintenance of Hardware and Software Facilities", (2) "Inadequate Management and Operational Negligence", (3) "Product Characteristics Affecting Quality and Inappropriate Packaging", (4) "Poor Control of Operation Timing and Missing Distribution Processing", (5) "Inadequate Planning for Peak Periods and Poor Process Planning", and (6) "Insufficient Cold Chain Awareness and Inadequate Training of Personnel". The study also identifies five critical factors in the operational processes of cold chain logistics centers: "Lack of Personnel’s Awareness Regarding Cold Chain Quality", "Personnel Not Following Standard Operating Procedures", "Personnel’s Operational Negligence", "Management’s Inadequacy", and "Lack of Personnel’s Knowledge About Cold Chain". The findings show that cold chain operators prioritize prevention and improvement efforts in the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster, particularly focusing on the factors "Temperature Setting Errors" and "Management’s Inadequacy". However, the application of CPT reveals that companies are not usually willing to invest in improving factors in this cluster because of their low likelihood of occurrence, even though they acknowledge the severity of the consequences should those factors materialize. Hence, the main implication is that the key disruption factors in the processes of cold chain logistics centers are associated with personnel issues; comprehensive training, periodic audits, and the establishment of reasonable incentives and penalties for both new employees and managers may therefore significantly reduce disruption issues. Keywords: concept mapping, cold chain, HACCP, cumulative prospect theory
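The CPT risk-pattern check described above rests on two standard ingredients: an S-shaped value function that is concave for gains, convex for losses, and steeper for losses (loss aversion), and an inverse-S probability weighting function that overweights small probabilities and underweights large ones. A minimal sketch follows; the parameter values are the original Tversky–Kahneman (1992) estimates, used purely for illustration, not values estimated in this study.

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave for gains, convex and loss-averse for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def cpt_weight(p, gamma=0.69):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def weighted_loss(p, loss):
    """Decision weight times subjective value for a potential disruption loss."""
    return cpt_weight(p) * cpt_value(-abs(loss))
```

Under this parameterization, a probability of 0.01 receives a decision weight of roughly 0.04, and losses are magnified by the loss-aversion factor lam, which is how CPT separates acknowledging a hazard's severity from the willingness to invest against it.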
Procedia PDF Downloads 70
33 Emerging Positive Education Interventions for Clean Sport Behavior: A Pilot Study
Authors: Zeinab Zaremohzzabieh, Syasya Firzana Azmi, Haslinda Abdullah, Soh Kim Geok, Aini Azeqa Ma'rof, Hayrol Azril Mohammed Shaffril
Abstract:
The escalating prevalence of doping in sports, casting a shadow over both high-performance and recreational settings, has emerged as a formidable concern, particularly within the realm of young athletes. Doping, characterized by the surreptitious use of prohibited substances to gain a competitive edge, underscores the pressing need for comprehensive and efficacious preventive measures. This study aims to address a crucial void in current research by unraveling the motivations that drive clean adolescent athletes to steadfastly abstain from performance-enhancing substances. In navigating this intricate landscape, the study adopts a positive psychology perspective, investigating the conditions and processes that contribute to the holistic well-being of individuals and communities. At the heart of this exploration lies the application of the PERMA model, a comprehensive positive psychology framework encapsulating positive emotion, engagement, relationships, meaning, and accomplishment. This model functions as a distinctive lens, dissecting intervention results to offer nuanced insights into the complex dynamics of clean sport behavior. The research is poised to usher in a paradigm shift from conventional anti-doping strategies, predominantly fixated on identifying deficits, towards an innovative approach firmly rooted in positive psychology. The objective of this study is to evaluate the efficacy of a positive education intervention program tailored to promote clean sport behavior among Malaysian adolescent athletes. Representing unexplored terrain within the landscape of anti-doping efforts, this initiative endeavors to reshape the focus from deficiencies to strengths. The pilot study engages thirty adolescent athletes, divided into a control group of 15 and an experimental group of 15.
The pilot study serves to assess the effectiveness of the prepared intervention package, providing indispensable insights that will guide the finalization of a comprehensive intervention program for the main study. The main study adopts a two-arm randomized controlled trial methodology, actively involving adolescent athletes from diverse Malaysian high schools. This approach aims to address critical gaps in anti-doping strategies, specifically calibrated to resonate with the unique context of Malaysian schools. The study, cognizant of the imperative to develop preventive measures that harmonize with the cultural and educational milieu of Malaysian adolescent athletes, aspires to cultivate a culture of clean sport. In conclusion, this research aspires to contribute new insights into the efficacy of positive education interventions firmly rooted in the PERMA model. By unraveling the intricacies of clean sport behavior, particularly within the context of Malaysian adolescent athletes, the study seeks to introduce transformative preventive methods. The adoption of positive psychology as an anti-doping tool represents an innovative and promising approach, bridging a conspicuous gap in scholarly research and offering potential solutions for the sporting community. As this study unfolds, it carries the promise not only to enrich our understanding of clean sport behavior but also to pave the way for positive transformation within the realm of adolescent sports in Malaysia. Keywords: positive education interventions, pilot study, clean sport behavior, adolescent athletes, Malaysia
Procedia PDF Downloads 58