Search results for: real estate valuation model
510 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India
Authors: Vinu Elias Jacob, Manoj Kumar Kini
Abstract:
Disasters of all types are occurring more frequently and are becoming more costly than ever due to various manmade factors, including climate change. A better utilisation of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies. The role of urban planning/design in shaping the opportunities of households, individuals and, collectively, the settlements for achieving recovery has to be explored. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities and, thus, the states too. Resilience is a term that is usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has resulted in the inflow and use of resources, creating pressure on the various natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin in Kerala is the fastest-growing and largest city, with a population of more than 26 lakhs (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, especially focusing on the city of Cochin, Kerala, India. The paper discusses the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters. The paper also aims at developing a model, taking into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city in any context. Guidelines are made for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc., by using the tool of urban design. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.
Keywords: disaster management, resilience, spatial planning, spatial transformations
Procedia PDF Downloads 297
509 Predictors of Sexually Transmitted Infection of Korean Adolescent Females: Analysis of Pooled Data from Korean Nationwide Survey
Authors: Jaeyoung Lee, Minji Je
Abstract:
Objectives: Adolescents are curious about sex, but sexual experience before adulthood carries a high risk of sexually transmitted infection. Therefore, it is very important to prevent sexually transmitted infections so that adolescents can grow in a healthy way. Adolescent females, especially, show sexual behavior distinguished from that of male adolescents. Protecting female adolescents’ reproductive health is even more important since it is directly related to the childbirth of the next generation. This study, thus, investigated the predictors of sexually transmitted infection in adolescent females with sexual experience, based on the National Health Statistics in Korea. Methods: This study was conducted based on the National Health Statistics in Korea. The 11th Korea Youth Risk Behavior Web-based Survey in 2016 was conducted as an anonymous self-reported survey in order to examine the health behavior of adolescents. The target population was middle and high school students nationwide as of April 2016, and 65,528 students from a total of 800 middle and high schools participated. The present study analyzed 537 female high school students (Grades 10–12) among them. The collected data were analyzed as a complex sampling design using SPSS Statistics 22. The strata, cluster, weight, and finite population correction provided by the Korea Centers for Disease Control & Prevention (KCDC) were reflected to constitute complex sample design files, which were used in the statistical analysis. The analysis methods included the Rao-Scott chi-square test, complex samples general linear model, and complex samples multiple logistic regression analysis. Results: Out of 537 female adolescents, 11.9% (53 adolescents) had experienced a sexually transmitted infection. The predictors of sexually transmitted infection were ‘age at first intercourse’ and ‘sexual intercourse after drinking’. Compared with subjects whose first sexual experience occurred at elementary school age or earlier, the odds of sexually transmitted infection were reduced when first intercourse occurred in middle school (OR=0.31, p=.006, 95% CI=0.13-0.71) or in high school (OR=0.13, p<.001, 95% CI=0.05-0.32). In addition, the odds of sexually transmitted infection were 3.54 times higher (p<.001, 95% CI=1.76-7.14) for those with experience of sexual relations after drinking alcohol, compared to those without such experience. Conclusions: The female adolescents had a high probability of sexually transmitted infection if their age at first sexual experience was low. Therefore, female adolescents who start sexual experience earlier should receive practical sex education appropriate for their developmental stage. In addition, since sexually transmitted infection increases if they have sexual relations after drinking alcohol, interventions for the prevention of alcohol use and sex education are required. When health education interventions are conducted to promote the health of female adolescents in the future, it is necessary to reflect the results of this study.
Keywords: adolescent, coitus, female, sexually transmitted diseases
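As a rough illustration of how weighted logistic regression yields the odds ratios reported above, the sketch below fits a weighted model and exponentiates the coefficients. It is a minimal sketch only: the variable names, example data, and the use of scikit-learn sample weights are assumptions, and a full complex-samples analysis (strata, clusters, finite population correction, as in SPSS complex samples) would require dedicated survey tools.

```python
# Minimal sketch: survey-weighted logistic regression and odds ratios.
# Variable names and data are hypothetical; a real KYRBS analysis must
# additionally account for strata, clusters and FPC.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical predictors: school level at first intercourse (dummy-coded)
# and sex after drinking (0/1); outcome: STI history (0/1); survey weights.
X = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0],
              [0, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0]])
y = np.array([1, 0, 1, 0, 0, 0, 1, 1])
w = np.array([1.2, 0.8, 1.0, 1.5, 0.9, 1.1, 1.3, 0.7])  # sampling weights

model = LogisticRegression()
model.fit(X, y, sample_weight=w)          # weighted maximum likelihood

odds_ratios = np.exp(model.coef_[0])      # exponentiated coefficients
for name, oratio in zip(["first_sex_middle_school",
                         "first_sex_high_school",
                         "sex_after_drinking"], odds_ratios):
    print(f"{name}: OR = {oratio:.2f}")   # OR < 1 protective, OR > 1 risk
```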
Procedia PDF Downloads 192
508 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir
Authors: Worawanna Panyakotkaew, Falan Srisuriyachai
Abstract:
Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by means of heat transferred from the steam. Together with gas pressurization, oil recovery is greatly improved. Nevertheless, prediction of the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening-upward sequence. The highest permeability is located in the top layer, with descending permeability values in lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced by adjusting operating parameters, including soaking period and steam quality. After selecting the best conditions for both parameters yielding the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability, and permeability sequence are evaluated. Surprisingly, simulation results show that reservoir heterogeneity yields benefits for the CSI technique. Increasing reservoir heterogeneity worsens the permeability distribution. High permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears on top after a while, causing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, which is about 20% higher than in the heterogeneous reservoir case. Vertical permeability also benefits CSI: expansion of the steam chamber occurs within a shorter time from the upper to the lower zone. For the fining-upward permeability sequence, where permeability values are reversed from the previous case, steam does not override to the top layers due to low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where permeability is high enough. The rate of oil recovery is slower compared to the coarsening-upward case due to the lower permeability at the location where propagation of the steam chamber occurs. Even though the CSI technique produces oil quite slowly in early cycles, once a steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in later cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect must be considered. This study shows that the CSI technique might be one of the compatible solutions for highly heterogeneous reservoirs. This competitive technique also shows benefits in terms of heat consumption, as steam is injected periodically.
Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery
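The Lorenz coefficient mentioned above summarizes how unevenly flow capacity is distributed across layers. A minimal sketch is shown below, assuming hypothetical layer permeabilities, porosities, and thicknesses; it computes the coefficient as twice the area between the flow-capacity versus storage-capacity curve and the diagonal (0 for a homogeneous reservoir, approaching 1 for extreme heterogeneity).

```python
# Minimal sketch: Lorenz coefficient of permeability heterogeneity.
# Layer properties are hypothetical placeholders.
import numpy as np

k   = np.array([500.0, 200.0, 80.0, 30.0, 10.0])   # permeability, mD
phi = np.array([0.25, 0.22, 0.20, 0.18, 0.15])     # porosity, fraction
h   = np.array([5.0, 5.0, 5.0, 5.0, 5.0])          # thickness, m

# Sort layers from the best to the worst flow capacity per unit storage.
order = np.argsort(k / phi)[::-1]
flow    = np.cumsum((k * h)[order]) / np.sum(k * h)      # cumulative kh
storage = np.cumsum((phi * h)[order]) / np.sum(phi * h)  # cumulative phi*h

# Prepend the origin and integrate the curve against the 45-degree line.
flow    = np.insert(flow, 0, 0.0)
storage = np.insert(storage, 0, 0.0)
lorenz = 2.0 * (np.trapz(flow, storage) - 0.5)
print(f"Lorenz coefficient ≈ {lorenz:.2f}")
```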
Procedia PDF Downloads 459
507 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter
Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta
Abstract:
During cardiac ablation, high power delivery for deeper lesion formation is limited by electrode-tissue interface overheating, which can cause serious complications such as thrombus. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target temperature (commonly 55°C or 60°C). The electrode-tissue interface temperature is then also limited. The electrode temperature is a result of heating from the contacted tissue and cooling from the surrounding blood. Because the cooling from blood is decreased under conditions of low blood flow, the generator needs to decrease the output power. Thus, temperature control cannot deliver high power under conditions of low blood flow. In open irrigation, saline at room temperature is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this environmental cooling, and high power delivery can also be achieved under conditions of low blood flow. However, the large amount of saline infused during irrigation (approximately 1500 ml) can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new overheating prevention method may be required. The authors have proposed a new electrode cooling method that makes the catheter vibrate. Previous work has shown that the vibration can produce a cooling effect on the electrode, which may result from the vibration increasing the flow velocity around the catheter. The previous work has also shown that increasing the vibration frequency increases the cooling effect of vibration. However, the effect of the vibration amplitude is still unknown. Thus, the present study investigated the effect of vibration amplitude on tissue temperature and lesion size. An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature. Thermocouples were inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A normal ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37°C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm with a constant frequency (31 Hz or 63 Hz) were used. A temperature control protocol (45°C for the agar phantom, 60°C for the porcine myocardium) was used for the radiofrequency applications. The larger amplitude shows the larger lesion sizes, and the higher tissue temperatures in the agar phantom are also observed with the higher amplitude. At the same frequency, the larger amplitude gives the higher vibrating speed. The higher vibrating speed increases the flow velocity around the electrode more, which leads to a larger electrode temperature decrease. To maintain the electrode at the target temperature, the ablator has to increase the output power. With the higher output power over the same duration, the released energy also increases. Consequently, the tissue temperature is increased, leading to larger lesion sizes.
Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature
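To make the amplitude-speed relation above concrete, the short sketch below evaluates the peak speed of sinusoidal vibration, v_peak = 2πfA, for the amplitudes and frequencies tested; the assumption of purely sinusoidal motion is ours.

```python
# Minimal sketch: peak vibrating speed of a sinusoidally vibrating catheter,
# v_peak = 2*pi*f*A (sinusoidal motion is assumed here).
import math

for f in (31.0, 63.0):                       # vibration frequency, Hz
    for amp_mm in (0.5, 0.75, 1.0):          # vibration amplitude, mm
        v_peak = 2.0 * math.pi * f * amp_mm * 1e-3   # m/s
        print(f"f = {f:>4.0f} Hz, A = ±{amp_mm} mm -> "
              f"v_peak ≈ {v_peak * 100:.1f} cm/s")
```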
Procedia PDF Downloads 372
506 High Strain Rate Behavior of Harmonic Structure Designed Pure Nickel: Mechanical Characterization Microstructure Analysis and 3D Modelisation
Authors: D. Varadaradjou, H. Kebir, J. Mespoulet, D. Tingaud, S. Bouvier, P. Deconick, K. Ameyama, G. Dirras
Abstract:
The development of new architectured metallic alloys with controlled microstructures is one of the strategic ways of designing materials with high innovation potential and, particularly, with the improved mechanical properties required for structural materials. Indeed, unlike conventional counterparts, metallic materials having a so-called harmonic structure display a strength and ductility synergy. The latter occurs due to a unique microstructure design: a coarse grain structure surrounded by a 3D continuous network of ultra-fine grains, known as “core” and “shell,” respectively. In the present study, pure harmonic-structured (HS) Nickel samples were processed via controlled mechanical milling followed by spark plasma sintering (SPS). The present work aims at characterizing the mechanical properties of HS pure Nickel under room-temperature dynamic loading through a Split Hopkinson Pressure Bar (SHPB) test and the underlying microstructure evolution. A stopper ring was used to maintain the strain at a fixed value of about 20%. Five samples (named B1 to B5) were impacted using different striker bar velocities from 14 m/s to 28 m/s, yielding strain rates in the range 4000-7000 s⁻¹. Results were considered up to a 10% deformation value, which is the deformation threshold for the constant strain rate assumption. The non-deformed (INIT – post-SPS process) and post-SHPB microstructures (B1 to B5) were investigated by EBSD. It was observed that as the strain rate is increased, the average grain size within the core decreases. An in-depth analysis of grains and grain boundaries was made to highlight the thermal (such as dynamic recrystallization) or mechanical (such as grain fragmentation by dislocations) contributions within the “core” and “shell.” One of the most widely used methods for determining the dynamic behavior of materials is the SHPB technique developed by Kolsky. A 3D simulation of the SHPB test was created in ABAQUS dynamic explicit. This 3D simulation allows all modes of vibration to be taken into account. An inverse approach was used to identify the material parameters of the Johnson-Cook (JC) equation by minimizing the difference between the numerical and experimental data. The JC parameters were identified using the B1 and B5 sample configurations. The identified JC parameters show good predictive results for the other sample configurations. Furthermore, the mean rise of temperature within the harmonic Nickel sample can be obtained through ABAQUS and shows an elevation of about 35°C for all five samples. At this temperature, a thermal mechanism cannot be activated. Therefore, grain fragmentation within the core is mainly due to mechanical phenomena for a fixed final strain of 20%.
Keywords: 3D simulation, fragmentation, harmonic structure, high strain rate, Johnson-Cook model, microstructure
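For reference, the Johnson-Cook flow stress used in the inverse identification above has the standard form σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ). The sketch below evaluates it for illustrative parameter values; the numbers are placeholders, not the parameters identified in the study.

```python
# Minimal sketch: Johnson-Cook flow stress
#   sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m)
# Parameter values below are illustrative placeholders only.
import math

def johnson_cook(eps, eps_dot, T, A=160e6, B=600e6, n=0.35, C=0.02, m=1.0,
                 eps_dot0=1.0, T_room=293.0, T_melt=1728.0):
    """Flow stress in Pa for plastic strain eps, strain rate eps_dot (1/s)
    and temperature T (K). T_melt is that of pure nickel."""
    t_star = (T - T_room) / (T_melt - T_room)        # homologous temperature
    strain_term = A + B * eps ** n
    rate_term = 1.0 + C * math.log(eps_dot / eps_dot0)
    thermal_term = 1.0 - t_star ** m
    return strain_term * rate_term * thermal_term

# Example: 10% plastic strain at 5000 1/s, with a 35 K temperature rise.
sigma = johnson_cook(eps=0.10, eps_dot=5000.0, T=293.0 + 35.0)
print(f"Flow stress ≈ {sigma / 1e6:.0f} MPa")
```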
Procedia PDF Downloads 231
505 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)
Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo
Abstract:
Cephalopods are considered a model organism with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. They are equipped with sophisticated eyes able to discriminate colors even with a single photoreceptor type, a vestibular system, a ‘lateral line analogue’, a primitive ‘hearing’ system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using the suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules with the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether food preference was maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we tested octopuses alternately with food in sealed transparent screw-jars and food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify a random choice among food types. Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and then mussels (10%). After acclimatization, using the transparent and pierced screw-jars, the octopuses’ food choices were split 50-50 between anchovies and clams, avoiding mussels. Later, guided by the visual sense alone, with transparent but not pierced jars, their food preference was 100% anchovies. With pierced but not transparent jars, their first food choice was 100% anchovies, with clams as the second food choice (33.3%). With no possibility to select food either by vision or by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both the chemical and visual senses in an integrative way in food choice, but if we exclude one of them, it appears clear that its food preference relies on the chemical sense more than on visual perception.
Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense
Procedia PDF Downloads 221
504 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia
Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien
Abstract:
Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. The grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or nearshore zones, with the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by the presence of fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single- and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from the sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports. Thus, given the inconsistency of the grain size data, a common grain size classification was here defined using the phi scale. The new sediment distribution maps produced, together with new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to the morphological modelling to quantify the sediment transport patterns (and indicative rates of transport) used to investigate sand supply rates and processes from the lower shoreface to the NSW coast. The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels, but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and the sub-aerial beach has been modelled across the south, central, mid-north and far-north coast of NSW. Our modelling approach assumes that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for re-distributing sand within the beach and surf zone.
Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps
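The phi scale referred to above is the standard Krumbein transformation φ = −log₂(d/d₀), with grain diameter d in millimetres and d₀ = 1 mm. The sketch below converts a few hypothetical sieve diameters to phi units and assigns broad Wentworth-style size classes; the example diameters and the coarse class boundaries used here are assumptions for illustration.

```python
# Minimal sketch: Krumbein phi scale, phi = -log2(d / 1 mm),
# with a coarse Wentworth-style classification for illustration.
import math

def to_phi(d_mm):
    return -math.log2(d_mm)

def wentworth_class(phi):
    if phi < -1:
        return "gravel"
    if phi < 4:
        return "sand"   # -1 <= phi < 4 spans very coarse to very fine sand
    if phi < 8:
        return "silt"
    return "clay"

for d_mm in (2.0, 0.5, 0.125, 0.031, 0.004):   # hypothetical sieve diameters
    phi = to_phi(d_mm)
    print(f"d = {d_mm:>6.3f} mm -> phi = {phi:5.2f} ({wentworth_class(phi)})")
```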
Procedia PDF Downloads 228
503 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators are limited in that they cannot properly model microwave hybrid circuits containing both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the numerical FDTD (Finite-Difference Time-Domain) method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the current density and electric field equations within the FDTD method, together with its circuit relation to the voltage drop across the modeled component for the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
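As a circuit-level companion to the LE-FDTD modelling above, the sketch below time-steps a single Schottky-diode half-wave rectifier driving an RC load, using the Shockley diode equation. It is only a lumped-circuit illustration under assumed component values (source amplitude, frequency, saturation current, ideality factor, source resistance, R, C), not the field-level LE-FDTD scheme described in the abstract.

```python
# Minimal sketch: half-wave Schottky rectifier (source + 50-ohm source
# resistance -> diode -> parallel RC load), stepped explicitly in time.
# All component values are illustrative assumptions; this is a lumped
# circuit model, not the LE-FDTD field formulation of the abstract.
import math
from scipy.special import lambertw

IS, N, VT = 1e-8, 1.05, 0.02585        # Shockley diode parameters (assumed)
RS = 50.0                              # source resistance, ohm (assumed)
R, C = 1e3, 100e-12                    # load resistor and smoothing capacitor
F, VAMP = 900e6, 1.0                   # 900 MHz source, 1 V amplitude (assumed)

def diode_series_current(v):
    """Current through diode + RS driven by total voltage v (Lambert-W form)."""
    a = IS * RS / (N * VT)
    b = (v + IS * RS) / (N * VT)
    return (N * VT / RS) * lambertw(a * math.exp(b)).real - IS

steps_per_period, n_periods = 500, 200
dt = 1.0 / (F * steps_per_period)
v_out = 0.0
for step in range(steps_per_period * n_periods):
    v_src = VAMP * math.sin(2.0 * math.pi * F * step * dt)
    i_d = diode_series_current(v_src - v_out)
    v_out += dt * (i_d - v_out / R) / C   # C dV/dt = i_diode - V/R

print(f"Rectified DC output after {n_periods} RF periods ≈ {v_out:.2f} V")
```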
Procedia PDF Downloads 134
502 Multimodal Analysis of News Magazines' Front-Page Portrayals of the US, Germany, China, and Russia
Authors: Alena Radina
Abstract:
On the global stage, national image is shaped by historical memory of wars and alliances, government ideology and particularly media stereotypes which represent countries in positive or negative ways. News magazine covers are a key site for national representation. The object of analysis in this paper is the portrayals of the US, Germany, China, and Russia in the front pages and cover stories of “Time”, “Der Spiegel”, “Beijing Review”, and “Expert”. Political comedy helps people learn about current affairs even if politics is not their area of interest, and thus satire indirectly sets the public agenda. Coupled with satirical messages, cover images and the linguistic messages embedded in the covers become persuasive visual and verbal factors, known to drive about 80% of magazine sales. Preliminary analysis identified satirical elements in magazine covers, which are known to influence and frame understandings and attract younger audiences. Multimodal and transnational comparative framing analyses lay the groundwork to investigate why journalists, editors and designers deploy certain frames rather than others. This research investigates to what degree frames used in covers correlate with frames within the cover stories and what these framings can tell us about media professionals’ representations of their own and other nations. The study sample includes 32 covers consisting of two covers representing each of the four chosen countries from the four magazines. The sampling framework considers two time periods to compare countries’ representation with two different presidents, and between men and women when present. The countries selected for analysis represent each category of the international news flows model: the core nations are the US and Germany; China is a semi-peripheral country; and Russia is peripheral. Examining textual and visual design elements on the covers and images in the cover stories reveals not only what editors believe visually attracts the reader’s attention to the magazine but also how the magazines frame and construct national images and national leaders. The cover is the most powerful editorial and design page in a magazine because images incorporate less intrusive framing tools. Thus, covers require less cognitive effort of audiences who may therefore be more likely to accept the visual frame without question. Analysis of design and linguistic elements in magazine covers helps to understand how media outlets shape their audience’s perceptions and how magazines frame global issues. While previous multimodal research of covers has focused mostly on lifestyle magazines or newspapers, this paper examines the power of current affairs magazines’ covers to shape audience perception of national image.
Keywords: framing analysis, magazine covers, multimodality, national image, satire
Procedia PDF Downloads 103
501 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation
Authors: Joanna Gerszon, Aleksandra Rodacka
Abstract:
In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand-protein binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center allowed us to presume that these effects are due to the ability of piceatannol to form a covalent bond with the nucleophilic cysteine residue (Cys149), which is directly involved in the catalytic reaction. Molecular docking also showed that 11 molecules of the ligand can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods - transmission electron microscopy and Congo Red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation. Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from nuclear translocation induced by excessive oxidative stress in hippocampal cells. In consequence, it counteracts cell apoptosis. These studies demonstrate that, by binding to GAPDH, piceatannol blocks the cysteine residue and counteracts its oxidative modifications, which induce oligomerization and GAPDH aggregation, and that it protects hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).
Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation
Procedia PDF Downloads 167
500 Wood as a Climate Buffer in a Supermarket
Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø
Abstract:
Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffer potential can be used to counteract the influence of the outer climate on the building. The mass of moisture used in the buffer is defined as the potential hygrothermal mass, which can act as energy storage in a building. This works like a natural heat pump, where the moisture is active in damping the diurnal changes. In Norway, the ability of wood as a material used for climate buffering is being tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of hygrothermal mass in a supermarket building. This includes the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy intensive, and this energy potential can be exploited. Examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy that allows dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating. Thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history. This is to prevent adjustments to smaller weather changes. The ventilation control has three zones. During summer, the moisture is retained to dampen the effect of solar radiation through drying. In the wintertime, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing the moisture or drying it out according to the weather prognoses is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising. By including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast
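The seasonal, forecast-driven control regime described above can be summarized as a simple decision rule. The sketch below is a minimal illustration only: the season handling, temperature comparison, and CO₂/VOC limits are assumed values and logic, not the parameters of the Valnesfjord installation.

```python
# Minimal sketch of a forecast-driven ventilation mode selector.
# Thresholds, mode names and decision logic are assumptions for
# illustration, not the Valnesfjord control parameters.

def ventilation_mode(season, forecast_mean_temp_5d, history_mean_temp_2d,
                     co2_ppm, voc_index, store_open):
    # Air quality always overrules the low-energy strategy.
    if co2_ppm > 1000 or voc_index > 3:
        return "ventilate_fresh_air"
    if not store_open and season == "winter":
        return "system_off"                   # night shutdown in winter
    if season == "summer":
        return "retain_moisture_dry_wood"     # damp solar gains by drying
    if season == "winter":
        return "supply_moist_air"             # latent heat supports heating
    # Autumn/spring: choose by comparing the forecast with recent history.
    if forecast_mean_temp_5d < history_mean_temp_2d:
        return "store_moisture"               # cooling trend expected
    return "dry_out_wood"                     # warming trend expected

# Example: a mild spring day with acceptable air quality.
print(ventilation_mode("spring", forecast_mean_temp_5d=4.0,
                       history_mean_temp_2d=7.0, co2_ppm=650,
                       voc_index=1, store_open=True))
```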
Procedia PDF Downloads 218
499 Impact of Experiential Learning on Executive Function, Language Development, and Quality of Life for Adults with Intellectual and Developmental Disabilities (IDD)
Authors: Mary Deyo, Zmara Harrison
Abstract:
This study reports the outcomes of an 8-week experiential learning program for 6 adults with Intellectual and Developmental Disabilities (IDD) at a day habilitation program. The intervention foci for this program include executive function, language learning in the domains of expressive, receptive, and pragmatic language, and quality of life. Interprofessional collaboration aimed at supporting adults with IDD to reach person-centered, functional goals across skill domains is critical. This study is a significant addition to the speech-language pathology literature in that it examines a therapy method that potentially meets this need while targeting domains within the speech-language pathology scope of practice. Communication therapy was provided during highly valued and meaningful hands-on learning experiences, referred to as the Garden Club, which incorporated all aspects of planting and caring for a garden as well as related journaling, sensory, cooking, art, and technology-based activities. Direct care staff and an undergraduate research assistant were trained by the SLP to be impactful language guides during their interactions with participants in the Garden Club. The SLP also provided direct therapy and modeling during the Garden Club. Research methods used in this study included a mixed-methods analysis: a literature review, a quasi-experimental implementation of communication therapy in the context of experiential learning activities, Quality of Life participant surveys, quantitative pre- and post-intervention data collection with linear mixed model analysis, and qualitative data collection with qualitative content analysis and coding for themes. Outcomes indicated overall positive changes in expressive vocabulary, following multi-step directions, sequencing, problem-solving, planning, skills for building and maintaining meaningful social relationships, and participant perception of the Garden Project’s impact on their own quality of life. Implementation of this project also highlighted supports and barriers that must be taken into consideration when planning similar projects. Overall, the findings support the use of experiential learning projects in day habilitation programs for adults with IDD, as well as additional research to deepen understanding of best practices, supports, and barriers for implementation of experiential learning with this population. This research provides an important contribution to research in the fields of speech-language pathology and other professions serving adults with IDD by describing an interprofessional experiential learning program with positive outcomes for executive function, language learning, and quality of life.
Keywords: experiential learning, adults, intellectual and developmental disabilities, expressive language, receptive language, pragmatic language, executive function, communication therapy, day habilitation, interprofessionalism, quality of life
Procedia PDF Downloads 127
498 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without the manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed a widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed. That is not a surprise, given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matches from ETLM showed almost no similarity with the genotypes that were matched with the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently across the datasets. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events. This makes the estimator sensitive to data heterogeneity. Heterogeneity in this sense means different capture rates between individuals. In those examples, the tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
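To illustrate the kind of error-tolerant genotype matching these programs perform, the sketch below compares multilocus genotypes and groups two samples as the same individual when they differ at no more than a small number of loci (a crude allowance for allelic dropout or scoring error). It is an illustrative toy only, not the actual Cervus, Colony, or ETLM algorithm, and the genotypes and mismatch threshold are assumptions.

```python
# Minimal sketch: error-tolerant matching of multilocus genotypes.
# Two samples are grouped as one individual if they mismatch at no more
# than MAX_MISMATCH loci (a crude proxy for genotyping error tolerance).
# This toy is NOT the Cervus/Colony/ETLM algorithm.

MAX_MISMATCH = 1

def n_mismatches(g1, g2):
    """Count loci whose unordered allele pairs differ (None = missing, ignored)."""
    return sum(1 for a, b in zip(g1, g2)
               if a is not None and b is not None and set(a) != set(b))

# Hypothetical scat samples genotyped at four microsatellite loci.
samples = {
    "scat_01": [(120, 124), (88, 90), (201, 201), (150, 154)],
    "scat_02": [(120, 124), (88, 90), (201, 205), (150, 154)],  # 1 mismatch
    "scat_03": [(118, 126), (90, 92), (199, 203), (148, 152)],
}

individuals = []          # each entry is a list of sample names
for name, geno in samples.items():
    for group in individuals:
        if n_mismatches(geno, samples[group[0]]) <= MAX_MISMATCH:
            group.append(name)
            break
    else:
        individuals.append([name])

print(f"{len(individuals)} unique individuals:", individuals)
```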
Procedia PDF Downloads 144
497 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate
Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki
Abstract:
Magnetoplasma Sail (MPS) is a future spacecraft propulsion system that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. By injecting plasma into the spacecraft’s magnetic field region, a ring current drifts azimuthally on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid attached on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of the MPS spacecraft. This investigation aims to determine the detailed structure of the ring current through physical experimentation performed under two different magnetic field strengths, engendered by varying the voltage applied to the solenoid (300 V and 600 V). The expected outcome was that the three current probes would detect the same current, since all three probes were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible due to a probable procedural error, the trends of the results revealed three pieces of perceptive evidence of the ring current behavior. The first aspect is that the drift direction of the ring current depended on the strength of the applied magnetic field. The second aspect is that the diamagnetic current developed at a radial distance not occupied by the three current probes under the presence of the solar wind. The third aspect is that the ring current distribution varied along the circumferential path about the spacecraft’s magnetic field. Although this study yielded experimental evidence that differed from the original hypothesis, the three key findings have informed two critical MPS design solutions that will potentially improve thrust performance. The first design solution is the positioning of the plasma injection point. Based on the implication of the first of the three aspects of ring current behavior, the plasma injection point must be located at a distance from the MPS solenoid, rather than in close proximity to it, for the ring current to drift in the direction that will result in magnetosphere inflation. The second design solution, predicated on the third aspect of ring current behavior, is the symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration of plasma injection points using one plasma source resulted in a non-uniform distribution of the ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which minimizes the deflection area for the solar wind. Therefore, to realize a ring current that best provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly apart so that the plasma is injected evenly along the azimuthal path.
Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion
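A Rogowski probe outputs a voltage proportional to the time derivative of the enclosed current, v(t) = M di/dt, so the current is recovered by integrating the probe voltage and dividing by the mutual inductance M. The sketch below performs this numerically on a synthetic voltage trace; the mutual inductance, sampling rate, and test waveform are assumed values, not those of the experiment.

```python
# Minimal sketch: recovering current from a Rogowski coil signal,
# i(t) = (1/M) * integral of v(t) dt. M, the sampling rate, and the
# synthetic waveform are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(1)
M = 2.0e-7            # mutual inductance of the coil, H (assumed)
fs = 10e6             # sampling rate, Hz (assumed)
t = np.arange(0.0, 2e-3, 1.0 / fs)

# Synthetic "true" ring current: a damped 5 kHz oscillation (assumed).
i_true = 50.0 * np.exp(-t / 5e-4) * np.sin(2 * np.pi * 5e3 * t)

# The probe measures v = M * di/dt (differentiate, then add a little noise).
v_probe = M * np.gradient(i_true, t) + rng.normal(0.0, 1e-4, t.size)

# Reconstruct the current by cumulative trapezoidal integration.
dt = 1.0 / fs
i_rec = np.concatenate(([0.0],
                        np.cumsum((v_probe[1:] + v_probe[:-1]) / 2) * dt)) / M

print(f"Peak true current:          {np.abs(i_true).max():.1f} A")
print(f"Peak reconstructed current: {np.abs(i_rec).max():.1f} A")
```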
Procedia PDF Downloads 310
496 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices
Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) (81 girls and 87 boys) and 71 normal weight (NW) (40 girls and 31 boys) children. Informed consent forms signed by the parents were obtained. The Ethics Committee approved the study protocol. Mean ages (years)±SD calculated for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys. The corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m² and 27.2±3.9 kg/m² in girls and boys, respectively. These values for the NW group were calculated as 15.5±1.0 kg/m² in girls and 15.9±1.1 kg/m² in boys. Groups were constituted based upon the BMI-for-age-and-sex percentile values recommended by WHO. Children with percentiles >99 were grouped as MO, and children with percentiles between 15 and 85 were considered NW. The anthropometric measurements were recorded and evaluated along with new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. The body fat percent values were obtained by bio-electrical impedance analysis. Data were entered into a database for analysis using SPSS/PASW 18 Statistics for Windows statistical software. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, height ‘to’ ‘two’-to-waist C, and height ‘to’ ‘two’-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001). The reference value for the height ‘to’ ‘two’-to-hip ratio was detected as approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05). However, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05). However, a statistically significant increase was noted in MO girls compared to their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
Keywords: anthropometry, childhood obesity, gender, morbid obesity
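A minimal sketch of the ratio-based evaluation described above is given below. It computes BMI, the waist-to-hip circumference ratio, and the trunk-to-appendicular fat ratio (trunk fat mass divided by the summed fat mass of the arms and legs) for a hypothetical child; the example measurements are invented, and the exact formulas of Index-I and Index-II are not defined in the abstract, so they are not reproduced here.

```python
# Minimal sketch: simple anthropometric ratios for one hypothetical child.
# All measurements are invented; the Index-I/Index-II formulas are not
# given in the abstract and are therefore not computed here.

height_m = 1.42
weight_kg = 58.0
waist_c_cm, hip_c_cm = 84.0, 92.0
trunk_fat_kg = 11.5
arm_fat_kg, leg_fat_kg = 1.1, 4.2          # both arms / both legs combined

bmi = weight_kg / height_m ** 2
waist_to_hip = waist_c_cm / hip_c_cm
trunk_to_appendicular_fat = trunk_fat_kg / (arm_fat_kg + leg_fat_kg)

print(f"BMI                       = {bmi:.1f} kg/m^2")
print(f"Waist-to-hip C ratio      = {waist_to_hip:.2f}")
print(f"Trunk-to-appendicular fat = {trunk_to_appendicular_fat:.2f}")
```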
Procedia PDF Downloads 325
495 Advancing Equitable Healthcare for Trans and Gender-Diverse Students: A Community-Based Participatory Action Project
Authors: Al Huuskonen, Clio Lake, K. M. Naude, Polina Petlitsyna, Sorsha Henning, Julia Wimmers-Klick
Abstract:
This project presents the outcomes of a community-based participatory action initiative aimed at advocating for equitable healthcare and human rights for trans, two-spirit, and gender-diverse individuals, building upon the University of British Columbia (UBC) Trans Coalition's ongoing efforts. Participatory Action Research (PAR) was chosen as the research method, with the goal of improving trans rights on the UBC campus, particularly regarding equitable access to healthcare. PAR involves active community contribution throughout the research process, which in this case was done by way of liaising with student resource groups and advocacy leaders. The goals of this project were as follows: a) identify gaps in gender-affirming healthcare for UBC students by consulting the community and collaborating with UBC services, b) develop an information package outlining provincial and university-based health insurance for gender-affirming care (including hormone therapy and surgeries), FAQs, and resources for UBC's trans students, c) make this package available to UBC students and other national transgender advocacy organizations. The initiative successfully expanded the UBC AMS Student Health and Dental Plan to include gender-affirming procedural coverage, developed a care access guide for students, and advocated for improved health records inclusivity, mechanisms for trans students to report negative care experiences, and increased access to gender-affirming primary care through the on-campus health clinic. Collaboration with other universities' pride organizations and Trans Care BC yielded positive outcomes through broader coalition building and resource sharing. Ongoing efforts are underway to update provincial policies, particularly through expanding coverage under Fair PharmaCare and addressing the compounding effects of the primary care crisis for trans individuals. The project's tangible results include improved trans rights on campus, especially in terms of healthcare access. Expanding healthcare coverage through the student care plan benefits thousands of students, making the ability to undergo important affirming procedures more affordable. Providing students with information on extended coverage options and communication with their doctors further removes barriers to care and positively impacts student wellbeing. This initiative demonstrates the effectiveness of community-based participatory action in advancing equitable healthcare for trans and gender-diverse individuals and serves as a model for other institutions and organizations striving to promote inclusivity and advocate for marginalized populations' rights.
Keywords: equitable healthcare, trans and gender-diverse individuals, inclusivity, participatory action research project
Procedia PDF Downloads 93
494 Changing the Landscape of Fungal Genomics: New Trends
Authors: Igor V. Grigoriev
Abstract:
Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects that are changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) the development of approaches to go beyond culturable fungi and explore fungal ‘dark matter’, or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes’, aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of ‘macro’ fungi and (b) single cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and has resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. A genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota have been reported. Finally, data production at such a scale requires data integration to enable efficient data analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.
Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics
Procedia PDF Downloads 209
493 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut points of distillates in the crude distillation unit are crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of the quality of distillates should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit, which serves the entire refinery process. In this work, studies were conducted on three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named after separation by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration technique and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
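A minimal sketch of the preprocessing-plus-calibration workflow described above is given below: Savitzky-Golay derivative filtering of the spectra followed by a cross-validated PLS regression. The synthetic spectra, the number of latent variables, and the filter settings are assumptions for illustration; EMSC and GILS are not reproduced here.

```python
# Minimal sketch: Savitzky-Golay preprocessing + PLS calibration of
# NIR-like spectra. Data are synthetic; window/order/latent-variable
# choices are illustrative assumptions (EMSC and GILS are not shown).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 600
wn = np.linspace(4000, 10000, n_wavenumbers)

# Synthetic spectra: two Gaussian bands whose intensities track the property.
y = rng.uniform(0.70, 0.85, n_samples)                  # e.g., density proxy
band1 = np.exp(-0.5 * ((wn - 5800) / 120) ** 2)
band2 = np.exp(-0.5 * ((wn - 8300) / 200) ** 2)
X = (np.outer(y, band1) + np.outer(1 - y, band2)
     + rng.normal(0, 0.01, (n_samples, n_wavenumbers))  # measurement noise
     + rng.normal(0, 0.05, (n_samples, 1)))             # baseline shifts

# Savitzky-Golay first derivative suppresses the baseline offsets.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()
print(f"10-fold CV R^2 = {r2_score(y, y_cv):.3f}")
```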
Procedia PDF Downloads 132492 Scale up of Isoniazid Preventive Therapy: A Quality Management Approach in Nairobi County, Kenya
Authors: E. Omanya, E. Mueni, G. Makau, M. Kariuki
Abstract:
HIV infection is the strongest risk factor for a person to develop TB. Isoniazid preventive therapy (IPT) for people living with HIV (PLHIV) not only reduces the individual patients’ risk of developing active TB but also mitigates cross infection. In Kenya, six months of IPT was recommended through the National TB, Leprosy and Lung Disease Program to treat latent TB. In spite of this recommendation by the national government, uptake of IPT among PLHIV remained low in Kenya by the end of 2015. The USAID/Kenya and East Africa Afya Jijini project, which supports 42 TB/HIV health facilities in Nairobi County, began addressing low uptake of IPT through Quality Improvement (QI) teams set up at the facility level. Quality is characterized by WHO as one of the four main connectors between health system building blocks and health system outputs. Afya Jijini implements the Kenya Quality Model for Health, which involves QI teams being formed at the county, sub-county, and facility levels. The teams review facility performance to identify gaps in service delivery and use QI tools to monitor and improve performance. Afya Jijini supported the formation of these teams in 42 facilities and built the teams’ capacity to review data and use QI principles to identify and address performance gaps. When the QI teams began working on improving IPT uptake among PLHIV, uptake was at 31.8%. The teams first conducted a root cause analysis using cause-and-effect diagrams, which helped the teams brainstorm on and identify barriers to IPT uptake among PLHIV at the facility level. This is a participatory process in which program staff provide technical support to the QI teams in problem identification and problem-solving. The gaps identified were inadequate knowledge and skills on the use of IPT among health care workers, lack of awareness of IPT among patients, inadequate monitoring and evaluation tools, and poor quantification and forecasting of IPT commodities. In response, Afya Jijini trained over 300 health care workers on the administration of IPT, supported patient education, supported quantification and forecasting of IPT commodities, and provided IPT data collection tools to help facilities monitor their performance. The facility QI teams conducted monthly meetings to monitor progress on the implementation of IPT and took corrective action when necessary. IPT uptake improved from 31.8% to 61.2% during the second year of the Afya Jijini project and to 80.1% during the third year of the project’s support. Use of QI teams and root cause analysis to identify and address service delivery gaps, in addition to targeted program interventions and continual performance reviews, can be successful in increasing TB-related service delivery uptake at health facilities.Keywords: isoniazid, quality, health care workers, people living with HIV
Procedia PDF Downloads 100491 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field
Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used for cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), health (e.g., thyme essential oil cures arthritis, normalizes blood pressure, reduces stress on the heart, and cures chest infection and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbs based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic in high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under a rotating magnetic field (RMF) at f = 25 Hz. A self-constructed reactor (MAP) was used for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) against E. coli were determined for the essential oils. The oils were tested at very low concentrations (from 1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF increased the antibacterial efficacy of the tested oils compared with unexposed controls. Extended exposure to the RMF at a frequency of f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL of working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL of working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, how bacteria are killed by EOs treated with an electromagnetic field still remains to be investigated. A possible mechanism based on alteration of the permeability of ionic channels in the bacterial cell walls, affecting transport into the cells, has been proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are causing serious concern throughout the world.Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli
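To make the time-kill readout described above concrete, the short sketch below computes the log10 reduction in viable counts relative to time zero (a reduction greater than 3 log corresponds to a >99.9% kill). The CFU values are invented placeholders, not the study's measurements.

```python
# Hedged sketch of the time-kill readout: log10 reduction of viable counts
# relative to time zero. All CFU values are hypothetical.
import math

def log_reduction(cfu_t0: float, cfu_t: float) -> float:
    """Return the log10 reduction; a value > 3 corresponds to a >99.9% kill."""
    return math.log10(cfu_t0 / cfu_t) if cfu_t > 0 else float("inf")

counts = {0: 1.0e6, 40: 2.5e5, 100: 8.0e2, 160: 0}   # minutes -> CFU/mL (hypothetical)
for t, cfu in counts.items():
    reduction = round(log_reduction(counts[0], cfu), 2) if cfu else ">6 (below detection)"
    print(f"{t} min: log reduction = {reduction}")
```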
Procedia PDF Downloads 157490 Frailty and Quality of Life among Older Adults: A Study of Six LMICs Using SAGE Data
Authors: Mamta Jat
Abstract:
Background: Increased longevity has resulted in a growing percentage of the global population aged 60 years or over. With this “demographic transition” towards ageing, an “epidemiologic transition” is also taking place, characterised by a growing share of non-communicable diseases in the overall disease burden. As a result, many older adults are ageing with chronic disease and high levels of frailty, which often results in a lower quality of life. Although frailty is increasingly common in older adults, preventing or at least delaying the onset of late-life adverse health outcomes and disability is necessary to maintain the health and functional status of the ageing population. This study uses SAGE data to assess levels of frailty, its socio-demographic correlates, and its relation to quality of life in the LMICs of India, China, Ghana, Mexico, Russia, and South Africa in a comparative perspective. Methods: The data come from the multi-country Study on Global AGEing and Adult Health (SAGE), which consists of nationally representative samples of older adults in six low- and middle-income countries (LMICs): China, Ghana, India, Mexico, the Russian Federation, and South Africa. For our purposes, only respondents aged 50+ years were considered. A logistic regression model was used to assess the correlates of frailty. Multinomial logistic regression was used to study the effect of frailty on quality of life (QOL), controlling for the effect of socio-economic and demographic correlates. Results: Among all countries, India has the highest mean frailty in males (0.22) and females (0.26), and China the lowest in males (0.12) and females (0.14). The odds of being frail increase with age across all countries. In India, China, and Russia, the chances of frailty are higher among rural older adults, whereas in Ghana, South Africa, and Mexico rural residence is protective against frailty. Among all countries, China has the highest percentage (71.46%) of frail people with low QOL, whereas Mexico has the lowest (36.13%). The risk of having low or middle QOL is significantly (p<0.001) higher among frail elderly compared to non-frail elderly across all countries after controlling for socio-demographic correlates. Conclusion: Women and older age groups have higher frailty levels than men and younger adults in LMICs. Mean frailty scores demonstrated a strong inverse relationship with education and income gradients, with lower levels of education and wealth showing higher levels of frailty. These patterns are consistent across all LMICs. These data support a significant role of frailty, with all other influences controlled, in low QOL as measured by the WHOQOL index. Future research needs to build on this evolving concept of frailty in an effort to improve quality of life for the frail elderly population in LMIC settings.Keywords: ageing, elderly, frailty, quality of life
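A minimal sketch of the analysis pattern described in the abstract is given below: a deficit-accumulation frailty index (the share of deficits present) and a weight-adjusted multinomial model of QOL category on frailty. All data and variable names are hypothetical, and a full complex-survey analysis with strata and clusters, as used in the study, would require dedicated survey tooling rather than plain sample weights.

```python
# Hedged sketch: frailty index as the proportion of deficits present, then a
# weighted multinomial (softmax) model of QOL tertile. Data are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_deficits = 1000, 40
deficits = rng.binomial(1, 0.2, size=(n, n_deficits))        # 0/1 deficit indicators
df = pd.DataFrame({
    "frailty_index": deficits.mean(axis=1),                   # share of deficits, 0..1
    "age": rng.integers(50, 90, n),
    "female": rng.binomial(1, 0.55, n),
    "weight": rng.uniform(0.5, 2.0, n),                       # survey weight (hypothetical)
})
df["qol_tertile"] = rng.integers(0, 3, n)                      # 0=low, 1=middle, 2=high QOL

X = df[["frailty_index", "age", "female"]]
model = LogisticRegression(max_iter=1000)   # multinomial softmax for 3 classes with lbfgs
model.fit(X, df["qol_tertile"], sample_weight=df["weight"])
print(pd.DataFrame(model.coef_, columns=X.columns, index=["low", "middle", "high"]))
```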
Procedia PDF Downloads 289489 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year
Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans
Abstract:
Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter less than 2.5μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations. Previously, no concentration level had been identified below which health damage is fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US, and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributed to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3μg/m3. We estimate the global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US, and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US, and 20% in the EU. Implementing the WHO AQG of 10μg/m3 would reduce global premature mortality by 54%, 76% in China, and 59% in India. In the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes at the lower PM2.5 standards can have major impacts on global mortality rates.Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality
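As a simplified worked example of the attributable-mortality calculation described above, the sketch below uses a log-linear concentration-response function above the 'safe' threshold; the study itself applies the nonlinear GBD 2010 exposure-response methodology, and all numbers in the example are placeholders.

```python
# Hedged sketch: attributable deaths from a log-linear concentration-response
# function applied above a 'safe' PM2.5 threshold. All inputs are placeholders.
import numpy as np

def attributable_deaths(pm25, baseline_rate, population, threshold=7.3, beta=0.006):
    """Deaths/yr attributable to PM2.5 above the threshold (ug/m3), illustrative only."""
    excess = max(pm25 - threshold, 0.0)
    rr = np.exp(beta * excess)             # relative risk at this exposure level
    af = (rr - 1.0) / rr                   # population attributable fraction
    return af * baseline_rate * population

# hypothetical grid cells: (annual-mean PM2.5, cause-specific death rate, adults >= 30 yr)
cells = [(55.0, 0.009, 2.0e6), (12.0, 0.007, 1.5e6), (8.0, 0.006, 0.8e6)]
total = sum(attributable_deaths(c, r, p) for c, r, p in cells)
print(f"attributable premature deaths: {total:,.0f} per year")
```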
Procedia PDF Downloads 310488 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on LI-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has seen a significant increase globally. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology to tackle the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate over whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very little maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy, thereby maintaining grid frequency at a rapid rate. Moving BESS technology, as implemented in a selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, applications of Li-ion batteries such as self-consumption of wind and solar parks or their ancillary services, wind and solar energy storage during low demand, black start, island operation, residential home storage, etc., offer a solution to effectively integrate renewables and support Europe’s future smart grid. The EMT software tool DIgSILENT PowerFactory has been used to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. Transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding, i.e., the automatic gradual disconnection of loads from the grid, and frequency-dependent electricity generation, i.e., the automatic gradual connection of BESS to the grid, are used as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.Keywords: frequency control, grid stability, li-ion battery storage, smart grid
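The sketch below is a minimal single-area frequency-response model (not the DIgSILENT PowerFactory model used in the paper) showing how a droop-controlled battery can arrest a frequency dip after a sudden generation deficit; all parameters are invented for illustration.

```python
# Hedged sketch: single-area swing equation with a droop-controlled BESS.
# Per-unit quantities; every parameter below is a placeholder, not study data.
import numpy as np

f0, H = 50.0, 4.0            # nominal frequency (Hz), system inertia constant (s)
droop, deadband = 0.05, 0.02 # BESS droop (pu power per pu frequency) and deadband (Hz)
p_bess_max = 0.05            # BESS rating as a fraction of system load (pu)
dp_loss = -0.10              # sudden generation deficit (pu)

dt, t_end = 0.01, 30.0
freq, delta_f = [], 0.0
for _ in np.arange(0.0, t_end, dt):
    # droop response of the battery, limited by its rating and a small deadband
    err = max(abs(delta_f) - deadband, 0.0) * np.sign(-delta_f)
    p_bess = np.clip(err / (droop * f0), -p_bess_max, p_bess_max)
    # aggregate primary response of conventional units plus load damping
    p_primary = np.clip(-delta_f / (0.08 * f0), -0.15, 0.15)
    d_delta_f = f0 / (2.0 * H) * (dp_loss + p_bess + p_primary - 0.01 * delta_f)
    delta_f += d_delta_f * dt
    freq.append(f0 + delta_f)

print(f"frequency nadir: {min(freq):.2f} Hz, quasi-steady state: {freq[-1]:.2f} Hz")
```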
Procedia PDF Downloads 152487 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the overall industrial supply chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, which leads to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we performed a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggest helpful managerial insights for courier companies.Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
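The skeleton below sketches the destroy/repair/acceptance structure of an adaptive large-neighbourhood search of the kind described above; the operators, cost function, acceptance temperature, and weight-update rule are placeholders, not the authors' hybrid algorithm.

```python
# Hedged ALNS skeleton: adaptive operator selection, destroy/repair, and a
# simulated-annealing acceptance rule. Operators and parameters are placeholders.
import math
import random

def alns(initial_solution, destroy_ops, repair_ops, cost, iters=5000, start_temp=100.0):
    current = best = initial_solution
    weights_d = {op: 1.0 for op in destroy_ops}
    weights_r = {op: 1.0 for op in repair_ops}
    temp = start_temp
    for _ in range(iters):
        d = random.choices(list(weights_d), weights=weights_d.values())[0]
        r = random.choices(list(weights_r), weights=weights_r.values())[0]
        candidate = r(d(current))                      # destroy part of the solution, rebuild it
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):   # acceptance test
            current = candidate
        if cost(current) < cost(best):
            best = current
            score = 3.0                                 # reward operators that found a new best
        else:
            score = 1.0 if current is candidate else 0.0
        weights_d[d] = 0.8 * weights_d[d] + 0.2 * score  # adaptive operator weights
        weights_r[r] = 0.8 * weights_r[r] + 0.2 * score
        temp *= 0.999                                    # cooling schedule
    return best
```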
Procedia PDF Downloads 80486 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, or intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them through interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the handbook “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement”. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to adhere correctly to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65485 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
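A condensed sketch of the CT-processing steps described above (denoising, thresholding, watershed separation of touching grains, per-particle statistics) is given below using scikit-image. The file name, voxel size, and filter settings are assumptions; the authors' actual script is not reproduced here.

```python
# Hedged sketch of a CT segmentation pipeline: denoise, threshold, watershed,
# then per-particle size statistics. Inputs and settings are hypothetical.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation, feature

volume = np.load("ct_volume.npy")                 # 3D grey-scale CT stack (hypothetical file)
voxel_mm = 0.05                                   # assumed voxel edge length in mm

smoothed = filters.gaussian(volume, sigma=1.0)    # noise reduction
binary = smoothed > filters.threshold_otsu(smoothed)   # grains vs. voids

# Watershed on the distance transform separates touching particles
distance = ndi.distance_transform_edt(binary)
peaks = feature.peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=binary)

# Equivalent-sphere diameters give the particle size distribution
props = measure.regionprops(labels)
diam_mm = np.array([(6 * p.area / np.pi) ** (1 / 3) * voxel_mm for p in props])
print(f"{labels.max()} particles, D50 = {np.median(diam_mm):.2f} mm")
```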
Procedia PDF Downloads 32484 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India
Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.
Abstract:
Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission using the Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements like Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and Cox regression models were used to determine whether nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnutrition by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) died, of whom 30 died in the hospital and 31 within one month of discharge. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in 111 Category C patients) than well-nourished patients (10.1% in 129 Category A patients), with OR 9.17, p-value 0.007. On multivariate logistic regression, age and a higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the deceased group (mean = 4.38) was significantly higher than that of the surviving cohort (mean = 2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean = 9.4 d) compared to the well-nourished group (mean = 8.03 d), with a trend towards significance (p = 0.061). None of the anthropometric measurements (BMI, MUAC, or TSF) showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and treat malnutrition in the hospital and post-discharge are needed.Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)
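As a small worked example of the univariate comparison reported above, the sketch below computes an odds ratio with a 95% confidence interval from a 2x2 table of mortality by nutritional category; the counts are placeholders, not the study's data.

```python
# Hedged sketch: odds ratio and Wald 95% CI from a 2x2 table. Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = deaths/survivors in the exposed group; c,d = deaths/survivors in the reference group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts: 30/70 deaths/survivors among malnourished, 10/90 among well-nourished
print("OR (95%% CI): %.2f (%.2f-%.2f)" % odds_ratio_ci(30, 70, 10, 90))
```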
Procedia PDF Downloads 150483 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of this molecule, like 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of an additional aza group at the 2-position. Solution-phase studies suggest the relevance of these molecules, but no experimental gas-phase studies have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data of 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We have employed state-of-the-art laser spectroscopic methods such as fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization-time of flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is found at 33910 cm⁻¹, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is found at 33074 cm⁻¹. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively complex molecule to study. The ionization potential is reduced by 0.14 eV in the case of the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, on comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm⁻¹, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), and were obtained at 3523 and 3467 cm⁻¹, respectively. The lower value of νNH in the electronically excited state of 2,7-DAI implies higher acidity of the group compared to the ground state. Moreover, we have carried out extensive computational analysis, which suggests that the energy barrier in the excited state decreases significantly as the number of catalytic solvent molecules (S = H2O, NH3) and their polarity increase. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy
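As a short numerical companion to the band positions quoted above, the sketch below converts wavenumbers to wavelength and photon energy and evaluates the cluster-induced red shift of the S1 ← S0 origin; the conversion constants are standard, and the band positions are taken from the text.

```python
# Hedged sketch: unit conversions for the reported origin bands (values from the text).
HC_EV_CM = 1.0 / 8065.544  # eV per cm^-1

def describe(label, wavenumber_cm):
    wavelength_nm = 1.0e7 / wavenumber_cm
    energy_ev = wavenumber_cm * HC_EV_CM
    print(f"{label}: {wavenumber_cm} cm^-1 = {wavelength_nm:.1f} nm = {energy_ev:.3f} eV")

describe("2,7-DAI origin    ", 33910)
describe("2,7-DAI-H2O origin", 33074)
print(f"red shift on hydration: {33910 - 33074} cm^-1")
```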
Procedia PDF Downloads 75482 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to manage more efficiently the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers are the central focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy, viewed more broadly, means the acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy as the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially for those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom gives students with disabilities new ways to engage with the content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions to meet the needs of DHH students so as to increase their literacy levels. It will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study is to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity, provide a representative sample of the population the study aims to measure, and provide a basis for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 491481 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the available data regarding their applicability, validity, and reliability are sparse, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, ScienceDirect, and PubMed, as well as grey literature, were used to access suitable studies. Key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, and sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles for inclusion on a 9-point scale and assessed the risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screen (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screen (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for application in evidence-based practice.Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
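A compact sketch of the heterogeneity statistic quoted above (Cochran's Q and I²) from an inverse-variance fixed-effect pool of study-level estimates is shown below; the effect sizes and standard errors are placeholders, not the review's data.

```python
# Hedged sketch: fixed-effect inverse-variance pooling with Cochran's Q and I^2.
# The study-level estimates and standard errors below are hypothetical.
import numpy as np

effects = np.array([0.72, 0.58, 0.66, 0.50])   # e.g. study-level ICC or sensitivity estimates
se = np.array([0.05, 0.05, 0.06, 0.06])        # their standard errors

w = 1.0 / se**2                                 # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100.0             # % of variation beyond chance

ci = 1.96 / np.sqrt(np.sum(w))
print(f"pooled = {pooled:.2f} (95% CI {pooled - ci:.2f}-{pooled + ci:.2f}), Q = {q:.2f}, I^2 = {i2:.1f}%")
```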
Procedia PDF Downloads 180