Search results for: Robert Michael
137 Telemedicine Versus Face-to-Face Follow-Up in General Surgery: A Randomized Controlled Trial
Authors: Teagan Fink, Lynn Chong, Michael Hii, Brett Knowles
Abstract:
Background: Telemedicine is a rapidly advancing field providing healthcare to patients at a distance from their treating clinician. There is a paucity of high-quality evidence detailing the safety and acceptability of telemedicine for postoperative outpatient follow-up. This randomized controlled trial, conducted prior to the COVID-19 pandemic, aimed to assess patient satisfaction and safety (as determined by readmission, reoperation and complication rates) of telephone compared to face-to-face clinic follow-up after uncomplicated general surgical procedures. Methods: Patients following uncomplicated laparoscopic appendicectomy or cholecystectomy and laparoscopic or open umbilical or inguinal hernia repairs were randomized to telephone or face-to-face outpatient clinic follow-up. Data points including patient demographics, perioperative details and postoperative outcomes (e.g., wound healing complications, pain scores, unplanned readmission to hospital and return to daily activities) were compared between groups. Patients also completed a Likert patient satisfaction survey following their consultation. Results: 103 patients were recruited over a 12-month period (21 laparoscopic appendicectomies, 65 laparoscopic cholecystectomies, nine open umbilical hernia repairs, six laparoscopic inguinal hernia repairs and two laparoscopic umbilical hernia repairs). Baseline patient demographics and operative interventions were similar in both groups. Patient- or clinician-reported concerns regarding postoperative pain, use of analgesia, wound healing complications and return to daily activities at clinic follow-up were not significantly different between the two groups. Of the 58 patients randomized to the telemedicine arm, 40% reported high and 60% reported very high patient satisfaction.
Telemedicine clinic mean consultation times were significantly shorter than face-to-face consultation times (telemedicine 10.3 ± 7.2 minutes, face-to-face 19.2 ± 23.8 minutes, p = 0.014). Rates of failure to attend clinic were not significantly different (telemedicine 3%, control 6%). There was no increased rate of postoperative complications in patients followed up by telemedicine compared to in person. There were no unplanned readmissions, returns to theatre, or mortalities in this study. Conclusion: Telemedicine follow-up of patients undergoing uncomplicated general surgery is safe and did not result in any missed diagnoses or higher rates of complications. Telemedicine provides high patient satisfaction, and steps to implement this modality in outpatient care should be undertaken.
Keywords: general surgery, telemedicine, patient satisfaction, patient safety
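The consultation-time comparison can be sanity-checked from the reported summary statistics alone. Below is a minimal sketch of Welch's unequal-variance t statistic; the group sizes of 58 (telemedicine) and 45 (face-to-face) are assumptions inferred from the 103-patient total, and the abstract's p = 0.014 may come from a different test, so this is illustrative only.

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from group summary stats."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Mean +/- SD consultation times (minutes) as reported in the abstract;
# the per-arm sample sizes are assumed, not stated for this comparison.
t, df = welch_t_from_summary(19.2, 23.8, 45, 10.3, 7.2, 58)
print(round(t, 2), round(df, 1))
```

Under these assumptions the difference is significant at conventional levels, consistent with the reported result.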
Procedia PDF Downloads 118
136 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center
Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar
Abstract:
Introduction: Congenital diaphragmatic hernia (CDH) is a condition characterized by the herniation of abdominal contents into the thoracic cavity, requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to examine CDH outcomes at Kentucky Children’s Hospital (KCH), a low-volume center, compared to the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review was performed at KCH from 2007-2019 for neonates with CDH, who were then subdivided into two cohorts: those requiring ECMO therapy and those not requiring ECMO therapy. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. Measures of morbidity for the ECMO cohort, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days, were collected. Statistical analysis was performed using IBM SPSS Statistics version 28. One-sample t-tests and one-sample Wilcoxon signed-rank tests were used as appropriate. Results: There were a total of 27 neonatal patients with CDH at KCH from 2007-2019; 9 of the 27 required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p = 0.655; 37.0 weeks vs 37.4 weeks, p = 0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p = 0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p < 0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p = 0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p = 0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p = 0.060) as proxies for severity, mortality in the two groups was comparable.
No significant difference was noted for pulmonary outcomes such as average ventilator days (KCH 43.2 vs. CDHSG 17.3, p = 0.078) and home oxygen dependency (KCH 44% vs. CDHSG 24%, p = 0.108). Average length of hospital stay for patients treated at KCH was similar to the CDHSG (64.4 vs 49.2 days, p = 1.000). Conclusion: Our study suggests that outcomes in CDH patients can be independent of a center's case volume status. Management of CDH with a standardized approach in a low-volume center can yield similar outcomes. These data support the treatment of patients with CDH at low-volume centers as opposed to transferring them to higher-volume centers.
Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate
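The one-sample t-test used in the Methods, comparing a small single-centre cohort against a fixed registry benchmark, can be sketched as follows. The birth-weight values below are hypothetical and only illustrate the comparison against the CDHSG benchmark mean of 2.92 kg.

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t statistic and degrees of freedom against a benchmark mean."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return (statistics.fmean(sample) - mu0) / se, n - 1

# Hypothetical single-centre birth weights (kg) vs. the registry benchmark.
weights = [2.4, 3.1, 2.8, 3.3, 2.9, 3.0, 2.7, 3.2, 2.6]
t, df = one_sample_t(weights, 2.92)
print(round(t, 2), df)
```

A small |t| relative to the critical value for the given degrees of freedom corresponds to the non-significant differences reported above.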
Procedia PDF Downloads 96
135 Middle School as a Developmental Context for Emergent Citizenship
Authors: Casta Guillaume, Robert Jagers, Deborah Rivas-Drake
Abstract:
Civically engaged youth are critical to maintaining and/or improving the functioning of local, national and global communities and their institutions. The present study investigated how school climate and academic beliefs (academic self-efficacy and school belonging) may inform emergent civic behaviors (emergent citizenship) among self-identified middle school youth of color (African American, Multiracial or Mixed, Latino, Asian American or Pacific Islander, Native American, and other). The study aims were: 1) to understand whether and how school climate is associated with civic engagement behaviors, directly and indirectly, by fostering a positive sense of connection to the school and/or engendering feelings of self-efficacy in the academic domain; accordingly, we examined 2) the association of youths’ sense of school connection and academic self-efficacy with their personally responsible and participatory civic behaviors in school and community contexts, both concurrently and longitudinally. Data from two subsamples of a larger study of social/emotional development among middle school students were used for longitudinal and cross-sectional analysis. The cross-sectional sample included 324 6th-8th grade students, of whom 43% identified as African American, 20% as Multiracial or Mixed, 18% as Latino, 12% as Asian American or Pacific Islander, 6% as Other, and 1% as Native American. The age of the sample ranged from 11 to 15 years (M = 12.33, SD = .97). For the longitudinal test of our mediation model, we drew on data from the 6th and 7th grade cohorts only (n = 232); the ethnic and racial diversity of this longitudinal subsample was virtually identical to that of the cross-sectional sample. For both the cross-sectional and longitudinal analyses, full information maximum likelihood was used to handle missing data.
Fit indices were inspected to determine if they met the recommended thresholds of RMSEA below .05 and CFI and TLI values of at least .90. To determine if particular mediation pathways were significant, the bias-corrected bootstrap confidence intervals for each indirect pathway were inspected. Fit indices for the latent variable mediation model using the cross-sectional data suggest that the hypothesized model fit the observed data well (CFI = .93; TLI = .92; RMSEA = .05, 90% CI = [.04, .06]). In the model, students’ perceptions of school climate were significantly and positively associated with greater feelings of school connectedness, which were in turn significantly and positively associated with civic engagement. In addition, school climate was significantly and positively associated with greater academic self-efficacy, but academic self-efficacy was not significantly associated with civic engagement. Tests of mediation indicated there was one significant indirect pathway between school climate and civic engagement behavior: an indirect association between school climate and civic engagement via its association with sense of school connectedness, indirect association estimate = .17 [95% CI: .08, .32]. This indirect association via school connectedness accounted for 50% (.17/.34) of the total effect. Partial support was found for the prediction that students’ perceptions of a positive school climate are linked to civic engagement in part through their role in students’ sense of connection to school.
Keywords: civic engagement, early adolescence, school climate, school belonging, developmental niche
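The indirect-effect logic behind the mediation test can be sketched with simulated data. This stand-in uses simple (not latent-variable) regression and a percentile rather than bias-corrected bootstrap, and the b path is not adjusted for the predictor as a full mediation model would require, so it only illustrates the idea of bootstrapping the a*b product.

```python
import random
import statistics

def ols_slope(x, y):
    """Simple OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

random.seed(1)
n = 300
# Simulated climate -> connectedness -> engagement chain (illustrative only).
climate = [random.gauss(0, 1) for _ in range(n)]
connect = [0.5 * c + random.gauss(0, 1) for c in climate]
engage = [0.4 * m + random.gauss(0, 1) for m in connect]

# Bootstrap the indirect effect a*b; percentile 95% interval.
boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    a = ols_slope([climate[i] for i in idx], [connect[i] for i in idx])
    b = ols_slope([connect[i] for i in idx], [engage[i] for i in idx])
    boots.append(a * b)
boots.sort()
lo, hi = boots[24], boots[974]
print(lo > 0)  # interval excluding zero -> significant mediation
```

An interval whose lower bound stays above zero is the criterion the study applies to its bias-corrected intervals.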
Procedia PDF Downloads 370
134 Efficacy and Safety of COVID-19 Vaccination in Patients with Multiple Sclerosis: Looking Forward to Post-COVID-19
Authors: Achiron Anat, Mathilda Mandel, Mayust Sue, Achiron Reuven, Gurevich Michael
Abstract:
Introduction: As coronavirus disease 2019 (COVID-19) vaccination is currently spreading around the world, it is important to assess the ability of multiple sclerosis (MS) patients to mount an appropriate immune response to the vaccine in the context of disease-modifying treatments (DMTs). Objectives: To evaluate the immunity generated following COVID-19 vaccination in MS patients, and to assess factors contributing to protective humoral and cellular immune responses in MS patients vaccinated against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Methods: We review our recent data related to (1) the safety of the Pfizer BNT162b2 COVID-19 mRNA vaccine in adult MS patients; (2) the humoral post-vaccination SARS-CoV-2 IgG response in MS vaccinees using anti-spike protein-based serology; (3) the cellular immune response of memory B-cells specific for the SARS-CoV-2 receptor-binding domain (RBD) and memory T-cells secreting IFN-γ and/or IL-2 in response to SARS-CoV-2 peptides using ELISpot/FluoroSpot assays in MS patients either untreated or under treatment with fingolimod, cladribine, or ocrelizumab; and (4) covariate parameters related to mounting protective immune responses. Results: The COVID-19 vaccine proved safe in MS patients, with an adverse event profile mainly characterised by pain at the injection site, fatigue, and headache. No increased risk of relapse activity was noted, and the rate of patients with acute relapse was comparable to the relapse rate in non-vaccinated patients during the corresponding follow-up period. A mild increase in the rate of adverse events was noted in younger MS patients, among patients with lower disability, and in patients treated with DMTs. Following COVID-19 vaccination, the protective humoral immune response was significantly decreased in fingolimod- and ocrelizumab-treated MS patients, and SARS-CoV-2 specific B-cell and T-cell responses were correspondingly decreased.
Untreated MS patients and patients treated with cladribine demonstrated protective humoral and cellular immune responses, similar to healthy vaccinated subjects. Conclusions: The COVID-19 BNT162b2 vaccine proved safe for MS patients, and no increased risk of relapse activity was noted post-vaccination. Although COVID-19 vaccination is new, the accumulated data demonstrate differences in immune responses under various DMTs. This knowledge can help in constructing appropriate COVID-19 vaccine guidelines to ensure proper immune responses for MS patients.
Keywords: COVID-19, vaccination, multiple sclerosis, IgG
Procedia PDF Downloads 139
133 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry are adjusting their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies are transforming into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries dissolve as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. With the inclusion of Cyber-Physical Systems, communication between machines becomes as rich as their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the development of Industry 4.0 within the literature and reviews the associated research streams. We analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks.
We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption or rejection of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream on Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
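The cluster-analysis step, assigning sub-topics to research fields, can be illustrated with a toy k-means pass over two-dimensional keyword-count vectors. The sub-topic names and counts below are invented for illustration; the paper's actual analysis operates on coded journal content, not these toy features.

```python
import statistics

# Toy sub-topic vectors: (mentions of "customization", mentions of "network").
subtopics = {
    "modular products": (9, 1),
    "configurators": (8, 2),
    "supplier integration": (1, 9),
    "data exchange": (2, 8),
}

def nearest(point, centroids):
    """Index of the closest centroid by squared Euclidean distance."""
    d = [(point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2 for c in centroids]
    return d.index(min(d))

# Two starting centroids, then a few k-means refinement passes.
centroids = [(9.0, 1.0), (1.0, 9.0)]
for _ in range(5):
    groups = [[], []]
    for vec in subtopics.values():
        groups[nearest(vec, centroids)].append(vec)
    centroids = [
        (statistics.fmean(v[0] for v in g), statistics.fmean(v[1] for v in g))
        for g in groups
    ]

labels = {name: nearest(vec, centroids) for name, vec in subtopics.items()}
print(labels)
```

Sub-topics dominated by the same keyword end up in the same cluster, which is the grouping logic behind assigning them to a research field.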
Procedia PDF Downloads 277
132 Examining the Role of Farmer-Centered Participatory Action Learning in Building Sustainable Communities in Rural Haiti
Authors: Charles St. Geste, Michael Neumann, Catherine Twohig
Abstract:
Our primary aim is to examine farmer-centered participatory action learning as a tool to improve agricultural production, build resilience to climate shocks and, more broadly, advance community-driven solutions for sustainable development in rural communities across Haiti. For over six years, more than sixty farmers from Deslandes, Haiti, organized in three traditional work groups called konbits, have designed and tested low-input agroecology techniques as part of the Konbit Vanyan Kapab Pwoje Agroekoloji. The project utilizes a participatory action learning approach, emphasizing social inclusion, building on local knowledge, experiential learning, active farmer participation in trial design and evaluation, and cross-community sharing. Mixed methods were used to evaluate changes in knowledge and adoption of agroecology techniques, confidence in advancing agroecology locally, and innovation among Konbit Vanyan Kapab farmers. While skill and knowledge in the application of agroecology techniques varied among individual farmers, a majority of farmers successfully adopted techniques outside of the trial farms. The use of agroecology techniques on trial and individual farms has doubled crop production in many cases. Farm income has also increased, and farmers report less damage to crops and property caused by extreme weather events. Furthermore, participatory action strategies have led to greater local self-determination and greater capacity for sustainable community development. With increased self-confidence and the knowledge and skills acquired from participating in the project, farmers prioritized sharing their successful techniques with other farmers and have developed a farmer-to-farmer training program that incorporates participatory action learning. Using adult education methods, farmers trained as agroecology educators are currently providing training in sustainable farming practices to farmers from five villages in three departments across Haiti.
Konbit Vanyan Kapab farmers have also begun testing production of value-added food products, including a dried soup mix and tea. Key factors for success include: opportunities for farmers to actively participate in all phases of the project, group diversity, resources for application of agroecology techniques, and a focus on group processes and overcoming local barriers to inclusive decision-making.
Keywords: agroecology, participatory action learning, rural Haiti, sustainable community development
Procedia PDF Downloads 156
131 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms
Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan
Abstract:
Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g., attention and inhibitory control) and is considered a hallmark of clinical disorders such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as poorer task performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with RT variability remain relatively poorly defined, particularly in healthy samples. Method: We used the intra-individual coefficient of variation (ICV) as an index of RT variability from “Go” responses on the Stop Signal Task. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared to a matched non-symptomatic control group (n=80). The relationships of ICV with brain activity during successful and unsuccessful inhibitions and with gray matter volume were examined. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we looked at functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task.
Results: The brain data revealed that higher ICV was associated with greater gray matter volume and functional activation in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without a clinical diagnosis and typically developing controls. Our findings shed light on specific functional and structural brain regions that are implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity
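The ICV measure used above is simply the within-person standard deviation of Go reaction times divided by their mean. A minimal sketch with made-up RT series:

```python
import statistics

def icv(rts):
    """Intra-individual coefficient of variation: SD / mean of Go RTs."""
    return statistics.stdev(rts) / statistics.fmean(rts)

# Illustrative RT series (ms): a consistent and a variable responder.
steady = [420, 435, 410, 428, 417, 431, 423, 415]
variable = [390, 520, 345, 610, 402, 575, 360, 488]
print(icv(steady) < icv(variable))  # higher ICV = more variable responding
```

Dividing by the mean makes the index comparable across individuals with different overall response speeds.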
Procedia PDF Downloads 255
130 Screening for Larvicidal Activity of Aqueous and Ethanolic Extracts of Fourteen Selected Plants and Formulation of a Larvicide against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) Larvae
Authors: Michael Russelle S. Alvarez, Noel S. Quiming, Francisco M. Heralde
Abstract:
This study aimed to: a) obtain ethanolic (95% EtOH) and aqueous extracts of Selaginella elmeri, Christella dentata, Elatostema sinnatum, Curculigo capitulata, Euphorbia hirta, Murraya koenigii, Alpinia speciosa, Cymbopogon citratus, Eucalyptus globulus, Jatropha curcas, Psidium guajava, Gliricidia sepium, Ixora coccinea and Capsicum frutescens and screen them for larvicidal activity against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) larvae; b) fractionate the most active extract and determine the most active fraction; c) determine the larvicidal properties of the most active extract and fraction by computing their percentage mortality, LC50, and LC90 after 24 and 48 hours of exposure; and d) determine the nature of the components of the active extracts and fractions using phytochemical screening. The ethanolic (95% EtOH) and aqueous extracts of the selected plants were screened for potential larvicidal activity against Ae. aegypti and Ae. albopictus using standard procedures, with 1% malathion and a Piper nigrum-based ovicide-larvicide from the Department of Science and Technology as positive controls. The results were analyzed using one-way ANOVA with Tukey’s and Dunnett’s tests. The most active extract was subjected to partial fractionation using normal-phase column chromatography, and the fractions were subsequently screened to determine the most active fraction. The most active extract and fraction were subjected to a dose-response assay and probit analysis to determine the LC50 and LC90 after 24 and 48 hours of exposure. The active extracts and fractions were screened for phytochemical content. The ethanolic extracts of C. citratus, E. hirta, I. coccinea, G. sepium, M. koenigii, E. globulus, J. curcas and C. frutescens exhibited significant larvicidal activity, with C. frutescens being the most active. After fractionation, the ethyl acetate fraction was found to be the most active.
Phytochemical screening of the extracts revealed the presence of alkaloids, tannins, indoles and steroids. A formulation in talcum powder (300 mg of fraction per 1 g of talcum powder) was made and again tested for larvicidal activity. At 2 g/L, the formulation proved effective, killing all of the test larvae after 24 hours.
Keywords: larvicidal activity screening, partial purification, dose-response assay, Capsicum frutescens
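The LC50/LC90 estimation via probit analysis can be sketched as a classic Finney-style probit line: regress the probit transform of observed mortality on log10(dose), then invert. The dose-mortality values below are hypothetical, and a full probit analysis would use maximum likelihood rather than this simple least-squares line.

```python
import math
import statistics

NORM = statistics.NormalDist()

def lc_estimates(doses, mortality):
    """LC50 and LC90 from a least-squares probit line on log10(dose)."""
    x = [math.log10(d) for d in doses]
    y = [NORM.inv_cdf(p) for p in mortality]  # probit (without the +5 offset)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    lc50 = 10 ** (-intercept / slope)  # probit = 0 at 50% mortality
    lc90 = 10 ** ((NORM.inv_cdf(0.9) - intercept) / slope)
    return lc50, lc90

# Hypothetical 24 h dose-mortality data (doses in g/L).
lc50, lc90 = lc_estimates([0.25, 0.5, 1.0, 2.0], [0.10, 0.35, 0.70, 0.95])
print(round(lc50, 2), round(lc90, 2))
```

By construction the LC90 exceeds the LC50, and both are reported on the original dose scale.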
Procedia PDF Downloads 329
129 Phytochemistry and Alpha-Amylase Inhibitory Activities of Rauvolfia vomitoria (Afzel) Leaves and Picralima nitida (Stapf) Seeds
Authors: Oseyemi Omowunmi Olubomehin, Olufemi Michael Denton
Abstract:
Diabetes mellitus is a disease related to the digestion of carbohydrates, proteins and fats and how this affects blood glucose levels. The various synthetic drugs employed in the management of the disease work through different mechanisms. Keeping postprandial blood glucose levels within an acceptable range is a major factor in the management of type 2 diabetes and its complications. Thus, the inhibition of carbohydrate-hydrolyzing enzymes such as α-amylase is an important strategy for lowering postprandial blood glucose levels, but synthetic inhibitors have undesirable side effects such as flatulence, diarrhea and gastrointestinal disorders, to mention a few. It is therefore necessary to identify and explore α-amylase inhibitors from plants, given their availability, safety, and low cost. In the present study, extracts from the leaves of Rauvolfia vomitoria and seeds of Picralima nitida, which are used in the Nigerian traditional system of medicine to treat diabetes, were tested for their α-amylase inhibitory effect. The powdered plant samples were subjected to phytochemical screening using standard procedures. The leaves and seeds were macerated successively with n-hexane, ethyl acetate and methanol to yield crude extracts, which, at different concentrations (0.1, 0.5 and 1 mg/mL) alongside the standard drug acarbose, were subjected to an α-amylase inhibitory assay using the Benfield and Miller methods with slight modification. Statistical analysis was done using ANOVA in SPSS version 2.0. The phytochemical screening of the leaves of Rauvolfia vomitoria and the seeds of Picralima nitida showed the presence of alkaloids, tannins, saponins and cardiac glycosides; in addition, Rauvolfia vomitoria had phenols and Picralima nitida had terpenoids.
The α-amylase assay results revealed that, at 1 mg/mL, the methanol, hexane and ethyl acetate extracts of the leaves of Rauvolfia vomitoria gave 15.74, 23.13 and 26.36% α-amylase inhibition, respectively, and the seeds of Picralima nitida gave 15.50, 30.68 and 36.72% inhibition; these values were not significantly different from the control at p < 0.05, while acarbose gave a significant 56% inhibition at p < 0.05. The presence of alkaloids, phenols, tannins, steroids, saponins, cardiac glycosides and terpenoids in these plants may be responsible for the observed anti-diabetic activity. However, the low percentages of α-amylase inhibition by these plant samples suggest that α-amylase inhibition is not the major mechanism by which the two plants exert their anti-diabetic effect.
Keywords: alpha-amylase, Picralima nitida, postprandial hyperglycemia, Rauvolfia vomitoria
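Percent inhibition in assays of this kind is typically computed relative to an uninhibited control reaction; a minimal sketch (the absorbance readings below are made up):

```python
def percent_inhibition(abs_control, abs_sample):
    """Percent enzyme inhibition relative to an uninhibited control reaction."""
    return 100 * (abs_control - abs_sample) / abs_control

# Hypothetical absorbance readings from a reducing-sugar (DNS-type) readout:
# less reducing sugar produced in the presence of extract = more inhibition.
inhib = percent_inhibition(0.82, 0.52)
print(round(inhib, 1))
```

The same formula applies at each extract concentration, which is how the percentage figures above are obtained.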
Procedia PDF Downloads 191
128 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows a team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used, as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles such as Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles such as Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of the customer
Procedia PDF Downloads 108
127 Prenatal Paraben Exposure Impacts Infant Overweight Development and in vitro Adipogenesis
Authors: Beate Englich, Linda Schlittenbauer, Christiane Pfeifer, Isabel Kratochvil, Michael Borte, Gabriele I. Stangl, Martin von Bergen, Thorsten Reemtsma, Irina Lehmann, Kristin M. Junge
Abstract:
The worldwide production of endocrine disrupting compounds (EDCs) has risen dramatically over the last decades, as has the prevalence of obesity. Many EDCs are believed to contribute to this obesity epidemic by enhancing adipogenesis or disrupting relevant metabolism. This effect is most pronounced in the early prenatal period, when priming effects find a highly vulnerable time window. We therefore investigated the impact of parabens on childhood overweight development and on adipogenesis in general. Parabens are esters of 4-hydroxybenzoic acid and part of many cosmetic products and food packaging; ubiquitous exposure is therefore found in the westernized world, with exposure already starting during the sensitive prenatal period. We assessed maternal cosmetic product consumption, prenatal paraben exposure and infant BMI z-scores in the prospective German LINA cohort. In detail, maternal urinary concentrations (34 weeks of gestation) of methyl paraben (MeP), ethyl paraben (EtP), n-propyl paraben (PrP) and n-butyl paraben (BuP) were quantified using UPLC-MS/MS. Body weight and height of the children were assessed during annual clinical visits. Further, we investigated the direct influence of these parabens on adipogenesis in vitro using a human mesenchymal stem cell (MSC) differentiation assay to mimic a prenatal exposure scenario. MSCs were exposed to 0.1 – 50 µM paraben during the entire differentiation period. Differentiation outcome was monitored by impedance spectrometry, real-time PCR and triglyceride staining. We found that maternal cosmetic product consumption was highly correlated with urinary paraben concentrations during pregnancy. Further, prenatal paraben exposure was linked to higher BMI z-scores in children. Our in vitro analysis revealed that especially the long-chain paraben BuP stimulates adipogenesis by increasing the expression of adipocyte-specific genes (PPARγ, ADIPOQ, LPL, etc.) and triglyceride storage.
Moreover, we found that adiponectin secretion is increased whereas leptin secretion is reduced under BuP exposure in vitro. Further mechanistic analyses of receptor binding and activation of PPARγ and other key players in adipogenesis are currently in progress. We conclude that maternal cosmetic product consumption is linked to prenatal paraben exposure of children and contributes to the development of infant overweight by triggering key pathways of adipogenesis.
Keywords: adipogenesis, endocrine disruptors, paraben, prenatal exposure
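The BMI z-scores mentioned above are conventionally computed with the LMS method; the abstract does not state which reference tables the cohort used, so the sketch below uses hypothetical reference values (L, M, S) purely for illustration.

```python
import math

def bmi_z_score(bmi, L, M, S):
    """LMS z-score: z = ((bmi/M)**L - 1) / (L*S) for L != 0,
    and z = ln(bmi/M) / S in the limiting case L == 0."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

# Hypothetical reference values for a given age and sex; real analyses
# use published age- and sex-specific tables (e.g. WHO growth references).
z = bmi_z_score(17.8, L=-1.6, M=16.0, S=0.08)  # child somewhat above the median
```

A child whose BMI equals the reference median M gets z = 0 by construction; positive z indicates higher-than-median BMI, which is the outcome the cohort links to prenatal paraben exposure.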
Procedia PDF Downloads 272
126 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study
Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon
Abstract:
The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. To compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 - March 2021). A population-based study using provincial health administrative data. General adult population (Ontario, Canada). Monthly all-cause mortality, and hospitalizations, emergency department (ED) and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first year were compared with similar periods in previous years (2016-2019) by fitting monthly time series auto-regressive integrated moving-average models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected of 71.24 [66.35-76.50]) and then returned to expected in June 2020, except among immigrants and people with mental health conditions, where they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected of 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected of 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected of 5,060.23 [4,712.64-5,433.46]) and then returned to expected in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with temporarily increased mortality, disproportionately experienced by immigrants and those with mental health conditions. 
The Ottawa Hospital Academic Medical Organization.
Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy
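The observed-versus-expected comparison above rests on projecting pre-pandemic monthly series forward. As a hedged illustration only, the study fits seasonal ARIMA models to 2016-2019 data, whereas the sketch below uses a bare AR(1) stand-in fitted by least squares, with synthetic mortality rates, to show the shape of the logic:

```python
def fit_ar1(series):
    """Least-squares fit of x_t = a + b * x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def forecast(a, b, last, steps):
    """Iterate the fitted recurrence forward to get expected values."""
    out = []
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

# Synthetic pre-pandemic monthly mortality rates per 100,000 (illustrative)
pre_pandemic = [70, 71, 69, 72, 70, 71, 70, 72, 71, 70, 71, 72]
a, b = fit_ar1(pre_pandemic)
expected = forecast(a, b, pre_pandemic[-1], steps=3)

observed = [80.0, 78.5, 79.4]              # elevated, as in March-May 2020
excess = [o - e for o, e in zip(observed, expected)]
```

Excess mortality is then simply observed minus projected for each month; the study additionally reports projection intervals, which an AR(1) toy omits.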
Procedia PDF Downloads 92
125 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or via re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify those variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. 
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p <0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) to be associated with development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified to be associated with developing clinically significant pericardial effusions. High clinical suspicion and low threshold for transthoracic echo are pertinent to ensure this potentially lethal condition is not missed.
Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
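The Fisher exact test used for the dichotomous comparisons above can be written out directly from the hypergeometric distribution. The counts below are hypothetical round numbers in the spirit of the reported 37.5% vs 8.8% atrial arrhythmia comparison (3 of 8 effusion patients vs 35 of 400 controls), not the study's actual data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table with the same margins whose
    probability does not exceed that of the observed table."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)

    def p_table(x):  # P(first cell = x) under the hypergeometric law
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical arrhythmia counts: [[with, without] x [effusion, control]]
p = fisher_exact_two_sided(3, 5, 35, 365)
```

For small cell counts like the eight effusion patients here, this exact test is preferred over a chi-squared approximation, which is presumably why the authors chose it.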
Procedia PDF Downloads 161
124 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine Characteristic Orebody of DMLZ Mine
Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo
Abstract:
The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the East Ertsberg Skarn System that produces copper and gold. The Deep Mill Level Zone deposit is located below the Deep Ore Zone deposit between the 3125 m and 2590 m elevations, measures roughly 1,200 m in length and is between 350 and 500 m in width. DMLZ production was planned to start in Q2 2015, mined at an ore extraction rate of about 60,000 tpd by the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ is hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentage of material mined at DMLZ, compared with current reserves, is: diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble 7%, possibly mined as waste. Correlation between the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristic orebody of the DMLZ mine. The Deep Mill Level Zone generally has two types of vein-fill mineralization, one in each host: in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite, without quartz. 
Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have found and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.
Keywords: copper-gold, DMLZ, skarn, structure
Procedia PDF Downloads 501
123 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of Microeconomics and Mathematical Psychology. Early theories in Decision Science rested on the logic of rationality, but as it and related fields matured, descriptive theories emerged capable of explaining systematic violations of rationality through cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators to three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preference of participants experiencing an elevated affective or visceral state to those in a neutral state using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will determine visceral states and risk preferences randomly over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. 
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness about the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
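Cumulative Prospect Theory, the framework this study builds on, is concrete enough to sketch. The snippet below uses the standard Tversky-Kahneman (1992) functional forms and parameter estimates for a simple mixed two-outcome gamble; how visceral states would shift these parameters is precisely what the study sets out to measure, so treat this only as the baseline model, not the authors' elicitation procedure:

```python
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25   # value curvature and loss aversion
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69      # probability-weighting curvature

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p, gamma):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(gain, p_gain, loss, p_loss):
    """CPT value of a gamble with one gain and one loss outcome."""
    return weight(p_gain, GAMMA_GAIN) * value(gain) + \
           weight(p_loss, GAMMA_LOSS) * value(loss)

# A 50/50 gamble: win 100 or lose 100. Loss aversion makes it unattractive.
v = cpt_value(100, 0.5, -100, 0.5)
```

The three elements the study moderates map onto these components: the curvature of the value function, the degree of loss aversion, and the distortion of probabilities.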
Procedia PDF Downloads 149
122 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century
Authors: Richard Levy, Peter Dawson
Abstract:
Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, its wood-framed structure provided a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger hosted one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present: http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by the British explorer George Nares in 1875 – 76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84), with research to be conducted under the FIPY (1882 – 83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of their artifacts. 
Links to text, historic documents, animations, panorama images, computer games and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site when George Nares, in his ship HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later, in 1901, with Peary’s presence, the site is transformed again, with huts having been built from materials salvaged from Greely’s main building. Still later, in 2010, we can visit the site in its present state of deterioration and learn about the laser scanning technology that was used to document the site. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where opportunities for first-hand visitor experiences are not possible because of remote location.
Keywords: 3D imaging, multimedia, virtual reality, arctic
Procedia PDF Downloads 420
121 Investigation of Yard Seam Workings for the Proposed Newcastle Light Rail Project
Authors: David L. Knott, Robert Kingsland, Alistair Hitchon
Abstract:
The proposed Newcastle Light Rail is a key part of the revitalisation of Newcastle, NSW, and will provide a frequent and reliable travel option throughout the city centre, running from Newcastle Interchange at Wickham to Pacific Park in Newcastle East, a total of 2.7 kilometers in length. Approximately one-third of the route, along Hunter and Scott Streets, is subject to potential shallow underground mine workings. The extent of mining and the seams mined are unclear. Convicts mined the Yard Seam and the overlying Dudley (Dirty) Seam in Newcastle sometime between 1800 and 1830. The Australian Agricultural Company mined the Yard Seam from about 1831 to the 1860s in the alignment area. The seam was about 3 feet (0.9 m) thick and was therefore known as the Yard Seam. Mine maps do not exist for the workings in the area of interest, and it was unclear if both seams or just one had been mined. Information from 1830s geological mapping and other data showing shaft locations was used along Scott Street, and information from the 1908 Royal Commission was used along Hunter Street, to develop an investigation program. In addition, mining had been encountered at several sites to the south of the alignment at depths of about 7 m to 25 m. Based on the anticipated depths of mining, it was considered prudent to assess the potential for sinkhole development on the proposed alignment and realigned underground utilities and to obtain approval for the work from Subsidence Advisory NSW (SA NSW). The assessment consisted of a desktop study, followed by a subsurface investigation. Four boreholes were drilled along Scott Street and three boreholes were drilled along Hunter Street using HQ coring techniques in the rock. The placement of boreholes was complicated by the presence of utilities in the roadway and traffic constraints. All the boreholes encountered the Yard Seam, with conditions varying from unmined coal to an open void, indicating the presence of mining. 
The geotechnical information obtained from the boreholes was expanded by using various downhole techniques, including a borehole camera, borehole sonar, and downhole geophysical logging. The camera provided views of the rock and helped to explain zones of no recovery. In addition, timber props within the void were observed. Borehole sonar was performed in the void and provided an indication of room size as well as the presence of timber props within the room. Downhole geophysical logging was performed in the boreholes to measure density, natural gamma, and borehole deviation. The data helped confirm that all the mining was in the Yard Seam and that the overlying Dudley Seam had been eroded in the past over much of the alignment. In summary, the assessment allowed the potential for sinkhole subsidence to be assessed and a mitigation approach developed to allow conditional approval by SA NSW. It also confirmed the presence of mining in the Yard Seam, the depth to the seam and the mining conditions, and indicated that subsidence did not appear to have occurred in the past.
Keywords: downhole investigation techniques, drilling, mine subsidence, yard seam
Procedia PDF Downloads 314
120 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
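Finding (1), that the median latency is more meaningful than the average because of outliers, and the 15-minutes-ago median baseline of finding (2), can be illustrated with synthetic numbers; none of the figures below are from the Taiwan Freeway System data:

```python
import statistics

# Synthetic 5-minute latencies (seconds) for one hour of a segment;
# a single incident produces two outlier readings.
latencies = [620, 615, 630, 625, 618, 2400, 2600, 622, 619, 624, 617, 621]

mean_lat = statistics.mean(latencies)      # dragged upward by the incident
median_lat = statistics.median(latencies)  # stays near typical conditions

# Naive baseline: predict each reading with the one from 15 minutes
# (three 5-minute steps) earlier, then score it with mean square error.
lag = 3
preds, actual = latencies[:-lag], latencies[lag:]
baseline_mse = statistics.mean((p - a) ** 2 for p, a in zip(preds, actual))
```

In the study, learned models such as XGBoost and LSTM are judged by how much they improve on exactly this kind of lagged baseline.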
Procedia PDF Downloads 169
119 The Invisible Planner: Unearthing the Informal Dynamics Shaping Mixed-Use and Compact Development in Ghanaian Cities
Authors: Muwaffaq Usman Adam, Isaac Quaye, Jim Anbazu, Yetimoni Kpeebi, Michael Osei-Assibey
Abstract:
Urban informality, characterized by spontaneous and self-organized practices, plays a significant but often overlooked role in shaping the development of cities, particularly in the context of mixed-use and compact urban environments. This paper aims to explore the invisible planning processes inherent in informal practices and their influence on the urban form of Ghanaian cities. By examining the dynamic interplay between informality and formal planning, the study will discuss the ways in which informal actors shape and plan for mixed-use and compact development. Drawing on the synthesis of relevant secondary data, the research will begin by defining urban informality and identifying the factors that contribute to its prevalence in Ghanaian cities. It will delve into the concept of mixed-use and compact development, highlighting its benefits and importance in urban areas. Drawing on case studies, the paper will uncover the hidden planning processes that occur within informal settlements, showcasing their impact on the physical layout, land use, and spatial arrangements of Ghanaian cities. The study will also uncover the challenges and opportunities associated with informal planning. It examines the constraints faced by informal planners (actors) while also exploring the potential benefits and opportunities that emerge when informality is integrated into formal planning frameworks. By understanding the invisible planner, the research will offer valuable insights into how informal practices can contribute to sustainable and inclusive urban development. Based on the findings, the paper will present policy implications and recommendations. It highlights the need to bridge the policy gaps and calls for the recognition of informal planning practices within formal systems. Strategies are proposed to integrate informality into planning frameworks, fostering collaboration between formal and informal actors to achieve compact and mixed-use development in Ghanaian cities. 
This research underscores the importance of recognizing and leveraging the invisible planner in Ghanaian cities. By embracing informal planning practices, cities can achieve more sustainable, inclusive, and vibrant urban environments that meet the diverse needs of their residents. This research will also contribute to a deeper understanding of the complex dynamics between informality and planning, advocating for inclusive and collaborative approaches that harness the strengths of both formal and informal actors. The findings will likewise advance our understanding of informality's role as an invisible yet influential planner, shedding light on its spatial planning implications for Ghanaian cities.
Keywords: informality, mixed-uses, compact development, land use, Ghana
Procedia PDF Downloads 124
118 On Implementing Sumak Kawsay in Post Bellum Principles: The Reconstruction of Natural Damage in the Aftermath of War
Authors: Lisa Tragbar
Abstract:
In post-war scenarios, reconstruction is a principle directed towards creating a just peace in order to restore a stable post-war society. Just peace theorists explore normative behaviour after war, including the duties and responsibilities of different actors and the peacebuilding strategies needed to achieve a lasting, positive peace. Environmental peace ethicists have argued for including the role of nature in the ethics of war and peace. This text explores the question of why and how to rethink the value of nature in post-war scenarios. The aim is to include the rights of nature within a maximalist account of reconstruction by highlighting sumak kawsay in the post-war period. Destruction of nature is usually considered collateral damage in war scenarios. Common universal standards for post-war reconstruction are restitution, compensation and reparation programmes, which is a mostly anthropocentric approach. The problem of reconstruction in the aftermath of war is the merely instrumental value assigned to nature. The responsibility to rebuild needs to be revisited within a non-anthropocentric context. There is an ongoing debate about minimalist versus maximalist approaches to post-war reconstruction. While Michael Walzer argues for minimalist in-and-out interventions, Alex Bellamy argues for maximalist strategies such as the responsibility to protect, a UN concept on how to face mass atrocity crimes and how to reconstruct peace. While supporting the maximalist tradition of the responsibility to rebuild, this text holds that existing normative post bellum concepts do not yet sufficiently consider the rights of nature in the aftermath of war. While the reconstruction of infrastructure seems important and necessary, concepts that strengthen the intrinsic value of nature in post bellum measures must also be included. Peace is not a just peace without a thriving nature that provides the conditions and resources to live and to guarantee human rights. 
Ecuador's indigenous philosophy of life can contribute to the restoration of nature after war by changing the perspective on the value of nature. Sumak kawsay includes the de-hierarchisation of humans and nature and the principle of reciprocity towards nature. Transferred to post-war reconstruction practices, this idea of life and interconnectedness gives post bellum perpetrators restorative obligations not only to people but also to nature. This maximalist approach would include both a restitutive principle, restoring the balance between humans and nature, and a retributive principle, punishing the perpetrators through compensatory duties to nature. A maximalist approach to post-war reconstruction that takes into account the rights of nature expands the normative post-war questions to include a more complex field of responsibilities. After a war, a just peace is restored once not only human rights but also the rights of nature are secured. A minimalist post bellum approach to reconstruction does not locate future problems at their source and does not offer a solution for the inclusion of obligations to nature. This lack of obligations towards nature after a war can be changed through a different perspective: the indigenous philosophy of life provides the necessary principles for a comprehensive reconstruction of a just peace.
Keywords: normative ethics, peace, post-war, sumak kawsay, applied ethics
Procedia PDF Downloads 78
117 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. 
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 573
116 Respiratory Health and Air Movement Within Equine Indoor Arenas
Authors: Staci McGill, Morgan Hayes, Robert Coleman, Kimberly Tumlin
Abstract:
The interaction and relationships between horses and humans have been shown to be positive for physical, mental, and emotional wellbeing; however, equine spaces where these interactions occur do include some environmental risks. There are 1.7 million jobs associated with the equine industry in the United States, in addition to recreational riders, owners, and volunteers who interact with horses for substantial amounts of time daily inside built structures. One specialized facility, an “indoor arena”, is a semi-indoor structure used for exercising horses and exhibiting skills during competitive events. Typically, indoor arenas have sand or a sand mixture as the footing or surface over which the horse travels, and increasingly, silica sand is being recommended due to its durable nature. It was previously identified in a semi-qualitative survey that the majority of individuals using indoor arenas have environmental concerns with dust. 27% (90/333) of respondents reported respiratory issues or allergy-like symptoms while riding, with 21.6% (71/329) of respondents reporting these issues while standing on the ground observing or teaching. Frequent headaches and/or lightheadedness were reported by 9.9% (33/333) of respondents while riding and by 4.3% (14/329) while on the ground. Horse respiratory health is also negatively impacted, with 58% (194/333) of respondents indicating horses cough during or after time in the indoor arena. Instructors who spent time in indoor arenas self-reported more respiratory issues than those individuals who identified as smokers, highlighting the health relevance of understanding these unique structures. To further elucidate environmental concerns and self-reported health issues, 35 facility assessments were conducted in a cross-sectional sampling design in the states of Kentucky and Ohio (USA). Data, including air speeds, were collected in a grid fashion at 15 points within the indoor arenas and then mapped spatially using kriging in ArcGIS. 
From the spatial maps, standard variances were obtained and differences were analyzed using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA). There were no differences in the variance of air speeds for facility orientation, presence and type of roof ventilation, climate control systems, amount of openings, or use of fans. Variability of the air speeds in the indoor arenas was 0.25 or less. Further analysis yielded that average air speeds within the indoor arenas were lower than 100 ft/min (0.51 m/s), which is considered still air in other animal facilities. The lack of air movement means that dust clearance is reliant on particle size and weight rather than ventilation. While further work on respirable dust is necessary, this characterization of the semi-indoor environment where animals and humans interact indicates insufficient air flow to eliminate or reduce respiratory hazards. Finally, engineering solutions to address air movement deficiencies within indoor arenas or mitigate particulate matter are critical to ensuring exposures do not lead to adverse health outcomes for equine professionals, volunteers, participants, and horses within these spaces.
Keywords: equine, indoor arena, ventilation, particulate matter, respiratory health
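The still-air comparison above reduces to a unit conversion and a variance check per arena. A minimal sketch, assuming hypothetical grid readings (the 100 ft/min still-air threshold is from the abstract; the 15 values and the helper names are invented for illustration):

```python
import statistics

FT_MIN_TO_M_S = 0.3048 / 60.0  # 1 ft = 0.3048 m, 1 min = 60 s

def ft_per_min_to_m_s(speed_ft_min):
    """Convert an air speed from ft/min to m/s (100 ft/min is about 0.51 m/s)."""
    return speed_ft_min * FT_MIN_TO_M_S

def summarise_grid(readings_ft_min, still_air_threshold_ft_min=100.0):
    """Summarise a grid of point air-speed readings (ft/min):
    mean in both units, sample variance, and a still-air flag."""
    mean = statistics.mean(readings_ft_min)
    return {
        "mean_ft_min": mean,
        "mean_m_s": ft_per_min_to_m_s(mean),
        "variance": statistics.variance(readings_ft_min),
        "still_air": mean < still_air_threshold_ft_min,
    }

# Hypothetical 15-point grid from one arena (ft/min)
readings = [62, 55, 70, 48, 66, 59, 61, 53, 64, 58, 67, 50, 60, 56, 63]
summary = summarise_grid(readings)
```

Under these invented readings the arena mean sits well below the threshold, so the grid is flagged as still air, matching the finding reported above.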
Procedia PDF Downloads 116
115 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare on one hand, and the rise in the aging population and associated chronic disease on the other, are putting an increasing burden on the current healthcare systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) accounts for a significant share of total healthcare cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. Home healthcare offers numerous apparent advantages, notably lower costs and higher patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD; this paper focuses on the patient-safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify any risk-related factors. A case study was conducted in a dialysis organization consisting of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; the first version has only recently been developed. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any available standard for home assessment and installation. The home assessment is done by a third party (i.e., the machine and equipment provider), who may not consider the hygienic quality of the patient’s home.
The type of machine provided to patients at home is the same as the one in the center. This model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 119
114 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance to minimize the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in catalytic oxidative denitrogenation was successfully elucidated. This is the first time such a pathway has been traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide; the organic substrate serves as the reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur and nitrogen produce SO₂ and NOx during combustion in engines, which are harmful not only to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH₃. The lone pair of the nitrogen atom binds strongly to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize desulfurization and denitrogenation effectiveness in comparison to extraction and adsorption alone, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step.
The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H₈PV₅Mo₇O₄₀ (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase. The sulfur-containing fuel components are oxidized, after diffusion from the organic fuel phase into the aqueous catalyst phase, to highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting effect of basic nitrogen compounds in hydrotreating, oxidative desulfurization improves with simultaneous denitrogenation in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigation.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 180
113 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e., the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e., 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to occur at a fixed temperature of around 300 °C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer for parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
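For standard fire exposure, the reduced cross-section method that the study sets out to improve can be sketched directly: the char depth plus the 7 mm zero strength layer is stripped from each exposed face. A minimal illustration for a rectangular beam, assuming a notional charring rate of 0.8 mm/min (a common Eurocode 5 value for solid softwood) and invented section dimensions; this is not the authors' advanced model:

```python
def effective_section(b_mm, h_mm, t_min, beta_n=0.8, d0=7.0,
                      exposed=("bottom", "left", "right")):
    """Reduced cross-section method (EN 1995-1-2) for a rectangular beam.

    b_mm, h_mm : initial width and depth in mm
    t_min      : fire duration in minutes
    beta_n     : notional charring rate in mm/min (0.8 assumed for solid softwood)
    d0         : zero strength layer, fixed at 7 mm for standard fire exposure
    Returns the effective (width, depth) in mm.
    """
    k0 = min(t_min / 20.0, 1.0)          # d0 phases in over the first 20 minutes
    d_ef = beta_n * t_min + k0 * d0      # ineffective depth per exposed face
    b_ef = b_mm - d_ef * sum(s in exposed for s in ("left", "right"))
    h_ef = h_mm - d_ef * sum(s in exposed for s in ("top", "bottom"))
    return b_ef, h_ef

# 100 x 200 mm beam, 30 min of standard fire, three-sided exposure
b_ef, h_ef = effective_section(100.0, 200.0, 30.0)
```

After 30 minutes the ineffective depth per face is 0.8 × 30 + 7 = 31 mm, so the effective section shrinks to 38 × 169 mm; the study's point is that for parametric fires neither the 7 mm nor the charring rate can be taken as fixed in this way.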
Procedia PDF Downloads 168
112 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing scale, severity, and frequency of disasters worldwide, the governance systems for disaster risk reduction and management in many countries are being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121 or RA 10121), as the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans, and programs, as well as key-informant interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements to RA 10121 are needed in order to meet the challenges of large-scale disasters. The current structure, in which government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and rehabilitation and recovery, proved inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government’s tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM.
The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have the decision-making authority to mobilize the actions and resources of the other agencies that are members of the council. Resources have been devolved to the agencies responsible for each thematic area, and there is no clear command-and-direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, typhoon Haiyan response and recovery
Procedia PDF Downloads 286
111 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and given the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend-Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR values from each basin was loaded into the Palisade @RISK software, and a lognormal distribution, typical of Barnett shale wells, was fitted to the dataset. A Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (at a 10% annual discount rate), and to determine which scenarios satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. The major findings of this study were that wells in the Bend-Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
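The evaluation chain described above — draw EUR values from a fitted lognormal, read off percentiles, spread the recovered gas over a production schedule, and discount at 10% — can be sketched as follows. All numbers here (lognormal parameters, decline rate, opex, finding and development cost) are hypothetical placeholders, not values from the study:

```python
import math
import random

def simulate_eur_bcf(mu, sigma, n=1000, seed=1):
    """Draw n EUR values (BCF) from a lognormal with log-space parameters mu, sigma."""
    rng = random.Random(seed)
    return sorted(rng.lognormvariate(mu, sigma) for _ in range(n))

def percentile(sorted_vals, p):
    """Simple rank-based percentile (p in percent) of a pre-sorted sample."""
    idx = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
    return sorted_vals[idx]

def well_cash_flows(eur_mcf, gas_price, years=10, decline=0.35, opex_per_mcf=0.5):
    """Spread an EUR over `years` with exponential decline; net revenue per year."""
    weights = [math.exp(-decline * t) for t in range(years)]
    scale = eur_mcf / sum(weights)
    return [scale * w * (gas_price - opex_per_mcf) for w in weights]

def npv(cash_flows, fd_cost, rate=0.10):
    """Net present value: cash_flows[t] received at end of year t+1, F&D cost up front."""
    return -fd_cost + sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

eurs = simulate_eur_bcf(mu=0.5, sigma=0.6)                 # hypothetical basin fit
p50_bcf = percentile(eurs, 50)                             # median-case well
cfs = well_cash_flows(p50_bcf * 1_000_000, gas_price=4.0)  # BCF -> MCF
value = npv(cfs, fd_cost=2_600_000)                        # illustrative F&D cost
```

The study's investment hurdle would then be applied on top of this: a scenario passes only if the rate of return exceeds 20% and undiscounted cash flows repay the F&D cost within 60 months.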
Procedia PDF Downloads 301
110 Gas Systems of the Amadeus Basin, Australia
Authors: Chris J. Boreham, Dianne S. Edwards, Amber Jarrett, Justin Davies, Robert Poreda, Alex Sessions, John Eiler
Abstract:
The origins of natural gases in the Amadeus Basin have been assessed using molecular and stable isotope (C, H, N, He) systematics. A dominant end-member thermogenic, oil-associated gas is considered for the Ordovician Pacoota−Stairway sandstones of the Mereenie gas and oil field. In addition, an abiogenic end-member is identified in the latest Proterozoic lower Arumbera Sandstone of the Dingo gasfield, most likely associated with radiolysis of methane and polymerisation to wet gases. The latter source assignment is based on a geochemical fingerprint similar to that derived from laboratory gamma irradiation experiments on methane. A mixed gas source is considered for the Palm Valley gasfield in the Ordovician Pacoota Sandstone. Gas wetness (%∑C₂−C₅/∑C₁−C₅) decreases in the order Mereenie (19.1%) > Palm Valley (9.4%) > Dingo (4.1%). Non-produced gases at Magee-1 (23.5%; Late Proterozoic Heavitree Quartzite) and Mount Kitty-1 (18.9%; Paleo-Mesoproterozoic fractured granitoid basement) are very wet. Methane thermometry based on clumped isotopes of methane (¹³CH₃D) is consistent with an abiogenic origin for the Dingo gas field, with a methane formation temperature of 254 °C. However, the low methane formation temperature of 57 °C for the Mereenie gas suggests either a mixed thermogenic-biogenic methane source or a lack of thermodynamic equilibrium between the methane isotopomers. The shallow reservoir depth and present-day formation temperature below 80 °C would support microbial methanogenesis, but there is no accompanying alteration of the C- and H-isotopes of the wet gases and CO₂ that is typically associated with biodegradation. The Amadeus Basin gases show low to extremely high inorganic gas contents. Carbon dioxide is low in abundance (< 1% CO₂) and becomes increasingly depleted in ¹³C from Palm Valley (av. δ¹³C 0‰) to the Mereenie (av. δ¹³C -6.6‰) and Dingo (av. δ¹³C -14.3‰) gas fields.
Although the wide range in carbon isotopes for CO₂ is consistent with multiple origins, from inorganic to organic inputs, the most likely process is fluid-rock alteration, with enrichment in ¹²C in the residual gaseous CO₂ accompanying progressive carbonate precipitation within the reservoir. Nitrogen ranges from low−moderate (1.7−9.9% N₂) abundance (Palm Valley av. 1.8%; Mereenie av. 9.1%; Dingo av. 9.4%) to extremely high abundance in Magee-1 (43.6%) and Mount Kitty-1 (61.0%). The nitrogen isotopes of the production gases, δ¹⁵N = -3.0‰ for Mereenie, -3.0‰ for Palm Valley, and -7.1‰ for Dingo, suggest mixed inorganic and thermogenic nitrogen sources in all cases. Helium (He) abundance varies over a wide range, from a low of 0.17% to one of the world’s highest at 9% (Mereenie av. 0.23%; Palm Valley av. 0.48%; Dingo av. 0.18%; Magee-1 6.2%; Mount Kitty-1 9.0%). Complementary helium isotopes (R/Ra = (³He/⁴He)sample / (³He/⁴He)air) range from 0.013 to 0.031 R/Ra, indicating a dominant crustal origin for the helium, with a sustained input of radiogenic ⁴He from the decomposition of U- and Th-bearing minerals effectively diluting any original mantle helium input. The high helium content of the non-produced gases compared to the shallower producing wells most likely reflects their stratigraphic position relative to the Tonian Bitter Springs Group, with the former below and the latter above an effective carbonate-salt seal.
Keywords: Amadeus gas, thermogenic, abiogenic, C, H, N, He isotopes
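The two summary indices used throughout this abstract — gas wetness and the helium R/Ra ratio — are straightforward to compute from composition and isotope data. A minimal sketch, assuming a hypothetical hydrocarbon composition chosen only to reproduce the Mereenie-like wetness of 19.1% (the atmospheric ³He/⁴He ratio of 1.384e-6 is a standard reference value; the mole fractions are invented):

```python
AIR_3HE_4HE = 1.384e-6  # atmospheric 3He/4He ratio, Ra

def gas_wetness(mole_pct):
    """Gas wetness in % = 100 * sum(C2..C5) / sum(C1..C5), from mole percentages."""
    wet = sum(mole_pct.get(k, 0.0) for k in ("C2", "C3", "C4", "C5"))
    return 100.0 * wet / (mole_pct.get("C1", 0.0) + wet)

def r_over_ra(sample_3he_4he):
    """Normalise a measured 3He/4He ratio to the atmospheric ratio (R/Ra)."""
    return sample_3he_4he / AIR_3HE_4HE

# Hypothetical hydrocarbon composition (mole %) with Mereenie-like wetness
mereenie_like = {"C1": 80.9, "C2": 12.0, "C3": 4.5, "C4": 1.8, "C5": 0.8}
wetness = gas_wetness(mereenie_like)  # ~19.1 %
```

R/Ra values of 0.013−0.031, as reported above, fall far below the atmospheric value of 1, which is what marks the helium as dominantly crustal rather than mantle-derived.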
Procedia PDF Downloads 195
109 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies have started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
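The encoding step described above, together with a toy update rule reflecting the two empirical observations (a small social pull plus a larger random fluctuation), can be sketched as follows. The influence and noise magnitudes are illustrative assumptions, not the fitted parameters of the authors' model:

```python
import random

def continuous_opinion(agree, certainty):
    """Encode agree/disagree (bool) plus certainty (1..10) as a value in [-10, 10],
    i.e. opinion (agree = +1, disagree = -1) multiplied by certainty."""
    return certainty if agree else -certainty

def update(opinion, partner_agrees, influence=0.4, noise_sd=1.0, rng=None):
    """One interaction: a small pull toward the displayed stance plus a larger
    random fluctuation (noise_sd > influence here), clipped to the scale."""
    rng = rng or random.Random()
    pull = influence if partner_agrees else -influence
    return max(-10.0, min(10.0, opinion + pull + rng.gauss(0.0, noise_sd)))

rng = random.Random(0)
x = continuous_opinion(True, 8)   # a participant who agrees with certainty 8
trajectory = [x]
for _ in range(5):                # five interactions with a disagreeing partner
    x = update(x, partner_agrees=False, rng=rng)
    trajectory.append(x)
```

Note that because the noise term dominates the influence term, individual trajectories wander; the drift toward the shown stance only emerges in the aggregate, matching the finding reported above.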
Procedia PDF Downloads 109
108 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention involving brief exposures to extremely cold air in order to induce therapeutic effects; it is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCOhost, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if the subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, or if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals.
One of the four articles revealed that WBC had a harmful effect, compared to cold-water immersion (CWI) and passive recovery, on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus on WBC’s effectiveness at treating exercise-induced muscle damage following running compared to other interventions, it seems that WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted in order to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 240