Search results for: Michael C. Barbecho
121 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners’ errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher’s discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners’ mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators’ intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners’ errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners’ course of thinking that led them to specific observable actions, resulting in particular errors in specific problems, rather than analysing scripts with the students’ thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and understanding the learners’ errors from the perspective of the participants in the investigation. This fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations.
Thus the belief systems of behaviourism, the Piagetian-constructivist framework, and the philosophy-of-science perspective were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as a general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives sound and legitimate bases for the development of future studies concerning mathematical error analysis.
Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 190
120 Historic Fire Occurrence in Hemi-Boreal Forests: Exploring Natural and Cultural Scots Pine Multi-Cohort Fire Regimes in Lithuania
Authors: Charles Ruffner, Michael Manton, Gintautas Kibirkstis, Gediminas Brazaitas, Vitas Marozas, Ekaterine Makrickiene, Rutile Pukiene, Per Angelstam
Abstract:
In dynamic boreal forests, fire is an important natural disturbance that drives regeneration and mortality of living and dead trees, and thus successional trajectories. However, current forest management practices, focused on wood production only, have effectively eliminated fire as a stand-level disturbance. While this is generally well studied across much of Europe, little is known in Lithuania about the historic fire regime and the role fire plays as a management tool for the sustainable management of future landscapes. Focusing on Scots pine forests, we explore: (i) the relevance of fire disturbance regimes on the forestlands of Lithuania; (ii) fire occurrence in the Dzukija landscape for dry upland and peatland forest sites; and (iii) correlations between tree-ring data and climate variables, to ascertain climatic influences on growth and fire occurrence. We sampled and cross-dated 132 Scots pine samples with fire scars from 4 dry pine forest stands and 4 peatland forest stands, respectively. The fire history of each sample was analyzed using standard dendrochronological methods and presented in FHAES format. Analyses of soil moisture and nutrient conditions revealed a strong probability of finding a high fire frequency in Scots pine forests (59%), which cover 34.5% of Lithuania’s current forestland. The fire history analysis revealed 455 fire scars and 213 fire events during the period 1742-2019. Within the Dzukija landscape, the mean fire interval was 4.3 years for the dry Scots pine forest and 8.7 years for the peatland Scots pine forest. However, our comparison of fire frequency before and after 1950 shows a marked decrease in the mean fire interval. Our data suggest that the hemi-boreal forest landscapes of Lithuania provide strong evidence that fire, both human- and lightning-ignited, has been and should remain a natural phenomenon, and that the examination of biological archives can be used to guide sustainable forest management into the future.
Currently, fire use is prohibited by law as a tool for forest management in Lithuania. We recommend introducing trials that use low-intensity prescribed burning of Scots pine stands as a regeneration tool towards mimicking natural forest disturbance regimes.
Keywords: biodiversity conservation, cultural burning, dendrochronology, forest dynamics, forest management, succession
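The mean fire interval (MFI) values reported in this abstract (4.3 and 8.7 years) are, in essence, the mean gap between successive cross-dated fire events. A minimal sketch of that statistic follows; the scar years are invented for illustration and are not the study's data, and FHAES-style composite handling is omitted.

```python
# Hypothetical sketch: mean fire interval (MFI) from cross-dated fire-scar
# years, the basic statistic behind values like 4.3 and 8.7 years.
# The scar years below are invented for illustration only.

def mean_fire_interval(fire_years):
    """Mean of the intervals (in years) between successive fire events."""
    years = sorted(set(fire_years))  # one entry per dated fire event
    if len(years) < 2:
        raise ValueError("need at least two fire events")
    intervals = [later - earlier for earlier, later in zip(years, years[1:])]
    return sum(intervals) / len(intervals)

dry_site_scars = [1742, 1748, 1751, 1757, 1760]  # invented scar years
print(mean_fire_interval(dry_site_scars))  # intervals 6, 3, 6, 3 -> 4.5
```

In practice, site-level MFI is computed over a composite of all sampled trees at a site rather than a single sample, but the interval arithmetic is the same.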
Procedia PDF Downloads 200
119 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London’s Air Ambulance’s Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, the effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and, paradoxically, benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants’ reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time.
Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services.
Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 217
118 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language
Authors: Tengku Sepora Tengku Mahadi
Abstract:
Where the speed of book writing lags behind the high demand for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written, with unique rhetorical devices, subject-matter foundations and cultural references, will undoubtedly challenge the translator; longer time and greater effort are the consequence. To understand these text-related challenges, the present paper analyzes a lawbook entitled Learning the Law by David Melinkoff. The book was chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; it can therefore be said to be a worthy subject for studies in law. Another reason is the existence of a ready translation in Malay; reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators prepare themselves better for the task: they can anticipate the research and time that may be needed to produce an effective translation. A further premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. The concepts of the context of culture and the context of situation, and the measures of field, tenor and mode, form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings.
Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions, and the linguistic and textual devices used to achieve them, can then be applied as a guide to determine the effectiveness of the translation that is produced.
Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture
Procedia PDF Downloads 149
117 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role for the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes. Hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilization of slopes with deep soil layers than forbs and grasses. Consequently, research mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role for slope stabilization. Not only forbs and grasses, but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and singular root tensile strength measurement can only provide a detailed picture of vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60cm) on the short ends with a 30x30cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60cm grass sods were cut out (max. depth 20cm). 
Grass sods were pulled apart, measuring the horizontal tensile strength across the 30cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.
Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
Procedia PDF Downloads 166
116 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media
Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga
Abstract:
Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions. Because of this, the development of analytical methods for detecting this ion has gained great interest. In line with the above, the use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, we expected chalcone-based 7-(diethylamino)-2-quinolinone derivatives to incorporate into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to new colorimetric and/or fluorescent probes. In the present study, two 7-(diethylamino)-2-quinolinone-based chalcone derivatives, DQD1-2, were synthesized according to methods reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-Vis and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps.
In line with the foregoing, these photophysical properties were improved in the presence of the cationic micellar medium (CTAB) thanks to the formation of adducts with association constants of the order of 2.5x10^5 M^-1: the quantum yields increased to 0.12, and the fluorescence decays showed two lifetimes, near 120 and 400 ps, for DQD1 and DQD2. Moreover, the micellar medium increased the reactivity of these derivatives with nucleophilic analytes such as HSO3-. This was established through kinetic studies, which demonstrated an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor since it detected HSO3- with excellent results.
Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives
Procedia PDF Downloads 94
115 School Refusal Behaviours: The Roles of Adolescent and Parental Factors
Authors: Junwen Chen, Celina Feleppa, Tingyue Sun, Satoko Sasagawa, Michael Smithson
Abstract:
School refusal behaviours refer to avoiding school attendance, chronic lateness in arriving at school, or regular early dismissal. Poor attendance in schools is highly correlated with anxiety, depression, suicide attempts, delinquency, violence, and substance use and abuse. Poor attendance is also a strong indicator of lower achievement in school, as well as of problematic social-emotional development. Long-term consequences of school refusal behaviours include fewer opportunities for higher education and employment, social difficulties, and high risks of later psychiatric illness. Given its negative impacts on youth educational outcomes and well-being, a thorough understanding of the factors involved in the development of this phenomenon is needed to develop effective management approaches. This study investigated parental and adolescent factors that may contribute to school refusal behaviours, focusing specifically on the roles of parental and adolescent anxiety and depression, emotion dysregulation, and parental rearing style. The findings are expected to inform the identification of both parental and adolescent factors that may contribute to school refusal behaviours. This knowledge will enable novel and effective approaches to managing school refusal behaviours in adolescents that incorporate these factors, which in turn will improve their school and daily functioning. The results are important for an integrative understanding of school refusal behaviours. Furthermore, the findings will also provide information for policymakers to weigh the benefits of interventions targeting school refusal behaviours in adolescents.
One-hundred-and-six adolescents aged 12-18 years (mean age = 14.79 years old, SD = 1.78, males = 44) and their parents (mean age = 47.49 years old, SD = 5.61, males = 27) completed an online questionnaire measuring both parental and adolescents’ anxiety, depression, emotion dysregulation, parental rearing styles, and adolescents’ school refusal behaviours. Adolescents with school refusal behaviours reported greater anxiety and depression, with their parents showing greater emotion dysregulation. Parental emotion dysregulation and adolescents’ anxiety and depression predicted school refusal behaviours independently. To date, only limited studies have investigated the interplay between parental and youth factors in relation to youth school refusal behaviours. Although parental emotion dysregulation has been investigated in relation to youth emotion dysregulation, little is known about its role in the context of school refusal. This study is one of the very few that investigated both parental and adolescent factors in relation to school refusal behaviours in adolescents. The findings support the theoretical models that emphasise the role of youth and parental psychopathology in school refusal behaviours. Future management of school refusal behaviours should target adolescents’ anxiety and depression while incorporating training for parental emotion regulation skills.
Keywords: adolescents, school refusal behaviors, parental factors, anxiety and depression, emotion dysregulation
Procedia PDF Downloads 127
114 Telemedicine Versus Face-to-Face Follow up in General Surgery: A Randomized Controlled Trial
Authors: Teagan Fink, Lynn Chong, Michael Hii, Brett Knowles
Abstract:
Background: Telemedicine is a rapidly advancing field providing healthcare to patients at a distance from their treating clinician. There is a paucity of high-quality evidence detailing the safety and acceptability of telemedicine for postoperative outpatient follow-up. This randomized controlled trial, conducted prior to the COVID-19 pandemic, aimed to assess patient satisfaction and safety (as determined by readmission, reoperation and complication rates) of telephone compared to face-to-face clinic follow-up after uncomplicated general surgical procedures. Methods: Patients following uncomplicated laparoscopic appendicectomy or cholecystectomy and laparoscopic or open umbilical or inguinal hernia repairs were randomized to telephone or face-to-face outpatient clinic follow-up. Data points including patient demographics, perioperative details and postoperative outcomes (e.g., wound healing complications, pain scores, unplanned readmission to hospital and return to daily activities) were compared between groups. Patients also completed a Likert patient satisfaction survey following their consultation. Results: 103 patients were recruited over a 12-month period (21 laparoscopic appendicectomies, 65 laparoscopic cholecystectomies, nine open umbilical hernia repairs, six laparoscopic inguinal hernia repairs and two laparoscopic umbilical hernia repairs). Baseline patient demographics and operative interventions were the same in both groups. Patient- or clinician-reported concerns about postoperative pain, use of analgesia, wound healing complications and return to daily activities at clinic follow-up were not significantly different between the two groups. Of the 58 patients randomized to the telemedicine arm, 40% reported high and 60% reported very high patient satisfaction.
Telemedicine clinic mean consultation times were significantly shorter than face-to-face consultation times (telemedicine 10.3 +/- 7.2 minutes, face-to-face 19.2 +/- 23.8 minutes, p-value = 0.014). Rates of failing to attend clinic were not significantly different (telemedicine 3%, control 6%). There was no increased rate of postoperative complications in patients followed up by telemedicine compared to in-person. There were no unplanned readmissions, returns to theatre, or mortalities in this study. Conclusion: Telemedicine follow-up of patients undergoing uncomplicated general surgery is safe and does not result in any missed diagnoses or higher rates of complications. Telemedicine provides high patient satisfaction, and steps to implement this modality in patient care should be undertaken.
Keywords: general surgery, telemedicine, patient satisfaction, patient safety
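The consultation-time comparison above (10.3 +/- 7.2 vs 19.2 +/- 23.8 minutes) is a two-sample comparison of means with unequal variances, for which Welch's t statistic can be computed directly from summary statistics. The sketch below is illustrative only: the abstract does not state which test produced p = 0.014, and the face-to-face group size is an assumption (only the telemedicine arm's n = 58 is reported).

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples, computed from
    summary statistics (mean, standard deviation, sample size)."""
    standard_error = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / standard_error

# Telemedicine arm: n = 58 (given). The face-to-face n = 45 is an assumed,
# illustrative value (the abstract reports only the total of 103 patients).
t = welch_t(10.3, 7.2, 58, 19.2, 23.8, 45)
print(round(t, 2))  # -> -2.42 under these assumed group sizes
```

A negative t here simply reflects that the telemedicine mean is the smaller of the two; the degrees of freedom (Welch-Satterthwaite) and p-value would follow from the same summary statistics.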
Procedia PDF Downloads 118
113 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center
Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar
Abstract:
Introduction: Congenital diaphragmatic hernia (CDH) is a condition characterized by the herniation of abdominal contents into the thoracic cavity, requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to examine CDH outcomes at Kentucky Children’s Hospital (KCH), a low-volume center, compared to the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review was performed at KCH for neonates with CDH from 2007-2019, subdivided into two cohorts: those requiring ECMO therapy and those not requiring ECMO therapy. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. Measures of morbidity for the ECMO cohort were collected, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days. Statistical analysis was performed using IBM SPSS Statistics version 28. One-sample t-tests and the one-sample Wilcoxon signed-rank test were utilized as appropriate. Results: There were a total of 27 neonatal patients with CDH at KCH from 2007-2019; 9 of the 27 required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p=0.655; 37.0 weeks vs 37.4 weeks, p=0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p=0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p<0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p=0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p=0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p=0.060) as proxies for severity, the two groups’ mortality was comparable.
No significant difference was noted for pulmonary outcomes such as average ventilator days (KCH 43.2 vs. CDHSG 17.3, p=0.078) and home oxygen dependency (KCH 44% vs. CDHSG 24%, p=0.108). Average length of hospital stay for patients treated at KCH was similar to the CDHSG (64.4 vs 49.2, p=1.000). Conclusion: Our study demonstrates that outcome in CDH patients is independent of a center’s case-volume status. Management of CDH with a standardized approach in a low-volume center can yield similar outcomes. These data support the treatment of patients with CDH at low-volume centers as opposed to transferring them to higher-volume centers.
Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate
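The abstract names one-sample t-tests as the method for comparing the KCH cohort against CDHSG reference values. A minimal sketch of that statistic follows; the sample values below are invented for illustration and are not KCH data, and the CDHSG reference mean (2.92 kg for birth weight) is taken from the abstract.

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t statistic: does the sample mean differ from the
    reference value mu0?"""
    n = len(sample)
    sample_mean = statistics.mean(sample)
    sample_sd = statistics.stdev(sample)  # n - 1 denominator
    return (sample_mean - mu0) / (sample_sd / math.sqrt(n))

# Invented birth weights (kg) for illustration, tested against the
# CDHSG reference mean of 2.92 kg reported in the abstract.
t = one_sample_t([2.8, 3.1, 2.9, 3.3, 2.7], 2.92)
```

The p-value would then come from the t distribution with n - 1 degrees of freedom, which is what a package like SPSS reports alongside this statistic.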
Procedia PDF Downloads 96
112 Efficacy and Safety of COVID-19 Vaccination in Patients with Multiple Sclerosis: Looking Forward to Post-COVID-19
Authors: Achiron Anat, Mathilda Mandel, Mayust Sue, Achiron Reuven, Gurevich Michael
Abstract:
Introduction: As coronavirus disease 2019 (COVID-19) vaccination is currently spreading around the world, it is important to assess the ability of multiple sclerosis (MS) patients to mount an appropriate immune response to the vaccine in the context of disease-modifying treatments (DMTs). Objectives: To evaluate the immunity generated following COVID-19 vaccination in MS patients, and to assess the factors contributing to protective humoral and cellular immune responses in MS patients vaccinated against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Methods: We review our recent data related to (1) the safety of the Pfizer BNT162b2 COVID-19 mRNA vaccine in adult MS patients; (2) the humoral post-vaccination SARS-CoV-2 IgG response in MS vaccinees, using anti-spike-protein-based serology; (3) the cellular immune response of memory B-cells specific for the SARS-CoV-2 receptor-binding domain (RBD) and memory T-cells secreting IFN-γ and/or IL-2 in response to SARS-CoV-2 peptides, using ELISpot/FluoroSpot assays, in MS patients either untreated or under treatment with fingolimod, cladribine, or ocrelizumab; and (4) covariate parameters related to mounting protective immune responses. Results: The COVID-19 vaccine proved safe in MS patients, and the adverse event profile was mainly characterised by pain at the injection site, fatigue, and headache. No increased risk of relapse activity was noted, and the rate of patients with acute relapse was comparable to the relapse rate in non-vaccinated patients during the corresponding follow-up period. A mild increase in the rate of adverse events was noted in younger MS patients, among patients with lower disability, and in patients treated with DMTs. Following COVID-19 vaccination, the protective humoral immune response was significantly decreased in fingolimod- and ocrelizumab-treated MS patients, and SARS-CoV-2-specific B-cell and T-cell responses were correspondingly decreased.
Untreated MS patients and patients treated with cladribine demonstrated protective humoral and cellular immune responses, similar to healthy vaccinated subjects. Conclusions: The COVID-19 BNT162b2 vaccine proved safe for MS patients. No increased risk of relapse activity was noted post-vaccination. Although COVID-19 vaccination is new, the accumulated data demonstrate differences in immune responses under various DMTs. This knowledge can help to construct appropriate COVID-19 vaccine guidelines to ensure proper immune responses for MS patients.
Keywords: COVID-19, vaccination, multiple sclerosis, IgG
Procedia PDF Downloads 139
111 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry are adjusting their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as agile manufacturing and mass customization, manufacturing companies are transforming into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries blur as autonomous systems exchange data gained by embedded systems throughout the entire value chain. With the inclusion of Cyber-Physical Systems, advanced communication between machines becomes comparable to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the development of Industry 4.0 within the literature and reviews the associated research streams. We analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks.
We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the upcoming research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
Procedia PDF Downloads 277
110 Examining the Role of Farmer-Centered Participatory Action Learning in Building Sustainable Communities in Rural Haiti
Authors: Charles St. Geste, Michael Neumann, Catherine Twohig
Abstract:
Our primary aim is to examine farmer-centered participatory action learning as a tool to improve agricultural production, build resilience to climate shocks and, more broadly, advance community-driven solutions for sustainable development in rural communities across Haiti. For over six years, sixty plus farmers from Deslandes, Haiti, organized in three traditional work groups called konbits, have designed and tested low-input agroecology techniques as part of the Konbit Vanyan Kapab Pwoje Agroekoloji. The project utilizes a participatory action learning approach, emphasizing social inclusion, building on local knowledge, experiential learning, active farmer participation in trial design and evaluation, and cross-community sharing. Mixed methods were used to evaluate changes in knowledge and adoption of agroecology techniques, confidence in advancing agroecology locally, and innovation among Konbit Vanyan Kapab farmers. While skill and knowledge in application of agroecology techniques varied among individual farmers, a majority of farmers successfully adopted techniques outside of the trial farms. The use of agroecology techniques on trial and individual farms has doubled crop production in many cases. Farm income has also increased, and farmers report less damage to crops and property caused by extreme weather events. Furthermore, participatory action strategies have led to greater local self-determination and greater capacity for sustainable community development. With increased self-confidence and the knowledge and skills acquired from participating in the project, farmers prioritized sharing their successful techniques with other farmers and have developed a farmer-to-farmer training program that incorporates participatory action learning. Using adult education methods, farmers, trained as agroecology educators, are currently providing training in sustainable farming practices to farmers from five villages in three departments across Haiti. 
Konbit Vanyan Kapab farmers have also begun testing production of value-added food products, including a dried soup mix and tea. Key factors for success include: opportunities for farmers to actively participate in all phases of the project, group diversity, resources for application of agroecology techniques, and a focus on group processes and overcoming local barriers to inclusive decision-making.
Keywords: agroecology, participatory action learning, rural Haiti, sustainable community development
109 Screening for Larvicidal Activity of Aqueous and Ethanolic Extracts of Fourteen Selected Plants and Formulation of a Larvicide against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) Larvae
Authors: Michael Russelle S. Alvarez, Noel S. Quiming, Francisco M. Heralde
Abstract:
This study aimed to: a) obtain ethanolic (95% EtOH) and aqueous extracts of Selaginella elmeri, Christella dentata, Elatostema sinnatum, Curculigo capitulata, Euphorbia hirta, Murraya koenigii, Alpinia speciosa, Cymbopogon citratus, Eucalyptus globulus, Jatropha curcas, Psidium guajava, Gliricidia sepium, Ixora coccinea and Capsicum frutescens and screen them for larvicidal activity against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) larvae; b) fractionate the most active extract and determine the most active fraction; c) determine the larvicidal properties of the most active extract and fraction against both species by computing their percentage mortality, LC50, and LC90 after 24 and 48 hours of exposure; and d) determine the nature of the components of the active extracts and fractions using phytochemical screening. The ethanolic (95% EtOH) and aqueous extracts of the selected plants were screened for larvicidal activity against Ae. aegypti and Ae. albopictus using standard procedures, with 1% malathion and a Piper nigrum-based ovicide-larvicide from the Department of Science and Technology as positive controls. The results were analyzed using one-way ANOVA with Tukey's and Dunnett's tests. The most active extract was subjected to partial fractionation using normal-phase column chromatography, and the fractions were subsequently screened to determine the most active fraction. The most active extract and fraction were subjected to a dose-response assay and probit analysis to determine the LC50 and LC90 after 24 and 48 hours of exposure. The active extracts and fractions were screened for phytochemical content. The ethanolic extracts of C. citratus, E. hirta, I. coccinea, G. sepium, M. koenigii, E. globulus, J. curcas and C. frutescens exhibited significant larvicidal activity, with C. frutescens being the most active. After fractionation, the ethyl acetate fraction was found to be the most active. 
Phytochemical screening of the extracts revealed the presence of alkaloids, tannins, indoles and steroids. A formulation in talcum powder (300 mg of fraction per 1 g of talcum powder) was made and again tested for larvicidal activity. At 2 g/L, the formulation proved effective in killing all of the test larvae after 24 hours.
Keywords: larvicidal activity screening, partial purification, dose-response assay, Capsicum frutescens
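The probit-analysis step behind the LC50 estimates above can be sketched as follows: regress probit-transformed mortality on log10(dose) and solve for the dose at 50% mortality. This is a minimal illustration assuming mortalities strictly between 0 and 1; the dose-mortality numbers used with it are made up, not the study's measurements.

```python
import math
from statistics import NormalDist

def lc50(doses, mortalities):
    """Classic probit-analysis sketch: fit probit(p) = a + b*log10(dose)
    by least squares, then return the dose where probit(p) = 0, i.e.
    p = 0.5. Mortalities must lie strictly between 0 and 1."""
    nd = NormalDist()
    xs = [math.log10(d) for d in doses]
    ys = [nd.inv_cdf(p) for p in mortalities]  # probit transform
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return 10 ** (-a / b)
```

The LC90 follows from the same fitted line by solving probit(p) = inv_cdf(0.9) instead of 0.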
108 Phytochemistry and Alpha-Amylase Inhibitory Activities of Rauvolfia vomitoria (Afzel) Leaves and Picralima nitida (Stapf) Seeds
Authors: Oseyemi Omowunmi Olubomehin, Olufemi Michael Denton
Abstract:
Diabetes mellitus is a disease related to the digestion of carbohydrates, proteins and fats and how this affects blood glucose levels. The various synthetic drugs employed in the management of the disease work through different mechanisms. Keeping postprandial blood glucose levels within an acceptable range is a major factor in the management of type 2 diabetes and its complications. Thus, the inhibition of carbohydrate-hydrolyzing enzymes such as α-amylase is an important strategy for lowering postprandial blood glucose levels, but synthetic inhibitors have undesirable side effects such as flatulence, diarrhea and gastrointestinal disorders, to mention a few. It is therefore necessary to identify and explore α-amylase inhibitors from plants, given their availability, safety and low cost. In the present study, extracts from the leaves of Rauvolfia vomitoria and seeds of Picralima nitida, which are used in the Nigerian traditional system of medicine to treat diabetes, were tested for their α-amylase inhibitory effect. The powdered plant samples were subjected to phytochemical screening using standard procedures. The leaves and seeds were macerated successively using n-hexane, ethyl acetate and methanol to yield the crude extracts, which at different concentrations (0.1, 0.5 and 1 mg/mL), alongside the standard drug acarbose, were subjected to an α-amylase inhibitory assay using the Benfield and Miller methods with slight modification. Statistical analysis was done using ANOVA, SPSS version 2.0. The phytochemical screening results of the leaves of Rauvolfia vomitoria and the seeds of Picralima nitida showed the presence of alkaloids, tannins, saponins and cardiac glycosides; in addition, Rauvolfia vomitoria contained phenols and Picralima nitida contained terpenoids. 
The α-amylase assay results revealed that at 1 mg/mL, the methanol, hexane and ethyl acetate extracts of the leaves of Rauvolfia vomitoria gave 15.74%, 23.13% and 26.36% α-amylase inhibition, respectively, while the corresponding extracts of the seeds of Picralima nitida gave 15.50%, 30.68% and 36.72% inhibition; these values were not significantly different from the control at p < 0.05, while acarbose gave a significant 56% inhibition at p < 0.05. The presence of alkaloids, phenols, tannins, steroids, saponins, cardiac glycosides and terpenoids in these plants is likely responsible for the observed anti-diabetic activity. However, the low percentages of α-amylase inhibition by these plant samples suggest that α-amylase inhibition is not the major way in which both plants exert their anti-diabetic effect.
Keywords: alpha-amylase, Picralima nitida, postprandial hyperglycemia, Rauvolfia vomitoria
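The inhibition percentages quoted above follow from the standard calculation for this kind of assay: % inhibition = (A_control − A_sample)/A_control × 100, where A is the absorbance of the colorimetric readout. The abstract does not report the raw absorbance readings, so the values used with the sketch below are illustrative only.

```python
def percent_inhibition(abs_control, abs_sample):
    """α-amylase inhibition computed from the absorbance of the
    uninhibited control reaction vs. the extract-treated reaction."""
    return (abs_control - abs_sample) / abs_control * 100.0
```

For example, an extract that halves the control absorbance corresponds to 50% inhibition.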
107 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using them for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences and product type. 
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer-requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of the customer
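One of the fuzzy front-end tools named above, the Kano Model, classifies a product feature from a pair of survey answers: how the user feels if the feature is present (functional) vs. absent (dysfunctional). A minimal sketch of the standard Kano evaluation table, assuming the usual five-point answer scale; this reflects the generic Kano method, not the toolkit's specific implementation.

```python
def kano_category(functional, dysfunctional):
    """Standard Kano evaluation table for the five-point scale
    (like / expect / neutral / tolerate / dislike).
    A = attractive, O = one-dimensional, M = must-be,
    I = indifferent, R = reverse, Q = questionable."""
    if functional == "like":
        if dysfunctional == "like":
            return "Q"  # contradictory answers
        return "O" if dysfunctional == "dislike" else "A"
    if functional == "dislike":
        return "Q" if dysfunctional == "dislike" else "R"
    # functional answer is expect / neutral / tolerate:
    if dysfunctional == "like":
        return "R"
    return "M" if dysfunctional == "dislike" else "I"
```

A feature liked when present but merely tolerated when absent classifies as attractive; one whose absence users dislike but whose presence is simply expected classifies as must-be.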
106 Prenatal Paraben Exposure Impacts Infant Overweight Development and in vitro Adipogenesis
Authors: Beate Englich, Linda Schlittenbauer, Christiane Pfeifer, Isabel Kratochvil, Michael Borte, Gabriele I. Stangl, Martin von Bergen, Thorsten Reemtsma, Irina Lehmann, Kristin M. Junge
Abstract:
The worldwide production of endocrine disrupting compounds (EDCs) has risen dramatically over the last decades, as has the prevalence of obesity. Many EDCs are believed to contribute to this obesity epidemic by enhancing adipogenesis or disrupting relevant metabolism. Such effects are greatest in the prenatal period, when priming effects meet a highly vulnerable time window. Therefore, we investigated the impact of parabens on childhood overweight development and on adipogenesis in general. Parabens are esters of 4-hydroxybenzoic acid and are found in many cosmetic products and food packaging. Exposure is therefore ubiquitous in the westernized world, and it already starts during the sensitive prenatal period. We assessed maternal cosmetic product consumption, prenatal paraben exposure and infant BMI z-scores in the prospective German LINA cohort. In detail, maternal urinary concentrations (34 weeks of gestation) of methyl paraben (MeP), ethyl paraben (EtP), n-propyl paraben (PrP) and n-butyl paraben (BuP) were quantified using UPLC-MS/MS. Body weight and height of their children were assessed during annual clinical visits. Further, we investigated the direct influence of these parabens on adipogenesis in vitro using a human mesenchymal stem cell (MSC) differentiation assay to mimic a prenatal exposure scenario. MSCs were exposed to 0.1-50 µM paraben during the entire differentiation period. Differentiation outcome was monitored by impedance spectrometry, real-time PCR and triglyceride staining. We found that maternal cosmetic product consumption was highly correlated with urinary paraben concentrations during pregnancy. Further, prenatal paraben exposure was linked to higher BMI z-scores in children. Our in vitro analysis revealed that especially the long-chained paraben BuP stimulates adipogenesis by increasing the expression of adipocyte-specific genes (PPARγ, ADIPOQ, LPL, etc.) and triglyceride storage. 
Moreover, we found that adiponectin secretion is increased whereas leptin secretion is reduced under BuP exposure in vitro. Further mechanistic analyses of receptor binding and activation of PPARγ and other key players in adipogenesis are currently in progress. We conclude that maternal cosmetic product consumption is linked to prenatal paraben exposure of children and contributes to infant overweight development by triggering key pathways of adipogenesis.
Keywords: adipogenesis, endocrine disruptors, paraben, prenatal exposure
105 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study
Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon
Abstract:
The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. Our objective was to compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 to March 2021). We conducted a population-based study of the general adult population (Ontario, Canada) using provincial health administrative data. Monthly all-cause mortality, hospitalization, emergency department (ED) and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first pandemic year were compared with similar periods in previous years (2016-2019) by fitting monthly time-series auto-regressive integrated moving-average (ARIMA) models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected 71.24 [66.35-76.50]) and then returned to expected levels in June 2020, except among immigrants and people with mental health conditions, where they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected 5,060.23 [4,712.64-5,433.46]) and then returned to expected levels in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with the temporary increase in mortality, which was disproportionately experienced by immigrants and those with mental health conditions. 
Funding: The Ottawa Hospital Academic Medical Organization.
Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy
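The study's observed-vs-projected comparison rests on ARIMA models fitted to 2016-2019 monthly rates. As a much simpler stand-in for the same logic, the sketch below projects a month's expected rate from the same calendar month in prior years and flags observed rates that fall outside a normal-approximation band; the numbers used with it are illustrative, not the study's data.

```python
from statistics import mean, stdev

def projected_band(prior_years, z=1.96):
    """Point projection and approximate 95% band built from the same
    calendar month's rates in previous years (a crude stand-in for the
    paper's ARIMA projections)."""
    m, s = mean(prior_years), stdev(prior_years)
    return m, m - z * s, m + z * s

def excess(observed, prior_years):
    """Return (observed - projected, outside_band) for one month."""
    m, lo, hi = projected_band(prior_years)
    return observed - m, not (lo <= observed <= hi)
```

An observed mortality rate of 79.98 against prior same-month rates clustered near 71.5 would be flagged as significantly above expectation, mirroring the early-pandemic finding above.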
104 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1 January 2017 and 31 December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify the variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Over the study period, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. 
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were found to be associated with the development of clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are pertinent to ensure this potentially lethal condition is not missed.
Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
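The Fisher exact test used above for the dichotomous comparisons can be computed directly for a 2x2 table from the hypergeometric distribution. A self-contained two-sided sketch follows; the counts in the usage example are a classic textbook table, not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]:
    the sum of probabilities of all tables with the same margins that
    are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def prob(x):
        # Hypergeometric probability that the top-left cell equals x.
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

For Fisher's classic "lady tasting tea" table [[3, 1], [1, 3]], this returns the well-known two-sided p-value of 34/70, about 0.486.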
103 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine the Characteristic Orebody of the DMLZ Mine
Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo
Abstract:
The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a copper- and gold-producing mining zone in the lower part of the EESS. The DMLZ deposit lies below the Deep Ore Zone deposit between the 3125 m and 2590 m elevations, measures roughly 1,200 m in length, and is between 350 and 500 m wide. Mining of the DMLZ was planned to start in Q2 2015 at an ore extraction rate of about 60,000 tpd using the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite, 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn, 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels, 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble, 7%, possibly mined as waste. Correlating the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristics of the orebody in the DMLZ mine. In general, the DMLZ has two types of vein-filling mineralization, one for each host rock: in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite, without quartz. 
Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have found and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.
Keywords: copper-gold, DMLZ, skarn, structure
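The copper-equivalent grades quoted above are consistent with the simple linear formula EqCu% = Cu% + k × Au (ppm) with k near 0.67. Note that this factor is back-calculated here from the reported figures, not stated in the abstract; in practice, the conversion factor derives from metal prices and recoveries.

```python
def eq_cu(cu_pct, au_ppm, k=0.67):
    """Copper-equivalent grade: copper grade plus gold grade converted
    into copper terms. The default k is back-calculated from the grades
    reported in the abstract, not taken from the source itself."""
    return cu_pct + k * au_ppm
```

Plugging in the reported diorite grades, 0.46% Cu and 0.56 ppm Au give roughly 0.84% EqCu, close to the quoted 0.83%; the skarn and hornfels figures reproduce similarly.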
102 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the field and related disciplines matured, descriptive theories emerged that could explain systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences at random times over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. 
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretation of gamble attributes, and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness of the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
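For reference, the elements of risk preference under Cumulative Prospect Theory (value-function curvature, loss aversion, and probability weighting) combine as in the sketch below. It uses the Tversky-Kahneman (1992) functional forms and their median parameter estimates; these are standard defaults, not the estimates this study sets out to elicit.

```python
def value(x, alpha=0.88, lam=2.25):
    """Power value function with loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(p_gain, gain, loss):
    """CPT valuation of a mixed two-outcome gamble: win `gain` with
    probability p_gain, else incur `loss` (a negative number). Gains
    and losses use their own weighting parameters (0.61 and 0.69)."""
    return (weight(p_gain, 0.61) * value(gain)
            + weight(1 - p_gain, 0.69) * value(loss))
```

Loss aversion makes a symmetric 50-50 gamble over gaining or losing 100 unattractive: its CPT value is negative even though its expected value is zero.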
101 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
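Finding (1), that median latency is a better prediction target than average latency, comes down to outlier sensitivity. It can be seen on a made-up 5-minute window of latency samples containing one incident-driven outlier (illustrative numbers, not the Taiwan Freeway data):

```python
from statistics import mean, median

# Illustrative latencies (minutes) in one window; 95.0 is an incident outlier.
window = [9.8, 10.1, 10.0, 9.9, 10.2, 95.0]

avg = mean(window)    # dragged far above typical conditions by the outlier
med = median(window)  # stays representative of typical conditions
```

The mean jumps above 24 minutes while the median stays near 10, which is why a model trained on median latencies is both more accurate and more meaningful than one plagued by outliers in the averages.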
100 The Invisible Planner: Unearthing the Informal Dynamics Shaping Mixed-Use and Compact Development in Ghanaian Cities
Authors: Muwaffaq Usman Adam, Isaac Quaye, Jim Anbazu, Yetimoni Kpeebi, Michael Osei-Assibey
Abstract:
Urban informality, characterized by spontaneous and self-organized practices, plays a significant but often overlooked role in shaping the development of cities, particularly in the context of mixed-use and compact urban environments. This paper aims to explore the invisible planning processes inherent in informal practices and their influence on the urban form of Ghanaian cities. By examining the dynamic interplay between informality and formal planning, the study will discuss the ways in which informal actors shape and plan for mixed-use and compact development. Drawing on the synthesis of relevant secondary data, the research will begin by defining urban informality and identifying the factors that contribute to its prevalence in Ghanaian cities. It will delve into the concept of mixed-use and compact development, highlighting its benefits and importance in urban areas. Drawing on case studies, the paper will uncover the hidden planning processes that occur within informal settlements, showcasing their impact on the physical layout, land use, and spatial arrangements of Ghanaian cities. The study will also uncover the challenges and opportunities associated with informal planning. It examines the constraints faced by informal planners (actors) while also exploring the potential benefits and opportunities that emerge when informality is integrated into formal planning frameworks. By understanding the invisible planner, the research will offer valuable insights into how informal practices can contribute to sustainable and inclusive urban development. Based on the findings, the paper will present policy implications and recommendations. It highlights the need to bridge the policy gaps and calls for the recognition of informal planning practices within formal systems. Strategies are proposed to integrate informality into planning frameworks, fostering collaboration between formal and informal actors to achieve compact and mixed-use development in Ghanaian cities. 
This research underscores the importance of recognizing and leveraging the invisible planner in Ghanaian cities. By embracing informal planning practices, cities can achieve more sustainable, inclusive, and vibrant urban environments that meet the diverse needs of their residents. This research will also contribute to a deeper understanding of the complex dynamics between informality and planning, advocating for inclusive and collaborative approaches that harness the strengths of both formal and informal actors. The findings will likewise advance our understanding of informality's role as an invisible yet influential planner, shedding light on its spatial-planning implications for Ghanaian cities.
Keywords: informality, mixed uses, compact development, land use, Ghana
99 On Implementing Sumak Kawsay in Post Bellum Principles: The Reconstruction of Natural Damage in the Aftermath of War
Authors: Lisa Tragbar
Abstract:
In post-war scenarios, reconstruction is a principle for creating a Just Peace and restoring a stable post-war society. Just Peace theorists explore normative behaviour after war, including the duties and responsibilities of different actors and peacebuilding strategies for achieving a lasting, positive peace. Environmental peace ethicists have argued for including the role of nature in the ethics of war and peace. This text explores why and how to rethink the value of nature in post-war scenarios. The aim is to include the rights of nature within a maximalist account of reconstruction by highlighting sumak kawsay in the post-war period. Destruction of nature is usually considered collateral damage in war scenarios. Common universal standards for post-war reconstruction are restitution, compensation and reparation programmes, which is a mostly anthropocentric approach. The problem with such reconstruction is the merely instrumental value it assigns to nature; the responsibility to rebuild needs to be revisited in a non-anthropocentric context. There is an ongoing debate about minimalist versus maximalist approaches to post-war reconstruction. While Michael Walzer argues for minimalist in-and-out interventions, Alex Bellamy argues for maximalist strategies such as the responsibility to protect, a UN concept on how to face mass atrocity crimes and how to reconstruct peace. While supporting the tradition of a maximalist responsibility to rebuild, these normative post bellum concepts do not yet sufficiently consider the rights of nature in the aftermath of war. While the reconstruction of infrastructure seems important and necessary, concepts that strengthen the intrinsic value of nature in post bellum measures must also be included. Peace is not a Just Peace without a thriving nature that provides the conditions and resources to live and to guarantee human rights. 
Ecuador's indigenous philosophy of life can contribute to the restoration of nature after war by changing the perspective on the value of nature. Sumak kawsay includes the de-hierarchisation of humans and nature and the principle of reciprocity towards nature. Transferring this idea of life and interconnectedness to post-war reconstruction practices, post bellum perpetrators have restorative obligations not only to people but also to nature. This maximalist approach would include both a restitutive principle, by restoring the balance between humans and nature, and a retributive principle, by punishing the perpetrators through compensatory duties to nature. A maximalist approach to post-war reconstruction that takes into account the rights of nature expands the normative post-war questions to include a more complex field of responsibilities. After a war, Just Peace is restored only once not only human rights but also the rights of nature are secured. A minimalist post bellum approach to reconstruction does not locate future problems at their source and does not offer a solution for the inclusion of obligations to nature. There is a lack of obligations towards nature after a war, which can be changed through a different perspective: the indigenous philosophy of life provides the necessary principles for a comprehensive reconstruction of Just Peace.
Keywords: normative ethics, peace, post-war, sumak kawsay, applied ethics
Procedia PDF Downloads 78
98 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As a combination of tests is likely to predict early AD better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. 
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel MRI sequences, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may provide an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 573
97 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare, on the one hand, and the growth of the aging population with its associated chronic diseases, on the other, are putting an increasing burden on the current healthcare systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) represents a significant share of total healthcare costs. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. Home healthcare evidently offers numerous advantages, notably lower costs and higher patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, particularly in Germany. Many factors account for the low uptake of HHD. However, this paper focuses on the patient-safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify any risk-related factors that exist. A case study has been conducted in a dialysis organisation consisting of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, comprising six nephrologists and 120 other staff, i.e. nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training had no established curriculum until the recent development of a first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e. the machine and equipment provider), which may not consider the hygienic quality of the patient’s home. 
The type of machine provided to patients at home is the same as the one used in the center. This model may not be suitable for home use because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 119
96 A Comparison of the Microbiology Profile for Periprosthetic Joint Infection (PJI) of Knee Arthroplasty and Lower Limb Endoprostheses in Tumour Surgery
Authors: Amirul Adlan, Robert A. McCulloch, Neil Jenkins, Michael Parry, Jonathan Stevenson, Lee Jeys
Abstract:
Background and Objectives: The current antibiotic prophylaxis for oncological patients is based upon evidence from primary arthroplasty, despite significant differences in both patient group and procedure. The aim of this study was to compare the microbiological organisms responsible for PJI in patients who underwent two-stage revision for infected primary knee replacement with those of infected oncological endoprostheses of the lower limb in a single institution. This will subsequently guide decision-making regarding antibiotic prophylaxis at primary implantation for oncological procedures and empirical antibiotics for infected revision procedures (where the infecting organism(s) are unknown). Patients and Methods: 118 patients were treated with two-stage revision surgery for infected knee arthroplasty and lower limb endoprostheses between 1999 and 2019. 74 patients had two-stage revision for PJI of knee arthroplasty, and 44 had two-stage revision of lower limb endoprostheses. There were 68 males and 50 females. The mean ages for the knee arthroplasty cohort and the lower limb endoprosthesis cohort were 70.2 years (50-89) and 36.1 years (12-78), respectively (p<0.01). Patient host and extremity criteria were categorised according to the MSIS Host and Extremity Staging System. Microbiological cultures, the incidence of polymicrobial infection and multidrug resistance (MDR) were analysed and recorded. Results: Polymicrobial infection was reported in 16% (12 patients) of knee arthroplasty PJIs and 14.5% (8 patients) of endoprosthesis PJIs (p=0.783). There was a significantly higher incidence of MDR in endoprosthesis PJI, isolated in 36.4% of cultures, compared to knee arthroplasty PJI (17.2%) (p=0.01). Gram-positive organisms were isolated in more than 80% of cultures from both cohorts. Coagulase-negative Staphylococcus (CoNS) was the commonest Gram-positive organism, and Escherichia coli was the commonest Gram-negative organism in both groups. 
According to the MSIS staging system, the host and extremity grades of the knee arthroplasty PJI cohort were significantly better than those of the endoprosthesis PJI cohort (p<0.05). Conclusion: Empirical antibiotic management of PJI in orthopaedic oncology is based upon PJI in arthroplasty, despite differences in both host and microbiology. Our results show a significant increase in MDR pathogens within the oncological group, despite CoNS being the most common infective organism in both groups. Endoprosthetic patients presented with poorer host and extremity criteria. These factors should be considered when managing this complex patient group, emphasising the importance of broad-spectrum antibiotic prophylaxis and preoperative sampling to ensure appropriate perioperative antibiotic cover.
Keywords: microbiology, periprosthetic joint infection, knee arthroplasty, endoprostheses
Procedia PDF Downloads 116
95 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance for minimizing the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in catalytic oxidative denitrogenation was successfully elucidated. This is the first time such a pathway has been traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Hereby, the organic substrate serves as a reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur and nitrogen produce SO₂ and NOx during combustion in engines, which are not only harmful to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH₃. The lone pair of the nitrogen atom forms strong bonds to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize the desulfurization and denitrogenation effectiveness in comparison to extraction and adsorption alone, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step. 
The extraction step can also be carried out in parallel with the oxidation reaction, as a result of the in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H8PV5Mo7O40 (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, whereas the sulfur-containing fuel components are oxidized, after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization improves with simultaneous denitrogenation in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigation.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 180
94 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing severity and frequency of disasters worldwide, the governance systems for disaster risk reduction and management in many countries are being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121, or RA 10121) as the framework for disaster risk reduction and management was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest typhoon on record to make landfall, with winds exceeding 252 km/h. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements to RA 10121 are needed in order to meet the challenges of large-scale disasters. The current structure, where government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and rehabilitation and recovery, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government’s tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM. 
The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have the decision-making authority to mobilize the actions and resources of the other agencies that are members of the council. Resources have been devolved to the agencies responsible for each thematic area, and there is no clear command-and-direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery
Procedia PDF Downloads 287
93 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have grown in importance relative to shale oil reservoirs since 2009, and given the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production from 2008 to at most 2015, with 1,835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data was analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a log-normal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. 
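A minimal sketch of the EUR percentile workflow described above (a log-normal fit followed by Monte Carlo draws, standing in for the Palisade Risk step); the sample EUR values below are illustrative placeholders, not the study's Barnett data.

```python
import numpy as np

def eur_percentiles(eur_samples, n_iter=1000, seed=0):
    """Fit a log-normal distribution to observed EUR values and return
    P10/P50/P90 from a Monte Carlo run of n_iter draws."""
    rng = np.random.default_rng(seed)
    logs = np.log(np.asarray(eur_samples, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    draws = rng.lognormal(mu, sigma, size=n_iter)
    # Petroleum convention: P10 is the optimistic case (90th percentile),
    # P90 the conservative case (10th percentile).
    p10, p50, p90 = np.percentile(draws, [90, 50, 10])
    return p10, p50, p90

# Illustrative EUR values in BCF (hypothetical, not Barnett well data)
p10, p50, p90 = eur_percentiles([0.8, 1.2, 1.5, 2.1, 3.0, 4.2])
```

The three percentile values would then feed the economic model, one production profile per scenario.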
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year), and to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, to identify the basins that are most economic, and to determine the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
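The economic screen described above can be sketched as follows, assuming a flat monthly production profile for simplicity; the cash-flow figures are hypothetical, and only the 10%/20% discount rates and the 60-month payback hurdle are taken from the abstract.

```python
def npv(cash_flows, annual_rate=0.10):
    """NPV of monthly cash flows (index 0 = upfront F&D cost, negative)."""
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    return sum(cf / (1 + monthly) ** t for t, cf in enumerate(cash_flows))

def payback_month(cash_flows):
    """First month at which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

def passes_hurdle(fd_cost, monthly_revenue, months=120):
    """Investment hurdle from the study: a 20% rate of return
    (approximated as NPV > 0 at a 20% discount rate) and a payback
    period of 60 months or less."""
    flows = [-fd_cost] + [monthly_revenue] * months
    pb = payback_month(flows)
    return npv(flows, annual_rate=0.20) > 0 and pb is not None and pb <= 60
```

For example, a £2 million well earning a steady £60,000 per month passes the hurdle, while a £4 million well earning £30,000 per month does not.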
Procedia PDF Downloads 302
92 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree=1 and disagree=-1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. 
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
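A toy sketch of the encoding and update dynamics described in this abstract; the influence and noise magnitudes below are illustrative assumptions (chosen only so that noise dominates influence, as the data suggest), not the fitted model parameters.

```python
import random

def continuous_opinion(agrees, certainty):
    """Encode a natural-language answer plus certainty (1-10) as the
    'continuous opinion' on [-10, +10]: agree=+1, disagree=-1."""
    return (1 if agrees else -1) * certainty

def update(opinion, partner_agrees, influence=0.5, noise_sd=1.5, rng=random):
    """One interaction step: a small push toward the partner's stated
    side plus a random fluctuation larger than the influence term,
    clipped to the [-10, +10] scale."""
    push = influence if partner_agrees else -influence
    new = opinion + push + rng.gauss(0, noise_sd)
    return max(-10.0, min(10.0, new))

# Example: a participant who disagrees with certainty 8 hears "agree"
op = continuous_opinion(False, 8)   # encodes to -8
op = update(op, partner_agrees=True)
```

Averaged over many such interactions, the small push produces the observed net drift toward the partner's side, while individual trajectories remain noise-dominated.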
Procedia PDF Downloads 109