Search results for: Michael Ward
114 Screening for Larvicidal Activity of Aqueous and Ethanolic Extracts of Fourteen Selected Plants and Formulation of a Larvicide against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) Larvae
Authors: Michael Russelle S. Alvarez, Noel S. Quiming, Francisco M. Heralde
Abstract:
This study aimed to: a) obtain ethanolic (95% EtOH) and aqueous extracts of Selaginella elmeri, Christella dentata, Elatostema sinnatum, Curculigo capitulata, Euphorbia hirta, Murraya koenigii, Alpinia speciosa, Cymbopogon citratus, Eucalyptus globulus, Jatropha curcas, Psidium guajava, Gliricidia sepium, Ixora coccinea and Capsicum frutescens and screen them for larvicidal activity against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) larvae; b) fractionate the most active extract and determine the most active fraction; c) determine the larvicidal properties of the most active extract and fraction against both species by computing their percentage mortality, LC50, and LC90 after 24 and 48 hours of exposure; and d) determine the nature of the components of the active extracts and fractions using phytochemical screening. Ethanolic (95% EtOH) and aqueous extracts of the selected plants were screened for potential larvicidal activity against Ae. aegypti and Ae. albopictus using standard procedures, with 1% malathion and a Piper nigrum-based ovicide-larvicide from the Department of Science and Technology as positive controls. The results were analyzed using one-way ANOVA with Tukey’s and Dunnett’s tests. The most active extract was subjected to partial fractionation using normal-phase column chromatography, and the fractions were subsequently screened to determine the most active fraction. The most active extract and fraction were subjected to dose-response assay and probit analysis to determine the LC50 and LC90 after 24 and 48 hours of exposure. The active extracts and fractions were screened for phytochemical content. The ethanolic extracts of C. citratus, E. hirta, I. coccinea, G. sepium, M. koenigii, E. globulus, J. curcas and C. frutescens exhibited significant larvicidal activity, with C. frutescens being the most active. After fractionation, the ethyl acetate fraction was found to be the most active.
Phytochemical screening of the extracts revealed the presence of alkaloids, tannins, indoles and steroids. A formulation in talcum powder (300 mg of fraction per 1 g of talcum powder) was made and again tested for larvicidal activity. At 2 g/L, the formulation proved effective in killing all of the test larvae after 24 hours.
Keywords: larvicidal activity screening, partial purification, dose-response assay, Capsicum frutescens
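The probit analysis used to derive LC50 and LC90 values can be sketched as a Finney-style log-dose regression; the dose-mortality figures below are made up for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 24-h dose-mortality data (mg/mL) for illustration only.
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
mortality = np.array([0.10, 0.25, 0.50, 0.80, 0.95])  # proportion of larvae dead

# Finney-style probit analysis: regress probit(mortality) on log10(dose),
# then invert the fitted line for LC50 and LC90.
x = np.log10(doses)
y = norm.ppf(mortality)                 # probit transform
slope, intercept = np.polyfit(x, y, 1)  # least-squares line

lc50 = 10 ** ((norm.ppf(0.50) - intercept) / slope)  # probit(0.5) = 0
lc90 = 10 ** ((norm.ppf(0.90) - intercept) / slope)
print(f"LC50 = {lc50:.2f} mg/mL, LC90 = {lc90:.2f} mg/mL")
```

In practice, a maximum-likelihood probit fit (e.g. a GLM with a probit link) with confidence limits would replace this simple linear fit.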
Procedia PDF Downloads 328
113 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model during meetings with the professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data were analyzed through content analysis, and quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was developed by focus discussion group (FDG) members in five main phases. Then, with the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables.
The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant-risk threshold (RPN > 250). Over a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of the misidentification events were compared, and the RPN numbers of the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group to propose improvement actions. The most important cause was the lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or only a prospective tool for risk management does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
Keywords: failure mode and effects analysis, risk management, root cause analysis, model
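The RPN screening step can be illustrated with the classic severity x occurrence x detectability product. The scores below are hypothetical, and the study's edited tables use multiple criteria per dimension rather than single 1-10 scores:

```python
# Minimal FMEA prioritization sketch. Scores (severity, occurrence,
# detectability on 1-10 scales) are hypothetical illustrations only.
failure_modes = {
    "patient misidentification": (9, 4, 7),
    "wrong-side surgery":        (10, 2, 8),
    "delayed antibiotic dose":   (6, 5, 4),
}
THRESHOLD = 250  # the study treats RPN > 250 as a significant risk

# Classic RPN = severity x occurrence x detectability.
rpn = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
significant = {m: v for m, v in rpn.items() if v > THRESHOLD}

for mode, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    flag = "ACT" if value > THRESHOLD else "monitor"
    print(f"{mode:28s} RPN = {value:3d} [{flag}]")
```

Note how a moderately severe but frequent, hard-to-detect failure can outrank a catastrophic-but-rare one, which is why RPN thresholds are paired with clinical judgment.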
Procedia PDF Downloads 248
112 Phytochemistry and Alpha-Amylase Inhibitory Activities of Rauvolfia vomitoria (Afzel) Leaves and Picralima nitida (Stapf) Seeds
Authors: Oseyemi Omowunmi Olubomehin, Olufemi Michael Denton
Abstract:
Diabetes mellitus is a disease related to the digestion of carbohydrates, proteins and fats and how this affects blood glucose levels. Various synthetic drugs employed in the management of the disease work through different mechanisms. Keeping postprandial blood glucose levels within an acceptable range is a major factor in the management of type 2 diabetes and its complications. Thus, the inhibition of carbohydrate-hydrolyzing enzymes such as α-amylase is an important strategy for lowering postprandial blood glucose levels, but synthetic inhibitors have undesirable side effects such as flatulence, diarrhea and gastrointestinal disorders, to mention a few. Therefore, it is necessary to identify and explore α-amylase inhibitors from plants due to their availability, safety, and low cost. In the present study, extracts from the leaves of Rauvolfia vomitoria and seeds of Picralima nitida, which are used in the Nigerian traditional system of medicine to treat diabetes, were tested for their α-amylase inhibitory effect. The powdered plant samples were subjected to phytochemical screening using standard procedures. The leaves and seeds were macerated successively using n-hexane, ethyl acetate and methanol, yielding crude extracts which, at different concentrations (0.1, 0.5 and 1 mg/mL) alongside the standard drug acarbose, were subjected to the α-amylase inhibitory assay using the Benfield and Miller methods with slight modification. Statistical analysis was done using ANOVA, SPSS version 2.0. The phytochemical screening results of the leaves of Rauvolfia vomitoria and the seeds of Picralima nitida showed the presence of alkaloids, tannins, saponins and cardiac glycosides, while, in addition, Rauvolfia vomitoria contained phenols and Picralima nitida contained terpenoids.
The α-amylase assay results revealed that at 1 mg/mL, the methanol, hexane, and ethyl acetate extracts of the leaves of Rauvolfia vomitoria gave 15.74%, 23.13% and 26.36% α-amylase inhibition, respectively, and those of the seeds of Picralima nitida gave 15.50%, 30.68% and 36.72% inhibition; these were not significantly different from the control at p < 0.05, while acarbose gave a significant 56% inhibition at p < 0.05. The presence of alkaloids, phenols, tannins, steroids, saponins, cardiac glycosides and terpenoids in these plants is responsible for the observed anti-diabetic activity. However, the low percentages of α-amylase inhibition by these plant samples show that α-amylase inhibition is not the major mechanism by which either plant exerts its anti-diabetic effect.
Keywords: alpha-amylase, Picralima nitida, postprandial hyperglycemia, Rauvolfia vomitoria
Procedia PDF Downloads 191
111 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between these tools' academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows a team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles such as Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles such as Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 107
110 Prenatal Paraben Exposure Impacts Infant Overweight Development and in vitro Adipogenesis
Authors: Beate Englich, Linda Schlittenbauer, Christiane Pfeifer, Isabel Kratochvil, Michael Borte, Gabriele I. Stangl, Martin von Bergen, Thorsten Reemtsma, Irina Lehmann, Kristin M. Junge
Abstract:
The worldwide production of endocrine disrupting compounds (EDCs) has risen dramatically over the last decades, as has the prevalence of obesity. Many EDCs are believed to contribute to this obesity epidemic by enhancing adipogenesis or disrupting relevant metabolism. The effect is most pronounced in the early prenatal period, when priming effects meet a highly vulnerable time window. We therefore investigated the impact of parabens on childhood overweight development and on adipogenesis in general. Parabens are esters of 4-hydroxybenzoic acid and are found in many cosmetic products and food packaging. Exposure is therefore ubiquitous in the westernized world and already starts during the sensitive prenatal period. We assessed maternal cosmetic product consumption, prenatal paraben exposure and infant BMI z-scores in the prospective German LINA cohort. In detail, maternal urinary concentrations (34 weeks of gestation) of methyl paraben (MeP), ethyl paraben (EtP), n-propyl paraben (PrP) and n-butyl paraben (BuP) were quantified using UPLC-MS/MS. Body weight and height of the children were assessed during annual clinical visits. Further, we investigated the direct influence of these parabens on adipogenesis in vitro using a human mesenchymal stem cell (MSC) differentiation assay to mimic a prenatal exposure scenario. MSCs were exposed to 0.1-50 µM paraben during the entire differentiation period. Differentiation outcome was monitored by impedance spectrometry, real-time PCR and triglyceride staining. We found that maternal cosmetic product consumption was highly correlated with urinary paraben concentrations during pregnancy. Further, prenatal paraben exposure was linked to higher BMI z-scores in children. Our in vitro analysis revealed that especially the long-chained paraben BuP stimulates adipogenesis by increasing the expression of adipocyte-specific genes (PPARγ, ADIPOQ, LPL, etc.) and triglyceride storage.
Moreover, we found that adiponectin secretion is increased whereas leptin secretion is reduced under BuP exposure in vitro. Further mechanistic analyses of receptor binding and activation of PPARγ and other key players in adipogenesis are currently in progress. We conclude that maternal cosmetic product consumption is linked to prenatal paraben exposure of children and contributes to infant overweight development by triggering key pathways of adipogenesis.
Keywords: adipogenesis, endocrine disruptors, paraben, prenatal exposure
Procedia PDF Downloads 271
109 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study
Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon
Abstract:
The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. Objective: to compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 - March 2021). A population-based study using provincial health administrative data on the general adult population (Ontario, Canada). Monthly all-cause mortality, hospitalization, emergency department (ED) and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first year were compared with similar periods in previous years (2016-2019) by fitting monthly time series auto-regressive integrated moving-average (ARIMA) models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected of 71.24 [66.35-76.50]) and then returned to expected in June 2020, except among immigrants and people with mental health conditions, in whom they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected of 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected of 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected of 5,060.23 [4,712.64-5,433.46]) and then returned to expected in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with temporally increased mortality, disproportionately experienced by immigrants and those with mental health conditions.
Funding: The Ottawa Hospital Academic Medical Organization.
Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy
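The observed-versus-projected comparison can be sketched with a bare AR(1) model standing in for the ARIMA models the study fits; the monthly baseline series below is synthetic, not the Ontario data, and only the observed spike value is taken from the abstract:

```python
import numpy as np

# Toy observed-vs-projected comparison with synthetic monthly mortality
# rates; a bare AR(1) stands in for the study's ARIMA models.
rng = np.random.default_rng(0)
baseline = 70 + rng.normal(0, 1.5, size=48)  # 2016-2019 monthly rates (made up)

# Fit AR(1): y_t = c + phi * y_{t-1} by least squares.
y_prev, y_next = baseline[:-1], baseline[1:]
phi, c = np.polyfit(y_prev, y_next, 1)

# One-step projection with a rough 95% band from the residuals.
resid = y_next - (c + phi * y_prev)
proj = c + phi * baseline[-1]
lo, hi = proj - 1.96 * resid.std(), proj + 1.96 * resid.std()

observed = 79.98  # rate reported for March-May 2020
print(f"projected {proj:.1f} [{lo:.1f}, {hi:.1f}] vs observed {observed:.2f}")
print("above projected band:", observed > hi)
```

A full ARIMA implementation would additionally handle trend and seasonality; the logic of flagging an observed rate that falls outside the projected band is the same.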
Procedia PDF Downloads 91
108 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify the variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fisher exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Of the 408 patients who underwent CABG at our institution during the study period, eight (1.9%) required drainage of a pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups.
Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p < 0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) as associated with the development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were found to be associated with the development of clinically significant pericardial effusions. High clinical suspicion and a low threshold for transthoracic echocardiography are essential to ensure this potentially lethal condition is not missed.
Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
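The Fisher exact tests on dichotomous variables can be sketched on a 2x2 table. The counts below are reconstructed to approximate the reported proportions (all 8 effusion patients with postoperative AF vs roughly 32% of 400 controls) and are not patient-level data:

```python
from scipy.stats import fisher_exact

# Illustrative 2x2 table for one dichotomous variable (postoperative
# atrial fibrillation); counts only approximate the reported rates.
table = [[8, 0],      # effusion:    AF yes / AF no
         [128, 272]]  # no effusion: AF yes / AF no

odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact p = {p_value:.2e}")
```

With a zero cell, the sample odds ratio is infinite; the exact p-value remains well-defined, which is one reason the Fisher test suits small event counts like the 8 cases here.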
Procedia PDF Downloads 159
107 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine Characteristic Orebody of the DMLZ Mine
Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo
Abstract:
The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the EESS that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit between the 3,125 m and 2,590 m elevations, measures roughly 1,200 m in length, and is between 350 and 500 m wide. Mining of the DMLZ was planned to start in Q2 2015 at an ore extraction rate of about 60,000 tpd using the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. By percentage of material mined against current reserves, the DMLZ comprises diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu), skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu), hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu), and marble 7%, likely to be mined as waste. Correlating the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristics of the DMLZ orebody. The DMLZ generally has two types of vein-filling mineralization across its two hosts: in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite, without quartz.
Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The DMLZ is controlled by two main major faults; geologists found and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.
Keywords: copper-gold, DMLZ, skarn, structure
Procedia PDF Downloads 501
106 A Fresh Approach to Learn Evidence-Based Practice, a Prospective Interventional Study
Authors: Ebtehal Qulisy, Geoffrey Dougherty, Kholoud Hothan, Mylene Dandavino
Abstract:
Background: For more than 200 years, journal clubs (JCs) have been used to teach the fundamentals of critical appraisal and evidence-based practice (EBP). However, JC curricula face important challenges, including poor sustainability, insufficient time to prepare for and conduct the activities, and lack of trainee skills and self-efficacy with critical appraisal. Andragogy principles and modern technology could help EBP be taught in more relevant, modern, and interactive ways. Method: We propose a fresh educational activity to teach EBP. Educational sessions are designed to encourage collaborative and experiential learning and do not require advance preparation by the participants. Each session lasts 60 minutes and is adaptable to in-person, virtual, or hybrid contexts. Sessions are structured around a worksheet and include three educational objectives: “1. Identify a Clinical Conundrum”, “2. Compare and Contrast Current Guidelines”, and “3. Choose a Recent Journal Article”. Sessions begin with a facilitator briefly presenting a clinical scenario highlighting a “grey zone” in pediatrics. Trainees are placed in groups of two to four (depending on the number of participants) of varied training levels. The first task requires the identification of a clinical conundrum (a situation where there is no clear answer but only a reasonable solution) related to the scenario. For the second task, trainees must identify two or three clinical guidelines. The last task requires trainees to find a journal article published in the last year that reports an update on the scenario’s topic. Participants are allowed to use their electronic devices throughout the session. Our university provides full-text access to major journals, which facilitated this exercise. Results: Participants were a convenience sample of trainees in the inpatient services at the Montréal Children’s Hospital, McGill University.
Sessions were conducted as part of an existing weekly academic activity and facilitated by pediatricians with experience in critical appraisal. There were 28 participants across 4 sessions held during Spring 2022. Time was allocated at the end of each session to collect participants’ feedback via a self-administered online survey. There were 22 responses: 41% (n=9) pediatric residents, 22.7% (n=5) family medicine residents, 31.8% (n=7) medical students, and 4.5% (n=1) a nurse practitioner. Four respondents participated in more than one session. The “Satisfied” rates were 94.7% for session format, 100% for topic selection, 89.5% for time allocation, and 84.3% for worksheet structure. 60% of participants felt that including the sessions during the clinical ward rotation was “Feasible”. Regarding self-efficacy, participants reported being “Confident” in the tasks as follows: 89.5% for the ability to identify a relevant conundrum, 94.8% for the compare-and-contrast task, and 84.2% for the identification of a published update. The perceived effectiveness for learning EBP was reported as “Agreed” by all participants. All participants would recommend this session for further teaching. Conclusion: We developed a modern approach to teaching EBP, enjoyed by all levels of participants, who also felt it was a useful learning experience. Our approach addresses known JC challenges by being relevant to clinical care, fostering active engagement while requiring no preparation, using available technology, and being adaptable to hybrid contexts.
Keywords: medical education, journal clubs, post-graduate teaching, andragogy, experiential learning, evidence-based practice
Procedia PDF Downloads 114
105 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of Microeconomics and Mathematical Psychology. Early theories in Decision Science rested on the logic of rationality, but as it and related fields matured, descriptive theories emerged that were capable of explaining systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will assess visceral states and risk preferences at random times over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods.
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness of the subconscious effect that emotions and other drive states have on the way people process and interpret information and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
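The elements of risk preference under Cumulative Prospect Theory (value curvature, loss aversion, probability weighting) can be sketched with Tversky and Kahneman's 1992 functional forms. The parameters are their published median estimates, not values elicited in this study, and the two-outcome weighting is simplified to a single weighting function:

```python
# Tversky & Kahneman (1992) CPT forms with their median parameter
# estimates; illustrative only, not this study's elicited values.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def value(x: float) -> float:
    """S-shaped value function: concave for gains, steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p: float) -> float:
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# A 50/50 gamble to win or lose 100: expected value is zero, but loss
# aversion makes its CPT valuation negative.
cpt = weight(0.5) * value(100) + weight(0.5) * value(-100)
print(f"CPT value of the mixed gamble: {cpt:.1f}")
```

A visceral-state moderator in this framework would be modelled as a shift in one of these parameters, e.g. anger lowering LAMBDA or fear steepening the weighting of small loss probabilities.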
Procedia PDF Downloads 148
104 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment 17 km in length covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment at different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked by importance.
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
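Conclusion (1) above, that median latency is robust where average latency is dragged by outliers, can be illustrated with a minimal, self-contained sketch; the latency values below are hypothetical and not drawn from the Taiwan Freeway dataset:

```python
import statistics

# Hypothetical 5-minute latency samples (seconds) for one freeway segment;
# a single incident-driven outlier inflates the mean but barely moves the median.
latencies = [310, 305, 312, 308, 1500, 307, 311, 306]

mean_latency = statistics.mean(latencies)      # pulled up by the 1500 s outlier
median_latency = statistics.median(latencies)  # stays near the typical value

print(round(mean_latency, 1))   # 457.4
print(median_latency)           # 309.0
```

A predictor trained against medians therefore tracks typical conditions rather than incidents, consistent with the improvement reported for median-latency prediction above.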
Procedia PDF Downloads 167
103 The Invisible Planner: Unearthing the Informal Dynamics Shaping Mixed-Use and Compact Development in Ghanaian Cities
Authors: Muwaffaq Usman Adam, Isaac Quaye, Jim Anbazu, Yetimoni Kpeebi, Michael Osei-Assibey
Abstract:
Urban informality, characterized by spontaneous and self-organized practices, plays a significant but often overlooked role in shaping the development of cities, particularly in the context of mixed-use and compact urban environments. This paper aims to explore the invisible planning processes inherent in informal practices and their influence on the urban form of Ghanaian cities. By examining the dynamic interplay between informality and formal planning, the study will discuss the ways in which informal actors shape and plan for mixed-use and compact development. Drawing on the synthesis of relevant secondary data, the research will begin by defining urban informality and identifying the factors that contribute to its prevalence in Ghanaian cities. It will delve into the concept of mixed-use and compact development, highlighting its benefits and importance in urban areas. Drawing on case studies, the paper will uncover the hidden planning processes that occur within informal settlements, showcasing their impact on the physical layout, land use, and spatial arrangements of Ghanaian cities. The study will also uncover the challenges and opportunities associated with informal planning. It examines the constraints faced by informal planners (actors) while also exploring the potential benefits and opportunities that emerge when informality is integrated into formal planning frameworks. By understanding the invisible planner, the research will offer valuable insights into how informal practices can contribute to sustainable and inclusive urban development. Based on the findings, the paper will present policy implications and recommendations. It highlights the need to bridge the policy gaps and calls for the recognition of informal planning practices within formal systems. Strategies are proposed to integrate informality into planning frameworks, fostering collaboration between formal and informal actors to achieve compact and mixed-use development in Ghanaian cities. 
This research underscores the importance of recognizing and leveraging the invisible planner in Ghanaian cities. By embracing informal planning practices, cities can achieve more sustainable, inclusive, and vibrant urban environments that meet the diverse needs of their residents. This research will also contribute to a deeper understanding of the complex dynamics between informality and planning, advocating for inclusive and collaborative approaches that harness the strengths of both formal and informal actors. The findings will likewise contribute to advancing our understanding of informality's role as an invisible yet influential planner, shedding light on its spatial planning implications for Ghanaian cities.
Keywords: informality, mixed-uses, compact development, land use, Ghana
Procedia PDF Downloads 123
102 On Implementing Sumak Kawsay in Post Bellum Principles: The Reconstruction of Natural Damage in the Aftermath of War
Authors: Lisa Tragbar
Abstract:
In post-war scenarios, reconstruction is a guiding principle for creating a Just Peace and restoring a stable post-war society. Just peace theorists explore normative behaviour after war, including the duties and responsibilities of different actors and peacebuilding strategies to achieve a lasting, positive peace. Environmental peace ethicists have argued for including the role of nature in the Ethics of War and Peace. This text explores the question of why and how to rethink the value of nature in post-war scenarios. The aim is to include the rights of nature within a maximalist account of reconstruction by highlighting sumak kawsay in the post-war period. Destruction of nature is usually considered collateral damage in war scenarios. Common universal standards for post-war reconstruction are restitution, compensation and reparation programmes, which is a mostly anthropocentric approach. The problem of reconstruction in the aftermath of war is the instrumental value of nature. The responsibility to rebuild needs to be revisited within a non-anthropocentric context. There is an ongoing debate about a minimalist or maximalist approach to post-war reconstruction. While Michael Walzer argues for minimalist in-and-out interventions, Alex Bellamy argues for maximalist strategies such as the responsibility to protect, a UN concept on how to face mass atrocity crimes and how to reconstruct peace. While supporting the tradition of maximalist responsibility to rebuild, these normative post-bellum concepts do not yet sufficiently consider the rights of nature in the aftermath of war. While reconstruction of infrastructures seems important and necessary, concepts that strengthen the intrinsic value of nature in post-bellum measures must also be included. Peace is not Just Peace without a thriving nature that provides the conditions and resources to live and guarantee human rights.
Ecuador's indigenous philosophy of life can contribute to the restoration of nature after war by changing the perspective on the value of nature. The sumak kawsay includes the de-hierarchisation of humans and nature and the principle of reciprocity towards nature. Transferring this idea of life and interconnectedness to post-war reconstruction practices, post bellum perpetrators have restorative obligations not only to people but also to nature. This maximalist approach would include both a restitutive principle, by restoring the balance between humans and nature, and a retributive principle, by punishing the perpetrators through compensatory duties to nature. A maximalist approach to post-war reconstruction that takes into account the rights of nature expands the normative post-war questions to include a more complex field of responsibilities. After a war, Just Peace is restored once not only human rights but also the rights of nature are secured. A minimalist post-bellum approach to reconstruction does not locate future problems at their source and does not offer a solution for the inclusion of obligations to nature. There is a lack of obligations towards nature after a war, which can be changed through a different perspective: The indigenous philosophy of life provides the necessary principles for a comprehensive reconstruction of Just Peace.
Keywords: normative ethics, peace, post-war, sumak kawsay, applied ethics
Procedia PDF Downloads 77
101 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and dentate gyrus are the hippocampal subfields with most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year.
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 572
100 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare on one hand, and the rise in the aging population and associated chronic disease on the other, are putting an increasing burden on the current health care system in many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) contributes significantly to the total health care cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. It is evident that home healthcare offers numerous advantages, notably low costs and high patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD. However, this paper focuses on patients’ safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study was conducted at a dialysis provider which operates four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administrators. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, they have just recently developed the first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e., the machines and equipment provider), and they may not consider the hygienic quality of the patient’s home.
The type of machine provided to patients at home is similar to the one in the center. This model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carer at home; however, telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers are lacking medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 117
99 A Comparison of the Microbiology Profile for Periprosthetic Joint Infection (PJI) of Knee Arthroplasty and Lower Limb Endoprostheses in Tumour Surgery
Authors: Amirul Adlan, Robert A McCulloch, Neil Jenkins, Michael Parry, Jonathan Stevenson, Lee Jeys
Abstract:
Background and Objectives: The current antibiotic prophylaxis for oncological patients is based upon evidence from primary arthroplasty despite significant differences in both patient group and procedure. The aim of this study was to compare the microbiology organisms responsible for PJI in patients who underwent two-stage revision for infected primary knee replacement with those of infected oncological endoprostheses of the lower limb in a single institution. This will subsequently guide decision making regarding antibiotic prophylaxis at primary implantation for oncological procedures and empirical antibiotics for infected revision procedures (where the infecting organism(s) are unknown). Patient and Methods: 118 patients were treated with two-stage revision surgery for infected knee arthroplasty and lower limb endoprostheses between 1999 and 2019. 74 patients had two-stage revision for PJI of knee arthroplasty, and 44 had two-stage revision of lower limb endoprostheses. There were 68 males and 50 females. The mean age for the knee arthroplasty cohort and lower limb endoprostheses cohort were 70.2 years (50-89) and 36.1 years (12-78), respectively (p<0.01). Patient host and extremity criteria were categorised according to the MSIS Host and Extremity Staging System. Patient microbiological culture, the incidence of polymicrobial infection and multi-drug resistance (MDR) were analysed and recorded. Results: Polymicrobial infection was reported in 16% (12 patients) from knee arthroplasty PJI and 14.5% (8 patients) in endoprostheses PJI (p=0.783). There was a significantly higher incidence of MDR in endoprostheses PJI, isolated in 36.4% of cultures, compared to knee arthroplasty PJI (17.2%) (p=0.01). Gram-positive organisms were isolated in more than 80% of cultures from both cohorts. Coagulase-negative Staphylococcus (CoNS) was the commonest gram-positive organism, and Escherichia coli was the commonest Gram-negative organism in both groups. 
According to the MSIS staging system, the host and extremity grades of the knee arthroplasty PJI cohort were significantly better than those of the endoprostheses PJI cohort (p<0.05). Conclusion: Empirical antibiotic management of PJI in orthopaedic oncology is based upon PJI in arthroplasty despite differences in both host and microbiology. Our results show a significant increase in MDR pathogens within the oncological group despite CoNS being the most common infective organism in both groups. Endoprosthetic patients presented with poorer host and extremity criteria. These factors should be considered when managing this complex patient group, emphasising the importance of broad-spectrum antibiotic prophylaxis and preoperative sampling to ensure appropriate perioperative antibiotic cover.
Keywords: microbiology, periprosthetic joint infection, knee arthroplasty, endoprostheses
Procedia PDF Downloads 114
98 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance to minimize the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in catalytic oxidative denitrogenation was successfully elucidated. This is the first time such a pathway could be traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Hereby, the organic substrate serves as a reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur or nitrogen produce SO₂ and NOx during combustion in the engines, which is not only harmful to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH₃. The lone pair of the nitrogen atom forms strong pi-bonds to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize the desulfurization and denitrogenation effectiveness in comparison to just extraction and adsorption, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step.
The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H₈PV₅Mo₇O₄₀ (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, whereas the sulfur-containing fuel components are oxidized after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization improves with simultaneous denitrification in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigations.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 178
97 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing scale of severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121 or RA 10121) as the framework for disaster risk reduction and management was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered to be the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis will argue that enhancements are needed in RA 10121 in order to meet the challenges of large-scale disasters. The current structure, where government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and rehabilitation and recovery, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government’s tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM.
The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan because the agency responsible for coordination does not have decision-making authority to mobilize the action and resources of other agencies which are members of the council. Resources have been devolved to agencies responsible for each thematic area, and there is no clear command and direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad-hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, typhoon haiyan response and recovery
Procedia PDF Downloads 286
96 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance compared to shale oil reservoirs since 2009, and with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is of importance. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend-arch Basin), with analysis conducted on each of the basins to provide a holistic outlook. The dataset consisted of only horizontal wells that started production from 2008 to at most 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-arch Basin and 724 wells from the Fort Worth Syncline. The data was analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade @RISK software, and a log-normal distribution typical of Barnett shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50 and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development costs and gas price on the net present value (10% discount rate/year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. Among the major findings of this study: wells in the Bend-arch Basin were the least economic; higher gas prices are needed in basins containing non-core counties; and 90% of the Barnett shale wells were not economic at all finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
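The screening logic described above, a net present value at a 10%/year discount rate plus a 60-month payback hurdle, can be sketched as follows. The helper functions and cash-flow figures are illustrative assumptions, not the authors' actual model:

```python
def npv(cash_flows, annual_rate=0.10):
    """NPV of monthly net cash flows; month 0 carries the upfront F&D cost."""
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    return sum(cf / (1 + monthly_rate) ** t for t, cf in enumerate(cash_flows))

def payback_month(cash_flows):
    """First month at which cumulative cash flow turns non-negative, else None."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical well: £2 million F&D cost, then a flat £50,000/month of net
# gas revenue for 10 years (real wells decline, so this is purely illustrative).
flows = [-2_000_000] + [50_000] * 120

print(npv(flows) > 0)        # True: positive NPV at a 10% discount rate
print(payback_month(flows))  # 40, inside the 60-month hurdle
```

In the paper's workflow, `flows` would instead come from the rescaled P10/P50/P90 production profiles multiplied by the gas price, net of the finding and development cost.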
Procedia PDF Downloads 299
95 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear if different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking if different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree=1 and disagree=-1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they changed their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8, will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. Also, this configuration is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
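The "continuous opinion" encoding described above (certainty in [1, 10], signed by agree = +1 and disagree = -1) can be sketched together with an update rule matching the reported findings, i.e., a small social-influence shift dominated by random noise. The coefficients are illustrative assumptions, not the fitted model:

```python
import random

def continuous_opinion(agree: bool, certainty: int) -> int:
    """Map an 'agree'/'disagree' answer with certainty 1-10 to [-10, 10]."""
    return certainty if agree else -certainty

def update(opinion, shown_agree, influence=0.5, noise_sd=1.5, rng=random.Random(0)):
    """One interaction: a small shift toward the shown opinion plus larger noise."""
    shift = influence if shown_agree else -influence
    new = opinion + shift + rng.gauss(0, noise_sd)
    return max(-10.0, min(10.0, new))  # keep the value on the bounded scale

print(continuous_opinion(True, 8))                # 8
print(continuous_opinion(False, 3))               # -3
print(-10 <= update(8, shown_agree=False) <= 10)  # True
```

Note that `noise_sd > influence` encodes the observation above that random fluctuations outweigh social influence, the reverse of most standard models.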
Procedia PDF Downloads 108
94 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention which involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included partial-body cryotherapy (PBC) instead of WBC, and if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals.
One of the four articles revealed that WBC had a harmful effect compared to cold-water immersion (CWI) and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus in terms of WBC’s effectiveness at treating exercise-induced muscle damage following running compared to other interventions, it seems as though WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted in order to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment.
Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
Procedia PDF Downloads 237
93 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of the two 'degrees of freedom' and how they impact the model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the numerical representation of opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect using mathematical modeling and then validate our analysis with agent-based simulations. Firstly, we study the case of perfect scales. 
In this way, we show that scale transformations affect the model’s dynamics up to the qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions in the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., in the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data. Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
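To illustrate how a monotonic rescaling of the opinion axis can change a model's output, here is a minimal Python sketch: a bounded-confidence (Deffuant-style) model is run on the same population measured under two scales, the original one and a hypothetical nonlinear rescaling. The model choice, parameters, and cube transformation are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=0):
    """Bounded-confidence dynamics: random pairs average their opinions
    only if they differ by less than the confidence bound eps."""
    rng = np.random.default_rng(seed)
    x = opinions.astype(float).copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        if abs(x[i] - x[j]) < eps:
            d = mu * (x[j] - x[i])
            x[i] += d
            x[j] -= d
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters: gaps wider than tol separate clusters."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

rng = np.random.default_rng(42)
raw = rng.uniform(0, 1, 200)   # opinions on the original scale
rescaled = raw ** 3            # same opinions under a nonlinear rescaling

# The underlying population is identical; only the measurement scale differs,
# yet the resulting dynamics (e.g., the cluster count) need not agree.
print("clusters (original scale):", n_clusters(deffuant(raw)))
print("clusters (rescaled):      ", n_clusters(deffuant(rescaled)))
```

Because the update rule averages pairs of numbers, it is not invariant under nonlinear monotonic transformations, which is exactly the third degree of freedom the abstract describes.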
Procedia PDF Downloads 118
92 Optimizing Weight Loss with AI (GenAISᵀᴹ): A Randomized Trial of Dietary Supplement Prescriptions in Obese Patients
Authors: Evgeny Pokushalov, Andrey Ponomarenko, John Smith, Michael Johnson, Claire Garcia, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Sevda Bayramova, Richard Miller
Abstract:
Background: Obesity is a complex, multifactorial chronic disease that poses significant health risks. Recent advancements in artificial intelligence (AI) offer the potential for more personalized and effective dietary supplement (DS) regimens to promote weight loss. This study aimed to evaluate the efficacy of AI-guided DS prescriptions compared to standard physician-guided DS prescriptions in obese patients. Methods: This randomized, parallel-group pilot study enrolled 60 individuals aged 40 to 60 years with a body mass index (BMI) of 25 or greater. Participants were randomized to receive either AI-guided DS prescriptions (n = 30) or physician-guided DS prescriptions (n = 30) for 180 days. The primary endpoints were the percentage change in body weight and the proportion of participants achieving a ≥5% weight reduction. Secondary endpoints included changes in BMI, fat mass, visceral fat rating, systolic and diastolic blood pressure, lipid profiles, fasting plasma glucose, hsCRP levels, and postprandial appetite ratings. Adverse events were monitored throughout the study. Results: Both groups were well balanced in terms of baseline characteristics. Significant weight loss was observed in the AI-guided group, with a mean reduction of -12.3% (95% CI: -13.1 to -11.5%) compared to -7.2% (95% CI: -8.1 to -6.3%) in the physician-guided group, resulting in a treatment difference of -5.1% (95% CI: -6.4 to -3.8%; p < 0.01). At day 180, 84.7% of the AI-guided group achieved a weight reduction of ≥5%, compared to 54.5% in the physician-guided group (Odds Ratio: 4.3; 95% CI: 3.1 to 5.9; p < 0.01). Significant improvements were also observed in BMI, fat mass, and visceral fat rating in the AI-guided group (p < 0.01 for all). Postprandial appetite suppression was greater in the AI-guided group, with significant reductions in hunger and prospective food consumption, and increases in fullness and satiety (p < 0.01 for all). 
Adverse events were generally mild-to-moderate, with higher incidences of gastrointestinal symptoms in the AI-guided group, but these were manageable and did not impact adherence. Conclusion: The AI-guided dietary supplement regimen was more effective in promoting weight loss, improving body composition, and suppressing appetite compared to the physician-guided regimen. These findings suggest that AI-guided, personalized supplement prescriptions could offer a more effective approach to managing obesity. Further research with larger sample sizes is warranted to confirm these results and optimize AI-based interventions for weight loss. Keywords: obesity, AI-guided, dietary supplements, weight loss, personalized medicine, metabolic health, appetite suppression
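The kind of odds ratio reported above (responders achieving ≥5% weight loss in each arm) can be computed from a 2×2 table with a 95% confidence interval via Woolf's log-odds method. The sketch below uses hypothetical counts chosen only for illustration, not the trial's raw data.

```python
import math

# Hypothetical 2x2 counts (illustrative only, not the trial's data):
# responders (>=5% weight loss) vs. non-responders in each arm.
a, b = 25, 5    # AI-guided: responders, non-responders
c, d = 16, 14   # physician-guided: responders, non-responders

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI: {lo:.2f} to {hi:.2f}")
```

A confidence interval whose lower bound stays above 1 corresponds to the significant between-group difference the abstract reports.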
Procedia PDF Downloads 3
91 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time, it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and determine if this travel is different from that of the general public. The research investigation used Florida's statewide continuous-count station traffic volumes, which were then compared between years to identify locations where traffic was moving differently during the evacuation. The data was then used to identify days on which traffic was significantly different between years. While the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. The data collection and management phase obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification. The data was then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different and on which dates statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years. 
Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers were driving longer distances and possibly longer hours before hurricanes. Another significant finding of this research was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles. This finding suggests that commercial vehicles are perhaps evacuating in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and explore additional factors that may influence freight movements during evacuations. Keywords: evacuation, freight, travel time
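The third-phase comparison described in this abstract can be sketched as follows. The station volumes are invented for illustration, and Welch's unequal-variance variant is one reasonable implementation of a two-sample, two-tailed t-test; the study's exact procedure may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical daily volumes at one continuous-count station for matching
# days in 2017 and 2018, for a single vehicle class (invented numbers).
vol_2017 = np.array([120, 135, 150, 160, 155, 140, 130, 125, 118, 122, 137, 145])
vol_2018 = np.array([180, 190, 205, 210, 198, 185, 175, 170, 165, 172, 188, 195])

# Two-sample, two-tailed t-test; Welch's variant avoids assuming equal variance.
t_stat, p_value = stats.ttest_ind(vol_2017, vol_2018, equal_var=False)

if p_value < 0.05:
    print(f"Volumes differ significantly between years "
          f"(t={t_stat:.2f}, p={p_value:.4g})")
```

Repeating this test per classification and per date is what identifies where and when freight movement diverged between the two hurricane years.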
Procedia PDF Downloads 65
90 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of the EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on the reduction of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a reduction of 40 percent in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone’s daily life (e.g., heating, plug loads, mobility), so a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Second, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, the achievement of the two-degree goal depends heavily on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high global warming potential (GWP) per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. As a methodology, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in the country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. 
Thereby, for each European country, the decarbonization strategy of the electricity mix is critically investigated in order to identify decisions that can lead to negative environmental effects, for instance, on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers like lignite and natural gas, results in an increase in the global warming potential of the electricity grid mix. Only after two years is this increase countervailed by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe. Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
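The effect described above, a temporary rise in grid-mix GWP after a nuclear phase-out followed by a decline as renewables expand, comes down to a share-weighted sum of per-carrier emission factors. The sketch below illustrates that calculation; all emission factors and generation shares are invented assumptions, not the projects' actual data.

```python
# Illustrative per-carrier emission factors in kg CO2-eq/kWh (assumed values).
factors = {"lignite": 1.1, "gas": 0.45, "nuclear": 0.012,
           "wind": 0.011, "pv": 0.045}

# Hypothetical generation shares per year (each year sums to 1): a nuclear
# phase-out first backfilled by lignite/gas, later displaced by renewables.
mix_by_year = {
    2015: {"lignite": 0.35, "gas": 0.20, "nuclear": 0.25, "wind": 0.12, "pv": 0.08},
    2020: {"lignite": 0.40, "gas": 0.25, "nuclear": 0.05, "wind": 0.20, "pv": 0.10},
    2025: {"lignite": 0.20, "gas": 0.20, "nuclear": 0.05, "wind": 0.35, "pv": 0.20},
}

def mix_gwp(shares):
    """Grid-mix GWP: generation shares weighted by emission factors."""
    return sum(share * factors[carrier] for carrier, share in shares.items())

for year, shares in sorted(mix_by_year.items()):
    print(year, round(mix_gwp(shares), 3))
```

With these assumed numbers the mix GWP rises from 2015 to 2020 and falls below both by 2025, mirroring the bump-then-decline pattern the abstract describes.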
Procedia PDF Downloads 185
89 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria
Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde
Abstract:
An investigation was carried out on the underground water system dynamics within the Ilorin metropolis to monitor the subsurface flow and its corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many of its cities, constitutes a threat to underground water bodies. Ilorin city, a transboundary town in southwestern Nigeria, is a ready microcosm of Africa’s unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, as the subsurface provides a natural attenuation process, groundwater samples have nevertheless been noted to possess relatively high concentrations of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate the subsurface dynamics and the corresponding pollutant indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely: principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis. 
A regression model was also developed, aimed at finding functional relationships that can adequately relate or describe the behaviour of water quality and the hypothetical landfill-related factors that may influence it, namely: distance of the water source from dumpsites, static water level of the groundwater, subsurface permeability (inferred from the hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water seems to flow from the northern portion of the Ilorin metropolis down southwards, transporting contaminants. The pollution pattern in the study area generally assumed a bimodal pattern, with the major concentration of the chemical pollutants in the underground watershed and the recharge area. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system in the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at a short distance. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area. Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component
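The descriptive statistics named in this abstract, Pearson correlation and principal component analysis with a scree-style ranking of eigenvalues, can be sketched in Python on synthetic data. The variables, their correlations, and the sample values below are hypothetical stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical water-quality measurements at 44 sampling points (columns:
# conductivity, TDS, suspended solids, calcium hardness, total coliforms).
n = 44
cond = rng.normal(500, 80, n)
tds = 0.65 * cond + rng.normal(0, 20, n)       # TDS assumed to track conductivity
ss = rng.normal(30, 8, n)
hardness = 0.3 * cond + rng.normal(0, 15, n)
coliforms = rng.normal(12, 4, n)
X = np.column_stack([cond, tds, ss, hardness, coliforms])

# Pearson correlation matrix across the five parameters
R = np.corrcoef(X, rowvar=False)

# PCA on standardized data = eigen-decomposition of the correlation matrix;
# the sorted eigenvalues are what a scree plot displays.
eigvals = np.linalg.eigh(R)[0][::-1]           # descending order
explained = eigvals / eigvals.sum()
print("Explained variance ratios:", np.round(explained, 3))
```

Parameters that load heavily on the leading components (here, the conductivity-linked ones) play the role of the "chief contaminant indicators" identified in the study.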
Procedia PDF Downloads 115
88 Cardiolipin-Incorporated Liposomes Carrying Curcumin and Nerve Growth Factor to Rescue Neurons from Apoptosis for Alzheimer’s Disease Treatment
Authors: Yung-Chih Kuo, Che-Yu Lin, Jay-Shake Li, Yung-I Lou
Abstract:
Curcumin (CRM) and nerve growth factor (NGF) were entrapped in liposomes (LIP) with cardiolipin (CL) to downregulate the phosphorylation of mitogen-activated protein kinases for Alzheimer’s disease (AD) management. AD is a neurodegenerative disorder with a gradual loss of memory, yielding irreversible dementia. CL-conjugated LIP loaded with CRM (CRM-CL/LIP) and that with NGF (NGF-CL/LIP) were applied to AD models of SK-N-MC cells and Wistar rats with an insult of β-amyloid peptide (Aβ). Lipids comprising 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (Avanti Polar Lipids, Alabaster, AL), 1',3'-bis[1,2-dimyristoyl-sn-glycero-3-phospho]-sn-glycerol (CL; Avanti Polar Lipids), 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-2000] (Avanti Polar Lipids), 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[carboxy(polyethylene glycol)-2000] (Avanti Polar Lipids) and CRM (Sigma–Aldrich, St. Louis, MO) were dissolved in chloroform (J. T. Baker, Phillipsburg, NJ) and condensed using a rotary evaporator (Panchum, Kaohsiung, Taiwan). Human β-NGF (Alomone Lab, Jerusalem, Israel) was added to the aqueous phase. Wheat germ agglutinin (WGA; Medicago AB, Uppsala, Sweden) was grafted onto LIP loaded with CRM (WGA-CRM-LIP) and onto CL-conjugated LIP loaded with CRM (WGA-CRM-CL/LIP) using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (Sigma–Aldrich) and N-hydroxysuccinimide (Alfa Aesar, Ward Hill, MA). The protein samples of SK-N-MC cells (American Type Culture Collection, Rockville, MD) were used for sodium dodecyl sulfate (Sigma–Aldrich) polyacrylamide gel (Sigma–Aldrich) electrophoresis. 
In the animal study, the LIP formulations were administered by intravenous injection via a tail vein of male Wistar rats (250–280 g, 8 weeks, BioLasco, Taipei, Taiwan), which were housed in the Animal Laboratory of National Chung Cheng University in accordance with the institutional guidelines and the guidelines of the Animal Protection Committee under the Council of Agriculture of the Republic of China. We found that CRM-CL/LIP could inhibit the expression of phosphorylated p38 (p-p38), p-Jun N-terminal kinase (p-JNK), and p-tau protein at serine 202 (p-Ser202) to retard neuronal apoptosis. Free CRM and CRM released from CRM-LIP and CRM-CL/LIP could not effectively inhibit the expression of p-p38 and p-JNK in the cytoplasm in a straightforward manner. In addition, NGF-CL/LIP enhanced the quantities of p-neurotrophic tyrosine kinase receptor type 1 (p-TrkA) and p-extracellular-signal-regulated kinase 5 (p-ERK5), preventing the Aβ-induced degeneration of neurons. The membrane fusion of NGF-LIP activated the ERK5 pathway, and the targeting capacity of NGF-CL/LIP enhanced the possibility of released NGF affecting the TrkA level. Moreover, WGA-CRM-LIP improved the permeation of CRM across the blood–brain barrier (BBB), significantly reduced the Aβ plaque deposition and malondialdehyde level, and increased the percentage of normal neurons and cholinergic function in the hippocampus of AD rats. This was mainly because the encapsulated CRM was protected by LIP against rapid degradation in the blood. Furthermore, WGA on LIP could target N-acetylglucosamine on endothelia and increased the quantity of CRM transported across the BBB. In addition, WGA-CRM-CL/LIP was effective in suppressing the synthesis of acetylcholinesterase and reducing the decomposition of acetylcholine for better neurotransmission. 
Based on the in vitro and in vivo evidence, WGA-CRM-CL/LIP can rescue neurons from apoptosis in the brain and can be a promising drug delivery system for clinical AD therapy. Keywords: Alzheimer’s disease, β-amyloid, liposome, mitogen-activated protein kinase
Procedia PDF Downloads 329
87 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence
Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis
Abstract:
Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs), oncogenes, and immune surveillance processes, and by triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation–adenoma–colorectal cancer (CRC) development. Methods: Four pillars were investigated: (i) endoscopic and conventional histological examinations of patients with CRC and colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp; expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs); and expression of the oncogene Ki67 and the anti-apoptotic Bcl-2 protein; (iii) expression of CD45, an indicator of local immune surveillance (assessing mainly T and B lymphocytes locally); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA, and 10 controls, a significantly higher presence of Hp-I was found in the CRA (68%) and CRC (84%) groups compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of patients with CRA associated with moderate/severe dysplasia (88%) and CRC patients with a moderate/severe degree of malignancy (91%). Comparable results were also obtained for the Ki67, Bcl-2, and CD45 immunohistochemical expressions. 
Concluding Remarks: Hp-I seems to be involved in the sequence CRA – dysplasia – CRC, similarly to upper GI tract oncogenesis, by several pathways. Beyond Hp-I-associated insulin resistance, the major underlying mechanism responsible for the metabolic syndrome (MetS) that increases the risk of colorectal neoplasms, as implied by other Hp-I-related MetS pathologies such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to perpetual inflammatory upper GI tract and colon mucosal damage, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale studies with a pathophysiological perspective are necessary to demonstrate this relationship in depth. Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis
Procedia PDF Downloads 145
86 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data across experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone else's opinion on less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') together with the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggests that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. 
This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data. This means that exposure to 'agree' or 'disagree' influenced participants towards, respectively, higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree' even though they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. It also allows us to build models which are directly grounded in experimental results. Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
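The encoding described in this abstract, multiplying the natural-language response by the reported certainty, can be sketched in a few lines of Python. The function name and error handling are ours, not the authors'.

```python
def continuous_opinion(response: str, certainty: int) -> int:
    """Encode an 'agree'/'disagree' response with certainty 1-10 into the
    single 'continuous opinion' on [-10, 10] used by standard models."""
    sign = {"agree": 1, "disagree": -1}[response]
    if not 1 <= certainty <= 10:
        raise ValueError("certainty must be between 1 and 10")
    return sign * certainty

print(continuous_opinion("agree", 7))     # 7
print(continuous_opinion("disagree", 3))  # -3
```

Under this encoding, the observed behaviour of flipping response while keeping similar certainty corresponds to a jump from, say, +7 to roughly -7, rather than the move toward an average point that typical micro-dynamic rules prescribe.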
Procedia PDF Downloads 159
85 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data securely, as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure. Keywords: arsenic, fluoride, groundwater contamination, logistic regression
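The modeling workflow described here, logistic regression of exceedance labels on predictor variables to obtain a probability of exceeding the WHO limit, can be sketched as follows. The predictors, the assumed true relationship, and all data below are synthetic, used only to illustrate the approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: for each well, two standardized predictor
# variables (e.g., a soil parameter and a climate parameter) and a binary
# label for whether measured arsenic exceeded the WHO limit of 10 ug/L.
n = 500
soil = rng.normal(0, 1, n)
climate = rng.normal(0, 1, n)
logit = -1.0 + 1.5 * soil + 0.8 * climate        # assumed true relationship
exceeds = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([soil, climate])
model = LogisticRegression().fit(X, exceeds)

# Probability that a new location exceeds 10 ug/L, given its predictors
p = model.predict_proba([[1.2, 0.3]])[0, 1]
print(f"P(arsenic > 10 ug/L) = {p:.2f}")
```

Evaluating the fitted model on a grid of predictor values is what turns such a regression into the hazard maps the platform provides.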
Procedia PDF Downloads 346