Search results for: Michael Luc Andre
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 761

191 Mitigating Self-Regulation Issues in the Online Instruction of Math

Authors: Robert Vanderburg, Michael Cowling, Nicholas Gibson

Abstract:

Mathematics is one of the core subjects taught in the Australian K-12 education system and is considered an important foundation for future studies in areas such as engineering and technology. In addition, Australia has been a world leader in distance education due to the vastness of its geographic landscape. Despite this, research is still needed on distance math instruction. Even though curriculum delivery has given way to online study, with a resultant push for computer-based (PC, tablet, smartphone) math instruction, much instruction still involves practice problems similar to those in the original curriculum packs, without the ability for students to self-regulate their learning using the full interactive capabilities of these devices. Given this need, this paper addresses issues students experience during online instruction. The study involved 32 students struggling with mathematics who were enrolled in a math tutorial conducted in an online setting. A case study design was used to understand some of the blockades hindering the students’ success. Data were collected by tracking students’ practice and quizzes, tracking engagement with the site, recording one-on-one tutorials, and interviewing the students. Results revealed that when students faced cognitively straining tasks in an online instructional setting, the first thing to dissipate was their ability to self-regulate. The results also revealed that instructors could ameliorate the situation and provided useful data on strategies for designing future online tasks. Specifically, instructors could utilize cognitive dissonance strategies to reduce the cognitive drain of online tasks. They could segment the instruction process to reduce the cognitive demands of the tasks and provide in-depth self-regulatory training, freeing mental capacity for the mathematics content. Finally, instructors could make specific scheduling and assignment structure changes to reduce the number of student-centered self-regulatory tasks in the class. These findings will be discussed in more detail and summarized in a framework that can be used for future work.

Keywords: digital education, distance education, mathematics education, self-regulation

Procedia PDF Downloads 111
190 Internet Health: A Cross-Sectional Survey Exploring Identified Risks and Online Safety Measures in Parents and Children with Neurodevelopmental Disorders

Authors: Abdirahim Mohamed, Sarita Rana Chhetri, Michael Sleath, Nadia Saleem

Abstract:

Rationale: Internet usage has become thoroughly integrated into our daily lives, and internet usage within the neurodevelopmental disorder population is also on the increase. Nevertheless, there is very little empirical research on how this population protects itself online, or on how their parents can keep them safe online. This topic was an ever-growing concern to the parents within our services and in many cases added to parents’ stress and strained their mental health. This prompted our team to conduct research exploring the perceived online risks within this population and how they keep themselves safe. In conjunction, we also explored how parents and caregivers monitor and safeguard their young people against potential threats online. Our hypothesis was that the perceived risks would heavily outnumber the safeguarding measures implemented by this population. Method: Within the Coventry and Warwickshire NHS Partnership Trust Child and Adolescent Mental Health Service (CAMHS), we distributed qualitative questionnaires to all the clinical bases (N=80). Questions explored topics such as daily internet usage, safeguarding measures, and perceived threats. The researchers asked all CAMHS clinicians to identify participants. Participants in this study were accessing CAMHS for neurodevelopmental-specific interventions. Results: The data were analysed using both Excel and SPSS. Within SPSS, a MANOVA was conducted and found a significant difference between safeguarding measures and perceived online risks within responses (p ≤ 0.05). This supports our hypothesis that participants in this population are well versed in the safeguarding issues of the internet but struggle to implement appropriate preventative measures. The data were also screened using Excel, which showed that all parents and carers stated they 'monitored their child’s internet use'. Conclusion: The data suggest that parents/carers may require more specific intervention to equip them with preventative measures, given the clear discrepancy between perceived risks and safeguarding measures. More research may also need to be conducted in this area to determine an appropriate methodology to explore the topic further.
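
A minimal sketch of the kind of multivariate analysis described above (the original was run in SPSS): a MANOVA testing whether two outcome measures differ by group, here on entirely hypothetical data with assumed column names.

```python
# Illustrative only: hypothetical Likert-style composite scores; the real study
# analysed questionnaire responses in SPSS.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group": ["parent"] * 4 + ["young_person"] * 4,
    "perceived_risk": [4.1, 3.8, 4.5, 4.0, 3.2, 2.9, 3.5, 3.1],
    "safeguarding":   [2.0, 2.4, 1.8, 2.2, 1.5, 1.9, 1.7, 1.6],
})

# Multivariate test of whether the two outcome measures differ between groups.
manova = MANOVA.from_formula("perceived_risk + safeguarding ~ group", data=df)
print(manova.mv_test())
```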

Keywords: internet, health, how safe are we, internet health check

Procedia PDF Downloads 234
189 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well-established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to understand one key question – where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it. In other words, will people pay for it? Looking at the Kano Model makes it clear that it depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, offering customization is likely only an added bonus, and so the product development team must figure out if the customers’ perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a ‘serious game,’ whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group chooses to buy customization over other features, then the product development team should implement it into their design. If not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated into an appropriate production method: for example, the most expensive type of customization would likely be free-form design and could be achieved through digital fabrication, while a lower level could be achieved through short batch production. Twenty-five teams of final-year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and teams often decided to play a second game in which they offered customers the option to buy the various customization ideas that had been discussed during the first game.

Keywords: Kano model, mass customization, new product development, serious game

Procedia PDF Downloads 110
188 Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups

Authors: Sarah Mueller-Saegebrecht, André Brendler

Abstract:

Business model innovation (BMI), the intentional change of an existing business model (BM) or the design of a new BM, is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources; yet first studies also show that they react faster to the pandemic than established firms. BMI represents one strategy for successfully handling such threatening dynamic changes. Entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers can be expected during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The research gap addressed by this study is twofold. First, foundations exist on how different crises impact BM change intention, yet their analysis lacks the inclusion of barriers. In particular, entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential for predicting managerial reactions; moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research has been based largely on qualitative methodologies, so quantitative work is needed to specify and confirm these qualitative findings. By focusing on the crisis context, this study contributes to the BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis? Two distinct research streams in the economic literature explain how individuals react when perceiving a threat: "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, whereas the opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior when facing a potential loss. This study quantitatively tests which theory better predicts managers' BM reaction to a perceived crisis. Out of three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-ups' strategic and operational functioning. Afterwards, each interviewee described crises that had already affected their BM, explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived them to be, assessed on a 5-point Likert scale. In contrast to current research, the results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, and BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.
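
A simplified regression-based sketch of the mediation idea tested in the structural equation model above (implementation barriers mediating the link between the intention to change a BM and concrete BMI implementation); the variable names and simulated Likert-style data are assumptions, not the study's measures.

```python
# Illustrative only: simulated stand-ins for the survey constructs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60  # one row per identified business model innovation
intention = rng.normal(3.5, 0.8, n)                       # intention to change the BM
barriers = 0.5 * intention + rng.normal(0, 0.5, n)        # perceived implementation barriers
implementation = 0.6 * intention - 0.4 * barriers + rng.normal(0, 0.5, n)
df = pd.DataFrame({"intention": intention, "barriers": barriers,
                   "implementation": implementation})

total = smf.ols("implementation ~ intention", df).fit()              # total effect
a_path = smf.ols("barriers ~ intention", df).fit()                   # intention -> mediator
direct = smf.ols("implementation ~ intention + barriers", df).fit()  # direct effect

indirect = a_path.params["intention"] * direct.params["barriers"]
print(f"total={total.params['intention']:.2f}, "
      f"direct={direct.params['intention']:.2f}, indirect={indirect:.2f}")
# A non-zero indirect effect is consistent with implementation barriers carrying
# part of the intention-implementation link; a full SEM estimates these paths jointly.
```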

Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory

Procedia PDF Downloads 70
187 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra

Authors: Michael Lousis

Abstract:

Mathematical competence is the great aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and the ability to implement in practice the mathematical subject matter included in the curriculum, and it is displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of Arithmetic and Algebra that stems from the explanation of learners’ errors in these domains. The learners whose errors are explained were Greek and English participants of a large, international, longitudinal, comparative research programme entitled the Kassel Project. The participants’ errors emerged from their work on the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants’ work were captured, not the course of thinking that led to these outcomes. The intention was for the tests to provide directly comparable results while avoiding any probable bias. Such bias could stem from involving many markers from different countries and cultures, with many different belief systems concerning the assessment of learners’ course of thinking. In this way, the validity of the research was protected. This necessitated specific research methods and theoretical perspectives in order for the participants’ erroneous ways of thinking to be disclosed. These were Methodological Pragmatism, Symbolic Interactionism, Philosophy of Mind and the ideas of Computationalism, which were used for deciding and establishing the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of the learners’ mathematical competence, which is the thesis of the present study. Thus, learners’ mathematical competence depends upon three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition then determined the development of appropriate teaching practices and interventions conducive to the achievement of mathematical competence.

Keywords: representations, meaning, appropriate developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence

Procedia PDF Downloads 244
186 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can be extended to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
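
As a rough illustration of the imaginary-time relaxation with Schmidt (Gram-Schmidt) orthogonalization mentioned above, the toy sketch below relaxes a one-dimensional harmonic-oscillator problem on a uniform grid; it is not the four-electron lattice code, and the grid, potential, and step sizes are arbitrary choices.

```python
# Toy 1D example: relax to the ground state in imaginary time, then to the first
# excited state by projecting out the ground state at every step.
import numpy as np

x = np.linspace(-8, 8, 161)
dx = x[1] - x[0]
V = 0.5 * x**2            # toy harmonic potential (atomic units)
dtau = 0.002              # imaginary-time step

def apply_H(psi):
    """H psi = -0.5 * d2psi/dx2 + V * psi via a simple finite-difference Laplacian."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

def relax(psi, orthogonal_to=(), steps=20000):
    """Explicit-Euler propagation in imaginary time with Gram-Schmidt projection."""
    for _ in range(steps):
        psi = psi - dtau * apply_H(psi)
        for phi in orthogonal_to:                    # project out lower states
            psi = psi - np.sum(phi * psi) * dx * phi
        psi = psi / np.sqrt(np.sum(psi**2) * dx)     # renormalize
    return psi

ground = relax(np.exp(-x**2))
excited = relax(x * np.exp(-x**2), orthogonal_to=(ground,))

E0 = np.sum(ground * apply_H(ground)) * dx           # ~0.5 for this potential
E1 = np.sum(excited * apply_H(excited)) * dx         # ~1.5
print(E0, E1)
```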

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 131
185 Improving the Weekend Handover in General Surgery: A Quality Improvement Project

Authors: Michael Ward, Eliana Kalakouti, Andrew Alabi

Abstract:

Aim: The handover process is recognized as a vulnerable step in the patient care pathway where errors are likely to occur. As such, it is a major preventable cause of patient harm arising from the human factors of poor communication and systematic error. The aim of this study was to audit the general surgery department’s weekend handover process against the criteria for safe handover recommended by the Royal College of Surgeons (RCS). Method: A retrospective audit of the General Surgery department’s Friday patient lists and the patient medical notes used for weekend handover in a London-based District General Hospital (DGH). Medical notes were analyzed against the RCS's suggested criteria for handover. A standardized paper weekend handover proforma was then developed in accordance with the guidelines and circulated in the department, and a post-intervention audit was conducted using the same methods (cycle 1). For cycle 2, we introduced an electronic weekend handover tool alongside Electronic Patient Records (EPR). After a one-month period, a second post-intervention audit was conducted. Results: Following cycle 1, the paper weekend handover proforma was used in only 23% of patient notes. However, when it was used, 100% of notes documented a plan for the weekend, diagnosis, and location, but only 40% documented potential discharge status and 40% ceiling-of-care status. Qualitative feedback was that it was time-consuming to fill out. Better results were achieved following cycle 2, with 100% of patient notes containing the electronic proforma. Results improved further, with every patient having a documented ceiling of care, discharge status, and location. Only 55% of patients had a documented past surgical history; however, this was still an increase compared with the paper proforma (45%). When comparing the electronic and paper proformas, there was an increase in documentation in every domain of the handover outlined by the RCS, with an average relative increase of 1.72 times (p<0.05). Qualitative feedback was that the autofill function made it easy to use and simple to view. Conclusion: These results demonstrate that the implementation of an electronic autofill handover proforma significantly improved handover compliance with RCS guidelines, thereby improving the transmission of information from weekday to weekend teams.

Keywords: surgery, handover, proforma, electronic handover, weekend, general surgery

Procedia PDF Downloads 131
184 Environmental Conditions Simulation Device for Evaluating Fungal Growth on Wooden Surfaces

Authors: Riccardo Cacciotti, Jiri Frankl, Benjamin Wolf, Michael Machacek

Abstract:

Moisture fluctuations govern the occurrence of fungi-related problems in buildings, which may pose significant health risks for users and even lead to structural failures. Several numerical engineering models attempt to capture the complexity of mold growth on building materials. From real-life observations, in cases with suppressed daily variations of boundary conditions, e.g., in crawlspaces, mold growth model predictions correspond well with the observed mold growth. On the other hand, in cases with substantial diurnal variations of boundary conditions, e.g., in the ventilated cavity of a cold flat roof, mold growth predicted by the models is significantly overestimated. This study, funded by the Grant Agency of the Czech Republic (GAČR 20-12941S), aims at gaining a better understanding of mold growth behavior on solid wood under varying boundary conditions. In particular, the experimental investigation focuses on the response of mold to changing conditions in the boundary layer and its influence on heat and moisture transfer across the surface. The main result to date is the design and construction, at the facilities of ITAM (Prague, Czech Republic), of an innovative device allowing the simulation of changing environmental conditions in buildings. It consists of a thermally insulated closed circuit of square cross section, with overall dimensions of roughly 200 × 180 cm and a cross section of roughly 30 × 30 cm, equipped with an electric fan to control the air flow inside the tunnel and a heat and humidity exchange unit to control the internal relative humidity and variations in temperature. Several measuring points, including an anemometer, temperature and humidity sensors, and a load cell in the test section for recording mass changes, are provided to monitor the variations of these parameters during the experiments. The research is ongoing, and the final results of the experimental investigation are expected at the end of 2022.

Keywords: moisture, mold growth, testing, wood

Procedia PDF Downloads 106
183 Navigating Life Transitions for Young People with Vision Impairment: A Community-Based Participatory Research Approach to Accessibility and Diversity

Authors: Aikaterini Tavoulari, Michael Proulx, Karin Petrini

Abstract:

Objective: This study aims to explore the unique challenges faced by young individuals with vision impairment (VI) during key life transitions, utilizing a community-based participatory research (CBPR) approach to identify limitations and positive aspects of existing support systems, with a focus on accessibility and diversity. Design: The study employs a qualitative CBPR design, engaging young participants with VI through online and in-person working groups over six months, prioritizing their active involvement and diverse perspectives. Methods: Twenty-one young individuals with VI from across the UK, with different VI conditions, were recruited to the study via a climbing and virtual reality event and stakeholders’ support. Data collection methods included open discussions, forum exchanges, and qualitative questionnaires. The data were analyzed with NVivo using inductive thematic analysis to identify key themes and patterns related to the challenges and experiences of life transitions for this diverse population. Results: The analysis revealed barriers to accessibility, such as assumptions about what a person with VI can do, inaccessible materials, noisy environments, and insufficient training with assistive technologies. Enablers included guidance from diverse professionals and peers, multisensory approaches (beyond tactile), and peer collaborations. The study underscores the need to develop accessible and tailored strategies together with these young people to address the specific needs of this diverse population during critical life transitions (e.g., to independent living, employment, and higher education). Conclusion: Engaging and co-designing effective approaches and tools with young people with VI is key to tackling the specific accessibility barriers they encounter. These approaches should be targeted at different transitional periods of their life journey, promoting diversity and inclusion.

Keywords: vision impairment, life transitions, qualitative research, community-based participatory design, accessibility

Procedia PDF Downloads 19
182 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

Deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoid, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components at a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection-of-the-vertical components provides improved results but is labor-intensive without an appropriate method. The least squares method makes use of redundant observations to model a set of problems that obey certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of static-mode Global Positioning System (GPS) observations and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. Using least squares in a MATLAB programme, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed: the standard errors of the north-south and east-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, adding the derived deflection-of-the-vertical components to the ellipsoidal model will yield high observational accuracy, since an ellipsoidal model alone is not tenable for high-quality work because of its large observational error. It is therefore important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
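
A minimal sketch of the least squares step described above, using NumPy instead of MATLAB; the design matrix and misclosure vector below are placeholders standing in for the combined GPS/precise-levelling observation equations, and the numbers are hypothetical.

```python
# Solve an overdetermined system A x = b for the two deflection components
# (xi: north-south, eta: east-west) and derive standard errors from the residuals.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7],
              [0.5, -0.5]])                          # placeholder observation equations
b = np.array([-0.0285, -0.0002, -0.0200, -0.0140])   # placeholder misclosures (arc seconds)

x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
xi, eta = x

v = b - A @ x                                    # residuals
dof = A.shape[0] - A.shape[1]                    # redundancy
sigma0_sq = (v @ v) / dof                        # a posteriori variance factor
cov_x = sigma0_sq * np.linalg.inv(A.T @ A)       # covariance of the estimates
std_errors = np.sqrt(np.diag(cov_x))

print("xi =", xi, "eta =", eta, "std errors =", std_errors)
```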

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 178
181 Teaching Business Process Management using IBM’s INNOV8 BPM Simulation Game

Authors: Hossam Ali-Hassan, Michael Bliemel

Abstract:

This poster reflects upon our experiences using INNOV8, IBM’s Business Process Management (BPM) simulation game, in online MBA and undergraduate MIS classes over a period of two years. The game is designed to give both business and information technology players a better understanding of how effective BPM impacts an entire business ecosystem. The game includes three different scenarios: Smarter Traffic, in which players evaluate existing traffic patterns and re-route traffic based on incoming metrics; Smarter Customer Service, in which players develop more efficient ways to respond to customers in a call centre environment; and Smarter Supply Chains, in which players balance supply and demand and reduce environmental impact in a traditional supply chain model. We use the game as an experiential learning tool, where students act as managers making real-time changes to business processes to meet changing business demands and environments. The students learn how information technology (IT) and information systems (IS) can be used to intelligently solve different problems, and how computer simulations can be used to test different scenarios or models based on business decisions without having to actually make potentially costly and/or disruptive changes to business processes. Moreover, when students play the three different scenarios, they quickly see how practical process improvements can help meet profitability, customer satisfaction, and environmental goals while addressing real problems faced by municipalities and businesses today. After spending approximately two hours in the game, students reflect on the experience and apply several BPM principles presented in their textbook through a structured set of assignment questions. For each final scenario, students submit a screenshot of their solution followed by one paragraph explaining what criteria they were trying to optimize and why they picked their input variables. In this poster, we outline the course and module learning objectives to place the use of the game in context, illustrate key features of the INNOV8 simulation game, and describe how we used them to reinforce theoretical concepts. The poster also illustrates examples from the simulation, the assignment, and the learning outcomes.

Keywords: experiential learning, business process management, BPM, INNOV8, simulation, game

Procedia PDF Downloads 304
180 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ, and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. The service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, combining data from Copernicus satellites and drones/unmanned aerial vehicles, validated by existing online in situ data. Since WORSICA operates on the European Open Science Cloud (EOSC) computational infrastructure, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. The private sector will also be able to use the service, although some usage costs may apply, depending on the type of computational resources needed by each application/user. The service has three main sub-services: i) coastline detection; ii) inland water detection; and iii) water leak detection in irrigation networks. In the present study, an application of the service to the Óbidos lagoon in Portugal is shown, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service implements several distinct methodologies based on the computation of water indices (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with tidal data obtained from the FES model, the system can estimate a coastline at the corresponding level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas: i) emergency response, by providing fast identification of inundated areas to support rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
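
A minimal sketch of the water-index computation mentioned above (McFeeters NDWI and the modified MNDWI), applied to hypothetical reflectance arrays; in WORSICA the bands would come from Copernicus Sentinel-2 imagery, and the zero threshold is only an assumed starting point.

```python
# Illustrative only: random arrays stand in for green, NIR, and SWIR reflectance
# bands (e.g., Sentinel-2 B03, B08, B11).
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-12)

def mndwi(green, swir):
    """Modified NDWI = (Green - SWIR) / (Green + SWIR)."""
    return (green - swir) / (green + swir + 1e-12)

green = np.random.rand(256, 256)
nir = np.random.rand(256, 256)
swir = np.random.rand(256, 256)

water_mask = ndwi(green, nir) > 0.0   # positive NDWI pixels flagged as water
print("water pixels:", int(water_mask.sum()),
      "| mean MNDWI:", float(mndwi(green, swir).mean()))
```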

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 98
179 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have previously been used to evaluate ventricular and hemorrhagic volumes but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, manual segmentation, automated segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced superior data compared with older methods that relied on formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies). Manual segmentation was the second most common technique (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used techniques. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open-source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method; however, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies, and medical professionals will save time and effort analyzing large sets of data thanks to these advances.
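
As a small illustration of the volumetric step that follows any of the segmentation approaches reviewed above, the sketch below derives a volume from a binary mask stored as a NIfTI file; the file name is hypothetical, and the mask could equally come from FSL, FreeSurfer, or manual contouring.

```python
# Illustrative only: volume = number of mask voxels x voxel volume.
import numpy as np
import nibabel as nib

img = nib.load("posterior_fossa_mask.nii.gz")            # hypothetical NIfTI mask
mask = img.get_fdata() > 0                                # binary segmentation

voxel_volume_mm3 = float(np.prod(img.header.get_zooms()[:3]))  # voxel size in mm
volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0             # 1 mL = 1000 mm^3

print(f"Posterior fossa volume: {volume_ml:.1f} mL")
```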

Keywords: chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 72
178 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 77
177 Study of the Transport of Colloidal ²²⁶Ra in a Mining Context Using a Multi-Disciplinary Approach

Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier

Abstract:

²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and its high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e., about 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) that is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal, or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and the innovative Diffusive Gradients in Thin Films technique to measure the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy, and scanning electron microscopy. All of these data, acquired at a rehabilitated former uranium mine, allowed a geochemical model to be built using the geochemical calculation code PhreeqC to describe, as accurately as possible, the colloidal transport of ²²⁶Ra. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in the water. The mineralogical characterization and associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.

Keywords: colloids, mining context, radium, transport

Procedia PDF Downloads 131
176 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation

Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson

Abstract:

The research hypothesis is that early post-transplant allograft fibrosis will be linked to donor factors and that acute rejection and/or delayed graft function in the recipient will be independent risk factors for the development of fibrosis. The aim is to explore whether acute rejection or delayed graft function has an effect on renal transplant fibrosis within the first year after live donor kidney transplantation performed between 1998 and 2009. Methods: The study was designed around five time points for renal transplant biopsies [0 (pre-transplant), 1 month, 3 months, 6 months and 12 months] for 300 live donor renal transplant patients over a 12-year period between March 1997 and August 2009. Paraffin-fixed slides were collected from Leicester General Hospital and Leicester Royal Infirmary and were routinely sectioned at a thickness of 4 micrometres for standardization. Conclusions: Fibrosis at 1 month after the transplant was found to be significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001). Dialysis after the transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67), whereas fibrosis at 3, 6 or 12 months was not associated with fibrosis at baseline (p=0.70, 0.65 and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14). Rejection and severity of rejection were not found to be associated with fibrosis at 1 month. The amount of fibrosis at 1 month remained significantly associated with graft survival (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors. The amount of fibrosis at 1 month was not significantly associated with rejection (p=0.64) after adjusting for baseline fibrosis (p=0.01), nor with rejection severity (p=0.29) after adjusting for baseline fibrosis (p=0.04). Fibrosis at baseline and HTN in the recipient were found to be predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively). Donor age, relation to the patient, pre-operative creatinine, artery, kidney weight and warm time were not significantly associated with fibrosis at 1 month. In a more complex model, baseline fibrosis, HTN in the recipient and cold time were found to be predictive factors of fibrosis at 1 month (p=0.01, <0.001 and 0.03, respectively). The above analysis was repeated for 3, 6 and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was found to be a predictive factor of fibrosis at 6 months.

Keywords: fibrosis, transplant, renal, rejection

Procedia PDF Downloads 210
175 The Effectiveness of the Recovering from Child Abuse Programme (RCAP) for the Treatment of CPTSD: A Pilot Study

Authors: Siobhan Hegarty, Michael Bloomfield, Kim Entholt, Dorothy Williams, Helen Kennerley

Abstract:

Complex Post-Traumatic Stress Disorder (CPTSD) confers a greater risk of poor outcomes than Post-Traumatic Stress Disorder (PTSD). Despite this, current treatment guidelines for CPTSD aim to reduce only the ‘core’ symptoms of re-experiencing, hyper-vigilance and avoidance, while not addressing the Disturbances of Self Organisation (DSO) symptoms that distinguish this novel diagnosis from PTSD. The Recovering from Child Abuse Programme (RCAP) is a group protocol based on the principles of cognitive behavioural therapy (CBT), and preliminary evidence suggests the programme is effective at reducing DSO symptoms. This pilot study is the first to investigate the potential effectiveness of the RCAP for the specific treatment of CPTSD. The study was conducted as a service evaluation in a secondary-care traumatic stress service. Treatment was delivered once a week, in two-hour sessions, to ten existing female CPTSD patients of the service who had experienced sexual abuse in childhood. The programme was administered by two therapists and two additional facilitators, following the RCAP protocol manual. Symptom severity was measured before the administration of therapy and tracked across a range of measures (International Trauma Questionnaire; Patient Health Questionnaire; Community Assessment of Psychic Experience; Work and Social Adjustment Scale) at five time points over the course of treatment. Qualitative appraisal of the programme was gathered via weekly feedback forms and from audio-taped recordings of verbal feedback given during group sessions. Preliminary results suggest the programme produces a slight reduction in CPTSD and depressive symptom severity, and preliminary qualitative analysis suggests that the RCAP is both helpful and acceptable to group members. Final results and conclusions will follow completed thematic analysis of the results.

Keywords: child sexual abuse, cognitive behavioural therapy, complex post-traumatic stress disorder, recovering from child abuse programme

Procedia PDF Downloads 110
174 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the recent deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. To increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite country-level empirical research that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess diffusion theories on a local scale. Moreover, because scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism’s diffusion on a local scale and provides a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides the probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model’s predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
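
The model above is a Bayesian spatio-temporal point process fitted with INLA on a triangulated sphere; as a heavily simplified stand-in, the sketch below fits a plain binomial (logistic) GLM to simulated cell-level data, with a spatially lagged attack count representing contagious diffusion and cell covariates representing non-contagious factors. All variable names and values are assumptions.

```python
# Illustrative only: simulated grid cells, not the study's data or model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_cells = 500
population = rng.lognormal(10, 1, n_cells)        # hypothetical covariate
gdp = rng.lognormal(8, 1, n_cells)                # hypothetical covariate
neighbour_attacks = rng.poisson(0.3, n_cells)     # attacks in neighbouring cells (spatial lag)

# Simulate whether at least one lethal attack occurs in each cell.
eta = -3 + 0.3 * np.log(population) - 0.2 * np.log(gdp) + 0.8 * neighbour_attacks
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([np.log(population), np.log(gdp), neighbour_attacks]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())                              # a positive lag coefficient mimics contagion
```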

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 321
173 Management of Third Stage Labour in a Rural Ugandan Hospital

Authors: Brid Dinnee, Jessica Taylor, Joseph Hartland, Michael Natarajan

Abstract:

Background: The third stage of labour (TSL) can be complicated by post-partum haemorrhage (PPH), which has a significant impact on maternal mortality and morbidity; in Africa, 33.9% of maternal deaths are attributable to PPH. In order to minimise this figure, current recommendations for the developing world are that all women have active management of the third stage of labour (AMTSL). The aim of this project was to examine TSL practice in a rural Ugandan hospital, highlight any deviation from best practice, and identify barriers to change in resource-limited settings, as part of a 4th year medical student External Student Selected Component field trip. Method: Five key elements from the current World Health Organisation (WHO) guidelines on AMTSL were used to develop an audit tool. All daytime vaginal deliveries over a two-week period in July 2016 were audited. In addition, a retrospective comparison of PPH rates between 2006 (when ubiquitous use of intramuscular oxytocin for management of TSL was introduced) and 2015 was performed. Results: Eight vaginal deliveries were observed, at all of which intramuscular oxytocin was administered and controlled cord traction used. Against WHO recommendations, all umbilical cords were clamped within one minute, and no infants received early skin-to-skin contact. Uterine massage was performed after placental delivery in only one case. A retrospective comparison of the data identified a 40% reduction in the total number of PPHs from November 2006 to November 2015, and maternal deaths per delivery fell from 2% to 0.5%. Discussion: Maternal mortality and PPH are still major issues in developing countries. Maternal mortality due to PPH can be reduced by good practices regarding TSL, but not all of these are used in low-resource settings, and there is a notable difference in outcomes between the developed and developing world. At Kitovu Hospital, there has been a reduction in maternal mortality and in the number of PPHs following the introduction of IM oxytocin administration. In order to further improve these rates, staff education and further government funding are key.

Keywords: post-partum haemorrhage, PPH, third stage labour, Uganda

Procedia PDF Downloads 175
172 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

Media (video, animation, graphics) have long been used by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, an overreliance is a concern. This paper is part of a larger Ph.D. study looking into these new challenges for sports educators: most notably, how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning, which proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the Prior Knowledge principle highlights different learning outcomes for novice and non-novice learners. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current digital coaching media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected as it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis. It is hoped the resulting data will help inform future instructional content design and delivery for sports practitioners and promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 77
171 A Hedonic Valuation Approach to Valuing Combined Sewer Overflow Reductions

Authors: Matt S. Van Deren, Michael Papenfus

Abstract:

Seattle is one of hundreds of cities in the United States that rely on a combined sewer system to collect and convey municipal wastewater. By design, these systems convey all wastewater, including industrial and commercial wastewater, human sewage, and stormwater runoff, through a single network of pipes. Serious problems arise for combined sewer systems during heavy precipitation events, when treatment plants and storage facilities are unable to accommodate the influx of wastewater needing treatment, causing the sewer system to overflow into local waterways through sewer outfalls. Combined Sewer Overflows (CSOs) pose a serious threat to human and environmental health. Principal pollutants found in CSO discharge include microbial pathogens (bacteria, viruses, and parasites), oxygen-depleting substances, suspended solids, chemicals or chemical mixtures, and excess nutrients, primarily nitrogen and phosphorus. While concentrations of these pollutants can vary between overflow events, CSOs have the potential to spread disease and waterborne illnesses, contaminate drinking water supplies, disrupt aquatic life, and affect a waterbody’s designated use. This paper estimates the economic impact of CSOs on residential property values. Using residential property sales data from Seattle, Washington, it employs a hedonic valuation model that controls for housing and neighborhood characteristics, as well as spatial and temporal effects, to predict a consumer’s willingness to pay for improved water quality near their home. Initial results indicate that a 100,000-gallon decrease in the average annual overflow discharged from a sewer outfall within 300 meters of a home is associated with a 0.053% increase in the property’s sale price. For the average home in the sample, the price increase is estimated to be $18,860.23. These findings reveal some of the important economic benefits of improving water quality by reducing the frequency and severity of combined sewer overflows.
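
A minimal sketch of the general shape of such a hedonic regression, fitted to simulated sales records; the variable names, controls, and coefficients are assumptions and not the study's actual specification.

```python
# Illustrative only: log sale price regressed on housing attributes and nearby
# CSO volume (in 100,000-gallon units); simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "sqft": rng.normal(1800, 400, n),
    "bedrooms": rng.integers(1, 6, n),
    "cso_volume": rng.gamma(2.0, 1.5, n),   # average annual overflow within 300 m
})
df["log_price"] = (12.0 + 0.0004 * df["sqft"] + 0.05 * df["bedrooms"]
                   - 0.0005 * df["cso_volume"] + rng.normal(0, 0.1, n))

# With a log-price outcome, the cso_volume coefficient approximates the fractional
# change in sale price per 100,000-gallon change in nearby overflow volume.
model = smf.ols("log_price ~ sqft + bedrooms + cso_volume", data=df).fit()
print(model.params["cso_volume"])
```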

Keywords: benefits, hedonic, Seattle, sewer

Procedia PDF Downloads 147
170 Comparison of Cognitive Load in Virtual Reality and Conventional Simulation-Based Training: A Randomized Controlled Trial

Authors: Michael Wagner, Philipp Steinbauer, Andrea Katharina Lietz, Alexander Hoffelner, Johannes Fessler

Abstract:

Background: Cardiopulmonary resuscitations are stressful situations in which vital decisions must be made within seconds, and a lack of routine due to the infrequency of pediatric emergencies can lead to serious medical and communication errors. Virtual reality (VR) can fundamentally change the way simulation training is conducted in the future and appears to be a useful learning tool for technical and non-technical skills. It is important to investigate whether VR provides a strong sense of presence within simulations. Methods: In this randomized study, we will enroll doctors and medical students from the Medical University of Vienna, who will receive learning material on the resuscitation of a one-year-old child. The study will be conducted in three phases. In the first phase, 20 physicians and 20 medical students from the Medical University of Vienna will be included and will perform simulation-based training with a standardized scenario of a critically ill child with hypovolemic shock. The main goal of this phase is to establish a baseline for the following two phases and to generate comparative values for cognitive load and stress. In phases 2 and 3, the same participants will perform the same scenario in a VR setting. In both settings, at three set points of progression, one of three predefined events is triggered; for each event, three different stress levels (easy, medium, difficult) are defined. Stress and cognitive load will be analyzed using the NASA Task Load Index, eye-tracking parameters, and heart rate, and these values will subsequently be compared between VR training and traditional simulation-based training. Hypothesis: We hypothesize that the VR training and traditional training groups will not differ in physiological response (cognitive load, heart rate, and heart rate variability). We further assume that virtual reality training can be used as cost-efficient additional training. Objectives: The aim of this study is to measure cognitive load and stress levels during real-life simulation training and compare them with VR training, in order to show that VR training evokes the same physiological response and cognitive load as real-life simulation training.

Keywords: virtual reality, cognitive load, simulation, adaptive virtual reality training

Procedia PDF Downloads 86
169 Immobilizing Quorum Sensing Inhibitors on Biomaterial Surfaces

Authors: Aditi Taunk, George Iskander, Kitty Ka Kit Ho, Mark Willcox, Naresh Kumar

Abstract:

Bacterial infections on biomaterial implants and medical devices account for 60-70% of all hospital-acquired infections (HAIs). Treatment or removal of these infected devices results in high patient mortality and morbidity along with increased hospital expenses. In addition, the lack of effective prevention strategies and the rapid development of antibacterial resistance have made device-related infections extremely difficult to treat. Therefore, in this project we have developed biomaterial surfaces using antibacterial compounds that inhibit biofilm formation by interfering with the bacterial communication mechanism known as quorum sensing (QS). This study focuses on the covalent attachment of potent QS-inhibiting compounds, halogenated furanones (FUs) and dihydropyrrol-2-ones (DHPs), onto glass surfaces. The FUs were attached by photoactivating azide groups on the surface, and the acid-functionalized DHPs were immobilized on amine-functionalized surfaces via EDC/NHS coupling. The modified surfaces were tested in vitro against pathogenic organisms such as Staphylococcus aureus and Pseudomonas aeruginosa using confocal laser scanning microscopy (CLSM). Successful attachment of the compounds to the substrates was confirmed by X-ray photoelectron spectroscopy (XPS) and contact angle measurements. The antibacterial efficacy was assessed, and a significant reduction in bacterial adhesion and biofilm formation was observed on the FU- and DHP-coated surfaces. The activity of the coating depended on the type of substituent present on the phenyl group of the DHP compound. For example, the ortho-fluorophenyl DHP (DHP-2) exhibited a 79% reduction in bacterial adhesion against S. aureus, and the para-fluorophenyl DHP (DHP-3) exhibited a 70% reduction against P. aeruginosa. The results were comparable to those of DHP-coated surfaces prepared in an earlier study via the Michael addition reaction. FUs and DHPs retained their in vitro antibacterial efficacy after covalent attachment via azide chemistry. This approach is a promising strategy for developing efficient antibacterial biomaterials that reduce device-related infections.

Keywords: antibacterial biomaterials, biomedical device-related infections, quorum sensing, surface functionalization

Procedia PDF Downloads 242
168 Elevated Reductive Defluorination of Branched Per- and Polyfluoroalkyl Substances by Soluble Metal-Porphyrins and New Mechanistic Insights on the Degradation

Authors: Jun Sun, Tsz Tin Yu, Maryam Mirabediny, Matthew Lee, Adele Jones, Denis M. O’Carroll, Michael J. Manefield, Björn Åkermark, Biswanath Das, Naresh Kumar

Abstract:

Reductive defluorination has emerged as a sustainable approach to removing per- and polyfluoroalkyl substances (PFAS), also known as 'forever' organic contaminants, from water. For the last few decades, nano zero-valent metals (nZVMs) have been intensively applied in the reductive remediation of groundwater contaminated with chlorinated organic compounds because of their low redox potential, easy application, and low production cost. However, there is inadequate information on the effective reductive defluorination of linear or branched PFAS using nZVMs as reductants, owing to the lack of suitable catalysts. Co(II)-5,10,15,20-tetraphenyl-21H,23H-porphyrin (CoTPP) has recently been reported to effectively catalyze the reductive defluorination of branched (br-) perfluorooctane sulfonate (PFOS) using Ti(III) citrate as the reductant. However, the low water solubility of CoTPP limited its applicability. Here, we explored a series of structurally related soluble cobalt porphyrin catalysts based on our previously reported best-performing CoTPP. All soluble porphyrins, [[meso-tetra(4-carboxyphenyl)porphyrinato]cobalt(III)]Cl·7H₂O (CoTCPP), [[meso-tetra(4-sulfonatophenyl)porphyrinato]cobalt(III)]·9H₂O (CoTPPS), and [[meso-tetra(4-N-methylpyridyl)porphyrinato]cobalt(II)](I)₄·4H₂O (CoTMpyP), displayed better defluorination efficiencies than CoTPP. In particular, CoTMpyP showed the best defluorination efficiency for br-PFOS (94%), branched perfluorooctanoic acid (PFOA) (89%), and 3,7-perfluorodecanoic acid (3,7-PFDA) (60%) after 1 day at 70 °C. The CoTMpyP-nZn⁰ system showed an 88-164 times higher defluorination rate than the VB12-nZn⁰ system for all investigated br-PFASs. CoTMpyP-nZn⁰ also performed effectively at room temperature, demonstrating its potential for in situ reductive treatment systems. Based on the analysis of the intermediate products, the calculated bond dissociation energies (BDEs), and the likely initial interaction between CoTMpyP and PFAS, degradation pathways for 3,7-PFDA and 6-PFOS are proposed.
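The reported rate comparison between catalyst systems can be illustrated with a simple pseudo-first-order fit, sketched below with entirely hypothetical time-course data; the study's actual kinetics are derived from measured fluoride release and parent-compound analysis.

```python
# Illustrative pseudo-first-order rate estimate; all concentrations are hypothetical.
import numpy as np

def pseudo_first_order_k(t_days, conc):
    """Slope of -ln(C/C0) versus time gives the pseudo-first-order rate constant."""
    k, _intercept = np.polyfit(t_days, -np.log(conc / conc[0]), 1)
    return k

t = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
c_fast = np.array([1.00, 0.55, 0.30, 0.09, 0.01])      # e.g. a CoTMpyP + nZn0-like system
c_slow = np.array([1.00, 0.995, 0.990, 0.980, 0.960])  # e.g. a VB12 + nZn0-like system

k_fast, k_slow = pseudo_first_order_k(t, c_fast), pseudo_first_order_k(t, c_slow)
print(f"k_fast = {k_fast:.2f} /day, k_slow = {k_slow:.4f} /day, "
      f"rate ratio ~ {k_fast / k_slow:.0f}x")
```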

Keywords: cationic, soluble porphyrin, cobalt, vitamin B12, PFAS, reductive defluorination

Procedia PDF Downloads 54
167 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model’s properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another or to change a model’s output by up to 100% of its initial value. Opinion dynamics models are meant to represent the evolution of real-world opinions parsimoniously. It is therefore fundamental to know how a real-world opinion (e.g., supporting a candidate) is turned into a number. Specifically, we want to know whether the model’s dynamics are preserved when a different opinion-to-number transformation is chosen. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted into one another, in the same way we convert meters to feet. In our work, we therefore analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both through mathematical modeling and by validating it with agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics even at a qualitative level, meaning that a researcher may reach a totally different conclusion from the same dataset simply by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by up to 100%. Using two models from the standard literature, we show that a scale transformation can turn one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
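The sketch below illustrates the idea with a bounded-confidence (Deffuant-style) model and a square-root rescaling as one example of a monotone scale transformation; neither choice is claimed to be the exact model or transformation used in the study. Running the same update rule on the raw and rescaled opinions can yield different cluster structures.

```python
import numpy as np

rng = np.random.default_rng(0)

def deffuant(opinions, epsilon=0.2, mu=0.5, steps=50_000):
    """Pairwise bounded-confidence updates on opinions in [0, 1]."""
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < epsilon:      # agents interact only if close enough
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters as groups separated by gaps larger than tol."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

raw = rng.uniform(0.0, 1.0, size=200)   # opinions expressed on one scale
rescaled = np.sqrt(raw)                 # the same opinions after a monotone rescaling

print("clusters, original scale:", n_clusters(deffuant(raw)))
print("clusters, rescaled scale:", n_clusters(deffuant(rescaled)))
```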

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 133
166 Identification of Nutrient Sensitive Signaling Pathways via Analysis of O-GlcNAcylation

Authors: Michael P. Mannino, Gerald W. Hart

Abstract:

The majority of glucose metabolism proceeds through glycolysis or the pentose phosphate pathway; however, about 5% is shunted through the hexosamine biosynthetic pathway, producing uridine diphosphate N-acetylglucosamine (UDP-GlcNAc). This precursor can then be incorporated into complex oligosaccharides decorating the cell surface or remain as an intracellular post-translational modification (PTM) of serine/threonine residues (O-GlcNAcylation, OGN), which has been identified on over 4,000 cytosolic or nuclear proteins. Intracellular OGN has major implications for cellular processes, typically by modulating protein localization, protein-protein interactions, protein degradation, and gene expression. Additionally, OGN is known to have extensive cross-talk with phosphorylation, be it in a competitive or cooperative manner. Unlike other PTMs, there are only two cycling enzymes capable of adding or removing the GlcNAc moiety: O-linked N-acetylglucosamine transferase (OGT) and O-linked N-acetylglucosaminidase (OGA), respectively. The activity of OGT has been shown to be sensitive to cellular UDP-GlcNAc levels, which can even change its substrate affinity. Because of this, and because the concentration of UDP-GlcNAc reflects glucose, amino acid, fatty acid, and nucleotide metabolism, O-GlcNAc is often referred to as a nutrient-sensing rheostat. Indeed, OGN is known to regulate several signaling pathways in response to nutrient levels, such as insulin signaling. Dysregulation of OGN is associated with several disease states, such as cancer, diabetes, and neurodegeneration. Improvements in glycomics over the past 10-15 years have significantly increased the known OGT substrate pool, suggesting O-GlcNAc’s involvement in a wide variety of signaling pathways. However, O-GlcNAc’s role at the receptor level has only been identified on a case-by-case basis in known pathways. Examining the OGN of the plasma membrane (PM) may therefore better focus our understanding of O-GlcNAc-affected signaling pathways. In the current study, PM fractions were isolated from several cell lines via ultracentrifugation, followed by purification and MS/MS analysis. This process was repeated with or without OGT/OGA inhibitors, or with increased or decreased glucose levels in the media, to ascertain the importance of OGN. Various pathways are then followed up in more detailed studies employing methods to localize OGN specifically at the PM.

Keywords: GlcNAc, nutrient sensitive, post-translational-modification, receptor

Procedia PDF Downloads 81
165 Teaching about Justice with Justice: How Using Experiential, Learner-Centered Literacy Methodology Enhances Learning of Justice-Related Competencies for Young Children

Authors: Bruna Azzari Puga, Richard Roe, Andre Pagani de Souza

Abstract:

This abstract outlines a proposed study to examine how, and to what extent, interactive, experiential, learner-centered methodology develops basic civic and democratic competencies among young children. It stems from the Literacy and Law course taught at Georgetown University Law Center in Washington, DC, since 1998. Law students, trained in best literacy practices and legal cases affecting literacy development, read “law related” children’s books and engage in interactive and extension activities with emerging readers. The law students write a monthly journal describing their experiences and a final paper: either a conventional paper or a children’s book illuminating some aspect of literacy and law. This proposal is based on the recent adaptation of Literacy and Law to Brazil at Mackenzie Presbyterian University in São Paulo, in three forms: first, a course similar to the US model, often conducted jointly online with Brazilian and US law students; second, a similar course that combines readings of children’s literature with activity-based learning, taught by law students from a satellite Mackenzie campus to young children from a vulnerable community near the city; and third, a course taught by law students at the main Mackenzie campus for 4th grade students at the Mackenzie elementary school that is wholly activity- and discourse-based. The workings and outcomes of these courses are well documented by photographs, reports, lesson plans, and law student journals. The authors, faculty who teach the above courses at Mackenzie and Georgetown, observe that literacy, broadly defined as cognitive and expressive development through reading and discourse-based activities, can be influential in developing democratic civic skills, identifiable by explicit civic competencies. For example, children experience justice in the classroom through cooperation, creativity, diversity, fairness, systemic thinking, and appreciation for rules and their purposes. Moreover, the learning of civic skills, as well as literacy skills, is enhanced through interactive, learner-centered practices in which the learners experience literacy and civic development. This study will develop rubrics for individual and classroom teaching and supervision by examining 1) the children’s books and journals of participating law students, 2) the collection of photos and videos of classroom activities, and 3) faculty and supervisor observations and reports. These rubrics, together with the lesson plans and activities employed to advance higher levels of performance, will be useful in training and supervision and in further replication and promotion of this form of teaching and learning. Examples of outcomes include helping, cooperating, and participating; appreciation of viewpoint diversity; knowledge and use of democratic processes, including due process, advocacy, individual and shared decision making, consensus building, and voting; and establishing and valuing appropriate rules and a reasoned approach to conflict resolution. In conclusion, further development and replication of the learner-centered literacy and law practices outlined here can lead to improved qualities of democratic teaching and learning that support mutual respect, positivity, deep learning, and the common good, which are foundational qualities of a sustainable world.

Keywords: democracy, law, learner-centered, literacy

Procedia PDF Downloads 91
164 Assessing the Impact of Antiretroviral Mediated Drug-Drug Interactions on Piperaquine Antimalarial Treatment in Pregnant Women Using Physiologically Based Pharmacokinetic Modelling

Authors: Olusola Omolola Olafuyi, Michael Coleman, Raj Kumar Singh Badhan

Abstract:

Introduction: Malaria in pregnancy has morbidity and mortality implications for both mother and unborn child. Piperaquine (PQ)-based antimalarial treatment is emerging as an antimalarial of choice for pregnant women in the face of resistance to the currently recommended antimalarial treatments in pregnancy. However, the physiological and biochemical changes of pregnancy may affect the pharmacokinetics of antimalarial drugs in this population. In malaria-endemic regions, other infectious diseases such as HIV/AIDS are also prevalent. Pregnant women co-infected with malaria and HIV/AIDS are at even greater risk of death, not only because of complications of the diseases but also because of drug-drug interactions (DDIs) between antimalarials (AMTs) and antiretrovirals (ARVs). In this study, physiologically based pharmacokinetic (PBPK) modelling was used to investigate the effect of physiological and biochemical changes on the impact of ARV-mediated DDIs in pregnant women in three countries. Method: A PBPK model for PQ was developed in SimCYP® using published physicochemical and pharmacokinetic data from the literature and was validated against clinical data in three customized population groups representing Thailand, Sudan, and Papua New Guinea. The PQ model was also validated in the presence of an interaction with efavirenz (pre-validated in SimCYP®). Different albumin levels and pregnancy stages were simulated in the presence of interactions with standard doses of efavirenz and ritonavir. A PQ day 7 concentration of 30 ng/ml was used as the efficacy endpoint for PQ treatment. Results: The median day 7 concentration of PQ remained virtually constant throughout pregnancy and was satisfactory across the three population groups, ranging from 26 to 34.1 ng/ml, implying that PQ remains efficacious throughout pregnancy. DDIs with efavirenz and ritonavir had a modest effect on the day 7 concentration of PQ, with AUC ratios ranging from 0.56-0.8 for efavirenz and 1.64-1.79 for ritonavir over 10-40 gestational weeks. However, a reduction in human serum albumin reflective of severe malaria significantly reduced the number of subjects attaining the PQ day 7 concentration in the presence of either DDI. The model demonstrated that DDIs between PQ and ARVs in pregnant women with different malaria severities can alter the pharmacokinetics of PQ.
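For intuition only, the sketch below uses a one-compartment oral model with hypothetical PQ-like parameters, far simpler than the SimCYP® PBPK model described above, to show how scaling clearance by a DDI (induction or inhibition) moves the day 7 concentration relative to the 30 ng/ml efficacy threshold. The dose, clearance, volume, and absorption values are illustrative assumptions, not the study's inputs.

```python
import numpy as np

def day7_concentration(cl_l_per_h, vd_l=40_000.0, ka_per_h=0.5,
                       dose_mg=960.0, n_doses=3):
    """Superpose one-compartment oral doses given on days 0-2; return ng/ml at day 7."""
    ke = cl_l_per_h / vd_l
    t_day7 = 7 * 24.0
    conc_mg_per_l = 0.0
    for d in range(n_doses):
        t = t_day7 - d * 24.0
        conc_mg_per_l += (dose_mg * ka_per_h / (vd_l * (ka_per_h - ke))
                          * (np.exp(-ke * t) - np.exp(-ka_per_h * t)))
    return conc_mg_per_l * 1000.0  # mg/L (= ug/ml) -> ng/ml

baseline_cl = 55.0  # hypothetical apparent clearance, L/h
for label, cl_factor in [("baseline", 1.0),
                         ("with an inducer (faster clearance)", 1.6),
                         ("with an inhibitor (slower clearance)", 0.6)]:
    c7 = day7_concentration(baseline_cl * cl_factor)
    status = "above" if c7 >= 30.0 else "below"
    print(f"{label:38s} day 7 ~ {c7:5.1f} ng/ml ({status} the 30 ng/ml target)")
```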

Keywords: antiretroviral, malaria, piperaquine, pregnancy, physiologically-based pharmacokinetics

Procedia PDF Downloads 159
163 Advances in Health Risk Assessment of Mycotoxins in Africa

Authors: Wilfred A. Abiaa, Chibundu N. Ezekiel, Benedikt Warth, Michael Sulyok, Paul C. Turner, Rudolf Krska, Paul F. Moundipa

Abstract:

Mycotoxins are a wide range of toxic secondary metabolites of fungi that contaminate various food commodities worldwide, especially in sub-Saharan Africa (SSA). Such contamination seriously compromises food safety and quality, posing a serious problem for human health as well as for trade and the economy. Mycotoxin concentrations depend on various factors, such as the commodity itself, climatic conditions, storage conditions, seasonal variation, and processing methods. When humans consume foods contaminated by mycotoxins, the toxins exert effects on their health through various modes of action. Rural populations in sub-Saharan Africa are exposed to dietary mycotoxins, but exposure levels and the associated health risks are likely to vary between SSA countries. Dietary exposure and health risk assessment studies have been limited by a lack of equipment for properly assessing the health implications for consumer populations who eat contaminated agricultural products. As such, mycotoxin research remains immature in several SSA nations, and the evaluation of products for mycotoxin loads below or above legislative limits is inadequate. Few nations have health risk assessment reports, and those available are mainly based on direct quantification of the toxins in foods ('external exposure') linked with data from food frequency questionnaires. Nonetheless, assessing exposure and health risk from mycotoxins requires more than these traditional approaches. Only a fraction of the mycotoxins in contaminated foods reaches the bloodstream and exerts toxicity ('internal exposure'). Moreover, internal exposure is usually smaller than external exposure, so dependence on external exposure alone may introduce confounders into risk assessment. Earlier studies from SSA focused on biomarker analysis, mainly of aflatoxins, while a few recent studies have concentrated on multi-biomarker analysis of exposures in urine, providing probable associations between observed disease occurrence and dietary mycotoxin levels. As a result, new techniques that can assess exposure levels directly in body tissues or fluids, and possibly link them to individuals' disease states, are urgently needed.
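As a reference point for the 'external exposure' approach mentioned above, the sketch below shows the conventional probable daily intake (PDI) calculation that combines food contamination levels with food frequency data; every number in it is a hypothetical placeholder, not a measured value or an official tolerable daily intake.

```python
def probable_daily_intake(conc_ug_per_kg_food, intake_g_per_day, body_weight_kg):
    """External-exposure estimate in ug of toxin per kg body weight per day."""
    return conc_ug_per_kg_food * (intake_g_per_day / 1000.0) / body_weight_kg

# Hypothetical example: a maize-based staple and an adult consumer.
pdi = probable_daily_intake(conc_ug_per_kg_food=4.0,    # toxin level in the food
                            intake_g_per_day=350.0,     # from a food frequency questionnaire
                            body_weight_kg=60.0)
tdi = 2.0  # illustrative tolerable daily intake, ug/kg bw/day
print(f"PDI = {pdi:.3f} ug/kg bw/day, hazard quotient = {pdi / tdi:.2f}")
```

Biomarker-based ('internal exposure') approaches instead replace the food-side inputs with concentrations measured directly in urine or blood.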

Keywords: mycotoxins, biomarkers, exposure assessment, health risk assessment, sub-Saharan Africa

Procedia PDF Downloads 542
162 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults

Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos

Abstract:

Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, combined with the need to transport hydrocarbons between countries, has made the design, construction, and operation of offshore pipelines very significant. From this perspective, it is evident that many more offshore pipelines are expected to be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. When a route encounters a geohazard area, there are three options. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation or protection measures in order to eliminate the geohazard itself. The last, and often most appealing, option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas of moderate or high seismicity, the design of an offshore pipeline is more demanding due to earthquake-related geohazards such as landslides, soil liquefaction, and active faults. It is worth mentioning that, although there is great worldwide experience in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited because most pipelines have been constructed in non-seismic regions (e.g., the North Sea, Western Australia, the Gulf of Mexico). The current study focuses on the seismic design of offshore pipelines against active faults. After an extensive review of the provisions of seismic norms worldwide and of the available analytical methods, the study numerically simulates (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics are examined. After drawing some interesting conclusions regarding the seismic vulnerability of offshore pipelines, potential cost-effective mitigation measures are proposed, taking constructability issues into account.
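The study's verification relies on finite-element modeling with strain-based criteria; for orientation only, the sketch below shows a much cruder Newmark-Hall-type closed-form screening estimate of the average pipe strain at a fault crossing. The fault offset, crossing angle, mobilised (unanchored) length, and strain limit are illustrative assumptions rather than values from the paper.

```python
import math

def average_pipe_strain(fault_offset_m, crossing_angle_deg, unanchored_length_m):
    """Average strain from the axial and transverse components of the fault offset."""
    beta = math.radians(crossing_angle_deg)
    d_axial = fault_offset_m * math.cos(beta)   # elongation along the pipe axis
    d_trans = fault_offset_m * math.sin(beta)   # offset transverse to the pipe
    l_eff = 2.0 * unanchored_length_m           # pipe length mobilised on both fault sides
    return d_axial / l_eff + 0.5 * (d_trans / l_eff) ** 2

strain = average_pipe_strain(fault_offset_m=1.5, crossing_angle_deg=70.0,
                             unanchored_length_m=150.0)
tensile_strain_limit = 0.02  # illustrative strain-based acceptance criterion
print(f"average strain ~ {strain:.2%} "
      f"({'within' if strain <= tensile_strain_limit else 'exceeds'} the assumed limit)")
```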

Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)

Procedia PDF Downloads 558