Search results for: Michael Ray Brunt
141 A Review of the Agroecological Farming System as a Viable Alternative Food Production Approach in South Africa
Authors: Michael Rudolph, Evans Muchesa, Katiya Yassim, Venkatesha Prasad
Abstract:
Input-intensive production systems characterise industrial agriculture as an unsustainable means to address food and nutrition security and sustainable livelihoods. There is extensive empirical evidence that supports the diversification and reorientation of industrial agriculture and that incorporates ecological practices viewed as essential for achieving balanced and productive farming systems. An agroecological farming system is a viable alternative approach that can improve food production, especially for the most vulnerable communities and households. Furthermore, substantial proof and supporting evidence show that such a system holds the key to increasing dietary diversity at the local level and reducing the multiple health and environmental risks stemming from industrial agriculture. This paper, therefore, aims to demonstrate the benefits of the agroecology food system through an evidence-based approach that shows how the broader agricultural network structures can play a meaningful role, particularly for impoverished households in today's reality. The methodology is centred on a structured literature review that analyses urban agriculture, agroecology, and food insecurity. Notably, ground-truthing, practical experiences, and field observation of agroecological farming were deployed. This paper places particular emphasis on the practical application of the agroecological approach in urban and peri-urban settings. Several evaluation reports on local and provincial initiatives clearly show that very few households engage in food gardens and urban agriculture. These households do not make use of their backyards or nearby open spaces for a number of reasons, such as stringent city by-laws, restricted access to land, little or no knowledge of innovative or alternative farming practices, and a general lack of interest. Furthermore, limited resources such as water and energy, as well as a lack of capacity building and training implementation, are additional constraints hampering small-scale food gardens and farms in other settings. The agroecology systems approach is viewed as one of the key solutions to tackling these problems.
Keywords: agroecology, water-energy-food nexus, sustainable development goals, social, environmental and economic impact
140 Bioefficiency of Cinnamomum verum Loaded Niosomes and Its Microbicidal and Mosquito Larvicidal Activity against Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus
Authors: Aasaithambi Kalaiselvi, Michael Gabriel Paulraj, Ekambaram Nakkeeran
Abstract:
Emergences of mosquito vector-borne diseases are considered as a perpetual problem globally in tropical countries. The outbreak of several diseases such as chikungunya, zika virus infection and dengue fever has created a massive threat towards the living population. Frequent usage of synthetic insecticides like Dichloro Diphenyl Trichloroethane (DDT) eventually had its adverse harmful effects on humans as well as the environment. Since there are no perennial vaccines, prevention, treatment or drugs available for these pathogenic vectors, WHO is more concerned in eradicating their breeding sites effectively without any side effects on humans and environment by approaching plant-derived natural eco-friendly bio-insecticides. The aim of this study is to investigate the larvicidal potency of Cinnamomum verum essential oil (CEO) loaded niosomes. Cholesterol and surfactant variants of Span 20, 60 and 80 were used in synthesizing CEO loaded niosomes using Transmembrane pH gradient method. The synthesized CEO loaded niosomes were characterized by Zeta potential, particle size, Fourier Transform Infrared Spectroscopy (FT-IR), GC-MS and SEM analysis to evaluate charge, size, functional properties, the composition of secondary metabolites and morphology. The Z-average size of the formed niosomes was 1870.84 nm and had good stability with zeta potential -85.3 meV. The entrapment efficiency of the CEO loaded niosomes was determined by UV-Visible Spectrophotometry. The bio-potency of CEO loaded niosomes was treated and assessed against gram-positive (Bacillus subtilis) and gram-negative (Escherichia coli) bacteria and fungi (Aspergillus fumigatus and Candida albicans) at various concentrations. The larvicidal activity was evaluated against II to IV instar larvae of Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus at various concentrations for 24 h. The mortality rate of LC₅₀ and LC₉₀ values were calculated. The results exhibited that CEO loaded niosomes have greater efficiency against mosquito larvicidal activity. The results suggest that niosomes could be used in various applications of biotechnology and drug delivery systems with greater stability by altering the drug of interest.Keywords: Cinnamomum verum, niosomes, entrapment efficiency, bactericidal and fungicidal, mosquito larvicidal activity
139 The Closed Cavity Façade (CCF): Optimization of CCF for Enhancing Energy Efficiency and Indoor Environmental Quality in Office Buildings
Authors: Michalis Michael, Mauro Overend
Abstract:
Buildings, in which we spend 87-90% of our time, act as a shelter protecting us from environmental conditions and weather phenomena. The building's overall performance is significantly dependent on the envelope’s glazing part, which is particularly critical as it is the most vulnerable part to heat gain and heat loss. However, conventional glazing technologies have relatively low-performance thermo-optical characteristics. In this regard, during winter, the heat losses due to the glazing part of a building envelope are significantly increased as well as the heat gains during the summer period. In this study, the contribution of an innovative glazing technology, namely Closed Cavity Façade (CCF) in improving energy efficiency and IEQ in office buildings is examined, aiming to optimize various design configurations of CCF. Using Energy Plus and IDA ICE packages, the performance of several CCF configurations and geometries for various climate types were investigated, aiming to identify the optimum solution. The model used for the simulations and optimization process was MATELab, a recently constructed outdoor test facility at the University of Cambridge (UK). The model was previously experimentally calibrated. The study revealed that the use of CCF technology instead of conventional double or triple glazing leads to important benefits. Particularly, the replacement of the traditional glazing units, used as the baseline, with the optimal configuration of CCF led to a decrease in energy consumption in the range of 18-37% (depending on the location). This mainly occurs due to integrating shading devices in the cavity and applying proper glass coatings and control strategies, which lead to improvement of thermal transmittance and g-value of the glazing. Since the solar gain through the façade is the main contributor to energy consumption during cooling periods, it was observed that a higher energy improvement is achieved in cooling-dominated locations. Furthermore, it was shown that a suitable selection of the constituents of a closed cavity façade, such as the colour and type of shading devices and the type of coatings, leads to an additional improvement of its thermal performance, avoiding overheating phenomena and consequently ensuring temperatures in the glass cavity below the critical value, and reducing the radiant discomfort providing extra benefits in terms of Indoor Environmental Quality (IEQ).Keywords: building energy efficiency, closed cavity façade, optimization, occupants comfort
138 Discussion of Blackness in Wrestling
Authors: Jason Michael Crozier
Abstract:
The wrestling territories of the mid-twentieth century in the United States are widely considered the birthplace of modern professional wrestling and, by many professional wrestlers, a beacon of hope for the easing of racial tensions during the civil rights era and beyond. The performers writing on this period speak of racial equality but fail to acknowledge the exploitation of black athletes as a racialized capital commodity who suffered the challenges of systemic racism, codified by a false narrative of aspirational exceptionalism and equality measured by audience diversity. The promoters’ ability to equate racial and capital exploitation with equality leads to a broader discussion of the history of Muscular Christianity in the United States and the exploitation of black bodies. Narratives of racial erasure that dominate the historical discourse when examining athleticism and exceptionalism redefined how blackness existed and how physicality and race are conceived of in sport and entertainment spaces. When discussing the implications of race and professional wrestling, it is important to examine the role of promotions as ‘imagined communities’ where the social agency of wrestlers is defined and quantified based on their ‘desired elements’ as a performer. The intentionally vague nature of this language masks a deep history of racialization that has been perpetuated by promoters and never fully examined by scholars. Sympathetic racism and the omission of cultural identity are also key factors in the limitations and racial barriers placed upon black athletes in the squared circle. The use of sympathetic racism within professional wrestling during the twentieth century placed black athletes into two distinct categorizations, the ‘black savage’ or the ‘black minstrel’. Black wrestlers of the twentieth century were defined by their strength as a capital commodity and their physicality rather than their knowledge of the business and in-ring skill. These performers had little agency in their ability to shape their own character development inside and outside the ring. Promoters would often create personas that heavily racialized the performer by tying them to a regional past or memory, such as that of slavery in the Deep South, using dog collar matches and adorning black characters in chains. Promoters softened cultural memory by satirizing the historic legacy of slavery and the black identity.
Keywords: sympathetic racism, social agency, racial commodification, stereotyping
137 Re-Evaluating the Hegemony of English Language in West Africa: A Meta-Analysis Review of the Research, 2003-2018
Authors: Oris Tom-Lawyer, Michael Thomas
Abstract:
This paper seeks to analyse the hegemony of the English language in Western Africa through the lens of educational policies and the socio-economic functions of the language. It is based on the premise that there is a positive link between the English language and development contexts. The study aims to fill a gap in the research literature by examining the usefulness of hegemony as a concept to explain the role of English language in the region, thus countering the negative connotations that often accompany it. The study identified four main research questions: i. What are the socio-economic functions of English in Francophone/lusophone countries? ii. What factors promote the hegemony of English in anglophone countries? iii. To what extent is the hegemony of English in West Africa? iv. What are the implications of the non-hegemony of English in Western Africa? Based on a meta-analysis of the research literature between 2003 and 2018, the findings of the study revealed that in francophone/lusophone countries, English functions in the following socio-economic domains; they are peace keeping missions, regional organisations, commercial and industrial sectors, as an unofficial international language and as a foreign language. The factors that promote linguistic hegemony of English in anglophone countries are English as an official language, a medium of instruction, lingua franca, cultural language, language of politics, language of commerce, channel of development and English for media and entertainment. In addition, the extent of the hegemony of English in West Africa can be viewed from the factors that contribute to the non-hegemony of English in the region; they are French language, Portuguese language, the French culture, neo-colonialism, level of poverty, and economic ties of French to its former colonies. Finally, the implications of the non-hegemony of English language in West Africa are industrial backwardness, poverty rate, lack of social mobility, drop out of school rate, growing interest in English, access to limited internet information and lack of extensive career opportunities. The paper concludes that the hegemony of English has resulted in the development of anglophone countries in Western Africa, while in the francophone/lusophone regions of the continent, industrial backwardness and low literacy rates have been consequences of English language marginalisation. In conclusion, the paper makes several recommendations, including the need for the early introduction of English into French curricula as part of a potential solution.Keywords: developmental tool, English language, linguistic hegemony, West Africa
136 Identifying the Determinants of Compliance with Maritime Environmental Legislation in the North and Baltic Sea Area: A Model Developed from Exploratory Qualitative Data Collection
Authors: Thea Freese, Michael Gille, Andrew Hursthouse, John Struthers
Abstract:
Ship operators on the North and Baltic Sea have been experiencing increased political interest in marine environmental protection and cleaner vessel operations. Stricter legislation on SO2 and NOx emissions, ballast water management and other measures of protection are currently being phased in or will come into force in the coming years. These measures benefit the health of the marine environment, while increasing company’s operational costs. In times of excess shipping capacity and linked consolidation in the industry non-compliance with environmental rules is one way companies might hope to stay competitive with both intra- and inter-modal trade. Around 5-15% of industry participants are believed to neglect laws on vessel-source pollution willingly or unwillingly. Exploratory in-depth interviews conducted with 12 experts from various stakeholder groups informed the researchers about variables influencing compliance levels, including awareness and apprehension, willingness to comply, ability to comply and effectiveness of controls. Semi-structured expert interviews were evaluated using qualitative content analysis. A model of determinants of compliance was developed and is presented here. While most vessel operators endeavour to achieve full compliance with environmental rules, a lack of availability of technical solutions, expediency of implementation and operation and economic feasibility might prove a hindrance. Ineffective control systems on the other hand foster willing non-compliance. With respect to motivations, lacking time, lacking financials and the absence of commercial advantages decrease compliance levels. These and other variables were inductively developed from qualitative data and integrated into a model on environmental compliance. The outcomes presented here form part of a wider research project on economic effects of maritime environmental legislation. Research on determinants of compliance might inform policy-makers about actual behavioural effects of shipping companies and might further the development of a comprehensive legal system for environmental protection.Keywords: compliance, marine environmental protection, exploratory qualitative research study, clean vessel operations, North and Baltic Sea area
135 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance
Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder
Abstract:
Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that describe the link between adiposity and subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension and other cardiovascular diseases. This study makes use of longitudinal quantitative and high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics etc.) methodologies on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants of this study underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three different time points per patient: baseline, peak-weight and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain after using methods in machine learning, clustering, network analysis etc. Pathways of interest included those involved in lipid remodeling, acute inflammatory response and glucose metabolism. Some of these biomolecules returned to baseline levels as the patient returned to normal weight whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation in comparison to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation and glycosylation. This study was not only able to contribute to new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies to produce and process data on many comprehensive omics levels.Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes
134 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints
Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes
Abstract:
Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present as a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risks of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. The infants deemed to have non-specific complaints or diagnoses by the emergency clinicians were selected for analysis. The ones with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This specific chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into color-coded zones as a visual cue for the serious implications of certain abnormalities. An infant is regarded as SPOC positive when it fulfils one red zone or two yellow zone criteria, and the attending clinician would be prompted to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. The ones admitted to the hospital for further management were more likely to have SPOC positive criteria than the discharged infants (Odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, Sepsis alert criteria on SPOC were positive in a higher percentage of patients with serious infections (56.52%) in comparison to those with mild conditions (15.89%) (p < 0.001). The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) to identify serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%). However, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has been applied as a useful clinical tool in clinical practice to help identify and manage sick young infants in the ED effectively.
Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart
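The predictive values quoted above follow directly from the reported sensitivity, specificity and prevalence via Bayes' rule; the short Python check below redoes that arithmetic using only the figures given in the abstract.

```python
# Recompute PPV and NPV from the sensitivity, specificity and prevalence reported above.
sens, spec, prev = 0.565, 0.841, 0.174

ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV = {ppv:.1%}")   # ~42.8%, matching the reported positive predictive value
print(f"NPV = {npv:.1%}")   # ~90.2%, matching the reported negative predictive value
```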
133 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity
Authors: Justus Enninga
Abstract:
Enthusiasts and skeptics about economic growth have little in common in their preference for institutional arrangements that solve ecological conflicts. This paper argues that agreement between both opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure where many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it will provide clarification of what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are for them and for the other side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of the literature already accept that polycentric forms of governance like markets, the rule of law and federalism are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, would still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach by requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or Ehrlichs in a green political economy.
Keywords: degrowth, green political theory, polycentricity, institutional robustness
132 Investigations of the Service Life of Different Material Configurations at Solid-lubricated Rolling Bearings
Authors: Bernd Sauer, Michel Werner, Stefan Emrich, Michael Kopnarski, Oliver Koch
Abstract:
Friction reduction is an important aspect in the context of sustainability and energy transition. Rolling bearings are therefore used in many applications in which components move relative to each other. Conventionally lubricated rolling bearings are used in a wide range of applications, but are not suitable under certain conditions. Conventional lubricants such as grease or oil cannot be used at very high or very low temperatures. In addition, these lubricants evaporate at very low ambient pressure, e.g. in a high vacuum environment, making the use of solid lubricated bearings unavoidable. With the use of solid-lubricated bearings, predicting the service life becomes more complex. While the end of the service life of bearings with conventional lubrication is mainly caused by the failure of the bearing components due to material fatigue, solid-lubricated bearings fail at the moment when the lubrication layer is worn and the rolling elements come into direct contact with the raceway during operation. In order to extend the service life of these bearings beyond the service life of the initial coating, the use of transfer lubrication is recommended, in which pockets or sacrificial cages are used in which the balls run and can thus absorb the lubricant, which is then available for lubrication in tribological contact. This contribution presents the results of wear and service life tests on solid-lubricated rolling bearings with sacrificial cage pockets. The cage of the bearing consists of a polyimide (PI) matrix with 15% molybdenum disulfide (MoS2) and serves as a lubrication depot alongside the silver-coated balls. The bearings are tested under high vacuum (pE < 10-2 Pa) at a temperature of 300 °C on a four-bearing test rig. First, investigations of the bearing system within the bearing service life are presented and the torque curve, the wear mass and surface analyses are discussed. With regard to wear, it can be seen that the bearing rings tend to increase in mass over the service life of the bearing, while the balls and the cage tend to lose mass. With regard to the elementary surface properties, the surfaces of the bearing rings and balls are examined in terms of the mass of the elements on them. Furthermore, service life investigations with different material pairings are presented, whereby the focus here is on the service life achieved in addition to the torque curve, wear development and surface analysis. It was shown that MoS2 in the cage leads to a longer service life, while a silver (Ag) coating on the balls has no positive influence on the service life and even appears to reduce it in combination with MoS2.Keywords: ball bearings, molybdenum disulfide, solid lubricated bearings, solid lubrication mechanisms
131 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London
Authors: Michael Adebayo, Saheed Lawal
Abstract:
The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients either on admission to hospital care or at a community level in primary care. It is a tool designed with the aim of using a holistic approach to managing patients. A Cochrane review of CGA use in 2011 found that the likelihood of patients being alive and living in their own home post-discharge rises by 30%. RCTs have also found 10–15% reductions in readmission rates, as well as reductions in institutionalization, resource use and costs. Past audit cycles at King’s College Hospital, Denmark Hill, had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior Doctors in the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover in staff (four-month placements for trainees). This 7th cycle created a multi-faceted approach to solving this problem amongst staff and creating lasting change. Methods: 1. We adopted multidisciplinary team involvement to support Doctors. MDT staff, e.g. Nurses, Physiotherapists, Occupational Therapists and Dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA Document Pro-forma to “Sunrise EPR” (Trust computer system). These CGAs were to be automatically included in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness/knowledge of what a CGA was. 4. We designed and placed posters highlighting domains of CGA and MDT roles suited to each domain on geriatric “Health and Ageing Wards” (HAU) in the hospital. 5. We performed an audit of the percentage of discharge summaries which included CGA and MDT role input. 6. We nominated ward champions on each ward from each multidisciplinary specialty to monitor and encourage colleagues to actively complete CGAs. 7. We initiated further education of ward staff on CGA's importance by discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged on HAU wards and the Frailty Assessment Unit.
Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality
130 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies would range from simple manipulations (aerobic growth phase, anaerobic production phase) to more complex genetic toggle switches. Additionally, some computational methods can also be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: that internal metabolite fluxes are governed by a pseudo-steady state and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming, and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes. The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
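The two-stage idea described above (a growth phase followed by a production phase, with the switch time chosen to maximise productivity) can be illustrated with a deliberately simplified sketch. The Python snippet below is not the dynamic-programming dFBA method of the abstract; it uses a toy Michaelis-Menten uptake model with invented yield and kinetic parameters and simply scans candidate switch times for the one giving the best volumetric productivity (titer divided by batch time).

```python
import numpy as np

# Toy two-stage batch: stage 1 routes substrate uptake to biomass, stage 2 to product.
# All parameter values below are illustrative assumptions, not fitted to any organism.
V_MAX, K_M = 10.0, 0.5     # max specific uptake (mmol/gDW/h) and half-saturation (mM)
Y_X, Y_P = 0.09, 0.9       # assumed biomass / product yields per unit of uptake flux

def simulate(t_switch, t_end=40.0, dt=0.01, x0=0.05, s0=50.0):
    """Euler-integrate biomass X, substrate S and product P for one switch time."""
    x, s, p, t = x0, s0, 0.0, 0.0
    while t < t_end and s > 0.0:
        v = V_MAX * s / (K_M + s)        # Michaelis-Menten uptake flux
        if t < t_switch:                 # growth phase: flux goes to biomass
            x += Y_X * v * x * dt
        else:                            # production phase: flux goes to product
            p += Y_P * v * x * dt
        s -= v * x * dt
        t += dt
    return p, t                          # final titer and batch duration

# Scan switch times and keep the one with the highest volumetric productivity.
best = max((simulate(ts) + (ts,) for ts in np.arange(1.0, 20.0, 0.5)),
           key=lambda r: r[0] / r[1])
titer, duration, t_switch = best
print(f"switch at ~{t_switch:.1f} h -> titer {titer:.1f}, productivity {titer/duration:.2f} per h")
```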
129 Exploring Mothers' Knowledge and Experiences of Attachment in the First 1000 Days of Their Child's Life
Authors: Athena Pedro, Zandile Batweni, Laura Bradfield, Michael Dare, Ashley Nyman
Abstract:
The rapid growth and development of an infant in the first 1000 days of life means that this time period provides the greatest opportunity for a positive developmental impact on a child’s life socially, emotionally, cognitively and physically. Current research is being focused on children in the first 1000 days, but there is a lack of research and understanding of mothers and their experiences during this crucial time period. Thus, it is imperative that more research is done to help better understand the experiences of mothers during the first 1000 days of their child’s life, as well as gain more insight into mothers’ knowledge regarding this time period. The first 1000 days of life, from conception to two years, is a critical period, and the child’s attachment to his or her mother or primary caregiver during this period is crucial for a multitude of future outcomes. The aim of this study was to explore mothers’ understanding and experience of the first 1000 days of their child’s life, specifically looking at attachment in the context of Bowlby and Ainsworths’ attachment theory. Using a qualitative methodological framework, data were collected through semi-structured individual interviews with 12 first-time mothers from low-income communities in Cape Town. Thematic analysis of the data revealed that mothers articulated the importance of attachment within the first 1000 days of life and shared experiences of how they bond and form attachment with their babies. Furthermore, these mothers expressed their belief in the long-term effects of early attachment of responsive positive parenting as well as the lasting effects of poor attachment and non-responsive parenting. This study has implications for new mothers and healthcare staff working with mothers of new-born babies, as well as for future contextual research. By gaining insight into the mothers’ experiences, policies and intervention efforts can be formulated in order to assist mothers during this time, which ultimately promote the healthy development of the nation’s children and future adult generation. If researchers are also able to understand the extent of mothers’ general knowledge regarding the first 1000 days and attachment, then there will be a better understanding of where there may be gaps in knowledge and thus, recommendations for effective and relevant intervention efforts may be provided. These interventions may increase knowledge and awareness of new mothers and health care workers at clinics and other service providers, creating a high impact on positive outcome. Thus, improving the developmental trajectory for many young babies allows them the opportunity to pursue optimal development by reaching their full potential.Keywords: attachment, experience, first 1000 days, knowledge, mothers
128 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembly accuracy is the degree of agreement between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of assembly stages are carried out to verify the correspondence of real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains. This greatly complicates the task. Solving such problems requires a special approach. The purpose of this article is to address the problem of improving the technology of assembly of aviation units by use of computer calculations. One practical example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction of the dimensional chain consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades, and the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction in the dimensional chain of the turbine wheel is the interdependence of the first and second chains by means of power circuits formed by the middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is timely because of the need to improve the assembly technology of this unit. The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following the algorithm: 1) research and analysis of production errors by geometric parameters; 2) development of a parametric model in the CAD system; 3) creation of a set of CAD models of parts taking into account actual or generalized distributions of errors of geometric parameters; 4) calculation of the model in the CAE system, loading various combinations of part models; 5) the accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the software package Matlab. Within the framework of the study, measurement data on the components of the turbine wheel (blades and disks) are available, and it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
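The statistical pre-simulation of an assembly described above can be illustrated with a small Monte Carlo tolerance stack-up. The sketch below is written in Python rather than the Matlab package used in the study, and all dimensions, scatter values and tolerance limits are invented for illustration: blade-root and disk-groove dimensions are sampled from assumed normal error distributions, the closing link of a simplified chain is computed for each virtual assembly, and the fraction of assemblies outside an assumed tolerance is accumulated.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BLADES = 60        # assumed number of blades in the wheel
N_TRIALS = 100_000   # number of simulated (virtual) assemblies

# Assumed normal production errors, in mm: (nominal, standard deviation)
groove_pitch = rng.normal(10.00, 0.010, size=(N_TRIALS, N_BLADES))  # disk groove spacing
root_width   = rng.normal( 9.95, 0.012, size=(N_TRIALS, N_BLADES))  # blade root width

# Closing link of a simplified chain: total circumferential clearance per assembly
closing_link = (groove_pitch - root_width).sum(axis=1)

lower, upper = 2.8, 3.2  # assumed acceptable range for the closing link, mm
reject_rate = np.mean((closing_link < lower) | (closing_link > upper))
print(f"mean clearance {closing_link.mean():.3f} mm, std {closing_link.std():.3f} mm, "
      f"reject rate {reject_rate:.2%}")
```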
127 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom
Authors: Michael Hast
Abstract:
Within their first year, infants can differentiate between objects based on their weight. By at least 5 years children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, they were tested with a heavy or a light ball, and they either had information about one of the balls only or both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube lead to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted by weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight
126 Microbiological Assessment of Soft Cheese (Wara), Raw Milk and Dairy Drinking Water from Selected Farms in Ido, Ibadan, Nigeria
Authors: Blessing C. Nwachukwu, Michael O. Taiwo, Wasiu A. Abibu, Isaac O. Ayodeji
Abstract:
Milk is an important source of micro- and macronutrients for humans. Soft Cheese (Wara) is an example of a by-product of milk. In addition, water is considered one of the most vital resources on cattle farms. Due to the high consumption rate of milk and soft cheese and the traditional techniques involved in their production in Nigeria, there was a need for a microbiological assessment, which is of utmost public health importance. The study thus investigated microbial risks associated with the consumption of milk and soft cheese (Wara). It also investigated common pathogens present in dairy water on farms, and antibiotic sensitivity profiling for implicated pathogens was conducted. Samples were collected from three different Fulani dairy herds in Ido Local Government, Ibadan, Oyo State, Nigeria, and subjected to microbiological evaluation and antimicrobial susceptibility testing. Aspergillus flavus was the only fungal isolate from Wara, while Staphylococcus aureus, Vibrio cholerae, Hafnia alvei, Proteus mirabilis, Escherichia coli, Pseudomonas aeruginosa, Citrobacter freundii, and Klebsiella pneumoniae were the bacterial species isolated from Wara, dairy milk and dairy drinking water. Bacterial counts from Wara from the three selected farms A, B and C were 3.5 × 10⁵ CFU/ml, 4.0 × 10⁵ CFU/ml and 5.3 × 10⁵ CFU/ml, respectively, while the fungal count was 3 CFU/100 µl. The total bacterial counts from dairy milk from the three selected farms A, B and C were 2.0 × 10⁵ CFU/ml, 3.5 × 10⁵ CFU/ml and 6.5 × 10⁵ CFU/ml, respectively. The recorded bacterial counts from dairy water from farms A, B and C were 1.4 × 10⁵ CFU/ml, 1.9 × 10⁵ CFU/ml and 4.9 × 10⁵ CFU/ml, respectively. The highest antimicrobial resistance of 100% was recorded in Wara with Enrofloxacin, Gentamicin, Ceftriaxone and Colistin. The highest antimicrobial susceptibility of 100% was recorded in raw milk with Enrofloxacin and Gentamicin. The highest antimicrobial intermediate response of 100% was recorded in raw milk with Streptomycin. The study revealed that most of the cheeses sold in Ido Local Government are contaminated with pathogens. Further research is needed on standardizing the production method to prevent pathogens from gaining access. The presence of bacteria in raw milk indicated contamination due to poor handling and unhygienic practices. Thus, drinking unpasteurized milk is hazardous as it increases the risk of zoonoses. Also, the provision of quality drinking water is crucial for the optimum productivity of dairy herds. Health education programs aimed at increasing awareness of the importance of clean water for animal health will be helpful.
Keywords: dairy, raw milk, soft cheese, Wara
125 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany's total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPDs). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for the support of decision and policy makers. LCA and LCC results are based on respective models which depend on technical parameters like efficiencies, material and energy demand, product output, etc. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered or just studied superficially. However, the effect of parameter uncertainties cannot be neglected. Based on the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow for a first rough view of the results but do not take effects such as error propagation into account. As a consequence, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of the LCA and LCC results and provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. Here, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow for a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
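As an illustration of the kind of probabilistic evaluation argued for above, the short Python sketch below propagates assumed parameter uncertainties through a deliberately simplified life-cycle indicator for an insulated exterior wall by Monte Carlo sampling. All distributions and parameter values are invented for illustration and are not taken from the study; the point is only that the spread of the output (percentiles, exceedance probabilities) carries information that a single best-case/worst-case pair does not.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000  # Monte Carlo runs

# Assumed (illustrative) parameters for 1 m² of insulated exterior wall over 50 years:
insulation_mass = rng.normal(6.0, 0.5, N)               # kg/m², production scatter
gwp_factor      = rng.lognormal(np.log(3.0), 0.25, N)   # kg CO2-eq per kg insulation
heat_saving     = rng.normal(40.0, 6.0, N)               # kWh/m²a saved by the insulation
grid_factor     = rng.normal(0.4, 0.05, N)               # kg CO2-eq per kWh heating energy
lifetime        = 50.0                                    # years

# Net life-cycle GWP: production burden minus avoided heating emissions
net_gwp = insulation_mass * gwp_factor - heat_saving * grid_factor * lifetime

print(f"median net GWP: {np.median(net_gwp):.0f} kg CO2-eq/m²")
print(f"5th-95th percentile: {np.percentile(net_gwp, 5):.0f} to {np.percentile(net_gwp, 95):.0f}")
print(f"probability of net benefit: {np.mean(net_gwp < 0):.1%}")
```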
124 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable to achieve such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data for which the event time is known only to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the smart policing initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach could not only provide an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by supporting the deployment of any extra resources available.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
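A minimal sketch of the kind of EM scheme described above is given below, under simplifying assumptions: the time of week is discretised into 168 hourly bins with a piecewise-constant rate, each event is known only to fall within a contiguous block of bins, and the smoothness penalties over time of day and day of week used in the paper are omitted. All data and parameter values are synthetic and purely illustrative.

```python
import numpy as np

def em_intensity(intervals, n_weeks, n_bins=168, n_iter=200):
    """EM estimate of a piecewise-constant time-of-week rate (events per bin per week)
    from interval-censored events; `intervals` holds (start_bin, end_bin) pairs, end exclusive."""
    lam = np.full(n_bins, len(intervals) / (n_bins * n_weeks))   # flat starting guess
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for a, b in intervals:                   # E-step: spread each event over its window
            w = lam[a:b] / lam[a:b].sum()
            expected[a:b] += w
        lam = expected / n_weeks                 # M-step: rate = expected count / exposure
    return lam

# Synthetic example: 10 weeks of data, events concentrated on Friday and Saturday evenings
# (bins 116-118 and 140-142 when bin 0 is Monday 00:00), each reported only as having
# occurred within a window of several hours ending at the true hour.
rng = np.random.default_rng(1)
true_bins = rng.choice([116, 117, 118, 140, 141, 142], size=300)
intervals = [(max(0, t - int(rng.integers(1, 9))), t + 1) for t in true_bins]
rate = em_intensity(intervals, n_weeks=10)
print("six highest-rate bins:", sorted(np.argsort(rate)[-6:]))
```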
123 The Impact of Technology and Artificial Intelligence on Children in Autism
Authors: Dina Moheb Rashid Michael
Abstract:
A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndrome" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndrome" autism phenotype. Most teachers reported that kindergartens did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way for improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Teaching Skills and Readiness Scale for Students with Autism (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these co morbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder, and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attention performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire. To study whether the autistic profiles of these children are associated with visual orientation difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autistic child phenotype. Data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism).In order to enable a comparison of visual attention profiles, cross-sectional analyzes of developmental trajectories are carried out. Significant differences in the visual-attentive processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with autism syndrome symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills
122 MicroRNA-1246 Expression Associated with Resistance to Oncogenic BRAF Inhibitors in Mutant BRAF Melanoma Cells
Authors: Jae-Hyeon Kim, Michael Lee
Abstract:
Intrinsic and acquired resistance limit the therapeutic benefits of oncogenic BRAF inhibitors in melanoma. MicroRNAs (miRNA) regulate the expression of target mRNAs by repressing their translation. Thus, we investigated miRNA expression patterns in melanoma cell lines to identify candidate biomarkers for acquired resistance to BRAF inhibitor. Here, we used the Affymetrix miRNA V3.0 microarray profiling platform to compare miRNA expression levels in three cell lines containing BRAF inhibitor-sensitive A375P BRAF V600E cells, their BRAF inhibitor-resistant counterparts (A375P/Mdr), and SK-MEL-2 BRAF-WT cells with intrinsic resistance to BRAF inhibitor. The miRNAs with at least a two-fold change in expression between BRAF inhibitor-sensitive and -resistant cell lines were identified as differentially expressed. Averaged intensity measurements identified 138 and 217 miRNAs that were differentially expressed by 2-fold or more between: 1) A375P and A375P/Mdr; 2) A375P and SK-MEL-2, respectively. Hierarchical clustering revealed differences in miRNA expression profiles between BRAF inhibitor-sensitive and -resistant cell lines for miRNAs involved in intrinsic and acquired resistance to BRAF inhibitor. In particular, 43 miRNAs were identified whose expression was consistently altered in the two BRAF inhibitor-resistant cell lines, regardless of intrinsic or acquired resistance. Twenty-five miRNAs were consistently upregulated and 18 downregulated more than 2-fold. Although some discrepancies were detected when miRNA microarray data were compared with qPCR-measured expression levels, qRT-PCR results for five miRNAs (miR-3617, miR-92a1, miR-1246, miR-1936-3p, and miR-17-3p) showed excellent agreement with the microarray experiments. To further investigate cellular functions of miRNAs, we examined effects on cell proliferation. Synthetic oligonucleotide miRNA mimics were transfected into the three cell lines, and proliferation was quantified using a colorimetric assay. Of the five miRNAs tested, only miR-1246 altered cell proliferation of A375P/Mdr cells. The transfection of miR-1246 mimic strongly conferred PLX-4720 resistance to A375P/Mdr cells, implying that miR-1246 upregulation confers acquired resistance to BRAF inhibition. We also found that PLX-4720 caused much greater G2/M arrest in A375P/Mdr cells transfected with miR-1246 mimic than that seen in scrambled RNA-transfected cells. Additionally, the miR-1246 mimic partially conferred resistance to autophagy induction by PLX-4720. These results indicate that autophagy does play an essential death-promoting role in PLX-4720-induced cell death. Taken together, these results suggest that miRNA expression profiling in melanoma cells can provide valuable information for a network of BRAF inhibitor resistance-associated miRNAs.
Keywords: microRNA, BRAF inhibitor, drug resistance, autophagy
121 The Role of Glyceryl Trinitrate (GTN) in 99mTc-HIDA with Morphine Provocation Scan for the Investigation of Type III Sphincter of Oddi Dysfunction (SOD)
Authors: Ibrahim M Hassan, Lorna Que, Michael Rutland
Abstract:
Type I SOD is usually diagnosed by anatomical imaging such as ultrasound, CT and MRCP. However, types II and III SOD yield negative results despite the presence of significant symptoms. In particular, type III is difficult to diagnose due to the absence of significant biochemical or anatomical abnormalities. Nuclear Medicine can aid in this diagnostic dilemma by demonstrating functional changes in the bile flow. Low-dose morphine (0.04 mg/kg) stimulates the tone of the sphincter of Oddi (SO), and its usefulness in diagnosing SOD has been shown by the delay in bile flow it causes when compared with a non-morphine-provoked baseline scan. This work expands on that process by using sublingual GTN at 60 minutes post tracer and morphine injection to relax the SO and induce an improvement in bile outflow, and in some cases to show immediate relief of morphine-induced abdominal pain. The criteria for a positive SOD study are as follows: during the first hour of the morphine provocation, (1) delayed intrahepatic biliary duct tracer accumulation; plus (2) delayed appearance but persistent retention of activity in the common bile duct; and (3) delayed bile flow into the duodenum. In addition, the need for GTN within the first hour to relieve abdominal pain was regarded as highly supportive of the diagnosis. A retrospective analysis was performed of 85 patients (pts) (78F and 6M) referred for suspected SOD (type III) who had been intensively investigated because of recurrent right upper quadrant or abdominal pain post cholecystectomy. A 99mTc-HIDA scan with morphine provocation was performed, followed by GTN at 60 minutes post tracer injection, and a further thirty minutes of dynamic imaging were acquired. 30 pts were negative. 55 pts were regarded as positive for SOD, and 38/55 (60%) of these patients with an abnormal result were further evaluated with a baseline 99mTc-HIDA. As expected, all 38 pts showed better bile flow characteristics than during the morphine provocation. 20/55 (36%) patients were treated by ERCP sphincterotomy and the rest were managed conservatively by medical therapy. In all cases regarded as positive for SOD, the sublingual GTN at 60 minutes showed immediate improvement in bile flow. 11/55 (20%) patients who developed severe post-morphine abdominal pain were relieved by GTN almost instantaneously. We propose that GTN is a useful agent in the diagnosis of SOD when performing a 99mTc-HIDA scan, and that the satisfactory response to sublingual GTN could offer additional information in patients who have severe pain at the time of the procedure or when presenting to the emergency unit because of biliary pain. It may also help in determining whether a trial of medical therapy should be used before considering surgery.
Keywords: GTN, HIDA, morphine, SOD
Procedia PDF Downloads 304
120 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners' errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion, but was absolutely dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions resulting in particular errors on specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This particular fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus, the behaviourist, Piagetian-constructivist, and philosophy-of-science belief systems were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as a general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for the establishment of the ensuing thesis. Additionally, it explains how the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives sound and legitimate bases for the development of future studies concerning mathematical error analysis. Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 190
119 Historic Fire Occurrence in Hemi-Boreal Forests: Exploring Natural and Cultural Scots Pine Multi-Cohort Fire Regimes in Lithuania
Authors: Charles Ruffner, Michael Manton, Gintautas Kibirkstis, Gediminas Brazaitas, Vitas Marozas, Ekaterine Makrickiene, Rutile Pukiene, Per Angelstam
Abstract:
In dynamic boreal forests, fire is an important natural disturbance that drives regeneration and the mortality of living and dead trees, and thus successional trajectories. However, current forest management practices focused solely on wood production have effectively eliminated fire as a stand-level disturbance. While this is generally well studied across much of Europe, in Lithuania little is known about the historic fire regime and the role fire could play as a management tool in the sustainable management of future landscapes. Focusing on Scots pine forests, we explore: i) the relevance of fire disturbance regimes on the forestlands of Lithuania; ii) fire occurrence in the Dzukija landscape for dry upland and peatland forest sites; and iii) correlations between tree-ring data and climate variables, to ascertain climatic influences on growth and fire occurrence. We sampled and cross-dated 132 Scots pine samples with fire scars from 4 dry pine forest stands and 4 peatland forest stands, respectively. The fire history of each sample was analyzed using standard dendrochronological methods and presented in FHAES format. Analyses of soil moisture and nutrient conditions revealed a strong probability of high fire frequency in Scots pine forests (59%), which cover 34.5% of Lithuania's current forestland. The fire history analysis revealed 455 fire scars and 213 fire events during the period 1742-2019. Within the Dzukija landscape, the mean fire interval was 4.3 years for the dry Scots pine forest and 8.7 years for the peatland Scots pine forest. However, our comparison of fire frequency before and after 1950 shows a marked decrease in mean fire interval. Our data from the hemi-boreal forest landscapes of Lithuania provide strong evidence that fire, both human- and lightning-ignited, has been a natural phenomenon and should remain one, and that the examination of biological archives can be used to guide sustainable forest management into the future. Currently, fire use as a tool for forest management is prohibited by law in Lithuania. We recommend introducing trials that use low-intensity prescribed burning of Scots pine stands as a regeneration tool for mimicking natural forest disturbance regimes. Keywords: biodiversity conservation, cultural burning, dendrochronology, forest dynamics, forest management, succession
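As a rough illustration of how a mean fire interval (MFI) is derived from a cross-dated fire-scar record, the sketch below computes MFI from a composite list of fire years and compares pre- and post-1950 intervals; the fire years shown are invented placeholders, not the Dzukija data.

```python
# Hedged sketch: mean fire interval (MFI) from a composite list of fire years.
# The years below are invented for illustration only.
import numpy as np

fire_years = np.array([1742, 1751, 1756, 1763, 1770, 1801, 1843, 1902, 1948, 1969, 2006])

intervals = np.diff(np.sort(fire_years))   # years between successive fire events
print("MFI (whole record):", intervals.mean())

# Compare fire frequency before and after 1950, as done in the study
pre_1950 = np.diff(fire_years[fire_years < 1950])
post_1950 = np.diff(fire_years[fire_years >= 1950])
print("MFI pre-1950:", pre_1950.mean())
print("MFI post-1950:", post_1950.mean())
```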
Procedia PDF Downloads 200
118 Educational Debriefing in Prehospital Medicine: A Qualitative Study Exploring Educational Debrief Facilitation and the Effects of Debriefing
Authors: Maria Ahmad, Michael Page, Danë Goodsman
Abstract:
‘Educational’ debriefing – a construct distinct from clinical debriefing – is used following simulated scenarios and is central to learning and development in fields ranging from aviation to emergency medicine. However, little research into educational debriefing in prehospital medicine exists. This qualitative study explored the facilitation and effects of prehospital educational debriefing and identified obstacles to debriefing, using London's Air Ambulance Pre-Hospital Care Course (PHCC) as a model. Method: Ethnographic observations of moulages and debriefs were conducted over two consecutive days of the PHCC in October 2019. Detailed contemporaneous field notes were made and analysed thematically. Subsequently, seven one-to-one, semi-structured interviews were conducted with four PHCC debrief facilitators and three course participants to explore their experiences of prehospital educational debriefing. Interview data were manually transcribed and analysed thematically. Results: Four overarching themes were identified: the approach to the facilitation of debriefs, effects of debriefing, facilitator development, and obstacles to debriefing. The unpredictable debriefing environment was seen as both hindering and paradoxically benefitting educational debriefing. Despite using varied debriefing structures, facilitators emphasised similar key debriefing components, including exploring participants' reasoning and sharing experiences to improve learning and prevent future errors. Debriefing was associated with three principal effects: releasing emotion; learning and improving, particularly participant compound learning as they progressed through scenarios; and the application of learning to clinical practice. Facilitator training and feedback were central to facilitator learning and development. Several obstacles to debriefing were identified, including mismatch of participant and facilitator agendas, performance pressure, and time. Interestingly, when used appropriately in the educational environment, these obstacles may paradoxically enhance learning. Conclusions: Educational debriefing in prehospital medicine is complex. It requires the establishment of a safe learning environment, an understanding of participant agendas, and facilitator experience to maximise participant learning. Aspects unique to prehospital educational debriefing were identified, notably the unpredictable debriefing environment, interdisciplinary working, and the paradoxical benefit of educational obstacles for learning. This research also highlights aspects of educational debriefing not extensively detailed in the literature, such as compound participant learning, display of ‘professional honesty’ by facilitators, and facilitator learning, which require further exploration. Future research should also explore educational debriefing in other prehospital services. Keywords: debriefing, prehospital medicine, prehospital medical education, pre-hospital care course
Procedia PDF Downloads 217
117 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language
Authors: Tengku Sepora Tengku Mahadi
Abstract:
Where the speed of book writing lags behind the high need for such material for tertiary studies, translation offers a way to enhance the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written with unique rhetorical devices, subject-matter foundation and cultural references will undoubtedly challenge the translator; longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book is chosen because it has often been used as a textbook or for reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy book for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task; they can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language as suggested by Michael Halliday as its theoretical framework. Concepts of the context of culture, the context of situation and measures of field, tenor and mode form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions and the linguistic and textual devices used to achieve them can then be applied as a guide to determine the effectiveness of the translation that is produced. Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture
Procedia PDF Downloads 149
116 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role in the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes; hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilizing slopes with deep soil layers than forbs and grasses. Consequently, research has mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses, but also some shrubs and trees, form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and single-root tensile strength measurements, can only provide a detailed picture of the vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for the measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60 cm) on the short ends, with a 30x30 cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60 cm grass sods were cut out (max. depth 20 cm). The grass sods were pulled apart while the horizontal tensile strength was measured over a 30 cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability. Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
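To make the measurement concrete, the following sketch converts a force record from a traction test into a peak cumulative horizontal root tensile strength per unit sod width; the data file, column names, and the 0.30 m measurement width used for normalization are assumptions for illustration, not the authors' instrumentation output.

```python
# Hedged sketch: peak horizontal root tensile strength per unit sod width.
# File and column names are hypothetical; 0.30 m is the assumed measurement width.
import pandas as pd

WIDTH_M = 0.30  # width of the measurement zone in metres

record = pd.read_csv("traction_record.csv")   # assumed columns: displacement_mm, force_N
peak_force_n = record["force_N"].max()

tensile_strength_kn_per_m = peak_force_n / 1000.0 / WIDTH_M
print(f"Peak cumulative horizontal root tensile strength: "
      f"{tensile_strength_kn_per_m:.2f} kN/m")
```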
Procedia PDF Downloads 166
115 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media
Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga
Abstract:
Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions in people. Due to the above, the development of analytical methods for detecting this ion has gained great interest. In line with this, the use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to be incorporated into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to new colorimetric and/or fluorescent probes. In the present study, two chalcone-based 7-(diethylamino)-2-quinolinone derivatives, DQD1-2, were synthesized according to the method reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-VIS and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps. These photophysical properties were improved in the presence of a cationic micellar medium of CTAB thanks to the formation of adducts with association constants of the order of 2.5x10⁵ M⁻¹, increasing the quantum yields to 0.12 and giving two fluorescence lifetimes near 120 and 400 ps for DQD1 and DQD2. Moreover, thanks to the micellar medium, the reactivity of these derivatives with nucleophilic analytes such as HSO3- was increased. This was demonstrated through kinetic studies, which showed an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor since it detected HSO3- with excellent results. Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives
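One common way to estimate an association constant of the order reported above is to fit a 1:1 binding isotherm to the fluorescence enhancement observed on titration with the surfactant; the model, the titration points, and the initial parameter guesses below are illustrative assumptions, not the authors' actual procedure or data.

```python
# Hedged sketch: 1:1 binding isotherm fit to fluorescence titration data,
# giving an apparent association constant K (M^-1). All values are invented.
import numpy as np
from scipy.optimize import curve_fit

def binding_1to1(conc, f0, f_inf, k):
    """Observed fluorescence for an assumed 1:1 probe-surfactant association."""
    return (f0 + f_inf * k * conc) / (1.0 + k * conc)

ctab_conc = np.array([0, 2e-6, 5e-6, 1e-5, 2e-5, 5e-5, 1e-4])   # mol/L, illustrative
fluorescence = np.array([1.0, 1.8, 3.1, 4.9, 6.8, 9.0, 10.1])   # arbitrary units, illustrative

popt, _ = curve_fit(binding_1to1, ctab_conc, fluorescence,
                    p0=[1.0, 12.0, 1e5], maxfev=10000)
f0, f_inf, k = popt
print(f"Apparent association constant K = {k:.2e} M^-1")
```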
Procedia PDF Downloads 94
114 School Refusal Behaviours: The Roles of Adolescent and Parental Factors
Authors: Junwen Chen, Celina Feleppa, Tingyue Sun, Satoko Sasagawa, Michael Smithson
Abstract:
School refusal behaviours refer to behaviours to avoid school attendance, chronic lateness in arriving at school, or regular early dismissal. Poor attendance in schools is highly correlated with anxiety, depression, suicide attempts, delinquency, violence, and substance use and abuse. Poor attendance is also a strong indicator of lower achievement in school, as well as problematic social-emotional development. Long-term consequences of school refusal behaviours include fewer opportunities for higher education, employment, and social difficulties, and high risks of later psychiatric illness. Given its negative impacts on youth educational outcomes and well-being, a thorough understanding of the factors involved in the development of this phenomenon is warranted for developing effective management approaches. This study investigated parental and adolescent factors that may contribute to school refusal behaviours, specifically focusing on the role of parental and adolescent anxiety and depression, emotion dysregulation, and parental rearing style. Findings are expected to inform the identification of both parental and adolescent factors that may contribute to school refusal behaviours. This knowledge will enable novel and effective approaches incorporating these factors to manage school refusal behaviours in adolescents, which in turn will improve their school and daily functioning. Results are important for an integrative understanding of school refusal behaviours. Furthermore, findings will also provide information for policymakers to weigh the benefits of interventions targeting school refusal behaviours in adolescents. One-hundred-and-six adolescents aged 12-18 years (mean age = 14.79 years, SD = 1.78, males = 44) and their parents (mean age = 47.49 years, SD = 5.61, males = 27) completed an online questionnaire measuring both parental and adolescent anxiety, depression, emotion dysregulation, parental rearing styles, and adolescents' school refusal behaviours. Adolescents with school refusal behaviours reported greater anxiety and depression, and their parents showed greater emotion dysregulation. Parental emotion dysregulation and adolescents' anxiety and depression predicted school refusal behaviours independently. To date, only limited studies have investigated the interplay between parental and youth factors in relation to youth school refusal behaviours. Although parental emotion dysregulation has been investigated in relation to youth emotion dysregulation, little is known about its role in the context of school refusal. This study is one of the very few that investigated both parental and adolescent factors in relation to school refusal behaviours in adolescents. The findings support the theoretical models that emphasise the role of youth and parental psychopathology in school refusal behaviours. Future management of school refusal behaviours should target adolescents' anxiety and depression while incorporating training in parental emotion regulation skills. Keywords: adolescents, school refusal behaviors, parental factors, anxiety and depression, emotion dysregulation
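A minimal sketch of the kind of regression implied by "parental emotion dysregulation and adolescents' anxiety and depression predicted school refusal behaviours independently" is shown below; the variable names and simulated scores are hypothetical stand-ins for the questionnaire measures, not the study dataset.

```python
# Hedged sketch: OLS regression of school refusal behaviour scores on
# parental emotion dysregulation and adolescent anxiety/depression.
# All variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 106  # sample size matching the abstract
data = pd.DataFrame({
    "parent_emotion_dysregulation": rng.normal(50, 10, n),
    "adolescent_anxiety_depression": rng.normal(30, 8, n),
})
# Simulated outcome so the example runs end to end
data["school_refusal"] = (0.4 * data["parent_emotion_dysregulation"]
                          + 0.6 * data["adolescent_anxiety_depression"]
                          + rng.normal(0, 5, n))

X = sm.add_constant(data[["parent_emotion_dysregulation",
                          "adolescent_anxiety_depression"]])
model = sm.OLS(data["school_refusal"], X).fit()
print(model.summary())  # independent contribution of each predictor
```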
Procedia PDF Downloads 124
113 Telemedicine Versus Face-to-Face Follow up in General Surgery: A Randomized Controlled Trial
Authors: Teagan Fink, Lynn Chong, Michael Hii, Brett Knowles
Abstract:
Background: Telemedicine is a rapidly advancing field providing healthcare to patients at a distance from their treating clinician. There is a paucity of high-quality evidence detailing the safety and acceptability of telemedicine for postoperative outpatient follow-up. This randomized controlled trial – conducted prior to the COVID-19 pandemic – aimed to assess patient satisfaction and safety (as determined by readmission, reoperation and complication rates) of telephone compared to face-to-face clinic follow-up after uncomplicated general surgical procedures. Methods: Patients following uncomplicated laparoscopic appendicectomy or cholecystectomy and laparoscopic or open umbilical or inguinal hernia repairs were randomized to telephone or face-to-face outpatient clinic follow-up. Data points including patient demographics, perioperative details and postoperative outcomes (e.g., wound healing complications, pain scores, unplanned readmission to hospital and return to daily activities) were compared between groups. Patients also completed a Likert patient satisfaction survey following their consultation. Results: 103 patients were recruited over a 12-month period (21 laparoscopic appendicectomies, 65 laparoscopic cholecystectomies, nine open umbilical hernia repairs, six laparoscopic inguinal hernia repairs and two laparoscopic umbilical hernia repairs). Baseline patient demographics and operative interventions were the same in both groups. Patient- or clinician-reported concerns about postoperative pain, use of analgesia, wound healing complications and return to daily activities at clinic follow-up were not significantly different between the two groups. Of the 58 patients randomized to the telemedicine arm, 40% reported high and 60% reported very high patient satisfaction. Telemedicine clinic mean consultation times were significantly shorter than face-to-face consultation times (telemedicine 10.3 +/- 7.2 minutes, face-to-face 19.2 +/- 23.8 minutes, p-value = 0.014). Rates of failing to attend clinic were not significantly different (telemedicine 3%, control 6%). There was no increased rate of postoperative complications in patients followed up by telemedicine compared to in person. There were no unplanned readmissions, returns to theatre, or mortalities in this study. Conclusion: Telemedicine follow-up of patients undergoing uncomplicated general surgery is safe and does not result in missed diagnoses or higher rates of complications. Telemedicine provides high patient satisfaction, and steps to implement this modality in outpatient care should be undertaken. Keywords: general surgery, telemedicine, patient satisfaction, patient safety
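The reported difference in consultation times can be checked approximately from the summary statistics with a Welch two-sample t-test; the group sizes below (58 telemedicine, 45 face-to-face) are inferred from the abstract, and the exact test the authors used is not stated, so this is only an illustrative recomputation.

```python
# Hedged sketch: Welch t-test on consultation times computed from the reported
# means and standard deviations. Group sizes are assumed from the abstract.
from scipy import stats

result = stats.ttest_ind_from_stats(
    mean1=10.3, std1=7.2, nobs1=58,    # telemedicine arm
    mean2=19.2, std2=23.8, nobs2=45,   # face-to-face arm (assumed n)
    equal_var=False,                   # Welch's correction for unequal variances
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```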
Procedia PDF Downloads 118
112 Congenital Diaphragmatic Hernia Outcomes in a Low-Volume Center
Authors: Michael Vieth, Aric Schadler, Hubert Ballard, J. A. Bauer, Pratibha Thakkar
Abstract:
Introduction: Congenital diaphragmatic hernia (CDH) is a condition characterized by the herniation of abdominal contents into the thoracic cavity requiring postnatal surgical repair. Previous literature suggests improved CDH outcomes at high-volume regional referral centers compared to low-volume centers. The purpose of this study was to examine CDH outcomes at Kentucky Children's Hospital (KCH), a low-volume center, compared to the Congenital Diaphragmatic Hernia Study Group (CDHSG). Methods: A retrospective chart review was performed at KCH from 2007-2019 for neonates with CDH, and the cohort was then subdivided into two groups: those requiring ECMO therapy and those not requiring ECMO therapy. Basic demographic data and measures of mortality and morbidity, including ventilator days and length of stay, were compared to the CDHSG. Measures of morbidity for the ECMO cohort, including duration of ECMO, clinical bleeding, intracranial hemorrhage, sepsis, need for continuous renal replacement therapy (CRRT), need for sildenafil at discharge, timing of surgical repair, and total ventilator days, were collected. Statistical analysis was performed using IBM SPSS Statistics version 28. One-sample t-tests and the one-sample Wilcoxon signed-rank test were utilized as appropriate. Results: There were a total of 27 neonatal patients with CDH at KCH from 2007-2019; 9 of the 27 required ECMO therapy. Birth weight and gestational age were similar between KCH and the CDHSG (2.99 kg vs 2.92 kg, p = 0.655; 37.0 weeks vs 37.4 weeks, p = 0.51). About half of the patients were inborn in both cohorts (52% vs 56%, p = 0.676). The KCH cohort had significantly more Caucasian patients (96% vs 55%, p < 0.001). Unadjusted mortality was similar in both groups (KCH 70% vs CDHSG 72%, p = 0.857). Using ECMO utilization (KCH 78% vs CDHSG 52%, p = 0.118) and need for surgical repair (KCH 95% vs CDHSG 85%, p = 0.060) as proxies for severity, mortality in the two groups was comparable. No significant difference was noted for pulmonary outcomes such as average ventilator days (KCH 43.2 vs CDHSG 17.3, p = 0.078) and home oxygen dependency (KCH 44% vs CDHSG 24%, p = 0.108). The average length of hospital stay for patients treated at KCH was similar to the CDHSG (64.4 vs 49.2 days, p = 1.000). Conclusion: Our study demonstrates that outcome in CDH patients is independent of a center's case-volume status. Management of CDH with a standardized approach in a low-volume center can yield similar outcomes. These data support the treatment of patients with CDH at low-volume centers as opposed to transferring them to higher-volume centers. Keywords: ECMO, case volume, congenital diaphragmatic hernia, congenital diaphragmatic hernia study group, neonate
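The single-center versus registry comparisons can be sketched as one-sample tests against CDHSG reference values; the per-patient ventilator-day values below are invented for illustration, and only the reference mean of 17.3 days is taken from the abstract, so this shows the approach rather than reproducing the authors' results.

```python
# Hedged sketch: one-sample comparisons of a single-center cohort against a
# registry reference value (ventilator days vs the CDHSG mean of 17.3 days).
# The per-patient values are invented for illustration.
import numpy as np
from scipy import stats

kch_ventilator_days = np.array([12, 25, 40, 55, 8, 60, 33, 71, 20, 47,
                                90, 15, 38, 52, 29, 66, 10, 44, 58, 24])

# One-sample t-test against the registry mean
t_res = stats.ttest_1samp(kch_ventilator_days, popmean=17.3)

# One-sample Wilcoxon signed-rank test against the same reference value
w_res = stats.wilcoxon(kch_ventilator_days - 17.3)

print(f"t-test: t = {t_res.statistic:.2f}, p = {t_res.pvalue:.3f}")
print(f"Wilcoxon: W = {w_res.statistic:.1f}, p = {w_res.pvalue:.3f}")
```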
Procedia PDF Downloads 96