Search results for: ecological risk assessment
399 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleos(t)ide analogs (NAs), for example, which inhibit HBV replication. However, NAs have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long, which is not feasible for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently and are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune system.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This technique enables us to harmonize and standardize heterogeneous datasets within a well-defined data integration model, which will be materialized as a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which brings together a multidisciplinary team of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
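The idea of representing heterogeneous patient data as factual statements in a knowledge graph can be sketched in a few lines. The following is a minimal illustration only, not the HBsRE implementation: all identifiers, predicates, and values are hypothetical, and a real KG would use an RDF store and ontology terms rather than plain Python tuples.

```python
# Minimal sketch (hypothetical data): harmonizing patient identifiers from
# different sources, then storing facts as (subject, predicate, object) triples.

# The same patient appears under different identifiers in different sources.
identifier_map = {"lab:PT-07": "patient:7", "excel:row42": "patient:7"}

raw_records = [
    ("lab:PT-07", "hasViralMarker", "HBcrAg"),
    ("excel:row42", "hasHLAType", "HLA-A*02:01"),
    ("patient:7", "receivesTreatment", "nucleoside_analog"),
]

# Resolve identifiers to a canonical form, then keep facts as a set of triples.
kg = {(identifier_map.get(s, s), p, o) for s, p, o in raw_records}

def query(graph, subject=None, predicate=None):
    """Return all facts matching the given subject/predicate pattern."""
    return sorted(
        (s, p, o) for s, p, o in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
    )

# A unified view of the patient, regardless of the source format or identifier.
profile = query(kg, subject="patient:7")
print(profile)
```

Once the identifiers are transparent, analytic models can be run over the unified view instead of over each source format separately.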
Procedia PDF Downloads 108
398 Sugarcane Trash Biochar: Effect of the Temperature in the Porosity
Authors: Gabriela T. Nakashima, Elias R. D. Padilla, Joao L. Barros, Gabriela B. Belini, Hiroyuki Yamamoto, Fabio M. Yamaji
Abstract:
Biochar can be an alternative use for sugarcane trash. Biochar is a solid material obtained from pyrolysis, that is, thermal degradation of biomass at low or no O₂ concentration. Pyrolysis transforms the carbon commonly found in other organic structures into a more stable carbon that can resist microbial decomposition. Biochar has versatile uses, such as improving soil fertility, carbon sequestration, energy generation, ecological restoration, and soil remediation. Biochar has a great ability to retain water and nutrients in the soil, so this material can improve the efficiency of irrigation and fertilization. The aim of this study was to characterize biochar produced from sugarcane trash at three different pyrolysis temperatures and to determine the lowest temperature giving high yield and carbon content. Physical characterization of the biochar was performed to help identify the best production conditions. Sugarcane (Saccharum officinarum) trash was collected at Corredeira Farm, located in Ibaté, São Paulo State, Brazil. The farm has 800 hectares of planted area with an average yield of 87 t·ha⁻¹. The sugarcane varieties planted on the farm are: RB 855453, RB 867515, RB 855536, SP 803280, SP 813250. Sugarcane trash was dried and crushed into 50 mm pieces. Crucibles and lids were used to hold the sugarcane trash samples. A large amount of sugarcane trash was added to each crucible to minimize the O₂ concentration. Biochar production was performed at three different pyrolysis temperatures (200°C, 325°C, 450°C) with a 2-hour residence time in the muffle furnace. The gravimetric yield of biochar was obtained. Proximate analysis of biochar was done using ASTM E-872 and ABNT NBR 8112. Volatile matter and ash content were calculated by direct weight loss, and fixed carbon content was calculated by difference. Porosity was evaluated using an automatic gas adsorption device, Autosorb-1, with CO₂, as described by Nakatani.
Approximately 0.5 g of biochar at 2 mm particle size was used for each measurement. Vacuum outgassing was performed as a pre-treatment under different conditions for each biochar temperature. The pore size distribution of micropores was determined using the Horváth-Kawazoe method. The biochar presented a different color for each treatment. The 200°C biochar presented a larger number of pieces of 10 mm or more and did not develop the dark black color of the other treatments after the 2 h residence time in the muffle furnace. This treatment also had the highest volatile content and the lowest fixed carbon content. In the porosity analysis, as the treatment temperature increased, the number of pores also increased. The increase in temperature thus resulted in a higher-quality biochar. The pores in biochar can aid soil aeration, adsorption, and water retention. Acknowledgment: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil – PROAP-CAPES, PDSE and CAPES - Finance Code 001.
Keywords: proximate analysis, pyrolysis, soil amendment, sugarcane straw
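The proximate analysis arithmetic described above (volatile matter and ash from direct weight loss, fixed carbon by difference) can be illustrated as follows. The sample masses used here are hypothetical, purely for the sake of the example; they are not the study's measurements.

```python
# Illustrative calculation only (hypothetical masses): in a proximate analysis
# such as ASTM E-872 / ABNT NBR 8112, volatile matter and ash are obtained from
# direct weight loss, and fixed carbon is obtained by difference.

def proximate_analysis(m_sample, m_after_volatilization, m_ash):
    """Return (volatile matter %, ash %, fixed carbon %) on a dry basis."""
    volatile = 100.0 * (m_sample - m_after_volatilization) / m_sample
    ash = 100.0 * m_ash / m_sample
    fixed_carbon = 100.0 - volatile - ash  # by difference
    return volatile, ash, fixed_carbon

# Hypothetical biochar sample: 1.000 g dry mass, 0.350 g remaining after
# devolatilization, 0.080 g of ash residue after complete combustion.
vm, ash, fc = proximate_analysis(1.000, 0.350, 0.080)
print(f"VM={vm:.1f}%  ash={ash:.1f}%  fixed carbon={fc:.1f}%")
```

A lower volatile fraction and a higher fixed carbon fraction, as obtained at the higher pyrolysis temperatures, indicate a more stable char.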
Procedia PDF Downloads 214
397 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built at an accelerating pace since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be embodied in pavement management; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers near the road surface, such as the surface course and binder course, occurs in the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today have serious time-related deterioration deriving from the long time span since they entered service, the repair of layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning large-scale maintenance. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements.
One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of layers near the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by estimating a deterioration hazard model using continuous indexes. This model avoids the loss of information that occurs when setting rating categories in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By employing continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can depict the probability distribution of the indexes at an arbitrary point in time and establish an arbitrary risk control level, this study is expected to provide information such as life cycle cost to support decisions on where and when to carry out maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
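The basic logic of hazard-based deterioration prediction can be illustrated, in a highly simplified form, with a constant-rate (exponential) hazard. This is a schematic sketch only, not the authors' estimated model: the hazard rate value is hypothetical, and the actual study uses continuous condition indexes rather than a single rate.

```python
# Schematic sketch (not the authors' model): with a constant deterioration
# hazard rate lambda_, the probability that a pavement layer survives beyond
# time t without reaching a deteriorated state is S(t) = exp(-lambda_ * t).
import math

def survival(lambda_, t):
    """Exponential survival probability under a constant hazard rate."""
    return math.exp(-lambda_ * t)

def expected_life(lambda_):
    """Mean time to deterioration under the exponential hazard model."""
    return 1.0 / lambda_

lam = 0.05  # hypothetical hazard rate, per year
print(round(survival(lam, 10), 3))  # survival probability at 10 years
print(expected_life(lam))           # expected life in years
```

In the continuous-index formulation, the hazard would instead be tied to a continuously measured deflection index, avoiding the information loss of discrete Markov rating categories.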
Procedia PDF Downloads 390
396 Quasi-Federal Structure of India: Fault-Lines Exposed in COVID-19 Pandemic
Authors: Shatakshi Garg
Abstract:
As the world continues to grapple with the COVID-19 pandemic, India, one of the most populous democratic federal developing nations, continues to report among the highest active cases and deaths, and struggles to keep its health infrastructure from succumbing to the exponentially growing requirements for hospital beds, ventilators, and oxygen to save the thousands of lives at risk daily. In this context, the paper outlines the handling of the COVID-19 pandemic since it first hit India in January 2020 – the policy decisions taken by the Union and the State governments from the larger perspective of India's federal structure. The Constitution of India, adopted in 1950, enshrined the federal relations between the Union and the State governments by way of the constitutional division of revenue-raising and expenditure responsibilities. By way of the 73rd and 74th Amendments to the Constitution, powers and functions were devolved further to the third tier, namely the local governments, with the intention of further strengthening the federal structure of the country. However, with time, several constitutional amendments have shifted the scales in favour of the union government. The paper briefly traces some of these major amendments, as well as some policy decisions that made federal relations asymmetrical. Data on key fiscal parameters helps establish how the union government gained the upper hand at the expense of weak state governments, reducing the local governments to mere constitutional bodies without adequate funds and fiscal autonomy to carry out their assigned functions. This quasi-federal structure of India, with the union government amassing the majority of power in terms of ‘funds, functions and functionaries’, exposed the perils of weakening sub-national governments once the COVID-19 pandemic struck.
With a complex quasi-federal structure and a heterogeneous population of over 1.3 billion, the announcement of a sudden nationwide lockdown by the union government was followed by the plight of migrants struggling to reach home safely in the absence of adequate travel and safety-net arrangements made by the union government. With the limited autonomy they enjoyed, the states were mostly dictated to by the union government on most aspects of handling the pandemic, including protocols for lockdown, re-opening after lockdown, and the vaccination drive. The paper suggests that certain policy decisions, like demonetization and the introduction of GST, taken by the incumbent government since it first came to power in 2014, have further weakened the states and local governments and have amounted to catastrophic losses, both economic and human. The roles of the executive, legislature and judiciary are explored to establish how all three arms of government have worked simultaneously to further weaken and expose the fault-lines of the federal structure of India, leaving the nation incapacitated to handle this pandemic. The paper then argues the urgency of re-examining the federal structure of the country and undertaking measures that strengthen the sub-national governments and restore the federal spirit enshrined in the constitution, to avoid mammoth human and economic losses from a pandemic of this sort.
Keywords: COVID-19 pandemic, India, federal structure, economic losses
Procedia PDF Downloads 179
395 Refurbishment Methods to Enhance Energy Efficiency of Brick Veneer Residential Buildings in Victoria
Authors: Hamid Reza Tabatabaiefar, Bita Mansoury, Mohammad Javad Khadivi Zand
Abstract:
The current energy and climate change impacts of the residential building sector in Australia are significant. Thus, the Australian Government has introduced more stringent regulations to improve building energy efficiency. In 2006, the Australian residential building sector consumed about 11% (around 440 petajoules) of the total primary energy, resulting in total greenhouse gas emissions of 9.65 million tonnes CO₂-eq. Gas and electricity consumption of residential dwellings contributed 30% and 52%, respectively, of the total primary energy utilised by this sector. Around 40 percent of the total energy consumption of Australian buildings goes to heating and cooling due to the low thermal performance of the buildings. Thermal performance determines the amount of energy used for heating and cooling, which profoundly influences energy efficiency. Employing sustainable design principles and effective use of construction materials can play a crucial role in improving the thermal performance of new and existing buildings. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. One of the issues concerning the refurbishment of residential buildings lies in the consumer market, where most work consists of moderate refurbishment jobs, often without the assistance of an architect and partly without a building permit. This individual and often fragmented approach results in a lack of efficiency. Most importantly, the decisions taken in the early stages of the design determine the final result; however, the assessment of environmental performance only happens at the end of the design process, as a reflection of the design outcome. Finally, studies have identified the lack of knowledge, experience and best-practice examples as barriers in refurbishment projects.
In the context of sustainable development and the need to reduce energy demand, refurbishing the ageing residential building stock constitutes a necessary action. Not only does it provide huge potential for energy savings, but it is also economically and socially relevant. Although the advantages have been identified, existing guidelines come in the form of general suggestions that fail to address the diversity of each project. As a result, there is a strong need to develop guidelines for optimised retrofitting of existing residential buildings in order to improve their energy performance. The current study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of residential brick veneer buildings in Victoria (Australia). Proposing different remedial solutions for improving the energy performance of residential brick veneer buildings, annual energy usage analyses were carried out in the simulation stage to determine the heating and cooling energy consumption of the buildings for each proposed retrofitting technique. The results of the different retrofitting methods were then examined and compared in order to identify the most efficient and cost-effective remedial solution for improving the energy performance of those buildings with respect to the climate conditions in Victoria and the construction materials of the studied benchmark building.
Keywords: brick veneer residential buildings, building energy efficiency, climate change impacts, cost effective remedial solution, energy performance, sustainable design principles
Procedia PDF Downloads 292
394 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors
Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal
Abstract:
Introduction: Blood components are an underexplored area prone to numerous discoveries that influence patient care. Experiments at different levels will further change the present concepts of blood banking. Hyperlipidemia is a condition of elevated plasma levels of low-density lipoprotein (LDL) together with decreased plasma levels of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide. They are activated by many triggers, such as elevated LDL in the blood, resulting in aggregation and the formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia and correlating the condition with other donor criteria, such as a lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, sedentary lifestyle, and family history of heart disease, will help refine the exclusion criteria for donor selection. This will help make transfusion safer for patients and the donor deferral criteria more stringent, improving the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve the guidelines for blood safety. Thus, we studied the correlation of hyperlipidemic platelets with platelet parameters, weight, and specific donor history. Methodology: This case-control study included 100 blood donor samples; 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. Lipid profiles were measured on a fully automated analyzer (Cobas C311, Roche Diagnostics) using the TRIGL (triglycerides), LDL-C plus 2nd generation, CHOL2 (Cholesterol Gen.2), and HDL-C plus 3rd generation assays. Platelet parameters were analyzed on a Sysmex KX-21 automated hematology analyzer.
Results: A significant association was found with hyperlipidemic levels in single-time donors. Among the hyperlipidemic donors, 80% had a history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% were alcoholic, and 63.33% had taken a lipid-rich diet; active physical activity was reported by 40% of donors. We divided the donor samples into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were normal in 75% and abnormal in 25% of donors weighing >70 kg, while among donors weighing 50-70 kg, 90% were normal and 10% abnormal. In group 2 (non-hyperlipidemic samples), platelet parameters were normal in 95% and abnormal in 5% of donors weighing >70 kg, while among donors weighing 50-70 kg, 66.66% were normal and 33.33% abnormal. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be anticipated from donor history: weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. However, further studies on a larger sample size are needed to affirm this finding.
Keywords: blood donors, hyperlipidemia, platelet, weight
Procedia PDF Downloads 314
393 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users for economic, ecological and legislative reasons. Many machine-tool builders are seeking solutions that allow the reduction of the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions – objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function is based on the Kienzle cutting force model. The second uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, and the other uses Lasso regression to determine the same relation.
The goal, then, is to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. A Tornos DT13 Swiss-type lathe was used to carry out the experiments, and a mechanical part involving various Swiss-type machining operations was selected. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand, each considering a different set of machining process parameters. During machining, the power consumption of the spindle is measured, and all collected data are assigned to the corresponding CNC program and thus to its set of machining process parameters. The evaluation consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption predicted by each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The Material Removal Rate (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
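The evaluation step described above can be sketched as follows: for each candidate fitness function, compute the Pearson correlation between normalized measured power consumption and the function's normalized prediction. This is an illustrative sketch only; the power values and the two predictor entries ("mrr", "kienzle") are hypothetical stand-ins, not the project's experimental data.

```python
# Minimal sketch (hypothetical data) of ranking fitness functions by the
# Pearson correlation between normalized measured and predicted spindle power.
import math

def normalize(xs):
    """Min-max normalization to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

measured = [120.0, 150.0, 180.0, 240.0, 300.0]  # spindle power, W (hypothetical)
predicted = {                                    # one prediction per fitness fn
    "mrr":     [115.0, 160.0, 175.0, 230.0, 310.0],
    "kienzle": [140.0, 145.0, 200.0, 210.0, 280.0],
}

for name, pred in predicted.items():
    r = pearson(normalize(measured), normalize(pred))
    print(f"{name}: r = {r:.2f}")
```

The fitness function with the highest correlation coefficient is then the best candidate to drive the genetic algorithm's energy objective.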
Procedia PDF Downloads 147
392 Systemic Family Therapy in the Queensland Foster Care System: The Implementation of Integrative Practice as a Purposeful Intervention Implemented with Complex ‘Family’ Systems
Authors: Rachel Jones
Abstract:
Systemic family therapy in the Queensland foster care system is the implementation of integrative practice as a purposeful intervention with complex ‘family’ systems (expanding the traditional concept of family to include all relevant stakeholders for a child), and it is shown to improve the overall wellbeing of children with developmental delays and trauma in Queensland out-of-home care contexts. The importance of purposeful integrative practice in the field of systemic family therapy has been highlighted in achieving change in complex family systems. Essentially, it is the purposeful use of multiple interventions designed to meet the myriad of competing needs apparent for a child (with developmental delays resulting from early traumatic experiences, both in utero and in their early years) and their family. In the out-of-home care context, integrative practice is particularly useful for promoting positive change for the child and for what is an extended concept of who constitutes their family. Traditionally, a child’s family may have included biological and foster family members, but when this concept is extended to include all their relevant stakeholders (including biological family, foster carers, residential care workers, child safety, school representatives, health and allied health staff, police and youth justice staff), the use of integrative family therapy can produce positive change for the child in their overall wellbeing, development, risk profile, social and emotional functioning, mental health symptoms and relationships across domains. By tailoring therapeutic interventions that draw on systemic family therapies from the first- and second-order schools of family therapy, neurobiology, solution-focused, trauma-informed, play and art therapy, narrative interventions, and disability/behavioural interventions, clinicians can promote change by mixing therapeutic modalities with the individual and their stakeholders.
This presentation will unpack the implementation of systemic family therapy using this integrative approach to formulation and treatment for a child in out-of-home care in Queensland experiencing developmental delays resulting from trauma. It considers the need for intervention for the individual and in the context of environment and relationships. By reviewing a case example, this study aims to highlight the simultaneous and successful use of pharmacological interventions, psychoeducational programs for carers and school staff, parenting programs, cognitive-behavioural and trauma-informed interventions, traditional disability approaches, play therapy, mapping genograms and meaning-making, and family and dyadic sessions for the system associated with the foster child. These elements of integrative systemic family practice have seen success in the reduction of symptoms and improved overall wellbeing of foster children and their stakeholders. Accordingly, a model for best practice using this integrative systemic approach is presented for this population group, and preliminary findings for this approach over four years of local data have been reviewed.
Keywords: systemic family therapy, treating families of children with delays, trauma and attachment in family systems, improving practice and functioning of children and families
Procedia PDF Downloads 14
391 Pharmacokinetic Assessment of Antimicrobial Treatment of Acute Exacerbations of Chronic Obstructive Pulmonary Disease in Hospitalized Patients Colonized with Pseudomonas aeruginosa
Authors: Juliette Begin, Juliano Colapelle, Andrea Taratanu, Daniel Thirion, Amelie Marsot, Bryan A. Ross
Abstract:
Chronic obstructive pulmonary disease (COPD), a leading cause of death globally, is characterized by chronic airflow obstruction and acute exacerbations (AECOPDs) that are often triggered by respiratory infections. Pseudomonas aeruginosa (P. aeruginosa), a potentially serious bacterial cause of AECOPDs, is treated with targeted anti-pseudomonal antibiotics. These select few antimicrobials are often used as first-line therapy in patients who are clinically unwell and/or in those suspected of P. aeruginosa-related infection prior to confirmation, potentially contributing to antimicrobial resistance. The present study evaluates prescribing practices in patients with a confirmed sputum history of P. aeruginosa admitted for AECOPD at the McGill University Health Centre (MUHC) and treated with anti-pseudomonal antibiotics. Serum antibiotic concentrations were measured from same-day peak, trough, and mid-dose blood samples after reaching steady state (on or after day 3) and were quantified using ultra-high-performance liquid chromatography (UHPLC). Demographic, clinical, and treatment outcomes were extracted from patient medical charts. Treatment failure was defined by respiratory-related death or mechanical ventilation after ≥3 days of antibiotics; antibiotic therapy extended beyond 2 weeks or a new antibiotic regimen started; or urgent care readmission within 30 days for AECOPD. To date, 9 of 30 planned participants have completed testing: seven received ciprofloxacin, one received meropenem, and one received piperacillin-tazobactam. Due to serum sample batching requirements, serum ciprofloxacin concentration results for the first 2 of 8 participants are presented at the time of writing. The first participant had serum levels of 5.45 mg/L (T₀), 4.74 mg/L (T₅₀), and 4.49 mg/L (T₁₀₀), while the second had serum levels of 5 mg/L (T₀), 2.6 mg/L (T₅₀), and 2.51 mg/L (T₁₀₀).
Pharmacokinetic parameters Cmax (5.18 ± 0.43 mg/L), T₁/₂ (23.56 ± 18.94 hours), and AUC (181.9 ± 155.95 mg·h/L) were higher than reported monograph values and met the target AUC-to-MIC ratio of >125. The patients treated with meropenem and with piperacillin-tazobactam experienced treatment failure. Preliminary results suggest that standard ciprofloxacin dosing in patients experiencing an AECOPD and colonized with P. aeruginosa appears to achieve effective serum concentrations. Final cohort results will inform the pharmacokinetic appropriateness and clinical sufficiency of current AECOPD antimicrobial strategies in P. aeruginosa-colonized patients. This study will guide clinicians in determining appropriate dosing for AECOPD treatment to achieve therapeutic levels, optimizing outcomes and minimizing adverse effects. It could also highlight the value of routine antibiotic level monitoring in patients with treatment failure to ensure optimal serum concentrations.
Keywords: acute exacerbation, antimicrobial resistance, chronic obstructive pulmonary disease, pharmacokinetics/pharmacodynamics, Pseudomonas aeruginosa
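A simple way to see where an AUC-to-MIC figure comes from is to estimate the AUC from sparse serum samples with the trapezoidal rule. The sketch below uses the first participant's reported concentrations, but the sampling times, the 12-hour dosing interval, and the MIC value are hypothetical assumptions for illustration; this is not the study's actual pharmacokinetic calculation.

```python
# Illustrative sketch: trapezoidal-rule AUC from sparse serum samples, checked
# against the commonly cited fluoroquinolone efficacy target AUC/MIC > 125.
# Sampling times, dosing interval, and MIC below are hypothetical.

def trapezoidal_auc(times_h, concs_mg_per_l):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum(
        0.5 * (c0 + c1) * (t1 - t0)
        for (t0, c0), (t1, c1) in zip(
            zip(times_h, concs_mg_per_l),
            zip(times_h[1:], concs_mg_per_l[1:]),
        )
    )

# Participant 1's reported levels (5.45, 4.74, 4.49 mg/L), assigned here to
# assumed 0 h / 6 h / 12 h sampling times over one 12-h dosing interval.
times = [0.0, 6.0, 12.0]
concs = [5.45, 4.74, 4.49]

auc_12h = trapezoidal_auc(times, concs)
auc_24h = 2 * auc_12h   # assuming two identical dosing intervals per day
mic = 0.5               # hypothetical MIC, mg/L
print(f"AUC24 = {auc_24h:.1f} mg*h/L, AUC/MIC = {auc_24h / mic:.0f}")
```

With these assumed values, the ratio clears the >125 threshold, consistent with the direction of the preliminary findings above.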
Procedia PDF Downloads 14
390 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment
Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot
Abstract:
Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply, so much so that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials used is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists of exposing a sample of at least 300 × 300 mm² to an 1100°C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming and expensive, and gives access to limited information on the fire behavior of the materials (pass or fail). Consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150 × 150 mm²), which cuts costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect information that helps in understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical set-up, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests.
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal set-up has been used in particular to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness has been determined, as well as the performance after aging. Reductions in performance after aging were attributed to the migration of some of the coating additives. The vertical set-up has enabled investigation of the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity as well as to delamination, which is much more pronounced for the isotropic composite (observed in the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools to develop fire-safe composites.
Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography
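To put the certification test parameters above in perspective, the total heat delivered to a sample face over the 5 min hold follows directly from the calibrated flux; a back-of-the-envelope sketch (a uniform flux over the whole exposed face is assumed):

```python
def incident_energy_mj(heat_flux_kw_m2, side_mm, duration_s):
    """Total heat (MJ) delivered to a square sample face exposed to a
    constant calibrated flux, as in an ISO 2685-type fire test."""
    area_m2 = (side_mm / 1000.0) ** 2
    return heat_flux_kw_m2 * area_m2 * duration_s / 1000.0

# 116 kW/m2 over the 5 min (300 s) hold time of the standard test:
print(round(incident_energy_mj(116, 300, 300), 2))  # 300x300 mm2 sample: 3.13 MJ
print(round(incident_energy_mj(116, 150, 300), 2))  # 150x150 mm2 sample: 0.78 MJ
```

Halving the sample side from 300 mm to 150 mm quarters the exposed area and hence the heat load, which is part of what makes the small-scale benches cheaper to run.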
Procedia PDF Downloads 271
389 Harnessing Renewable Energy as a Strategy to Combat Climate Change in Sub-Saharan Africa
Authors: Gideon Nyuimbe Gasu
Abstract:
Sub-Saharan Africa is at a critical point, experiencing rapid population growth, particularly in urban areas, and a young, growing workforce. At the same time, the growing risk of catastrophic global climate change threatens to weaken food production systems, increase the intensity and frequency of droughts, floods, and fires, and undermine gains in development and poverty reduction. Although the region has the lowest per capita greenhouse gas emission level in the world, it will need to join global efforts to address climate change, including action to avoid significant emission increases and to encourage a green economy. Thus, there is a need for the concept of 'greening the economy' as prescribed at the Rio Summit of 1992. Renewable energy is one of the criteria for achieving this laudable goal of maintaining a green economy. There is a need to address climate change while facilitating continued economic growth and social progress, as energy today is critical to economic growth. Fossil fuels remain the major contributor to greenhouse gas emissions. Thus, cleaner technologies such as carbon capture and storage and renewable energy have emerged as commercially competitive. This paper sets out to examine how to achieve a low carbon economy with minimal emission of carbon dioxide and other greenhouse gases, which is one of the outcomes of implementing a green economy. Also, the paper examines the different renewable energy sources such as nuclear, wind, hydro, biofuel, and solar photovoltaic as a panacea to the looming climate change menace. Finally, the paper assesses different renewable energy and energy efficiency measures as a propeller for generating new sources of income and jobs, which in turn reduces carbon emissions. The research engages qualitative, evaluative, and comparative methods and employs both primary and secondary sources of information. 
The primary sources of information are drawn from the Sub-Saharan African region and from global environmental organizations, energy legislation, policies and related industries, and judicial processes. The secondary sources comprise books, journal articles, commentaries, discussions, observations, explanations, expositions, suggestions, prescriptions, and other material sourced from the internet on renewable energy as a panacea to climate change. All information obtained from these sources is subject to content analysis. The results will show that the entire planet is warming as a result of the activities of humankind, which is clear evidence that current development is fundamentally unsustainable. Equally, the study will reveal that a low carbon development pathway should be embraced in the Sub-Saharan African region to minimize emissions of greenhouse gases, such as by using renewable energy rather than coal, oil, and gas. The study concludes that until adequate strategies are devised for the use of renewable energy, the region will continue to add to and worsen the current climate change menace and other adverse environmental conditions.
Keywords: carbon dioxide, climate change, legislation/law, renewable energy
Procedia PDF Downloads 227
388 Influence of Protein Malnutrition and Different Stressful Conditions on Aluminum-Induced Neurotoxicity in Rats: Focus on the Possible Protection Using Epigallocatechin-3-Gallate
Authors: Azza A. Ali, Asmaa Abdelaty, Mona G. Khalil, Mona M. Kamal, Karema Abu-Elfotuh
Abstract:
Background: Aluminium (Al) is a neurotoxic environmental pollutant that can cause diseases such as dementia, Alzheimer's disease, and Parkinsonism. It is widely used in antacid drugs as well as in food additives and toothpaste. Stress has been linked to cognitive impairment; social isolation (SI) may exacerbate memory deficits, while protein malnutrition (PM) increases oxidative damage in the cortex, hippocampus, and cerebellum. The risk of cognitive decline may be lowered by maintaining social connections. Epigallocatechin-3-gallate (EGCG) is the most abundant catechin in green tea and has antioxidant, anti-inflammatory, and anti-atherogenic effects as well as health-promoting effects in the CNS. Objective: To study the influence of different stressful conditions, such as social isolation and electric shock (EC), and of inadequate nutritional conditions, such as PM, on neurotoxicity induced by Al in rats, as well as to investigate the possible protective effect of EGCG under these stressful and PM conditions. Methods: Rats were divided into two major groups: a protected group treated daily with EGCG (10 mg/kg, IP) during the three weeks of the experiment, and a non-treated group. The protected and non-protected groups each included five subgroups as follows: one normal control receiving saline and four Al toxicity groups injected daily for three weeks with AlCl3 (70 mg/kg, IP). Of the latter, one served as the Al toxicity model, two groups were subjected to different stresses, either isolation as a mild stressful condition (SI-associated Al toxicity model) or electric shock as a high stressful condition (EC-associated Al toxicity model), and the last group was maintained on a 10% casein diet (PM-associated Al toxicity model). Isolated rats were housed individually in cages covered with black plastic. Biochemical changes in the brain, including acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β), and oxidative parameters (MDA, SOD, TAC), were estimated for all groups. 
Histopathological changes in different brain regions were also evaluated. Results: Rats exposed to Al for three weeks showed brain neurotoxicity and neuronal degeneration. Both mild (SI) and high (EC) stressful conditions, as well as inadequate nutrition (PM), enhanced Al-induced neurotoxicity and brain neuronal degeneration; the enhancement induced by stress, especially in its higher condition (EC), was more pronounced than that of the inadequate nutritional condition (PM), as indicated by the significant increase in Aβ, ACHE, MDA, TNF-α, and IL-1β, together with the significant decrease in SOD, TAC, and BDNF. On the other hand, EGCG showed more pronounced protection against the hazards of Al in both stressful conditions (SI and EC) than in PM. The protective effects of EGCG were indicated by the significant decrease in Aβ, ACHE, MDA, TNF-α, and IL-1β, together with the increase in SOD, TAC, and BDNF, and were confirmed by brain histopathological examinations. Conclusion: Neurotoxicity and brain neuronal degeneration induced by Al were more severe with stress than with PM. EGCG can protect against Al-induced brain neuronal degeneration in all conditions. Consequently, administration of EGCG together with socialization as well as adequate protein nutrition is advised, especially on excessive Al exposure, to avoid the severity of its neuronal toxicity.
Keywords: environmental pollution, aluminum, social isolation, protein malnutrition, neuronal degeneration, epigallocatechin-3-gallate, rats
Procedia PDF Downloads 391
387 Polar Bears in Antarctica: An Analysis of Treaty Barriers
Authors: Madison Hall
Abstract:
The Assisted Colonization of polar bears to Antarctica requires a careful analysis of treaties to understand existing legal barriers to Ursus maritimus transport and movement. An absence of land-based migration routes prevents polar bears from accessing southern polar regions on their own. This lack of access is compounded by current treaties, which limit human intervention and assistance to cross these physical and legal barriers. In a time of massive planetary extinctions, Assisted Colonization posits that certain endangered species may be prime candidates for relocation to hospitable environments to which they have never previously had access. By analyzing existing treaties, this paper will examine how polar bears are limited in movement by humankind’s legal barriers. International treaties may be considered codified reflections of anthropocentric values and of the best knowledge and understanding of an identified problem at a set point in time, as understood through the human lens. Even as human social values and scientific insights evolve, so too must the treaties which specify legal frameworks and structures impacting keystone species and related biomes. Due to costs and myriad other difficulties, only a very select number of species will be given this opportunity. While some species move into new regions and are then deemed invasive, Assisted Colonization considers that some assistance may be mandated due to the nature of humankind’s role in climate change. This moral question and ethical imperative, against the backdrop of escalating climate impacts, drives the question forward: what is the potential for successfully relocating a select handful of charismatic and ecologically important life forms? Is it possible to reimagine a different, but balanced, Antarctic ecosystem? Listed as a threatened species under the U.S. 
Endangered Species Act as a result of the ongoing loss of critical habitat to melting sea ice, polar bears have limited options for long-term survival in the wild. Our current regime for safeguarding animals facing extinction frequently utilizes zoos and their breeding programs to keep alive the genetic diversity of a species until some future time when reintroduction, somewhere, may be attempted. By exploring the potential for polar bears to be relocated to Antarctica, we must analyze the complex ethical, legal, political, financial, and biological realms which form the backdrop to framing all questions in this arena. Can we do it? Should we do it? Utilizing an environmental ethics perspective, we propose that the ecological commons of the Arctic and Antarctic should not be viewed solely through the lens of human resource management needs. From this perspective, polar bears do not need our permission; they need our assistance. Antarctica therefore represents a second, if imperfect, chance to buy time for polar bears in a world where polar regimes, not yet fully understood, are themselves quickly changing as a result of climate change.
Keywords: polar bear, climate change, environmental ethics, Arctic, Antarctica, assisted colonization, treaty
Procedia PDF Downloads 421
386 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume
Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri
Abstract:
Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring the soil's agricultural functions. However, ultramafic soils are characterized by low fertility levels, and this can limit the yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thereby lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes with another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of the co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with incorporation of the legume biomass into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale grown in the co-cropping system (biomass, C and N contents, and taken-up Ni) showed the highest values, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation was better than mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had larger soil particle sizes and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater root surface area spatial distribution. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values. 
Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics, and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the yields of dry biomass of the hyperaccumulating plant used and, consequently, the yields of Ni. Our strategy can decrease the need to apply fertilizers and thus minimizes the risk of nitrogen leaching and groundwater pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison to the rotation treatment and fertilized monoculture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stable aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni agromining systems could be a good strategy to reduce chemical inputs and to restore soil agricultural functions. Improving the agromining system by the replacement of inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions, leading to the recovery of ecosystem services.
Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties
Procedia PDF Downloads 166
385 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa: Fluid-Induced Seismicity
Authors: Artur Cichowicz
Abstract:
The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which had previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been ongoing for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail. Special attention was given to seismic source parameters such as magnitude, scalar seismic moment, and static stress drop. Most events are located within historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area appears not to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^13 Nm/year. Constant seismic deformation was not observed over the last 5 years; the deviation from the average is of the order of 6x10^13 Nm/year, which is significant. The variation of the cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm. This is equivalent to one earthquake of magnitude 3.9, which is significantly less than what was experienced during the mining operation. 
Characterization of seismicity triggered by a rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drop in this study varied from 0.05 MPa to 10 MPa. It was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction. Changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has a shorter inter-event time in comparison to a random distribution. This behaviour corresponds to clustering, in which events with short recurrence times tend to occur close together.
Keywords: inter-event time, fluid induced seismicity, mine closure, spectral parameters of seismic source
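The equivalence quoted above (a cumulative moment of 9.0x10^14 Nm corresponding to a single magnitude-3.9 earthquake) follows from the standard Hanks-Kanamori moment-magnitude relation; a minimal sketch (the relation itself is assumed here, as it is not stated in the abstract):

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude Mw from scalar seismic moment M0 in N*m,
    via the Hanks-Kanamori relation Mw = (2/3)*log10(M0) - 6.07."""
    return (2.0 / 3.0) * math.log10(m0_nm) - 6.07

# Cumulative moment released in the Central Rand Basin over five years:
print(round(moment_magnitude(9.0e14), 1))  # 3.9
```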
Procedia PDF Downloads 285
384 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions
Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude
Abstract:
Introduction: A biofilm is a functional community of microorganisms associated with a surface or an interface. These adherent cells become embedded within an extracellular matrix composed of polymeric substances; i.e., biofilms are biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms as a form of natural cell immobilization have found several applications in biotechnology, such as the treatment of wastewater, bioremediation and biodegradation, desulfurization of gas, and conversion of agro-derived materials into alcohols and organic acids. The means of enhancing immobilized cells have been chemically inductive, and this affects the medium composition and final product. Physical factors, including electrical, magnetic, and electromagnetic flux, have shown potential for enhancing biofilms, depending on the bacterial species, the nature and intensity of the emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilization by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection strains (Acetobacter aceti ATCC15973, Pseudomonas aeruginosa ATCC9027, Serratia marcescens ATCC14756, Gluconobacter oxydans ATCC19357, Rhodobacter sphaeroides ATCC17023, and Bacillus subtilis ATCC6633) were used. Standard culture techniques for bacterial cells were adopted. The natural autoimmobilization potentials of the test bacteria were assessed by simple biofilm ring formation in tubes, while crystal violet binding assay techniques were adopted to characterize biofilm quantity. 
Electroinduction of bacterial cells by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux was carried out, and autoimmobilization of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinylchloride (PVC), and polyethylene terephthalate. The Biot-Savart law was used to quantify magnetic field intensity, and statistical analyses of the data obtained were carried out using analysis of variance (ANOVA) as well as other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, with magnetic induction producing the least effect across all substrata used. Microbial cell-cell communication could be a possible means by which physical signals affected the cells in a polarizable manner. Conclusion: The enhancement of biofilm formation by bacteria using physical factors has shown that their inherent capability as a cell immobilization method can be further optimized for industrial applications. A possible relationship between the presence of voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata
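The Biot-Savart law mentioned above relates a current distribution to the magnetic flux density it produces. As an illustration only (the actual coil geometry, currents, and exposure intensities used in the study are not given in the abstract), its simplest closed-form case, a long straight wire, can be sketched as:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def b_field_straight_wire(current_a, distance_m):
    """Magnetic flux density B (tesla) at perpendicular distance r from a
    long straight conductor: B = mu0 * I / (2 * pi * r), the closed-form
    result of integrating the Biot-Savart law along the wire."""
    return MU_0 * current_a / (2 * math.pi * distance_m)

# e.g. a 1 A direct current measured 1 cm from the conductor:
print(b_field_straight_wire(1.0, 0.01))  # ~2e-05 T, i.e. about 20 microtesla
```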
Procedia PDF Downloads 189
383 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that on the language tests, Participant 1 (moderate aphasia) produced results aligned with the existing literature, with L-SAT proving more effective than Play Attention. However, Participants 2 (very severe aphasia) and 3 and 4 (mild aphasia) did not conform to this pattern; for them, both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participants performed well initially, leaving limited room for improvement. On the attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, which measures switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding the participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated to each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, participants' performance appeared to be influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with the existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
Keywords: attention, language, cognitive rehabilitation, neurofeedback
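The theta-to-beta ratio that drives the Play Attention feedback loop described above can be estimated from raw EEG samples by comparing spectral power in the two bands; a minimal stdlib-only sketch (the 4-8 Hz theta and 13-30 Hz beta band edges are conventional values, not taken from the study, and real systems use far more robust spectral estimators):

```python
import cmath
import math

def band_power(samples, fs, f_lo, f_hi):
    """Crude periodogram power in [f_lo, f_hi) Hz via a direct DFT."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n < f_hi:
            coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
    return power

def theta_beta_ratio(samples, fs):
    """Theta (4-8 Hz) over beta (13-30 Hz) band power -- the quantity the
    neurofeedback loop trains participants to drive downward."""
    return band_power(samples, fs, 4.0, 8.0) / band_power(samples, fs, 13.0, 30.0)

# A window dominated by 6 Hz (theta) activity yields a high ratio; suppressing
# theta while boosting 20 Hz (beta) activity pushes the ratio below 1.
fs = 128
t = [i / fs for i in range(2 * fs)]
drowsy = [math.sin(2 * math.pi * 6 * x) + 0.2 * math.sin(2 * math.pi * 20 * x) for x in t]
focused = [0.2 * math.sin(2 * math.pi * 6 * x) + math.sin(2 * math.pi * 20 * x) for x in t]
print(theta_beta_ratio(drowsy, fs) > 1.0)   # True
print(theta_beta_ratio(focused, fs) < 1.0)  # True
```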
Procedia PDF Downloads 17
382 Integration of an Evidence-Based Medicine Curriculum into Physician Assistant Education: Teaching for Today and the Future
Authors: Martina I. Reinhold, Theresa Bacon-Baguley
Abstract:
Background: Medical knowledge continuously evolves, and evidence-based medicine (EBM) has emerged as a model to help health care providers stay up to date. The practice of EBM requires new skills of the health care provider, including directed literature searches, the critical evaluation of research studies, and the direct application of the findings to patient care. This paper describes the integration and evaluation of an evidence-based medicine course sequence in a Physician Assistant curriculum. This course sequence teaches students to manage and use the best clinical research evidence to competently practice medicine. A survey was developed to assess the outcomes of the EBM course sequence. Methodology: The cornerstone of the three-semester EBM sequence is interactive small group discussions designed to introduce students to the most clinically applicable skills for identifying, managing, and using the best clinical research evidence to improve the health of their patients. Each semester, students are assigned to small group discussions facilitated by faculty with varying backgrounds and expertise. Prior to the start of the first EBM course in the winter semester, PA students complete a knowledge-based survey developed by the authors to assess the effectiveness of the course series. The survey consists of 53 Likert scale questions that address the nine objectives of the course series. At the end of the three-semester course series, the same survey is given to all students in the program, and the results from before and after the sequence of EBM courses are compared. Specific attention is paid to the overall performance of students on the nine course objectives. 
Results: We find that students from the Classes of 2016 and 2017 consistently improved (as measured by the percentage of correct responses on the survey tool) after the EBM course series (Class of 2016: pre 62%, post 75%; Class of 2017: pre 61%, post 70%). The biggest increase in knowledge was observed in the areas of finding and evaluating the evidence, namely asking concise clinical questions (Class of 2016: pre 61%, post 81%; Class of 2017: pre 61%, post 75%) and searching the medical database (Class of 2016: pre 24%, post 65%; Class of 2017: pre 35%, post 66%). Questions requiring students to analyze, evaluate, and report on the available clinical evidence regarding diagnosis showed improvement, but to a lesser extent (Class of 2016: pre 56%, post 77%; Class of 2017: pre 56%, post 61%). Conclusions: The outcomes identified that students did gain skills that will allow them to apply EBM principles. In addition, the outcomes of the knowledge-based survey allowed the faculty to focus on areas needing improvement, specifically the translation of best evidence into patient care. To address this area, the clinical faculty developed case scenarios that were incorporated into the lecture and discussion sessions, allowing students to better connect the research studies with patient care. Students commented that ‘class discussion and case examples’ contributed most to their learning and that ‘it was helpful to learn how to develop research questions and how to analyze studies and their significance to a potential client’. As evident from the outcomes, the EBM courses achieved their goals and were well received by the students.
Keywords: evidence-based medicine, clinical education, assessment tool, physician assistant
Procedia PDF Downloads 125
381 Sustainable Strategies for Managing Rural Tourism in Abyaneh Village, Isfahan
Authors: Hoda Manafian, Stephen Holland
Abstract:
Problem statement: Rural areas in Iran are among the most popular tourism destinations. Abyaneh Village is one of them, with a long history behind it (more than 1500 years); it is a national heritage site and has also been nominated as a world heritage site on the UNESCO tentative list since 2007. There is a considerable foundation of religious-cultural heritage and also of agricultural history and activities. However, this heritage site suffers from mass tourism beyond its social and physical carrying capacity, since the annual number of tourists exceeds 500,000, while there are four adjacent villages around Abyaneh which could benefit from the advantages of tourism. Local managers could at the same time redistribute Abyaneh’s tourist flux across those other villages, especially in the high season. The other villages have some cultural and natural tourism attractions as well. Goal: The main goal of this study is to identify a feasible development strategy according to the current strengths, weaknesses, opportunities, and threats of rural tourism in this area (Abyaneh Village and the four adjacent villages). This development strategy can lead to sustainable management of these destinations. Method: To this end, we used SWOT analysis as a well-established tool for conducting a situational analysis to define a sustainable development strategy. The procedure included the following steps: 1) Extracting the variables of the SWOT chart based on interviews with tourism experts (n=13) and local elites (n=17) and the personal observations of the researcher. 2) Ranking the extracted variables from 1-5 by 13 tourism experts in the Isfahan Cultural Heritage, Handcrafts and Tourism Organization (ICHTO). 3) Assigning weights to the ranked variables using Expert Choice software and the Analytic Hierarchy Process (AHP). 4) Defining the Total Weighted Score (TWS) for each part of the SWOT chart. 
5) Identifying the strategic position according to the TWS. 6) Selecting the best development strategy for the identified position using the Strategic Position and Action Evaluation (SPACE) matrix. 7) Assessing the Probability of Strategic Success (PSS) for the preferred strategy using the relevant formulas. 8) Defining two feasible alternatives for sustainable development. Results and recommendations: Cultural heritage attractions were the first-ranked variable in the strengths chart, while the lack of sufficient amenities for one-day tourists (catering, restrooms, parking, and accommodation) was the first-ranked weakness. The strategic position was in the ST (Strength-Threat) quadrant, which is a maxi-mini position. According to this position, we suggest a ‘Competitive Strategy’ as the development strategy, which means relying on strengths in order to neutralize threats. The result of the Probability of Strategic Success assessment, 0.6, shows that this strategy could be successful. The preferred approach for the competitive strategy could be rebranding the tourism market in this area. Rebranding the market can be achieved through two main alternatives, based on the current strengths and threats: 1) Defining a ‘Heritage Corridor’ from the first adjacent village to Abyaneh as the final destination. 2) Focusing on ‘educational tourism’ rather than mass tourism, and also on green tourism, by developing agritourism in that corridor.
Keywords: Abyaneh village, rural tourism, SWOT analysis, sustainable strategies
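The TWS computation in step 4 reduces each SWOT quadrant to a single number: the sum over the quadrant's variables of the AHP-derived weight multiplied by the 1-5 expert rating. A small sketch with invented weights and ratings (the study's actual values are not reported in the abstract) shows how a high strengths score alongside a high threats score maps to the ST, maxi-mini quadrant:

```python
def total_weighted_score(factors):
    """Total Weighted Score for one SWOT quadrant: sum over its variables
    of (AHP weight) * (expert rating on a 1-5 scale)."""
    return sum(weight * rating for weight, rating in factors)

# Hypothetical (weight, rating) pairs -- illustrative only, not study data:
strengths = [(0.40, 5), (0.35, 4), (0.25, 3)]  # e.g. cultural heritage rated 5
threats = [(0.50, 4), (0.30, 4), (0.20, 3)]    # e.g. mass-tourism pressure

print(round(total_weighted_score(strengths), 2))  # 4.15
print(round(total_weighted_score(threats), 2))    # 3.8
```

With internal strengths dominating while external threats remain high, the position lands in the maxi-mini quadrant, which is what motivates a competitive (leverage strengths to neutralize threats) strategy of the kind selected above.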
Procedia PDF Downloads 384
380 Combined Civilian and Military Disaster Response: A Critical Analysis of the 2010 Haiti Earthquake Relief Effort
Authors: Matthew Arnaouti, Michael Baird, Gabrielle Cahill, Tamara Worlton, Michelle Joseph
Abstract:
Introduction: Over ten years after the 7.0-magnitude earthquake struck the capital of Haiti, impacting over three million people and leading to the deaths of over two hundred thousand, the multinational humanitarian response remains the largest disaster relief effort to date. This study critically evaluates the multi-sector and multinational disaster response to the earthquake, looking at how the lessons learned from this analysis can be applied to future disaster response efforts. We put particular emphasis on assessing the interaction between civilian and military sectors during this humanitarian relief effort, with the hope of highlighting how concrete guidelines are essential to improving future responses. Methods: An extensive scoping review of the relevant literature was conducted, in which library scientists carried out reproducible, verified systematic searches of multiple databases. Grey literature and hand searches were utilised to identify additional unclassified military documents for inclusion in the study. More than 100 documents were included for data extraction and analysis. Key domains were identified; these included: Humanitarian and Military Response, Communication, Coordination, Resources, Needs Assessment and Pre-Existing Policy. Corresponding information and lessons learned pertaining to these domains were then extracted, detailing the barriers and facilitators to an effective response. Results: Multiple themes were noted which stratified all identified domains, including the lack of adequate pre-existing policy and extensive ambiguity regarding actors' roles. This ambiguity was continually influenced by the complex role the United States military played in the disaster response. At a deeper level, the effects of neo-colonialism and concern about infringements on Haitian sovereignty played a substantial role at all levels: setting the pre-existing conditions and determining the redevelopment efforts that followed.
Furthermore, external factors significantly impacted the response, particularly the loss of life within the political and security sectors. This was compounded by the destruction of important infrastructure systems, particularly electricity supplies and telecommunication networks, as well as air- and seaport capabilities. Conclusions: This study stands as one of the first and most comprehensive evaluations to systematically analyse the civilian and military response, including their collaborative efforts. It offers vital information for improving future combined responses and provides a significant opportunity for advancing knowledge in disaster relief, which remains a more pressing issue than ever. The categories and domains formulated serve to highlight interdependent factors that should be considered in future disaster responses, with significant potential to aid the effective performance of humanitarian actors. Further studies will be grounded in these findings, particularly the need for greater inclusion of the Haitian perspective in the literature through additional qualitative research. Keywords: civilian and military collaboration, combined response, disaster, disaster response, earthquake, Haiti, humanitarian response
Procedia PDF Downloads 127
379 Assessment of Antioxidant and Cholinergic Systems, and Liver Histopathologies in Lithobates catesbeianus Exposed to the Waters of an Urban Stream
Authors: Diego R. Boiarski, Camila M. Toigo, Thais M. Sobjak, Andrey F. P. Santos, Silvia Romao, Ana T. B. Guimaraes
Abstract:
Anthropogenic activities promote changes in community structure and decrease the species abundance of amphibians. Biological communities of fluvial systems are assemblies of organisms that have adapted to regional conditions, including the physical environment and food resources, and are further refined through interactions with other species. The aim of this study was to assess neurotoxic alterations and alterations in the antioxidant system in tadpoles of Lithobates catesbeianus exposed to waters from the Cascavel River in the south of Brazil. A total of 420 L of water was collected from the Cascavel River, 140 L from each of three different locations: Site 1, the headwater; Site 2, a stretch of the stream that runs through an urbanized area; and Site 3, a stretch in the rural area. Twelve tadpoles were acclimated in each aquarium (100 L of water) for seven days. The water in each aquarium was then replaced with water sampled from the river, except in the control aquarium. After seven days, a portion of the liver was removed and conditioned for ChE, SOD, CAT and LPO analyses; another portion of the tissue was conditioned for histological analysis. The statistical analyses performed were one-way ANOVA, followed by the post-hoc Tukey HSD test, and multivariate principal component analysis. No neurotoxic effect was observed, but there was a slight increase in SOD activity and an elevation of CAT activity in both the urban and rural exposure groups. A decrease in the LPO reaction was detected, mainly among the tadpoles exposed to the waters from the rural area. The results of the present study demonstrate alteration of the antioxidant system, as well as liver histopathologies, in tadpoles exposed to waters collected mainly in urban and rural environments. These alterations may slow the metamorphosis process of the tadpoles. Further, histological alterations were observed, highlighting necrotic areas, mainly among the animals exposed to urban waters.
Those damages can lead to metabolic dysfunction, interfering with survival capacity and diminishing not only individual fitness but that of the whole population. In the synthesis of all biomarkers, a cellular damage gradient is perceptible, characterized by the variables related to the antioxidant system, following the flow direction of the stream. This result indicates that organic material is dumped along the course of the creek, which promoted an acute response in tadpoles of L. catesbeianus. A difference in tissue damage between the experimental groups and the control group was also observed, the latter presenting histological alterations, but to a lesser degree than in the animals exposed to the waters of the Cascavel River. These damages, caused by reactive oxygen species possibly resulting from contamination by organic compounds, can lead the animals to a series of metabolic dysfunctions, interfering with their capacity to complete metamorphosis. Interruption of metamorphosis may affect survival, growth, development and reproduction, diminishing not only the fitness of each individual but, in the long term, that of the entire population. Keywords: American bullfrog, histopathology, oxidative stress, urban creeks pollution
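The statistical workflow named above (one-way ANOVA across exposure groups, followed by a post-hoc test) can be sketched as follows; this is an illustrative reimplementation with invented biomarker values, not the study's data:

```python
# Minimal one-way ANOVA sketch: compare a biomarker (e.g. CAT activity,
# arbitrary units) across the control and the three river sites.
def one_way_anova_F(groups):
    """Return (F statistic, between-group df, within-group df)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

control = [10.1, 9.8, 10.3, 10.0]   # invented values
site1   = [10.2, 10.0, 9.9, 10.1]   # headwater
site2   = [13.5, 14.1, 13.8, 14.0]  # urban stretch
site3   = [12.9, 13.2, 13.0, 13.4]  # rural stretch
F, df_b, df_w = one_way_anova_F([control, site1, site2, site3])
print(round(F, 1), df_b, df_w)
```

A large F statistic flags a between-group difference, which would then justify the pairwise Tukey HSD comparisons between sites.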
Procedia PDF Downloads 187
378 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions, where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions in which the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, the leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices; the model is then applied to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets - in our paper, a pool of international stock indices - and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, the leverage effect, skewness, kurtosis and regime dependence. Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk premium
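The factor-extraction step described above can be sketched in a few lines. The following is a minimal illustration on synthetic return data (the actual pool of world indices is not reproduced here): it projects returns onto the leading eigenvectors of the cross-asset covariance matrix and checks that the resulting factors are uncorrelated.

```python
import numpy as np

# Synthetic stand-in for the matrix of world-index returns:
# 500 trading days x 8 indices (invented data, for illustration only).
rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 8)) * 0.01

cov = np.cov(returns, rowvar=False)       # historical cross-asset covariance
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # sort by relative importance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

factors = returns @ eigvecs[:, :3]        # the three principal factors
# The factors are sample-uncorrelated: their covariance matrix is diagonal
# (off-diagonal entries ~ 0 up to floating-point error).
print(np.round(np.cov(factors, rowvar=False), 6))
```

In the paper's setting, these three factors would then enter the GARCHX variance equation as exogenous regressors.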
Procedia PDF Downloads 297
377 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker
Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya
Abstract:
The number of people who receive in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identifying euploid embryos for single embryo transfer (SET). The reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise to address this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos, which provide important insights and reaffirm the promise of such behaviors for developing non-invasive methodologies. Methodology—A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment at Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies.
The morphokinetic variables of each embryo were measured by the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the times to: pronuclei fading (tPNf); division to 2, 3, …, 9 cells (t2, t3, …, t9); start of embryo compaction (tSC); morula formation (tM); start of blastocyst formation (tSB); blastocyst formation (tB); and blastocyst expansion (tEB); as well as the intervals between them (e.g., c23 = t3 − t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major Findings—The average age of the patients was 35.04 years. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 − t4 (P = 0.0298). Our results indicate that this interval on average lasts significantly longer for aneuploid embryos: c45 (aneuploid) = 11.93 h vs. c45 (euploid) = 7.97 h. In a separate analysis limited to embryos from patients with both ploidy outcomes (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same results (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable differed between the two groups. Conclusion—Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and to incorporate embryo quality in future studies. Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic
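The study itself used a mixed regression to handle the correlation between multiple embryos per patient; as a simplified stand-in that captures the same within-patient logic, one can average c45 = t5 − t4 per patient and ploidy and examine the paired differences. A sketch with invented timings (hours post-insemination):

```python
# Simplified (non-mixed-model) illustration of the within-patient c45
# comparison. All records below are invented: (patient_id, ploidy, t4, t5).
from statistics import mean

embryos = [
    ("P1", "euploid", 36.0, 44.1), ("P1", "aneuploid", 36.5, 49.0),
    ("P2", "euploid", 35.2, 43.0), ("P2", "aneuploid", 35.8, 47.9),
    ("P3", "euploid", 37.1, 45.0), ("P3", "euploid", 36.4, 44.6),
    ("P3", "aneuploid", 36.9, 48.8),
]

# Group c45 intervals by patient and ploidy.
per_patient = {}
for pid, ploidy, t4, t5 in embryos:
    per_patient.setdefault(pid, {}).setdefault(ploidy, []).append(t5 - t4)

# Paired difference per patient who has embryos of both ploidies.
diffs = [mean(g["aneuploid"]) - mean(g["euploid"])
         for g in per_patient.values()
         if "euploid" in g and "aneuploid" in g]
print(round(mean(diffs), 2))  # positive -> aneuploid embryos take longer
```

A proper analysis would fit a mixed model with patient as a random effect, as the abstract describes; this sketch only shows why per-patient pairing matters.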
Procedia PDF Downloads 88
376 Elements of Creativity and Innovation
Authors: Fadwa Al Bawardi
Abstract:
In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development and innovation sector in the Kingdom. In order to discuss the dimensions of this important step, let us first try to answer the following questions. Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they innate mental factors, or factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, male and female, between the ages of 18 and 60. The answers are as follows. "Creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the smartphone began as just a creative idea at IBM in 1994, but the actual successful innovation in the manufacture, development and marketing of such phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is the presence of a challenge or an obstacle that the individual faces and seeks to overcome through thinking, even if that thinking requires a long time.
The second factor is the individual's surrounding environment, which includes science, training, accumulated experience, the ability to use techniques, and the ability to assess whether an idea is feasible. To achieve this factor, individuals must be aware of their own skills, strengths, hobbies, and the areas in which they can be creative; they must also be self-confident and courageous enough to suggest new ideas. The third factor is experience and the ability to accept risk and initial failure, and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual reach creative and innovative ideas, such as the Mind Map tool, through which available information is drawn by writing a short word for each piece of information and connecting related information with clear lines, which helps in logical thinking and correct vision. There is also a tool called "Flow Charts": graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. There are also other useful tools, such as the Six Hats tool, applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of them greatly help in organizing and arranging mental thoughts and in making the right decisions, and all of these tools and techniques are easy to learn, apply and use to reach creative and innovative solutions. The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages based on gender, age group, and job category. Keywords: innovation, creativity, factors, tools
Procedia PDF Downloads 55
375 The Effectiveness of Prenatal Breastfeeding Education on Breastfeeding Uptake Postpartum: A Systematic Review.
Authors: Jennifer Kehinde, Claire O'donnell, Annmarie Grealish
Abstract:
Introduction: Breastfeeding has been shown to provide numerous health benefits for both infants and mothers. The decision to breastfeed is influenced by physiological, psychological, and emotional factors. However, the importance of equipping mothers with the necessary knowledge for successful breastfeeding cannot be overlooked. The decline in the global breastfeeding rate can be linked to a lack of adequate breastfeeding education during the prenatal stage. This systematic review examined the effectiveness of prenatal breastfeeding education on breastfeeding uptake postpartum. Method: This review was undertaken and reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and was registered on the international prospective register of systematic reviews (PROSPERO: CRD42020213853). A PICO (population, intervention, comparison, outcome) analysis was undertaken to inform the choice of keywords in the search strategy and to formulate the review question, which was aimed at determining the effectiveness of prenatal breastfeeding educational programs at improving breastfeeding uptake following birth. Five databases (Cumulative Index to Nursing and Allied Health Literature, Medline, PsycINFO, and Applied Social Sciences Index and Abstracts) were searched from January 2014 until July 2021 to identify eligible studies. Quality assessment and narrative synthesis were subsequently undertaken. Results: Fourteen studies were included.
All 14 studies used different types of breastfeeding programs: eight used a combination of curriculum-based breastfeeding education, group prenatal breastfeeding counselling and one-to-one breastfeeding education, all delivered in person; four studies used web-based learning platforms to deliver prenatal breastfeeding education, delivered online or face to face over a period of 3 weeks to 2 months, with follow-up periods ranging from 3 weeks to 6 months; one study delivered the breastfeeding educational intervention through mother-to-mother breastfeeding support groups promoting exclusive breastfeeding; and one study disseminated breastfeeding education to participants based on the theory of planned behaviour. The most effective interventions were those that included both theory and hands-on demonstrations. Results showed increases in breastfeeding uptake, breastfeeding knowledge, positive attitudes to breastfeeding and maternal breastfeeding self-efficacy among mothers who participated in prenatal breastfeeding educational programs. Conclusion: Prenatal breastfeeding education increases women's knowledge of breastfeeding. Mothers who are knowledgeable about breastfeeding and hold a positive attitude towards it tend to initiate breastfeeding and continue for a lengthened period. The findings demonstrate a general correlation between prenatal breastfeeding education and increased breastfeeding uptake postpartum. The high level of positive breastfeeding outcomes across the studies can be attributed to prenatal breastfeeding education.
This review provides rigorous contemporary evidence that healthcare professionals and policymakers can apply when developing effective strategies to improve breastfeeding rates and ultimately improve the health outcomes of mothers and infants. Keywords: breastfeeding, breastfeeding programs, breastfeeding self-efficacy, prenatal breastfeeding education
Procedia PDF Downloads 67
374 Navigating States of Emergency: A Preliminary Comparison of Online Public Reaction to COVID-19 and Monkeypox on Twitter
Authors: Antonia Egli, Theo Lynn, Pierangelo Rosati, Gary Sinclair
Abstract:
The World Health Organization (WHO) defines vaccine hesitancy as the postponement or complete denial of vaccines and estimates it is directly linked to approximately 1.5 million avoidable deaths annually. This figure is not immune to public health developments, as has become evident since the global spread of COVID-19 from Wuhan, China in early 2020. Since then, the proliferation of influential, but oftentimes inaccurate, outdated, incomplete, or false vaccine-related information on social media has impacted hesitancy levels to a degree described by the WHO as an infodemic. The COVID-19 pandemic and the related vaccine hesitancy resulted in 2022 in the largest drop in childhood vaccinations of the 21st century, while the prevalence of online stigma towards vaccine-hesitant consumers continues to grow. Simultaneously, a second disease has risen to global importance: Monkeypox, an infection originating from west and central Africa, which, due to racially motivated online hate, was in August 2022 set to be renamed by the WHO. To better understand public reactions towards two viral infections that became global threats to public health within two years of each other, this research examines user replies to threads published by the WHO on Twitter. Replies to two Tweets from the @WHO account declaring COVID-19 and Monkeypox 'public health emergencies of international concern', on January 30, 2020, and July 23, 2022, respectively, are gathered using the Twitter application programming interface and the user mention timeline endpoint. The research methodology is unique in its analysis of stigmatizing, racist, and hateful content shared on social media within the vaccine discourse over the course of two disease outbreaks. Three distinct analyses are conducted to provide insight into (i) the most prevalent topics and sub-topics among user reactions, (ii) changes in sentiment towards the spread of the two diseases, and (iii) the presence of stigma, racism, and online hate.
Findings indicate increased hesitancy to accept further vaccines and social distancing measures; the presence of stigmatizing content aimed primarily at anti-vaccine cohorts, as well as racially motivated abusive messages; and a prevalent fatigue towards disease-related news overall. This research is valuable to non-profit organizations and government agencies associated with vaccines and vaccination programs, emphasizing the need for public health communication fitted to consumers' vaccine sentiments, levels of health information literacy, and degrees of trust in public health institutions. Considering the importance of addressing fears among the vaccine hesitant, the findings also illustrate the risk of alienation through stigmatization, lead future research in probing the relatively under-examined field of online vaccine-related stigma, and discuss the potential effects of stigma on vaccine-hesitant Twitter users' decisions to vaccinate. Keywords: social marketing, social media, public health communication, vaccines
Procedia PDF Downloads 98
373 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and impact public opinion heavily. In recent years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people –i.e., echo chambers– where they reinforce and polarize their opinions. In this way, the potential benefits coming from exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors –e.g., the hypothetical and hazardous link between vaccines and autism– suggests that social media do have the power to misinform, manipulate, or control public opinion. For example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015).
Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate –the British referendum to leave the European Union– where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions. Keywords: information spreading, misinformation, narratives, online social networks, polarization
Procedia PDF Downloads 291
372 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education
Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke
Abstract:
In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor which interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate their ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, love of learning, and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discussion of the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and furthermore disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester, students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning.
Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and towards improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage and attitudes towards learning among undergraduate and graduate university students. The findings of this study aim to promote profound contemplation, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape. Keywords: deep work, flow, higher education, lifelong learning, love of learning
Procedia PDF Downloads 68
371 Rainwater Management: A Case Study of Residential Reconstruction of Cultural Heritage Buildings in Russia
Authors: V. Vsevolozhskaia
Abstract:
Since 1990, energy-efficient development concepts have constituted both a turning point in civil engineering and a challenge for an environmentally friendly future. Energy and water currently play an essential role in the sustainable economic growth of the world in general and Russia in particular: the efficiency of the water supply system is the second most important parameter for energy consumption according to the British assessment method, while the water-energy nexus has been identified as a focus for accelerating sustainable growth and developing effective, innovative solutions. The activities considered in this study were aimed at organizing and executing the renovation of property in residential buildings located in St. Petersburg, specifically buildings with local or federal historical heritage status under the control of the St. Petersburg Committee for the State Inspection and Protection of Historic and Cultural Monuments (KGIOP) and UNESCO. Even after reconstruction, these buildings still fall into energy efficiency class D. Russian Government Resolution No. 87 on the structure and required content of project documentation contains a section entitled 'Measures to ensure compliance with energy efficiency and equipment requirements for buildings, structures, and constructions with energy metering devices'. It mentions the need to install collectors and meters, which only measure energy consumption, neglecting the main purpose: to make buildings more energy-efficient, potentially even reaching energy efficiency class A. The least-explored aspects of energy-efficient technology in the Russian Federation remain the water balance and the possibility of implementing rain- and meltwater collection systems. These modern technologies are used exclusively for new buildings, due to the lack of a government directive requiring that project documentation for major renovations and reconstructions include the collection and reuse of rainwater.
Energy-efficient technology for rain and meltwater collection is currently applied only to new buildings, even though research has proved that using rainwater is safe and offers a major step forward in terms of eco-efficiency analysis and water innovation. Where conservation is mandatory, making changes to protected sites is prohibited. In most cases, the protected site is the cultural heritage building itself, including the main walls and roof. However, installing a second water supply system and collecting rainwater would not affect the protected building itself. Water efficiency in St. Petersburg is currently considered only from the point of view of installing pipeline shutoff valves that regulate flow. The development of technical guidelines for the use of grey- and/or rainwater to meet the needs of residential buildings during reconstruction or renovation is not yet complete. The ideas for water treatment, collection, and distribution systems presented in this study should be taken into consideration during the reconstruction or renovation of residential cultural heritage buildings under the protection of KGIOP and UNESCO. The methodology applied also has the potential to be extended to other cultural heritage sites in northern countries and regions with an average annual rainfall of over 600 mm, enough to cover average toilet-flush needs.
Keywords: cultural heritage, energy efficiency, renovation, rainwater collection, reconstruction, water management, water supply
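The abstract's claim that an annual rainfall above 600 mm can cover average toilet-flush needs can be sanity-checked with a rough water-balance sketch. All input figures below (roof area, runoff coefficient, occupancy, flush frequency, cistern volume) are illustrative assumptions, not values from the study:

```python
# Rough annual water balance for a rooftop rainwater-collection system.
# All inputs are illustrative assumptions for a mid-sized residential
# heritage building, not data from the study.

def annual_harvest_litres(rainfall_mm, roof_area_m2, runoff_coeff=0.8):
    """Harvestable volume: 1 mm of rain on 1 m^2 yields 1 litre,
    reduced by a runoff coefficient for losses on the roof surface."""
    return rainfall_mm * roof_area_m2 * runoff_coeff

def annual_flush_demand_litres(residents, flushes_per_day=5, litres_per_flush=6):
    """Toilet-flush demand for the whole building over a year."""
    return residents * flushes_per_day * litres_per_flush * 365

# Assumed: 600 mm/yr rainfall, 300 m^2 roof, 12 residents.
harvest = annual_harvest_litres(rainfall_mm=600, roof_area_m2=300)
demand = annual_flush_demand_litres(residents=12)

print(f"harvest: {harvest:.0f} L/yr, demand: {demand:.0f} L/yr")
print("covers flush needs:", harvest >= demand)
```

Under these assumptions the harvest (144,000 L/yr) slightly exceeds the flush demand (131,400 L/yr), consistent with the abstract's 600 mm threshold; a larger building or smaller roof would shift the balance, which is why a per-site water balance is needed in the project documentation.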
Procedia PDF Downloads 92
370 Microbial Analysis of Street Vended Ready-to-Eat Meat around Thohoyandou Area, Vhembe District, Limpopo Province, RSA
Authors: Tshimangadzo Jeanette Raedani, Edgar Musie, Afsatou Traore
Abstract:
Background: Street-vended meats, including chicken, pork, and beef, are popular in urban areas worldwide due to their convenience and affordability. However, these meats often pose a significant risk of foodborne disease. The high water activity, protein content, and nearly neutral pH of meat create conditions conducive to the growth of pathogenic bacteria. Street foods, particularly meats, are frequently linked to outbreaks of foodborne illness because of potential contamination from improper handling and preparation. This study aimed to assess the microbial quality and safety of street-vended ready-to-eat meat sold in the Thohoyandou area. Method: The study involved collecting 168 samples of street-vended meat, split evenly between chicken (n=84) and beef (n=84), from various vendors around Thohoyandou. The samples were randomly selected and transported under sterile conditions to the Department of Food Microbiology at the University of Venda for analysis. Each 10-gram sample was cultured on selective media: MSA for Staphylococcus aureus, EMB for E. coli O157, XLD agar for Salmonella, and Sorbitol MacConkey for Shigella. After initial culturing, presumptive colonies were sub-cultured for purification and identified through Gram staining and biochemical tests, including the catalase test, API 20E, the Kligler Iron Agar test, and the Vitek 2 system. Antibiotic susceptibility was tested using agents such as ampicillin, chloramphenicol, penicillin, neomycin, tetracycline, streptomycin, and amoxicillin. Molecular characterization was performed to identify E. coli pathotypes using multiplex PCR. Results: Of the 168 samples tested, 32 (19%) were positive for Staphylococcus spp., with the highest prevalence found in cooked chicken meat. The most common Staphylococcus species identified were S. xylosus (13.2%) and S. saprophyticus (10.5%). E. coli was present in 29 (19.3%) of the samples, with the highest prevalence in fried chicken.
Antibiotic susceptibility testing showed that 100% of E. coli isolates were resistant to ampicillin, tetracycline, and penicillin, while 100% were susceptible to neomycin. Staphylococcus spp. isolates were likewise 100% resistant to ampicillin and 100% susceptible to neomycin. The study detected a range of virulence genes in E. coli, with prevalence rates ranging from 13.33% to 86.67%. The identified pathotypes included EPEC, EHEC, ETEC, EAEC, and EIEC, with many isolates showing mixed pathotypes. Conclusion: The study found that the microbial quality and safety of street-vended meats in Thohoyandou are inadequate, rendering them unsafe for consumption. The presence of pathogenic microorganisms in both beef and chicken samples indicates significant risks associated with poor personal hygiene and food-preparation practices. This underscores the need for improved monitoring and stricter food safety measures to prevent foodborne diseases and ensure consumer safety.
Keywords: meat, microbial analysis, street vendors, E. coli
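The reported prevalences are straightforward to check against the sample counts. The Staphylococcus figure of 32/168 matches the stated 19%, while the E. coli figure of 29 positives at 19.3% implies a denominator of roughly 150 rather than all 168 samples — a detail the abstract does not state, noted here only as an arithmetic observation:

```python
def prevalence_pct(positives, total):
    """Share of tested samples that were positive, in percent."""
    return 100 * positives / total

# Staphylococcus spp.: 32 positives out of 168 samples -> ~19%,
# matching the figure reported in the abstract.
staph = prevalence_pct(32, 168)

# E. coli: 29 positives reported at 19.3% implies roughly
# 29 / 0.193 ~ 150 samples tested, not the full 168.
ecoli_implied_total = round(29 / 0.193)

print(f"Staphylococcus spp.: {staph:.1f}%")
print(f"E. coli implied denominator: {ecoli_implied_total}")
```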
Procedia PDF Downloads 27