Search results for: logistic regression model
541 Impact of Experiential Learning on Executive Function, Language Development, and Quality of Life for Adults with Intellectual and Developmental Disabilities (IDD)
Authors: Mary Deyo, Zmara Harrison
Abstract:
This study reports the outcomes of an 8-week experiential learning program for 6 adults with Intellectual and Developmental Disabilities (IDD) at a day habilitation program. The intervention foci for this program included executive function, language learning in the domains of expressive, receptive, and pragmatic language, and quality of life. Interprofessional collaboration aimed at supporting adults with IDD to reach person-centered, functional goals across skill domains is critical. This study is a significant addition to the speech-language pathology literature in that it examines a therapy method that potentially meets this need while targeting domains within the speech-language pathology scope of practice. Communication therapy was provided during highly valued and meaningful hands-on learning experiences, referred to as the Garden Club, which incorporated all aspects of planting and caring for a garden as well as related journaling, sensory, cooking, art, and technology-based activities. Direct care staff and an undergraduate research assistant were trained by the SLP to be impactful language guides during their interactions with participants in the Garden Club. The SLP also provided direct therapy and modeling during the Garden Club. Research methods used in this study included a mixed methods analysis comprising a literature review, a quasi-experimental implementation of communication therapy in the context of experiential learning activities, quality of life participant surveys, quantitative pre-/post- data collection with linear mixed model analysis, and qualitative data collection with qualitative content analysis and coding for themes. Outcomes indicated overall positive changes in expressive vocabulary, following multi-step directions, sequencing, problem-solving, planning, skills for building and maintaining meaningful social relationships, and participant perception of the Garden Project’s impact on their own quality of life. 
Implementation of this project also highlighted supports and barriers that must be taken into consideration when planning similar projects. Overall findings support the use of experiential learning projects in day habilitation programs for adults with IDD, as well as additional research to deepen understanding of best practices, supports, and barriers for implementation of experiential learning with this population. This research provides an important contribution to research in the fields of speech-language pathology and other professions serving adults with IDD by describing an interprofessional experiential learning program with positive outcomes for executive function, language learning, and quality of life.
Keywords: experiential learning, adults, intellectual and developmental disabilities, expressive language, receptive language, pragmatic language, executive function, communication therapy, day habilitation, interprofessionalism, quality of life
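The quantitative pre-/post- analysis above used a linear mixed model; as a greatly simplified stand-in, the sketch below computes each participant's pre-to-post change and the group mean change. All scores and participant IDs are invented for illustration, not study data.

```python
# Simplified pre-/post-intervention summary (NOT the linear mixed model used
# in the study): per-participant change scores and the group mean change.
# Scores and participant IDs below are invented.

pre  = {"P1": 12, "P2": 9,  "P3": 15, "P4": 7, "P5": 11, "P6": 10}
post = {"P1": 16, "P2": 13, "P3": 18, "P4": 9, "P5": 15, "P6": 14}

changes = {p: post[p] - pre[p] for p in pre}      # change per participant
mean_change = sum(changes.values()) / len(changes)

print(changes)
print(round(mean_change, 2))  # 3.5
```

A full mixed-model analysis would additionally model participant as a random effect, which matters with repeated measures on only six participants.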
Procedia PDF Downloads 127
540 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly from the animal. They are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms are important analysis tools that have been adapted to deal with these errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods across datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms were used for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity between those methods in their likelihood pairwise and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. 
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although ETLM had the longest processing time and the least friendly interface of the compared methods. The population estimators performed differently across the datasets, with consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In these examples, data homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of processing time, interface, and robustness. The heterogeneity of recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
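The population estimators compared above (Capwire, BayesN) build on genotype recaptures. As a hedged illustration of the underlying idea only, and not an implementation of either estimator, the classic Lincoln-Petersen estimator with Chapman correction shows how recapture counts translate into a population-size estimate:

```python
# Illustrative capture-recapture estimate (Lincoln-Petersen with Chapman
# correction). This is a textbook estimator, not the Capwire or BayesN
# models compared in the study; it only shows how matched genotypes
# ("recaptures") feed a population-size estimate. Counts are invented.

def chapman_estimate(n1, n2, m2):
    """Estimate population size N from two sampling sessions.

    n1: individuals genotyped in session 1
    n2: individuals genotyped in session 2
    m2: individuals from session 1 re-identified in session 2
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Example: 40 genotypes in the first survey, 35 in the second, 10 matches.
print(round(chapman_estimate(40, 35, 10), 1))  # 133.2
```

Like BayesN, this simple estimator assumes equal capture probability across individuals, which is exactly the homogeneity assumption the abstract identifies as critical.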
Procedia PDF Downloads 143
539 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate
Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki
Abstract:
Magnetoplasma Sail (MPS) is a future spacecraft propulsion concept that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. By injecting plasma into the spacecraft’s magnetic field region, a ring current azimuthally drifts on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid attached on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about the laboratory model of the MPS spacecraft. This investigation aims to determine the detailed structure of the ring current through physical experimentation performed under two different magnetic field strengths, engendered by varying the voltage applied to the solenoid between 300 V and 600 V. The expected outcome was that the three current probes would detect the same current, since all three probes were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible due to probable procedural error, the trends of the results revealed three informative aspects of the ring current behavior. The first aspect is that the drift direction of the ring current depended on the strength of the applied magnetic field. The second aspect is that the diamagnetic current developed at a radial distance not occupied by the three current probes under the presence of the solar wind. The third aspect is that the ring current distribution varied along the circumferential path about the spacecraft’s magnetic field. 
Although this study yielded experimental evidence that differed from the original hypothesis, the three key findings have informed two critical MPS design solutions that could potentially improve thrust performance. The first design solution is the positioning of the plasma injection point. Based on the implication of the first of the three aspects of ring current behavior, the plasma injection point must be located at a distance from, rather than in close proximity to, the MPS solenoid for the ring current to drift in the direction that will result in magnetosphere inflation. The second design solution, motivated by the third aspect of ring current behavior, is a symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration of plasma injection points using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which minimizes the deflection area for the solar wind. Therefore, to realize a ring current that best provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly apart for the plasma to be injected evenly along the azimuthal path.
Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion
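The ring current drifts about a dipole field like the one produced by the on-board solenoid. As a rough sketch of that geometry only (the solenoid turns, current, and cross-section below are invented values, not the parameters of the MPS laboratory model), the equatorial field of a magnetic dipole can be computed as follows:

```python
# Hedged illustration: magnitude of a dipole's magnetic field on the
# equatorial plane, the geometry in which the ring current drifts.
# All solenoid parameters are hypothetical, not from the study.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [T*m/A]

def equatorial_dipole_field(moment, r):
    """|B| on the equatorial plane of a dipole with moment m at distance r."""
    return MU0 * moment / (4 * math.pi * r**3)

# Hypothetical solenoid: 100 turns, 10 A, 20 cm^2 cross-section.
m = 100 * 10.0 * 20e-4                    # dipole moment N*I*A [A*m^2]
print(equatorial_dipole_field(m, 0.063))  # ~8.0e-4 T at the 63 mm probe radius
```

The 1/r^3 falloff is why probe placement relative to the solenoid matters so much when interpreting the measured currents.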
Procedia PDF Downloads 310
538 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices
Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) (81 girls and 87 boys) and 71 normal weight (NW) (40 girls and 31 boys) children. Informed consent forms signed by the parents were obtained. The Ethics Committee approved the study protocol. Mean ages (years)±SD calculated for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys. The corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m2 and 27.2±3.9 kg/m2 in girls and boys, respectively. These values for the NW group were calculated as 15.5±1.0 kg/m2 in girls and 15.9±1.1 kg/m2 in boys. Groups were constituted based upon the BMI percentiles for age-and-sex values recommended by WHO. Children with percentiles >99 were grouped as MO, and children with percentiles between 15 and 85 were considered NW. The anthropometric measurements were recorded and evaluated along with the new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. The body fat percent values were obtained by bio-electrical impedance analysis. Data were entered into a database for analysis using SPSS/PASW 18 Statistics for Windows statistical software. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, (height/2)-to-waist C, and (height/2)-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001). 
The reference value for the (height/2)-to-hip ratio was detected as approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05). However, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05). However, a statistically significant increase was noted in MO girls compared to their NW state (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
Keywords: anthropometry, childhood obesity, gender, morbid obesity
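The ratio-based evaluation described above can be sketched in a few lines; the sample measurements below are invented, not data from the study:

```python
# Minimal sketch of the anthropometric ratios discussed above.
# All measurements are invented example values, not study data.

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m**2

def waist_to_hip(waist_cm, hip_cm):
    """Waist-to-hip circumference ratio (increases with obesity)."""
    return waist_cm / hip_cm

def half_height_to_hip(height_cm, hip_cm):
    """(height/2)-to-hip C ratio; reported reference value is ~1.0."""
    return (height_cm / 2) / hip_cm

print(round(bmi(45.0, 1.40), 1))                  # 23.0 kg/m^2
print(round(waist_to_hip(70.0, 75.0), 2))         # 0.93
print(round(half_height_to_hip(140.0, 72.0), 2))  # 0.97
```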
Procedia PDF Downloads 325
537 Advancing Equitable Healthcare for Trans and Gender-Diverse Students: A Community-Based Participatory Action Project
Authors: Al Huuskonen, Clio Lake, K. M. Naude, Polina Petlitsyna, Sorsha Henning, Julia Wimmers-Klick
Abstract:
This project presents the outcomes of a community-based participatory action initiative aimed at advocating for equitable healthcare and human rights for trans, two-spirit, and gender-diverse individuals, building upon the University of British Columbia (UBC) Trans Coalition's ongoing efforts. Participatory Action Research (PAR) was chosen as the research method with the goal of improving trans rights on the UBC campus, particularly regarding equitable access to healthcare. PAR involves active community contribution throughout the research process, which in this case was done by way of liaising with student resource groups and advocacy leaders. The goals of this project were as follows: a) identify gaps in gender-affirming healthcare for UBC students by consulting the community and collaborating with UBC services, b) develop an information package outlining provincial and university-based health insurance for gender-affirming care (including hormone therapy and surgeries), FAQs, and resources for UBC's trans students, c) make this package available to UBC students and other national transgender advocacy organizations. The initiative successfully expanded the UBC AMS Student Health and Dental Plan to include gender-affirming procedural coverage, developed a care access guide for students, and advocated for improved health records inclusivity, mechanisms for trans students to report negative care experiences, and increased access to gender-affirming primary care through the on-campus health clinic. Collaboration with other universities' pride organizations and Trans Care BC yielded positive outcomes through broader coalition building and resource sharing. Ongoing efforts are underway to update provincial policies, particularly through expanding coverage under Fair PharmaCare and addressing the compounding effects of the primary care crisis for trans individuals. 
The project's tangible results include improved trans rights on campus, especially in terms of healthcare access. Expanding healthcare coverage through student care benefits thousands of students, making the ability to undergo important affirming procedures more affordable. Providing students with information on extended coverage options and communication with their doctors further removes barriers to care and positively impacts student wellbeing. This initiative demonstrates the effectiveness of community-based participatory action in advancing equitable healthcare for trans and gender-diverse individuals and serves as a model for other institutions and organizations striving to promote inclusivity and advocate for marginalized populations' rights.Keywords: equitable healthcare, trans and gender-diverse individuals, inclusivity, participatory action research project
Procedia PDF Downloads 93
536 Changing the Landscape of Fungal Genomics: New Trends
Authors: Igor V. Grigoriev
Abstract:
Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects that are changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) the development of approaches that go beyond culturable fungi to explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes,’ aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from the fruiting bodies of macrofungi and (b) single cell genomics using fungal spores. The latter has been tested using zoospores from early-diverging fungi and resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. A genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. 
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient data analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.
Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics
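A methylome survey like the one mentioned above ultimately reports summary statistics such as a per-genome methylation level; a toy sketch with invented counts (not JGI data or methods):

```python
# Toy per-genome methylation level: fraction of surveyed sites called as
# methylated by a single-molecule sequencing platform. Counts are invented.

def methylation_level(methylated_sites, total_sites):
    """Fraction of surveyed sites carrying the modification."""
    return methylated_sites / total_sites

genomes = {"fungus_A": (12_000, 1_500_000), "fungus_B": (90_000, 1_200_000)}
for name, (meth, total) in genomes.items():
    print(name, round(100 * methylation_level(meth, total), 2), "%")
```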
Procedia PDF Downloads 208
535 Scale up of Isoniazid Preventive Therapy: A Quality Management Approach in Nairobi County, Kenya
Authors: E. Omanya, E. Mueni, G. Makau, M. Kariuki
Abstract:
HIV infection is the strongest risk factor for a person to develop TB. Isoniazid preventive therapy (IPT) for people living with HIV (PLHIV) not only reduces the individual patient’s risk of developing active TB but also mitigates cross infection. In Kenya, six months of IPT was recommended through the National TB, Leprosy and Lung Disease Program to treat latent TB. In spite of this recommendation by the national government, uptake of IPT among PLHIV remained low in Kenya by the end of 2015. The USAID/Kenya and East Africa Afya Jijini project, which supports 42 TB/HIV health facilities in Nairobi County, began addressing low uptake of IPT through Quality Improvement (QI) teams set up at the facility level. Quality is characterized by WHO as one of the four main connectors between health systems building blocks and health systems outputs. Afya Jijini implements the Kenya Quality Model for Health, in which QI teams are formed at the county, sub-county, and facility levels. The teams review facility performance to identify gaps in service delivery and use QI tools to monitor and improve performance. Afya Jijini supported the formation of these teams in 42 facilities and built the teams’ capacity to review data and use QI principles to identify and address performance gaps. When the QI teams began working on improving IPT uptake among PLHIV, uptake was at 31.8%. The teams first conducted a root cause analysis using cause and effect diagrams, which help the teams brainstorm on and identify barriers to IPT uptake among PLHIV at the facility level. This is a participatory process in which program staff provide technical support to the QI teams in problem identification and problem-solving. The gaps identified were inadequate knowledge and skills on the use of IPT among health care workers, lack of awareness of IPT among patients, inadequate monitoring and evaluation tools, and poor quantification and forecasting of IPT commodities. 
In response, Afya Jijini trained over 300 health care workers on the administration of IPT, supported patient education, supported quantification and forecasting of IPT commodities, and provided IPT data collection tools to help facilities monitor their performance. The facility QI teams conducted monthly meetings to monitor progress on implementation of IPT and took corrective action when necessary. IPT uptake improved from 31.8% to 61.2% during the second year of the Afya Jijini project and to 80.1% during the third year of the project’s support. Use of QI teams and root cause analysis to identify and address service delivery gaps, in addition to targeted program interventions and continual performance reviews, can be successful in increasing TB-related service delivery uptake at health facilities.
Keywords: isoniazid, quality, health care workers, people living with HIV
Procedia PDF Downloads 99
534 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field
Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used in cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), health (e.g., thyme essential oil is used for arthritis, normalizing blood pressure, reducing stress on the heart, and treating chest infection and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbs based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils, in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic in high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under a rotating magnetic field (RMF) at f = 25 Hz. A self-constructed reactor (MAP) was applied for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) against E. coli were determined for the essential oils. The tested oils were prepared in very small concentrations (from 1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, as compared to unexposed controls, caused an increase in the efficacy of the antibacterial properties of the tested oils. 
Extended exposure to the RMF at a frequency of f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL of working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL of working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it still remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field. A possible mechanism, relying on alteration of the permeability of the ionic channels in the bacterial cell walls that mediate transport into the cells, was proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are causing serious concern throughout the world.
Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli
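The ">3 log" decrease reported above refers to the standard log-reduction metric, sketched here with invented CFU counts (not measurements from the study):

```python
# Log-reduction metric behind statements like ">3 log decrease in bacteria":
# the base-10 logarithm of the drop in viable counts. CFU values are invented.
import math

def log_reduction(cfu_initial, cfu_final):
    """log10 drop in colony-forming units between two time points."""
    return math.log10(cfu_initial / cfu_final)

# Example: 1e7 CFU/mL falling to 5e3 CFU/mL after treatment.
print(round(log_reduction(1e7, 5e3), 2))  # 3.3
```

A 3-log reduction corresponds to killing 99.9% of the initial population, which is why it is a common disinfection benchmark.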
Procedia PDF Downloads 156
533 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year
Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans
Abstract:
Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that by far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US, and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributed to fine particulate matter (PM2.5) among adults ≥ 30 yrs and children < 5 yrs, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m3. We estimate global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. 
Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US, and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25 μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12 μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US, and 20% in the EU. Implementing the WHO AQG of 10 μg/m3 would reduce global premature mortality by 54%, 76% in China, and 59% in India. In the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates.
Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality
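The attribution of mortality to PM2.5 above a threshold follows the general logic sketched below. The log-linear relative-risk form, its slope, and the baseline figures are invented placeholders, not the Global Burden of Disease concentration-response functions used in the study:

```python
# Hedged sketch of a threshold-based attributable-mortality calculation.
# The RR functional form, beta, and all figures are invented for
# illustration; the study uses GBD 2010 concentration-response functions.
import math

THRESHOLD = 7.3  # 'safe' annual-mean PM2.5 [ug/m3] assumed in the study

def relative_risk(pm25, beta=0.006):
    """Toy log-linear relative risk above the threshold (beta is made up)."""
    return math.exp(beta * max(0.0, pm25 - THRESHOLD))

def attributable_deaths(pm25, baseline_deaths):
    """Deaths attributable to PM2.5 via the population attributable fraction."""
    rr = relative_risk(pm25)
    attributable_fraction = (rr - 1.0) / rr
    return baseline_deaths * attributable_fraction

# Hypothetical region: annual mean 35 ug/m3, 200,000 baseline deaths/year.
print(round(attributable_deaths(35.0, 200_000)))
```

Because the risk only accrues above the threshold, lowering a standard from 12 to 10 μg/m3 removes exposure in the steep low-concentration range, which is why the abstract finds small changes at the lower standards have large mortality impacts.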
Procedia PDF Downloads 310
532 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on LI-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has increased significantly worldwide. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology to tackle the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate about whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very low maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy and thereby maintaining grid frequency at a rapid rate. Moving BESS technology, as implemented in the selected railway network in Germany, is here considered as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, certain applications of Li-ion batteries, such as self-consumption of wind and solar parks or their ancillary services, storage of wind and solar energy during low demand, black start, island operation, residential home storage, etc., offer a solution to effectively integrate the renewables and support Europe’s future smart grid. 
The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and as flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding (the automatic gradual disconnection of loads from the grid) and frequency-dependent electricity generation (the automatic gradual connection of BESS to the grid) are used as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability for near-future operations.
Keywords: frequency control, grid stability, li-ion battery storage, smart grid
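The frequency-dependent charging and discharging described above is commonly realized as droop control. A minimal sketch with illustrative parameters (the gain, deadband, and converter rating are invented, not values from the study's PowerFactory model):

```python
# Minimal frequency-droop sketch for a BESS on a 50 Hz European grid:
# discharge when frequency is below nominal (generation deficit), charge
# when above (generation surplus). All parameters are illustrative.

NOMINAL_HZ = 50.0
DEADBAND_HZ = 0.01      # no action for tiny deviations
DROOP_MW_PER_HZ = 100.0 # made-up droop gain
P_MAX_MW = 20.0         # made-up converter rating

def bess_setpoint(freq_hz):
    """Battery power setpoint in MW (+ = discharge to grid, - = charge)."""
    deviation = NOMINAL_HZ - freq_hz
    if abs(deviation) <= DEADBAND_HZ:
        return 0.0
    power = DROOP_MW_PER_HZ * deviation
    return max(-P_MAX_MW, min(P_MAX_MW, power))  # clip to converter rating

print(round(bess_setpoint(49.95), 2))  # under-frequency -> discharge 5.0 MW
print(bess_setpoint(50.30))            # over-frequency  -> charge, clipped at -20.0
```

This is the same proportional logic as frequency-dependent load shedding, with the battery acting as a continuously adjustable load or generator instead of a disconnected feeder.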
Procedia PDF Downloads 150
531 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the overall industrial chain delivery link and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete, heterogeneous nature and spatial distribution of customer demand, which lead to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed-integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
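The abstract does not specify the internals of the hybrid adaptive large neighbourhood search. As a hedged illustration of the general ALNS mechanics it builds on (destroy/repair operators whose selection weights adapt to past success), here is a minimal sketch on a toy routing instance; the operators, reward scheme, and parameters are invented for illustration, not taken from the paper:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def random_removal(tour, rng, k=3):
    """Destroy operator 1: remove k random customers."""
    removed = rng.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def worst_removal(tour, pts, k=3):
    """Destroy operator 2: remove the k customers with the largest detour cost."""
    def detour(i):
        prev, nxt = pts[tour[i - 1]], pts[tour[(i + 1) % len(tour)]]
        cur = pts[tour[i]]
        return math.dist(cur, prev) + math.dist(cur, nxt) - math.dist(prev, nxt)
    worst = sorted(range(len(tour)), key=detour, reverse=True)[:k]
    removed = [tour[i] for i in worst]
    return [c for c in tour if c not in removed], removed

def greedy_insertion(partial, removed, pts):
    """Repair operator: cheapest insertion of each removed customer."""
    for c in removed:
        best_i = min(range(len(partial) + 1),
                     key=lambda i: tour_length(partial[:i] + [c] + partial[i:], pts))
        partial.insert(best_i, c)
    return partial

def alns(pts, iters=300, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    best = tour[:]
    weights = [1.0, 1.0]   # adaptive selection weights, one per destroy operator
    for _ in range(iters):
        op = rng.choices([0, 1], weights=weights)[0]
        partial, removed = (random_removal(tour[:], rng) if op == 0
                            else worst_removal(tour[:], pts))
        cand = greedy_insertion(partial, removed, pts)
        if tour_length(cand, pts) < tour_length(tour, pts):
            tour = cand
            weights[op] += 0.5                      # reward improving operators
        weights = [0.99 * w for w in weights]       # decay old scores
        if tour_length(tour, pts) < tour_length(best, pts):
            best = tour[:]
    return best

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(15)]
best = alns(pts)
```

The adaptive weighting is the defining feature: operators that recently produced improvements are chosen more often, which is what distinguishes ALNS from plain large neighbourhood search.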
Procedia PDF Downloads 78
530 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and anticipated working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65
529 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-ray computed tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo noise reduction, thresholding, and watershed segmentation to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of the particle size distribution. The processed particle data then serve as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on the stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts.
The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
Keywords: X-ray computed tomography, finite element analysis, soil compression behavior, particle morphology
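As a hedged sketch of the image-processing steps described in the abstract (noise reduction, thresholding, particle isolation, size statistics), the following works on a synthetic 2D image with SciPy's `ndimage`; the authors' actual pipeline additionally applies watershed segmentation and operates on real 3D CT volumes:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic "CT slice": four disks standing in for soil particles (radii in px)
img = np.zeros((120, 120))
yy, xx = np.ogrid[:120, :120]
for cx, cy, r in [(30, 30, 10), (80, 40, 14), (50, 90, 8), (95, 95, 11)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0
img += 0.1 * rng.standard_normal(img.shape)      # acquisition noise

# 1) noise reduction, 2) thresholding, 3) connected-component labelling
smooth = ndimage.gaussian_filter(img, sigma=1.5)
binary = smooth > 0.5
labels, n = ndimage.label(binary)

# Equivalent-circle diameter of each labelled particle as a size statistic
areas = ndimage.sum(np.ones_like(img), labels, index=range(1, n + 1))
diameters = 2.0 * np.sqrt(np.asarray(areas) / np.pi)
```

For touching grains, simple labelling merges neighbours into one component, which is exactly why the watershed step mentioned in the abstract is needed in practice.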
Procedia PDF Downloads 31
528 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-Azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, like 2,7- and 2,6-diazaindoles, are proposed to have even better photophysical properties due to the presence of the aza group at the 2-position. However, while solution-phase studies suggest the relevance of these molecules, no experimental gas-phase studies have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data of 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). Here, we have employed state-of-the-art laser spectroscopic methods such as laser-induced fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is found at 33910 cm-1, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is positioned at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer.
The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively challenging molecule to study. The ionization potential is reduced by 0.14 eV in the case of the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, compared with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm-1, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared (FDIR) and resonant ion-dip infrared (IDIR) spectroscopy, obtained at 3523 and 3467 cm-1, respectively. The lower value of νNH in the electronically excited state of 2,7-DAI implies the higher acidity of the group compared to the ground state. Moreover, we have performed extensive computational analysis, which suggests that the energy barrier in the excited state reduces significantly as we increase the number of catalytic solvent molecules (S = H2O, NH3) as well as the polarity of the solvent molecules. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.
Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy
Procedia PDF Downloads 75
527 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them in their instruction. Literacy, from a wider view, means the acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, that is, the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received considerable attention in the education field over the last few years.
New literacy provides multiple modes of engagement, especially for those with disabilities and other diverse learning needs. For example, using online tools in the classroom provides students with disabilities new ways to engage with content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions to meet the needs of DHH students and so increase their literacy levels. It will build a bridge between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population that this study aims to measure, providing a base for future studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.
Keywords: deaf and hard of hearing, digital books, literacy, technology
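The abstract reports a required sample of 98 teachers from a power analysis but not the assumed effect size, power, or test. As an illustration of how such a figure can arise, here is a normal-approximation sample-size calculation for a two-group comparison of means; the effect size d = 0.57, α = 0.05, and power = 0.80 are hypothetical values chosen for the sketch, not the study's:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-group mean comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A hypothetical standardized effect of d = 0.57 gives 49 per arm,
# i.e. 98 participants in total, the same scale as the reported figure.
print(n_per_group(0.57))   # 49
```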
Procedia PDF Downloads 490
526 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only received research attention recently; hence, data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, and sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively.
Seven studies were graded as low risk of bias and three as high risk. Only methodologically strong studies (score > 9) were included in the analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed before implementation in evidence-based practice.
Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
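For readers unfamiliar with how sensitivity and specificity estimates with confidence intervals like those above are computed, here is a minimal sketch using a simple Wald interval on a hypothetical 2x2 screening table; all counts are invented for illustration and are not taken from the review:

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% CI for a proportion (sensitivity/specificity)."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical 2x2 screening table (counts invented for illustration):
tp, fn = 32, 18    # injured players flagged vs missed by the screen
tn, fp = 64, 36    # uninjured players cleared vs wrongly flagged

sens, sens_lo, sens_hi = proportion_ci(tp, tp + fn)
spec, spec_lo, spec_hi = proportion_ci(tn, tn + fp)
print(f"sensitivity {sens:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")
```

Meta-analytic pooling across studies additionally weights each study's estimate (e.g., by inverse variance), which is how figures such as the pooled 0.64 (0.61-0.66) reported above are obtained.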
Procedia PDF Downloads 179
525 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression
Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld
Abstract:
Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are: mental space is the primary organizing principle in the mind, and all psychological issues can be treated by first locating and then relocating the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer, and higher in the visual field, and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems to be consistently associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor; however, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter, and more transparent in order to identify the problem, or the cause of the depression, which lies behind the darkness. It was hypothesized that the darkness is a subjective side effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing clients to work on it and in that way reduce their feelings of depression. This makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so, and finding the location of the dark zones proved relatively easy as well.
In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental-spatial approach to depression can be proven effective, this would be very good news. The Society of Mental Space Psychology is now seeking sponsorship for an up-scaled experiment. Conclusions: For spatial cognition and research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beyond pure scientific interest, this discovery has a clinical implication if darkness can be connected to depression. Darkness also seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.
Keywords: depression, spatial cognition, spatial imagery, social panorama
Procedia PDF Downloads 169
524 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRTs) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network is 9,409 km in extension, while the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return.
This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. This constitutes a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of a potential PRT project's financial feasibility in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods
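The Monte Carlo step described above amounts to sampling the uncertain inputs, computing a cash-flow indicator for each draw, and reading risk measures off the resulting empirical distribution. The sketch below illustrates this for net present value (NPV); every figure (capex, generation, price distributions, discount rate, project life) is an invented placeholder, not a value from the paper:

```python
import random
import statistics

def npv(cashflows, rate):
    """Net present value of a list of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate(n=5000, seed=0):
    """Monte Carlo NPV for a hypothetical PRT project (all figures illustrative)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        capex = -1_000_000.0                     # installation cost, year 0 (USD)
        energy = rng.gauss(3.0, 0.4)             # GWh/year generated by the PRT
        price = rng.gauss(60.0, 12.0)            # USD/MWh, uncertain market price
        annual = energy * 1000 * price - 40_000  # revenue minus O&M, USD/year
        results.append(npv([capex] + [annual] * 15, rate=0.10))
    return results

res = simulate()
p_loss = sum(1 for v in res if v < 0) / len(res)   # empirical P(NPV < 0)
print(f"mean NPV: {statistics.mean(res):,.0f} USD; P(NPV < 0) = {p_loss:.1%}")
```

Unlike a single deterministic cash-flow run, the simulation yields a whole distribution, so indicators such as the probability of a negative NPV or value-at-risk fall out directly, which is the point of the stochastic approach advocated in the abstract.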
Procedia PDF Downloads 113
523 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research
Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón
Abstract:
Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most patients are non-syndromic, with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, which can lead to a spectrum of genotype-phenotype correlations that is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in order to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by the multiplex ligation-dependent probe amplification (MLPA) technique, leading to diagnosis in 6%. After this initial screening, 50 families were selected to be analyzed by WES, achieving a diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected for further analysis. Protein modeling with the AlphaFold2 software was performed, supporting its pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out.
Injection of wild-type MYO6 mRNA into embryos rescued the phenotype, whereas using the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy for identifying candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity and achieve a better understanding of the mechanisms underlying the physiopathology of hearing impairment.
Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish
Procedia PDF Downloads 94
522 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials
Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar
Abstract:
Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure is suitable for obtaining gradients of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge and centre crack problems have been solved, additionally including holes, inclusions, and minor cracks under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. Exponential gradation of properties is imparted in the x-direction.
A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that the effect of minor cracks on the domain's failure crack length is minimal, whereas soft inclusions have a moderate effect and holes have the maximum effect. It is observed that crack growth before failure is greater in each case when hard inclusions are present in place of soft inclusions.
Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)
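Two ingredients named in the abstract, the Ramberg-Osgood elastic-plastic law and exponential property gradation across the plate, can be sketched in a few lines. The numerical parameters below are illustrative assumptions, not the paper's material data:

```python
import math

def ramberg_osgood_strain(sigma, E, sigma0, alpha, n):
    """Total strain under the Ramberg-Osgood relation:
    eps = sigma/E + alpha * (sigma/E) * (sigma/sigma0)**(n - 1)
    (elastic term plus a power-law plastic term)."""
    return sigma / E + alpha * (sigma / E) * (sigma / sigma0) ** (n - 1)

def graded_modulus(x, E_left, E_right, L):
    """Exponential gradation across the plate width: E(0) = E_left, E(L) = E_right."""
    beta = math.log(E_right / E_left) / L
    return E_left * math.exp(beta * x)

# Illustrative numbers (not the paper's): stress above a 250 MPa reference
eps = ramberg_osgood_strain(sigma=300.0, E=200_000.0, sigma0=250.0, alpha=0.3, n=5.0)
E_mid = graded_modulus(50.0, 100_000.0, 300_000.0, 100.0)   # modulus at mid-width
```

In an XFEM setting, `graded_modulus` would be evaluated at each integration point so that element stiffness follows the continuous Cu-Ni-to-alumina gradation rather than a piecewise-constant approximation.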
Procedia PDF Downloads 389
521 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying or adapting predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus occupants cannot interact with it.
More specifically, starting from the adopted schedules, created from questionnaire responses and which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electricity need and natural gas demand; then the individual consumption entries are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are repeated for the optimal refurbishment solution, and the resulting variation in the predicted energy savings and global cost reduction is shown. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
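The scenario comparison described above reduces to computing signed percentage differences against the calibrated reference model. A minimal Python sketch, in which all consumption figures and scenario names are illustrative placeholders rather than results from the paper:

```python
# Sketch of the scenario comparison: each schedule variant is compared with
# the calibrated reference model as a percentage difference in annual energy
# need, per energy carrier. All numbers below are invented for illustration.

def pct_difference(scenario_kwh, reference_kwh):
    """Signed percentage difference of a scenario against the reference."""
    return 100.0 * (scenario_kwh - reference_kwh) / reference_kwh

reference = {"electricity_kwh": 185_000, "natural_gas_kwh": 240_000}

scenarios = {
    "dense_occupancy": {"electricity_kwh": 197_500, "natural_gas_kwh": 251_000},
    "reduced_lighting": {"electricity_kwh": 171_000, "natural_gas_kwh": 243_500},
}

for name, s in scenarios.items():
    for carrier in ("electricity_kwh", "natural_gas_kwh"):
        diff = pct_difference(s[carrier], reference[carrier])
        print(f"{name:>16} {carrier:<16} {diff:+.1f}%")
```

The same routine applies unchanged when comparing the refurbishment scenarios, since only the reference values differ.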
Procedia PDF Downloads 141
520 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. Growing research interest has focused on the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts have been reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started with spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the sensitivity of iMSD to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamics, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify glucagon granules, based on their dynamic properties, as ‘blocked’ (trajectories corresponding to immobile granules), ‘confined/diffusive’ (trajectories corresponding to granules moving slowly within a defined region of the cell), or ‘drifted’ (trajectories corresponding to fast-moving granules). In cell-culturing control conditions, results show the following average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted.
This benchmarking provided a foundation for investigating selected experimental conditions of interest, such as the relationship between glucagon granules and the cytoskeleton. For instance, when Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7%, while immobile granules increase to 56.0 ± 10.7% (the remaining 40.4 ± 10.2% being confined/diffusive). This result confirms the clear link between glucagon-granule motion and cytoskeletal structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected may now support future investigations of glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
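The trajectory classification used in the SPT analysis can be illustrated with a toy mean-square-displacement routine. This is not the authors' MATLAB script: the α thresholds separating ‘blocked’, ‘confined/diffusive’ and ‘drifted’ motion below are illustrative assumptions, as is the synthetic trajectory:

```python
import math

def msd(track):
    """Time-averaged mean square displacement MSD(τ) of one 2-D trajectory,
    given as a list of (x, y) positions at evenly spaced frames."""
    n = len(track)
    curve = []
    for lag in range(1, n):
        d2 = [(track[i + lag][0] - track[i][0]) ** 2 +
              (track[i + lag][1] - track[i][1]) ** 2 for i in range(n - lag)]
        curve.append(sum(d2) / len(d2))
    return curve

def anomalous_exponent(curve):
    """Least-squares slope of log MSD vs log τ (the anomalous exponent α)."""
    pts = [(math.log(lag + 1), math.log(m)) for lag, m in enumerate(curve) if m > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def classify(track, alpha_low=0.3, alpha_high=1.3):
    """Map α to the three motion classes above (thresholds are illustrative)."""
    alpha = anomalous_exponent(msd(track))
    if alpha < alpha_low:
        return "blocked"
    if alpha > alpha_high:
        return "drifted"
    return "confined/diffusive"

# A granule moving at constant velocity has MSD ∝ τ², i.e. α ≈ 2:
ballistic = [(0.1 * t, 0.05 * t) for t in range(50)]
print(classify(ballistic))  # → drifted
```

Real classifiers in the SPT literature combine α with confinement-radius and immobility criteria; the sketch keeps only the exponent-based decision.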
Procedia PDF Downloads 109
519 SME Internationalisation and Its Financing: An Exploratory Study That Analyses Government Support and Funding Mechanisms for Irish and Scottish International SMEs
Authors: L. Spencer, S. O’Donohoe
Abstract:
Much of the research to date on internationalisation relates to large firms, with much less known about how small and medium-sized enterprises (SMEs) engage in internationalisation. Given the crucial role of SMEs in contributing to economic growth, there is now an emphasis on the need for SMEs to internationalise. Yet little is known about how SMEs undertake and finance such expansion and whether internationalisation hinders or helps them in securing finance. The purpose of this research is to explore the internationalisation process for SMEs, the sources of funding used in financing this expansion, and the support received from state agencies in assisting their overseas expansion. A conceptual framework has been devised which marries the two strands of literature (internationalisation and financing the firm). The exploratory nature of this research dictated that the most appropriate methodology was semi-structured interviews with SME owners, bank representatives and support agencies. In essence, a triangulated approach to the research problem facilitates assessment of the perceptions and experiences of the firms, the state and the financial institutions. Our sample is drawn from SMEs operating in Ireland and Scotland, two small but very open economies where SMEs are the dominant form of organisation. The sample includes a range of industry sectors. Key findings to date suggest some SMEs are born global; others are born-again global, whilst a significant cohort can be classed as traditional internationalisers. Unsurprisingly, there is a strong industry effect, with firms in the high-tech sector more likely to be faster internationalisers, in contrast to those in the traditional manufacturing sectors.
Owner-managers’ own funds are deemed key to financing initial internationalisation, lending support to the financial growth life cycle model, albeit more important for the faster internationalisers than for the slower cohort, who are more likely to deploy external sources, especially bank finance. Retained earnings remain the predominant source of ongoing financing for internationalising firms, but trade credit is often used and invoice discounting is utilised quite frequently. In terms of lending, asset-based lending backed by personal guarantees appears paramount for securing bank finance. Whilst a lack of diversified funding sources for internationalising SMEs was found in both jurisdictions, there is no evidence to suggest that internationalisation impedes firms in securing finance. Finally, state supports were cited as important to the internationalisation process; in particular, those provided by Enterprise Ireland were deemed very valuable. Considering the paucity of studies to date on SME internationalisation, and in particular on the funding mechanisms deployed, this study seeks to contribute to the body of knowledge in both the international business and finance disciplines.
Keywords: funding, government support, international pathways, modes of entry
Procedia PDF Downloads 245
518 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen
Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García
Abstract:
This paper focuses on the analysis of the motivations that lead people to start and consolidate a business. It is addressed within the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to start a business and in business consolidation. It also draws on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people’s lives, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and on each individual’s professional-personal project. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering both the consolidated profile and the profile in transition. To obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1-Conflicts in conciliation, E2-Satisfaction in the career path, E3-Motivations to undertake, E4-Guidance needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results allowed us to characterize three motivational profiles that differ in motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and orientation needs (in the entrepreneurial project and life-career).
The first profile is formed by extrinsically motivated entrepreneurs who are professionally satisfied and without conflict between vital roles. The second acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation and professional dissatisfaction who, at the same time, feel conflict in their professional life due to personal roles. Ultimately, the analysis has allowed us to link the types of entrepreneurs to different levels of motivation, satisfaction, needs, and the articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. No associations related to gender, age, activity sector, environment (rural, urban, virtual), or the use of time for domestic tasks were identified. The model obtained and its implications for the design of training and guidance actions for entrepreneurs are also discussed.
Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment
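The k-means stage of the mixed clustering method described above can be sketched generically. This pure-Python illustration is not the study's analysis: the (motivation, conflict) scores are invented, and the centroids are initialised randomly, whereas the study's mixed method pairs k-means with a hierarchical step:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on tuples of scale scores. The study seeds k-means from a
    hierarchical clustering step; here we draw starting centroids randomly."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Recompute each centroid as its cluster mean (keep the old one if empty).
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical (motivation, conflict) scores for six respondents:
data = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8), (0.5, 0.5), (0.6, 0.4)]
centroids, clusters = kmeans(data, k=3)
print(sorted(len(c) for c in clusters))
```

In the study, the resulting groups are then profiled against categorical variables (chi-square) and quantitative variables (ANOVA), steps omitted here.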
Procedia PDF Downloads 301
517 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and their associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product or the mastery of the artist does not seem sufficient reason to preserve this productive model. In recent years, the adoption of open source digital manufacturing technologies in small art workshops can favor their survival by offering great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use computer-modeled pieces made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. However, models printed in PLA are limited to approximate minimum sizes of 3 cm, with an optimal layer-height resolution of 0.1 mm. Due to these limitations, it is not the most suitable technology for the artistic casting of smaller pieces. One alternative that overcomes the size limitation is selective laser sintering (SLS) printers; another is DMLS (Direct Metal Laser Sintering), in which a laser hardens metal powder layer by layer. However, due to their high cost, these technologies are difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolutions for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or adaptation to processes such as electroforming.
In this work, the design of a DLP 3D printer is detailed, using LCD screens backlit with ultraviolet light. Its development is entirely open source, and it is proposed as a kit made up of Arduino-based electronic components and mechanical components that are easy to source on the market. The CAD files for its components can be manufactured on low-cost FDM 3D printers. The result is a printer costing less than 500 euros, with high resolution and an open design with free access that allows not only its manufacture but also its improvement. In future work, we intend to carry out comparative analyses that allow us to accurately estimate the print quality, as well as the real cost of the artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
516 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, reflecting the fire's behaviour, to the FEA solver in a series of iterations. Our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, has shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data.
One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
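The two-way coupling iteration itself can be pictured as a fixed-point loop in which the fire (CFD) and structural (FEA) solvers exchange fields until they agree. The solver stand-ins and coefficients below are invented for illustration and bear no relation to FDS, Abaqus, or the Trisomet panel data:

```python
def cfd_step(deformation):
    """Toy stand-in for the CFD fire model: gas temperature rises as panel
    deformation opens joints (purely illustrative physics)."""
    return 600.0 + 80.0 * deformation

def fea_step(temperature):
    """Toy stand-in for the FEA solver: deformation grows with temperature."""
    return 0.002 * (temperature - 20.0)

# Fixed-point two-way coupling: alternate the solvers until the exchanged
# fields stop changing, mirroring the iterative scheme described above.
deformation, temperature = 0.0, 0.0
for i in range(100):
    new_temperature = cfd_step(deformation)
    new_deformation = fea_step(new_temperature)
    if abs(new_deformation - deformation) < 1e-9:
        break
    deformation, temperature = new_deformation, new_temperature

print(f"converged after {i} iterations: T = {temperature:.1f} °C")
```

The loop converges here because the toy coupling is a contraction; in the real CFD-FEA problem each "step" is an expensive transient simulation, which is precisely the cost the ROM approach targets.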
Procedia PDF Downloads 79
515 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, the aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this respect, it is important to consider the impact of atmospheric factors in general, and of natural erosion in particular, on soil evolution, its characterization, and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area concerns crop damage by wind, which is a booming field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced a multi-layer description of soil surface humidity to take into account a volume component in the backscattered radar signal.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation because of the vegetation's backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the soil surface layer. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model which has been used to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and the soil below.
Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
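The multi-scale surface description can be illustrated with the simplest instance of the Mallat cascade: a Haar-style split of a height profile into a coarse approximation plus per-scale detail (roughness) coefficients. This is a didactic sketch only (an unnormalised averaging variant on a synthetic profile), not the 2D MLS model of the paper:

```python
def haar_step(signal):
    """One Mallat analysis step with a Haar-style filter: split a profile into
    a coarse approximation and the detail (roughness) at the current scale.
    Unnormalised averaging variant; orthonormal Haar would divide by sqrt(2)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def multiscale(profile, levels):
    """Cascade of analysis steps: a 1-D height profile becomes one coarse
    trend plus a list of detail coefficients, one list per spatial scale."""
    details = []
    approx = list(profile)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

heights = [0.0, 0.4, 0.1, 0.5, 1.0, 1.4, 1.1, 1.5]  # synthetic surface heights
coarse, details = multiscale(heights, levels=2)
print(coarse, details)
```

The 2D MLS description of the paper generalises this idea: each scale's detail is modelled as a Gaussian process, and the decomposition runs over a two-dimensional surface rather than a single profile.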
Procedia PDF Downloads 269
514 Cognitive Deficits and Association with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder in 22q11.2 Deletion Syndrome
Authors: Sinead Morrison, Ann Swillen, Therese Van Amelsvoort, Samuel Chawner, Elfi Vergaelen, Michael Owen, Marianne Van Den Bree
Abstract:
22q11.2 Deletion Syndrome (22q11.2DS) is caused by the deletion of approximately 60 genes on chromosome 22 and is associated with high rates of neurodevelopmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD). The presentation of these disorders in 22q11.2DS is reported to be comparable to idiopathic forms and therefore provides a valuable model for understanding the mechanisms of neurodevelopmental disorders. Cognitive deficits are thought to be a core feature of neurodevelopmental disorders and possibly manifest in behavioural and emotional problems. Findings in 22q11.2DS have been mixed on whether the presence of ADHD or ASD is associated with greater cognitive deficits. Furthermore, the influence of developmental stage has never been taken into account. The aim was therefore to examine whether the presence of ADHD or ASD was associated with cognitive deficits in childhood and/or adolescence in 22q11.2DS. We conducted the largest study of this kind in 22q11.2DS to date. The same battery of tasks measuring processing speed, attention and spatial working memory was completed by 135 participants with 22q11.2DS. Wechsler IQ tests were completed, yielding Full Scale (FSIQ), Verbal (VIQ) and Performance IQ (PIQ). Age-standardised difference scores were produced for each participant. Developmental stages were defined as childhood (6-10 years) and adolescence (10-18 years). ADHD diagnosis was ascertained from a semi-structured interview with a parent. ASD status was ascertained from a questionnaire completed by a parent. Interaction and main effects of ADHD or ASD diagnosis and developmental stage on cognitive performance were tested with 2x2 ANOVAs. Significant interactions were followed up with t-tests of simple effects.
Adolescents with ASD displayed greater deficits on all measures (processing speed, p = 0.022; sustained attention, p = 0.016; working memory, p = 0.006) than adolescents without ASD; there was no difference between children with and without ASD. There were no significant differences on the IQ measures. Both children and adolescents with ADHD displayed greater deficits in sustained attention (p = 0.002) than those without ADHD; there were no significant differences on any other measure for ADHD. The magnitude of cognitive deficit in individuals with 22q11.2DS varied by cognitive domain, developmental stage, and the presence of a neurodevelopmental disorder. That adolescents with 22q11.2DS and ASD showed greater deficits on all measures suggests there may be a sensitive period in childhood for acquiring these domains, or may reflect the increasing social and academic demands of adolescence. The finding of poorer sustained attention in children and adolescents with ADHD supports previous research and suggests a specific deficit that can be separated from processing speed and working memory. This research provides unique insights into the association of ASD and ADHD with cognitive deficits in a group at high genomic risk of neurodevelopmental disorders.
Keywords: 22q11.2 deletion syndrome, attention deficit hyperactivity disorder, autism spectrum disorder, cognitive development
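The simple-effects follow-ups reported above amount to two-sample comparisons within each developmental stage. A minimal sketch of such a comparison (Welch's t statistic, chosen here because it tolerates unequal variances; the scores are invented, not study data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for one simple-effect comparison, e.g. adolescents
    with vs without ASD on one age-standardised difference score."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical age-standardised difference scores (illustrative only):
asd = [-1.2, -0.8, -1.5, -0.9, -1.1]
no_asd = [-0.2, 0.1, -0.4, 0.0, -0.3]
print(f"t = {welch_t(asd, no_asd):.2f}")
```

In practice the p-value would come from the t distribution with Welch-Satterthwaite degrees of freedom, and the 2x2 interaction would be fitted first; only the statistic itself is sketched here.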
Procedia PDF Downloads 151
513 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections) - the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate application of traffic controls. Intersections initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments, and many accidents involving pedestrians occur at locations that should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require the costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community’s needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speeds of roadways, topographic details, and immovable structures.
The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risk of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and to estimate the probabilities of jaywalking and the risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities that conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
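The visibility evaluation described above can be reduced, in its simplest form, to a line-of-sight test over an elevation raster. The grid, heights, and eye-level offset below are assumptions for illustration; a production GIS viewshed handles cell size, earth curvature, and ray geometry far more carefully:

```python
def line_of_sight(elev, a, b, eye_height=1.5):
    """Check whether an observer at grid cell a can see a target at cell b
    over a small elevation raster elev[row][col] (heights in metres).
    Simplified: sample cells along the straight line between the two cells
    and compare terrain height against the interpolated sight line."""
    (r0, c0), (r1, c1) = a, b
    h0 = elev[r0][c0] + eye_height
    h1 = elev[r1][c1] + eye_height
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for s in range(1, steps):
        t = s / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        sight = h0 + t * (h1 - h0)          # height of the sight line here
        if elev[r][c] > sight:              # terrain or building blocks the ray
            return False
    return True

grid = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],   # a 9 m building footprint blocking the middle of the block
    [0, 0, 0, 0],
]
print(line_of_sight(grid, (0, 0), (2, 3)))  # → False
```

Running the same test between every driver position and crosswalk cell is one way such a model could flag candidate blind intersections for closer review.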
Procedia PDF Downloads 109
512 Belarus Rivers Runoff: Current State, Prospects
Authors: Aliaksandr Volchak, Maryna Barushka
Abstract:
The territory of Belarus is quite well studied in terms of hydrology, but runoff fluctuations over time require more detailed research in order to forecast changes in river runoff in the future. Generally, river runoff is shaped by natural climatic factors, but man-induced impact has lately become so great that it is comparable to natural processes in forming runoff. In Belarus, a heavy anthropogenic load on the environment was caused by the large-scale land reclamation of the 1960s. The lands of southern Belarus were reclaimed most, which contributed to changes in runoff. Besides, global warming influences runoff: today we observe an increase in air temperature, a decrease in precipitation, and changes in wind velocity and direction. These result from cyclic climate fluctuations and, to some extent, from the growing concentration of greenhouse gases in the air. Climate change affects Belarus’s water resources in different ways: in the hydropower industry, other water-consuming industries, water transportation, agriculture, and flood risks. In this research we have assessed river runoff according to the scenarios of climate change and the global climate forecast presented in the 4th and 5th Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC), later specified and adjusted by experts from Vilnius Gediminas Technical University with the use of a regional climatic model. In order to forecast changes in climate and runoff, we analyzed their changes from 1962 up to now. This period is divided into two: the changes from 1986 up to now are compared with those observed from 1961 to 1985, which is a common world-wide practice. The assessment has revealed that, on average, changes in runoff are insignificant all over the country, apart from a slight increase of 0.5-4.0% in the catchments of the Western Dvina River and the north-eastern part of the Dnieper River.
However, changes in runoff have become more irregular, both across catchment areas and in the inter-annual distribution over seasons and along river lengths. Rivers in southern Belarus (the Pripyat, the Western Bug, the Dnieper, the Neman) experience a reduction in runoff all year round except for winter, when their runoff increases; the Western Bug catchment is an exception, its runoff decreasing all year round. Significant changes are observed in spring: the runoff of spring floods decreases, but the flood comes much earlier. There are different trends in runoff changes in spring, summer, and autumn. In summer in particular, we observe runoff reduction in the south and west of Belarus, with growth in the north and north-east. Our forecast of runoff up to 2035 confirms the trend revealed in 1961-2015. According to it, there will in future be a strong difference between northern and southern Belarus and between small and big rivers. Although we predict insignificant overall changes in runoff, it is quite possible that they will be uneven across seasons or particular months. In particular, runoff may change in summer but decrease in the remaining seasons in the south of Belarus, whereas in the northern part runoff is predicted to change insignificantly.
Keywords: assessment, climate fluctuation, forecast, river runoff
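The baseline-versus-recent comparison used above (1961-1985 against 1986 onwards) boils down to a percentage change of period means. A sketch with synthetic annual values (the discharge numbers are invented, not Belarusian gauge data):

```python
from statistics import mean

def period_change(series, split_year):
    """Percentage change of mean annual runoff after split_year relative to
    the baseline up to and including it - the two-period comparison above."""
    base = [q for y, q in series if y <= split_year]
    recent = [q for y, q in series if y > split_year]
    return 100.0 * (mean(recent) - mean(base)) / mean(base)

# Synthetic annual runoff (year, discharge in m³/s), illustration only:
annual = [(1961 + i, 100 + (3 if 1961 + i > 1985 else 0)) for i in range(55)]
print(f"{period_change(annual, 1985):+.1f}%")  # → +3.0%
```

Applied per season rather than per year, the same computation exposes the uneven seasonal pattern the abstract describes (winter increases against reductions in the other seasons).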
Procedia PDF Downloads 121