Search results for: copy number variant
128 The Negative Effects of Controlled Motivation on Mathematics Achievement
Authors: John E. Boberg, Steven J. Bourgeois
Abstract:
The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effects on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that lead to decreased intrinsic motivation and achievement. 
Such practices also appear to thwart basic human needs such as relatedness.
Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust
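To make the modeling step concrete, the following is a minimal sketch of the kind of structural equation model described above, written for the Python semopy package. The variable and file names are hypothetical, and the paths shown are a simplified reading of the abstract, not the authors' exact model.

```python
# A minimal SEM sketch (assumes the semopy package); names are hypothetical.
import pandas as pd
import semopy

desc = """
# latent engagement factors measured by hypothetical questionnaire items
emotional =~ ee1 + ee2 + ee3
cognitive =~ ce1 + ce2 + ce3
# structural paths: incentives -> controlled motivation -> achievement
controlled_motivation ~ parental_incentives
math_achievement ~ controlled_motivation + parental_incentives + trust + emotional + cognitive
emotional ~ trust
cognitive ~ trust
"""

data = pd.read_csv("survey_scores.csv")  # hypothetical multi-source dataset
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())            # path estimates, including direct effects
print(semopy.calc_stats(model))   # model fit indices (CFI, RMSEA, etc.)
```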
Procedia PDF Downloads 221
127 Supporting 'vulnerable' Students to Complete Their Studies During the Economic Crisis in Greece: The Umbrella Program of International Hellenic University
Authors: Rigas Kotsakis, Nikolaos Tsigilis, Vasilis Grammatikopoulos, Evridiki Zachopoulou
Abstract:
During the last decade, Greece has faced an unprecedented financial crisis, affecting various aspects and functions of higher education. Besides the restricted funding of academic institutions, students and their families encountered economic difficulties that undoubtedly influenced the effective completion of studies. In this context, a fairly large number of students at the Alexander campus of International Hellenic University (IHU) delay, interrupt, or even abandon their studies, especially when they come from low-income families, belong to sensitive social or special-needs groups, have different cultural origins, etc. For this reason, a European project named “Umbrella” was initiated, aiming to provide the necessary psychological support and counseling, especially to disadvantaged students, towards the completion of their studies. To this end, a network of various academic members (academic staff and students) from IHU, named iMentor, was implicated in different roles. Specifically, experienced academic staff trained students to serve as intermediate links for the integration and educational support of students who fall into the aforementioned sensitive social groups and face problems in completing their studies. The main idea of the project rests upon its person-centered character, which facilitates direct student-to-student communication without the intervention of the teaching staff. The backbone of the iMentor network consists of senior students who face no problems in their academic life and volunteered for this project. It should be noted that the Umbrella structure provides substantial and ethical rewards for their engagement. In this context, a well-defined, stringent methodology was implemented to evaluate the extent of the problem at IHU and to detect the profile of the “candidate” disadvantaged students. The first phase included two steps: (a) data collection and (b) data cleansing/preprocessing. The first step involved collecting data from the secretary services of all schools at IHU, from 1980 to 2019, which resulted in 96,418 records. The data set included the school name, the semester of studies, a student enrollment criterion, the nationality, and the graduation year or the current, up-to-date academic state (still studying, delayed, dropped out, etc.). The second step of the employed methodology involved data cleansing/preprocessing because of the existence of “noisy” data, missing and erroneous values, etc. Furthermore, several assumptions and grouping actions were imposed to achieve data homogeneity and an easy-to-interpret subsequent statistical analysis. Specifically, the 40-year recording period was limited to the last 15 years (2004-2019), since in 2004 the Greek technological institutions evolved into higher education universities, leading to a stable and unified framework of studies. In addition, data concerning active students were excluded from the analysis, since the initial processing effort focused on detecting factors/variables that differentiated graduated students from those who abandoned their studies. The final working dataset included 21,432 records with only two categories of students: those who hold a degree and those who abandoned their studies. Findings of the first phase are presented across faculties and further discussed.
Keywords: higher education, student support, economic crisis, mentoring
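The cleansing and grouping steps described above can be illustrated with a short pandas sketch; the file and column names are hypothetical.

```python
# Illustrative sketch of the filtering/grouping described in the abstract.
import pandas as pd

records = pd.read_csv("secretary_records.csv")  # raw export, 96,418 records (hypothetical file)

# Keep the 2004-2019 window, when a unified framework of studies applies
records = records[records["enrollment_year"].between(2004, 2019)]

# Drop noisy rows: missing or erroneous status values
records = records.dropna(subset=["status"])

# Exclude active students; keep only graduates and those who abandoned studies
working = records[records["status"].isin(["graduated", "abandoned"])]
print(len(working))  # should approach the 21,432 records of the final dataset
```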
Procedia PDF Downloads 115
126 Introducing Transport Engineering through Blended Learning Initiatives
Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi
Abstract:
Undergraduate students entering university over the last two to three years tend to have been born in the mid-1990s. This generation of students has been exposed to the internet, and to a desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than traditional lecture or discussion formats. Furthermore, post-millennials’ perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from both practice and research perspectives. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent as a way to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of ‘blended learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources aimed to improve the learning environment for a large class of more than 450 students and to ensure that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available while delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large university courses.
Keywords: blended learning, highway design, teaching, transport planning
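A cohort comparison of the kind described can be sketched as a Welch's t-test in Python; the abstract does not state which statistical test was used, and the file and column names are hypothetical.

```python
# A minimal sketch of a 2016 vs 2017 cohort comparison on an assessment task.
import pandas as pd
from scipy import stats

c2016 = pd.read_csv("cven2401_2016.csv")["final_mark"].dropna()  # hypothetical files/columns
c2017 = pd.read_csv("cven2401_2017.csv")["final_mark"].dropna()

# Welch's t-test: does not assume equal variances between cohorts
t, p = stats.ttest_ind(c2017, c2016, equal_var=False)
print(f"mean 2016 = {c2016.mean():.1f}, mean 2017 = {c2017.mean():.1f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```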
Procedia PDF Downloads 149
125 Multiphysics Coupling between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a thermal protection system (TPS) for small reusable space launchers. The SIRIUS design was used for the S1 prototype. Multiphysics coupling between hypersonic reactive flow and thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. Flow around hypersonic flight vehicles is characterized by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real gas may be in equilibrium or in non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for thermal-structure analysis to simulate conjugate heat transfer, with conduction, free convection and radiation driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with AUSM+ flux-vector splitting for the coupled inviscid flux. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order; it provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is the k-omega SST model with a1 = 0.355 and the quadratic constitutive relation (QCR) option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6·Uinf² and omega = 5·Uinf/L, where L is the mean aerodynamic chord or a characteristic length. Modelling practices for hypersonic flow were put to use: an automatically coupled solver, adaptive mesh refinement to capture and refine the shock front, and the advancing-layer mesher with a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results.
Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel
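The modified Arrhenius form mentioned above for forward rate coefficients can be illustrated in a few lines of Python; the coefficients below are placeholders for illustration, not the actual Dunn/Kang values.

```python
# Modified Arrhenius forward rate coefficient: k_f = A * T**n * exp(-Ta / T).
import math

def arrhenius(A, n, Ta, T):
    """Forward rate coefficient at temperature T (K)."""
    return A * T**n * math.exp(-Ta / T)

# Placeholder coefficients for an O2 dissociation-like reaction (illustrative only)
A, n, Ta = 2.0e21, -1.5, 59500.0
for T in (300.0, 6000.0, 30000.0):
    print(f"T = {T:8.0f} K   k_f = {arrhenius(A, n, Ta, T):.3e}")
```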
Procedia PDF Downloads 72
124 A Case of Severe Iatrogenic Cushing’s Syndrome Followed by Adrenal Crisis, Multifocal Pneumonia, Sepsis, Pulmonary Embolism and Prolonged Adrenal Insufficiency
Authors: Jelena Maletkovic
Abstract:
Background: Endogenous Cushing’s syndrome is a rare disease, but iatrogenic or drug-related Cushing’s syndrome from glucocorticoid products is commonly seen in clinical practice. With high-dose and long-term use of glucocorticoids, patients can develop isolated hypothalamic-pituitary-adrenal (HPA) suppression, or HPA axis suppression can be accompanied by overt iatrogenic Cushing’s syndrome. This is a rare case in which severe Cushing’s syndrome developed from an unknown medication and was followed by severe and prolonged adrenal insufficiency and multiple potentially fatal complications. Case: This is a 37-year-old woman who presented to the Emergency Room (ER) with shortness of breath and chest pain. Four months prior to this presentation, the patient was a generally healthy woman who, looking for improvement in her appearance, visited a local rejuvenation clinic. After an initial consultation with a nurse, she was contacted by a physician over the phone and was advised to start taking multiple injectable medications that would arrive by mail. Medications without any labels on the bottles were delivered, and the patient started daily intramuscular injections. Over the next two months, she noticed rounding of her face and swelling around her eyes. She gained 20 pounds, mostly abdominal fat, and became extremely fatigued. Her leg muscles were visibly decreasing in size, and she felt significant muscle weakness. Unexplained bruising occurred. She started growing hair on her face and developed secondary amenorrhea. New severe back pain started. She developed depression and headaches. Finally, over a few days, a number of red-purple stretch marks that were sensitive and painful appeared over her abdomen and the upper parts of her arms and legs. She then became suspicious that these dramatic symptoms were caused by the injectable medication, and she discontinued the injections. Over the next few days she presented to the ER with low blood pressure and an oxygen saturation of 75%. Studies revealed extensive pneumonia as well as multiple pulmonary emboli. Her white blood cell count was elevated at 32,000, and she also had acute kidney failure on admission. She was treated for sepsis and was also given stress-dose steroids. Steroids were tapered over 48 hours and discontinued. After discharge home, on her first visit to the endocrinology clinic she had undetectable ACTH of < 2 pg/mL and undetectable 8 am cortisol of < 0.2 mcg/dL. She did not respond to an intramuscular injection of cosyntropin 250 mcg, and her repeat cortisol after 60 minutes was only 1 mcg/dL. The patient was diagnosed with adrenal insufficiency and was started on hydrocortisone 20 mg + 10 mg. It took close to two years of slow tapering for this patient’s HPA axis to recover and for all the sequelae of Cushing’s syndrome to resolve. Conclusion: Misuse and abuse of glucocorticoids have been present almost since these medications were discovered. This is a rare case in which not only did severe Cushing’s syndrome develop in its full clinical picture, but the patient also suffered multiple potentially fatal complications and prolonged adrenal insufficiency. Visits to herbal, rejuvenation, esthetic, and similar clinics are becoming more and more popular, and physicians need to be aware of the possibly non-benign nature of medications that their patients may be using.
Keywords: iatrogenic, Cushing's syndrome, adrenal crisis, steroid abuse
Procedia PDF Downloads 169
123 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada
Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath
Abstract:
Transportation is the fastest-growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO2 emissions from fuel combustion, and emissions from the transportation sector are the second-largest source of emissions after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is currently investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEB) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable implementation of these goals, relevant indicators were selected and defined that measure progress towards a sustainable transportation system. These were drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators. A normalized transport sustainability index measures and compares policies in different urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision-making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies. It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems. These prioritize indicators appropriate for the different Canadian transit system agencies and their preferences and requirements. This study will develop an integrating index that allies existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected, and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement of the relative importance of different attributes with respect to the goals, expressed through a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI), which supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social
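The AHP weighting step described above can be sketched as follows: weights are taken from the principal eigenvector of the pairwise comparison matrix, and Saaty's consistency ratio checks the coherence of the expert judgments. The 3x3 matrix (environment vs. economy vs. society) is illustrative only.

```python
# AHP sketch: weights from the principal eigenvector of a pairwise matrix.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])   # hypothetical expert pairwise judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # normalized weights for the three domains

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random consistency index
print("weights:", w)
print("consistency ratio:", ci / ri)      # should be < 0.1 for acceptable judgments
```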
Procedia PDF Downloads 129
122 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys
Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz
Abstract:
There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar-absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently reduce the time for designing and developing new materials with novel/enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrodinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolutions and the magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP). The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the MD simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculating radial distribution functions (RDF), interatomic distances, coordination numbers and core-to-surface concentration profiles, as well as by Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed by using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations for the structural evolutions and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys
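As an illustration of one of the structural analyses mentioned, a minimal radial distribution function sketch in numpy follows. Positions are assumed to come from a LAMMPS dump, and the normalization is a crude bulk-style one (for a finite nanoparticle the reference density requires more care).

```python
# Sketch of g(r) for a configuration snapshot; positions: array (N, 3) in Å.
import numpy as np

def rdf(positions, r_max=10.0, n_bins=100):
    n = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]             # unique pair distances
    hist, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    # crude normalization by ideal-gas pair counts in each spherical shell
    volume = (4 / 3) * np.pi * r_max**3        # sphere enclosing the particle
    rho = n / volume
    shell = 4 * np.pi * r**2 * np.diff(edges)
    ideal_pairs = rho * shell * n / 2
    return r, hist / ideal_pairs

positions = np.random.default_rng(0).uniform(0, 20, (500, 3))  # placeholder data
r, g = rdf(positions)
print(g[:5])
```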
Procedia PDF Downloads 245
121 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging. It requires special equipment for data capturing and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. Thus, integrating the existing terrain that surrounds buildings into the digital model is essential to enable several simulations, such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used to provide a 3D BIM model for the site and the existing buildings based on the case study of the “Ecole Spéciale des Travaux Publics (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters. In this work, the campus precise levelling grid according to the NGF-IGN69 altimetric system and the grid control points are computed according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced. The DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
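A simple DTM of the kind described can be sketched by interpolating surveyed (x, y, z) points onto a regular grid; the file name, grid resolution, and query coordinates below are hypothetical.

```python
# DTM sketch: interpolate surveyed heights and query the ground surface.
import numpy as np
from scipy.interpolate import griddata

# x, y (Lambert 93) and z (NGF-IGN69) columns; hypothetical survey export
pts = np.loadtxt("levelling_grid.csv", delimiter=",", skiprows=1)
xy, z = pts[:, :2], pts[:, 2]

# Regular grid over the 5 ha site
xi = np.linspace(xy[:, 0].min(), xy[:, 0].max(), 200)
yi = np.linspace(xy[:, 1].min(), xy[:, 1].max(), 200)
XI, YI = np.meshgrid(xi, yi)
ZI = griddata(xy, z, (XI, YI), method="linear")   # interpolated ground surface

# Ground height at a building corner (hypothetical coordinates)
corner_z = griddata(xy, z, [(652300.0, 6861200.0)], method="linear")
print(corner_z)
```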
Procedia PDF Downloads 114
120 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age
Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner
Abstract:
Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic, with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have largely been ineffective or unjust, with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) has undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. The team employed a range of participatory techniques to elicit young people’s views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer ‘safe sexting’ education intervention. Findings: This presentation will highlight the ongoing, ‘old school’ sexual double standards at work within this new digital frontier. In the sharing of ‘nudes’ (teens’ preferred term for ‘sexting’) via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and ‘slut-shamed.’ In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply ‘tough it out.’ The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of the digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens and the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics and the digital mobility of sexual images created by and circulated amongst young people.
Keywords: teens, sexting, gender, sexual politics
Procedia PDF Downloads 238
119 Poverty Reduction in European Cities: Local Governments’ Strategies and Programmes to Reduce Poverty; Interview Results from Austria
Authors: Melanie Schinnerl, Dorothea Greiling
Abstract:
In the context of the Europe 2020 strategy, poverty and the fight against it have returned to the center of national political efforts. This served as motivation for an Austrian research-grant-funded project to focus on the under-researched local government level, with the aim of identifying municipal best-practice cases and deriving policy implications for Austria. Designing effective poverty reduction strategies is a complex challenge which calls for an integrated multi-actor approach. Cities are increasingly confronted with the need to combat poverty, even in rich EU member states. In doing so, cities face substantial demographic, cultural, economic and social challenges as well as changing welfare state regimes. Furthermore, there is a low willingness of (right-wing) governments to support the poor. Against this background, the research questions are: 1. How do local governments define poverty? 2. Who are the main risk groups and what are the most pressing problems when fighting urban poverty? 3. What is regarded as successful anti-poverty initiatives? 4. What is the underlying welfare state concept? To address the research questions, a multi-method approach was chosen, consisting of a systematic literature analysis, a comprehensive document analysis, and expert interviews. For interpreting the data, the project follows the qualitative-interpretive paradigm. Municipal approaches to reducing poverty are compared based on deductively as well as inductively identified criteria. In addition to an intensive literature analysis, 40 interviews have been conducted in Austria since the project started in March 2018. From the other countries, 14 responses have been collected, providing a first insight. Regarding the definition of poverty, the EU-SILC definition, as well as counting the persons who receive need-based minimum social benefits (the Austrian form of social welfare), are the predominant approaches in Austria. In addition to homeless people, single-parent families, unskilled persons, long-term unemployed persons, migrants (first and second generation), refugees and families with at least three children were frequently mentioned. The most pressing challenges for Austrian cities are: expected reductions of social budgets, great uncertainty about the central government's social policy reform plans, the growing number of homeless people, and a lack of affordable housing. Together with affordable housing, old-age poverty will gain more importance in the future. The Austrian best-practice examples suggested by interviewees focused primarily on homeless people, children, and young people (up to 25). Central government policy changes have already had negative effects on programs for refugees and elderly unemployed people. Social housing in Vienna was frequently mentioned as an international best-practice case that other growing cities can learn from. The results from Austria indicate a change towards the social investment state, which primarily focuses on children and labour market integration. The first insights from the other countries indicate that affordable housing and labour market integration are cross-cutting issues. Inherited poverty and old-age poverty seem to be more pressing outside Austria.
Keywords: anti-poverty policies, European cities, empirical study, social investment
Procedia PDF Downloads 118
118 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi
Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev
Abstract:
Fusarium fungi are among the most devastating plant pathogens distributed all over the world. Significant reductions in grain yield and quality caused by Fusarium lead to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color and appearance on specific culture media is often very complicated due to the high similarity of these features among closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS), which is considered to be the universal DNA barcode for Fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species. Multilocus phylogenetic analysis based on partial sequences of TEF1α, FXN and HSP90, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB) and phosphate permease (PHO) genes, has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers, suitable for evolutionary and taxonomic studies of the Fusarium genus. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than the other markers, including TEF1α (12% and 9%, respectively), when used for elucidating phylogenetic relationships between F. avenaceum and its closest relatives – F. tricinctum, F. acuminatum, F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed the presence of a separate cluster of four F. avenaceum strains which were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used for the first time to develop specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).
Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy
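The variable and parsimony-informative site percentages reported above can be computed from an alignment with a short function like the following sketch; the toy alignment is illustrative.

```python
# Count variable and parsimony-informative sites in an alignment.
from collections import Counter

def site_stats(alignment):
    """alignment: list of equal-length aligned sequences (strings)."""
    variable = informative = 0
    for col in zip(*alignment):
        counts = Counter(c for c in col if c not in "-N")  # ignore gaps/ambiguity
        if len(counts) > 1:
            variable += 1
            # parsimony-informative: >= 2 states each present in >= 2 sequences
            if sum(1 for v in counts.values() if v >= 2) >= 2:
                informative += 1
    n_sites = len(alignment[0])
    return 100 * variable / n_sites, 100 * informative / n_sites

aln = ["ATGCCGA", "ATGACGA", "TTGACGA", "TTGCCGT"]  # toy alignment
print(site_stats(aln))  # (% variable, % parsimony-informative)
```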
Procedia PDF Downloads 324
117 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. It is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use spike-timing-dependent plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
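A minimal numpy sketch of the described mechanism follows; it is not the authors' Bindsnet implementation, and the relation matrix, time constants, and firing orders are illustrative. A spike's impact is scaled by the recent activity of neighbouring synapses, so forward and reversed input orders evoke different membrane responses.

```python
# LIF neuron with pairwise synaptic relations (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_in, T, dt = 5, 100, 1.0
tau_v, tau_a = 20.0, 10.0

w = np.full(n_in, 0.3)                     # synaptic weights
R = rng.normal(0.0, 0.5, (n_in, n_in))     # dendritic relations (could be learned via STDP)
np.fill_diagonal(R, 0.0)

def run(spike_times):
    """spike_times[i] = time step at which input i fires; returns peak potential."""
    v, a = 0.0, np.zeros(n_in)             # membrane potential, synaptic activity traces
    peak = 0.0
    for t in range(T):
        s = np.array([1.0 if spike_times[i] == t else 0.0 for i in range(n_in)])
        gain = 1.0 + R @ a                 # neighbour activity scales each synapse
        v += dt * (-v / tau_v) + np.sum(w * gain * s)
        a += dt * (-a / tau_a) + s
        peak = max(peak, v)
    return peak

order = [10, 20, 30, 40, 50]
print(run(dict(enumerate(order))))          # forward sequence
print(run(dict(enumerate(order[::-1]))))    # reversed sequence: different response
```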
Procedia PDF Downloads 134
116 Quality in Healthcare: An Autism-Friendly Hospital Emergency Waiting Room
Authors: Elena Bellini, Daniele Mugnaini, Michele Boschetto
Abstract:
People with an autism spectrum disorder and an intellectual disability who need to attend a hospital emergency waiting room frequently present high levels of discomfort and challenging behaviors due to stress-related hyperarousal, sensory sensitivity, novelty anxiety, and communication and self-regulation difficulties. Increased agitation and acting out also disturb the diagnostic and therapeutic processes and the emergency room climate. Architectural design disciplines aimed at reducing distress in hospitals or creating autism-friendly environments are called upon to find effective answers to this particular need. A growing number of researchers are considering the physical environment as an important point of intervention for people with autism. It has been shown that providing the right setting can help enhance confidence and self-esteem and can have a profound impact on health and wellbeing. Environmental psychology has evaluated the perceived quality of care, looking at the design of hospital rooms, paths and circulation, waiting rooms, services and devices. Furthermore, many studies have investigated the influence of the hospital environment on patients, in terms of stress reduction and the speed of therapeutic intervention, but also on health professionals and their work. Several services around the world are organizing autism-friendly hospital environments, involving both the architecture and specific staff training. In Italy, the association Spes contra spem promoted and published, in 2013, the ‘Charter of disabled people in the hospital’. It stipulates that disabled people should have equal rights to accessible and high-quality care. There are a few Italian examples of therapeutic programmes for autistic people, such as the Dama project in Milan and the recent experience of the Children and Autism Foundation in Pordenone. Careggi’s emergency waiting room in Florence has been built to meet this challenge. This research project comes from a collaboration between the technical staff of Careggi Hospital, the Center for Autism PAMAPI, and architects with expertise in sensory environments. The focus-group methodology involved architects, psychologists and professionals in transdisciplinary research centered on the links between spatial characteristics and the clinical state of people with ASD. The relationship between architectural space and quality of life is studied to pay maximum attention to users’ needs and to support the medical staff in their work through a specific training program. The result of this research is a set of criteria used to design the emergency waiting room, which will be illustrated. A protected room with a clear spatial design maximizes comprehension and predictability. The multisensory environment is intended to help sensory integration and relaxation. Visual communication through an iPad allows an anticipated understanding of medical procedures, and a specific technological system supports requests, choices and self-determination in order to fit sensory stimulation to personal preferences, especially for hypo- and hypersensitive people. All these characteristics should ensure better regulation of arousal, fewer behavior problems, and improved treatment accessibility, safety, and effectiveness. First results on patient-satisfaction levels will be presented.
Keywords: accessibility of care, autism-friendly architecture, personalized therapeutic process, sensory environment
Procedia PDF Downloads 268
115 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence – both physically and mentally. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to them? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. The Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data about the medical sound environment without consideration of any human experience and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.
Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
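One ingredient of such an analysis, the fraction of an alarm's duration during which the responsible nurse was directly exposed, can be sketched as a simple interval-overlap computation; the event tuples below are hypothetical codings, not the study's actual data format.

```python
# Sketch of an AST-style direct-exposure measure from coded events.
def overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap between two time intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def direct_exposure(alarm, locations):
    """alarm: (zone, t_start, t_end); locations: [(zone, t_start, t_end), ...]
    Returns the fraction of the alarm's duration spent in the alarming zone."""
    zone, t0, t1 = alarm
    exposed = sum(overlap(t0, t1, s, e) for z, s, e in locations if z == zone)
    return exposed / (t1 - t0)

# Hypothetical coded track of one nurse (zone, start, end in seconds)
nurse_track = [("bay_3", 0, 40), ("corridor", 40, 70), ("bay_3", 70, 120)]
print(direct_exposure(("bay_3", 30, 90), nurse_track))  # 0.5: half the alarm heard directly
```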
Procedia PDF Downloads 191
114 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over the organization of space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years, as well as the dynamic emergence of the private sector, has gradually resulted in a diverse universe of actors operating in outer space. These developments have created an increasing adverse impact on outer space sustainability, as the case of the growing number of space debris clearly demonstrates. The above landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space; hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management System (ATM). Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention. Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at the EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in light of the intervention of AI technologies. This paper aims at examining the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics, the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management, in toto.
Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).
Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
Procedia PDF Downloads 261
113 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. It nominally (unpowered) lies flat on the ground, and when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modeled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque is set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
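A crude first pass at the shape computation, without the paper's self-consistent lift-off determination or friction analysis, can be sketched by integrating actuator-induced curvature twice along the sheet and naively flattening segments that would penetrate the ground; all parameters below are illustrative.

```python
# Naive beam-shape sketch: curvature -> slope -> deflection, then ground clipping.
import numpy as np

L, n = 0.10, 500                     # 10 cm sheet, discretized into n points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# Piecewise-constant curvature (1/m) from 5 actuators, set by voltages (illustrative)
kappa = np.repeat([8.0, -4.0, 6.0, -4.0, 8.0], n // 5)

theta = np.cumsum(kappa) * dx        # slope from curvature (small-angle assumption)
w = np.cumsum(theta) * dx            # deflection from slope
w -= w.min()                         # shift so the lowest point touches the ground

grounded = w < 1e-6                  # segments treated as resting on the ground
print(f"{grounded.mean():.0%} of the sheet rests on the ground")
```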
Procedia PDF Downloads 181
112 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, and groove-shaped binding sites, such as protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as “difficult to drug” with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they can address “undruggable” intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that, their interactions with the membrane environment should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Møller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through the medium on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations. Insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for favorable membrane permeation.
These results are encouraging and provide a good example of how this theoretical approach can be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
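As a companion to the simulation analysis above, here is a minimal post-processing sketch that tracks a permeant's centre of mass along the membrane normal in a Gromacs trajectory using MDAnalysis. The file names, residue names, and sampling stride are assumptions for illustration; they are not taken from the study.

```python
import MDAnalysis as mda  # pip install MDAnalysis

# Minimal sketch (assumed file and residue names): follow the z-coordinate of
# a syringolin analogue's centre of mass relative to the bilayer centre, the
# basic observable behind the permeation events described above.

u = mda.Universe("system.gro", "traj.xtc")   # hypothetical topology/trajectory
permeant = u.select_atoms("resname SYR")     # hypothetical permeant residue name
bilayer = u.select_atoms("resname POPC")     # hypothetical lipid species

for ts in u.trajectory[::100]:               # sample every 100th frame
    dz = permeant.center_of_mass()[2] - bilayer.center_of_mass()[2]
    print(f"t = {ts.time/1000:8.2f} ns, z-offset = {dz:6.2f} A")
```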
Procedia PDF Downloads 189
111 The Use of Non-Parametric Bootstrap in Computing of Microbial Risk Assessment from Lettuce Consumption Irrigated with Contaminated Water by Sanitary Sewage in Infulene Valley
Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz
Abstract:
The metropolitan area of Maputo (Mozambique's capital city) is located in a semi-arid zone (800 mm annual rainfall) with 1,101,170 inhabitants. On the west side lie the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site the stormwater contaminated with sanitary sewage from Maputo, transported through a concrete open channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the most commonly consumed salad crop in daily meals, generally in fast food, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of lettuce, assessed using Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work is aimed at determining the annual risk arising from the consumption of lettuce grown in the Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g), and the reduction of pathogens (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. The water quality samples (E. coli) were collected in the stormwater channel from January 2016 to December 2018, comprising 65 samples, and the urban lettuce consumption data were collected through a survey in the Maputo metropolis covering 350 persons. A non-parametric bootstrap was performed, involving 10,000 iterations over the collected dataset, namely water quality (E. coli) and lettuce consumption. The dose-response models were: exponential for Cryptosporidium, Kummer confluent hypergeometric function (1F1) for Vibrio and Ascaris, and Gaussian hypergeometric function (2F1(a,b;c;z)) for norovirus. The annual infection risk estimates were performed using R 3.6.0 (R Core Team) software by Monte Carlo simulation (Latin hypercubes), a sampling technique involving 10,000 iterations. The annual infection risk values, expressed by the median and the 95th percentile, per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹), and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in greater risks than the tolerable levels (<10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the single-hit models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value 0.69), followed by water quality (Spearman rank value 0.69). The decision-makers (the Mozambique Government) must strengthen prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering.
Keywords: annual infections risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools
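The bootstrap-plus-dose-response chain described above can be sketched as follows (the study itself used R 3.6.0; this is a Python illustration). Every numeric input, including the pathogen-to-E. coli ratio and the exponential dose-response parameter, is an assumed placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal sketch (hypothetical data): non-parametric bootstrap of E. coli
# counts and lettuce consumption, propagated through an exponential
# dose-response model and annualised.

ecoli_per_ml = np.array([1e4, 5e4, 2e5, 8e4, 3e5])   # hypothetical water samples
lettuce_g = np.array([25, 40, 60, 30, 50])           # hypothetical daily intakes, g
water_ml_per_g = 0.108    # assumed residual irrigation water on the leaves
ratio_pathogen = 1e-5     # assumed pathogen : E. coli ratio
r = 0.0042                # assumed exponential dose-response parameter

n_iter = 10_000
dose = (rng.choice(ecoli_per_ml, n_iter) * ratio_pathogen
        * rng.choice(lettuce_g, n_iter) * water_ml_per_g)
p_daily = 1.0 - np.exp(-r * dose)            # exponential dose-response model
p_annual = 1.0 - (1.0 - p_daily) ** 365      # annualise the daily infection risk

print(f"median annual risk: {np.median(p_annual):.3e}")
print(f"95th percentile:    {np.percentile(p_annual, 95):.3e}")
```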
Procedia PDF Downloads 122
110 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy. However, they need modernizing, maintaining, and reinforcing interventions, which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow an accumulation of works on the same line, which could cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time did not increase very significantly. It is thus expectable that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
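A toy weighted-sum version of the multiobjective LP described above can be sketched with scipy, assuming illustrative costs, maintenance penalties, and priorities; the peak yearly investment is handled with an auxiliary variable bounded below by each year's spending. The paper's actual model is richer (operational constraints, a true multiobjective treatment), so this is only a structural sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch (toy data): weighted-sum scalarisation of the three
# objectives: peak yearly investment, total cost including extra maintenance
# on the backlog, and priority-weighted delay. All numbers are illustrative.

cost = np.array([10.0, 6.0, 8.0])   # renewal cost per section (assumed)
maint = np.array([2.0, 0.5, 1.0])   # extra maintenance per year postponed (assumed)
prio = np.array([3.0, 1.0, 2.0])    # priority weight per section (assumed)
S, Y = 3, 3                         # sections, planning years
w_peak, w_cost, w_delay = 1.0, 1.0, 0.5

# Variables: x[s,y] = fraction of section s renewed in year y, plus peak P.
n = S * Y
idx = lambda s, y: s * Y + y
c = np.zeros(n + 1)
for s in range(S):
    for y in range(Y):
        # postponing to year y incurs extra maintenance and priority delay
        c[idx(s, y)] = w_cost * (cost[s] + maint[s] * y) + w_delay * prio[s] * y
c[n] = w_peak                       # the peak-investment variable P

# Equality: each section is fully renewed over the horizon.
A_eq = np.zeros((S, n + 1))
for s in range(S):
    A_eq[s, [idx(s, y) for y in range(Y)]] = 1.0
b_eq = np.ones(S)

# Inequality: each year's investment must not exceed the peak P.
A_ub = np.zeros((Y, n + 1))
for y in range(Y):
    for s in range(S):
        A_ub[y, idx(s, y)] = cost[s]
    A_ub[y, n] = -1.0
b_ub = np.zeros(Y)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n + [(0, None)])
plan = res.x[:n].reshape(S, Y)
print("renewal fractions per section/year:\n", plan.round(2))
print("peak yearly investment:", res.x[n].round(2))
```

Sweeping the weights (w_peak, w_cost, w_delay) traces the kind of trade-off curve discussed in the abstract's final sentence.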
Procedia PDF Downloads 146
109 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A structured search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in the English language. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan, and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies, or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
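A small sketch of the aggregation step in the synthesis above. The eleven scores are placeholders spanning the reported 67.1-88.6 percent range rather than the actual extracted values, so the printed mean and SD will not reproduce the reported 82.49 (SD 5.95); the margin-of-error line mirrors the abstract's arithmetic (100 minus score).

```python
import statistics

# Minimal sketch (hypothetical values): summary statistics over exam scores
# extracted from the included studies. Placeholder data only.

scores = [67.1, 76.0, 79.5, 81.2, 82.0, 83.4, 84.8, 85.5, 86.9, 87.6, 88.6]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)   # sample standard deviation
print(f"mean = {mean:.2f} %, SD = {sd:.2f} %")
print(f"margin of error: {100 - max(scores):.1f} to {100 - min(scores):.1f} %")
```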
Procedia PDF Downloads 34
108 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China
Authors: Liuhui Zhu, Peng Zeng
Abstract:
With the rapid development in China, urbanization has entered the transformation and promotion stage, and its direction has shifted to overall regional synergy. China has a large number of towns and villages, with comparatively small scale and scattered distribution, which continually support and provide resources to cities, leading to urban-rural opposition; it is therefore difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages so that they form a synergetic system, joining the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the regional area. To answer the question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. Combing through relevant research and theoretical models, the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper develops a research framework to analyze the present situation of the towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages. The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. In terms of the organization network, from the perspectives of entity and non-entity, the paper analyzes the transportation network and the gravitational network. Based on the present situation analysis, optimization strategies are proposed in order to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods and then proposes optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the effect of gravity is limited to nearby towns and villages only. The gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, remaining at the ‘single heart and multi-core’ stage, and some specific optimization strategies are proposed. This study provides a regional view of the development of towns and villages and offers a research framework and methods for the towns and villages system aimed at forming an effective synergetic relationship between them, contributing to organizing resources, stimulating endogenous motivation, and forming counter-magnets to support urban-rural integration.
Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model
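Two of the quantitative ingredients above, the rank-size (Zipf) exponent and pairwise gravity, can be sketched in a few lines. The settlement scales and the distance are hypothetical, and the friction exponent b = 2 is the classical default, not necessarily the paper's calibration.

```python
import numpy as np

# Minimal sketch (hypothetical data): the rank-size exponent of settlement
# scale, and the gravity interaction between two settlements.

pop = np.array([85_000, 42_000, 30_500, 21_000, 15_800, 9_400])  # assumed scales
rank = np.arange(1, len(pop) + 1)

# Rank-size dimension: slope of log(size) against log(rank).
slope, _ = np.polyfit(np.log(rank), np.log(np.sort(pop)[::-1]), 1)
print(f"rank-size exponent q = {-slope:.2f}")  # q near 1 suggests a Zipf-like system

def gravity(m_i, m_j, d_ij, k=1.0, b=2.0):
    """T_ij = k * m_i * m_j / d_ij**b, with b = 2 as the classical friction exponent."""
    return k * m_i * m_j / d_ij ** b

print(f"interaction 1-2: {gravity(pop[0], pop[1], d_ij=12.0):.3e}")  # d in km, assumed
```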
Procedia PDF Downloads 138
107 First Attempts Using High-Throughput Sequencing in Senecio from the Andes
Authors: L. Salomon, P. Sklenar
Abstract:
The Andes hold the highest plant species diversity in the world. How this diversity arose is one of the most intriguing questions in studies addressing the origin and patterning of plant diversity worldwide. Recently, the explosive adaptive radiations found in high-Andean groups have been pointed to as triggers of this spectacular diversity. The Andes are the species-richest area for the largest genus of the Asteraceae family: Senecio. There, the genus presents an incredible diversity of species, striking growth-form variation, and a large niche span. Even where studies have tried to disentangle the evolutionary history of some Andean species of Senecio, they obtained partially resolved and poorly supported phylogenies, as expected for recently radiated groups. High-throughput sequencing (HTS) approaches have proved to be a powerful tool for answering phylogenetic questions in groups whose evolutionary histories are recent and for which traditional techniques like Sanger sequencing are not informative enough. Although these tools have been used to understand the evolution of an increasing number of Andean groups, they have not yet been applied to Senecio. This project aims to contribute to a better knowledge of the mechanisms shaping the hyperdiversity of Senecio in the Andean region using HTS, focusing on the recently recircumscribed Senecio ser. Culcitium (Asteraceae): first by reconstructing a highly resolved and well-supported phylogeny, and then by assessing the role of allopatric differentiation, hybridization, and genome duplication in the diversification of the group. Using the Hyb-Seq approach, combining target enrichment with Asteraceae COS loci baits and genome skimming, more than 100 new accessions were generated. The HybPhyloMaker and HybPiper pipelines were used for the phylogenetic analyses, and another pipeline in development (Paralogue Wizard) was used to deal with paralogues. RAxML was used to generate gene trees and ASTRAL for species tree reconstruction. Phyparts was used as a first step to explore gene-tree discordance along the clades. Fully resolved trees with moderate support were obtained, showing Senecio ser. Culcitium as monophyletic. Within the group, some species formed well-supported clades with morphologically related species, while other species would not have exclusive ancestry, in concordance with previous studies using amplified fragment length polymorphism (AFLP) showing geographical differentiation. Discordance between gene trees was detected. Paralogues were detected for many loci, indicating possible genome duplications; ploidy-level estimation using flow cytometry will be carried out in the coming months in order to identify the role of this process in the diversification of the group. Likewise, the TreeSetViz package for Mesquite, a hierarchical likelihood-ratio congruence test using Concaterpillar, and the Procrustean Approach to Cophylogeny (PACo) will be used to evaluate the congruence among different inheritance patterns. In order to evaluate the influence of hybridization and incomplete lineage sorting (ILS) in each clade resulting from the phylogeny, the method of Joly et al. (2009) in a coalescent scenario and Patterson’s D-statistic will be applied. Even though the main sources of discordance between gene trees have not yet been explored in detail, the data show that, at least to some degree, processes such as genome duplication, hybridization, and/or ILS could be involved in the evolution of the group.
Keywords: adaptive radiations, Andes, genome duplication, hybridization, Senecio
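Patterson's D-statistic mentioned above reduces to a simple site-pattern calculation; the sketch below uses hypothetical ABBA/BABA counts for taxa arranged as (((P1, P2), P3), outgroup).

```python
# Minimal sketch (hypothetical counts): Patterson's D-statistic from biallelic
# site-pattern counts. In practice the counts come from genome-wide SNPs.

n_abba = 1_432   # sites where P2 and P3 share the derived allele (assumed)
n_baba = 1_108   # sites where P1 and P3 share the derived allele (assumed)

D = (n_abba - n_baba) / (n_abba + n_baba)
print(f"D = {D:.3f}")  # D near 0 suggests ILS alone; D >> 0 suggests P2-P3 gene flow
```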
Procedia PDF Downloads 140
106 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user commercial health monitoring. The cheapest and most portable methods currently available are paper-based – lateral-flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate”, to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These approaches typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the “substrate” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
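The frequency-dependent interfacial response described above can be illustrated with a Clausius-Mossotti-style factor built from the complex permittivities of the two streams (Maxwell-Wagner polarisation). The conductivities and permittivities below are assumed values chosen so that Re(K) changes sign with frequency; they do not describe the experimental fluids.

```python
import numpy as np

# Minimal sketch (illustrative properties): frequency dependence of an
# interfacial polarisation factor between two co-flowing aqueous streams.
# At low frequency the conductivity mismatch dominates; at high frequency
# the permittivity mismatch does, so Re(K) can change sign.

eps0 = 8.854e-12
eps1, sigma1 = 78 * eps0, 0.05    # stream 1 permittivity, conductivity (assumed)
eps2, sigma2 = 90 * eps0, 0.005   # stream 2: higher permittivity, lower
                                  # conductivity (assumed, e.g., due to solutes)

def K(f):
    w = 2 * np.pi * f
    e1 = eps1 - 1j * sigma1 / w   # complex permittivity of stream 1
    e2 = eps2 - 1j * sigma2 / w
    return (e1 - e2) / (e1 + e2)  # Clausius-Mossotti-style interface factor

for f in [1e3, 1e5, 1e7, 1e9]:
    print(f"f = {f:8.0e} Hz, Re(K) = {K(f).real:+.3f}")
```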
Procedia PDF Downloads 388
105 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology
Authors: Michael Josef Schwerer
Abstract:
Background: Analyzing any aircraft accident is mandatory based on the regulations of the International Civil Aviation Organization and the respective country’s criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot’s aeromedical health status before the event. As a result of the frequently tremendous blunt and sharp force trauma, the impact of the aircraft with the ground, subsequent blast or fire exposure of the occupants, or putrefaction of the dead bodies in cases of delayed recovery, relevant findings can be masked or destroyed and are therefore inaccessible in standard pathology practice, which comprises just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription, and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of molecular genetic procedures were necessary when working with archived sample material. Standards for the proper interpretation of the respective findings had to be settled. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses potentially explain, for example, a pilot’s sudden incapacitation resulting from cardiac failure or myocardial arrhythmia. Conversely, negative results for infective agents help rule out concerns about an accident pilot’s fitness to fly and support the aeromedical examiner’s precedent decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting from, for instance, repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression will likely be mistaken for genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and potentially reduces the number of unsolved events in the future.
Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology
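PCR-based gene-regulation readouts such as those described above are commonly quantified by the 2^-ΔΔCt method; the sketch below shows that calculation on hypothetical Ct values. This is the standard relative-quantification method, not necessarily the exact pipeline used in these investigations.

```python
# Minimal sketch (hypothetical Ct values): relative gene expression by the
# 2^-ddCt method. Case tissue vs. control tissue, each normalised to a
# reference (housekeeping) gene.

ct_target_sample = 24.1   # assumed Ct of the gene of interest, case tissue
ct_ref_sample = 18.0      # assumed Ct of the reference gene, case tissue
ct_target_control = 26.5  # assumed Ct values for unaffected control tissue
ct_ref_control = 18.2

d_ct_sample = ct_target_sample - ct_ref_sample
d_ct_control = ct_target_control - ct_ref_control
dd_ct = d_ct_sample - d_ct_control
fold_change = 2 ** (-dd_ct)
print(f"ddCt = {dd_ct:+.2f}, fold change = {fold_change:.2f}x")
```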
Procedia PDF Downloads 47
104 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles, allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one; and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research into sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme presented here overcomes this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio upon which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that with very few modifications it can be adapted for commercial DFIGs, with immediately obvious industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region is demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
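The adaptation loop at the core of an MRAS observer can be caricatured as follows. The reference and adjustable models here are deliberately trivial stand-ins (a real BDFRG observer compares flux-linkage estimates and uses the inductance ratio discussed above); only the PI adaptation structure carries over, and the gains are chosen merely to make this toy loop converge.

```python
# Minimal sketch (toy models, illustrative gains): the PI adaptation loop that
# drives an MRAS observer. A reference-model output q_ref, independent of the
# speed estimate, is compared with an adjustable-model output q_adj(w_hat);
# the error feeds a PI law that pulls w_hat towards the true speed.

kp, ki = 0.2, 50.0          # assumed adaptation gains (stable for this toy loop)
dt, w_true = 1e-3, 157.0    # time step (s) and true electrical speed (rad/s)

w_hat, integ = 0.0, 0.0
for _ in range(3000):
    q_ref = w_true            # toy reference model output
    q_adj = w_hat             # toy adjustable model output
    err = q_ref - q_adj       # in a real MRAS this is a flux cross product
    integ += ki * err * dt    # integral channel of the adaptation law
    w_hat = kp * err + integ  # PI update of the speed estimate

print(f"estimated speed: {w_hat:.1f} rad/s (true: {w_true} rad/s)")
```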
Procedia PDF Downloads 62
103 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality
Authors: Peregrine James Dalziel, Philip Vu Tran
Abstract:
Objective / Introduction: Demand for radiology services from Emergency Departments (EDs) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability in process time, combined with the random nature of request arrivals, increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater “cognitive workload” of greater volume may lead to reduced productivity and increased errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia, and to assess the potential productivity gains, quality improvement, and cost-effectiveness of the increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single-resident coverage on weekend days (8:30 am-10:30 pm) to a limited period of two-resident coverage (1 pm-6 pm) on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetric around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volumes at the time of each CT scan were calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the two periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy. Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R²=0.29). Increasing daily volume and hourly volume were associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01), respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months after the implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents the increased decision-making time available to ED physicians and potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single-resident period, but this difference was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective compared with the potential improved productivity for ED cases requiring CT imaging.
Keywords: workflow, quality, administration, CT, staffing
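The regression described above has roughly the following shape; the sketch fits an OLS model on synthetic data whose generating coefficients are merely of similar magnitude to the reported effects, so the recovered parameters are simulated, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Minimal sketch (synthetic data): TTA-RR regressed on a double-coverage
# dummy and on daily/hourly scan volumes.

n = 500
double_cover = rng.integers(0, 2, n)       # 1 = two residents on site
daily_vol = rng.poisson(60, n)             # scans ordered that day
hourly_vol = rng.poisson(6, n)             # scans ordered that hour
tta = (80.0 - 25.0 * double_cover + 1.5 * daily_vol
       + 4.8 * hourly_vol + rng.normal(0, 15, n))   # minutes, with noise

X = sm.add_constant(np.column_stack([double_cover, daily_vol, hourly_vol]))
fit = sm.OLS(tta, X).fit()
print(fit.params.round(2))   # [intercept, coverage effect, daily, hourly]
```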
Procedia PDF Downloads 113
102 Long-Term Tillage, Lime Matter and Cover Crop Effects under Heavy Soil Conditions in Northern Lithuania
Authors: Aleksandras Velykis, Antanas Satkus
Abstract:
Clay loam and clay soils are typical of northern Lithuania. These soils are susceptible to physical degradation when heavy machinery is used intensively for field operations. However, clayey soils, having poor physical properties by origin, require more intensive tillage to maintain a proper physical condition for the crops grown. Therefore, not only is the choice of a suitable tillage system very important for these soils in the region, but the search for additional measures is also essential for maintaining a good soil physical state. Research objective: To evaluate the long-term effects of tillage of different intensities, as well as its combinations with supplementary agronomic practices, on the improvement of soil physical conditions and environmental sustainability. The experiment examined the influence of deep and shallow ploughing, ploughless tillage, combinations of ploughless tillage with incorporation of lime sludge and a cover crop for green manure, and application of the same cover crop for mulch without autumn tillage, under spring and winter crop growing conditions on clay loam (27% clay, 50% silt, 23% sand) Endocalcaric Endogleyic Cambisol. Methods: The indicators characterizing the impact of the investigated measures were determined using the following methods and devices: soil dry bulk density – by Eijkelkamp cylinder (100 cm³); soil water content – by weighing; soil structure – by Retsch sieve shaker; aggregate stability – by Eijkelkamp wet sieving apparatus; soil mineral nitrogen – in 1 N KCl extract using a colorimetric method. Results: The physical state of clay loam soil (dry bulk density, structure, aggregate stability, water content) depends on the tillage system and its combination with the additional practices used. Application of cover crop winter mulch without tillage in autumn, ploughless tillage, and shallow ploughing causes compaction of the bottom (15-25 cm) topsoil layer. However, under ploughless tillage the soil dry bulk density in the subsoil (25-35 cm) layer is lower than under deep ploughing. Soil structure in the upper (0-15 cm) topsoil layer and in the seedbed (0-5 cm) prepared for spring crops is usually worse when applying ploughless tillage or cover crop mulch without autumn tillage. Application of lime sludge under ploughless tillage conditions helped to avoid compaction and structure worsening in the upper topsoil layer, as well as to increase aggregate stability. Application of reduced tillage increased soil water content in the upper topsoil layer directly after spring crop sowing. However, with reduced tillage the water content of the whole topsoil decreased markedly when dry periods lasted a long time. Combining reduced tillage with a cover crop for green manure and winter mulch is significant for preserving the environment: such application of cover crops reduces the leaching of mineral nitrogen into the deeper soil layers and environmental pollution. This work was supported by the National Science Programme ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015] funded by the Research Council of Lithuania.
Keywords: clay loam, endocalcaric endogleyic cambisol, mineral nitrogen, physical state
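The two core field measurements named in the methods reduce to simple arithmetic; the masses below are hypothetical examples, not measured values from the experiment.

```python
# Minimal sketch (hypothetical masses): dry bulk density from a 100 cm3
# Eijkelkamp core, and gravimetric water content by weighing before and
# after oven drying.

core_volume_cm3 = 100.0   # Eijkelkamp cylinder volume
mass_wet_g = 182.4        # assumed field-moist core mass
mass_dry_g = 151.9        # assumed mass after oven drying

bulk_density = mass_dry_g / core_volume_cm3             # g/cm3
water_content = (mass_wet_g - mass_dry_g) / mass_dry_g  # g/g, gravimetric

print(f"dry bulk density: {bulk_density:.2f} g/cm3")
print(f"water content:    {water_content*100:.1f} % (by dry mass)")
```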
Procedia PDF Downloads 227
101 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The capability to move between places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly those focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commutes. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in the Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area, including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID; information associated with users includes name, number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time, covering 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z-axis can be selected for displaying the activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on ancillary layers, including land use and road networks, in the 3D visualization, the vivid representation of a realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual’s paths on different days shows regular patterns on weekdays for some Twitter users, while for other users the daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using a big data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
Keywords: geovisualization, human mobility pattern, Los Angeles, social media
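A minimal sketch of the kernel density estimation step used above to build activity hot-spot surfaces for a single time slice, run here on synthetic tweet coordinates near central Los Angeles rather than the collected data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Minimal sketch (synthetic points): KDE over tweet locations for one time
# slice, evaluated on a grid to form the hot-spot surface.

lon = rng.normal(-118.25, 0.15, 500)   # synthetic tweet longitudes
lat = rng.normal(34.05, 0.10, 500)     # synthetic tweet latitudes
kde = gaussian_kde(np.vstack([lon, lat]))

gx, gy = np.meshgrid(np.linspace(-118.6, -117.9, 40),
                     np.linspace(33.8, 34.3, 40))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
peak = np.unravel_index(density.argmax(), density.shape)
print(f"hottest cell near lon={gx[peak]:.2f}, lat={gy[peak]:.2f}")
```

Stacking such surfaces for successive time slices along a z-axis yields the 3D space-time view described in the abstract.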
Procedia PDF Downloads 121
100 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting cross-border, secure, and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) amongst both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first establishes a common background for the requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support these profiles; further efforts will thus be needed to promote this technology among companies. The consortium includes 9 partners. Of these, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operativity, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, and it will be the basis for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.
Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
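The cryptographic core that AS4/eDelivery profiles build upon, hash-and-sign with an X.509-style key pair, can be sketched as follows. A freshly generated RSA key and a placeholder payload stand in for a real certificate and e-invoice; an actual AS4 gateway wraps this step in SOAP/WS-Security messaging, which is omitted here.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Minimal sketch (self-generated key, placeholder payload): sign the payload
# digest on the sender side and verify it on the receiver side.

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = b"<Invoice>...</Invoice>"   # placeholder e-invoice body

signature = key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# Receiver side: verification raises InvalidSignature if the payload
# or the signature has been tampered with.
key.public_key().verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
print("signature verified OK")
```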
Procedia PDF Downloads 267
99 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1
Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar
Abstract:
Pasteurella multocida (PM) is an important veterinary opportunistic pathogen, particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis, and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. Pasteurella multocida subspecies multocida serotype A Malaysian isolate PMTB2.1 was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced using a third-generation sequencing technology, the PacBio RS2 system, and analyzed bioinformatically via de novo assembly followed by in-depth comparative genomics. De novo assembly of the PacBio raw reads generated 3 contigs; gap filling of the aligned contigs by PCR sequencing then produced a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs, and 4 ncRNA sequences. A comparative genome sequence analysis of PMTB2.1 was carried out over nine complete genomes, comprising Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli, and the P. multocida strains PM70, PM36950, PMHN06, PM3480, PMHB01, and PMTB2.1, based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 CDSs (13%) are unique to PMTB2.1 and that 1,125 CDSs have orthologs in all nine genomes. This reflects the overall close relationship of these bacteria and supports their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five P. multocida strains, with genomic distances of less than 0.13. Synteny analysis shows subtle differences in genetic structure among the different P. multocida strains, indicating the dynamics of frequent gene transfer events among them. However, PM3480 and PM70 exhibited exceptionally large structural variation, since they were swine and chicken isolates. Furthermore, the genomic structure of PMTB2.1 most closely resembles that of PM36950, with a genomic size difference of approximately 34,380 bp (PMTB2.1 being the smaller), and the strain-specific integrative and conjugative element (ICE) found in PM36950 is absent from PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found only in PMTB2.1; one of the phages is similar to the transposable phage SfMu. A phylogenomic tree was constructed based on OrthoMCL analysis and rooted with E. coli, A. pleuropneumoniae, and H. parasuis. The genome of P. multocida strain PMTB2.1 clustered with the bovine isolates PM36950 and PMHB01, was separated from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and is distant from Actinobacillus and Haemophilus. Previous studies based on single nucleotide polymorphisms (SNPs) and multilocus sequence typing (MLST) were unable to show a clear phylogenetic relatedness between P. multocida strains and their different hosts. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future studies elucidating the mechanisms behind the ability of the bacterium to cause disease in susceptible animals.
Keywords: comparative genomics, DNA sequencing, phage, phylogenomics
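The OrthoMCL/Venn-style core-versus-unique comparison described above amounts to set operations over ortholog-group membership; the gene sets below are toy placeholders, not the actual 2,176 CDSs.

```python
# Minimal sketch (toy gene sets): find ortholog groups shared by all genomes
# versus those unique to PMTB2.1. Real input would be ortholog-group
# membership tables from OrthoMCL, not these placeholder IDs.

genomes = {
    "PMTB2.1":  {"g01", "g02", "g03", "g07", "g09"},
    "PM70":     {"g01", "g02", "g04"},
    "PM36950":  {"g01", "g02", "g03", "g05"},
    "E. coli":  {"g01", "g06"},
}

core = set.intersection(*genomes.values())
others = set.union(*(s for k, s in genomes.items() if k != "PMTB2.1"))
unique_pmtb = genomes["PMTB2.1"] - others

print("core orthologs shared by all:", sorted(core))
print("unique to PMTB2.1:", sorted(unique_pmtb))
```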
Procedia PDF Downloads 188