Search results for: fuzzy logic based analysis

881 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12 since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while, at the same time, getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6- to 7-year-old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for its intended purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
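For readers unfamiliar with the object-oriented vocabulary used above (classes, objects, attributes), a minimal sketch follows. It is written in Python rather than in PhysGramming's own hybrid visual/text notation, and the class, attribute and method names are purely illustrative, not taken from the tool.

```python
# Minimal illustration of the class/object/attribute vocabulary discussed above.
# The names (MatchingCard, colour, is_flipped) are hypothetical examples, not
# PhysGramming syntax.

class MatchingCard:
    """A card in a matching game about physical science concepts."""

    def __init__(self, label, colour):
        self.label = label          # attribute: the concept shown on the card
        self.colour = colour        # attribute: the card's colour
        self.is_flipped = False     # attribute: current state of the card

    def flip(self):
        """Turn the card over (a behaviour shared by every card object)."""
        self.is_flipped = not self.is_flipped


# Two objects (instances) created from the same class:
ice = MatchingCard("ice", "blue")
steam = MatchingCard("steam", "white")
ice.flip()
print(ice.label, ice.is_flipped, steam.is_flipped)  # ice True False
```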

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 108
880 Management Potentialities of Rice Blast Disease Caused by Magnaporthe grisea Using New Nanofungicides Derived from Chitosan

Authors: Abdulaziz Bashir Kutawa, Khairulmazmi Ahmad, Mohd Zobir Hussein, Asgar Ali, Mohd Aswad Abdul Wahab, Amara Rafi, Mahesh Tiran Gunasena, Muhammad Ziaur Rahman, Md Imam Hossain, Syazwan Afif Mohd Zobir

Abstract:

Various abiotic and biotic stresses have an impact on rice production all around the world. The most serious and prevalent disease in rice plants, known as rice blast, is one of the major obstacles to the production of rice. It is one of the diseases with the greatest negative effects on rice farming globally; the disease is caused by the fungus Magnaporthe grisea. Since nanoparticles have been shown to have an inhibitory impact on certain types of fungi, nanotechnology is a novel approach to enhancing agriculture by battling plant diseases. Utilizing nanocarrier systems enables the active chemicals to be absorbed, attached, and encapsulated to produce efficient nanodelivery formulations. The objectives of this research work were to determine the efficacy and mode of action of the nanofungicides in vitro and under field conditions (in vivo). The ionic gelation method was used in the development of the nanofungicides. Using the poisoned media method, the synthesized agronanofungicides' in-vitro antifungal activity was assessed against M. grisea. The potato dextrose agar (PDA) was amended at several concentrations: 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30, and 0.35 ppm for the nanofungicides. Medium containing only the solvent served as a control. Every day, mycelial growth was measured, and PIRG (percentage inhibition of radial growth) was also computed. Based on the inhibition results, the chitosan-hexaconazole agronanofungicide (2 g/mL) was the most effective fungicide in inhibiting the growth of the fungus, with 100% inhibition at 0.2, 0.25, 0.30, and 0.35 ppm. It was followed by the analytical fungicide carbendazim, which inhibited the growth of the fungus (100%) at 5, 10, 25, 50, and 100 ppm. The least effective were found to be the propiconazole and basamid fungicides, with 100% inhibition only at 100 ppm. Scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), and transmission electron microscopy (TEM) were used to study the mechanisms of action on the M. grisea fungal cells. The results showed that carbendazim, chitosan-hexaconazole, and HXE were the most effective fungicides in disrupting the mycelia of the fungus and the internal structures of the fungal cells. The results of the field assessment showed that the CHDEN treatment (5 g/L, double dosage) was the most effective fungicide in reducing the intensity of the rice blast disease, with a DSI of 17.56%, lesion length of 0.43 cm, DR of 82.44%, AUDPC of 260.54 unit², and PI of 65.33%. The least effective treatment was found to be chitosan-hexaconazole-dazomet (2.5 g/L, MIC). The usage of CHDEN and CHEN nanofungicides will significantly assist in lessening the severity of rice blast in the fields, increasing output and profit for rice farmers.
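The PIRG values reported above follow the standard percentage-inhibition-of-radial-growth calculation; a short sketch of that arithmetic is given below, assuming R1 is the radial mycelial growth of the control plate and R2 that of the treated plate. The sample values are illustrative, not measurements from the study.

```python
def pirg(control_growth_mm, treated_growth_mm):
    """Percentage Inhibition of Radial Growth (PIRG).

    PIRG (%) = (R1 - R2) / R1 * 100, where R1 is the radial mycelial growth
    on the control plate and R2 the growth on the fungicide-amended plate.
    """
    return (control_growth_mm - treated_growth_mm) / control_growth_mm * 100.0


# Illustrative values only (not data from the study):
print(pirg(45.0, 0.0))    # 100.0 -> complete inhibition
print(pirg(45.0, 22.5))   # 50.0  -> partial inhibition
```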

Keywords: chitosan, hexaconazole, disease incidence, Magnaporthe grisea

Procedia PDF Downloads 50
879 Navigating States of Emergency: A Preliminary Comparison of Online Public Reaction to COVID-19 and Monkeypox on Twitter

Authors: Antonia Egli, Theo Lynn, Pierangelo Rosati, Gary Sinclair

Abstract:

The World Health Organization (WHO) defines vaccine hesitancy as the postponement or complete denial of vaccines and estimates a direct linkage to approximately 1.5 million avoidable deaths annually. This figure is not immune to public health developments, as has become evident since the global spread of COVID-19 from Wuhan, China in early 2020. Since then, the proliferation of influential, but oftentimes inaccurate, outdated, incomplete, or false vaccine-related information on social media has impacted hesitancy levels to a degree described by the WHO as an infodemic. The COVID-19 pandemic and related vaccine hesitancy levels have in 2022 resulted in the largest drop in childhood vaccinations of the 21st century, while the prevalence of online stigma towards vaccine hesitant consumers continues to grow. Simultaneously, a second disease has risen to global importance: Monkeypox is an infection originating from west and central Africa and, due to racially motivated online hate, was in August 2022 set to be renamed by the WHO. To better understand public reactions towards two viral infections that became global threats to public health not two years apart, this research examines user replies to threads published by the WHO on Twitter. Replies to two Tweets from the @WHO account declaring COVID-19 and Monkeypox as ‘public health emergencies of international concern’ on January 30, 2020, and July 23, 2022, are gathered using the Twitter application programming interface and the user mention timeline endpoint. The research methodology is unique in its analysis of stigmatizing, racist, and hateful content shared on social media within the vaccine discourse over the course of two disease outbreaks. Three distinct analyses are conducted to provide insight into (i) the most prevalent topics and sub-topics among user reactions, (ii) changes in sentiment towards the spread of the two diseases, and (iii) the presence of stigma, racism, and online hate. Findings indicate an increase in hesitancy to accept further vaccines and social distancing measures, the presence of stigmatizing content aimed primarily at anti-vaccine cohorts and racially motivated abusive messages, and a prevalent fatigue towards disease-related news overall. This research provides value to non-profit organizations or government agencies associated with vaccines and vaccination programs in emphasizing the need for public health communication fitted to consumers' vaccine sentiments, levels of health information literacy, and degrees of trust towards public health institutions. Considering the importance of addressing fears among the vaccine hesitant, findings also illustrate the risk of alienation through stigmatization, lead future research in probing the relatively underexamined field of online, vaccine-related stigma, and discuss the potential effects of stigma towards vaccine hesitant Twitter users on their decisions to vaccinate.
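The reply-collection step described above can be approximated with the Twitter API v2 mentions-timeline endpoint via the tweepy client. In the sketch below, the bearer token, the numeric @WHO account ID and the pagination limit are placeholders and assumptions for illustration, not values taken from the study.

```python
# Hedged sketch of the data-collection step using tweepy's wrapper for the
# Twitter API v2 user mention timeline endpoint. Credentials, the account ID
# and limits are placeholders; verify them before use.
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # placeholder credential
WHO_USER_ID = 14499829               # assumed numeric ID of @WHO; verify before use

client = tweepy.Client(bearer_token=BEARER_TOKEN, wait_on_rate_limit=True)

replies = []
for tweet in tweepy.Paginator(
    client.get_users_mentions,
    id=WHO_USER_ID,
    tweet_fields=["created_at", "lang", "public_metrics"],
    max_results=100,
).flatten(limit=5000):
    # Each mention is stored for downstream topic, sentiment and stigma analysis.
    replies.append({"id": tweet.id, "text": tweet.text, "created_at": tweet.created_at})

print(f"Collected {len(replies)} mentions of the account for further analysis")
```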

Keywords: social marketing, social media, public health communication, vaccines

Procedia PDF Downloads 82
878 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module

Authors: Helen Potkin

Abstract:

This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and the feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us in reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed. Drawing on students’ reflections on their experience of co-creation alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The findings of the project could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible.

Keywords: co-creation, collaboration, learning, participation, risk

Procedia PDF Downloads 104
877 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)

Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley

Abstract:

PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD at an economic and social cost of over $20 billion per annum. Despite health outcomes that are significantly worse than men's, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women’s self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because although women’s ADHD symptoms are markedly different from those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective in ‘thriving’ rather than just ‘hiding’. By investigating the health service use, self-care and physical activity of women with ADHD, this research aligns with a priority research area identified by the November 2023 senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experience across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically. This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved with some self-directed approaches pursued in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating findings. The resources will reach a wide audience through ADHD Australia’s extensive national networks. We will collaborate with Charles Sturt’s Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.

Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health

Procedia PDF Downloads 51
876 Comparative Economic Evaluation of Additional Respiratory Resources Utilized after Methylxanthine Initiation for the Treatment of Apnea of Prematurity in a South Asian Country

Authors: Shivakumar M, Leslie Edward S Lewis, Shashikala Devadiga, Sonia Khurana

Abstract:

Introduction: Methylxanthines are used for the treatment of apnea of prematurity (AOP), to facilitate extubation and as prophylactic agents to prevent apnea. Though the popularity of Caffeine has risen, it is expensive in resource-constrained developing countries like India. Objective: To evaluate the cost-effectiveness of Caffeine compared with Aminophylline treatment for AOP with respect to additional ventilatory resources utilized in different birth weight categories. Design, Settings and Participants – A single-centered, retrospective economic evaluation was done. Participants included preterm newborns of < 34 completed weeks of gestational age who were recruited under an Indian Council of Medical Research-funded randomized clinical trial. Per-protocol data were included from the Neonatal Intensive Care Unit, Kasturba Hospital, Manipal, India between April 2012 and December 2014. Exposure: Preterm neonates were randomly allocated to either Caffeine or Aminophylline as per the trial protocol. Outcomes and Measures – We assessed surfactant requirement, duration of invasive and non-invasive ventilation, total methylxanthine cost and the additional cost for respiratory support borne by the payers per day during the hospital stay. For the purpose of this study, newborns were stratified as Category A – < 1000 g, Category B – 1001 to 1500 g and Category C – 1501 to 2500 g. Results: A total of 146 babies (Caffeine – 72 and Aminophylline – 74) with a mean ± SD gestational age of 29.63 ± 1.89 weeks were assessed. 32.19% were in Category A, 55.48% in Category B and 12.33% in Category C. The difference in median duration of additional NIV and IMV support was statistically insignificant. However, 60% of neonates who received Caffeine required additional surfactant therapy (p=0.02). The total median (IQR) cost of Caffeine was significantly higher at Rs. 10,535 (6,317.50–15,992.50) than that of Aminophylline at Rs. 352 (236–709) (p < 0.001). The differences in additional cost spent on respiratory support per day between neonates on either methylxanthine were statistically insignificant in all weight-based categories of our study, whereas in Category B the median O2 charges per day were higher in Caffeine-treated newborns (p=0.05), with borderline significance. In Category A, providing one day of NIV or IMV support significantly increases the unit log cost of Caffeine by 13.6% (95% CI 4 to 24; p=0.005) over the log cost of Aminophylline. Conclusion: Caffeine is more expensive than Aminophylline. It was found to be equally efficacious in reducing the duration of NIV or IMV support. However, after adjusting for days of NIV and IMV support, neonates in Category A and Category B who were on Caffeine paid an excess amount of respiratory charges per day over Aminophylline. From the perspective of resource-poor settings, Aminophylline is cost-saving and economically approachable.
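A hedged sketch of the kind of stratified cost comparison reported above follows: it groups neonates by drug and weight category, reports medians with quartiles, and applies a non-parametric two-group test using pandas and scipy. The column names and records are invented placeholders, not the trial data.

```python
# Illustrative analysis skeleton only; all values below are made up.
import pandas as pd
from scipy.stats import mannwhitneyu

records = pd.DataFrame({
    "drug":     ["Caffeine", "Caffeine", "Aminophylline", "Aminophylline"] * 3,
    "category": ["A", "B", "A", "B", "A", "C", "B", "C", "B", "A", "C", "A"],
    "drug_cost_rs": [9800, 11200, 310, 420, 10100, 12050, 365, 280, 10900, 9500, 400, 350],
})

# Median and quartiles of drug cost per drug and birth-weight category.
summary = (records
           .groupby(["drug", "category"])["drug_cost_rs"]
           .describe(percentiles=[0.25, 0.5, 0.75])[["25%", "50%", "75%"]]
           .rename(columns={"25%": "Q1", "50%": "median", "75%": "Q3"}))
print(summary)

# Non-parametric comparison of total drug cost between the two methylxanthines.
caffeine = records.loc[records.drug == "Caffeine", "drug_cost_rs"]
aminophylline = records.loc[records.drug == "Aminophylline", "drug_cost_rs"]
print(mannwhitneyu(caffeine, aminophylline, alternative="two-sided"))
```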

Keywords: methylxanthines (caffeine and aminophylline), apnea of prematurity (AOP), invasive mechanical ventilation (IMV), non-invasive ventilation (NIV), birth weight categories (A – <1000 g, B – 1001 to 1500 g, C – 1501 to 2500 g)

Procedia PDF Downloads 415
875 Biomimetic Dinitrosyl Iron Complexes: A Synthetic, Structural, and Spectroscopic Study

Authors: Lijuan Li

Abstract:

Nitric oxide (NO) has become a fascinating entity in biological chemistry over the past few years. It is a gaseous lipophilic radical molecule that plays important roles in several physiological and pathophysiological processes in mammals, including activating the immune response, serving as a neurotransmitter, regulating the cardiovascular system, and acting as an endothelium-derived relaxing factor. NO functions in eukaryotes both as a signal molecule at nanomolar concentrations and as a cytotoxic agent at micromolar concentrations. The latter arises from the ability of NO to react readily with a variety of cellular targets, leading to thiol S-nitrosation, amino acid N-nitrosation, and nitrosative DNA damage. Nitric oxide can readily bind to metals to give metal-nitrosyl (M-NO) complexes. Some of these species are known to play roles in biological NO storage and transport. These complexes have different biological, photochemical, or spectroscopic properties due to distinctive structural features. These recent discoveries have spawned a great interest in the development of transition metal complexes containing NO, particularly iron complexes, which are central to the role of nitric oxide in the body. Spectroscopic evidence would appear to implicate species of the “Fe(NO)2+” type in a variety of processes ranging from polymerization and carcinogenesis to nitric oxide stores. Our research focuses on the isolation and structural study of non-heme iron nitrosyls that mimic biologically active compounds and can potentially be used for anticancer drug therapy. We have shown that reactions between Fe(NO)2(CO)2 and a series of imidazoles generate new non-heme iron nitrosyls of the form Fe(NO)2(L)2 [L = imidazole, 1-methylimidazole, 4-methylimidazole, benzimidazole, 5,6-dimethylbenzimidazole, and L-histidine], and a tetrameric cluster of [Fe(NO)2(L)]4 (L = Im, 4-MeIm, BzIm, and Me2BzIm), resulting from the interaction of Fe(NO)2 with a series of substituted imidazoles, was prepared. Recently, a series of sulfur-bridged iron dinitrosyl complexes with the general formula [Fe(µ-RS)(NO)2]2 (R = n-Pr, t-Bu, 6-methyl-2-pyridyl, and 4,6-dimethyl-2-pyrimidyl) was synthesized by the reaction of Fe(NO)2(CO)2 with thiols or thiolates. Their structures and properties were studied by IR, UV-vis, 1H-NMR, EPR, electrochemistry, X-ray diffraction analysis and DFT calculations. IR spectra of these complexes display one weak and two strong NO stretching frequencies (νNO) in solution, but only two strong νNO in the solid state. DFT calculations suggest that two spatial isomers of these complexes bear a 3 kcal energy difference in solution. The paramagnetic complexes [Fe2(µ-RS)2(NO)4]- have also been investigated by EPR spectroscopy. Interestingly, the EPR spectra of the complexes exhibit an isotropic signal of g = 1.998 - 2.004 without hyperfine splitting. The observations are consistent with the results of calculations, which reveal that the unpaired electron dominantly delocalizes over the two sulfur and two iron atoms. The difference in the g values between the reduced form of iron-sulfur clusters and the typical monomeric dinitrosyl iron complexes is explained, for the first time, by the difference in unpaired electron distributions between the two types of complexes, which provides the theoretical basis for the use of the g value as a spectroscopic tool to differentiate these biologically active complexes.

Keywords: dinitrosyl iron complex, metal nitrosyl, non-heme iron, nitric oxide

Procedia PDF Downloads 291
874 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use. It has developed in dependence on abundant and easy-to-transform polluting fossil fuels. On one hand, increases in population and income levels combined with increased per capita energy consumption require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy and regulation. The scenarios developed are baseline business as usual (BAU) and green energy (GE). The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India’s population is projected to grow by 23% during 2010–2030, reaching 1.47 billion. The real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion or $3,586 per capita (base year 2010). Due to the increase in population and GDP, the primary energy demand will double in two decades, reaching 1,397 MTOE in 2030 with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. The carbon emissions are projected to increase by 2.5 times from 2010, reaching 3,440 million tonnes with per capita emissions of 2.2 tons/annum. However, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new ‘pathways out of poverty’ by creating more than 10 million jobs and thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges to this path lie in the socio-economic-political domains. However, to attain a green economy, the appropriate policy package should be in place, which will be critical in determining the kind of investments that will be needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies and incentives to be put in place by national and local governments.
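The scenario figures quoted above rest on simple arithmetic: compound annual GDP growth and ratios of national totals (energy or emissions over GDP or population). The sketch below illustrates that arithmetic; the 2010 GDP base of about 1.45 trillion US$ is a back-calculated illustrative assumption rather than a number taken from the study, and small differences from the reported values reflect rounding in the abstract.

```python
# Worked arithmetic behind a few of the scenario figures quoted above.
# The GDP base value is an illustrative assumption, not a study input.

def compound_growth(base_value, annual_rate, years):
    """Value after compounding an annual growth rate over a number of years."""
    return base_value * (1.0 + annual_rate) ** years

gdp_2030 = compound_growth(1.45, 0.065, 20)      # trillion US$, ~5.1 at 6.5%/yr
energy_saving = 1.0 - 1079.0 / 1397.0            # GE vs BAU primary energy (MTOE)
per_capita_co2 = 3440.0 / 1470.0                 # Mt CO2 over million people -> t/person

print(f"GDP in 2030      ~ {gdp_2030:.1f} trillion US$")
print(f"GE energy saving ~ {energy_saving:.0%} relative to BAU")
print(f"per-capita CO2   ~ {per_capita_co2:.1f} t per annum")
```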

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 234
873 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster management activities or conflict. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be the zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows obtaining fundamental information for comprehending the terrain post-disaster in a short amount of time and with good accuracy. In order to identify and classify AZs and HLZs, images from a DJI drone, model Phantom 4, were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes of this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18m x 18m); HLZ 2 – medium size (23m x 23m); HLZ 3 – large size (28m x 28m); AZ (100m x 100m). The Decision Tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3; and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2) and small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain in order to serve as an important decision support tool.
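The classification step described above was run in WEKA; an equivalent Random Forest setup (200 trees, majority voting over per-tree class votes) can be sketched in Python with scikit-learn. Here the feature matrix stands in for the image-derived attributes (spectral bands, declivity map, PCA components) and the labels for the four classes; the data are randomly generated placeholders, not the study's samples.

```python
# Sketch of the Random Forest classification described above, using scikit-learn
# instead of WEKA 3.8. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))                      # 6 attributes per sampled object
y = rng.choice(["HLZ1", "HLZ2", "HLZ3", "AZ"], size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# 200 de-correlated trees; the predicted class is the majority of per-tree class votes.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```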

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 157
872 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas

Authors: A. Odoom, A. Salama, H. Ibrahim

Abstract:

Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol. This is a more reliable and environmentally friendly source of hydrogen production than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, the usage of crude glycerol can significantly enhance the sustainability and production of biodiesel. Reforming techniques are an approach for hydrogen production, mainly Steam Reforming (SR), Autothermal Reforming (ATR) and Partial Oxidation Reforming (POR). SR produces high hydrogen conversions and yield but is highly endothermic, whereas POR is exothermic. On the downside, POR yields less hydrogen as well as a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero. It has relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study for an 'in-house' lab-scale experiment of ATR. Previous numerical studies on this process have considered either Comsol or nodal finite difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and since the lab-scale experiment can be considered well mixed in the radial direction, one spatial dimension suffices to capture the essential features of ATR; in this work, we therefore develop our own numerical approach using MATLAB. A continuum fixed bed reactor is modelled using MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume approach, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy’s equation and the energy equations are solved using an operator splitting technique. Therefore, diffusion-like terms are discretized implicitly while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
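The paper's solver is written in MATLAB; the sketch below illustrates, in Python and in one spatial dimension, the operator-splitting idea it describes for a single species: an explicit first-order upwind step for advection followed by an implicit (backward Euler) step for diffusion on a locally conservative finite-volume grid. The grid size, velocity, diffusivity and inlet value are arbitrary illustration values, not the paper's model parameters, and the full reactor model (multiple species, Darcy flow, energy balance, reaction terms) is omitted.

```python
import numpy as np

# 1D finite-volume advection-diffusion with operator splitting:
# explicit upwind advection, implicit (backward Euler) diffusion.
nx, L = 100, 1.0                 # number of cells and domain length (illustrative)
dx = L / nx
u, D = 0.01, 1e-4                # velocity and diffusivity (illustrative)
dt = 0.4 * dx / u                # respects the advective CFL condition
c = np.zeros(nx)                 # cell-averaged concentration
c_in = 1.0                       # inlet concentration

# Backward-Euler diffusion matrix (tridiagonal, zero-gradient boundaries).
A = np.eye(nx)
r = D * dt / dx**2
for i in range(nx):
    if i > 0:
        A[i, i - 1] -= r
        A[i, i] += r
    if i < nx - 1:
        A[i, i + 1] -= r
        A[i, i] += r

for _ in range(2000):
    # 1) explicit upwind advection (u > 0, so the upwind value comes from the left)
    flux_in = u * np.concatenate(([c_in], c[:-1]))
    c = c - dt / dx * (u * c - flux_in)
    # 2) implicit diffusion step
    c = np.linalg.solve(A, c)

print(f"outlet concentration after {2000 * dt:.1f} s: {c[-1]:.3f}")
```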

Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model

Procedia PDF Downloads 123
871 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors

Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria

Abstract:

The constant demand for electrical energy, as well as the increase in environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double layer capacitors (EDLCs) are the most widespread energy systems. Batteries are able to store high energy densities, contrary to capacitors, whose main strengths are the high power density they supply and their long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Due to their abundance and affordability, dual carbon-based LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems to be used in applications in which high energy and power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility could be to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by their high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are really important mainly at high C-rates as they act as transport channels for the ions to reach the micropores. Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, they are not characterized by a high electrical conductivity, which is an important characteristic for good performance in energy storage applications. A possible candidate to substitute ACs is carbon aerogels (CAs). CAs are materials that combine high porosity with great electrical conductivity, opposite characteristics in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as a cathode in a LIC versus a hard carbon as the anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.

Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels

Procedia PDF Downloads 147
870 Monitoring of Disease Vector Mosquitoes in Areas of Influence of Energy Projects in the Amazon (Amapa State), Brazil

Authors: Ribeiro Tiago Magalhães

Abstract:

Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by assessing the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue and leishmaniasis. Methodology: The present study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. The selection of the catch sites was done in order to prioritize areas with possible occurrence of the species considered of greater importance for public health and areas of contact between the wild environment and humans. Sampling efforts aimed to identify the local vector fauna and to relate it to the transmission of diseases. In this way, three phases of collection were established, covering the periods of greater hematophagous activity. Sampling was carried out using Shannon traps and CDC light traps and by means of specimen collection with the hold method. This procedure was carried out during the morning (between 08:00 and 11:00), afternoon-twilight (between 15:30 and 18:30) and night (between 18:30 and 22:00). In the specific methodology of capture with the use of the CDC equipment, the delimited times were from 18:00 until 06:00 the following day. Results: A total of 32 species of mosquitoes was identified, and a total of 2,962 specimens was taxonomically subdivided into three families (Culicidae, Psychodidae and Simuliidae) and several genera (including Psorophora, Sabethes, Simulium, Uranotaenia and Wyeomyia), besides those represented by the family Psychodidae which, due to their morphological complexity, allow safe identification (without the method of diaphanization and mounting of slides for microscopy) only at the taxonomic level of subfamily (Phlebotominae). Conclusion: The nine monitoring campaigns carried out provided the basis for outlining the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to point out which of the points established for sampling would represent, according to the group of identified mosquitoes, greater possibilities of disease acquisition. However, what should mainly be considered are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment for the development of its larvae until adulthood. From the moment the water surface expands into new environments for the formation of the reservoir, a modification in the process of development and hatching of the eggs deposited in the substrate can occur, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to bodies of water.

Keywords: Amazon, hydroelectric power plants

Procedia PDF Downloads 173
869 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs

Authors: John Farr Rothrock

Abstract:

Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity and sexual performance, recent data have suggested that the female migraine population is far from homogenous in this regard. We sought to determine the levels of libido, sexual activity and sexual performance in the female migraine patient population both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients ages 25-55 initially presenting to a university-based headache clinic and having a >1 year history of migraine were asked to complete anonymously a survey assessing their sexual histories generally and as they related to their headache disorder and the 19-item Female Sexual Function Index (FSFI). To serve as 2 separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population but matched for age, marital status, educational background and socioeconomic status completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion utilized for “sexually active” (ie, heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups younger age (p<.005), higher educational level attained (p<.05) and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92). Patients with low frequency episodic migraine (LFEM) and mid frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=25%), MFEM (14/67=21%) and high frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse - and orgasm specifically - as a means of potentially terminating a migraine attack. In the clinic vs no-clinic groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM and HFEM appear to utilize intercourse/orgasm as a means to potentially terminate an acute migraine attack.

Keywords: migraine, female, libido, sexual activity, phenotype

Procedia PDF Downloads 64
868 Effects of Glucogenic and Lipogenic Diets on Ruminal Microbiota and Metabolites in Vitro

Authors: Beihai Xiong, Dengke Hua, Wouter Hendriks, Wilbert Pellikaan

Abstract:

To improve the energy status of dairy cows in early lactation, much work has been done on adjusting the starch-to-fiber ratio in the diet. As a complex ecosystem, the rumen contains a large population of microorganisms which plays a crucial role in feed degradation. Further study of the microbiota alterations and metabolic changes under different dietary energy sources is essential and valuable to better understand the function of the ruminal microorganisms and thereby to optimize rumen function and improve feed efficiency. The present study focuses on the effects of two glucogenic diets (G: ground corn and corn silage; S: steam-flaked corn and corn silage) and a lipogenic diet (L: sugar beet pulp and alfalfa silage) on rumen fermentation, gas production, the ruminal microbiota and metabolome, and also their correlations in vitro. The gas production was recorded continuously, and the gas volume and production rate at 6, 12, 24, and 48 h were calculated separately. The fermentation end-products were measured after fermenting for 48 h. The ruminal bacterial and archaeal communities were determined by the 16S rRNA sequencing technique, and the metabolome profile was determined by LC-MS methods. Compared to diets G and S, the L diet had a lower dry matter digestibility, propionate production, and ammonia-nitrogen concentration. The two glucogenic diets performed worse in controlling methane and lactic acid production compared to the L diet. The S diet produced the greatest cumulative gas volume at all time points during incubation compared to the G and L diets. The metabolic analysis revealed that lipid digestion was up-regulated by diet L compared with the other diets. At the subclass level, most metabolites belonging to fatty acids and conjugates were higher, but most metabolites belonging to amino acids, peptides, and analogues were lower in diet L than in the others. Differences in rumen fermentation characteristics were associated with (or resulted from) changes in the relative abundance of bacterial and archaeal genera. Most highly abundant bacteria were stable or only slightly influenced by the diets, while several amylolytic and cellulolytic bacteria were sensitive to the dietary changes. The L diet had a significantly higher number of cellulolytic bacteria, including the genera Ruminococcus, Butyrivibrio, Eubacterium, Lachnospira, unclassified Lachnospiraceae, and unclassified Ruminococcaceae. The relative abundances of amylolytic bacterial genera, including Selenomonas_1, Ruminobacter, and Succinivibrionaceae_UCG-002, were higher in diets G and S. These affected bacteria were also shown to have strong associations with certain metabolites. Selenomonas_1 and Succinivibrionaceae_UCG-002 may contribute to the higher propionate production in diets G and S by enhancing the succinate pathway. The results indicated that the two glucogenic diets had a greater extent of gas production and a higher dry matter digestibility, and produced more propionate than diet L. The steam-flaked corn did not show a better performance in fermentation end-products than ground corn. This study has offered a deeper understanding of ruminal microbial functions, which could assist in improving rumen function and thereby ruminant production.

Keywords: gas production, metabolome, microbiota, rumen fermentation

Procedia PDF Downloads 131
867 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, which cover a wide spectrum ranging from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation in intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model (and, as a result, simulate) the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells and, in this way, proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway has in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed.
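Gillespie's stochastic simulation algorithm, named above as one of Cellulat's two key elements, can be illustrated with a minimal Python sketch for a toy reversible binding reaction. The species, rate constants and counts below are illustrative only and are not taken from the Wnt/β-catenin model discussed in the study.

```python
import math
import random

# Minimal Gillespie SSA for a toy reversible binding reaction:
#   L + R  -> LR      (rate constant k_on)
#   LR     -> L + R   (rate constant k_off)
def gillespie(n_l=100, n_r=100, n_lr=0, k_on=0.005, k_off=0.1, t_end=50.0, seed=1):
    random.seed(seed)
    t, history = 0.0, [(0.0, n_lr)]
    while t < t_end:
        a1 = k_on * n_l * n_r            # propensity of binding
        a2 = k_off * n_lr                # propensity of unbinding
        a0 = a1 + a2
        if a0 == 0:
            break
        # exponentially distributed waiting time until the next reaction
        t += -math.log(1.0 - random.random()) / a0
        # choose which reaction fires, proportionally to its propensity
        if random.random() * a0 < a1:
            n_l, n_r, n_lr = n_l - 1, n_r - 1, n_lr + 1
        else:
            n_l, n_r, n_lr = n_l + 1, n_r + 1, n_lr - 1
        history.append((t, n_lr))
    return history

trajectory = gillespie()
print("final time and LR count:", trajectory[-1])
```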

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 235
866 Presence and Severity of Language Deficits in Comprehension, Production and Pragmatics in a Group of ALS Patients: Analysis with Demographic and Neuropsychological Data

Authors: M. Testa, L. Peotta, S. Giusiano, B. Lazzolino, U. Manera, A. Canosa, M. Grassano, F. Palumbo, A. Bombaci, S. Cabras, F. Di Pede, L. Solero, E. Matteoni, C. Moglia, A. Calvo, A. Chio

Abstract:

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease of adulthood which primarily affects the central nervous system and is characterized by progressive bilateral degeneration of motor neurons. The degeneration processes in ALS extend far beyond the neurons of the motor system and affect cognition, behaviour and language. The aim was to outline the prevalence of language deficits in an ALS cohort and to explore their profile along with demographic and neuropsychological data. A full neuropsychological battery and language assessment was administered to 56 ALS patients. The neuropsychological assessment included tests of executive functioning, verbal fluency, social cognition and memory. Language was assessed using tests of verbal comprehension, production and pragmatics. Patients were cognitively classified following the Revised Consensus Criteria and divided into three groups showing different levels of language deficits: group 1 – no language deficit; group 2 – one language deficit; group 3 – two or more language deficits. Chi-square tests for independence and non-parametric measures to compare groups were applied. Nearly half of the ALS-CN patients (48%) scored under the clinical cut-off on one language test, and only 13% of the patients classified as ALS-CI showed no language deficits, while the remaining 87% of ALS-CI reported two or more language deficits. ALS-BI and ALS-CBI cases all reported two or more language deficits. Deficits in production and in comprehension appeared more frequent in ALS-CI patients (p=0.011 and p=0.003, respectively), with a higher percentage of comprehension deficits (83%). Nearly all ALS-CI reported at least one deficit in pragmatic abilities (96%), and all ALS-BI and ALS-CBI patients showed pragmatic deficits. Males showed a higher percentage of pragmatic deficits (97%, p=0.007). No significant differences in language deficits were found between bulbar and spinal onset. Months from onset and level of impairment at testing (ALS-FRS total score) were not significantly different between levels and types of language impairment. Age and education were significantly higher for cases showing no deficits in comprehension and pragmatics and in the group showing no language deficits. Comparing performances on neuropsychological tests among the three levels of language deficits, no significant differences in neuropsychological performance were found between groups 1 and 2; compared to group 1, group 3 appeared to decay specifically on executive testing, verbal/visuospatial learning, and social cognition. Compared to group 2, group 3 showed worse performance specifically on tests of working memory and attention. Language deficits were found to be widespread in our sample, encompassing verbal comprehension, production and pragmatics. Our study reveals that even cognitively intact patients (ALS-CN) showed at least one language deficit in 48% of cases. The pragmatic domain is the most compromised (84% of the total sample), present in nearly all ALS-CI (96%), likely due to the influence of executive impairment. Lower age and higher education seem to protect comprehension and pragmatics and to reduce the presence of language deficits. Finally, executive functions, verbal/visuospatial learning and social cognition differentiate the group with no language deficits from the group with a clinical language impairment (group 3), while attention and working memory differentiate the group with one language deficit from the clinically impaired group.
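The group comparisons described above (chi-square for independence and non-parametric tests across the three language-deficit groups) correspond to standard routines; a sketch with scipy follows. The contingency table and score lists are invented placeholders for illustration, not the study's data.

```python
from scipy.stats import chi2_contingency, kruskal, mannwhitneyu

# Placeholder contingency table: cognitive classification (rows) vs presence of
# a comprehension deficit (columns). Counts are invented for illustration.
table = [[20, 5],    # ALS-CN: no deficit / deficit
         [4, 23]]    # ALS-CI: no deficit / deficit
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Placeholder executive-test scores for the three language-deficit groups.
group1 = [28, 30, 27, 29, 31]     # no language deficit
group2 = [26, 27, 29, 25, 28]     # one language deficit
group3 = [20, 22, 19, 24, 21]     # two or more language deficits
print(kruskal(group1, group2, group3))                            # omnibus comparison
print(mannwhitneyu(group2, group3, alternative="two-sided"))      # pairwise comparison
```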

Keywords: amyotrophic lateral sclerosis, language assessment, neuropsychological assessment, language deficit

Procedia PDF Downloads 137
865 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxymethyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, the coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a sequence of steps: a reaction (e.g., complexation) between metal ions and a polymer, followed by a step involving the rejection of the formed species by means of a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80% to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in the electrolyte concentration screens the electrostatic interaction between ions and the membrane fixed charge, which decreases their rejection. It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers have shown similar performance enhancement at natural pH. However, chitosan has proved more efficient in slightly basic conditions (above its pKa), whereas CMC has demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8) since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water has shown that the increase in metal ion rejection induced by the polymer addition is very low due to the competition between the various ions present in the complex mixture.
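The rejection percentages quoted above follow the usual observed-rejection definition for membrane filtration; a short sketch of that calculation is given below. The concentration values are illustrative, not measurements from the study.

```python
def observed_rejection(feed_conc_ppm, permeate_conc_ppm):
    """Observed rejection R = 1 - Cp/Cf, expressed as a percentage."""
    return (1.0 - permeate_conc_ppm / feed_conc_ppm) * 100.0


# Illustrative values only: nickel rejection without and with added polymer.
print(observed_rejection(10.0, 8.5))   # ~15% without complexing polymer
print(observed_rejection(10.0, 0.6))   # ~94% with sufficient polymer added
```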

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 146
864 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, in Katsos' study, the experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than that with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. This result therefore raises two questions: (1) Is it possible that the strange discrepancy is due to other factors rather than the generation of the scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, rather than just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be addressed by replicating Katsos' experiment. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. Reading times of the target parts, i.e., the words containing scalar implicatures, will be recorded. We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be required for the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context of the scale. Thus, whether the new input is informative or not will not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. People's minds may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and contextual factors in scalar implicature processing. We might be able to assume, based on our experiments, that a single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may interact dynamically in the mind. As for the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as members of a scale in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
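As a sketch of the planned reading-time comparison (not the study's actual analysis script), the following Python fragment compares target-region reading times between informative and under-informative utterances within each scale type, using a paired Wilcoxon test as one reasonable choice. The file name, column names and condition labels are assumptions.

```python
# Sketch of the reading-time comparison described above; the DMDX export
# format, column names and condition labels are hypothetical.
import pandas as pd
from scipy.stats import wilcoxon

rt = pd.read_csv("dmdx_reading_times.csv")  # one row per trial (hypothetical)

for scale in ["lexical", "ad_hoc"]:
    sub = rt[rt["scale_type"] == scale]
    # mean target-region RT per participant in each informativeness condition
    means = sub.pivot_table(index="participant", columns="condition",
                            values="target_rt_ms", aggfunc="mean")
    stat, p = wilcoxon(means["under_informative"], means["informative"])
    print(f"{scale}: Wilcoxon W={stat:.1f}, p={p:.3f}")
```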

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 300
863 Studies on the Bioactivity of Different Solvents Extracts of Selected Marine Macroalgae against Fish Pathogens

Authors: Mary Ghobrial, Sahar Wefky

Abstract:

Marine macroalgae have proven to be a rich source of bioactive compounds with biomedical potential, not only for human but also for veterinary medicine. The emergence of microbial disease in aquaculture industries implies serious losses, and the use of commercial antibiotics for fish disease treatment produces undesirable side effects. Marine organisms are a rich source of structurally novel biologically active metabolites; competition for space and nutrients led to the evolution of antimicrobial defense strategies in the aquatic environment. Interest in marine organisms as a potential and promising source of pharmaceutical agents has increased in recent years. Many bioactive and pharmacologically active substances have been isolated from microalgae, and compounds with antibacterial, antifungal and antiviral activities have also been detected in green, brown and red algae. Selected species of marine benthic algae belonging to the Phaeophyta and Rhodophyta, collected from different coastal areas of Alexandria (Egypt), were investigated for their antibacterial and antifungal activities. Macroalgae samples were collected during low tide from the Alexandria Mediterranean coast and air dried under shade at room temperature. The dry algae were ground using an electric grinder and soaked in 10 ml of each of the solvents acetone, ethanol, methanol and hexane. Antimicrobial activity was evaluated using the well-cut diffusion technique. In vitro screening of organic solvent extracts from the marine macroalgae Laurencia pinnatifida, Pterocladia capillacea, Stypopodium zonale, Halopteris scoparia and Sargassum hystrix showed specific activity in inhibiting the growth of five virulent strains of bacteria pathogenic to fish (Pseudomonas fluorescens, Aeromonas hydrophila, Vibrio anguillarum, V. tandara, Escherichia coli) and two fungi (Aspergillus flavus and A. niger). Results showed that acetone and ethanol extracts of all tested macroalgae exhibited antibacterial activity, while the acetone extract of the brown Sargassum hystrix displayed the highest antifungal activity. The seaweed extracts inhibited bacteria more strongly than fungi, and species of the Rhodophyta showed the greatest activity against the tested bacteria rather than the fungi. Gas-liquid chromatography coupled with mass spectrometry allows good qualitative and quantitative analysis of the fractionated extracts, with high sensitivity to minor components. Results indicated that the main common component in the acetone extracts of L. pinnatifida and P. capillacea is 4-hydroxy-4-methyl-2-pentanone, representing 64.38% and 58.60%, respectively. Thus, the extracts derived from the red macroalgae were more efficient than those obtained from the brown macroalgae in combating bacterial pathogens rather than pathogenic fungi, and the most effective species overall was the red Laurencia pinnatifida. In conclusion, the present study demonstrates the potential of red and brown macroalgae extracts for the development of anti-pathogenic agents for use in fish aquaculture.

Keywords: bacteria, fungi, extracts, solvents

Procedia PDF Downloads 421
862 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

Antenna design is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too wide to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtaining the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for the given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulation. HFSS is chosen for simulations and results, while MATLAB is used to generate the computations and combinations, to log the data, to apply machine learning algorithms, and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated optimization of the Vivaldi antenna.
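To make the optimization loop concrete, the following is a minimal genetic-algorithm sketch, not the authors' MATLAB/HFSS implementation. In the described workflow the fitness of each candidate geometry would come from an HFSS simulation called through the HFSS API; here a placeholder surrogate stands in for that call, and the design variables and their bounds are hypothetical.

```python
# Minimal genetic-algorithm sketch of the loop described above. The fitness
# function is a placeholder surrogate; in practice it would run an HFSS
# simulation and return, e.g., realized gain or bandwidth. Bounds are hypothetical.
import random

BOUNDS = {                       # hypothetical Vivaldi design variables: (min, max)
    "slot_width_mm": (0.2, 2.0),
    "flare_aperture_mm": (20.0, 80.0),
    "taper_rate": (0.02, 0.30),
}

def fitness(candidate):
    # Placeholder surrogate: peaks at the middle of each range.
    return -sum((v - (lo + hi) / 2) ** 2
                for (lo, hi), v in zip(BOUNDS.values(), candidate.values()))

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def mutate(c, rate=0.2):
    out = dict(c)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            out[k] = min(hi, max(lo, out[k] + random.gauss(0, 0.05 * (hi - lo))))
    return out

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

population = [random_candidate() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    parents = population[:10]                    # keep the fittest candidates (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children
print("best candidate:", max(population, key=fitness))
```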

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 94
861 Molecular Identification of Camel Tick and Investigation of Its Natural Infection by Rickettsia and Borrelia in Saudi Arabia

Authors: Reem Alajmi, Hind Al Harbi, Tahany Ayaad, Zainab Al Musawi

Abstract:

Hard ticks Hyalomma spp. (family: Ixodidae) are obligate ectoparasites in all their life stages on some domestic animals, mainly camels and cattle. Ticks may lead to many economic and public health problems because of their blood-feeding behavior. They also act as vectors for many bacterial, viral and protozoan agents which may cause serious diseases, such as tick-borne encephalitis, Rocky Mountain spotted fever, Q-fever and Lyme disease, affecting humans and/or animals. In the present study, molecular identification of ticks that attack camels in the Riyadh region, Saudi Arabia, based on a partial sequence of the mitochondrial 16S rRNA gene, was applied. The present study also aims to detect natural infections of the collected camel ticks with Rickettsia spp. and Borrelia spp. using PCR/hybridization of the citrate synthase-encoding gene present in bacterial cells. Hard ticks infesting camels were collected from different camels on a farm in the Riyadh region, Saudi Arabia. Results showed that the collected specimens belong to two species: Hyalomma dromedarii, representing 99% of the identified specimens, and Hyalomma marginatum, accounting for 1% of the identified ticks. The molecular identification was made by blasting the sequences obtained in this study against sequences already present and identified in GenBank. All obtained sequences of H. dromedarii specimens showed 97-100% identity with the same gene sequence of the same species (Accession # L34306.1), which was used as a reference. Meanwhile, no intraspecific variation of H. marginatum was measured because only one specimen was collected. Results also showed that the intraspecific variability between individuals of H. dromedarii was 0.2-6.6% in 92% of the samples, while the remaining 7% of the H. dromedarii samples showed about 10.3% individual differences. The interspecific variability between H. dromedarii and H. marginatum was approximately 18.3%. On the other hand, using the PCR/hybridization technique, we could detect natural infection of camel ticks with Rickettsia spp. and Borrelia spp. Results revealed the natural presence of both bacteria in the collected ticks: Rickettsia spp. infection was present in 29% of the collected ticks, while 35% of the collected specimens were infected with Borrelia spp. The results obtained from the present study are a new record for the molecular identification of camel ticks in Riyadh, Saudi Arabia, and of their natural infection with both Rickettsia spp. and Borrelia spp. These results may help scientists to design a direct tick control strategy in order to protect camels, which are among the most economically important animals. The results also spotlight the diseases that might be transmitted by ticks, supporting a protective plan to prevent the spread of these dangerous agents. Further molecular studies are needed to confirm the results of the present study by using other mitochondrial and nuclear genes for tick identification.
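For illustration only, the following Python sketch shows how a pairwise percent-identity figure (such as the 97-100% identity with the GenBank reference) is typically computed once two sequences have been aligned. The aligned strings below are short placeholders; in practice the study sequences would be aligned or BLASTed against the reference accession (L34306.1).

```python
# Sketch of a percent-identity calculation over an existing pairwise alignment.
# The aligned sequences below are placeholders, not study data.
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity over aligned positions, ignoring gap-gap columns."""
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if not (a == "-" and b == "-")]
    matches = sum(1 for a, b in pairs if a == b and a != "-")
    return 100.0 * matches / len(pairs)

query_aln = "ATGCTTAGGCTA-ACGTTAGC"
ref_aln   = "ATGCTTAGGCTAGACGTTAGC"
print(f"identity = {percent_identity(query_aln, ref_aln):.1f}%")
```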

Keywords: camel ticks, Rickettsia spp., Borrelia spp., mitochondrial 16S rRNA gene

Procedia PDF Downloads 256
860 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to provide the best method for detecting normal signals from abnormal ones. The data are from both genders, the recording time varies from several seconds to several minutes, and all data are labeled normal or abnormal. Due to the limited positional accuracy and time span of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate different types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R waves by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, given the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information remains hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
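The following Python sketch illustrates the general shape of such a pipeline, assuming RR-interval series have already been extracted (after filtering and R-peak detection). Time-domain statistics plus Poincare (return-map) descriptors SD1/SD2 provide a mix of linear and nonlinear features, and an SVM and an MLP are compared by ROC AUC. The data files, labels and feature choices are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of an HRV feature-and-classification pipeline; file names, labels and
# feature set are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def hrv_features(rr_ms: np.ndarray) -> list:
    sdnn = rr_ms.std()                          # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    diff = np.diff(rr_ms)                       # Poincare (return-map) geometry
    sd1 = np.sqrt(np.var(diff) / 2)
    sd2 = np.sqrt(max(2 * np.var(rr_ms) - np.var(diff) / 2, 0))
    return [rr_ms.mean(), sdnn, rmssd, sd1, sd2]

# hypothetical inputs: a list of RR-interval arrays and binary labels (0 normal, 1 abnormal)
rr_series = np.load("rr_series.npy", allow_pickle=True)
labels = np.load("labels.npy")
X = np.array([hrv_features(np.asarray(rr, dtype=float)) for rr in rr_series])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(probability=True)), ("MLP", MLPClassifier(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```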

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 241
859 Reducing Diagnostic Error in Australian Emergency Departments Using a Behavioural Approach

Authors: Breanna Wright, Peter Bragge

Abstract:

Diagnostic error rates in healthcare are approximately 10% of cases. Diagnostic errors can cause patient harm due to inappropriate, inadequate or delayed treatment, and such errors contribute heavily to medical liability claims globally. Therefore, addressing diagnostic error is a high priority. In most cases, diagnostic errors are the result of faulty information synthesis rather than lack of knowledge. Specifically, the majority of diagnostic errors involve cognitive factors and, in particular, cognitive biases. Emergency Departments are an environment with heightened risk of diagnostic error due to time and resource pressures, a frequently chaotic environment, and patients arriving undifferentiated and with minimal context. This project aimed to develop a behavioural, evidence-informed intervention to reduce diagnostic error in Emergency Departments through co-design with emergency physicians, insurers, researchers, hospital managers, citizens and consumer representatives. The Forum Process was utilised to address this aim. This involves convening a small (4-6 member) expert panel to guide a focused literature and practice review; convening a 10-12 person citizens panel to gather the perspectives of laypeople, including those affected by misdiagnoses; and an 18-22 person structured stakeholder dialogue bringing together representatives of the aforementioned stakeholder groups. The process not only provides in-depth analysis of the problem and associated behaviours but also brings together expertise and insight to facilitate identification of a behaviour change intervention. Informed by the literature and practice review, the Citizens Panel focused on eliciting the values and concerns of those affected or potentially affected by diagnostic error. Citizens were comfortable with diagnostic uncertainty if doctors were honest with them. They also emphasised the importance of open communication between doctors and patients and their families. Citizens expect more consistent standards across the state and better access for both patients and their doctors to patient health information, to avoid time-consuming re-taking of long patient histories and medication regimes when re-presenting at Emergency Departments and to reduce the risk of unintentional omissions. The structured Stakeholder Dialogue focused on identifying a feasible behavioural intervention to review diagnoses in Emergency Departments. This needed to consider the role of cognitive bias in medical decision-making; contextual factors (in Victoria, there is a legislated 4-hour maximum time between ED triage and discharge or hospital admission); resource availability; and the need to ensure the intervention could work in large metropolitan as well as small rural and regional ED settings across Victoria. The identified behavioural intervention will be piloted in approximately ten hospital EDs across Victoria, Australia. This presentation will detail the findings of all review and consultation activities, describe the behavioural intervention developed and present the results of the pilot trial.

Keywords: behavioural intervention, cognitive bias, decision-making, diagnostic error

Procedia PDF Downloads 110
858 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by charged particles that travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface that focuses the radiation on a camera composed of an array of high-speed photosensors which are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost and power consumption. Each pixel incorporates a photosensor able to discriminate single photons and the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests which were designed for equipment of limited dimensions to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 91
857 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in their subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should only emerge under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. Post-hoc tests revealed main effects of perspective in the 5s and 10s deliberation-time conditions, but not in the 15s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
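The EV-sensitivity measure described above can be sketched as follows in Python (an illustration, not the original analysis code): per participant and perspective, the Spearman rank correlation between stated prices and the lotteries' expected values is computed, and the two perspectives are then compared. A t-test stands in here for the reported ANOVA main effect; the file and column names are assumptions.

```python
# Sketch of the per-participant EV-sensitivity measure and a seller/buyer
# comparison; data layout and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr, ttest_ind

prices = pd.read_csv("pricing_data.csv")  # columns: participant, perspective, lottery_ev, price

def ev_sensitivity(group: pd.DataFrame) -> float:
    rho, _ = spearmanr(group["price"], group["lottery_ev"])
    return rho

sens = (prices.groupby(["participant", "perspective"])
              .apply(ev_sensitivity)
              .rename("ev_sensitivity")
              .reset_index())

sellers = sens.loc[sens["perspective"] == "seller", "ev_sensitivity"]
buyers = sens.loc[sens["perspective"] == "buyer", "ev_sensitivity"]
t, p = ttest_ind(sellers, buyers)
print(f"sellers mean rho={sellers.mean():.2f}, buyers mean rho={buyers.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```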

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 328
856 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma

Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova

Abstract:

Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to environmental factors; the resistance of biofilm populations to antibiotics or biocides often increases by two to three orders of magnitude in comparison with suspension populations. Of particular interest are substances or physical processes that primarily cause the destruction of the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents that do not have a strong lethal effect do not exert such significant selection pressure toward further enhancing resistance. Non-thermal plasma (NTP) is defined as a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals and excited or non-excited molecules) which are in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in the biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125 and Candida albicans DBM 2164 grown on solid media in Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans) and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of the cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells. Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescent probe to stain cells in the biofilm. It was shown that the biofilm populations of all studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of the biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in the biofilm also differed among the individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, in which the metabolic activity of the biofilm decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. Conversely, the metabolic activity of C. albicans cells decreased to 53% after 30 minutes of NTP exposure; nevertheless, this result can still be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved, in most cases, a remarkable synergistic effect on the reduction of the metabolic activity of the biofilm cells. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease of metabolic activity to below 4%.

Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 166
855 Automated System: Managing the Production and Distribution of Radiopharmaceuticals

Authors: Shayma Mohammed, Adel Trabelsi

Abstract:

Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual aspect (radioactive and medicinal) that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into highly skilled jobs, improving overall performance by allowing human workers to focus on more important tasks than document filling. This project aims to contribute to implementing a comprehensive system to ensure rigorous management of radiopharmaceuticals through a platform that links the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System, in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). In this project, we attempt to build a web application that targets radiopharmacies; the platform is built atop the inherently cross-platform web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code and TypeScript). The operating principle of the platform is mainly based on two parts: a Radiopharmaceutical Backoffice for the radiopharmacist, who is responsible for the realization of radiopharmaceutical preparations and their delivery, and a Medical Backoffice for the doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport and Stock Management. It allows eight classes of users: the Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP) and Technical and Production Staff. As a digital platform bringing together all players involved in the use of radiopharmaceuticals and integrating the stages of preparation, production and distribution, web technologies in particular promise to offer all the benefits of automation while requiring no more than a web browser as the user client, which is a strength because the web stack is by nature multi-platform. The platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of both personnel and patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, which is a tedious and error-prone task, and it minimizes manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.
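To make the traceability idea concrete, the following Python sketch (purely illustrative; the platform itself is built with PHP/Symfony and Angular) models a preparation batch moving through production, quality control, release, transport and delivery, with each step logged against a user role. All class and field names are hypothetical.

```python
# Illustrative data model for the kind of traceability record described above;
# not the platform's actual code, and all names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TraceEvent:
    step: str          # e.g. "production", "quality_control", "release", "transport", "delivery"
    actor_role: str    # e.g. "PM", "QCM", "QP", "PTM", "Doctor"
    timestamp: datetime
    note: str = ""

@dataclass
class RadiopharmaceuticalBatch:
    batch_id: str
    product: str
    activity_mbq: float
    events: List[TraceEvent] = field(default_factory=list)

    def log(self, step: str, actor_role: str, note: str = "") -> None:
        self.events.append(TraceEvent(step, actor_role, datetime.utcnow(), note))

batch = RadiopharmaceuticalBatch("B-2024-001", "Tc-99m MDP", activity_mbq=750.0)
batch.log("production", "PM")
batch.log("quality_control", "QCM", "radiochemical purity within limits")
batch.log("release", "QP")
batch.log("transport", "PTM")
batch.log("delivery", "Doctor")
print([e.step for e in batch.events])
```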

Keywords: automated system, management, radiopharmacy, technical papers

Procedia PDF Downloads 143
854 The Role of Group Interaction and Managers’ Risk-willingness for Business Model Innovation Decisions: A Thematic Analysis

Authors: Sarah Müller-Sägebrecht

Abstract:

Today’s volatile environment challenges executives to make the right strategic decisions to gain sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurial behavior, such as the development of new business opportunities, the promotion of ingenuity, and the filling of resource voids. A strategic approach to overcoming threatening environmental changes and catching new business opportunities is business model innovation (BMI). Although this research stream has gained importance in the last decade, BMI research is still insufficient; in particular, BMI barriers, such as inefficient strategic decision-making processes, need to be identified. Strategic decisions strongly impact an organization's future and are therefore usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process - in favorable but also unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual, and individual risk-willingness influences which option people choose. The special nature of strategic decisions, such as those in BMI processes, is that they are not made individually but in groups, due to their broad organizational scope. These groups consist of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in different group-interaction effects. The following research questions arise: i) How does group interaction shape BMI decision-making from the managers' perspective? ii) What are the potential interrelations among managers' risk-willingness, group biases, and BMI decision-making? After 26 in-depth interviews with executives from the manufacturing industry, the applied Gioia methodology reveals the following results: i) Risk-averse decision-makers have an increased need to be guided by facts. The more information available to them, the lower they perceive uncertainty to be and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness during the decision-making process. ii) Generally, it could be observed that during BMI decisions, group interaction is valued primarily for increasing the group's information base for making good decisions, less for social exchange. Further, decision-makers mainly focus on information available to all decision-makers in the team and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization's strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness impacts organizational strategic decision-making. To date, BMI research has assumed that risk aversion is an internal BMI barrier. However, this study makes clear that it is not risk aversion as such that inhibits BMI; rather, a lack of information prevents risk-averse decision-makers from choosing a riskier option. At the same time, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members. Instead, they use social interaction to gather missing information.
Therefore, executives need to provide sufficient information to all decision-makers to catch promising business opportunities.

Keywords: business model innovation, cognitive biases, group-interaction effects, strategic decision-making, risk-willingness

Procedia PDF Downloads 62
853 Implementing Quality Improvement Projects to Enhance Contraception and Abortion Care Service Provision and Pre-Service Training of Health Care Providers

Authors: Munir Kassa, Mengistu Hailemariam, Meghan Obermeyer, Kefelegn Baruda, Yonas Getachew, Asnakech Dessie

Abstract:

Improving the quality of the sexual and reproductive health services that women receive is expected to have an impact on women's satisfaction with the services, on their continued use and, ultimately, on their ability to achieve their fertility goals or reproductive intentions. Surprisingly, however, there is little empirical evidence of whether this expectation is correct, or of how best to improve service quality within sexual and reproductive health programs so that these impacts can be achieved. The recent focus on quality has prompted more physicians to do quality improvement work, but often without the needed skill sets, which results in poorly conceived and ultimately unsuccessful improvement initiatives. As this renders the work unpublishable, it further impedes progress in the field of health care improvement and widens the quality chasm. Since 2014, the Center for International Reproductive Health Training (CIRHT) has worked diligently with 11 teaching hospitals across Ethiopia to increase access to contraception and abortion care services. This work has included improving pre-service training through education and curriculum development, expanding hands-on training to better teach critical techniques and counseling skills, and fostering a “team science” approach to research by encouraging scientific exploration. This is the first time this systematic approach has been applied and documented to improve access to high-quality services in Ethiopia. The purpose of this article is to report the initiatives undertaken and the findings reached by the clinical service team at CIRHT, in an effort to provide a pragmatic approach to quality improvement projects. An audit containing nearly 300 questions about several aspects of patient care, including structure, process, and outcome indicators, was completed by each teaching hospital's quality improvement team. This baseline audit assisted in identifying major gaps and barriers, and each team was responsible for determining specific quality improvement aims and tasks to support change interventions using Shewhart's Cycle for Learning and Improvement (the Plan-Do-Study-Act model). To measure progress over time, quality improvement teams met biweekly and compiled monthly data for review. Site visits to each hospital were also completed by the clinical service team to ensure monitoring and support. The results indicate that applying an evidence-based, participatory approach to quality improvement has the potential to increase the accessibility and quality of services in a short amount of time. In addition, continued ownership and on-site support are vital in promoting sustainability. This approach could be adapted and applied in similar contexts, particularly in other African countries.

Keywords: abortion, contraception, quality improvement, service provision

Procedia PDF Downloads 192
852 Early Biological Effects in Schoolchildren Living in an Area of Salento (Italy) with High Incidence of Chronic Respiratory Diseases: The IMP.AIR. Study

Authors: Alessandra Panico, Francesco Bagordo, Tiziana Grassi, Adele Idolo, Marcello Guido, Francesca Serio, Mattia De Giorgi, Antonella De Donno

Abstract:

In the Province of Lecce (Southeastern Italy), an area with an unusually high incidence of chronic respiratory diseases, including lung cancer, was recently identified. The causes of this health emergency are still not entirely clear. In order to determine the risk profile of children living in five municipalities included in this area, an epidemiological-molecular study was performed in the years 2014-2016: the IMP.AIR. (Impact of air quality on health of residents in the Municipalities of Sternatia, Galatina, Cutrofiano, Sogliano Cavour and Soleto) study. 122 children aged 6-8 years attending primary school in the study area were enrolled to evaluate the frequency of micronuclei (MNs) in their exfoliated buccal cells. The samples were collected in May 2015 by rubbing the oral mucosa with a soft-bristle disposable toothbrush. At the same time, a validated questionnaire was administered to parents to obtain information about the health, lifestyle and eating habits of the children. In addition, information on airborne pollutants, routinely detected by the Regional Environmental Agency (ARPA Puglia) in the study area, was acquired. A multivariate analysis was performed to detect any significant association between the frequency of MNs (dependent variable) and behavioral factors (independent variables). The presence of MNs was highlighted in the exfoliated buccal cells of about 42% of the recruited children, with a mean frequency of 0.49 MN/1000 cells, greater than in other areas of Salento. The survey on individual characteristics and lifestyles showed that one in three children was overweight and that most of them had unhealthy eating habits, with frequent consumption of foods considered ‘risky’. Moreover, many parents (40% of fathers and 12% of mothers) were smokers, and about 20% of them admitted to smoking in the house where the children lived. Information regarding atmospheric contaminants was poor: of the few substances routinely detected by the single monitoring station located in the study area (PM2.5, SO2, NO2, CO, O3), only ozone showed high concentrations, exceeding the legislative limits 67 times in 2015. The study showed that the level of early biological effect markers in children was not negligible. This critical condition could be related to individual factors and lifestyles such as overweight, unhealthy eating habits and exposure to passive smoking. At present, no relationship with airborne pollutants can be established due to the lack of information on many substances. Therefore, it would be advisable to modify incorrect behaviors and to intensify the monitoring of airborne pollutants (e.g., including detection of PM10, heavy metals, polycyclic aromatic hydrocarbons and benzene), given the epidemiology of chronic respiratory diseases registered in this area.
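As an illustration of the two quantities reported above, the Python sketch below computes the MN frequency per 1000 scored cells and fits one plausible multivariate model (a logistic regression of MN presence on behavioural covariates). The file name, column names and model specification are assumptions, not the study's actual analysis.

```python
# Sketch of the MN-frequency calculation and a multivariate model relating MN
# presence to behavioural factors; data layout and covariates are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("impair_children.csv")  # one row per child (hypothetical)

# Frequency of micronuclei per 1000 exfoliated buccal cells
df["mn_per_1000"] = 1000 * df["mn_count"] / df["cells_scored"]
print("mean MN/1000 cells:", round(df["mn_per_1000"].mean(), 2))
print("children with at least one MN:", f'{(df["mn_count"] > 0).mean():.0%}')

# Multivariate model: MN presence vs lifestyle/behavioural covariates
df["mn_present"] = (df["mn_count"] > 0).astype(int)
model = smf.logit("mn_present ~ overweight + risky_foods_per_week + passive_smoking",
                  data=df).fit()
print(model.summary())
```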

Keywords: chronic respiratory diseases, environmental pollution, lifestyle, micronuclei

Procedia PDF Downloads 186