Search results for: speed detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6057

207 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System

Authors: Asegid Belay Kebede, Getachew Biru Worku

Abstract:

The Earth and its inhabitants face an existential threat as a result of severe man-made actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, rising temperatures, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous greenhouse gases (GHG). The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few sectors worldwide where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which might be achieved by incorporating alternative and/or renewable energy sources. As a result of the rapidly changing global energy situation and rapidly dwindling fossil fuel supplies, we were driven to analyze the potential of renewable energy sources for traction applications. Even a small achievement in energy conservation or harnessing might significantly influence the total railway system and has the potential to transform the railway sector like never before. The paper therefore begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and on existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. To this end, a method based on Google Earth (using the HelioScope software) is developed to assess the PV potential along rail lines and on train station roofs. As an example, the Addis Ababa light rail transit system (AA-LRTS) is used. The case study examines the electricity-generating potential and economic performance of photovoltaics installed on the AA-LRTS. The overall capacity of the solar systems at all stations, including train rooftops, reaches 72.6 MWh per day, with an annual power output of 10.6 GWh. Over a 25-year lifespan, the overall CO2 emission reduction and the total profit from the PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment, all PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that the PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
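
As a rough illustration of the simple payback arithmetic behind figures of this kind, the sketch below combines the reported annual output and tariff with an assumed installed cost; the capital-cost figure is a hypothetical placeholder, not a value from the study.

```python
# Minimal sketch of a simple payback calculation for a PV installation.
# The annual output (10.6 GWh) and tariff (USD 0.08/kWh) are taken from the
# abstract; the installed cost is a hypothetical placeholder, not a study figure.

ANNUAL_ENERGY_KWH = 10.6e6       # reported annual PV output, kWh
TARIFF_USD_PER_KWH = 0.08        # reported price of solar-generated power
ASSUMED_CAPEX_USD = 8.0e6        # hypothetical total installed cost (assumption)
LIFETIME_YEARS = 25              # project lifespan considered in the study

annual_revenue_usd = ANNUAL_ENERGY_KWH * TARIFF_USD_PER_KWH
simple_payback_years = ASSUMED_CAPEX_USD / annual_revenue_usd
lifetime_revenue_usd = annual_revenue_usd * LIFETIME_YEARS

print(f"Annual revenue: {annual_revenue_usd / 1e6:.2f} M USD")
print(f"Simple payback: {simple_payback_years:.1f} years")
print(f"Revenue over {LIFETIME_YEARS} years: {lifetime_revenue_usd / 1e6:.1f} M USD")
```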

Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit

Procedia PDF Downloads 60
206 A Vision-Making Exercise for the Twente Region: Development and Assessment

Authors: Gelareh Ghaderi

Abstract:

The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) until 2040 and to assess the impacts of those two alternative plans. This region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market, and retain a skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and an economy-oriented vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center, driven by green technologies and an environment-friendly economy. The economy-oriented vision is based on attracting and developing different economic activities in the region, building on the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region at the national and international scale. On the basis of the two developed visions and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on a SWOT analysis, criteria were formulated and employed in modeling the two contrasting land-use visions for the year 2040. Land-use modeling consists of determining future land-use demand, assessing land suitability (suitability analysis), and allocating land uses to suitable land. Suitability analysis aims to determine the available supply of land for future development as well as to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was performed using CommunityViz, a planning support system application for spatially explicit land suitability and allocation. Netwerkstad Twente has highly developed transportation infrastructure, consisting of a highway network, a national road network, a regional road network, a street network, a local road network, a railway network, and a bike-path network. Based on assumed speed limits for the different road types, the accessibility of the predicted land-use parcels by four different transport modes is investigated. For the evaluation of the two development scenarios, the multi-criteria evaluation (MCE) method is used. The first step was to determine the criteria used for the evaluation of each vision; all factors were categorized as economic, ecological, or social. The results of the multi-criteria evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores on the economic and ecological factors. This is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project will define the eastern part of the Netherlands and create a truly competitive local economy with innovation and an attractive environment as its backbone.
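
As a rough sketch of the weighted-overlay logic that suitability analyses of this kind typically use (not the actual CommunityViz implementation), the snippet below scores hypothetical parcels against normalized criteria with user-defined weights and ranks them for allocation.

```python
# Generic weighted-overlay suitability scoring, as commonly used in land-use
# suitability analysis. Criterion values are assumed to be normalized to 0-1;
# the parcels, criteria, and weights below are hypothetical examples.

criteria_weights = {"accessibility": 0.4, "slope": 0.2, "proximity_to_jobs": 0.4}

parcels = {
    "parcel_A": {"accessibility": 0.9, "slope": 0.8, "proximity_to_jobs": 0.7},
    "parcel_B": {"accessibility": 0.4, "slope": 0.9, "proximity_to_jobs": 0.3},
}

def suitability_score(criteria: dict, weights: dict) -> float:
    """Weighted sum of normalized criterion values (0 = unsuitable, 1 = ideal)."""
    return sum(weights[name] * criteria[name] for name in weights)

# Rank parcels so that the most suitable land is allocated first.
ranked = sorted(parcels,
                key=lambda p: suitability_score(parcels[p], criteria_weights),
                reverse=True)
for parcel in ranked:
    print(parcel, round(suitability_score(parcels[parcel], criteria_weights), 2))
```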

Keywords: economy-oriented vision, environment-oriented vision, infrastructure, land use, multi-criteria assessment, vision

Procedia PDF Downloads 202
205 Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents

Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme

Abstract:

The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, with ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants, and several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydro-isocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines might be a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on white wine physicochemical characteristics. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal, and vegetable proteins) were used to explore new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. Wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H2O. OTA analysis was performed by HPLC with fluorescence detection. The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation that contains gelatin, bentonite, and activated carbon. Removals between 10% and 30% were obtained with potassium caseinate, yeast cell walls, and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone, and chitosan, no considerable OTA removal was observed. Subsequently, the effectiveness of seven commercial activated carbons was also evaluated and compared with that of the commercial formulation containing gelatin, bentonite, and activated carbon. The different activated carbons were applied at the concentrations recommended by the manufacturers in order to evaluate their efficiency in reducing OTA levels. Trials and OTA analyses were performed as described previously. The results showed that, in white wine, all activated carbons except one removed 100% of the OTA, whereas the commercial formulation containing gelatin, bentonite, and activated carbon reduced the OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.

Keywords: wine, OTA removal, food safety, fining

Procedia PDF Downloads 508
204 Vortex Generation to Model the Airflow Downstream of a Piezoelectric Fan Array

Authors: Alastair Hales, Xi Jiang, Siming Zhang

Abstract:

Numerical methods are used to generate vortices in a domain. Through considered design, two counter-rotating vortices may interact and effectively drive one another downstream. This phenomenon is comparable to the vortex interaction that occurs in the region immediately downstream of two counter-oscillating piezoelectric (PE) fan blades. PE fans are small blades clamped at one end and driven to oscillate at their first natural frequency by an extremely low-powered actuator. In operation, the high oscillation amplitude and frequency generate sufficient blade tip speed through the surrounding air to create downstream airflow. PE fans are considered an ideal solution for low-power hot-spot cooling in a range of small electronic devices, but a single blade does not typically induce enough airflow to be considered a direct alternative to conventional air movers, such as axial fans. The development of face-to-face PE fan arrays containing multiple blades oscillating in counter-phase with one another is essential for expanding the range of potential PE fan applications regarding the cooling of power electronics. Even in an unoptimised state, these arrays are capable of moving air volumes comparable to axial fans with less than 50% of the power demand. Replicating the airflow generated by face-to-face PE fan arrays without including the actual blades in the model reduces the computational demands of the process and enhances the rate of innovation and development in the field. Vortices are generated at a defined inlet using a time-dependent velocity profile function, which pulsates the inlet air velocity magnitude. This induces vortex generation in the considered domain, and these vortices are shown to separate and propagate downstream in a regular manner. The generation and propagation of a single vortex are compared to those of an equivalent vortex generated from a PE fan blade in a previous experimental investigation. Vortex separation is found to be accurately replicated in the present numerical model. Additionally, the downstream trajectory of the vortices' centres varies by just 10.5%, and the size and strength of the vortices differ by a maximum of 10.6%. Through non-dimensionalisation, the numerical method is shown to be valid for PE fan blades with parameters differing from the specific case investigated. The thorough validation methods presented verify that the numerical model may be used to replicate vortex formation from an oscillating PE fan blade. An investigation is carried out to evaluate the effects of varying the distance between two PE fan blades, the pitch. At small pitches, the vorticity in the domain is maximised, along with turbulence in the near vicinity of the inlet zones. It is proposed that face-to-face PE fan arrays, oscillating in counter-phase, should have a minimal pitch to optimally cool nearby heat sources. On the other hand, downstream airflow is maximised at a larger pitch, where the vortices can fully form and effectively drive one another downstream. As such, a larger pitch should be used when bulk airflow generation is the desired result.
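
A time-dependent inlet velocity of the kind described can be expressed, for example, as a mean inflow modulated sinusoidally at the blade's oscillation frequency. The sketch below is a generic illustration with placeholder values, not the authors' exact profile function.

```python
# Illustrative time-dependent inlet velocity profile used to generate vortices:
# a mean inflow modulated sinusoidally at an assumed PE fan frequency.
# U_MEAN, AMPLITUDE and FREQUENCY_HZ are placeholder values, not study data.
import numpy as np

U_MEAN = 1.0          # mean inlet velocity magnitude, m/s (assumption)
AMPLITUDE = 0.8       # relative pulsation amplitude (assumption)
FREQUENCY_HZ = 60.0   # pulsation frequency, e.g. blade natural frequency (assumption)

def inlet_velocity(t: np.ndarray) -> np.ndarray:
    """Pulsating inlet velocity magnitude as a function of time."""
    return U_MEAN * (1.0 + AMPLITUDE * np.sin(2.0 * np.pi * FREQUENCY_HZ * t))

t = np.linspace(0.0, 0.1, 1000)   # 0.1 s of signal, uniformly sampled
u = inlet_velocity(t)
print(f"min = {u.min():.2f} m/s, max = {u.max():.2f} m/s")
```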

Keywords: piezoelectric fans, low energy cooling, vortex formation, computational fluid dynamics

Procedia PDF Downloads 154
203 Journey to Inclusive School: Description of Crucial Sensitive Concepts in the Context of Situational Analysis

Authors: Denisa Denglerova, Radim Sip

Abstract:

Academic sources as well as international agreements and national documents define inclusion in terms of several criteria: equal opportunities, fulfilling individual needs, development of human resources, and community participation. In order for these criteria to be met, the community must be cohesive. Community cohesion, which is a relatively new concept, is not determined by homogeneity, but by the acceptance of diversity among the community members and the utilisation of its positive potential. This brings us to a central category of inclusion: appreciating diversity and using it to positive effect. School diversity is a real phenomenon, which schools need to tackle more and more often; this is also indicated by the number of publications focused on diversity in schools. These sources present recent analyses of using identity as a tool for coping with the demands of a diversified society. The aim of this study is to identify and describe in detail the processes taking place in selected schools which contribute to their pro-inclusive character. The research is designed as a multiple case study of three pro-inclusive schools. Paradigmatically speaking, the research is rooted in situational epistemology. This is also related to the overall framework of interpretation, for which we use innovative methods of situational analysis. In terms of specific research outcomes, this manifests itself in replacing the idea of an "objective theory" with the idea of a "detailed cartography of a social world". The cartographic approach directs both the logic of data collection and the choice of methods for their analysis and interpretation. The research results include the detection of the following sensitive concepts. Key persons: all participants can contribute to promoting an inclusion-friendly environment; however, some do so with greater motivation than others. These could include school management, teachers with a strong vision of equality, or school counsellors. They have a significant effect on the transformation of the school and are themselves deeply convinced that inclusion is necessary. Accordingly, they select suitable co-workers; they also inspire some of the other co-workers to make changes, leading by example. Employees with strongly opposing views gradually leave the school, and new members of staff are introduced to the concept of inclusion and openness from the beginning. Manifestations of school openness in working with diversity on all important levels: by this we mean the positive handling of diversity both in the relationships between "traditional" school participants (directors, teachers, pupils) and in school-parent relationships, or in relationships between schools and the broader community, in teaching methods, as well as in the ways the school culture affects the school environment. Other important detected concepts significantly helping to form a pro-inclusive environment in the school are individual and parallel classes; freedom and responsibility of both pupils and teachers, manifested on the didactic level by tendencies towards an open curriculum; and ways of asserting discipline in the school environment.

Keywords: inclusion, diversity, education, sensitive concept, situational analysis

Procedia PDF Downloads 173
202 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding impairs accurate estimation of the actual burden of ADEs, coded data remain far more feasible for estimating adverse drug event admissions than the other available methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate, and advanced level coders, were purposefully selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet and sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. Coders avoid contacting the doctors to clarify small doubts, such as suspected ADEs and the names of medications, because of the delay in getting a reply from the doctors. They prefer to do some research themselves or seek help from their seniors and colleagues when making a decision, because they can thereby avoid a long wait for a reply from the doctors. Coders tend to think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacists' services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers to make informed decisions about these improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 101
201 The Importance of Development Evaluation for Preterm Children in Remote Areas

Authors: Chung-Yuan Wang, Min Hsu, Bo-Ya Juan, Hsiv Ching Lin, Hsveh Min Lin, Hsiu-Fang Yeh

Abstract:

The success of Taiwan's National Health Insurance (NHI) system has attracted widespread praise from the international community. However, the availability of medical care in remote areas is limited. Without a convenient public transportation system and a mature social welfare policy, it is difficult for people in these areas to regain their health and prevent disability. Preterm children are at higher risk of developmental delay, and preterm children in remote areas have the same right to rehabilitation resources as those in urban areas. Therefore, the aim of this study was to show the importance of developmental screening for preterm children in a remote area and to draw the government's attention to the issue. In Pingtung, children with suspected developmental delay are advised to undergo a detailed screening evaluation in our hospital. Preterm children (under one year of age) who visited our pediatric clinic were also referred for the developmental evaluation. After the physiatrist's systematic evaluation, the subjects were scheduled for the developmental evaluation, which covered gross motor, fine motor, speech comprehension/expression, and mental development. The evaluation was carried out by a physical therapist, an occupational therapist, a speech therapist, and a pediatric psychologist. The tools were the Peabody Developmental Scale, the Bayley Scales of Infant and Toddler Development (Bayley-III), and the Wechsler Preschool and Primary Scale of Intelligence-Revised (WPPSI-R). In 2013, 459 children received this service in our hospital. Among these children, fifty-seven had a history of preterm birth (gestation under 37 weeks). Thirty-six of these preterm children, who had never received a developmental evaluation, were included in this study. The thirty-six subjects comprised twenty-six males and ten females. Nineteen subjects were found to have developmental delay, and six were found to have suspected developmental delay. In gross motor skills, six subjects showed developmental delay and eight showed suspected delay; in fine motor skills, five showed delay and three showed suspected delay; in speech, sixteen showed delay and six showed suspected delay. In our study, through the provision of a developmental evaluation service, 72.2% of the preterm children were found to have developmental delay or suspected delay; they need further early-intervention rehabilitation services. We helped their parents realize that when developmental delay is recognized at an early stage, it is often reversible. Not only the patients but also their families improved their health status. The number of subjects in our study was limited, and further study may be needed. Compared with the 770 physical therapists (PTs) and 370 occupational therapists (OTs) in Taipei, there are only 108 PTs and 54 OTs in Pingtung, and far fewer therapists work in the field of pediatric rehabilitation. Living healthily is a human right, no matter where one lives. For children with developmental delay in remote areas, particularly preterm children, early detection and early-intervention rehabilitation services can play an important role in decreasing their disability and improving their quality of life. Through this study, we suggest that the government allocate more national resources to the developmental evaluation of preterm children in remote areas.

Keywords: development, early intervention, preterm children, rehabilitation

Procedia PDF Downloads 417
200 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

One of the most common pieces of equipment still used in forestry today for pruning, felling, and processing trees is the chainsaw. However, chainsaw use carries serious dangers and one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, in both severity and frequency, because of the risk of being hit by the tree the operator intends to fell. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use virtually without danger of injury. The limitations of the existing applications are as follows. Existing platforms are not symmetrically collaborative, because only the trainee is in virtual reality and the trainer can see the virtual environment only on a laptop or PC, which results in an inefficient teacher-learner relationship. Moreover, most applications only involve the use of a virtual chainsaw, so the trainee cannot feel the real weight and inertia of an actual chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with a real chainsaw overlaid on the virtual one in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. The application is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. The PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tenses the appropriate muscles, and performs the appropriate movements during the exercise, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: felling a tree tilted forward, felling a medium-sized tree tilted backward, felling a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to test both the increase in learning compared to a theoretical lecture and the immersion and telepresence of the platform.
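
The ordered-collider logic described above can be sketched, outside of Unity, as a simple state machine that only lets the tree fall when the cuts are triggered in the prescribed order. The cut names below are hypothetical, and the snippet is illustrative rather than the platform's actual C# code.

```python
# Illustrative state machine for the safe-felling cut sequence: the tree can
# only fall if the cuts (collider triggers) occur in the prescribed order.
# The cut names are hypothetical examples, not the platform's actual colliders.

SAFE_SEQUENCE = ["notch_top_cut", "notch_bottom_cut", "felling_back_cut"]

class FellingExercise:
    def __init__(self, required_sequence):
        self.required = list(required_sequence)
        self.progress = 0            # index of the next expected cut
        self.tree_felled = False

    def register_cut(self, cut_name: str) -> str:
        if self.tree_felled:
            return "tree already on the ground"
        if cut_name == self.required[self.progress]:
            self.progress += 1
            if self.progress == len(self.required):
                self.tree_felled = True
                return "correct final cut: tree falls safely"
            return "correct cut: continue the sequence"
        self.progress = 0            # wrong order: reset and give feedback
        return "unsafe cut order: sequence reset"

exercise = FellingExercise(SAFE_SEQUENCE)
for cut in ["notch_top_cut", "felling_back_cut", "notch_top_cut",
            "notch_bottom_cut", "felling_back_cut"]:
    print(cut, "->", exercise.register_cut(cut))
```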

Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training

Procedia PDF Downloads 88
199 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a detailed activated sludge kinetic model was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the results forecasted by the BioWin model; the amount of mixed liquor suspended solids in the oxidation ditch, the aeration rates, and the recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as functions of time over extended periods. The lumped dynamic model development was coupled with computational fluid dynamics (CFD) modeling of key units, such as the oxidation ditches in the plant. Several CFD models incorporating nitrification-denitrification kinetics as well as hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. With modest tuning, these models can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the plant's discharge limits. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that are most important for modeling biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that the formation of dead zones increases along the length of the oxidation ditches compared with the regions near the aerators. These profiles were also very useful in studying mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing plant performance.

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 177
198 Chemical Profiling of Hymenocardia acida Stem Bark Extract and Modulation of Selected Antioxidant and Esterase Enzymes in Kidney and Heart of Wistar Rats

Authors: Adeleke G. E., Bello M. A., Abdulateef R. B., Olasinde T. T., Oriaje K. O., AransiI A., Elaigwu K. O., Omidoyin O. S., Shoyinka E. D., Awoyomi M. B., Akano M., Adaramoye O. A.

Abstract:

Hymenocardia acida belongs to the family Hymenocardiaceae, which is widely distributed in Africa. Both the leaf and the stem bark of the plant have been used in the treatment of several diseases. The present study examined the chemical constituents of the H. acida stem bark extract (HASBE) and its effects on some antioxidant indices and esterase enzymes in female Wistar rats. The HASBE was obtained by Soxhlet extraction using methanol and then subjected to atomic absorption spectroscopy (AAS) for elemental analysis and to Fourier-transform infrared (FT-IR) and ultraviolet (UV) spectroscopy for functional group analysis, while high-performance liquid chromatography (HPLC) and gas chromatography with flame ionization detection (GC-FID) were carried out for compound identification. Forty-eight female Wistar rats were assigned to eight groups of six rats each and administered orally with normal saline (control) or 50, 100, 150, 200, 250, 300, or 350 mg/kg of HASBE twice per week for eight weeks. The rats were sacrificed under chloroform anesthesia, and the kidneys and heart were excised and processed to obtain homogenates. The levels of superoxide dismutase (SOD), catalase, malondialdehyde (MDA), glutathione peroxidase (GPx), acetylcholinesterase (AChE), and carboxylesterase (CE) were determined spectrophotometrically. The AAS of HASBE showed the presence of eight elements: cobalt (0.303), copper (0.222), zinc (0.137), iron (2.027), nickel (1.304), chromium (0.313), manganese (0.213), and magnesium (0.337 ppm). The FT-IR spectrum of HASBE showed four peaks at 2961.4, 2926.0, 1056.7, and 1034.3 cm-1, while UV analysis showed a maximum absorbance (0.522) at 205 nm. The HPLC profile of HASBE indicated the presence of four major compounds, orientin (77%), β-sitosterol (6.58%), rutin (5.02%), and betulinic acid (3.33%), while the GC-FID results showed five major compounds: rutin (53.27%), orientin (13.06%), stigmasterol (11.73%), hymenocardine (6.43%), and homopterocarpin (5.29%). SOD activity was significantly (p < 0.05) lowered in the kidney but elevated in the heart, while catalase was elevated in both organs relative to control rats. GPx activity was significantly elevated only in the kidney, while MDA was not significantly (p > 0.05) affected in either organ compared with controls. AChE activity was significantly elevated in both organs, while CE activity was elevated only in the kidney relative to control rats. The present study reveals that Hymenocardia acida stem bark extract mainly contains orientin, rutin, stigmasterol, hymenocardine, β-sitosterol, homopterocarpin, and betulinic acid. In addition, these compounds could possibly enhance redox status and esterase activities in the kidney and heart of Wistar rats.

Keywords: Hymenocardia acida, elemental analysis, compound identification, redox status, organs

Procedia PDF Downloads 124
197 Linguistic Cyberbullying: A Legislative Approach

Authors: Simona Maria Ignat

Abstract:

Online bullying has been an increasingly studied topic in recent years, and different approaches, psychological, linguistic, or computational, have been applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, usable as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and of the algorithms underlying them. This research paper focuses on the identification of words or groups of words, categorized as "utterances", with bullying effect, from the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of legal documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were the following. Discourse analysis was applied to identify keywords with bullying effect in texts retrieved via the Google search engine (Images). Transcription and anonymization were applied to the texts grouped into the first corpus, CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria were then used to identify bullying utterances on Twitter; texts with at least 30 occurrences on Twitter were grouped and form the second corpus, Bullying Utterances from Twitter (CL2). The entries were identified by applying the legislative criteria on the principle of the bag-of-words (BoW) method, a method of extracting words or groups of words with the same meaning in any context. The method applied for the second objective was the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of the parts of speech was chosen on the criterion of relevance within the bullying message. An inductive reasoning approach was applied in sampling and identifying the algorithms; the results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content, or meaning. The form conveys the intentional intimidation of somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome related to form is a complex of graphemic variations essential for detecting harmful texts online; this research thus enriches the lexicon already known on the topic. The second aspect, the content, revealed topics such as threat, harassment, assault, or suicide. These are subcategories of broader harmful content, which is a constant concern for task forces and legislators at national and international levels. These topics, as outcomes of the dataset, are a valuable source for detection, and the analysis of content revealed algorithms and lexicons that could be applied to other harmful content. A third outcome of the content analysis concerns stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
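
A minimal sketch of the keyword-search step over tweets is given below, assuming an illustrative keyword list; the keywords and example texts are placeholders, not the legislative lexicon built in the study.

```python
# Minimal keyword-based flagging of candidate bullying utterances, in the
# spirit of a bag-of-words keyword search. The keyword list and example texts
# are illustrative placeholders, not the legislative lexicon built in the study.
import re
from collections import Counter

KEYWORDS = {"threaten", "humiliate"}  # placeholder lexicon entries

def tokenize(text: str) -> list:
    return re.findall(r"[a-z']+", text.lower())

def flag_utterance(text: str) -> bool:
    """Return True if any lexicon keyword occurs in the tokenized text."""
    return any(token in KEYWORDS for token in tokenize(text))

tweets = [
    "this is an ordinary message",
    "a message that appears to threaten someone",
]

flagged = [t for t in tweets if flag_utterance(t)]
counts = Counter(tok for t in flagged for tok in tokenize(t) if tok in KEYWORDS)
print(flagged)
print(counts)
```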

Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, Twitter

Procedia PDF Downloads 60
196 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implications for Forensic Laboratory Backlog

Authors: Ana Flavia Belchior De Andrade

Abstract:

Forensic laboratories all over the world face a great challenge in overcoming waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, an increase in the number of exams requested, and funding cuts that limit laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories in keeping both quality and response time within acceptable limits. In this paper, we analyze how the backlog affects test results and, ultimately, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of requested exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate; output has stagnated because, with the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most prevalent exam requested, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When marijuana exams are delayed, we notice an increase in inconclusive results from thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, the inconclusive rate increases to 13%. This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours of our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be considered regarding backlogged cases; there are also social issues, as legal procedures can be delayed and the prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: at some point, if the exam takes too long to be performed, an inconclusive result can turn into a negative one, and a criminal can be acquitted on the basis of flawed expert evidence.

Keywords: backlog, forensic laboratory, quality management, accreditation

Procedia PDF Downloads 98
195 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in the geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally occurring radioactive tracers, but also in the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limits of the usual elemental microprobe techniques are far higher than the concentrations of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed with the help of Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within those areas. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geomaterials by coupling it with scanning electron microscope characterizations. The direct application of this dual (energy-position) modality of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 127
194 Mental Health Surveys on Community and Organizational Levels: Challenges, Issues, Conclusions and Possibilities

Authors: László L. Lippai

Abstract:

In addition to the fact that mental health bears great significance for a particular individual, it can also be regarded as an organizational, community, and societal resource. Within the Szeged Health Promotion Research Group, we conducted mental health surveys on two levels: the inhabitants of a medium-sized Hungarian town and the students of a Hungarian university with a relatively large headcount were requested to participate in surveys whose goals were to define local government priorities and organization-level health promotion programmes, respectively. To facilitate professional decision-making, we defined three pragmatically relevant groups within the target population: the mentally healthy, the vulnerable, and the endangered. In order to determine which group a person actually belongs to, we designed a simple and quick measurement tool, the Mental State Questionnaire, which could even be utilised as a screening method; the validity of the above three categories was verified by analysis of variance against psychological quality-of-life variables. We demonstrate the pragmatic significance of our method via the analyses of the scores of our two mental health surveys. At the town level, during our representative survey in Hódmezővásárhely (N=1839), we found that 38.7% of the participants were mentally healthy, 35.3% were vulnerable, and 16.3% were considered endangered. We were able to identify groups that were in a dramatic state in terms of mental health. For example, one such group consisted of men aged 45 to 64 with only a primary education qualification, among whom the ratios of the mentally healthy, vulnerable, and endangered were 4.5%, 45.5%, and 50%, respectively. It was also astonishing to see to how small an extent qualification prevailed as a protective factor in the case of women. Based on our data, the female group aged 18 to 44 with primary education (of whom 20.3% were mentally healthy, 42.4% vulnerable, and 37.3% endangered), as well as the female group aged 45 to 64 with a university or college degree (of whom 25% were mentally healthy, 51.3% vulnerable, and 23.8% endangered), are in a similarly difficult position and are to be handled as priority intervention target groups. At the organizational level, our survey involving the students of the University of Szeged (N=1565) provided data to prepare a mental health promotion strategy for a university with a headcount exceeding 20,000. When developing an organizational strategy, it was important to gather information to estimate the proportions of the target groups to which mental health promotion methods (for example, life management skills development, detection, psychological consultancy, and psychotherapy) would be applied. Our scores show that 46.8% of the student participants were mentally healthy, 42.1% were vulnerable, and 11.1% were endangered. These data convey relevant information for the allocation of organizational resources within a university with a considerable headcount. In conclusion, the Mental State Questionnaire, as a valid screening method, is adequate for describing a community in a plain and informative way in terms of mental health. The application of the method can promote the preparation, design, and implementation of mental health promotion interventions.

Keywords: health promotion, mental health promotion, mental state questionnaire, psychological well-being

Procedia PDF Downloads 280
193 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures

Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov

Abstract:

At present, heterostructures are used for the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light-emitting diodes (LEDs) operating in the mid-wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser diode spectroscopy of gases and molecules, for systems for the detection of explosive substances, for medical applications, and for environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over radiative recombination, which makes the practical application of LEDs difficult. However, non-radiative recombination can be partly suppressed in quantum-well structures; in this regard, studies of such structures are quite topical. In this work, the electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films, with the molar fraction of InSb ranging from 0 to 0.09, and on multi-quantum-well (MQW) structures was studied in the temperature range 4.2-300 K. The growth of the heterostructures was performed by metal-organic chemical vapour deposition on InAs substrates. On top of the active layer, a wide-bandgap InAsSb(Ga,P) barrier was formed. At low temperatures (4.2-100 K), stimulated emission was observed; as the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetimes, limited by radiative recombination and by the most probable Auger processes (for the materials under consideration, CHHS and CHCC), were calculated within the framework of the Kane model. The effect of the various recombination processes on the carrier lifetime was studied, and the dominant role of Auger processes was established. For the MQW structures, the quantization energies for electrons and for light and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks with energies different from those of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The obtained results showed a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the mid-infrared range.
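
As a schematic reminder of how the competing channels combine (a textbook sketch, not necessarily the exact expressions used with the Kane model in this work), the total carrier lifetime can be written as a sum of reciprocal lifetimes, with the Auger rate growing faster with carrier density than the radiative rate:

```latex
\frac{1}{\tau} = \frac{1}{\tau_{\mathrm{rad}}} + \frac{1}{\tau_{\mathrm{CHCC}}} + \frac{1}{\tau_{\mathrm{CHHS}}},
\qquad
\frac{1}{\tau_{\mathrm{rad}}} \sim B\,n,
\qquad
\frac{1}{\tau_{\mathrm{Auger}}} \sim C\,n^{2}
```

Here n is the carrier concentration and B and C denote radiative and Auger coefficients; because the Auger terms scale more steeply with n and grow with temperature, they dominate at room temperature, which is the efficiency limitation discussed above.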

Keywords: electroluminescence, InAsSb, light-emitting diode, quantum wells

Procedia PDF Downloads 186
192 Gamifying Content and Language Integrated Learning: A Study Exploring the Use of Game-Based Resources to Teach Primary Mathematics in a Second Language

Authors: Sarah Lister, Pauline Palmer

Abstract:

Research findings presented within this paper form part of a larger-scale collaboration between academics at Manchester Metropolitan University and a technology company. The overarching aims of this project focus on developing a series of game-based resources to promote the teaching of aspects of mathematics through a second language (L2) in primary schools. This study explores the potential of game-based learning (GBL) as a dynamic way to engage and motivate learners, making learning fun and purposeful. The research examines the capacity of GBL resources to provide a meaningful and purposeful context for CLIL. GBL is a powerful learning environment and acts as an effective vehicle to promote the learning of mathematics through an L2. The fun element of GBL can minimise the stress and anxiety associated with mathematics and L2 learning that can create barriers. GBL provides one of the few safe domains where it is acceptable for learners to fail. Games can provide a life-enhancing experience for learners, revolutionising routinised ways of learning by fusing learning and play. This study argues that playing games requires learners to think creatively to solve mathematical problems, using the L2 in order to progress, which can be associated with the development of higher-order thinking skills and independent learning. GBL requires learners to engage appropriate cognitive processes with increased speed of processing, sensitivity to environmental inputs, and flexibility in allocating cognitive and perceptual resources. At surface level, GBL resources provide opportunities for learners to learn to do things. Games that fuse subject content and appropriate learning objectives have the potential to make learning academic subjects more learner-centred, to promote learner autonomy, and to make learning easier, more enjoyable, more stimulating and engaging, and therefore more effective. Data include observations of the children playing the games and follow-up group interviews. Given that learning as a cognitive event cannot be directly observed or measured, a Cognitive Discourse Functions (CDF) construct was used to frame the research, to map the development of learners' conceptual understanding in an L2 context, and as a framework to observe the discursive interactions that occur between learners and between learner and teacher. Cognitively, the children were required to engage with mathematical content, concepts, and language to make decisions quickly, to engage with the gameplay to reason, solve, and overcome problems, and to learn through experimentation. The visual elements of the games supported the learning of new concepts. Children recognised the value of the games in consolidating their mathematical thinking and developing their understanding of new ideas. The games afforded them time to think and reflect. The teachers affirmed that the games provided meaningful opportunities for the learners to practise the language. The findings of this research support the view that using the game-based resources supported children's grasp of mathematical ideas and their confidence and ability to use the L2. Engaging with the content and language through the games led to deeper learning.

Keywords: CLIL, gaming, language, mathematics

Procedia PDF Downloads 116
191 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read the Kannada Language between 5.6 and 8.6 Years

Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and Need: Phonological processing is critical in learning to read both alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada word decoding during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content-validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping in initial, medial, and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and the phonological memory measures were scored for accuracy of response, whereas rapid automatized naming (RAN) was scored for total naming speed. Results: Comparison of the mean scores using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparisons using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks, except for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III (p ≥ 0.05). Pearson correlations revealed highly significant positive correlations (p < 0.001) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Among the phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
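
A minimal numerical sketch of the hierarchical-regression logic described above (with synthetic data, not the study's scores): fit the syllable-awareness-only model first, then add RAN and pseudoword repetition and compare R².

```python
# Hierarchical regression sketch with ordinary least squares (numpy only):
# step 1 uses syllable awareness alone; step 2 adds RAN and pseudoword
# repetition; the change in R^2 shows the added predictors' unique contribution.
# The data below are synthetic placeholders, not the study's scores.
import numpy as np

rng = np.random.default_rng(0)
n = 60
syllable_awareness = rng.normal(size=n)
ran_speed = rng.normal(size=n)
pseudoword_repetition = rng.normal(size=n)
# Synthetic outcome driven mostly by syllable awareness.
pseudoword_reading = 0.9 * syllable_awareness + 0.1 * rng.normal(size=n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

r2_step1 = r_squared(np.column_stack([syllable_awareness]), pseudoword_reading)
r2_step2 = r_squared(np.column_stack([syllable_awareness, ran_speed,
                                      pseudoword_repetition]), pseudoword_reading)
print(f"R^2 (syllable awareness only): {r2_step1:.3f}")
print(f"Delta R^2 after adding RAN and repetition: {r2_step2 - r2_step1:.3f}")
```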

Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding

Procedia PDF Downloads 146
190 Aerosol Chemical Composition in Urban Sites: A Comparative Study of Lima and Medellin

Authors: Guilherme M. Pereira, Kimmo Teinïla, Danilo Custódio, Risto Hillamo, Célia Alves, Pérola de C. Vasconcellos

Abstract:

South American large cities often present serious air pollution problems, and their atmospheric composition is influenced by a variety of emission sources. The South American Emissions, Megacities and Climate (SAEMC) project has focused on the study of emissions and their influence on climate in the largest South American cities, and it also included Lima (Peru) and Medellin (Colombia), sites where few studies of this kind have been conducted. Lima is a coastal city with more than 8 million inhabitants and the second largest city in South America. Medellin, with 2.5 million inhabitants, is the second largest city in Colombia and is situated in a valley. Samples were collected on quartz fiber filters using high-volume samplers (Hi-Vol) over 24-hour sampling periods, during intensive campaigns at both sites in July 2010. Several species were determined in the aerosol samples of Lima and Medellin: organic and elemental carbon (OC and EC) by thermal-optical analysis; biomass burning tracers (levoglucosan - Lev, mannosan - Man, and galactosan - Gal) by high-performance anion exchange ion chromatography with mass spectrometric detection; and water-soluble ions by ion chromatography. The average particulate matter was similar for both campaigns; the PM10 concentrations were above the World Health Organization daily guideline (50 µg m⁻³) in 40% of the samples in Medellin, while in Lima they were above that value in 15% of the samples. The average total ion concentration was higher in Lima (17450 ng m⁻³ in Lima and 3816 ng m⁻³ in Medellin), and the average concentrations of sodium and chloride were higher at this site; these species were also better correlated (Pearson's coefficient = 0.63), suggesting a higher influence of marine aerosol due to the site's location on the coast. Sulphate concentrations were also much higher at the Lima site, which may be explained by a greater influence of marine-derived sulphate. However, the OC, EC, and monosaccharide average concentrations were higher at the Medellin site; this may be due to the lower dispersion of pollutants caused by the site's location and a larger influence of biomass burning sources. The average levoglucosan concentration was 95 ng m⁻³ for Medellin and 16 ng m⁻³ for Lima, and OC was well correlated with levoglucosan (Pearson's coefficient = 0.86) in Medellin, suggesting a higher influence of biomass burning on the organic aerosol at this site. The Lev/Man ratio is often related to the type of biomass burned and was close to 18, similar to that observed in previous studies at biomass-burning-impacted sites in the Amazon region; backward trajectories also suggested the transport of aerosol from that region. Biomass burning appears to have a larger influence on air quality in Medellin, in addition to vehicular emissions, while Lima showed a larger influence of marine aerosol during the study period.
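The two source-attribution diagnostics mentioned above, the Pearson correlation between OC and levoglucosan and the Lev/Man ratio, can be reproduced with a few lines of code. The sketch below uses hypothetical concentrations, not the campaign data.

```python
# Illustrative calculation of the two source-attribution diagnostics mentioned above:
# the Pearson correlation between OC and levoglucosan, and the Lev/Man ratio.
# Concentrations below are placeholder values, not the campaign data.
import numpy as np
from scipy.stats import pearsonr

oc = np.array([4.1, 5.3, 6.8, 7.2, 5.9, 8.4])              # µg m⁻³ (hypothetical)
levoglucosan = np.array([60, 85, 110, 120, 95, 140.0])      # ng m⁻³ (hypothetical)
mannosan = np.array([3.5, 4.8, 6.0, 6.6, 5.2, 7.9])         # ng m⁻³ (hypothetical)

r, p = pearsonr(oc, levoglucosan)
print(f"OC vs levoglucosan: r = {r:.2f}, p = {p:.3f}")

lev_man = levoglucosan / mannosan
print(f"Mean Lev/Man ratio = {lev_man.mean():.1f}")  # a value near 18 was reported for Medellin
```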

Keywords: aerosol transport, atmospheric particulate matter, biomass burning, SAEMC project

Procedia PDF Downloads 243
189 Immune Responses and Pathological Manifestations in Chicken to Oral Infection with Salmonella typhimurium

Authors: Mudasir Ahmad Syed, Raashid Ahmd Wani, Mashooq Ahmad Dar, Uneeb Urwat, Riaz Ahmad Shah, Nazir Ahmad Ganai

Abstract:

Salmonella enterica serovar Typhimurium (Salmonella Typhimurium) is a primary avian pathogen responsible for severe intestinal pathology and economic losses in younger chickens. However, Salmonella Typhimurium is also able to cause infection in humans, characterized by typhoid fever and acute gastro-intestinal disease. A study was conducted to investigate the pathological, histopathological, haemato-biochemical, and immunological responses and the expression kinetics of the NRAMP (natural resistance associated macrophage protein) gene family (NRAMP1 and NRAMP2) in broiler chickens at 0, 1, 3, 5, 7, 9, 11, 13, and 15 days following experimental infection with Salmonella Typhimurium. Infection was established in birds through the oral route at 2×10⁸ CFU/ml. Clinical symptoms appeared 4 days post infection (dpi), and after one week birds showed progressive weakness, anorexia, diarrhea, and lowering of the head. On postmortem examination, the liver showed congestion, hemorrhage, and necrotic foci on the surface, while the spleen, lungs, and intestines revealed congestion and hemorrhages. Histopathological alterations were principally observed in the liver in the second week post infection. Changes in the liver comprised congestion, areas of necrosis, and reticuloendothelial hyperplasia in association with mononuclear cell and heterophilic infiltration. Hematological studies confirmed a significant decrease (P<0.05) in RBC count, Hb concentration, and PCV. The white blood cell count showed a significant increase throughout the experimental study. An increase in heterophils was found up to 7 dpi, and a decreasing pattern was observed afterwards. Initial lymphopenia followed by lymphocytosis was found in infected chicks. Biochemical studies showed a significant increase in glucose, AST, and ALT concentrations and a significant decrease (P<0.05) in total protein and albumin levels in the infected group. Immunological studies showed higher titers of IgG in the infected group as compared to the control group. The real-time gene expression of the NRAMP1 and NRAMP2 genes increased significantly (P<0.05) in the infected group as compared to controls. The peak expression of the NRAMP1 gene was seen in the liver, spleen, and caecum of infected birds at 3 dpi, 5 dpi, and 7 dpi respectively, while the peak expression of the NRAMP2 gene in the liver, spleen, and caecum of infected chickens was seen at 9 dpi, 5 dpi, and 9 dpi respectively. This study has a role in diagnostics and prognostics in the poultry industry for the detection of Salmonella infections at early stages of poultry development.
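The abstract does not state how relative expression was quantified. As one common possibility, the sketch below applies the Livak 2^(-ΔΔCt) method with hypothetical Ct values and a hypothetical reference gene (GAPDH), purely for illustration and not as the authors' pipeline.

```python
# The abstract reports real-time expression fold changes for NRAMP1/NRAMP2 but does not
# state the quantification method; this sketch uses the common Livak 2^-ΔΔCt approach
# with hypothetical Ct values and a hypothetical reference gene (GAPDH).
def fold_change(ct_target_infected, ct_ref_infected, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (infected vs control) by 2^-ΔΔCt."""
    delta_ct_infected = ct_target_infected - ct_ref_infected
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_infected - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: NRAMP1 in liver at 3 dpi (all Ct values are made up for illustration)
fc = fold_change(ct_target_infected=24.0, ct_ref_infected=18.0,
                 ct_target_control=26.5, ct_ref_control=18.2)
print(f"NRAMP1 fold change (infected vs control): {fc:.2f}")
```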

Keywords: biochemistry, histopathology, NRAMP, poultry, real time expression, Salmonella Typhimurium

Procedia PDF Downloads 315
188 Effects of Temperature and Mechanical Abrasion on Microplastics

Authors: N. Singh, G. K. Darbha

Abstract:

Over the last decade, a wave of research has begun to study the prevalence and impact of ever-increasing plastic pollution in the environment. The wide application and ubiquitous distribution of plastic have become a global concern due to its persistent nature. The disposal of plastics has emerged as one of the major challenges for waste management landfills. Microplastics (MPs) are now found in almost every environment, from high-altitude mountain lakes to deep-sea sediments, polar icebergs, coral reefs, estuaries, beaches, and rivers. Microplastics are fragments of plastic smaller than 5 mm and can be classified as primary or secondary microplastics. Primary microplastics include those purposefully introduced into end products for consumers (microbeads used in facial cleansers and personal care products), pellets (used in manufacturing industries), or fibres (from textile industries), which finally enter the environment. Secondary microplastics are formed by the disintegration of larger fragments under the exposure of sunlight and mechanical abrasive forces by rain, waves, wind, and/or water. A number of factors affect the quantity of microplastic present in freshwater environments. In addition to physical forces, the human population density proximal to the water body, proximity to urban centres, water residence time, and size of the water body also affect plastic properties. With time, other complex natural processes, whether physical, chemical, or biological, break down plastics by interfering with their structural integrity. Several studies have demonstrated that microplastics are found in wastewater sludge used as manure on agricultural fields, where they can alter the soil environment and influence the microbial population as well. Inadequate data are available on the fate and transport of microplastics under varying environmental conditions, which are required to supplement important information for further research. In addition, microplastics tend to adsorb heavy metals and hydrophobic organic contaminants such as PAHs and PCBs from their surroundings, thus acting as carriers for these contaminants in the environmental system. In this study, three kinds of microplastics (polyethylene, polypropylene, and expanded polystyrene) of different densities were chosen. Plastic samples were placed in sand with different aqueous media (distilled water, surface water, groundwater, and marine water) and incubated at varying temperatures (25, 35 and 40 °C) and agitation levels (rpm). The results show that the number of plastic fragments increased with increasing temperature and agitation speed. Moreover, the rate of disintegration of expanded polystyrene was high compared to the other plastics. These results demonstrate that temperature, salinity, and mechanical abrasion play a major role in the degradation of plastics. Since weathered microplastics are more harmful than virgin microplastics, long-term studies involving other environmental factors are needed to gain a better understanding of the degradation of plastics.

Keywords: environmental contamination, fragmentation, microplastics, temperature, weathering

Procedia PDF Downloads 137
187 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ called S, which are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
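A heavily simplified sketch of the corrector idea is given below: centre the measurements, select components with the Kaiser rule, whiten, and test new states against a separating hyperplane. The paper describes pairwise clusters each producing its own hyperplane; this single-hyperplane Python version with synthetic data is an illustrative assumption, not the authors' implementation.

```python
# Highly simplified sketch of the corrector idea described above: centre the measurement
# set, keep principal components selected by the Kaiser rule, whiten, and fit a single
# separating hyperplane that flags likely errors. The paper describes pairwise clusters
# and one hyperplane per cluster; this one-hyperplane version is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
dim = 50
M = rng.normal(0.0, 1.0, size=(500, dim))          # internal states of correct predictions
Y = rng.normal(1.5, 1.0, size=(20, dim))           # internal states of known errors
S = np.vstack([M, Y])

# 1) Centre the data on the mean of all measurements
mean = S.mean(axis=0)
Mc, Yc = M - mean, Y - mean

# 2) Kaiser-style selection: keep covariance eigenvectors with eigenvalue above the mean eigenvalue
cov = np.cov(np.vstack([Mc, Yc]), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
keep = eigvals > eigvals.mean()
P = eigvecs[:, keep] / np.sqrt(eigvals[keep])       # projection and whitening in one step

Mw, Yw = Mc @ P, Yc @ P

# 3) Hyperplane normal along the direction separating error and correct centroids
w = Yw.mean(axis=0) - Mw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = 0.5 * (Yw @ w).min() + 0.5 * (Mw @ w).max()   # illustrative threshold choice

def is_flagged(x):
    """Flag a new internal state x (raw, uncentred) as a likely error."""
    return float(((x - mean) @ P) @ w) > threshold

print("errors flagged:", sum(is_flagged(x) for x in Y), "of", len(Y))
print("correct flagged:", sum(is_flagged(x) for x in M), "of", len(M))
```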

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 83
186 The Relations between Language Diversity and Similarity and Adults' Collaborative Creative Problem Solving

Authors: Z. M. T. Lim, W. Q. Yow

Abstract:

Diversity in individual problem-solving approaches, culture, and nationality has been shown to have positive effects on collaborative creative processes in organizational and scholastic settings. For example, diverse graduate and organizational teams consisting of members with both structured and unstructured problem-solving styles were found to have more creative ideas on a collaborative idea generation task than teams composed solely of members with either structured or unstructured problem-solving styles. However, being different may not always benefit the collaborative creative process. In particular, speaking different languages may hinder mutual engagement through impaired communication and thus collaboration. Instead, sharing similar languages may have facilitative effects on mutual engagement in collaborative tasks. However, no studies have explored the relations between language diversity and adults' collaborative creative problem solving. Sixty-four Singaporean English-speaking bilingual undergraduates were paired into similar or dissimilar language pairs based on the second language they spoke (e.g., for similar language pairs, both participants spoke English-Mandarin; for dissimilar language pairs, one participant spoke English-Mandarin and the other spoke English-Korean). Each participant completed the Raven's Progressive Matrices Task individually. Next, they worked in pairs to complete a collaborative divergent thinking task in which they used mind-mapping techniques to brainstorm ideas on a given problem together (e.g., how to keep insects out of the house). Lastly, the pairs worked on a collaborative insight problem-solving task (Triangle of Coins puzzle) in which they needed to flip a triangle of ten coins around by moving only three coins. Pairs who had prior knowledge of the Triangle of Coins puzzle were asked to complete an equivalent matchstick task instead, in which they needed to make seven squares by moving only two matchsticks in a given array of matchsticks. Results showed that, after controlling for intelligence, similar language pairs completed the collaborative insight problem-solving task faster than dissimilar language pairs. Intelligence also moderated these relations: among adults of lower intelligence, similar language pairs solved the insight problem-solving task faster than dissimilar language pairs, whereas these differences in speed were not found in adults with higher intelligence. No differences were found in the number of ideas generated in the collaborative divergent thinking task between similar and dissimilar language pairs. Sharing similar languages thus seems to enrich collaborative creative processes, and these effects were especially pertinent to pairs with lower intelligence. This provides guidelines for the formation of groups based on shared languages in collaborative creative processes. However, the positive effects of shared languages appear to be limited to the insight problem-solving task and not the divergent thinking task. This could be due to the facilitative effects of other factors of diversity found in previous literature; background diversity, for example, may have a larger facilitative effect on the divergent thinking task than on the insight problem-solving task, owing to the varied experiences individuals bring to the task. In conclusion, this study contributes to the understanding of the effects of language diversity in collaborative creative processes and challenges the generally positive effects that diversity has on these processes.

Keywords: bilingualism, diversity, creativity, collaboration

Procedia PDF Downloads 285
185 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis

Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov

Abstract:

Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization of patients with gastrointestinal pathology. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) on the puncture trajectory, and a formation size of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ("double pigtail") was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a non-homogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative period was noted in 4 (6.3%) cases and was manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases, this required angiography and endovascular embolization of the a. gastroduodenalis; in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution; and a combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%): in 3 (4.7%) patients the cause of death was multiple organ failure, and in 1 (1.6%) it was severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP, but also allows further tactics for their intraluminal drainage to be determined. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.

Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage

Procedia PDF Downloads 82
184 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter

Authors: Jiri Cecrdle

Abstract:

This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the rotating parts of the engine. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically obtained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) in Prague, Czechia. The demonstrator represents the wing and engine of a twin turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and a thrusting propeller. It allows changes of the main structural parameters influencing the whirl flutter stability characteristics, and the propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain as well as pre-processing into power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining the propeller revolutions (constant or at a controlled ramp rate) and monitoring the instantaneous revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent any structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3 m diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and, finally, the angle of attack of the propeller blade 75% section. The excitation is provided either by airflow turbulence or by aerodynamic excitation through aileron flapping using a harmonic frequency sweep. The experimental results are planned to be used for validation of analytical methods and software tools in the frame of the development of a new complex multi-blade twin-rotor propulsion system for the new generation of regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator.
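As an illustration of the pre-processing step mentioned above, conversion of measured time histories into power spectral densities, the sketch below applies Welch's method to a synthetic accelerometer signal. The sampling rate, record length, and modal frequencies are assumed values, and the code is not part of the VZLU in-house program.

```python
# Generic sketch of the pre-processing step mentioned above: turning an accelerometer
# time history into a power spectral density (Welch's method). The synthetic signal,
# sampling rate and modal frequencies below are placeholders, not W-WING data.
import numpy as np
from scipy.signal import welch

fs = 1024.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s record
# Synthetic response: two lightly excited "modes" plus broadband turbulence noise
signal = (0.5 * np.sin(2 * np.pi * 6.0 * t)
          + 0.2 * np.sin(2 * np.pi * 14.0 * t)
          + 0.3 * np.random.default_rng(0).normal(size=t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)
peak = freqs[np.argmax(psd)]
print(f"dominant frequency ≈ {peak:.1f} Hz")
```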

Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator

Procedia PDF Downloads 68
183 ADAM10 as a Potential Blood Biomarker of Cognitive Frailty

Authors: Izabela P. Vatanabe, Rafaela Peron, Patricia Manzine, Marcia R. Cominetti

Abstract:

Introduction: Considering the increase in life expectancy of the world population, there is an emerging concern in health services to provide better care to the elderly through health promotion, prevention, and treatment. Frailty syndrome is prevalent in elderly people worldwide; this complex and heterogeneous clinical syndrome consists of the presence of physical frailty associated with cognitive dysfunction, though in the absence of dementia. It can be characterized by exhaustion, unintentional weight loss, decreased walking speed, weakness, and a low level of physical activity; in addition, each of these symptoms may be a predictor of adverse outcomes such as hospitalization, falls, functional decline, institutionalization, and death. Cognitive frailty is a recent concept in the literature, defined as the presence of physical frailty associated with mild cognitive impairment (MCI) in the absence of dementia. This new concept has been considered a subtype of frailty which, along with the aging process and its interaction with physical frailty, accelerates functional decline and can result in a poor quality of life for the elderly. MCI represents a risk factor for Alzheimer's disease (AD) in view of the high conversion rate to this disease. Comorbidities and physical frailty are frequently found in AD patients and are closely related to the heterogeneity and clinical manifestations of the disease. Previous work by this group showed decreased platelet ADAM10 levels in AD patients compared to cognitively healthy subjects matched by sex, age, and education. Objective: Based on these previous results, this study aims to evaluate whether platelet ADAM10 levels could act as a biomarker of cognitive frailty. Methods: The study was approved by the Ethics Committee of the Federal University of São Carlos (UFSCar) and conducted in the municipality of São Carlos, where the university is headquartered. Biological samples of subjects were collected, analyzed, and then stored in a biorepository. Platelet ADAM10 levels were analyzed by the western blotting technique in subjects with MCI and compared to subjects without cognitive impairment, both with and without frailty. Statistical tests of association, regression, and diagnostic accuracy were performed. Results: The results have shown that the ADAM10/β-actin ratio is decreased in elderly individuals with cognitive frailty compared to non-frail and cognitively healthy controls. Previous studies performed by this research group, already mentioned above, demonstrated that this reduction is even greater in AD patients. Therefore, the ADAM10/β-actin ratio appears to be a potential biomarker for cognitive frailty. The results bring important contributions to an accurate diagnosis of cognitive frailty from the perspective of ADAM10 as a biomarker for this condition; however, more experiments with a larger number of subjects are being conducted and will help to clarify the role of ADAM10 as a biomarker of cognitive frailty and contribute to the implementation of tools for its diagnosis. Such tools can be used in public policies for the diagnosis of cognitive frailty in the elderly, resulting in more adequate planning for health teams and a better quality of life for the elderly.
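As an illustration of the diagnostic accuracy analysis mentioned above, the sketch below computes an ROC curve and AUC for the ADAM10/β-actin ratio, using the negated ratio as the classification score since lower ratios are associated with cognitive frailty. The ratios and group labels are invented for illustration and are not the study data.

```python
# Illustrative sketch of a diagnostic-accuracy check for the ADAM10/β-actin ratio:
# since lower ratios are reported in cognitive frailty, the negated ratio is used as
# the classification score. Ratios and group labels below are made up, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

ratio = np.array([1.10, 0.95, 1.20, 0.60, 0.55, 0.70, 1.05, 0.65, 0.90, 0.50])
frail = np.array([0,    0,    0,    1,    1,    1,    0,    1,    0,    1])  # 1 = cognitive frailty

auc = roc_auc_score(frail, -ratio)            # negate: lower ratio should indicate frailty
fpr, tpr, thresholds = roc_curve(frail, -ratio)
youden = tpr - fpr                            # Youden index to pick an illustrative cut-off
best = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.2f}, suggested cut-off ratio ≈ {-best:.2f}")
```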

Keywords: ADAM10, biomarkers, cognitive frailty, elderly

Procedia PDF Downloads 205
182 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation onto a camera composed of an array of high-speed photosensors that are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons and the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must have Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were implemented for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests that were designed for equipment of limited dimensions to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 87
181 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing, and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol levels and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during its immobilization procedure and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with a deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was then purified, functionalized, and characterized by SEM, p-XRD, and Raman spectroscopy. The SEM micrographs showed the formation of tubular CCNT structures with diameters below 100 nm. The XRD pattern showed two predominant peaks at 25.2° and 43.8°, corresponding to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm⁻¹ (G-band), related to the vibration of sp²-bonded carbon, and a band at 1350 cm⁻¹ (D-band), responsible for the vibrations of sp³-bonded carbon. A non-enzymatic cholesterol biosensor was then fabricated on an insulating Teflon material containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), and it is ideal for working with 50 µL volumes like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of experimental parameters such as pH, accumulation time, and scan rate on the peak currents of cholesterol was optimized. Under optimum conditions, the peak current was found to be linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM⁻¹ cm⁻², a lower detection limit of 0.017 µM, and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial value after 30 days.
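The figures of merit quoted above are typically derived from the calibration line. The sketch below shows one common way to obtain the sensitivity (slope normalized by electrode area) and a detection limit estimate (3σ/slope), using hypothetical currents, blank noise, and electrode area rather than the reported measurements.

```python
# Sketch of how a calibration line is typically turned into the figures of merit quoted
# above: sensitivity as slope per electrode area and detection limit as 3σ/slope.
# The concentrations, currents, blank noise and electrode area are hypothetical values.
import numpy as np

conc = np.array([1, 5, 10, 20, 30, 40, 50.0])                 # µM
current = np.array([0.9, 4.2, 8.1, 16.3, 23.9, 32.2, 40.1])   # µA (hypothetical)
area_cm2 = 0.5                                                # electrode area (assumed)
sigma_blank = 0.0045                                          # std. dev. of blank current, µA (assumed)

slope, intercept = np.polyfit(conc, current, 1)               # linear calibration fit
sensitivity = slope / area_cm2                                # µA µM⁻¹ cm⁻²
lod = 3 * sigma_blank / slope                                 # common 3σ/slope estimate, µM

print(f"slope = {slope:.3f} µA/µM, sensitivity = {sensitivity:.2f} µA µM⁻¹ cm⁻²")
print(f"estimated detection limit ≈ {lod:.3f} µM")
```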

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 264
180 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumer acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to the desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount in dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed the automatic calculation of the total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing the prediction of total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to good fat segmentation, making this visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
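A rough sketch of the described image-analysis steps (noise reduction, Canny edge enhancement, thresholding, and area-fraction calculation) is given below using OpenCV. The synthetic toy image stands in for a scanned slice, and the code is an illustrative assumption rather than the authors' pipeline.

```python
# Rough sketch of the image-analysis steps described above (noise reduction, Canny edge
# enhancement, thresholding, area fractions). The synthetic "slice" image built below
# stands in for a scanned ham slice; it is not the authors' pipeline or data.
import numpy as np
import cv2

# Build a toy grey-scale image: dark "lean" background with brighter "fat" blobs
img = np.full((400, 600), 60, dtype=np.uint8)
cv2.circle(img, (150, 200), 60, 220, -1)     # an intermuscular fat region (toy)
cv2.circle(img, (420, 180), 25, 200, -1)     # an intramuscular fat fleck (toy)
img = cv2.GaussianBlur(img, (5, 5), 0)       # noise reduction step

edges = cv2.Canny(img, 50, 150)              # edge enhancement (Canny)
_, fat_mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

slice_area = img.size                        # here the whole frame stands in for the slice area
fat_fraction = 100.0 * np.count_nonzero(fat_mask) / slice_area
print(f"edge pixels: {np.count_nonzero(edges)}, fat area fraction ≈ {fat_fraction:.1f}%")
```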

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 155
179 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used by agricultural crops can be managed through irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, including assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds of soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured around a survey of about 100 recent research studies, analyzing the varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. The innovation of this paper lies in categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for, and the magnitude of, the errors arising when different approaches are employed in the three proposed parts, as reported by recent studies. Additionally, as an overview, the conclusion attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
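As an illustration of indices built from specific reflectance or thermal bands, the sketch below computes two standard examples, NDVI and a simplified empirical CWSI. These are generic textbook indices with hypothetical band values, not indices proposed by the reviewed studies.

```python
# The review discusses indices built from specific reflectance or thermal bands; the two
# shown here (NDVI and a simplified empirical CWSI) are standard illustrative examples,
# not indices proposed by the paper. All band values and temperatures are hypothetical.
import numpy as np

red = np.array([0.08, 0.10, 0.20])    # red-band reflectance of three pixels (hypothetical)
nir = np.array([0.45, 0.40, 0.25])    # near-infrared reflectance (hypothetical)
ndvi = (nir - red) / (nir + red)      # Normalized Difference Vegetation Index

t_canopy = np.array([27.0, 30.5, 33.0])      # canopy temperature, °C (hypothetical)
t_wet, t_dry = 24.0, 36.0                    # well-watered and non-transpiring baselines (assumed)
cwsi = (t_canopy - t_wet) / (t_dry - t_wet)  # Crop Water Stress Index, 0 (no stress) to 1

for i, (v, s) in enumerate(zip(ndvi, cwsi)):
    print(f"pixel {i}: NDVI = {v:.2f}, CWSI = {s:.2f}")
```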

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 51
178 Management of Non-Revenue Municipal Water

Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu

Abstract:

The problem of non-revenue water (NRW) in municipal water distribution networks is common in many countries such as Turkey, where the average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) real or physical water losses, and 2) apparent or commercial water losses. Total water losses in the city of Antalya, Turkey, are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen for the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) with different numbers of service connections, ranging from a few connections to fewer than 3000. The flow rate and water pressure to each DMA were continuously measured on-line by an accurate flow meter and a water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users, and the monthly water consumption given by these meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters in one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area to predict water pressure variations in each DMA; the data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values. Therefore, pressure reducing valves (PRVs) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure in the other DMAs could not be reduced while complying with the minimum pressure requirement (3 bars) stated by the related standards. Results: Physical water losses were reduced considerably as a result of reducing water pressure alone. Further reduction of physical water losses was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs with considerable physical losses. Many bursts were detected, especially in the DMAs with high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and for checking the quality of repairs. Regarding apparent water loss reduction, changing the customer water meters increased water revenue by more than 20%. Conclusions: DMAs, SCADA, modelling, pressure management, leakage detection, and accurate customer water meters are efficient tools for NRW reduction.
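The standard IWA top-down balance referred to above can be sketched as a simple calculation: non-revenue water is the system input minus billed authorised consumption, and water losses split into apparent and real components. The function below is a minimal sketch using hypothetical volumes for a single DMA, not the project's actual accounting.

```python
# Simplified sketch of the standard IWA top-down water balance used for each DMA:
# non-revenue water is system input minus billed authorised consumption, and water losses
# are split into apparent and real components. The volumes below are hypothetical.
def iwa_water_balance(system_input, billed_authorised, unbilled_authorised, apparent_losses):
    """Return NRW and the loss components (all volumes in m³ per period)."""
    nrw = system_input - billed_authorised
    water_losses = system_input - (billed_authorised + unbilled_authorised)
    real_losses = water_losses - apparent_losses      # physical leakage, obtained by difference
    return {
        "NRW_m3": nrw,
        "NRW_percent": 100.0 * nrw / system_input,
        "water_losses_m3": water_losses,
        "apparent_losses_m3": apparent_losses,
        "real_losses_m3": real_losses,
    }

# Example DMA with hypothetical monthly volumes
print(iwa_water_balance(system_input=120_000, billed_authorised=66_000,
                        unbilled_authorised=2_000, apparent_losses=9_000))
```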

Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks

Procedia PDF Downloads 372