Search results for: automated drift detection and adaptation
953 Cooling of Exhaust Gases Emitted Into the Atmosphere as the Possibility to Reduce the Helicopter Radiation Emission Level
Authors: Mateusz Paszko, Mirosław Wendeker, Adam Majczak
Abstract:
Every material body whose temperature is above 0 K (absolute zero) emits infrared radiation to its surroundings. Infrared radiation is highly significant in military aviation, especially in military applications of helicopters. Helicopters, in comparison with other aircraft, have much lower flight speeds and maneuverability, which makes them easy targets for current combat assets such as infrared-guided missiles. When designing new helicopter types, especially for combat applications, it is essential to pay close attention to the infrared emissions of the solid parts composing the helicopter’s structure, as well as to the exhaust gases leaving the engine’s exhaust system. Due to their high temperature, exhaust gases released to the surroundings are a major source of infrared radiation emission and, in consequence, of the detectability of a helicopter performing air combat operations. Protection of a helicopter in flight from early detection, tracking and, finally, destruction can be realized in many ways. This paper presents an analysis of the possibilities of decreasing the level of infrared radiation emitted to the environment by a helicopter in flight by cooling the exhaust in special ejector-based coolers. The paper also presents a concept 3D model and the results of a numerical analysis of the ejector-based cooler’s cooperation with the PA-10W turbine engine. The numerical analysis showed promising results in decreasing the infrared emission level of the PA W-3 helicopter in flight.
Keywords: exhaust cooler, helicopter propulsion, infrared radiation, stealth
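The cooling benefit described in the abstract can be roughly quantified with the Stefan-Boltzmann law; a minimal sketch, where the exhaust temperatures and emissivity are illustrative assumptions, not values reported by the paper:

```python
# Stefan-Boltzmann estimate of how cooling exhaust gases reduces total
# radiated power per unit area: P = epsilon * sigma * T^4.
# Temperatures and emissivity below are illustrative assumptions only.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def radiated_power(temp_k: float, emissivity: float = 0.9) -> float:
    """Total radiant exitance (W/m^2) of a grey body at temp_k."""
    return emissivity * SIGMA * temp_k ** 4


hot = radiated_power(900.0)   # uncooled exhaust plume, assumed 900 K
cool = radiated_power(600.0)  # after ejector cooling, assumed 600 K
reduction = 1.0 - cool / hot
print(f"radiated power cut by {reduction:.0%}")  # ~80% for a 300 K drop
```

Because total emission scales with the fourth power of temperature, even a moderate cooling of the plume cuts the radiated power disproportionately, which is why ejector coolers are attractive for infrared signature reduction.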
Procedia PDF Downloads 344
952 Evaluation of DNA Oxidation and Chemical DNA Damage Using Electrochemiluminescent Enzyme/DNA Microfluidic Array
Authors: Itti Bist, Snehasis Bhakta, Di Jiang, Tia E. Keyes, Aaron Martin, Robert J. Forster, James F. Rusling
Abstract:
DNA damage from metabolites of lipophilic drugs and pollutants, generated by enzymes, represents a major toxicity pathway in humans. These metabolites can react with DNA to form either 8-oxo-7,8-dihydro-2-deoxyguanosine (8-oxodG), the oxidative product of DNA, or covalent DNA adducts, both of which are genotoxic and hence considered important biomarkers for detecting cancer in humans. Therefore, detecting reactions of metabolites with DNA is an effective approach for the safety assessment of new chemicals and drugs. Here we describe a novel electrochemiluminescent (ECL) sensor array which can detect DNA oxidation and chemical DNA damage in a single array, facilitating a more accurate diagnostic tool for genotoxicity screening. DNA and enzymes are assembled layer by layer on a pyrolytic graphite array, which is housed in a microfluidic device for sequential detection of the two types of DNA damage. Multiple enzyme reactions are run on test compounds using the array, generating toxic metabolites in situ. These metabolites react with DNA in the films to cause DNA oxidation and chemical DNA damage, which are detected by an ECL-generating osmium compound and a ruthenium polymer, respectively. The method is further validated by the formation of 8-oxodG and DNA adducts using similar DNA/enzyme films on magnetic bead biocolloid reactors, hydrolyzing the DNA, and analyzing by liquid chromatography-mass spectrometry (LC-MS). Hence, this combined DNA/enzyme array/LC-MS approach can efficiently explore metabolic genotoxic pathways for drugs and environmental chemicals.
Keywords: biosensor, electrochemiluminescence, DNA damage, microfluidic array
Procedia PDF Downloads 365
951 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh
Authors: Husain Rajib, K. S. Kishor, D. G. Jewel
Abstract:
Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group; it can lead to blindness, but early detection of visual impairment can reduce the problem, with positive effects on education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, patients must exhibit good compliance with spectacle wear. Patients’ attitudes toward and perceptions of glasses and eye health can affect compliance. Material and method: A prospective, community-based, cross-sectional study was designed to evaluate knowledge, attitudes and practices regarding refractive errors and eye health among primary school children. Result: Among 140 respondents, 72 were male and 68 were female. We found that 50 children were myopic (26 male, 24 female), 27 were hyperopic (14 male, 13 female), and 63 were astigmatic (32 male, 31 female). The level of knowledge and attitude was satisfactory. The attitude of the students, teachers and parents was cooperative, which facilitated cycloplegic refraction. Practice was not satisfactory due to social stigma and an information gap. Conclusion: Knowledge of refractive error supports acceptance of glasses for the correction of uncorrected refractive error. Public awareness programs such as vision screening programs, eye camps, and teacher training programs are beneficial for prescribing and wearing spectacles.
Keywords: refractive error, stigma, knowledge, attitude, practice
Procedia PDF Downloads 259
950 Reinventing Smart Tourism via Use of Smart Gamified and Gaming Applications in Greece
Authors: Sofia Maria Poulimenou, Ioannis Deliyannis, Elisavet Filippidou, Stamatella Laboura
Abstract:
Smart technologies are being actively used to improve the travel experience and to promote or demote a destination’s reputation via a wide variety of social media applications and platforms. This paper conceptualises the design and deployment of smart management apps to promote culture, sustainability and accessibility within two destinations in Greece that represent the extremes of visiting scale. One is densely visited Corfu, a UNESCO World Heritage Site. The problems caused by the lack of organisation of the visiting experience and infrastructure affect all parties interacting within the site: visitors, citizens, and the public and private sectors. The second is Kilkis, a low-tourism destination with high seasonality and mostly inbound tourism. Here the issue is that traditional approaches to informing and motivating locals and visitors to explore and taste the culture have not flourished. The problem is addressed via the design and development of two systems named “Hologrammatic Corfu” for Corfu old town and “BRENDA” for the area of Kilkis. Although each system is designed independently and features different solutions, both approaches were designed by the same team using a novel gaming and gamification methodology. The “Hologrammatic Corfu” application has been designed for exploration of the site, covering user requirements before, during and after the trip, with the use of transmedia content such as photos, 360-degree videos, augmented reality and hologrammatic videos. A statistical analysis of travellers’ visits to specific points of interest is also actively utilized, enabling visitors to be dynamically re-routed during their visit, safeguarding sustainability, accessibility and inclusivity along the entire tourism cycle. “BRENDA” is designed specifically to promote gastronomic and historical tourism.
This serious game implements and combines gaming and gamification elements in order to connect local businesses with cultural points of interest. As the project environment has a strong touristic orientation, “BRENDA” supports food-related gamified processes and historical games involving the active participation of both local communities (content providers) and visitors (players), which are more likely to be successfully performed in the informal environment of travelling and to promote sustainable tourism experiences. Finally, the paper presents the ability to re-use existing gaming components within new areas of interest via minimal adaptation, and the use of transmedia aspects that enables destinations to be rebranded into smart destinations.
Keywords: smart tourism, gamification, user experience, transmedia content
Procedia PDF Downloads 173
949 Microwave Dielectric Properties and Microstructures of Nd(Ti₀.₅W₀.₅)O₄ Ceramics for Application in Wireless Gas Sensors
Authors: Yih-Chien Chen, Yue-Xuan Du, Min-Zhe Weng
Abstract:
Carbon monoxide is a substance produced by incomplete combustion. It is toxic even at concentrations of less than 100 ppm. Since it is colorless and odorless, it is difficult to detect. CO sensors have been developed using a variety of physical mechanisms, including semiconductor oxides, solid electrolytes, and organic semiconductors. Many works have focused on semiconducting sensors composed of sensitive layers, such as ZnO, TiO₂, and NiO, with high sensitivity for gases. However, because these sensors operate at high temperatures, their power consumption is high. On the other hand, the dielectric resonator (DR) is attractive for gas detection due to its large surface area and sensitivity to the external environment. Materials to be employed in sensing devices must have a high quality factor. Numerous studies have explored the fergusonite-type structure and related ceramic systems. Extensive research into RENbO₄ ceramics has examined their potential application in resonators, filters, and antennas in modern communication systems operated at microwave frequencies. The Nd(Ti₀.₅W₀.₅)O₄ ceramics in this work were synthesized using the conventional solid-state mixed-oxide method. Dielectric constants (εᵣ) of 15.4-19.4 and quality factors (Q×f) of 3,600-11,100 GHz were obtained at sintering temperatures in the range 1425-1525°C for 4 h. The dielectric properties of the Nd(Ti₀.₅W₀.₅)O₄ ceramics at microwave frequencies were found to vary with the sintering temperature. For a further understanding of these microwave dielectric properties, the ceramics were analyzed by densification, X-ray diffraction (XRD), and microstructural observation.
Keywords: dielectric constant, dielectric resonators, sensors, quality factor
Procedia PDF Downloads 259
948 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve the positive energy balances requested for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionality of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainability in the use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, avoidance of summer overheating, and use of passive solar gains in wintertime. Today, three-dimensional geometric information for outdoor spaces and at the building level is available for planning through Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and make it usable for calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file and includes the object itself, the neighboring buildings, and possible remote shading. The tool uses a ray tracing method to determine possible glare from solar reflections off neighboring buildings as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account.
This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated into the venetian blind, as well as daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to improve coordination between the BIM model and the coupled building simulation, linking the resulting shading and daylighting system with the artificial lighting system and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested in an office living lab at the HELLA company.
Keywords: BIPV, building simulation, optimized control strategy, planning tool
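The shadow-ray test described in the abstract can be illustrated with a minimal axis-aligned-box intersection check; the geometry, values, and function names here are purely illustrative and are not part of HELLA DECART:

```python
# Minimal sketch of a shadow-ray test: a window point is shadowed if the
# ray towards the sun hits a neighbouring building, modelled here as an
# axis-aligned bounding box (slab method). Illustrative geometry only.
from typing import Tuple

Vec = Tuple[float, float, float]


def ray_hits_box(origin: Vec, direction: Vec,
                 box_min: Vec, box_max: Vec) -> bool:
    """Slab-method ray/AABB intersection for the forward ray (t >= 0)."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:      # ray parallel to and outside the slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far


# window point near the ground; neighbouring building 10-20 m away
sun_dir = (1.0, 0.0, 0.5)
neighbour = ((10.0, -5.0, 0.0), (20.0, 5.0, 30.0))
print("shadowed:", ray_hits_box((0.0, 0.0, 2.0), sun_dir, *neighbour))
```

Repeating such a test per window for many sun positions over a year, weighted by weather data, yields the kind of annual per-window sunlight estimate the abstract describes.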
Procedia PDF Downloads 108
947 Innovative In-Service Training Approach to Strengthen Health Care Human Resources and Scale-Up Detection of Mycobacterium tuberculosis
Authors: Tsegahun Manyazewal, Francesco Marinucci, Getachew Belay, Abraham Tesfaye, Gonfa Ayana, Amaha Kebede, Yewondwossen Tadesse, Susan Lehman, Zelalem Temesgen
Abstract:
In-service health trainings in Sub-Saharan Africa are mostly content-centered, with a high degree of disconnection from real practice in the facility. This study intended to evaluate an in-service training approach aimed at strengthening health care human resources. A combined web-based and face-to-face training was designed and piloted in Ethiopia around the diagnosis of tuberculosis. During the first part, which lasted 43 days, trainees accessed and read web-based material without leaving their work, while the second part comprised a one-day hands-on evaluation. Trainees’ competency was measured using multiple-choice questions, written assignments, exercises, and hands-on evaluation. Of 108 participants invited, 81 (75%) attended the course and 71 (88%) of them successfully completed it. Of those who completed, 73 (90%) scored a grade from A to C. The approach was effective in transferring knowledge and turning it into practical skills. In-service health training should transform from a passive one-time event into a continuous behavioral change of participants and improvement of their actual work.
Keywords: Ethiopia, health care, Mycobacterium tuberculosis, training
Procedia PDF Downloads 501
946 UEMG-FHR Coupling Analysis in Pregnancies Complicated by Pre-Eclampsia and Small for Gestational Age
Authors: Kun Chen, Yan Wang, Yangyu Zhao, Shufang Li, Lian Chen, Xiaoyue Guo, Jue Zhang, Jing Fang
Abstract:
The coupling strength between uterine electromyography (UEMG) and fetal heart rate (FHR) signals during the peripartum period reflects fetal biophysical activity. Therefore, UEMG-FHR coupling characterization is instructive in assessing placental function. This study introduced a physiological marker named elevated frequency of UEMG-FHR coupling (E-UFC) and explored its predictive value for pregnancies complicated by pre-eclampsia and small for gestational age (SGA). Placental insufficiency patients (n=12) and healthy volunteers (n=24) were recruited and participated. UEMG and FHR were recorded non-invasively by a trans-abdominal device in women with singleton pregnancies (32-37 weeks) from 10:00 pm to 8:00 am. The product of the wavelet coherence and the wavelet cross-spectral power between UEMG and FHR was used to weight these two effects in order to quantify the degree of UEMG-FHR coupling. E-UFC was extracted from the resultant spectrogram by calculating the mean value over the high-coherence (r > 0.5) frequency band. Results showed that high coherence between UEMG and FHR was observed in the frequency band 1/512-1/16 Hz. In addition, E-UFC in placental insufficiency patients was weaker than in healthy controls (p < 0.001) at the group level. These findings suggest that the proposed approach can be used to quantitatively characterize fetal biophysical activity, which is beneficial for the early detection of placental insufficiency and reduces the occurrence of adverse pregnancy outcomes.
Keywords: uterine electromyography, fetal heart rate, coupling analysis, wavelet analysis
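The weighting-and-averaging idea behind E-UFC can be sketched in a simplified form. The paper uses wavelet coherence and wavelet cross-spectral power; this sketch substitutes ordinary Welch spectral coherence and cross-spectral density on synthetic signals, keeping the same structure (coherence × cross-power, averaged over the high-coherence band). All signal parameters are illustrative assumptions:

```python
# Simplified sketch of a coupling marker: weight the coherence between
# two signals by their cross-spectral power, then average over the band
# where coherence exceeds 0.5. Ordinary spectral coherence stands in for
# the paper's wavelet coherence; the signals are synthetic.
import numpy as np
from scipy.signal import coherence, csd

fs = 4.0                                  # Hz, illustrative sampling rate
t = np.arange(0, 4096) / fs
common = np.sin(2 * np.pi * 0.0625 * t)   # shared slow oscillation
rng = np.random.default_rng(0)
uemg = common + 0.3 * rng.standard_normal(t.size)
fhr = common + 0.3 * rng.standard_normal(t.size)

f, coh = coherence(uemg, fhr, fs=fs, nperseg=512)
_, pxy = csd(uemg, fhr, fs=fs, nperseg=512)
weighted = coh * np.abs(pxy)              # coherence x cross-spectral power

band = coh > 0.5                          # high-coherence band only
e_ufc = weighted[band].mean() if band.any() else 0.0
print(f"E-UFC (arbitrary units): {e_ufc:.3f}")
```

Because the two synthetic signals share a common oscillation, the coherence peaks well above 0.5 at that frequency and the marker comes out positive; uncorrelated noise bands are excluded by the threshold.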
Procedia PDF Downloads 200
945 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact of ornamental stones (such as marbles and granites) is widely debated. Starting from the industrial revolution, continuous improvements in machinery led to a higher exploitation of this natural resource and to more international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets on the specific technologies employed in the stone production chain. For example, databases do not contain information about diamond wires, chains, or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data were collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss).
In particular, data about energy, materials, and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then processed with appropriate software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the compilation of accurate Life Cycle Inventory data aims at making available to researchers and stone experts ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector.
Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact
Procedia PDF Downloads 231
944 A Deep Learning Approach to Online Social Network Account Compromisation
Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang
Abstract:
The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trust relationship established between the account owners, their friends, and the spammers. Another challenge is the increased human interaction required for feature selection. The available research on supervised (machine) learning has limitations with feature selection and with accounts that cannot be profiled, such as application programming interfaces (APIs). Therefore, this paper discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results obtained indicate that our proposed approach outperformed most of the traditional schemes in OSN compromised account detection, with an accuracy rate of 99.86%.
Keywords: computer security, network security, online social network, account compromisation
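The unsupervised feature-learning idea can be sketched with a toy auto-encoder. This is a single-hidden-layer NumPy toy trained on synthetic low-rank data, not the paper's OPT_NDAE architecture; layer sizes, learning rate, and data are illustrative assumptions:

```python
# Toy single-hidden-layer auto-encoder trained with full-batch gradient
# descent on synthetic low-rank data. Illustrates unsupervised feature
# learning only; it is NOT the paper's OPT_NDAE architecture.
import numpy as np

rng = np.random.default_rng(42)
# 256 samples of 20-dim data lying on an 8-dim subspace
X = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 20))

n_hidden, lr = 8, 0.005
W1 = 0.1 * rng.standard_normal((X.shape[1], n_hidden))  # encoder weights
W2 = 0.1 * rng.standard_normal((n_hidden, X.shape[1]))  # decoder weights


def reconstruction_mse(W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))


err_before = reconstruction_mse(W1, W2)
for _ in range(1000):
    H = np.tanh(X @ W1)                        # encode: compressed features
    G = 2.0 * (H @ W2 - X) / len(X)            # error signal at the output
    gW2 = H.T @ G                              # decoder gradient
    gW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))  # back-prop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2
err_after = reconstruction_mse(W1, W2)
print(f"reconstruction MSE: {err_before:.3f} -> {err_after:.3f}")
```

After training, the hidden activations H serve as learned features for a downstream classifier, which is the role the auto-encoder plays in the detection pipeline described above.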
Procedia PDF Downloads 115
943 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques
Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola
Abstract:
Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation, due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help quantify the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by benefiting from the spectral response contrast of intertidal classes in the visible, near- and mid-infrared bands. Tidal areas are highly reflective in visible bands, mainly because of the presence of fine sand deposits. However, obtaining cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high revisit frequency over intertidal zones. Multi-polarization SAR parameters have been used successfully in mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters, and these can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both coherence and backscatter. For instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface changes.
In this research, we present the primary results of our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad-pol (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low-tide conditions were selected and used to build an interferometric stack of data. The observed displacements along the line of sight generated using HH and VV polarizations are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2
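The centimetre-level displacements mentioned above follow from the standard DInSAR relation between interferometric phase change and line-of-sight motion, d = -(λ/4π)·Δφ; a short sketch using the C-band wavelength of Radarsat-2 (≈5.55 cm at 5.405 GHz):

```python
# Convert a differential interferometric phase change into a
# line-of-sight displacement: d = -(lambda / (4*pi)) * delta_phi.
# One full 2*pi fringe corresponds to lambda/2 of motion.
import math

WAVELENGTH_M = 0.0555  # Radarsat-2 C-band (~5.405 GHz)


def los_displacement(delta_phi_rad: float) -> float:
    """Line-of-sight displacement (m) for an interferometric phase change."""
    return -(WAVELENGTH_M / (4 * math.pi)) * delta_phi_rad


# one full fringe equals half a wavelength of motion towards the sensor
print(f"{abs(los_displacement(2 * math.pi)) * 100} cm per fringe")
```

With a half-wavelength per fringe of roughly 2.8 cm, the several-centimetre tide-loading displacements cited in the abstract span one or more fringes and are therefore well within DInSAR's measurement range.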
Procedia PDF Downloads 136
942 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure
Authors: Y. L. Hor, H. S. Chu, V. P. Bui
Abstract:
Composite materials are widely used to replace conventional materials, especially in the aerospace industry, to reduce the weight of devices. They are formed by combining reinforced materials via adhesive bonding to produce a bulk material with altered macroscopic properties. In bulk composites, degradation may occur at the microscopic scale, in each individual reinforced fiber layer or especially in the matrix layer, in forms such as delamination, inclusions, disbonds, voids, cracks, and porosity. In this paper, we focus on the detection of defects in the matrix layer, where the adhesion between the composite plies is in contact but coupled through a weak bond. In fact, adhesive defects are tested with various nondestructive methods. Among them, pulsed phase thermography (PPT) has shown advantages, providing improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the nondestructive inspection of weak bonding in composite materials. The resulting thermal evolution field comprises internal reflections between the interfaces of defects and the specimen, and the important key features of the defects present in the material can be obtained by investigating the thermal evolution of the field distribution. Computational simulation of such inspections has allowed the techniques to be improved and applied to various inspections, such as materials with high thermal conductivity and more complex structures.
Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization
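The core computation in pulsed phase thermography, taking a per-pixel FFT of the cooling sequence and keeping the phase of the low-frequency bins, can be sketched as follows. The decay constants, frame rate, and image size are synthetic, illustrative values:

```python
# Pulsed phase thermography sketch: FFT each pixel's temperature-vs-time
# decay and keep the phase at a low frequency (a "phasegram"). Subsurface
# defects show up as local phase contrast. Data here are synthetic.
import numpy as np

frames, h, w = 128, 16, 16           # thermal sequence: time x height x width
t = np.arange(frames) / 25.0         # 25 Hz frame rate, illustrative
seq = np.exp(-t[:, None, None] / 0.8) * np.ones((frames, h, w))
# slower cooling over a buried "defect" patch
seq[:, 8:12, 8:12] = np.exp(-t[:, None, None] / 1.2)

spectrum = np.fft.rfft(seq, axis=0)  # per-pixel FFT along the time axis
phasegram = np.angle(spectrum[1])    # phase image at the first nonzero bin

contrast = abs(phasegram[10, 10] - phasegram[0, 0])
print(f"phase contrast, defect vs sound area: {contrast:.4f} rad")
```

Because the defect pixel cools with a different time constant, its spectral phase differs from the sound area, which is exactly the contrast mechanism PPT exploits; phase images are also less sensitive to uneven heating than raw thermograms.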
Procedia PDF Downloads 169
941 Acute Neurophysiological Responses to Resistance Training; Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding the neural responses to an acute resistance training session is pivotal for the prescription of frequency, intensity, and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength, and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure the excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 mins and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed at post, 10, 20, 30 min, 1 and 2 hrs for both training conditions compared to control for force (p < .05), maximal compound wave (p < .005), and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups was near significance, with a large effect (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the 2 hrs post training, returning to homeostasis by 6 hrs.
The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than in previous super compensation models. The strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.
Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
Procedia PDF Downloads 283
940 Surface Characterization of Zincblende and Wurtzite Semiconductors Using Nonlinear Optics
Authors: Hendradi Hardhienata, Tony Sumaryada, Sri Setyaningsih
Abstract:
Current progress in the field of nonlinear optics has enabled precise surface characterization of semiconductor materials. Nonlinear optical techniques are favorable due to their nondestructive measurement and their ability to work under non-vacuum and ambient conditions. Advances in bond hyperpolarizability models open up a wide range of nanoscale surface investigations, including the possibility to detect molecular orientation at the surface of silicon and zincblende semiconductors, investigate electric-field-induced second harmonic fields at the semiconductor interface, detect surface impurities, and, very recently, study surface defects such as twin boundaries in wurtzite semiconductors. In this work, we show, using nonlinear optical techniques such as nonlinear bond models, how arbitrary polarization of the incoming electric field in rotational anisotropy spectroscopy experiments can provide more information regarding the origin of the nonlinear sources in zincblende and wurtzite semiconductor structures. In addition, using hyperpolarizability considerations, we describe how the nonlinear susceptibility tensor describing SHG can be modelled well with only a few parameters because of the symmetry of the bonds. We also show how the third harmonic intensity changes considerably when the incoming field polarization angle is changed from s-polarized to p-polarized. Finally, we propose a method to investigate surface reconstruction and defects in wurtzite and zincblende structures at the nanoscale.
Keywords: surface characterization, bond model, rotational anisotropy spectroscopy, effective hyperpolarizability
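The symmetry argument above can be made concrete for the zincblende bulk, where the only independent second-order susceptibility element is χ_xyz (and its index permutations). A small sketch of how the SHG polarization P_i = χ_ijk E_j E_k then varies as a unit field rotates in the crystal xy plane; the tensor value is in arbitrary units and this is a textbook-style illustration, not the paper's full bond-model calculation:

```python
# Second-harmonic polarization P_i = chi_ijk * E_j * E_k for a zincblende
# crystal, where only chi_xyz and its permutations are nonzero. Rotating
# an in-plane field E = (cos a, sin a, 0) gives P_z proportional to
# sin(2a): the four-lobed rotational anisotropy pattern.
import numpy as np

chi = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 0, 2),
                (1, 2, 0), (2, 0, 1), (2, 1, 0)]:
    chi[i, j, k] = 1.0               # chi_xyz = 1, arbitrary units


def shg_polarization(angle_rad: float) -> np.ndarray:
    """P(2w) for a unit field rotated by angle_rad in the xy plane."""
    E = np.array([np.cos(angle_rad), np.sin(angle_rad), 0.0])
    return np.einsum("ijk,j,k->i", chi, E, E)


for a in np.linspace(0, np.pi, 5):
    print(f"angle {np.degrees(a):5.1f} deg -> P_z = {shg_polarization(a)[2]:+.3f}")
```

Here P_z = 2 χ_xyz cos(a) sin(a) = χ_xyz sin(2a), so the detected intensity |P_z|² shows the characteristic four-fold rotational anisotropy that only a handful of tensor parameters need to describe.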
Procedia PDF Downloads 155
939 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of gestation or close follow-up should a cardiac anomaly be proved. Fetal heart detection plays an important part in the evaluation of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to further quantify valve regurgitation and ventricular function, to determine the degree of cardiovascular compromise in fetal conditions at risk of hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. A Doppler ultrasound (DUS) image is a kind of real-time image that images blood vessels and soft tissues well. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set corresponding to numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D scan, an online 4D scan (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or spatiotemporal image correlation (STIC; off-line 4D, a circular volume acquisition). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development.
Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the mortal fetus, the lack of conventional cardiac gating styles to attend data accession, and the implicit corruption of MRI data due to motherly respiration and unpredictable fetal movements. Fetal cardiac MRI has the implicit to complement ultrasound in detecting cardiovascular deformations and extracardiac lesions. Fetal cardiac intervention (FCI), minimally invasive catheter interventions, is a new and evolving fashion that allows for in-utero treatment of a subset of severe forms of congenital heart deficiency. In special cases, it may be possible to modify the natural history of congenital heart disorders. It's entirely possible that future generations will ‘repair’ congenital heart deficiency in utero using nanotechnologies or remote computer-guided micro-robots that work in the cellular layer.Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 37938 Report of Candida Auris: An Emerging Fungal Pathogen in a Tertiary Healthcare Facility in Ekiti State, Nigeria
Authors: David Oluwole Moses, Odeyemi Adebowale Toba, Olawale Adetunji Kola
Abstract:
Candida auris, an emerging fungus, has been reported in more than 30 countries around the world since its first detection in 2009. Due to its several virulence factors, resistance to antifungals, and persistence in hospital settings, Candida auris has been reported to cause treatment-failure infections. This study was therefore carried out to determine the incidence of Candida auris in a tertiary hospital in Ekiti State, Nigeria. A total of 115 samples were screened for Candida species using cultural and molecular methods. The carriage of virulence factors and antifungal resistance among the C. auris isolates was detected using standard microbiological methods. Candida species were isolated from 15 (30.0%) of the clinical samples and 22 (33.85%) of the hospital equipment screened. Non-albicans Candida accounted for 3 (20%) and 8 (36.36%) of the isolates from the clinical samples and equipment, respectively. Only five of the non-albicans Candida isolates were C. auris. All the isolates produced biofilm, gelatinase, and hemolysin, while none produced germ tubes. Two of the isolates were resistant to all the antifungals tested, and all the isolates were resistant to fluconazole and itraconazole. Nystatin appeared to be the most effective of the tested antifungals. The isolation of Candida auris is being reported for the second time in Nigeria, further confirming that the fungus has spread beyond Lagos and Ibadan, where it was first reported. The extent of the spread of this nosocomial fungus needs to be further investigated and curtailed in Nigeria before it causes outbreaks in healthcare facilities.Keywords: candida auris, virulence factors, antifungals, pathogen, hospital, infection
Procedia PDF Downloads 43937 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions
Authors: Peter Carter
Abstract:
Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back for many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not indicate future committed change, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are changing adversely, most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. 
As an application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate change policy. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, whose objective is the “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”Keywords: climate change, climate change indicators, climate change trends, climate system change interactions
Procedia PDF Downloads 100936 Rapid and Efficient Removal of Lead from Water Using Chitosan/Magnetite Nanoparticles
Authors: Othman M. Hakami, Abdul Jabbar Al-Rajab
Abstract:
Occurrence of heavy metals in water resources has increased in recent years, albeit at low concentrations. Lead (Pb(II)) is among the most important inorganic pollutants in ground and surface water, and efficient removal of this toxic metal from water is of public and scientific concern. In this study, we developed a rapid and efficient method for the removal of lead from water using chitosan/magnetite nanoparticles. A simple and effective process was used to prepare chitosan/magnetite nanoparticles (CS/Mag NPs) with a high saturation magnetization value; the particles were strongly responsive to an external magnetic field, making separation from solution possible in less than 2 minutes using a permanent magnet, and the total Fe in solution was below the detection limit of ICP-OES (<0.19 mg L-1). The hydrodynamic particle size distribution increased from an average diameter of ~60 nm for Fe3O4 NPs to ~75 nm after chitosan coating. The feasibility of the prepared NPs for the adsorption and desorption of Pb(II) from water was evaluated; CS/Mag NPs showed a high removal efficiency for Pb(II) uptake, with 90% of Pb(II) removed during the first 5 minutes and equilibrium reached in less than 10 minutes. The maximum adsorption capacity for Pb(II), obtained at pH 6.0 and room temperature, was as high as 85.5 mg g-1 according to the Langmuir isotherm model. Desorption of Pb adsorbed on CS/Mag NPs was evaluated using deionized water at pH values ranging from 1 to 7, which was an effective eluent and did not result in the destruction of the NPs; they could subsequently be reused without any loss of activity in further adsorption tests. Overall, our results show the high efficiency of chitosan/magnetite NPs for lead removal from water under controlled conditions; further studies should be carried out under real field conditions.Keywords: chitosan, magnetite, water, treatment
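As an aside, the Langmuir fit quoted above can be sketched in a few lines. This is an illustrative calculation only: the maximum capacity q_max = 85.5 mg g-1 comes from the abstract, while the Langmuir constant K is an assumed placeholder value.

```python
def langmuir_q(c, q_max, k):
    """Langmuir isotherm: adsorbed amount q (mg/g) at equilibrium
    concentration c (mg/L), q = q_max * K * c / (1 + K * c)."""
    return q_max * k * c / (1.0 + k * c)

# q_max is taken from the abstract; K = 0.5 L/mg is an assumed placeholder.
Q_MAX = 85.5
K = 0.5

q_low = langmuir_q(1.0, Q_MAX, K)      # uptake at a low concentration
q_high = langmuir_q(1000.0, Q_MAX, K)  # uptake approaching saturation
```

At high equilibrium concentrations the predicted uptake saturates just below q_max, which is the defining behaviour of the Langmuir model.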
Procedia PDF Downloads 401935 Light-Weight Network for Real-Time Pose Estimation
Authors: Jianghao Hu, Hongyu Wang
Abstract:
An effective and efficient human pose estimation algorithm is an important requirement for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key-point detection algorithm, Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses a feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. Meanwhile, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to provide intermediate supervision and continuously refine the results. In the last step, the key-point coordinates predicted at the highest resolution are used as the final output of the network. For key points that are difficult to predict, LWPE adopts an online hard key-point mining strategy. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model contains only 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone
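The parameter saving that depthwise separable convolutions provide, which the abstract relies on to reach a 3.9M-parameter model, can be sketched by counting weights. The layer sizes below are illustrative assumptions, not values taken from LWPE.

```python
def standard_conv_params(c_in, c_out, k):
    # A k x k standard convolution: every output channel filters all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k filter per input channel, then a 1x1 pointwise projection.
    return c_in * k * k + c_in * c_out

# Example layer: 128 -> 256 channels with a 3x3 kernel (sizes chosen for illustration).
std = standard_conv_params(128, 256, 3)
sep = depthwise_separable_params(128, 256, 3)
ratio = std / sep  # roughly an 8-9x parameter reduction for this layer
```

The reduction factor grows with the number of output channels and the kernel area, which is why the technique is a standard choice for mobile-oriented backbones.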
Procedia PDF Downloads 152934 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one closely tied to their lexical and syntactic organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD, an extension of eigendecomposition to non-square matrices, is used to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. T2FHMMs are therefore able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and achieve better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model
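The SVD feature extraction step described above can be sketched as follows. The image size, the number of retained singular values, and the L1 normalization are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def svd_feature_vector(img, n_vals=8):
    """Reduce a gesture image (2-D array) to its top singular values,
    L1-normalized so the feature is insensitive to overall intensity scale."""
    s = np.linalg.svd(np.asarray(img, dtype=float), compute_uv=False)
    s = s[:n_vals]          # singular values are returned in descending order
    return s / s.sum()

rng = np.random.default_rng(0)
img = rng.random((32, 24))          # stand-in for a segmented hand image
feat = svd_feature_vector(img)      # 8-dimensional observable for the HMM
```

A vector like `feat` would then serve as the observable fed to the (fuzzy) HMM at each frame.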
Procedia PDF Downloads 460933 Using Predictive Analytics to Identify First-Year Engineering Students at Risk of Failing
Authors: Beng Yew Low, Cher Liang Cha, Cheng Yong Teoh
Abstract:
Due to a lack of continual assessment or grade-related data, identifying first-year engineering students in a polytechnic education at risk of failing is challenging. Our experience over the years tells us that there is no strong correlation between having good entry grades in Mathematics and the Sciences and excelling in hardcore engineering subjects. Hence, identifying students at risk of failure cannot be done on the basis of entry grades in Mathematics and the Sciences alone. These factors compound the difficulty of early identification and intervention. This paper describes the development of a predictive analytics model for the early detection of students at risk of failing and evaluates its effectiveness. Data from continual assessments conducted in term one, supplemented by data on student psychological profiles such as interests and study habits, were used. Three classification techniques, namely Logistic Regression, K Nearest Neighbour, and Random Forest, were used in our predictive model. Based on our findings, Random Forest was determined to be the strongest predictor, with an Area Under the Curve (AUC) value of 0.994. Correspondingly, its Accuracy, Precision, Recall, and F-Score were also the highest among the three classifiers. Using this Random Forest classification technique, students at risk of failure could be identified at the end of term one and assigned to a Learning Support Programme at the beginning of term two. This paper presents these findings and proposes further improvements that can be made to the model.Keywords: continual assessment, predictive analytics, random forest, student psychological profile
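The AUC figure quoted above has a simple rank interpretation that can be computed directly: the probability that a randomly chosen at-risk student receives a higher risk score than a randomly chosen not-at-risk student. The sketch below uses toy scores, not the paper's data or its Random Forest pipeline.

```python
def auc(labels, scores):
    """AUC as the probability that a positive (at-risk) example is scored
    higher than a negative one; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores (hypothetical, for illustration only)
y = [1, 1, 0, 0, 0]
s = [0.9, 0.7, 0.8, 0.3, 0.1]
score = auc(y, s)   # one misranked pair out of six -> 5/6
```

An AUC of 0.994 therefore means the classifier ranks an at-risk student above a not-at-risk student in 99.4% of such pairs.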
Procedia PDF Downloads 132932 Grating Assisted Surface Plasmon Resonance Sensor for Monitoring of Hazardous Toxic Chemicals and Gases in an Underground Mines
Authors: Sanjeev Kumar Raghuwanshi, Yadvendra Singh
Abstract:
The objective of this paper is to develop and optimize a Fiber Bragg Grating (FBG) based Surface Plasmon Resonance (SPR) sensor for monitoring hazardous toxic chemicals and gases in underground mines or any industrial area. A fully cladded, telecommunication-standard FBG is proposed to produce surface plasmon resonance. A thin gold/silver film a few nm thick (subject to optimization) is proposed to be applied over the FBG sensing head using the e-beam deposition method. Sensitivity enhancement of the sensor will be achieved by adding a composite nanostructured Graphene Oxide (GO) sensing layer using the spin coating method. Both sensor configurations are expected to demonstrate high responsiveness towards changes in the resonance wavelength. The GO-enhanced sensor may show a many-fold increase in sensitivity compared to the gold-coated traditional fibre optic sensor. Our work is focused on optimizing the GO multilayer structure and developing fibre coating techniques that will serve well for sensitive and multifunctional detection of hazardous chemicals. This research proposal shows great potential for the future development of optical fiber sensors, using readily available components such as Bragg gratings, as highly sensitive chemical sensors in areas such as environmental sensing.Keywords: surface plasmon resonance, fibre Bragg grating, sensitivity, toxic gases, MATRIX method
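The resonance wavelength such a sensor monitors is anchored by the first-order Bragg condition, lambda_B = 2 * n_eff * Lambda. A minimal numeric sketch with typical telecom-band values (assumed here, not taken from the proposal) is:

```python
def bragg_wavelength(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * Lambda (nm).
    Chemical binding shifts n_eff, which shifts the reflected wavelength."""
    return 2.0 * n_eff * period_nm

# Typical telecom-band values (illustrative assumptions)
n_eff = 1.447      # effective index of the fibre core mode
period = 535.0     # grating period in nm
lam = bragg_wavelength(n_eff, period)   # lands in the C-band near 1550 nm
```

A change in the effective index caused by analyte binding on the plasmonic coating shifts `lam`, which is the quantity the interrogator tracks.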
Procedia PDF Downloads 264931 Study of Cathodic Protection for Trunk Pipeline of Al-Garraf Oil Field
Authors: Maysoon Khalil Askar
Abstract:
The delineation of possible areas of corrosion along the external face of the underground trunk oil pipeline of the Al-Garraf oil field was investigated using the horizontal electrical resistivity profiling technique, together with a study of the contribution of pH, soil moisture content, and the presence of chlorides, sulfates, and total dissolved salts in soil and water. The test sites represent a range of soil physical and chemical properties. The hydrogen-ion concentration of soil and groundwater ranges from 7.2 to 9.6, and the resistivity values of the soil along the pipeline, obtained using the YH302B model resistivity meter, lie between 1588 and 720 Ohm-cm. The chloride concentration in soil and groundwater is high (more than 1000 ppm), total soluble salt exceeds 5000 ppm, and sulphate ranges from 0.17% to 0.98% in soil and exceeds 600 ppm in groundwater. The soil is poorly aerated, its texture is fine (clay and silt), the water content is high (the groundwater is close to the surface), chloride and sulphate are high in the soil and groundwater, and the soil electrical resistivity is low, so the soil is very corrosive and there is a possibility of pipeline failure. The methods applied in this study are quick, economic, and efficient for surveying along buried pipelines which need to be protected. Routine electrical geophysical investigations along buried oil pipelines should be undertaken for the early detection and prevention of pipeline failure, with its attendant environmental, human, and economic consequences.Keywords: soil resistivity, corrosion, cathodic protection, chloride concentration, water content
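Converting a field resistance reading into an apparent soil resistivity of the kind reported above (720 to 1588 Ohm-cm) is a one-line formula. The sketch below assumes a Wenner four-electrode geometry, which the abstract does not specify, so both the array type and the numbers are illustrative assumptions.

```python
import math

def wenner_resistivity(spacing_cm, resistance_ohm):
    """Apparent soil resistivity (Ohm-cm) for an assumed Wenner
    four-electrode array: rho = 2 * pi * a * R."""
    return 2.0 * math.pi * spacing_cm * resistance_ohm

rho = wenner_resistivity(100.0, 1.6)   # 1 m electrode spacing, 1.6 Ohm reading
corrosive = rho < 2000                 # resistivities this low indicate very corrosive soil
```

Values in the 720 to 1588 Ohm-cm range measured along the pipeline fall well inside the commonly used "very corrosive" band, which is what motivates cathodic protection.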
Procedia PDF Downloads 436930 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. 
One of the differences between these methodologies is the type of input stimulus presented to the network, i.e., how the EEG signal is introduced into it. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43% and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing
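Two of the five input-stimulus types compared above, the FFT spectrum and the STFT spectrogram, can be sketched with standard transforms. The window length, hop size, sampling rate, and test signal below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def fft_spectrum(x):
    """Magnitude spectrum of one EEG epoch (one of the five input-stimulus types)."""
    return np.abs(np.fft.rfft(x))

def stft_spectrogram(x, win=64, hop=32):
    """Simple STFT magnitude spectrogram (Hann window): frames x frequency bins."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

fs = 256                                # assumed sampling rate (Hz)
t = np.arange(fs) / fs                  # one second of signal
x = np.sin(2 * np.pi * 10 * t)          # 10 Hz test tone standing in for EEG
spec = fft_spectrum(x)                  # peak at the 10 Hz bin
sg = stft_spectrogram(x)                # time-frequency representation
```

Either `spec` or a flattened `sg` would then be the input vector presented to the classifier network.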
Procedia PDF Downloads 528929 Molecular Profiles of Microbial Etiologic Agents Forming Biofilm in Urinary Tract Infections of Pregnant Women by RTPCR Assay
Authors: B. Nageshwar Rao
Abstract:
Urinary tract infection (UTI) represents the most commonly acquired bacterial infection worldwide, with substantial morbidity, mortality, and economic burden. The objective of this study is to characterize the profiles of uropathogens in the obstetric population by RTPCR. Study design: An observational cross-sectional study was performed at a single tertiary health care hospital among 50 pregnant women with UTIs, including asymptomatic and symptomatic patients attending the outpatient and inpatient departments of Obstetrics and Gynaecology. Methods: Serotyping and detection of the genes of various uropathogens were performed using RTPCR. Pulsed-field gel electrophoresis was used to determine the various genetic profiles. Results: The present study shows that the CsgD protein, involved in biofilm formation in Escherichia coli, and the VIM1 and IMP1 genes for Klebsiella were identified using the RTPCR method. Our results showed that the prevalence of the VIM1 and IMP1 genes and of the CsgD protein in E. coli has a significant relationship with strong biofilm formation, which may be due to the prevalence of these specific genes. Finally, the RTPCR genetic identification results for both bacteria were correlated with each other, and we conclude that the above uropathogens were the common biofilm-producing isolates in pregnant women suffering from urinary tract infection in our hospital-based observational study.Keywords: biofilms, Klebsiella, E.coli, urinary tract infection
Procedia PDF Downloads 124928 A Sui Generis Technique to Detect Pathogens in Post-Partum Breast Milk Using Image Processing Techniques
Authors: Yogesh Karunakar, Praveen Kandaswamy
Abstract:
Mother’s milk provides the most superior source of nutrition to a child; there is no substitute for it. Postpartum secretions like breast milk can be analyzed on the go to test for the presence of any harmful pathogen before a mother feeds the child or donates the milk to a milk bank. Since breast feeding is one of the main routes of disease transmission to the newborn, it is essential to test these secretions. In this paper, we describe the detection of pathogens like E. coli, Human Immunodeficiency Virus (HIV), Hepatitis B (HBV), Hepatitis C (HCV), Cytomegalovirus (CMV), Zika and Ebola virus through an innovative method in which we are developing a unique chip for testing the mother’s milk sample. The chip will contain an antibody specific to the target pathogen that shows a color change if enough pathogens are present in the fluid to be considered dangerous. A smart-phone camera will then acquire an image of the strip, and using various image processing techniques we will detect the color development due to the antigen-antibody interaction within 5 minutes, thereby adding no delay before the newborn is fed or before the collection of the milk for the milk bank. If the test for the target pathogen is positive, the health care provider can provide adequate treatment to bring down the pathogen count. This will reduce the postpartum mortality and morbidity that arises from feeding infectious breast milk to one’s own child.Keywords: postpartum, fluids, camera, HIV, HCV, CMV, Zika, Ebola, smart-phones, breast milk, pathogens, image processing techniques
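The colour-change detection step can be sketched as a mean-colour distance test on the strip image. The region of interest, baseline colour, and threshold below are illustrative assumptions, not the actual assay parameters.

```python
import numpy as np

def strip_positive(img_rgb, roi, baseline_rgb, delta=30.0):
    """Flag a test strip as positive when the mean colour of the test region
    drifts from the baseline by more than `delta` (Euclidean RGB distance).
    ROI, baseline, and threshold are illustrative assumptions."""
    r0, r1, c0, c1 = roi
    mean = img_rgb[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
    return bool(np.linalg.norm(mean - np.asarray(baseline_rgb, float)) > delta)

# Synthetic strip: white background with a reddish band where antibody binds
img = np.full((40, 100, 3), 255.0)
img[15:25, 40:60] = [200.0, 80.0, 80.0]

positive = strip_positive(img, (15, 25, 40, 60), (255, 255, 255))
```

In a real assay the threshold would be calibrated against known pathogen concentrations, and lighting would be normalised before the distance is computed.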
Procedia PDF Downloads 221927 Using Cyclic Structure to Improve Inference on Network Community Structure
Authors: Behnaz Moradijamei, Michael Higgins
Abstract:
Identifying community structure is a critical task in analyzing social media data sets often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by applying renewal non-backtracking random walks (RNBRW) to the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We make use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound on the variance of the RNBRW edge weights.Keywords: hypothesis testing, RNBRW, network inference, community structure
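A minimal sketch of the RNBRW edge-weighting idea described above is given below, on a toy graph of two triangles joined by a bridge. The renewal rule (increment the edge that closes a cycle, then restart) follows the abstract's description, while the graph, walk count, and implementation details are illustrative assumptions.

```python
import random

def rnbrw_edge_weights(adj, n_walks=2000, seed=1):
    """Renewal non-backtracking random walk (sketch): repeatedly start a
    non-backtracking walk from a random edge; when the walk revisits a node,
    the edge that closed the cycle gets its retracing weight incremented."""
    rng = random.Random(seed)
    nodes = list(adj)
    weights = {}
    for _ in range(n_walks):
        u = rng.choice(nodes)
        v = rng.choice(adj[u])
        visited = {u, v}
        prev, cur = u, v
        while True:
            choices = [x for x in adj[cur] if x != prev]  # no immediate backtrack
            if not choices:
                break
            nxt = rng.choice(choices)
            if nxt in visited:                            # cycle completed: renew
                edge = tuple(sorted((cur, nxt)))
                weights[edge] = weights.get(edge, 0) + 1
                break
            visited.add(nxt)
            prev, cur = cur, nxt
    return weights

# Toy graph: two triangles (communities) joined by a single bridge edge 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
weights = rnbrw_edge_weights(adj)
bridge_weight = weights.get((2, 3), 0)   # the bridge lies on no cycle
```

All retracing weight accumulates on the within-triangle edges and none on the bridge, which is exactly the within-community cycle prevalence the test exploits.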
Procedia PDF Downloads 150926 A Straightforward Method for Determining Inorganic Selenium Speciations by Graphite Furnace Atomic Absorption Spectroscopy in Water Samples
Authors: Sahar Ehsani, David James, Vernon Hodge
Abstract:
In this experimental study, total selenium in solution was measured with Graphite Furnace Atomic Absorption Spectroscopy, GFAAS, then chemical reactions with sodium borohydride were used to reduce selenite to hydrogen selenide. Hydrogen selenide was then stripped from the solution by purging the solution with nitrogen gas. Since the two main speciations in oxic waters are usually selenite, Se(IV) and selenate, Se(VI), it was assumed that after Se(IV) is removed, the remaining total selenium was Se(VI). Total selenium measured after stripping gave Se(VI) concentration, and the difference of total selenium measured before and after stripping gave Se(IV) concentration. An additional step of reducing Se(VI) to Se(IV) was performed by boiling the stripped solution under acidic conditions, then removing Se(IV) by a chemical reaction with sodium borohydride. This additional procedure of removing Se(VI) from the solution is useful in rare cases where the water sample is reducing and contains selenide speciation. In this study, once Se(IV) and Se(VI) were both removed from the water sample, the remaining total selenium concentration was zero. The method was tested to determine Se(IV) and Se(VI) in both purified water and synthetic irrigation water spiked with Se(IV) and Se(VI). Average recovery of spiked samples of diluted synthetic irrigation water was 99% for Se(IV) and 97% for Se(VI). Detection limits of the method were 0.11 µg L⁻¹ and 0.32 µg L⁻¹ for Se(IV) and Se(VI), respectively.Keywords: Analytical Method, Graphite Furnace Atomic Absorption Spectroscopy, Selenate, Selenite, Selenium Speciations
Procedia PDF Downloads 141925 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification in agriculture has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization when the elements to be classified vary significantly in appearance. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great capacity for learning and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep learning convolutional neural networks, for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level were defined, in collaboration with agronomists, for the algorithm to classify. Disease level assessments performed by experts provided ground truth data for the disease scores of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease level classes.Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
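The residual building block behind the networks named in the keywords can be sketched with plain arrays. Dense weights stand in for convolutions here, so this is an illustration of the identity-shortcut idea, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = ReLU(x + W2 . ReLU(W1 . x)).
    The shortcut lets the block learn a residual correction to x instead
    of a full transformation, which eases training of deep networks."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)

# With zero weights the residual branch is a no-op, so the block reduces
# to ReLU(x): the identity mapping that residual networks fall back on.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
```

In the paper's setting the two weight operators would be convolutional layers and the input a feature map of the wheat image.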
Procedia PDF Downloads 330924 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents
Authors: Subir Gupta, Subhas Ganguly
Abstract:
In this paper, we demonstrate a new area of application of image processing to metallurgical images, creating more opportunity for structure-property-correlation based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs has been experimentally developed, encompassing variations of the ferrite and pearlite volume fractions, with images taken at different magnifications (500X, 1000X, 1500X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in volume fraction was achieved using four different plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs was arbitrarily divided into two parts to develop training and testing sets of micrographs. The statistical recognition features for the ferrite and pearlite constituents were developed by learning from the training set of micrographs, and the obtained features for microstructure pattern recognition were then applied to the test set. The analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and volume fractions of the constituents in the structure, with an accuracy of about +/- 5%.Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure
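A minimal sketch of phase segmentation by global thresholding, one of the simplest routes to the ferrite/pearlite volume fractions discussed above: the synthetic image and threshold are assumptions, and the paper's actual recognition strategy is statistical rather than a fixed threshold.

```python
import numpy as np

def pearlite_fraction(gray, threshold=128):
    """Estimate the pearlite area fraction of a grayscale micrograph by
    global thresholding: pearlite etches dark, ferrite stays bright.
    The threshold is an assumed constant; in practice it would come from
    e.g. Otsu's method or the learned statistical features."""
    return float(np.mean(gray < threshold))

# Synthetic 'micrograph': a dark band (pearlite) in a bright ferrite matrix
img = np.full((100, 100), 200, dtype=np.uint8)
img[:30, :] = 50

frac = pearlite_fraction(img)   # dark band covers 30% of the image
```

Comparing such fractions across the 0.1 to 0.48 wt% C steels is what links the segmented images back to composition, the structure-property correlation the paper aims at.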
Procedia PDF Downloads 197