Search results for: detection algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6364

184 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation and identify both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample among the 37,503 data points to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
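The training loop the abstract describes — logistic regression with a sigmoid activation, a parameter vector fitted by ascending the log-likelihood of six features, and a threshold applied to the resulting scores — can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic data, learning rate, and the median stand-in for the paper's dynamic threshold are all assumptions, and the generating coefficients merely echo the reported parameter vector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=390):
    """Fit a parameter vector by gradient ascent on the mean log-likelihood."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        p = sigmoid(X @ theta)          # predicted probability of "coding"
        gradient = X.T @ (y - p) / n    # gradient of the mean log-likelihood
        theta += lr * gradient
    return theta

# Six alignment-free features per transcript (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X @ np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58]) > 0).astype(float)

theta = train_logistic(X, y)
scores = sigmoid(X @ theta)
threshold = np.median(scores)           # stand-in for the paper's dynamic threshold
labels = np.where(scores >= threshold, "protein-coding", "non-coding")
print("fraction labeled protein-coding:", (labels == "protein-coding").mean())
```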

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 37
183 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy. The prediction and management of preeclampsia are a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life. There is a huge cost burden on the health care system associated with preeclampsia and its complications. Preeclampsia is divided into two types: early onset preeclampsia develops before 34 weeks of gestation, and late onset develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognoses, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly around the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar. The purpose of this study is to compare the incidence and risk factors of both early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women’s Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, which was approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure during admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by hospital information management by applying preeclampsia codes. Of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital electronic system (Cerner) and the remaining 22% from patients’ paper records. Detailed data extraction was carried out on 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. The clinical findings on preeclampsia will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.

Keywords: preeclampsia, incidence, risk factors, maternal

Procedia PDF Downloads 116
182 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

In patient treatment planning quality assurance for 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Antoine Lacassagne center (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs to RadCalc and to the linac via Mosaiq (version 2.5), following the DICOM RT protocol. Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created with patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc. Next, we measured the doses delivered in these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs. Moreover, RadCalc calculations were compared to Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has been validated for clinical use for both the 3D-CRT and RapidArc techniques. The perspective of this project includes the validation of RadCalc for the Tomotherapy machine installed at the Antoine Lacassagne center.
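The manual check mentioned for 3D-CRT reduces to a point-dose formalism of the form MU = D / (output x TMR x modifiers). A toy sketch of such an independent check follows; all beam factors and the TPS value are hypothetical, and a clinical MUVC would use commissioned beam data and the full ESTRO formalism.

```python
def independent_mu_check(prescribed_dose_cGy, dose_rate_cGy_per_MU=1.0,
                         output_factor=0.98, tmr=0.85, wedge_factor=1.0):
    """Back-compute monitor units for a single static field (ESTRO-style point calc).

    All factors are illustrative; a clinical MUVC uses commissioned beam data.
    """
    dose_per_mu = dose_rate_cGy_per_MU * output_factor * tmr * wedge_factor
    return prescribed_dose_cGy / dose_per_mu

tps_mu = 245.0                              # MU reported by the TPS (hypothetical)
check_mu = independent_mu_check(200.0)      # 200 cGy prescribed at the point of interest
deviation = 100.0 * (check_mu - tps_mu) / tps_mu
print(f"Independent MU: {check_mu:.1f}, deviation vs TPS: {deviation:+.1f}%")
```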

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 190
181 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failures and the obstruction of links often lead to loss of link capacity in daily URT operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of link capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories in case of link capacity loss. Passengers in category 1 are less affected by the loss of link capacity; their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected by the loss of link capacity; their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the available capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of a URT network is related to passengers' travel time, passengers' transfer times, and whether seats are available to passengers. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort. Therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of link capacity loss on the transport service quality of the URT network is measured by passengers' relative average generalized travel cost with and without link capacity loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network. A stochastic user equilibrium distribution model based on an improved logit model is used to determine passengers' categories and calculate passengers' generalized travel cost in case of link capacity loss; it is solved with the method of successive weighted averages (MSWA) algorithm. The reliability and transport service quality indicators of the URT network are calculated from the solution. Taking the Wuhan Metro as a case study, the reliability and transport service quality of the Wuhan Metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link can effectively identify important links, which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and the passenger flow on important links is high; as the number of failed links and the proportion of capacity loss increase, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases; when the number of failed links and the proportion of capacity loss increase to a certain level, the decline in transport service quality levels off.
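The assignment step named in the abstract — a logit-based stochastic user equilibrium solved by the method of successive weighted averages — can be sketched on a toy two-path network. The demand, costs, capacities, logit parameter, and the BPR-type congestion function below are all assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def logit_split(costs, theta=0.5):
    """Logit route-choice probabilities from generalized costs."""
    u = np.exp(-theta * np.asarray(costs))
    return u / u.sum()

def mswa_assignment(demand, free_cost, capacity, iters=50):
    """Method of successive weighted averages on a toy two-path network."""
    flow = demand * logit_split(free_cost)
    for n in range(1, iters + 1):
        cost = free_cost * (1 + 0.15 * (flow / capacity) ** 4)  # BPR congestion (assumed)
        target = demand * logit_split(cost)
        step = n / sum(range(1, n + 1))     # weighted-average step size, weight = n
        flow = flow + step * (target - flow)
    return flow, cost

flows, costs = mswa_assignment(demand=1000.0,
                               free_cost=np.array([20.0, 25.0]),
                               capacity=np.array([600.0, 800.0]))
print("equilibrium flows:", flows.round(1), "generalized costs:", costs.round(2))
```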

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 108
180 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), including 95–97% under challenging conditions. To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, including the resolution of functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
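A whole-proteome classifier of the kind described — a convolutional network mapping a tri-color optical trace to a protein identity — might look like the PyTorch sketch below. The architecture, trace length, and proteome size are assumptions for illustration; the paper's actual network is not specified in the abstract.

```python
import torch
import torch.nn as nn

class ProteinIDCNN(nn.Module):
    """Toy 1D CNN: maps a 3-channel (tri-color) intensity trace to a protein ID."""
    def __init__(self, n_proteins, trace_len=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Two pooling stages shrink the trace by a factor of 16.
        self.classifier = nn.Linear(64 * (trace_len // 16), n_proteins)

    def forward(self, x):                     # x: (batch, 3, trace_len)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ProteinIDCNN(n_proteins=20000)        # order of the human proteome (assumed)
logits = model(torch.randn(8, 3, 512))        # a batch of simulated photon traces
print(logits.shape)                           # torch.Size([8, 20000])
```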

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 44
179 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity inlet condition at the flow inlet boundary, and a pressure outlet condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function is chosen for the wall treatment, and the standard k-ε turbulence model agrees most consistently with the experimental results. As the jet approaches the end of the basin, the difference between the numerical and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
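The validation comparisons reported here (numerical vs. experimental discharge diverging as the water level rises) amount to relative-error calculations of the following form; the head and discharge values below are purely illustrative placeholders, not the study's data.

```python
import numpy as np

# Hypothetical head-over-crest values (m) and corresponding discharges (m^3/s).
head = np.array([0.05, 0.10, 0.15, 0.20])
q_experiment = np.array([0.012, 0.034, 0.061, 0.090])
q_numerical = np.array([0.012, 0.035, 0.065, 0.099])

relative_error = 100.0 * np.abs(q_numerical - q_experiment) / q_experiment
for h, e in zip(head, relative_error):
    print(f"H = {h:.2f} m -> |error| = {e:.1f}%")   # error grows with water level
```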

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 56
178 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem: an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected, archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20 days) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
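The reported clinical metrics follow directly from the stated counts (120 CT-positive subjects of whom 116 tested positive; 1779 CT-negative subjects of whom 713 tested negative). A minimal sketch reproducing them:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Counts from the pivotal study described above.
sens, spec, npv = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1066)
print(f"Sensitivity {sens:.1%}, specificity {spec:.1%}, NPV {npv:.1%}")
# -> Sensitivity 96.7%, specificity 40.1%, NPV 99.4%
```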

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 37
177 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both the group's social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another. Effective coordination among these decision-makers is critical, and finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained thanks to a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is used to model the negotiation process and reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to changes. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
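The evaluation core of such a methodology — scoring each design alternative with each stakeholder's multi-attribute utility and retaining the non-dominated (Pareto) set that a game-theoretic negotiation would then explore — can be sketched as follows. The attribute values and weights are hypothetical, and an exhaustive dominance scan stands in for the evolutionary search used at scale.

```python
import numpy as np

def stakeholder_utility(attrs, weights):
    """Multi-attribute utility: weighted sum of normalized attribute scores."""
    return float(np.dot(attrs, weights))

def pareto_front(utilities):
    """Indices of alternatives not dominated in every stakeholder's utility."""
    front = []
    for i, u in enumerate(utilities):
        dominated = any(np.all(v >= u) and np.any(v > u) for v in utilities)
        if not dominated:
            front.append(i)
    return front

rng = np.random.default_rng(1)
designs = rng.random((50, 4))                  # 50 design alternatives, 4 attributes
weights = [np.array([0.4, 0.3, 0.2, 0.1]),     # stakeholder A's elicited weights
           np.array([0.1, 0.2, 0.3, 0.4])]     # stakeholder B's elicited weights
U = np.array([[stakeholder_utility(d, w) for w in weights] for d in designs])
print("Pareto-optimal design indices:", pareto_front(U))
```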

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 108
176 The Evaluation of Subclinical Hypothyroidism in Children with Morbid Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Cardiovascular pathology is one of the expected consequences of excessive fat gain, and the role of zinc in thyroid hormone metabolism is an important matter. The concentrations of both thyroid-stimulating hormone (TSH) and zinc are subject to variation in obese individuals. Zinc exhibits protective effects on cardiovascular health and is inversely correlated with cardiovascular markers in childhood obesity. The association between subclinical hypothyroidism (SCHT) and metabolic disorders is under investigation due to its clinical importance. An underactive thyroid gland causes high TSH levels; subclinical hypothyroidism is defined as elevated serum TSH levels in the presence of normal free thyroxine (T4) concentrations. The aim of this study was to evaluate the associations between TSH levels and zinc concentrations in morbidly obese (MO) children exhibiting SCHT. The possibility of using the probable association between these parameters to discriminate between metabolic syndrome positive (MetS+) and metabolic syndrome negative (MetS-) groups was also evaluated. Forty-two children were present in each group. Informed consent forms were obtained, and the Institutional Ethics Committee approved the study protocol. Tables prepared by the World Health Organization were used for the definition of MO children: children whose age- and sex-dependent body mass index percentile values were above 99 were defined as MO. Children with at least two MetS components were included in the MOMetS+ group. Elevated systolic/diastolic blood pressure values and increased fasting blood glucose and triglyceride (TRG) or decreased high-density lipoprotein cholesterol (HDL-C) concentrations, in addition to central obesity, were listed as MetS components. Anthropometric measures were recorded, and routine biochemical analyses were performed. Thirteen and fifteen children had SCHT in the MOMetS- and MOMetS+ groups, respectively. Statistical analyses were performed, with p<0.05 accepted as statistically significant. In the MOMetS- and MOMetS+ groups, TSH levels were 4.1±2.9 mU/L and 4.6±3.1 mU/L, respectively; the corresponding values for the SCHT cases in these groups were 7.3±3.1 mU/L and 8.0±2.7 mU/L. Free T4 levels were within normal limits. Zinc concentrations were negatively correlated with TSH levels in both groups. The significant negative correlation calculated in the MOMetS+ group (r= -0.909; p<0.001) was much stronger than that found in the MOMetS- group (r= -0.706; p<0.05). This strong correlation (r= -0.909; p<0.001), calculated for cases with SCHT in the MOMetS+ group, was much weaker (r= -0.793; p<0.001) when all MOMetS+ cases were considered. Zinc is closely related to T4 and TSH; therefore, it participates in thyroid hormone metabolism. Since thyroid hormones are required for zinc absorption, hypothyroidism can lead to zinc deficiency. The presence of strong correlations between TSH and zinc in the SCHT cases of both the MOMetS- and MOMetS+ groups pointed out that MO children are under the threat of cardiovascular pathologies. The detection of a much stronger correlation in the MOMetS+ group in comparison with the MOMetS- group was an indicator of greater cardiovascular risk due to the presence of MetS. Within the MOMetS+ group, the stronger correlation found in the SCHT cases compared with that calculated for all cases confirmed a much higher cardiovascular risk due to the contribution of SCHT.

Keywords: cardiovascular risk, children, morbid obesity, subclinical hypothyroidism, zinc

Procedia PDF Downloads 55
175 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Over the past decade, climate and atmospheric conditions have changed, altering rainfall and seasonal cycles. India's tropical agriculture relies on evapotranspiration and monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing might be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluations of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this technique, the agricultural boundaries and crop types of all villages must be digitally mapped. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields. Using AI, digital agricultural solutions can be designed for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining Artificial Intelligence (AI) with satellite and near-surface data, and then to enable long-term crop monitoring through in-depth field analysis and the scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to track vegetation growth over time using PhenoCam or smartphone-based time-lapse images. We developed an Android application through which users can collect images of their fields; these images are sent to our local server, where further AI-based processing is done. We are creating digital boundaries of individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops each season. We extract satellite-based information for each farm from Google Earth Engine APIs and merge it with the crop data collected through our app according to farm locations, creating a database that provides crop quality data by location.
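As a sketch of the satellite half of the pipeline, the Earth Engine Python API can return a seasonal vegetation-index summary for one digitized farm polygon. The collection, dates, and farm coordinates below are illustrative assumptions; the snippet assumes an authenticated `earthengine-api` installation.

```python
import ee

ee.Initialize()  # assumes prior `earthengine authenticate`

# Hypothetical digitized farm boundary (lon/lat ring).
farm = ee.Geometry.Polygon([[[85.10, 26.10], [85.11, 26.10],
                             [85.11, 26.11], [85.10, 26.11]]])

def add_ndvi(image):
    """Append an NDVI band computed from Sentinel-2 NIR (B8) and red (B4)."""
    return image.addBands(image.normalizedDifference(['B8', 'B4']).rename('NDVI'))

collection = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
              .filterBounds(farm)
              .filterDate('2023-06-01', '2023-09-30')
              .map(add_ndvi))

mean_ndvi = (collection.select('NDVI').mean()
             .reduceRegion(ee.Reducer.mean(), farm, scale=10)
             .get('NDVI'))
print('Season mean NDVI for this farm:', mean_ndvi.getInfo())
```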

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 68
174 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has made it possible to identify patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different light bands may generate even more detailed information. Knowledge of the nutrient content in the soil or in the crop during its growth is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and even when they do, it is not presented in a comprehensive and/or objective way. So, the challenges of implementing this technology range from the sampling of the imagery using aerial platforms, building a mosaic of the images to cover the entire crop field, extracting the reflectance information, and analyzing its relationship with the parameters of interest, to displaying the results in a manner that allows the farmer to take the necessary decisions more objectively. In this work, an analysis of soil nutrient content based on image processing of satellite imagery is proposed, and its outputs are compared with a commercial laboratory's chemical analysis. Also, sources of satellite imagery are compared to assess the feasibility of using Google Earth data in this application, and the impacts of doing so versus using imagery from satellites like Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of the image resolution and sampling time frame are discussed, allowing easy assessment of the results. The final results show that easier and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
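The estimation step — relating band reflectance extracted from the imagery to laboratory-measured nutrient content — can be illustrated with a least-squares fit of potassium against NDVI. All reflectance and laboratory values below are hypothetical placeholders; the study's actual estimator is not specified in the abstract.

```python
import numpy as np

# Hypothetical per-plot mean reflectances (red, NIR) extracted from the mosaic,
# paired with laboratory potassium measurements for the same plots.
red = np.array([0.21, 0.18, 0.25, 0.15, 0.20, 0.17])
nir = np.array([0.55, 0.61, 0.48, 0.66, 0.58, 0.63])
k_lab = np.array([92.0, 110.0, 75.0, 128.0, 98.0, 118.0])   # mg/dm^3

ndvi = (nir - red) / (nir + red)

# Least-squares fit: nutrient content as a linear function of NDVI.
A = np.vstack([ndvi, np.ones_like(ndvi)]).T
slope, intercept = np.linalg.lstsq(A, k_lab, rcond=None)[0]

k_predicted = slope * ndvi + intercept
rmse = np.sqrt(np.mean((k_predicted - k_lab) ** 2))
print(f"K ~ {slope:.1f}*NDVI + {intercept:.1f}  (RMSE {rmse:.1f} mg/dm^3)")
```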

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 149
173 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip-control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. From the analysis of a software implementation of this model, the present implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to roughly 1/180 of that of software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
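The matrix operations being parallelized can be pictured as a regularized least-squares inversion of a linear model that maps per-taxel forces to measured normal stresses. The sketch below is a software stand-in under an assumed observation matrix, not the authors' FPGA design; the coupling model, noise level, and regularization weight are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear observation model: stress readings s = T @ f, where T maps the
# per-taxel force vector f to the measured normal stress distribution.
n_taxels = 10 * 10
T = rng.normal(size=(n_taxels, n_taxels)) * 0.1 + np.eye(n_taxels)
f_true = np.zeros(n_taxels)
f_true[44] = 1.5                                # a single contact near the array center
stress = T @ f_true + rng.normal(scale=0.01, size=n_taxels)

# Tikhonov-regularized least squares: the matrix solve an FPGA can parallelize.
lam = 1e-3
f_est = np.linalg.solve(T.T @ T + lam * np.eye(n_taxels), T.T @ stress)
print("peak taxel:", int(np.argmax(f_est)),
      "estimated force:", round(float(f_est[44]), 3))
```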

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 165
172 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province

Authors: Athini Ntloko

Abstract:

Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). MTBC members may infect an extensive range of hosts, including wildlife. Diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. As a result, bTB risk factors have been a major subject of study worldwide, with some studies focusing on particular risk factors such as wildlife and herd management. The significance of this study was to determine the incidence of Mycobacterium tuberculosis complex associated with soil, hayfeed, and water. Questionnaires were administered to thirty (30) smallholding farm owners in two villages (kwaMasele and Qungqwala) and at three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm, and Seven Star dairy farm). Detection of M. tuberculosis complex was achieved by polymerase chain reaction (PCR) using primers for IS6110, whereas genotypic drug resistance mutations were detected using GenoType MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows. The relationship between farm size and TB vaccination varied from forty-one percent (41%) at the highest to five percent (5%) at the lowest. The highest number of respondents who knew about the relationship between TB cases and cattle location was ninety-one percent (91%). Approximately fifty-one percent (51%) of respondents had knowledge of wildlife access to the farms. The relationship between the import of cattle and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size varied from forty-three percent (43%) at the highest to three percent (3%) at the lowest, while thirty-three percent (33%) of respondents had knowledge of health management. Respondents with knowledge of the occurrence of TB infections on farms amounted to forty-eight percent (48%). The frequency of DNA isolation from samples ranged from the highest, forty-five percent (45%) from water, to the lowest, twenty-two percent (22%) from soil. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water samples, whereas Middledrift dairy farm had the lowest number of water positives, seventeen percent (17%). Twelve (22%) out of 55 isolates showed resistance to INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. Mutations in the rpoB gene ranged from 58% at the highest down to 23%. Fifty-seven percent (57%) of samples showed an S315T1 mutation, while only 14% possessed an S531L mutation in the katG gene. The highest frequency of inhA mutations was detected in T8A (80%), and the lowest was observed in A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers are a serious issue in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug resistance determinants in MTBC from the environment.

Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water

Procedia PDF Downloads 311
171 ExactData Smart Tool for Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

ExactData is a smart tool that helps with meaningful marketing content creation. It does so by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection, and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language that specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction, and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research is tackling the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms. By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation, and text correction. To ensure the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them, exploring a range of methods, including transformers and convolutional neural networks (CNNs), to determine which are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool that can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
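A transformer-based sentiment pass over a piece of Polish ad copy, of the kind the tool performs, can be sketched with the Hugging Face `transformers` pipeline. The multilingual model named below is a stand-in assumption; a production tool for Polish would use a model fine-tuned on Polish marketing text.

```python
from transformers import pipeline

# Multilingual sentiment model used purely as a stand-in; the actual tool
# would load a classifier fine-tuned for Polish.
classifier = pipeline("sentiment-analysis",
                      model="cardiffnlp/twitter-xlm-roberta-base-sentiment")

ad_copy = "Ta promocja jest niesamowita – nie przegap okazji!"
result = classifier(ad_copy)[0]
print(f"{result['label']} (score {result['score']:.2f})")
```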

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 56
170 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition

Authors: Gopalasingam Daisan

Abstract:

Landmines and unexploded ammunition pose a significant threat to people and animals; after a war, landmines remain in the ground and play a major role in civilian security. Children are at the highest risk because they are curious: an unexploded bomb can look like a tempting toy to an inquisitive child. The initial step in designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate the landmines and other unexploded ammunition. The sensor weight and the weight of the components supporting the sensor are taken as the payload weight. The mission requirement is to find the landmines in a particular area by planning a path that covers the entire desired area. The weight of the UAV (Unmanned Aerial Vehicle) can be estimated in the first phase of the design with good accuracy, using various previously established techniques. The next crucial part of the design is to calculate the power requirement and the wing loading. Matching plot techniques are used to determine the thrust-to-weight ratio; this technique makes the process not only easy but also precise. The wing loading can be calculated directly from the stall equation. After these calculations, the wing area is determined from the wing loading equation, and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. An important step in the wing design is to choose a proper aerofoil that ensures a sufficient lift coefficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment. As the vertical tail design depends on many parameters, only the initial sizing can be done in this phase. The fuselage is another major component; it is sized based on the slenderness ratio, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is one of the important components and is selected based on the controllability and stability requirements. The minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn angle requirements. The minor components of the landing gear design and estimation are not the focus of this project. Another important task is to estimate the weight of the major components using empirical relations and to assign a mass to each such component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, especially the V-n (velocity-load factor) diagram for different flight conditions, such as undisturbed flight and flight with gust velocity.
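The two sizing relations this process leans on — wing loading from the stall equation, W/S = 0.5 * rho * V_stall^2 * CL_max, and power from the matching-plot thrust-to-weight ratio — can be sketched numerically. All aircraft numbers below are illustrative assumptions, not the paper's design values.

```python
RHO = 1.225   # sea-level air density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wing_loading_from_stall(v_stall, cl_max):
    """Stall equation: W/S = 0.5 * rho * V_stall^2 * CL_max  (N/m^2)."""
    return 0.5 * RHO * v_stall ** 2 * cl_max

# Illustrative numbers for a small landmine-survey UAV (all assumed).
mtow_kg = 12.0                        # estimated maximum take-off weight
weight_n = mtow_kg * G
ws = wing_loading_from_stall(v_stall=12.0, cl_max=1.4)
wing_area_m2 = weight_n / ws          # S = W / (W/S)

t_w = 0.45                            # thrust-to-weight ratio from the matching plot
cruise_speed = 20.0                   # m/s
required_power_w = t_w * weight_n * cruise_speed   # P = T * V at cruise

print(f"Wing area: {wing_area_m2:.2f} m^2, required power: {required_power_w:.0f} W")
```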

Keywords: landmine, UAS, matching plot, optimization

Procedia PDF Downloads 147
169 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India

Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah

Abstract:

In the absence of adequate, well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, the 'Very-Low-Birth-Weight (vLBW) Tracking' intervention has been conducted by CARE India since 2015, targeting public-facility-delivered newborns weighing ≤2000g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (Feb-Apr 2015 births) and three post-intervention (Sep-Oct 2015, Sep-Oct 2016, and Sep-Oct 2017 births) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage, and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS 9.4. Identification of neonates with birth weight ≤2000g improved from 2% at baseline to 3.3%-4% during the post-intervention period. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between baseline and post-intervention rounds were also noted regarding mothers being informed about a 'weak' child, both at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of 'Kangaroo Mother Care' (KMC), an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline, both at the facility (R1 = 15% to R4 = 31%) and at home (R1 = 10% to R4 = 29%). An increasing trend was noted in the detection and birth-weight recording of extremely low-birth-weight newborns (<1500g). Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500g, 1500-1799g, and ≥1800g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase as well as during the maintenance phase, birth weight >1500g, children of low-parity mothers, and receiving a visit from an FLW in the first week and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.

Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response

Procedia PDF Downloads 130
168 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual's personality. Today's students will be tomorrow's citizens of the global society, and the education of the student is the edifice on which his or her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behavior and attitude of students in school play an essential role in the success of the education process. Frequent reports of misbehavior such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster negative attitudes and increase delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students' behavior in school, give necessary feedback, and mentor them to develop a positive attitude and help them become successful grownups. Nevertheless, measuring students' behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of students are rarely captured as data, owing to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of students. This can be achieved by equipping schools with CCTV cameras. These CCTV cameras, installed in various schools around the world, capture the facial expressions and interactions of students inside and outside their classrooms. The real-time raw video captured by the CCTV cameras can be uploaded to the cloud over a network. The video feeds are distributed across nodes in the same rack, or on different racks in the same cluster, in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce paradigm. Then, the video frames are compared with a benchmark database of good behavior. When misbehavior is detected, an alert message can be sent to the counseling department, which helps in mentoring the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools in different parts of the world), BIG DATA helps in real-time analysis, as it computationally reveals patterns, trends, and associations, especially those relating to human behavior and interactions. It also handles data that cannot be analyzed by traditional software applications such as RDBMSs and OODBMSs, and it has proven successful in handling human reactions with ease. Therefore, BIG DATA could certainly play a vital role in handling this issue, and the effectiveness of the education process can be enhanced with the help of video analytics using the latest BIG DATA technology.
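The map and reduce stages described — per-frame classification results mapped to (camera, flag) pairs, then reduced to incident counts that trigger counseling alerts — can be sketched in plain Python. The record format, labels, and camera IDs are hypothetical; a production job would run on Hadoop rather than in-process.

```python
from collections import defaultdict

def map_frame(record):
    """Map stage: emit (school_camera, 1) for each frame flagged as misbehavior."""
    camera_id, behavior_label = record
    if behavior_label != "good":
        yield camera_id, 1

def reduce_counts(mapped):
    """Reduce stage: sum flagged frames per camera to prioritize counseling alerts."""
    totals = defaultdict(int)
    for key, value in mapped:
        totals[key] += value
    return dict(totals)

# Hypothetical per-frame classifier outputs streamed from HDFS.
frames = [("school1_cam3", "good"), ("school1_cam3", "harassment"),
          ("school2_cam1", "clowning"), ("school1_cam3", "verbal_insult")]

mapped = (pair for record in frames for pair in map_frame(record))
print(reduce_counts(mapped))   # {'school1_cam3': 2, 'school2_cam1': 1}
```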

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 220
167 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite-type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram, with chord intersections represented by edges between the corresponding vertices. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of the identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation operators. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modeled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
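The intersection graph is straightforward to compute: two chords (a, b) and (c, d) on the backbone cross exactly when their endpoints interleave, i.e. a < c < b < d or c < a < d < b. A minimal sketch (the example diagram is arbitrary):

```python
from itertools import combinations

def crosses(c1, c2):
    """Two chords on a backbone cross iff their endpoints interleave."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return a < c < b < d or c < a < d < b

def intersection_graph(chords):
    """Vertices = chords; edges = crossing pairs."""
    return [(i, j) for (i, c1), (j, c2) in combinations(enumerate(chords), 2)
            if crosses(c1, c2)]

# A linear chord diagram on backbone points 1..6: all three chords
# pairwise interleave, so the intersection graph is a triangle.
lcd = [(1, 4), (2, 5), (3, 6)]
print(intersection_graph(lcd))   # [(0, 1), (0, 2), (1, 2)]
```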

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 175
166 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple market segments. Thus, several research and development centers are studying manufacturing methods and material applications of graphene, efforts that are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard-deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmenting graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome it, it is important to ensure that the samples to be analyzed are properly prepared: this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed saves considerable time while retaining high measurement value, as it is capable of measuring hundreds of graphene oxide crystals in seconds, replacing weeks of manual work.
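
A sketch of the post-segmentation measurement step is shown below, assuming a binary mask such as a U-net might produce; it uses scikit-image's connected-component labeling and region properties, and the toy mask stands in for a real SEM segmentation:

```python
# Sketch of the measurement step after segmentation: label each crystal
# and extract position, area, perimeter, and lateral measures. The toy
# mask below is illustrative only.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)
mask[5:20, 5:25] = 1      # toy "crystal" 1
mask[30:55, 30:50] = 1    # toy "crystal" 2

labeled = label(mask)      # object delimitation: one integer id per crystal
records = []
for region in regionprops(labeled):
    records.append({
        "centroid": region.centroid,      # position
        "area_px": region.area,           # area in pixels
        "perimeter_px": region.perimeter, # perimeter in pixels
        "bbox": region.bbox,              # lateral measures via bounding box
    })
for r in records:
    print(r)
```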

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
165 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices

Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Obesity in childhood lays the groundwork for adult obesity. Morbid obesity in particular is an important problem for children because of associated diseases such as diabetes mellitus, cancer, and cardiovascular disease. In this study, body mass index (BMI), body fat ratios, and anthropometric measurements and ratios were evaluated together with different laboratory indices in the assessment of morbidly obese (MO) children. Children with nutritional problems participated in the study; written informed consent was obtained from the parents, and the study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included in the scope of the study. WHO BMI percentiles for age and sex were used to assess the children; those above the 99th percentile were classified as morbidly obese. Anthropometric measurements were recorded after physical examination, and bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II, as well as insulin sensitivity indices (ISIs), were calculated. Girls and boys were each divided into binary groups according to the frequently used cut-off points: homeostasis model assessment-insulin resistance (HOMA-IR) index below or above 2.5, fasting glucose-to-insulin ratio (FGIR) below or above 6, and quantitative insulin sensitivity check index (QUICKI) below or above 0.33. They were evaluated based on their BMIs; arm, leg, trunk, and whole-body fat percentages; body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR), and whole-body fat ratio (WBFR); and anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, and height/2-to-hip circumference (C)]. SPSS/PASW 18 was used for statistical analyses, with p≤0.05 accepted as the level of statistical significance. In girls, all of the fat percentages differed between the groups below and above the specified cut-off points when evaluated with HOMA-IR and QUICKI. In boys, differences were observed only in arm fat percentage for HOMA-IR and leg fat percentage for QUICKI (p≤0.05); FGIR was unable to detect any differences in the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended for use with all ISIs (p≤0.001 for both girls and boys with HOMA-IR; p≤0.001 for girls and p≤0.05 for boys with FGIR and QUICKI). Indices recommended for use in both genders were Index-I, Index-II, HOMA/BMI, and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). Notably, HOMA/BMI and log HOMA retained this strong significance when evaluated with the other indices, FGIR and QUICKI (p≤0.001); these parameters, along with Index-I, were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders. This study has highlighted their limitations in boys, which is particularly important when selecting ratios and/or indices for clinical studies. Gender differences should be taken into consideration when evaluating the ratios or indices to be recommended, particularly within the scope of obesity studies.
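
For reference, the three grouping indices can be computed from fasting glucose (mg/dL) and insulin (µU/mL) with their standard literature formulas, which the abstract does not restate; the following Python sketch uses invented example values:

```python
# Standard literature formulas for the three insulin sensitivity indices
# used for grouping (glucose in mg/dL, insulin in µU/mL); the input
# values below are illustrative, not study data.
import math

def homa_ir(glucose, insulin):
    return glucose * insulin / 405.0

def fgir(glucose, insulin):
    return glucose / insulin

def quicki(glucose, insulin):
    return 1.0 / (math.log10(glucose) + math.log10(insulin))

g, i = 90.0, 15.0  # illustrative fasting values
print(f"HOMA-IR = {homa_ir(g, i):.2f}  (cut-off 2.5)")
print(f"FGIR    = {fgir(g, i):.2f}  (cut-off 6)")
print(f"QUICKI  = {quicki(g, i):.3f} (cut-off 0.33)")
```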

Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index

Procedia PDF Downloads 334
164 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to an engine type with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy; this acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy to electric energy is the application of a linear generator or a conventional generator with a crank mechanism. In both cases a piston is used, whose main disadvantages are friction losses, lubrication problems, and working-fluid pollution, all of which decrease engine power and ecological efficiency. The use of a bidirectional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient. The most appropriate type proved to be the axial impulse turbine, which has a simpler design than the radial turbine and similar efficiency. The peculiarities of the impulse-turbine calculation method are discussed; they include the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In such a system, the pressure varies continuously according to the generated acoustic waves, and the peak pressure values define the amplitude, which determines the acoustic power. The gas flowing in the thermoacoustic system periodically changes direction; its mean velocity is zero, but its peak values can be used to rotate the bi-directional turbine. In contrast to a conventional feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences its calculation algorithm. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Three-dimensional modeling and a numerical study of the impulse turbine were then carried out, yielding the main parameters of the working fluid in the turbine. On the basis of the theoretical and numerical data, a model of the impulse turbine was produced on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable: as a converter it is comparable in its characteristics with linear electric generators, but its service life will be longer and the engine itself smaller, owing to the turbine's rotary motion.
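
As a rough illustration of how the pressure amplitude sets the available acoustic power, the plane travelling-wave relation I = p²/(2ρc) can be applied; in the Python sketch below, the duct area and gas properties are assumptions, with only the 1.7 kPa amplitude taken from the paper:

```python
# Back-of-the-envelope sketch of the acoustic power carried by a plane
# travelling wave, I = p^2 / (2*rho*c). The gas properties and duct
# area are assumptions, not values from the paper.
rho = 1.2      # gas density, kg/m^3 (assumed: air)
c = 343.0      # speed of sound, m/s (assumed)
p_amp = 1.7e3  # pressure amplitude, Pa (from the paper)
area = 5e-3    # duct cross-section, m^2 (assumed)

intensity = p_amp**2 / (2 * rho * c)   # acoustic intensity, W/m^2
power = intensity * area               # power through the duct, W
print(f"acoustic intensity ~ {intensity:.0f} W/m^2, power ~ {power:.1f} W")
```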

Keywords: acoustic power, bi-directional impulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 348
163 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the resulting chloride profile is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly with the mass of binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for investigating the diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Owing to the better spatial resolution, more accurate input parameters for model calculations are obtained, and through the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer, and a portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site; the results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Work is currently underway to develop a technical code of practice for applying the method to the determination of chloride concentration in concrete.
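
A common way to make LIBS line intensities quantitative is a univariate calibration against reference samples of known chloride content; the Python sketch below illustrates this approach with invented numbers and is not the authors' calibration procedure:

```python
# Sketch of univariate LIBS calibration: a linear fit of chlorine line
# intensity against reference chloride contents (e.g., from wet
# chemical analysis), then inversion for unknown samples. All numbers
# are made-up placeholders.
import numpy as np

cl_ref = np.array([0.0, 0.5, 1.0, 2.0, 3.0])            # wt.-% Cl per binder
intensity = np.array([120., 480., 850., 1630., 2400.])  # line intensity, a.u.

slope, intercept = np.polyfit(cl_ref, intensity, 1)     # least-squares line

def chloride_from_intensity(i_meas):
    """Invert the calibration line for a measured line intensity."""
    return (i_meas - intercept) / slope

print(f"unknown at 1200 a.u. -> {chloride_from_intensity(1200.):.2f} wt.-% Cl")
```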

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 88
162 Monsoon Controlled Mercury Transportation in Ganga Alluvial Plain, Northern India and Its Implication on Global Mercury Cycle

Authors: Anjali Singh, Ashwani Raju, Vandana Devi, Mohmad Mohsin Atique, Satyendra Singh, Munendra Singh

Abstract:

India is the biggest consumer of mercury and, consequently, a major emitter too. The increasing mercury contamination of India's water resources has gained widespread attention, and atmospheric deposition is therefore of critical concern. However, little emphasis has been placed on the role of precipitation in the aquatic mercury cycle of the Ganga Alluvial Plain, which provides drinking water to nearly 7% of the world's human population. Most of the annual precipitation here falls within roughly 10% of the year, during the monsoon season. To evaluate the sources and transportation of mercury, water samples were analyzed from two selected sites near Lucknow that have a strong hydraulic gradient towards the river. Thirty-one groundwater samples from Jehta village (26°55’15’’N; 80°50’21’’E; 119 m above mean sea level) and 31 river-water samples from the Behta Nadi (a tributary of the Gomati River draining into the Ganga River) were collected during the monsoon season on every alternate day between 1 July and 30 August 2019. Total mercury analysis was performed using a Flow Injection Atomic Absorption Spectroscopy (AAS)-mercury hydride system, and daily rainfall data were obtained from the India Meteorological Department, Amausi, Lucknow. The ambient groundwater and river-water concentrations were both 2-4 ng/L, as there is no known geogenic source of mercury in the area. Before the onset of the monsoon season, the groundwater and the river water recorded mercury concentrations two orders of magnitude higher than the ambient concentrations, indicating regional transportation of mercury from non-point sources into the aquatic environment. Maximum mercury concentrations in groundwater and river water were three orders of magnitude higher than the ambient concentrations after the onset of the monsoon season, characterizing the considerable mobilization and redistribution of mercury by monsoonal precipitation. About 50% of both sets of water samples had mercury below the detection limit, which can mostly be linked to the low intensity of precipitation in August and to dilution by precipitation. The highest concentration of mercury in groundwater (>1200 ng/L) was recorded after a 6-day lag from the first precipitation peak, and two high-concentration peaks (>1000 ng/L) in river water were separately correlated with the surface flow and groundwater outflow of mercury. We attribute the elevated mercury concentrations in both water bodies before the precipitation event to mercury originating from the extensive use of agrochemicals in mango farming in the plain. The elevated concentrations at the onset of the monsoon, however, appear to reflect the increase in the area wetted with atmospherically deposited mercury, which then migrates down from surface water to groundwater, downslope migration being a fundamental mechanism in rivers of the alluvial plain. The present study underscores the significance of monsoonal precipitation in the transportation of mercury to the drinking water resources of the Ganga Alluvial Plain. It also suggests that future research should pursue a better understanding of the human health impact of mercury contamination and quantify the role of the Ganga Alluvial Plain in the global mercury cycle.
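
The reported lag between the precipitation peak and the groundwater mercury peak can be illustrated with a discrete cross-correlation; in the Python sketch below, the two series are synthetic stand-ins for the alternate-day sampling, not the study data:

```python
# Sketch of estimating the lag between a rainfall peak and the
# groundwater mercury peak with a discrete cross-correlation; both
# series are synthetic placeholders for the alternate-day sampling.
import numpy as np

rain = np.array([0, 0, 30, 5, 0, 0, 0, 0, 0, 0, 0, 0], float)          # mm/day
hg = np.array([3, 3, 4, 10, 60, 300, 1200, 400, 90, 20, 8, 4], float)  # ng/L

rain -= rain.mean()
hg -= hg.mean()
xcorr = np.correlate(hg, rain, mode="full")
lags = np.arange(-len(rain) + 1, len(rain))
best = lags[np.argmax(xcorr)]
print(f"Hg peak trails rainfall by ~{best} samples")
# ~4 samples here; the study reported a ~6-day lag for groundwater.
```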

Keywords: drinking water resources, Ganga alluvial plain, India, mercury

Procedia PDF Downloads 121
161 Isoflavonoid Dynamic Variation in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazábal, Ana Mutis, Fernando Ortega, Loreto Méndez, Leonardo Parra

Abstract:

The red clover root borer, Hylastinus obscurus Marsham (Coleoptera: Curculionidae), is the main insect pest associated with red clover, Trifolium pratense L. An average of 1.5 H. obscurus per plant can cause a 5.5% reduction in forage yield in two- to three-year-old pastures, and insect attack can reach 70% to 100% of the plants. To our knowledge, there is no chemical strategy for controlling this pest, so alternative control strategies are a high priority for red clover producers. One such alternative involves the secondary metabolites of the intrinsic chemical defenses developed by plants, such as isoflavonoids. The isoflavonoids formononetin and daidzein have shown antifeedant and phagostimulant effects on H. obscurus, respectively. However, the dynamic variation of these isoflavonoids under field conditions is unknown. The main objective of this work was therefore to evaluate the variation of the antifeedant isoflavonoid formononetin, the phagostimulant isoflavonoid daidzein, and their respective glycosides over time in different ecotypes of red clover. Fourteen red clover ecotypes (8 cultivars and 6 experimental lines) were collected at INIA-Carillanca (La Araucanía, Chile). These plants were established in October 2015 under irrigated conditions, with the cultivars distributed in a randomized complete block design with three replicates. Whole plants were sampled at four time points (15 October 2016, 12 December 2016, 27 January 2017, and 16 March 2017) with a sufficient amount of soil to avoid root damage. A polar isoflavonoid fraction was obtained from 20 mg of lyophilized root tissue extracted with 2 mL of 80% MeOH for 16 h on an orbital shaker in the dark at room temperature. An aliquot of 1.4 mL of the supernatant was then evaporated, and the residue was resuspended in 300 µL of 45% MeOH. Identification and quantification of the isoflavonoids in the root extracts were performed by injecting 20 µL into a Shimadzu HPLC equipped with a C-18 column. The sample was eluted with a mobile phase composed of AcOH:H₂O (1:9 v/v) as solvent A and CH₃CN as solvent B, with detection at 260 nm. The results showed that the amounts of the aglycones were higher than those of the respective glycosides, consistent with the flavonoid biosynthetic pathway, in which glycoside formation follows aglycone biosynthesis. The amount of formononetin was higher than that of daidzein. In roots, where H. obscurus spends most of its life cycle, the highest contents of formononetin were found in cvs. G 27, Pawera, Sabtoron High, Redqueli-INIA, and Superqueli-INIA (2.1, 1.8, 1.8, 1.6, and 1.0 mg g⁻¹, respectively), while the lowest amounts of daidzein were found in Superqueli-INIA (0.32 mg g⁻¹) and in the experimental line Sel Syn Int4 (0.24 mg g⁻¹); the latter also showed a high content of formononetin (0.9 mg g⁻¹). This information, combined with cultural practices, could help farmers and breeders reduce H. obscurus in grassland by selecting ecotypes with a high content of formononetin and a low amount of daidzein in the roots of red clover plants. Acknowledgements: FONDECYT 1141245 and 11130715.
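
Converting an HPLC-measured concentration back to a per-gram root content follows the dilution chain given above (20 mg tissue into 2 mL extract; a 1.4 mL aliquot resuspended in 300 µL); the measured concentration in this Python sketch is a placeholder, not a reported value:

```python
# Sketch of back-calculating mg per g of dry root from an HPLC-measured
# concentration, using the dilution chain stated in the abstract. The
# injected-solution concentration is a made-up placeholder.
c_injected = 0.12             # mg/mL formononetin in injected solution (assumed)
v_resuspend = 0.300           # mL final volume after resuspension
aliquot_fraction = 1.4 / 2.0  # fraction of the 2 mL extract evaporated
sample_mass = 0.020           # g of lyophilized root tissue

mg_in_aliquot = c_injected * v_resuspend
mg_in_extract = mg_in_aliquot / aliquot_fraction
content = mg_in_extract / sample_mass     # mg per g dry root
print(f"formononetin content ~ {content:.2f} mg g^-1")
```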

Keywords: daidzein, formononetin, isoflavonoid glycosides, Trifolium pratense

Procedia PDF Downloads 189
160 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary light detection and ranging systems (LIDARs) based on frequency-modulated continuous-wave (FMCW) lasers suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits the speed-measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, have been demonstrated previously; nevertheless, the pump lasers with EDFA amplifiers made those devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect, and the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust comb generation over 1400-1700 nm with 150 GHz or 1 THz line spacing and measured sub-1-kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band, using two separate chips, each pumped by its own self-injection-locked laser. A thorough investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as pumps; importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with a velocity resolution down to 16 nm/s, while the unlocked diode laser allowed only 160 nm/s with good accuracy. The results obtained agree with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
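
The Allan deviation quoted for ranging stability can be computed from repeated distance readings as in the following Python sketch; the non-overlapping estimator and the synthetic white-noise data are illustrative choices, not the authors' processing chain:

```python
# Sketch of a non-overlapping Allan deviation for a series of repeated
# distance readings; the data is synthetic white noise around a fixed
# range, so sigma should fall roughly as 1/sqrt(m).
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation at averaging factor m:
    sigma^2 = 0.5 * <(mean_{i+1} - mean_i)^2> over block averages."""
    n = len(x) // m
    bins = x[: n * m].reshape(n, m).mean(axis=1)   # block averages
    return np.sqrt(0.5 * np.mean(np.diff(bins) ** 2))

rng = np.random.default_rng(0)
dist = 1.0 + rng.normal(0, 50e-9, 100_000)  # 1 m range, 50 nm read noise
for m in (1, 10, 100, 1000):
    print(f"m={m:5d}  sigma={allan_deviation(dist, m):.2e} m")
```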

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 47
159 Anticancer Potentials of Aqueous Tinospora cordifolia and Its Bioactive Polysaccharide, Arabinogalactan on Benzo(a)Pyrene Induced Pulmonary Tumorigenesis: A Study with Relevance to Blood Based Biomarkers

Authors: Vandana Mohan, Ashwani Koul

Abstract:

Aim: To evaluate the potential of aqueous Tinospora cordifolia stem extract (Aq.Tc) and arabinogalactan (AG) against pulmonary carcinogenesis and the associated tumor markers. Background: Lung cancer is one of the most frequent malignancies, with a high mortality rate due to limitations in early detection and consequently low cure rates. Current research efforts focus on identifying blood-based biomarkers such as CEA, ctDNA, and LDH, which may have the potential to detect cancer at an early stage, evaluate therapeutic response, and monitor recurrence. Medicinal plants and their active components have been widely investigated for their anticancer potential. The aqueous preparation of T. cordifolia extract is enriched in the polysaccharide fraction, i.e., AG, compared with other types of extract. Moreover, the polysaccharide fraction of T. cordifolia has been reported to show profound anti-metastatic activity against in vitro lung cancer cell lines; however, little has been explored about its effect in in vivo lung cancer models and the underlying mechanisms involved. Experimental Design: Mice were randomly segregated into six groups. Group I animals served as controls. Group II animals were administered Aq.Tc extract (200 mg/kg b.w.) p.o. on alternate days. Group III animals were fed AG (7.5 mg/kg b.w.) p.o. on alternate days (thrice a week). Group IV animals were administered benzo(a)pyrene (50 mg/kg b.w.) i.p. twice within an interval of two weeks. Group V animals received Aq.Tc extract as in Group II; in addition, B(a)P was administered after two weeks of Aq.Tc administration, following the same protocol as for Group IV. Group VI animals received AG as in Group III, with B(a)P administered after two weeks of AG administration. Results: Administration of B(a)P to mice resulted in increased tumor incidence, multiplicity, and pulmonary somatic index, with a concomitant increase in serum/plasma markers such as CEA, ctDNA, LDH, and TNF-α. Aq.Tc and AG supplementation significantly attenuated these alterations at different stages of tumorigenesis, thereby showing a potent anti-cancer effect in lung cancer. A more pronounced decrease in serum/plasma markers was observed in animals treated with Aq.Tc than in those fed AG. Extensive hyperproliferation of the alveolar epithelium was prominent in B(a)P-induced lung tumors; however, treatment of lung-tumor-bearing mice with Aq.Tc and AG reduced the alveolar damage, evident from the decreased number of hyperchromatic irregular nuclei. A direct correlation between the concentration of tumor markers and the severity of lung cancer was observed in tumor-bearing animals co-treated with Aq.Tc and AG. Conclusion: These findings substantiate the chemopreventive potential of Aq.Tc and AG against lung tumorigenesis. Interestingly, Aq.Tc was found to be more effective in modulating the cancer, as reflected by various observations, which may be attributed to the synergism among the various components of Aq.Tc. Further studies are in progress to understand the underlying mechanisms by which Aq.Tc and AG inhibit lung tumorigenesis.

Keywords: Arabinogalactan, Benzo(a)pyrene B(a)P, carcinoembryonic antigen (CEA), circulating tumor DNA (ctDNA), lactate dehydrogenase (LDH), Tinospora cordifolia

Procedia PDF Downloads 162
158 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium in the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loading makes it difficult to estimate an accurate value for computing the foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses, and to evaluate the effect of the strain-dependent variation in shear modulus on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Here, the impedance functions are composed of springs and dashpots representing the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping; therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. A MATLAB code was written for these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soils of different plasticity (here, PI=13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm: the static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, were obtained for the translational and rocking vibration modes. Afterwards, the dynamic impedance functions were calculated for the reduced shear modulus through the developed MATLAB code, with the embedment effect of the foundation also considered. The analysis results show that the strain induced in the soil depends on the extent of the earthquake demand: as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. It is therefore very important to arrive at a corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
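
The two ingredients discussed above, a strain-dependent modulus reduction and a frequency-dependent impedance, can be sketched as follows; the authors' code is in MATLAB, so this Python fragment is only illustrative, using a hyperbolic (Hardin-Drnevich-type) reduction curve and placeholder static-stiffness and modifier expressions that are not the paper's coefficients:

```python
# Illustrative sketch: hyperbolic modulus reduction G(gamma) and a toy
# frequency-dependent impedance K(w) = K_static * (k(a0) + i*a0*c(a0)).
# The static stiffness and the modifiers k, c are placeholders.
import numpy as np

def g_reduced(g_max, gamma, gamma_ref=1e-3):
    """Hyperbolic G/Gmax reduction with shear strain gamma (assumed form)."""
    return g_max / (1.0 + gamma / gamma_ref)

def impedance(g, b, v_s, omega, nu=0.35):
    """Toy translational impedance of a rigid footing of half-width b."""
    k_static = 4.0 * g * b / (2.0 - nu)    # placeholder static stiffness
    a0 = omega * b / v_s                   # dimensionless frequency
    k_mod, c_mod = 1.0 - 0.1 * a0**2, 0.8  # placeholder modifiers
    return k_static * (k_mod + 1j * a0 * c_mod)

g0, b = 60e6, 1.0                          # Pa, m (illustrative)
for gamma in (1e-5, 1e-4, 1e-3):
    g = g_reduced(g0, gamma)
    v_s = np.sqrt(g / 1800.0)              # shear wave speed, rho = 1800 kg/m3
    print(f"gamma={gamma:.0e}  G={g/1e6:6.1f} MPa  "
          f"K(10 rad/s)={impedance(g, b, v_s, 10.0):.3e}")
```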

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 247
157 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow is computed on the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node; the present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work, and the CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers; the present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter, and the effect of the Stokes number on the particle deposition profile was studied at the different L/D ratios. For comparison, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 over the serial code running on a single CPU.
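
The probabilistic particle move at the heart of an LBM-CA model can be illustrated in one dimension; the Python sketch below sends particles to the downwind neighbour with a probability set by the local velocity, and is a deliberate simplification of the paper's D3Q27, force-aware scheme:

```python
# 1-D sketch of a probabilistic cellular-automata particle move:
# particles at each node hop to the downwind neighbour with a
# probability proportional to the local velocity magnitude. The full
# model uses a D3Q27 neighbourhood and includes external forces.
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 20
particles = np.zeros(n_nodes, dtype=int)
particles[10] = 1000                  # initial particle pulse
u = np.full(n_nodes, 0.3)             # local velocity in lattice units

for step in range(10):
    moved = np.zeros_like(particles)
    for i in range(n_nodes):
        p_move = min(abs(u[i]), 1.0)  # move probability ~ |velocity|
        n_move = rng.binomial(particles[i], p_move)
        j = (i + int(np.sign(u[i]))) % n_nodes   # downwind neighbour
        moved[j] += n_move
        moved[i] += particles[i] - n_move
    particles = moved

print(particles)  # the pulse drifts downwind, spreading stochastically
```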

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 178
156 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWEs) and how these influence the overall efficiency; this concerns in particular the two-phase flow through the membrane, gas diffusion layers (GDLs), and flow channels. In order to increase the efficiency of PEMWEs and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast, and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out, each designed so that only specific physical processes occur and the AE related solely to one process can be measured; a range of experimental conditions is thus used to induce different flow regimes within the flow channels and GDLs. The resulting AE data is first separated into events, defined as excursions above the noise threshold: each acoustic event consists of a number of consecutive peaks and ends when the wave falls back below the noise threshold. For each acoustic event the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, number of peaks before the maximum, average peak intensity, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values of all criteria. Principal component analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix and thus provides an easy way of determining which criteria convey the most information about the acoustic data. The data is subsequently plotted in the two- or three-dimensional space formed by the most relevant criteria axes; by finding regions of this space occupied only by acoustic events originating from one of the experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. The AE data produced in this way can be used to train a self-learning algorithm and develop an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
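
The event-extraction, feature, and PCA chain described above can be sketched as follows; the synthetic trace, threshold, and event-splitting gap are invented, while the six features mirror the attributes listed in the abstract:

```python
# Sketch of the AE processing chain: threshold-based event extraction,
# a six-feature vector per event, normalization, then PCA. The signal,
# threshold, and gap parameter are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
signal = rng.normal(0, 1, 50_000)
signal[10_000:10_200] += 8 * np.exp(-np.arange(200) / 40)  # a burst "event"
threshold = 4.0

idx = np.flatnonzero(np.abs(signal) > threshold)
# split super-threshold samples into contiguous events (gap > 50 samples)
events = np.split(idx, np.flatnonzero(np.diff(idx) > 50) + 1) if idx.size else []

features = []
for ev in events:
    seg = np.abs(signal[ev[0]: ev[-1] + 1])
    peak_pos = int(np.argmax(seg))
    features.append([seg.max(),                    # maximum peak amplitude
                     len(seg),                     # duration
                     (seg > threshold).sum(),      # number of peaks
                     (seg[:peak_pos] > threshold).sum(),  # peaks before max
                     seg.mean(),                   # average intensity
                     peak_pos])                    # time till maximum

X = np.asarray(features, float)
if len(X) >= 2:
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)       # normalize each criterion
    pca = PCA(n_components=2).fit(X)
    print("explained variance ratios:", pca.explained_variance_ratio_)
else:
    print(f"{len(X)} event(s) found; need >= 2 for PCA")
```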

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 131
155 Biological Monitoring: Vegetation Cover, Bird Assemblages, Rodents, Terrestrial and Aquatic Invertebrates from a Closed Landfill

Authors: A. Cittadino, P. Gantes, C. Coviella, M. Casset, A. Sanchez Caro

Abstract:

Three currently active landfills receive the waste from Buenos Aires city and the Greater Buenos Aires suburbs. One of the first landfills to receive solid waste from this area was located in Villa Dominico, some 7 km south of Buenos Aires City. With an area of some 750 ha, including riparian habitats, divided into 14 cells, it received solid waste from June 1979 through February 2004. In December 2010, a biological monitoring program, still operational to date, was set up by CEAMSE and Universidad Nacional de Lujan. The aim of the program is to assess the state of several biological groups within the landfill and to follow their dynamics over time, in order to identify any early signs of damage that the landfill activities might inflict on the resident biota. Bird and rodent populations, aquatic and terrestrial invertebrate populations, vegetation coverage of the cells, and the vegetation coverage and main composition of the surrounding areas are followed through quarterly sampling. Bird species richness and abundance were estimated by observation along walking transects in each environment. A total of 74 bird species were identified, and species richness and diversity were high both in the riparian surroundings and within the landfill. Several grassland bird species typical of the Pampa were found within the landfill, as well as some migratory and endangered bird species. Sherman and Tomahawk traps are set overnight for small mammal sampling. Rodent populations are just above detection limits, and the few specimens captured belong mainly to species common in rural areas rather than city-dwelling species; the two marsupial species present in the region were captured on occasion. Aquatic macroinvertebrates were sampled in a watercourse upstream and downstream of the outlet of the landfill's wastewater treatment plant and are used to track water quality through biological indices. Water quality ranged between weak and severe pollution, yet benthic invertebrates sampled upstream and downstream of the landfill show no significant differences in water quality according to the IBMWP index. The insect biota from yellow sticky cards and pitfall traps comprised over 90 morphospecies, with Shannon diversity indices ranging from 1.9 to 3.9, strongly affected by season. An easy-to-perform method requiring no expert knowledge was used to assess vegetation coverage at two scales: field observation (1 m resolution) and Google Earth images (which allow better than 5 m resolution). Over the eight-year period of the study, vegetation coverage of the landfill cells ran from a low of 83% to 100% in different cells, averaging 95 to 99% for the entire landfill depending on season. The vegetation of the surrounding area showed almost 100% coverage during the entire period, with an average density of 2 to 6 species per square meter and no signs of leachate-damaged vegetation.
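
The Shannon diversity values quoted for the invertebrate samples follow the standard index H' = -Σ pᵢ ln pᵢ over morphospecies proportions; the Python sketch below computes it for invented counts, not the survey data:

```python
# Sketch of the Shannon diversity index H' = -sum(p_i * ln p_i) used to
# summarise sticky-card and pitfall-trap morphospecies counts; the
# counts below are illustrative placeholders.
import math

def shannon(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

spring_counts = [120, 80, 40, 30, 10, 5, 5]  # individuals per morphospecies
winter_counts = [200, 15, 5]
print(f"spring H' = {shannon(spring_counts):.2f}")  # more even, higher H'
print(f"winter H' = {shannon(winter_counts):.2f}")  # dominance lowers H'
```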

Keywords: biological indicators, biota monitoring, landfill species diversity, waste management

Procedia PDF Downloads 117