Search results for: rapid detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5553

123 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics build on exciting developments in solid-state materials that are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, assumes a nonlinear interaction with its natural environment, an 'interactional cycle' that exchanges photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation of surface electromagnetic waves at the boundary of photonic band gap multilayer films. The device operates similarly to surface plasmon biosensors, except that a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, with the potential for greatly enhanced sensitivity. The properties of the photonic band gap material can therefore be engineered to operate a sensor at any wavelength and to support a surface wave resonance at wavelengths up to 470 nm, a range not generally accessible with surface plasmon sensing. Lastly, photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, which help to understand the molecular design structure and to create sensing chip surfaces with different concentrations of DNA sequences in solution, so that the surface mode resonance can be observed and tracked under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective, resulting in faster and more accurate monitoring and detection of biomolecules by refractive index sensing of antibody-antigen reactions and DNA or protein binding. Ultimately, the molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic architecture of cell clusters.

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 65
122 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Atefeh Salehipoor

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue, including the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. The rapid accumulation of textiles in landfills poses a significant threat to the environment, and the industry urgently needs to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. Approaches to address these environmental challenges include: 1. Creation of Renewable Raw Material Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2. Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Innovative Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. In conclusion, the textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension

Procedia PDF Downloads 53
121 Performance of the Abbott RealTime High Risk HPV Assay with SurePath Liquid Based Cytology Specimens from Women with Low Grade Cytological Abnormalities

Authors: Alexandra Sargent, Sarah Ferris, Ioannis Theofanous

Abstract:

The Abbott RealTime High Risk HPV test (RealTime HPV) is one of five assays clinically validated and approved by the English NHS Cervical Screening Programme (CSP) for HPV triage of low grade dyskaryosis and test-of-cure of treated Cervical Intraepithelial Neoplasia. The assay is a highly automated multiplex real-time PCR test for detecting 14 high risk (hr) HPV types, with simultaneous differentiation of HPV 16 and HPV 18 versus non-HPV 16/18 hrHPV. An endogenous internal control verifies sample cellularity and controls for extraction efficiency and PCR inhibition. The original cervical specimen collected in SurePath (SP) liquid-based cytology (LBC) medium (BD Diagnostics) and the SP post-gradient cell pellets (SPG) after cytological processing are both CE marked for testing with the RealTime HPV test. During the 2011 NHS CSP validation of new tests, only the original aliquot of SP LBC medium was investigated. The residual sample volume left after cytology slide preparation is low and may not always be sufficient for repeat HPV testing or for testing of other biomarkers that may be implemented in testing algorithms in the future. The SPG samples, however, have sufficient volume to carry out additional testing and the necessary laboratory validation procedures. This study investigates the correlation of RealTime HPV results for cervical specimens from women with low grade cytological abnormalities between matched pairs of original SP LBC medium and SP post-gradient cell pellets (SPG) after cytology processing. Matched pairs of SP and SPG samples from 750 women with borderline (N = 392) and mild (N = 351) cytology were available for this study. Both specimen types were processed and tested in parallel for the presence of hrHPV with RealTime HPV according to the manufacturer's instructions. HrHPV detection rates and concordance between test results from matched SP and SPG pairs were calculated. A total of 743 matched pairs with valid test results on both sample types were available for analysis. An overall agreement of hrHPV test results of 97.5% (k: 0.95) was found for matched SP/SPG pairs; concordance was slightly lower for the 392 pairs from women with borderline cytology (96.9%; k: 0.94) than for the 351 pairs from women with mild cytology (98.0%; k: 0.95). Partial typing results were highly concordant in matched SP/SPG pairs for HPV 16 (99.1%), HPV 18 (99.7%) and non-HPV16/18 hrHPV (97.0%), respectively. Nineteen matched pairs had discrepant results: 9 from women with borderline cytology and 4 from women with mild cytology were negative on SPG and positive on SP; 3 from women with borderline cytology and 3 from women with mild cytology were negative on SP and positive on SPG. Excellent correlation of hrHPV DNA test results was found between matched pairs of SP original fluid and post-gradient cell pellets from women with low grade cytological abnormalities tested with the Abbott RealTime High-Risk HPV assay, demonstrating robust performance of the test with both specimen types and supporting the utility of the assay for cytology triage.
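
The agreement statistics reported above (97.5% overall agreement, k: 0.95) can be reproduced from a 2x2 table of paired SP/SPG results. The short sketch below illustrates the calculation in Python; the split of the 724 concordant pairs into double-positive and double-negative cells is not given in the abstract, so the concordant counts used here are hypothetical placeholders.

# Illustrative sketch: percent agreement and Cohen's kappa for paired
# hrHPV results (SP vs. SPG). The split of concordant pairs into
# double-positive and double-negative cells below is a hypothetical
# placeholder; the abstract only reports 743 valid pairs, 19 of them
# discordant (13 SP+/SPG- and 6 SP-/SPG+).
def agreement_and_kappa(pos_pos, neg_neg, pos_neg, neg_pos):
    n = pos_pos + neg_neg + pos_neg + neg_pos
    observed = (pos_pos + neg_neg) / n
    # Chance agreement expected from the marginal totals
    p_sp_pos = (pos_pos + pos_neg) / n
    p_spg_pos = (pos_pos + neg_pos) / n
    expected = p_sp_pos * p_spg_pos + (1 - p_sp_pos) * (1 - p_spg_pos)
    return observed, (observed - expected) / (1 - expected)

# Hypothetical concordant split: 300 double-positive, 424 double-negative
obs, kappa = agreement_and_kappa(pos_pos=300, neg_neg=424, pos_neg=13, neg_pos=6)
print(f"overall agreement = {obs:.1%}, kappa = {kappa:.2f}")  # ~97.4%, ~0.95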

Keywords: Abbott realtime test, HPV, SurePath liquid based cytology, surepath post-gradient cell pellet

Procedia PDF Downloads 229
120 Case Report: A Case of Confusion with Review of Sedative-Hypnotic Alprazolam Use

Authors: Agnes Simone

Abstract:

A 52-year-old male with unknown psychiatric and medical history was brought to the Psychiatric Emergency Room by ambulance directly from jail. He had been detained for three weeks for possession of a firearm while intoxicated. On initial evaluation, the patient was unable to provide a reliable history. He presented with odd jerking movements of his extremities and catatonic features, including mutism and stupor. His vital signs were stable. The patient was transferred to the medical emergency department for work-up of altered mental status. Due to suspicion for opioid overdose, the patient was given naloxone (Narcan) with no improvement. Laboratory work-up included complete blood count, comprehensive metabolic panel, thyroid stimulating hormone, vitamin B12, folate, magnesium, rapid plasma reagin, HIV, blood alcohol level, aspirin and Tylenol blood levels, urine drug screen, and urinalysis, which were all negative. CT head and chest X-ray were also negative. With this negative work-up, the medical team concluded there was no organic etiology and requested inpatient psychiatric admission. Upon re-evaluation by psychiatry, it was evident that the patient continued to have an altered mental status. Of note, the medical team did not include substance withdrawal in the differential diagnosis due to stable vital signs and a negative urine drug screen. The psychiatry team decided to check California's prescription drug monitoring program (CURES) and discovered that the patient was prescribed the benzodiazepine alprazolam (Xanax) 2 mg BID, a sedative-hypnotic, and hydrocodone/acetaminophen 10 mg/325 mg (Norco) QID, an opioid. After a thorough chart review, his daughter's contact information was found, and she confirmed his benzodiazepine and opioid use, with recent escalation and misuse. It was determined that the patient was experiencing alprazolam withdrawal, given this collateral information, his current symptoms, the negative urine drug screen, and the recent abrupt discontinuation of medications while incarcerated. After admission to the medical unit and two doses of alprazolam 2 mg, the patient's mental status, alertness, and orientation improved, but he had no memory of the events that led to his hospitalization. He was discharged with a limited supply of alprazolam and close follow-up to arrange a taper. Accompanying this case report, a qualitative review of presentations with alprazolam withdrawal was completed. This case and the review highlight: (1) Alprazolam withdrawal can occur at low doses and within just one week of use. (2) Alprazolam withdrawal can present without any vital sign instability. (3) Alprazolam withdrawal does not respond to short-acting benzodiazepines but does respond to certain long-acting benzodiazepines due to its unique chemical structure. (4) Alprazolam withdrawal is distinct from and more severe than other benzodiazepine withdrawals. This case also highlights (1) the importance of physician utilization of drug-monitoring programs; this case, in particular, relied on California's drug monitoring program. (2) The importance of obtaining collateral information, especially in cases in which the patient is unable to provide a reliable history. (3) The importance of including substance intoxication and withdrawal in the differential diagnosis even when there is a negative urine drug screen, since the withdrawal toxidrome can be delayed. (4) The importance of discussing the addiction and withdrawal risks of medications with patients.

Keywords: addiction risk of benzodiazepines, alprazolam withdrawal, altered mental status, benzodiazepines, drug monitoring programs, sedative-hypnotics, substance use disorder

Procedia PDF Downloads 96
119 Meeting the Health Needs of Adolescents and Young Adults: Developing and Evaluating an Electronic Questionnaire and Health Report Form, for the Health Assessment at Youth Health Clinics – A Mixed Methods Project

Authors: P.V. Lostelius, M.Mattebo, E. Thors Adolfsson, A. Söderlund, Å. Revenäs

Abstract:

Adolescents are vulnerable in healthcare settings. Early detection of poor health in young people is important to support a good quality of life and adult social functioning. Youth Health Clinics (YHCs) in Sweden provide healthcare for young people aged 13-25 years. Using an overall mixed methods approach, the project's main objective was to develop and evaluate an electronic health system, including a health questionnaire, a case report form, and an evaluation questionnaire, to assess young people's health risks at an early stage and to improve health and quality of life. In total, 72 young people aged 16-23 years, eleven healthcare professionals, and eight researchers participated in the three project studies. Results from interviews with fifteen young people indicated that an electronic health questionnaire should include questions about physical, mental, and sexual health and about social support. It should specifically include questions about self-harm and suicide risk. The young people said that the questionnaire should be appealing, based on young people's needs, and user-friendly. It was important that young people felt safe when responding to the questions, both physically and electronically. They also found that it had the potential to support the face-to-face meeting between young people and healthcare professionals. The researchers developed the electronic health report system through a structured development of the electronic health questionnaire, construction of a case report form to present the results of the health questions, and an electronic evaluation questionnaire. An information technology company finalized the development by digitalizing the electronic health system. Four young people, three healthcare professionals, and seven researchers evaluated the usability using interviews and a usability questionnaire. The electronic health questionnaire was found usable for YHCs but needed some clarifications. Essentially, the system succeeded in capturing the overall health of young people; it should be able to keep the interest of young people and has the potential to contribute to health assessment planning and to young people's self-reflection and sharing of vulnerable feelings with healthcare professionals. In advance of effect studies, a feasibility study was performed by collecting electronic questionnaire data from 54 young people and interview data from eight healthcare professionals to assess the feasibility of the electronic evaluation questionnaire, the case report form, and the planned recruitment method. When merging the results, the research group found that the evaluation questionnaire and the health report were feasible for future research. However, the COVID-19 pandemic, commitment challenges, and drop-outs affected the recruitment of young people. In addition, some healthcare professionals felt insecure about using computers and electronic devices and worried that their workload would increase. This project contributes knowledge about the development and use of electronic health tools for young people. Before implementation, clinical routines for using the health report system need to be established.

Keywords: adolescent health, developmental studies, electronic health questionnaire, mixed methods research

Procedia PDF Downloads 71
118 Raman Spectroscopic Detection of the Diminishing Toxic Effect of Renal Waste Creatinine by Its in vitro Reaction with Drugs N-Acetylcysteine and Taurine

Authors: Debraj Gangopadhyay, Moumita Das, Ranjan K. Singh, Poonam Tandon

Abstract:

Creatinine is a toxic chemical waste generated from muscle metabolism. Abnormally high levels of creatinine in the body fluid indicate possible malfunction or failure of the kidneys. This leads to a condition termed creatinine induced nephrotoxicity. N-acetylcysteine is an antioxidant drug which is capable of preventing creatinine induced nephrotoxicity and is helpful in treating renal failure in its early stages. Taurine is another antioxidant drug which serves a similar purpose. The kidneys have a natural capacity to form an antioxidant shield whenever reactive oxygen species radicals increase in the human body, so that these radicals cannot harm kidney function. Taurine plays a vital role in strengthening that shield such that the glomerular filtration rate can remain at its normal level; thus taurine protects the kidneys against several diseases. However, taurine also has some negative effects on the body, as its chloramine derivative is a weak oxidant by nature. N-acetylcysteine is capable of inhibiting the residual oxidative property of taurine chloramine. Therefore, N-acetylcysteine is given to a patient along with taurine, and this combination is capable of suppressing the negative effect of taurine. Since both N-acetylcysteine and taurine are affordable, safe, and widely available medicines, knowledge of the mechanism of their combined effect on creatinine, the favored route of administration, and the proper dose may be highly useful in their use for treating renal patients. Raman spectroscopy is a precise technique for observing minor structural changes taking place when two or more molecules interact. The possibility of formation of a complex between a drug molecule and an analyte molecule in solution can be explored by analyzing the changes in the Raman spectra. The formation of a stable complex of creatinine with N-acetylcysteine in vitro in aqueous solution has been observed with the help of the Raman spectroscopic technique. From the Raman spectra of mixtures of aqueous solutions of creatinine and N-acetylcysteine in different molar ratios, it is observed that the most stable complex is formed at a 1:1 ratio of creatinine and N-acetylcysteine. Upon drying, the complex obtained is gel-like in appearance and reddish yellow in color. The complex is hygroscopic and has much better water solubility compared to creatinine. This highlights that N-acetylcysteine plays an effective role in reducing the toxic effect of creatinine by forming this water-soluble complex, which can be removed through urine. Since the drug taurine is also known to be useful in reducing nephrotoxicity caused by creatinine, aqueous solutions of taurine, creatinine and N-acetylcysteine were mixed in different molar ratios and investigated by the Raman spectroscopic technique. It is understood that taurine itself does not undergo complexation with creatinine, as no additional changes are observed in the Raman spectra of creatinine when it is mixed with taurine. However, when creatinine, N-acetylcysteine and taurine are mixed in aqueous solution in a molar ratio of 1:1:3, several changes occurring in the Raman spectra of creatinine suggest the diminishing toxic effect of creatinine in the presence of the antioxidant drugs N-acetylcysteine and taurine.

Keywords: creatinine, creatinine induced nephrotoxicity, N-acetylcysteine, taurine

Procedia PDF Downloads 123
117 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix form a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a relatively low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, photoluminescence (PL) was measured over 77-450 K and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
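
As a rough consistency check of the ~7% figure above: 5.5 MeV deposited at the quoted light yield corresponds to roughly 1.3 million emitted photons, and the measured 8.4×10⁴ collected electrons is about 6-7% of that. The sketch below works this out, assuming (as a simplification) that the quoted yield applies directly to the alpha deposit and that each collected photon produces one photoelectron.

light_yield = 240_000        # photons per MeV (quoted for the QD/GaAs medium)
alpha_energy_mev = 5.5       # alpha particle energy, MeV
collected_electrons = 8.4e4  # mean collected charge reported for the test

emitted_photons = light_yield * alpha_energy_mev       # ~1.32e6 photons
collection_efficiency = collected_electrons / emitted_photons
print(f"collection efficiency ~ {collection_efficiency:.1%}")  # ~6.4%, i.e. about 7%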

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 235
116 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists of decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled "Empirical Wavelet Transform" (EWT), which consists of building a bank of filters from the segmentation of the original signal's Fourier spectrum. The method is based on the idea utilized in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with a specific mode of variability as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques by using the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained by EMD, EWT and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
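
To make the EWT step concrete, the sketch below shows a much-simplified, EWT-like filter bank in Python: it detects the strongest local maxima of the Fourier magnitude spectrum, places segment boundaries midway between consecutive maxima, and band-passes the signal by masking its FFT. This is only an illustration of the segmentation idea (ideal rectangular filters, no Meyer-wavelet transition zones, no IMF-based optimization), not the authors' EAWD implementation.

import numpy as np
from scipy.signal import find_peaks

def ewt_like_decomposition(x, n_modes=4):
    # Simplified EWT-style filter bank: segment the Fourier spectrum at
    # midpoints between its strongest local maxima, then band-pass the
    # signal by masking its FFT (ideal filters only, for illustration).
    n = len(x)
    full_fft = np.fft.rfft(x)
    spectrum = np.abs(full_fft)
    freqs = np.arange(len(spectrum))

    # Keep the n_modes largest local maxima of the magnitude spectrum
    peaks, _ = find_peaks(spectrum)
    peaks = np.sort(peaks[np.argsort(spectrum[peaks])[-n_modes:]])

    # Boundaries midway between consecutive peaks, plus DC and Nyquist
    bounds = np.concatenate(([0], (peaks[:-1] + peaks[1:]) // 2,
                             [len(spectrum) - 1]))

    modes = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (freqs >= lo) & (freqs <= hi)
        modes.append(np.fft.irfft(full_fft * mask, n=n))
    return modes

# Example: two tones plus noise separate into two distinct modes
t = np.linspace(0, 10, 2048)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 20 * t) \
    + 0.1 * np.random.randn(t.size)
modes = ewt_like_decomposition(x, n_modes=2)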

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 112
115 Molecular Characterization, Host Plant Resistance and Epidemiology of Bean Common Mosaic Virus Infecting Cowpea (Vigna unguiculata L. Walp)

Authors: N. Manjunatha, K. T. Rangswamy, N. Nagaraju, H. A. Prameela, P. Rudraswamy, M. Krishnareddy

Abstract:

The identification of viruses infecting cowpea, especially potyviruses, is challenging. Even though there are several studies on viruses causing disease in cowpea, they are difficult to distinguish based on symptoms and serological detection. Considering the differentiation of potyviruses as a constraint, the present study was initiated for the molecular characterization, host plant resistance and epidemiology of BCMV infecting cowpea. The etiological agent causing cowpea mosaic was identified as Bean Common Mosaic Virus (BCMV) on the basis of RT-PCR and electron microscopy. An approximately 750 bp PCR product corresponding to the coat protein (CP) region of the virus was amplified, and long flexuous filamentous particles measuring about 952 nm, typical of the genus Potyvirus, were observed under the electron microscope. The genome of the characterized virus isolate had 10054 nucleotides, excluding the 3' terminal poly (A) tail. Comparison of the polyprotein of the virus with other potyviruses showed a similar genome organization, with 9 cleavage sites resulting in 10 functional proteins. In pairwise sequence comparisons of individual genes, P1 was the most divergent, while the CP gene was less divergent at the nucleotide and amino acid level. A phylogenetic tree constructed based on multiple sequence alignments of the polyprotein nucleotide and amino acid sequences of cowpea BCMV and other potyviruses showed that the virus is closely related to BCMV-HB, whereas the soybean variant from China (KJ807806) and the NL1 isolate (AY112735) showed 93.8% (5'UTR) and 94.9% (3'UTR) homology, respectively, with other BCMV isolates. The virus was transmitted to different leguminous plant species and produced systemic symptoms under greenhouse conditions. Out of 100 cowpea genotypes screened, three genotypes, viz. IC 8966, V 5 and IC 202806, showed an immune reaction under both field and greenhouse conditions. Single marker analysis (SMA) revealed that, out of 4 SSR markers linked to BCMV resistance, marker M135 explains 28.2% of the phenotypic variation (R²); the polymorphic information content (PIC) values of these markers ranged from 0.23 to 0.37. Correlation and regression analysis showed that rainfall and minimum temperature had a significant negative impact on, and a strong relationship with, the aphid population, whereas a weak correlation was observed with disease incidence. Path coefficient analysis revealed that most of the weather parameters, except minimum temperature, contributed indirectly to the aphid population and disease incidence. This study helps to identify specific gaps in knowledge for researchers who may wish to further analyse the science behind the complex interactions between vector, virus and host in relation to the environment. The resistant genotypes identified could be effectively used in resistance breeding programmes.

Keywords: cowpea, epidemiology, genotypes, virus

Procedia PDF Downloads 206
114 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy. Prediction and management of preeclampsia is a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life. There is a huge cost burden on the health care system associated with preeclampsia and its complications. Preeclampsia is divided into two different types: early onset preeclampsia develops before 34 weeks of gestation, and late onset develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognosis, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly all over the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar. The purpose of this study is to compare the incidence and risk factors of both early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women's Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, which was approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure during admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by the hospital information management using preeclampsia codes. Out of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital electronic system (Cerner) and the remaining 22% from patients' paper records. Detailed data extraction was carried out on 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. The data from the clinical findings of preeclampsia will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.

Keywords: preeclampsia, incidence, risk factors, maternal

Procedia PDF Downloads 116
113 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen. M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent based distributed voltage control. In this method, a flat agent architecture is used; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining voltages within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and a voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for progressively stricter limits by plotting the voltage damage cost and the losses cost against the varying voltage limit band; the method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the voltage control objective, the whole network is divided into multiple control regions, each downstream of its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At each time step, a token is generated by the OLTCA and transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, the OLTC is used as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC) and DVC with optimized loss cost (DVC + Penalty Function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while case 2 shows a significant reduction in total loss cost. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45 and 65% penetration, the calculated limits are ±5, ±6.25 and ±8.75%, respectively. The results show that the voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses, whereas case 2 gradually reduces the network losses over time through the proposed iterative loss cost optimization by the OLTCA.
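
A minimal sketch of the total-loss-cost evaluation described above: candidate voltage bands are swept and the band with the lowest combined cost of losses, curtailment, and voltage damage is kept. The quadratic penalty term and all unit costs below are illustrative assumptions, not the cost models used by the authors.

# Sketch of the total-loss-cost sweep over candidate voltage bands.
# The quadratic voltage-damage penalty and the unit costs are
# illustrative assumptions, not the authors' cost models.
def total_loss_cost(voltages, losses_kwh, curtailed_kwh, band,
                    loss_price=0.05, curtail_price=0.10, penalty_weight=100.0):
    lo, hi = band  # candidate voltage band in per-unit, e.g. (0.95, 1.05)
    penalty = sum((max(0.0, lo - v) + max(0.0, v - hi)) ** 2 for v in voltages)
    return (losses_kwh * loss_price
            + curtailed_kwh * curtail_price
            + penalty_weight * penalty)

# Sweep progressively stricter bands and keep the cheapest one
node_voltages = [0.97, 1.02, 1.06, 0.99]  # illustrative node voltages (p.u.)
candidate_bands = [(1 - w, 1 + w) for w in (0.10, 0.0875, 0.0625, 0.05)]
best_band = min(candidate_bands,
                key=lambda b: total_loss_cost(node_voltages, losses_kwh=120.0,
                                              curtailed_kwh=15.0, band=b))
print("band with minimum total loss cost:", best_band)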

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 287
112 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

Rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. In particular, if a limb is completely paralyzed, neurofeedback is often the last treatment option. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, so that the corresponding activity takes place in the parts of the cortex neighboring the injury. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the missing natural feedback from movement of the affected limb may be replaced by synthetic feedback based on motor-related brain function. To generate such synthetic feedback, a system is needed which measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without a large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method, which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm; in most cases the results match findings from visual EEG analysis, and frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
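
The channel-selection and thresholding step described above can be sketched in a few lines of Python; the band limits, RMS amplitude measure, threshold, and dominance margin below are illustrative assumptions and do not reproduce the actual preprocessing, artifact handling, or source localization of the described system.

import numpy as np
from scipy.signal import butter, filtfilt

MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def detect_mu_rhythm(eeg, channel_names, fs, threshold, margin=2.0):
    # Band-pass the alpha band (8-13 Hz) and compute an RMS amplitude
    # per channel; eeg is a (channels x samples) array.
    b, a = butter(4, [8.0, 13.0], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, eeg, axis=1)
    amplitude = np.sqrt(np.mean(alpha ** 2, axis=1))

    motor_idx = [i for i, ch in enumerate(channel_names) if ch in MOTOR_SET]
    other_idx = [i for i in range(len(channel_names)) if i not in motor_idx]

    # Main channel: strongest alpha amplitude within the motor set.
    main = max(motor_idx, key=lambda i: amplitude[i])
    # Accept a mu-rhythm only if it exceeds the threshold and clearly
    # dominates the non-motor channels (to reject posterior alpha).
    is_mu = (amplitude[main] > threshold and
             amplitude[main] > margin * np.max(amplitude[other_idx]))
    # The main-channel signal would serve as the template for the
    # spatial distribution fed into the inverse (source localization) step.
    return is_mu, channel_names[main], alpha[main]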

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 362
111 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8, 42.4). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility of the test to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
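
The headline figures above follow directly from the reported counts (116/120 CT-positive subjects with a positive TBI result, 713 of 1779 CT-negative subjects with a negative result). The sketch below recomputes sensitivity, specificity, and NPV in Python; the Wilson score interval used here is a common approximation and may differ slightly from the CI method used in the study.

from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval; an approximation that may differ slightly
    # from the CI method used in the pivotal study.
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Counts reported in the abstract
tp, fn = 116, 4     # CT-positive subjects: positive / negative TBI result
tn = 713            # CT-negative subjects with a negative TBI result
ct_negative = 1779  # all CT-negative subjects

sensitivity = tp / (tp + fn)    # 116/120  -> 96.7%
specificity = tn / ct_negative  # 713/1779 -> 40.1%
npv = tn / (tn + fn)            # 713/717  -> 99.4%

lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sensitivity:.1%} (95% CI {lo:.1%}-{hi:.1%})")
print(f"specificity {specificity:.1%}, NPV {npv:.1%}")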

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 38
110 Sustainable Urban Regeneration: The New Vocabulary and the Timeless Grammar of the Urban Tissue

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unsolved conflicts regarding values, capital, and the wellbeing of the built and un-built urban space. There is an out-of-control change in the scale of the urban form and in the rhythm of urban life, which has seen no significant progress in the last 2-3 decades despite the ever-growing urban population. It is the objective of this paper to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat-Ono, 36,000 inhabitants), to unfold the deep structure of qualities versus disruptors, to present some of the cures we have developed to bridge the gap, and to humbly suggest a practice that may bring about a sustainable new urban environment based on timeless values of the past, an approach that can be generic for similar cases. Basic Methodologies: The object, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: De-composition, Re-composition, the Centering process and, finally, Controlled Structural Disintegration. Each stage will be based on facts, on analysis of previous multidisciplinary interventions on various layers, and on the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for contemporary urban society to cluster by, and thus a new urban vocabulary based on the old structure of times past. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town was still deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed on the national level with recent master plans and urban regeneration policies mostly encouraging individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by pressure from developers, a dramatic change in building scale and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development, provide for the public good, and preserve the characteristics of the place, consisting of a toolbox of design guidelines with the ability to reorganize space along the time axis in a sustainable way. In conclusion: the system of rules that we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy and sustainable framework for the accelerated urbanization of our chaotic present.

Keywords: sustainable urban design, intensification, emergent urban patterns, sustainable housing, compact urban neighborhoods, sustainable regeneration, restoration, complexity, uncertainty, need for change, implications of legislation on local planning

Procedia PDF Downloads 369
109 The Effects of Science, Technology, Engineering and Math Problem-Based Learning on Native Hawaiians and Other Underrepresented, Low-Income, Potential First-Generation High School Students

Authors: Nahid Nariman

Abstract:

The prosperity of any nation depends on its ability to use human potential, in particular, to offer an education that builds learners' competencies to become effective workforce participants and true citizens of the world. Ever since the Second World War, the United States has been a dominant player in the world politically, economically, socially, and culturally. The rapid rise of technological advancement and consumer technologies have made it clear that science, technology, engineering, and math (STEM) play a crucial role in today’s world economy. Exploring the top qualities demanded from new hires in the industry—i.e., problem-solving skills, teamwork, dependability, adaptability, technical and communication skills— sheds light on the kind of path that is needed for a successful educational system to effectively support STEM. The focus of 21st century education has been to build student competencies by preparing them to acquire and apply knowledge, to think critically and creatively, to competently use information, be able to work in teams, to demonstrate intellectual and moral values as well as cultural awareness, and to be able to communicate. Many educational reforms pinpoint various 'ideal' pathways toward STEM that educators, policy makers, and business leaders have identified for educating the workforce of tomorrow. This study will explore how problem-based learning (PBL), an instructional strategy developed in the medical field and adopted with many successful results in K-12 through higher education, is the proper approach to stimulate underrepresented high school students' interest in pursuing STEM careers. In the current study, the effect of a problem-based STEM model on students' attitudes and career interests was investigated using qualitative and quantitative methods. The participants were 71 low-income, native Hawaiian high school students who would be first-generation college students. They were attending a summer STEM camp developed as the result of a collaboration between the University of Hawaii and the Upward Bound Program. The project, funded by the National Science Foundation's Innovative Technology Experiences for Students and Teachers (ITEST) program, used PBL as an approach in challenging students to engage in solving hands-on, real-world problems in their communities. Pre-surveys were used before camp and post-surveys on the last day of the program to learn about the implementation of the PBL STEM model. A Career Interest Questionnaire provided a way to investigate students’ career interests. After the summer camp, a representative selection of students participated in focus group interviews to discuss their opinions about the PBL STEM camp. The findings revealed a significantly positive increase in students' attitudes towards STEM disciplines and STEM careers. The students' interview results also revealed that students identified PBL to be an effective form of instruction in their learning and in the development of their 21st-century skills. PBL was acknowledged for making the class more enjoyable and for raising students' interest in STEM careers, while also helping them develop teamwork and communication skills in addition to scientific knowledge. As a result, the integration of PBL and a STEM learning experience was shown to positively affect students’ interest in STEM careers.

Keywords: problem-based learning, science education, STEM, underrepresented students

Procedia PDF Downloads 97
108 The Evaluation of Subclinical Hypothyroidism in Children with Morbid Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Cardiovascular pathology is one of the expected consequences of excessive fat gain. The role of zinc in thyroid hormone metabolism is an important matter. The concentrations of both thyroid stimulating hormone (TSH) and zinc are subject to variation in obese individuals. Zinc exhibits protective effects on cardiovascular health and is inversely correlated with cardiovascular markers in childhood obesity. The association between subclinical hypothyroidism (SCHT) and metabolic disorders is under investigation due to its clinical importance. An underactive thyroid gland causes high TSH levels. Subclinical hypothyroidism is defined as elevated serum TSH levels in the presence of normal free thyroxine (T4) concentrations. The aim of this study was to evaluate the associations between TSH levels and zinc concentrations in morbid obese (MO) children exhibiting SCHT. The possibility of using the probable association between these parameters to discriminate between metabolic syndrome positive (MetS+) and metabolic syndrome negative (MetS-) groups was also evaluated. Forty-two children were present in each group. Informed consent forms were obtained. The Institutional Ethics Committee approved the study protocol. Tables prepared by the World Health Organization were used for the definition of MO children: children whose age- and sex-dependent body mass index percentile values were above 99 were defined as MO. Children with at least two MetS components were included in the MOMetS+ group. Elevated systolic/diastolic blood pressure values, increased fasting blood glucose and triglycerides (TRG), and decreased high density lipoprotein-cholesterol (HDL-C) concentrations, in addition to central obesity, were listed as MetS components. Anthropometric measures were recorded and routine biochemical analyses were performed. Thirteen and fifteen children had SCHT in the MOMetS- and MOMetS+ groups, respectively. Statistical analyses were performed; p<0.05 was accepted as statistically significant. In the MOMetS- and MOMetS+ groups, TSH levels were 4.1±2.9 mU/L and 4.6±3.1 mU/L, respectively. The corresponding values for the SCHT cases in these groups were 7.3±3.1 mU/L and 8.0±2.7 mU/L. Free T4 levels were within normal limits. Zinc concentrations were negatively correlated with TSH levels in both groups. The significant negative correlation calculated in the MOMetS+ group (r = -0.909; p<0.001) was much stronger than that found in the MOMetS- group (r = -0.706; p<0.05). This strong correlation (r = -0.909; p<0.001), calculated for the SCHT cases in the MOMetS+ group, was much weaker (r = -0.793; p<0.001) when all MOMetS+ cases were considered. Zinc is closely related to T4 and TSH; therefore, it participates in thyroid hormone metabolism. Since thyroid hormones are required for zinc absorption, hypothyroidism can lead to zinc deficiency. The presence of strong correlations between TSH and zinc in the SCHT cases of both the MOMetS- and MOMetS+ groups indicated that MO children are under threat of cardiovascular pathologies. The much stronger correlation detected in the MOMetS+ group in comparison with that found in the MOMetS- group was an indicator of greater cardiovascular risk due to the presence of MetS. In the MOMetS+ group, the fact that the correlation in SCHT cases was stronger than the correlation calculated for all cases confirmed a much higher cardiovascular risk due to the contribution of SCHT.
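
For illustration only, the sketch below shows how group-wise Pearson correlations like those reported above (zinc vs. TSH) are computed in Python; the zinc and TSH values are synthetic stand-ins, not the study data.

import numpy as np
from scipy.stats import pearsonr

# Synthetic zinc/TSH values standing in for the MOMetS- and MOMetS+ groups
# (42 children each); only the calculation is illustrated, not the study data.
rng = np.random.default_rng(0)

def synthetic_group(n, noise):
    zinc = rng.uniform(60, 120, n)                     # hypothetical zinc values
    tsh = 8.0 - 0.05 * zinc + rng.normal(0, noise, n)  # hypothetical TSH values
    return zinc, tsh

for label, noise in [("MOMetS-", 1.5), ("MOMetS+", 0.5)]:
    zinc, tsh = synthetic_group(42, noise)
    r, p = pearsonr(zinc, tsh)  # less noise gives a stronger negative correlation
    print(f"{label}: r = {r:.3f}, p = {p:.3g}")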

Keywords: cardiovascular risk, children, morbid obesity, subclinical hypothyroidism, zinc

Procedia PDF Downloads 55
107 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to x/180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained with the software implementation and with the two FEM simulations taken as reference. Although the errors could still be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
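
The per-taxel structure described above (a normal component from the measured stress plus two tangential components obtained by a two-dimensional optimization) can be sketched as follows. The residual function here is a hypothetical placeholder, since the abstract does not specify the contact model; the sketch only illustrates the per-taxel, naturally parallelizable loop that the FPGA design exploits.

import numpy as np
from scipy.optimize import minimize

def reconstruct_taxel_forces(normal_stress, residual, area=1.0):
    # Per-taxel structure only: the normal component comes from the measured
    # stress, the two tangential components from a 2-D optimization of a
    # contact-model residual. The loop is embarrassingly parallel, which is
    # the property the FPGA design exploits.
    rows, cols = normal_stress.shape
    forces = np.zeros((rows, cols, 3))
    for i in range(rows):
        for j in range(cols):
            fz = normal_stress[i, j] * area
            res = minimize(lambda ft: residual(ft, i, j, normal_stress),
                           x0=np.zeros(2), method="Nelder-Mead")
            forces[i, j] = [res.x[0], res.x[1], fz]
    return forces

# Hypothetical residual: pull the tangential force toward the local stress
# gradient (a placeholder, not the model-driven cost used by the authors).
def demo_residual(ft, i, j, stress):
    gy, gx = np.gradient(stress)
    return (ft[0] - gx[i, j]) ** 2 + (ft[1] - gy[i, j]) ** 2

taxels = np.random.rand(10, 10)  # 10x10 array of normal stress samples
forces = reconstruct_taxel_forces(taxels, demo_residual)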

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 167
106 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province

Authors: Athini Ntloko

Abstract:

Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). The MTBC comprises members that can infect an extensive range of hosts, including wildlife. Diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. Risk factors for bTB have therefore been a major subject of study worldwide, with some studies focusing on particular risk factors such as wildlife and herd management. The significance of this study was to determine the incidence of Mycobacterium tuberculosis complex associated with soil, hayfeed and water. Questionnaires were administered to thirty (30) smallholding farm owners in two villages (kwaMasele and Qungqwala) and three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm and Seven star dairy farm). Detection of M. tuberculosis complex was achieved by polymerase chain reaction (PCR) using primers for IS6110, whereas genotypic drug resistance mutations were detected using GenoType MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows. The relationship between farm size and vaccination for TB varied from forty-one percent (41%) at the highest to five percent (5%) at the lowest. The highest proportion of respondents who knew about the relationship between TB cases and cattle location was ninety-one percent (91%). Approximately fifty-one percent (51%) of respondents had knowledge of wildlife access to the farms. The relationship between the import of cattle and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size varied from forty-three percent (43%) at the highest to three percent (3%) at the lowest, while thirty-three percent (33%) of respondents had knowledge of health management. Respondents with knowledge of the occurrence of TB infections on farms amounted to forty-eight percent (48%). The frequency of DNA isolation from samples ranged from forty-five percent (45%), the highest, for water to twenty-two percent (22%), the lowest, for soil. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water samples, whereas Middledrift dairy farm had the lowest proportion of positives from water, seventeen percent (17%). Twelve (22%) of the 55 isolates showed resistance to both INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. Mutations in the rpoB gene varied from 58% at the highest to 23% at the lowest. Fifty-seven percent (57%) of samples showed an S315T1 mutation, while only 14% possessed an S531L mutation in the katG gene. The highest proportion of inhA mutations was detected for T8A (80%) and the lowest for A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers are a serious issue in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug-resistant determinants in MTBC from the environment.

Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water

Procedia PDF Downloads 312
105 Bioinspired Green Synthesis of Magnetite Nanoparticles Using Room-Temperature Co-Precipitation: A Study of the Effect of Amine Additives on Particle Morphology in Fluidic Systems

Authors: Laura Norfolk, Georgina Zimbitas, Jan Sefcik, Sarah Staniland

Abstract:

Magnetite nanoparticles (MNPs) have been an area of increasing research interest due to their extensive applications in industry, such as in carbon capture, water purification, and, crucially, the biomedical industry. The use of MNPs in the biomedical industry is rising, with studies on their use as magnetic resonance imaging contrast agents, drug delivery systems, and hyperthermic cancer treatments becoming prevalent in the nanomaterial research community. Particles used for biomedical purposes must meet stringent criteria: they must have consistent shape and size from particle to particle. Variation in particle morphology can drastically alter the effective surface area of the material, making it difficult to dose particles correctly when they are not homogeneous. Particles of defined shape, such as octahedra and cubes, have been shown to outperform irregularly shaped particles in some applications, leading to the need to synthesize particles of defined shape. In nature, highly homogeneous MNPs are found within magnetotactic bacteria, unique bacteria capable of producing magnetite nanoparticles internally under ambient conditions. Biomineralisation proteins control the properties of these MNPs, enhancing their homogeneity. One of these proteins, Mms6, has been successfully isolated and used in vitro as an additive in room-temperature co-precipitation reactions (RTCP) to produce particles of defined, monodisperse size and morphology. When considering future industrial scale-up, it is crucial to consider the cost and feasibility of an additive, as an additive that is not readily available or easily synthesized at a competitive price will not be sustainable. As such, the additives selected for this research are inspired by the functional groups of biomineralisation proteins but are cost-effective, environmentally friendly, and compatible with scale-up. Diethylenetriamine (DETA), triethylenetetramine (TETA), tetraethylenepentamine (TEPA), and pentaethylenehexamine (PEHA) have been successfully used in RTCP to modulate the properties of the particles synthesized, leading to the formation of octahedral nanoparticles with no use of organic solvents, heating, or toxic precursors. By extending this principle to fluidic systems, ongoing research will reveal whether the amine additives can also exert morphological control in an environment suited to higher particle yield. Two fluidic systems have been employed: a peristaltic turbulent-flow mixing system suitable for the rapid production of MNPs, and a macrofluidic system for the synthesis of tailored nanomaterials under a laminar flow regime. In initial results, the presence of the amine additives in the turbulent-flow system appears to offer morphological control similar to that observed under RTCP conditions, with higher proportions of octahedral particles formed. This is a proof of concept which may pave the way to the green synthesis of tailored MNPs on an industrial scale. Mms6 and the amine additives have been used in the macrofluidic system, with Mms6 allowing magnetite to be synthesized at unfavourable ferric ratios but no longer influencing particle size. This suggests that, while still benefiting from the addition of additives, this synthetic technique may not allow the additives to fully influence the particles formed, owing to the faster timescale of the reaction. The amine additives have been tested at various concentrations, the results of which will be discussed in this paper.

Keywords: bioinspired, green synthesis, fluidic, magnetite, morphological control, scale-up

Procedia PDF Downloads 99
104 ExactData Smart Tool For Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

Exact Data is a smart tool which helps with meaningful marketing content creation. It helps marketers achieve this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language that specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms. By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques including sentiment analysis, text classification, text interpretation and text correction. To ensure that we can achieve the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of different methods, including transformers and convolutional neural networks (CNNs), to determine which ones are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases, and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
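As an illustration of the sentiment-analysis building block mentioned above, the sketch below uses the Hugging Face transformers pipeline with its default English model; in practice a Polish-specific model would be substituted, and nothing here reproduces ExactData's actual implementation.

```python
# Minimal sketch of a transformer-based sentiment-analysis step for ad copy.
# The pipeline default is an English model; a Polish model would be supplied
# via the `model` argument in a real deployment (not specified here).
from transformers import pipeline

ad_texts = [
    "Our new running shoes are light and comfortable - check out the offer!",
    "This offer ends tomorrow, don't miss out.",
]

classifier = pipeline("sentiment-analysis")

for text, result in zip(ad_texts, classifier(ad_texts)):
    print(f"{result['label']:>8} ({result['score']:.2f}) :: {text}")
```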

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 56
103 Environmental Fate and Toxicity of Aged Titanium Dioxide Nano-Composites Used in Sunscreen

Authors: Danielle Slomberg, Jerome Labille, Riccardo Catalano, Jean-Claude Hubaud, Alexandra Lopes, Alice Tagliati, Teresa Fernandes

Abstract:

In the assessment and management of cosmetics and personal care products, sunscreens are of emerging concern regarding both human and environmental health. Organic UV blockers in many sunscreens have been shown to undergo rapid photodegradation, to induce dermal allergic reactions due to skin penetration, and to cause adverse effects on marine systems. While mineral UV-blockers may offer a safer alternative, their fate, impact, and the resulting regulation are still under consideration, largely related to the potential influence of nanotechnology-based products on both consumers and the environment. Nanometric titanium dioxide (TiO₂) UV-blockers have many advantages in terms of sun protection and aesthetics (i.e., transparency). These UV-blockers typically consist of rutile nanoparticles coated with a primary mineral layer (silica or alumina) aimed at blocking the nanomaterial photoactivity and can include a secondary organic coating (e.g., stearic acid, methicone) aimed at favouring dispersion of the nanomaterial in the sunscreen formulation. The nanomaterials contained in the sunscreen can leave the skin through bathing or everyday use, with subsequent release into rivers, lakes, seashores, and/or sewage treatment plants. The nanomaterial's behaviour, fate and impact in these different systems are largely determined by its surface properties (e.g., the coating type) and lifetime. The present work aims to develop the eco-design of sunscreens through the minimisation of risks associated with nanomaterials incorporated into the formulation. All stages of the sunscreen’s life cycle must be considered in this aspect, from its manufacture to its end-of-life, through its use by the consumer to its impact on the exposed environment. Reducing the potential release and/or toxicity of the nanomaterial from the sunscreen is a decisive criterion for its eco-design. TiO₂ UV-blockers of varied size and surface coating (e.g., stearic acid and silica) have been selected for this study. Hydrophobic TiO₂ UV-blockers (i.e., stearic acid-coated) were incorporated into a typical water-in-oil (w/o) formulation, while hydrophilic, silica-coated TiO₂ UV-blockers were dispersed into an oil-in-water (o/w) formulation. The resulting sunscreens were characterised in terms of nanomaterial localisation, sun protection factor, and photo-passivation. The risk to the directly exposed aquatic environment was assessed by evaluating the release of nanomaterials from the sunscreen through a simulated laboratory aging procedure. The size distribution, surface charge, and degradation state of the nano-composite by-products, as well as their nanomaterial concentration and colloidal behaviour, were determined in a variety of aqueous environments (e.g., seawater and freshwater). Release of the hydrophobic nano-composites into the aqueous environment was driven by oil droplet formation, while hydrophilic nano-composites were readily dispersed. Ecotoxicity of the sunscreen by-products (from both w/o and o/w formulations) and their risk to marine organisms were assessed using coral symbiotes and tropical corals, evaluating both lethal and sublethal toxicities. The data dissemination and risk knowledge provided by the present work will help guide regulation related to nanomaterials in sunscreen, provide better information for consumers, and allow for easier decision-making for manufacturers.

Keywords: alteration, environmental fate, sunscreens, titanium dioxide nanoparticles

Procedia PDF Downloads 238
102 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition

Authors: Gopalasingam Daisan

Abstract:

Landmines and unexploded ammunition pose a significant threat to people and animals. After a war, landmines remain in the ground and play a vital role in civilian security. Children are at the highest risk because they are curious; an unexploded bomb can look like a tempting toy to an inquisitive child. The initial step in designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate the landmines and other unexploded ammunition. The sensor weight and the weight of the components supporting the sensor are taken as the payload weight. The mission requirement is to find the landmines in a particular area by planning a path that covers the entire vicinity of the desired area. The weight of the UAV (Unmanned Aerial Vehicle) can be estimated with good accuracy in the first phase of the design using various previously developed techniques. The next crucial part of the design is to calculate the power requirement and the wing loading. Matching-plot techniques are used to determine the thrust-to-weight ratio, making this process both simple and precise. The wing loading can be calculated easily from the stall equation. After these calculations, the wing area is determined from the wing loading equation, and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. The important steps in the wing design are to choose a proper aerofoil and to ensure a sufficient lift coefficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment. As the vertical tail design depends on many parameters, only the initial sizing can be done in this phase. The fuselage is another major component; it is selected based on the slenderness ratio, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is one of the important components and is selected based on controllability and stability requirements. The minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn-angle requirements. The minor components of the landing gear design and their estimation are not the focus of this project. Another important task is to calculate the weight of the major components; this is estimated using empirical relations, and a mass is assigned to each such component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, especially the V-n (velocity and load factor) diagram for different flight conditions, such as undisturbed flight and flight with gust velocity.
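As a quick numerical illustration of the sizing chain described above (stall equation → wing loading → wing area, and thrust-to-weight ratio → required power), the sketch below uses assumed values for an 8 kg UAV; none of these numbers come from the paper.

```python
# Rough sizing sketch under assumed values (not the paper's actual numbers).
rho = 1.225          # sea-level air density, kg/m^3
V_stall = 12.0       # assumed stall speed, m/s
CL_max = 1.4         # assumed maximum lift coefficient
W = 8.0 * 9.81       # assumed take-off weight for an 8 kg UAV, N
TW = 0.45            # assumed thrust-to-weight ratio from the matching plot
V_cruise = 20.0      # assumed cruise speed, m/s
eta_prop = 0.7       # assumed propeller efficiency

WS = 0.5 * rho * V_stall**2 * CL_max      # wing loading from the stall condition, N/m^2
S = W / WS                                # wing area, m^2
T = TW * W                                # required thrust, N
P = T * V_cruise / eta_prop               # required shaft power at cruise, W

print(f"Wing loading: {WS:.1f} N/m^2, wing area: {S:.2f} m^2")
print(f"Required thrust: {T:.1f} N, required power: {P:.0f} W")
```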

Keywords: landmine, UAS, matching plot, optimization

Procedia PDF Downloads 149
101 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India

Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah

Abstract:

In the absence of adequate, well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, the ‘Very-Low-Birth-Weight (vLBW) Tracking’ intervention has been conducted by CARE India since 2015, targeting newborns delivered at public facilities and weighing ≤2000 g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (Feb-Apr’2015-born neonates) and three post-intervention (Sep-Oct’2015, Sep-Oct’2016 and Sep-Oct’2017-born children) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS-9.4. Identification of neonates with birth weight ≤2000 g improved from 2% at baseline to 3.3%-4% during the post-intervention period. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between baseline and post-intervention rounds were also noted regarding mothers being informed about a ‘weak’ child at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of Kangaroo Mother Care (KMC), an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline both at the facility (R1 = 15% to R4 = 31%) and at home (R1 = 10% to R4 = 29%). An increasing trend was also noted in the detection and birth-weight recording of extremely low-birth-weight newborns (<1500 g). Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500 g, 1500-1799 g and ≥1800 g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase or the maintenance phase, birth weight >1500 g, being born to a low-parity mother, receiving a visit from an FLW in the first week, and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.

Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response

Procedia PDF Downloads 130
100 Nurturing Minds, Shaping Futures: A Reflective Journey of 32 Years as a Teacher Educator

Authors: Mary Isobelle Mullaney

Abstract:

The maxim "an unexamined life is not worth living," attributed to Socrates, prompts a contemplative reflection spanning over 32 years as a teacher educator in the Republic of Ireland. Taking time to contemplate the changes that have occurred and the current landscape provides valuable insights into the dynamic terrain of teacher preparation. The reflective journey traverses the impacts of global and societal shifts, responding to challenges, embracing advancements, and navigating the delicate balance between responsiveness to the world and the active shaping of it. The transformative events of the COVID-19 pandemic spotlighted the indispensable role of teachers in Ireland, reinforcing the critical nature of education for the well-being of pupils. Research solidifies the understanding that teachers matter and so it is worth exploring the pivotal role of the teacher educator. This reflective piece examines the changes in teacher education and explores the juxtapositions that have emerged in response to three decades of profound change. The attractiveness of teaching as a career is juxtaposed against the reality of the demands of the job, with conditions for public servants in Ireland undergoing a shift. High-level strategic discussions about increasing teacher numbers now contrast with a previous oversupply. The delicate balance between the imperative to increase enrolment (getting "bums on seats") and the gatekeeper role of teacher educators is explored, raising questions about maintaining high standards amid changing student profiles. Another poignant dichotomy involves the high demand for teachers versus the hurdles candidates face in becoming teachers. The rising cost and duration of teacher education courses raise concerns about attracting quality candidates. The perceived attractiveness of teaching as a career contends with the reality of increased demands on educators. One notable juxtaposition centres around the rapid evolution of Irish initial teacher education versus the potential risk of change overload. The Teaching Council of Ireland has spearheaded considerable changes, raising questions about the timing and evaluation of these changes. This reflection contemplates the vision of a professional teaching council versus its evolving reality and the challenges posed by the value placed on school placement in teacher preparation. The juxtapositions extend to the classroom, where theory may not seamlessly align with the lived experience. Inconsistencies between college expectations and the classroom reality prompt reflection on the effectiveness of teacher preparation programs. Addressing the changing demographic landscape of society and schools, there is a persistent incongruity between the diversity of Irish society and the profile of second-level teachers. As education undergoes a digital revolution, the enduring philosophies of education confront technological advances. This reflection highlights the tension between established practices and contemporary demands, acknowledging the irreplaceable value of face-to-face interaction while integrating technology into teacher training programs. In conclusion, this reflective journey encapsulates the intricate web of juxtapositions in Irish Initial Teacher Education. It emphasises the enduring commitment to fostering education, recognising the profound influence educators wield, and acknowledging the challenges and gratifications inherent in shaping the minds and futures of generations to come.

Keywords: Irish post primary teaching, juxtapositions, reflection, teacher education

Procedia PDF Downloads 21
99 Utilization of Functionalized Biochar from Water Hyacinth (Eichhornia crassipes) as Green Nano-Fertilizers

Authors: Adewale Tolulope Irewale, Elias Emeka Elemike, Christian O. Dimkpa, Emeka Emmanuel Oguzie

Abstract:

As the global population steadily approaches the 10 billion mark, the world is currently faced with two major challenges among others – accessing sustainable and clean energy, and food security. Accessing cleaner and sustainable energy sources to drive the global economy and technological advancement, and feeding the teeming human population, require sustainable, innovative, and smart solutions. To solve the food production problem, producers have relied on fertilizers as a way of improving crop productivity. Commercial inorganic fertilizers, which are employed to boost agricultural food production, however, pose significant ecological, sustainability, and economic problems including soil and water pollution, reduced input efficiency, development of highly resistant weeds, micronutrient deficiency, soil degradation, and increased soil toxicity. These ecological and sustainability concerns have raised uncertainties about the continued effectiveness of conventional fertilizers. With the application of nanotechnology, plant biomass upcycling offers several advantages in greener energy production and sustainable agriculture through reduction of environmental pollution, increased soil microbial activity, carbon recycling and thereby reduced GHG emissions, and so forth. This innovative technology has the potential to support a circular economy and create sustainable agricultural practice. Nanomaterials have the potential to greatly enhance the quality and nutrient composition of organic biomass, which in turn allows for the conversion of biomass into nanofertilizers that are potentially more efficient. Water hyacinth plants harvested from an inland water body at Warri, Delta State, Nigeria were air-dried and milled into powder form. The dry biomass was used to prepare biochar at a pre-determined temperature in an oxygen-deficient atmosphere. Physicochemical analysis of the resulting biochar was carried out to determine its porosity and general morphology using Scanning Transmission Electron Microscopy (STEM). The functional groups (-COOH, -OH, -NH2, -CN, -C=O) were assessed using Fourier Transform Infrared Spectroscopy (FTIR), while the heavy metals (Cr, Cu, Fe, Pb, Mg, Mn) were analyzed using Inductively Coupled Plasma – Optical Emission Spectrometry (ICP-OES). Impregnation of the biochar with nanonutrients was achieved under varied conditions of pH, temperature, nanonutrient concentration and residence time to achieve optimum adsorption. Adsorption and desorption studies were carried out on the resulting nanofertilizer to determine the kinetics of potential nutrient bio-availability to plants when used as a green fertilizer. Water hyacinth (Eichhornia crassipes), an aggressively invasive aquatic plant known for its rapid growth and profusion, is being examined in this research to harness its biomass as a sustainable feedstock for formulating functionalized nano-biochar fertilizers, offering various benefits including water hyacinth biomass upcycling, improved nutrient delivery to crops and aquatic ecosystem remediation. Altogether, this work aims to create output values in the three dimensions of environmental, economic, and social benefits.
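Since the abstract mentions adsorption and desorption kinetics without naming a model, the sketch below merely illustrates one common treatment, fitting a pseudo-second-order model to synthetic uptake data; the model choice, units, and numbers are assumptions, not the study's results.

```python
# Illustration only: fitting a pseudo-second-order kinetic model,
# q(t) = k*qe^2*t / (1 + k*qe*t), to synthetic nutrient-adsorption data.
# The model choice and the values below are assumptions for the sketch.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([5, 10, 20, 40, 60, 120], dtype=float)    # contact time, min
q = np.array([12.0, 19.5, 27.0, 33.5, 36.0, 38.5])     # adsorbed amount, mg/g (synthetic)

(qe_fit, k_fit), _ = curve_fit(pso, t, q, p0=[40.0, 0.01])
print(f"q_e = {qe_fit:.1f} mg/g, k2 = {k_fit:.4f} g/(mg*min)")
```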

Keywords: biochar-based nanofertilizers, eichhornia crassipes, greener agriculture, sustainable ecosystem, water hyacinth

Procedia PDF Downloads 38
98 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while simultaneously maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high-cycle fatigue failure of adhesive bond line interfaces, which has in fact developed over time into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
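To make the damage bookkeeping concrete, the sketch below shows rainflow counting of a single equivalent stress history followed by a linear Palmgren-Miner sum against a Basquin-type S-N curve; the Drucker-Prager scaling that produces the equivalent histories is not reproduced, the third-party rainflow package is assumed, and the S-N parameters are placeholders.

```python
# Simplified illustration of the damage bookkeeping: rainflow counting of an
# equivalent stress history, then a linear Palmgren-Miner sum against a
# Basquin-type S-N curve. S-N parameters below are placeholders only.
import numpy as np
import rainflow  # third-party package: pip install rainflow

def miner_damage(stress_history, sigma_a_ref=30.0, N_ref=1e6, m=10.0):
    """Damage index D = sum(n_i / N_i) with N(s) = N_ref * (sigma_a_ref / s)^m."""
    D = 0.0
    for stress_range, count in rainflow.count_cycles(stress_history):
        amp = 0.5 * stress_range             # stress amplitude from the range
        if amp <= 0.0:
            continue
        N_allow = N_ref * (sigma_a_ref / amp) ** m
        D += count / N_allow
    return D

t = np.linspace(0.0, 100.0, 5001)
history = 20.0 * np.sin(2 * np.pi * 0.5 * t) + 5.0 * np.sin(2 * np.pi * 3.0 * t)
print(f"Palmgren-Miner damage index: {miner_damage(history):.3e}")
```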

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 142
97 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices

Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Obesity in childhood establishes the ground for adulthood obesity. Morbid obesity in particular is an important problem for children because of associated diseases such as diabetes mellitus, cancer and cardiovascular disease. In this study, body mass index (BMI), body fat ratios, and anthropometric measurements and ratios were evaluated together with different laboratory indices in the assessment of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents. The study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included in the scope of the study. WHO BMI percentiles for age and sex were used to assess the children, with values higher than the 99th percentile classified as morbid obesity. Anthropometric measurements of the children were recorded after their physical examination. Bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II as well as insulin sensitivity indices (ISIs) were calculated. Girls and boys were each divided into two groups according to the frequently used cut-off points of the homeostasis model assessment-insulin resistance (HOMA-IR) index (<2.5 and >2.5), the fasting glucose-to-insulin ratio (FGIR) (<6 and >6) and the quantitative insulin sensitivity check index (QUICKI) (<0.33 and >0.33). They were evaluated based upon their BMIs; arm, leg, trunk and whole-body fat percentages; body fat ratios such as the fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR) and whole body fat ratio (WBFR); and anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. The SPSS/PASW 18 program was used for statistical analyses; p≤0.05 was accepted as the level of statistical significance. All of the fat percentages showed differences between below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. Differences were observed only in the arm fat percentage for HOMA-IR and the leg fat percentage for QUICKI in boys (p≤0.05). FGIR was unable to detect any differences in the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended for use with all ISIs (p≤0.001 for both girls and boys in HOMA-IR, p≤0.001 for girls and p≤0.05 for boys in FGIR and QUICKI). The indices recommended for use in both genders were Index-I, Index-II, HOMA/BMI and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). The important point was that this strong significance for HOMA/BMI and log HOMA was also detected when they were evaluated with the other indices, FGIR and QUICKI (p≤0.001). These parameters, along with Index-I, were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders. This study has emphasized the limitations for boys. This is particularly important for the selection of ratios and/or indices during clinical studies. Gender differences should be taken into consideration in the evaluation of the ratios or indices recommended for use, particularly within the scope of obesity studies.
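For reference, the sketch below implements the standard published formulas for the three insulin sensitivity indices and prints them next to the cut-offs cited above (fasting glucose in mg/dL, fasting insulin in µU/mL); the study-specific Index-I and Index-II are not reproduced here, and the example values are invented.

```python
# Standard formulas for the three insulin sensitivity indices referred to above.
# Fasting glucose in mg/dL, fasting insulin in uU/mL; example values are invented.
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    return glucose_mg_dl * insulin_uU_ml / 405.0

def fgir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    return glucose_mg_dl / insulin_uU_ml

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml))

g, i = 90.0, 15.0   # example fasting values
print(f"HOMA-IR = {homa_ir(g, i):.2f} (cut-off 2.5)")
print(f"FGIR    = {fgir(g, i):.2f} (cut-off 6)")
print(f"QUICKI  = {quicki(g, i):.3f} (cut-off 0.33)")
```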

Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index

Procedia PDF Downloads 335
96 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. Through the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
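As a simple illustration of how LIBS line intensities can be turned into chloride contents referenced to the binder, the sketch below fits a univariate calibration to reference samples and applies it to a synthetic depth profile; the intensities, calibration points, and the use of a single normalised Cl line are assumptions for the example only and do not reproduce the authors' calibration.

```python
# Illustrative sketch only: univariate calibration of a normalised LIBS Cl
# signal against reference samples of known chloride content (per binder mass),
# then applied to a synthetic 1 mm depth profile.
import numpy as np

# reference samples: normalised Cl line intensity vs. chloride content (wt.% of binder)
I_ref = np.array([0.02, 0.11, 0.23, 0.46, 0.90])
c_ref = np.array([0.0,  0.2,  0.5,  1.0,  2.0])

slope, intercept = np.polyfit(I_ref, c_ref, 1)          # linear calibration

depth_mm = np.arange(0, 20, 1)                          # 1 mm depth steps
I_profile = 0.8 * np.exp(-depth_mm / 6.0) + 0.03        # synthetic measured intensities
cl_profile = slope * I_profile + intercept              # chloride content per depth step

for d, c in zip(depth_mm, cl_profile):
    print(f"{d:2d} mm: {max(c, 0):.2f} wt.% Cl / binder")
```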

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 88
95 Monsoon Controlled Mercury Transportation in Ganga Alluvial Plain, Northern India and Its Implication on Global Mercury Cycle

Authors: Anjali Singh, Ashwani Raju, Vandana Devi, Mohmad Mohsin Atique, Satyendra Singh, Munendra Singh

Abstract:

India is the biggest consumer of mercury and, consequently, a major emitter too. The increasing mercury contamination of India’s water resources has gained widespread attention and, therefore, atmospheric deposition is of critical concern. However, little emphasis has been placed on the role of precipitation in the aquatic mercury cycle of the Ganga Alluvial Plain, which provides drinking water to nearly 7% of the world’s human population. A majority of the precipitation here occurs during the monsoon season, within roughly 10% of the year. To evaluate the sources and transportation of mercury, water samples were analysed from two selected sites near Lucknow, which have a strong hydraulic gradient towards the river. Thirty-one groundwater samples from Jehta village (26°55’15’’N; 80°50’21’’E; 119 m above mean sea level) and 31 river water samples from the Behta Nadi (a tributary of the Gomati River draining into the Ganga River) were collected during the monsoon season on every alternate day between 01 July and 30 August 2019. The total mercury analysis was performed using a Flow Injection Atomic Absorption Spectroscopy (AAS)-Mercury Hydride System, and daily rainfall data were collected from the India Meteorological Department, Amausi, Lucknow. The ambient groundwater and river-water concentrations were both 2-4 ng/L, as there is no known geogenic source of mercury in the area. Before the onset of the monsoon season, the groundwater and the river water recorded mercury concentrations two orders of magnitude higher than the ambient concentrations, indicating the regional transportation of mercury from non-point sources into the aquatic environment. Maximum mercury concentrations in groundwater and river water were three orders of magnitude higher than the ambient concentrations after the onset of the monsoon season, characterizing the considerable mobilization and redistribution of mercury by monsoonal precipitation. About 50% of both types of water samples had mercury below the detection limit, which can mostly be linked to the low intensity of precipitation in August and also to dilution by precipitation. The highest concentration (>1200 ng/L) of mercury in groundwater was reported with a 6-day lag behind the first precipitation peak. Two high-concentration peaks (>1000 ng/L) in river water were separately correlated with the surface flow and the groundwater outflow of mercury. We attribute the elevated mercury concentrations in both types of water samples before the precipitation event to mercury originating from the extensive use of agrochemicals in mango farming in the plain. The elevated mercury concentrations at the onset of the monsoon, however, appear to reflect an increase in the area wetted with atmospherically deposited mercury, which migrated down from surface water to groundwater; such downslope migration is a fundamental mechanism seen in rivers of the alluvial plain. The present study underscores the significance of monsoonal precipitation in the transportation of mercury to the drinking water resources of the Ganga Alluvial Plain. This study also suggests that future research must be pursued for a better understanding of the human health impact of mercury contamination and for quantification of the role of the Ganga Alluvial Plain in the global mercury cycle.
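The 6-day lag reported between the rainfall peak and the groundwater mercury peak is the kind of relationship that can be checked by cross-correlating the two time series; the sketch below does so on invented data with the study's alternate-day sampling, purely as an illustration.

```python
# Minimal sketch on synthetic data: estimate the lag between a rainfall pulse
# and the groundwater mercury peak via cross-correlation. Alternate-day
# sampling mirrors the study design; the series themselves are invented.
import numpy as np

days = np.arange(0, 62, 2)                                  # alternate-day sampling, 1 Jul - 30 Aug
rain = np.exp(-0.5 * ((days - 10) / 4.0) ** 2)              # synthetic rainfall pulse around day 10
hg = 1200 * np.exp(-0.5 * ((days - 16) / 5.0) ** 2) + 3     # synthetic Hg peak ~6 days later, ng/L

rain_c = rain - rain.mean()
hg_c = hg - hg.mean()
xcorr = np.correlate(hg_c, rain_c, mode="full")
lags = np.arange(-len(days) + 1, len(days)) * 2             # convert sample lags to days
print(f"Estimated lag of Hg peak behind rainfall: {lags[np.argmax(xcorr)]} days")
```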

Keywords: drinking water resources, Ganga alluvial plain, india, mercury

Procedia PDF Downloads 124
94 Isoflavonoid Dynamic Variation in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazábal, Ana Mutis, Fernando Ortega, Loreto Méndez, Leonardo Parra

Abstract:

The red clover root borer, Hylastinus obscurus Marsham (Coleoptera: Curculionidae), is the main insect pest associated with red clover, Trifolium pratense L. An average of 1.5 H. obscurus per plant can cause a 5.5% reduction in forage yield in pastures two to three years old. Moreover, the insect attack can reach 70% to 100% of the plants. To our knowledge, there is no chemical strategy for controlling this pest. Therefore, alternative strategies for controlling H. obscurus are a high priority for red clover producers. One such alternative is related to the study of secondary metabolites involved in the intrinsic chemical defenses developed by plants, such as isoflavonoids. The isoflavonoids formononetin and daidzein have elicited an antifeedant and a phagostimulant effect on H. obscurus, respectively. However, it is not known how these isoflavonoids vary under field conditions. The main objective of this work was to evaluate the variation of the antifeedant isoflavonoid formononetin, the phagostimulant isoflavonoid daidzein, and their respective glycosides over time in different ecotypes of red clover. Fourteen red clover ecotypes (8 cultivars and 6 experimental lines) were collected at INIA-Carillanca (La Araucanía, Chile). These plants were established in October 2015 under irrigated conditions. The cultivars were distributed in a randomized complete block design with three replicates. Whole plants were sampled at four times: 15 October 2016, 12 December 2016, 27 January 2017 and 16 March 2017, with a sufficient amount of soil to avoid root damage. A polar isoflavonoid fraction was obtained from 20 mg of lyophilized root tissue extracted with 2 mL of 80% MeOH for 16 h using an orbital shaker in the dark at room temperature. Then, an aliquot of 1.4 mL of the supernatant was evaporated, and the residue was resuspended in 300 µL of 45% MeOH. The identification and quantification of the isoflavonoids in root extracts were performed by injecting 20 µL into a Shimadzu HPLC equipped with a C-18 column. The sample was eluted with a mobile phase composed of AcOH:H₂O (1:9 v/v) as solvent A and CH₃CN as solvent B. Detection was performed at 260 nm. The results showed that the amounts of the aglycones were higher than those of the respective glycosides. This result is consistent with the flavonoid biosynthetic pathway, in which glycoside formation occurs after aglycone biosynthesis. The amount of formononetin was higher than that of daidzein. In roots, where H. obscurus spends most of its life cycle, the highest contents of formononetin were found in the G 27, Pawera, Sabtoron High, Redqueli-INIA and Superqueli-INIA cvs. (2.1, 1.8, 1.8, 1.6 and 1.0 mg g⁻¹, respectively), and the lowest amounts of daidzein were found in Superqueli-INIA (0.32 mg g⁻¹) and in the experimental line Sel Syn Int4 (0.24 mg g⁻¹). This ecotype showed a high content of formononetin (0.9 mg g⁻¹). This information, combined with cultural practices, could help farmers and breeders to reduce H. obscurus in grassland by selecting ecotypes with a high content of formononetin and a low amount of daidzein in the roots of red clover plants. Acknowledgements: FONDECYT 1141245 and 11130715.

Keywords: daidzein, formononetin, isoflavonoid glycosides, trifolium pratense

Procedia PDF Downloads 191