Search results for: limit of detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4796

3056 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population is diagnosed with Chronic Kidney Disease (CKD). The early stages of CKD are asymptomatic and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been developed for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, whose reliability for detecting early stages of microalbuminuria is still debated. This research study applies the liquid chromatography concept in a microfluidic instrument for separation, with a biosensor for detection, and uses linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that detects and quantifies patients' urinary albumin while reducing the volume consumed per five test samples. For this study, 30 urine samples of unknown albumin concentration were tested using both the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-vs-predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) than the VITROS Analyzer per five test samples.
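The calibration step described above can be sketched in a few lines. The paired albumin readings below are hypothetical stand-ins, since the study's 30-sample dataset is not reproduced here; they are chosen to lie near the reported line y = 1.09x + 0.07.

```python
import numpy as np

# Hypothetical paired albumin readings: reference analyzer (actual) vs. microfluidic system (predicted)
actual = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
predicted = np.array([1.2, 2.8, 5.5, 11.0, 21.9, 43.7])

# Least-squares fit of the actual-vs-predicted relationship: y = slope*x + intercept
slope, intercept = np.polyfit(actual, predicted, 1)

# Coefficient of determination (R^2) for the fitted line
fitted = slope * actual + intercept
ss_res = np.sum((predicted - fitted) ** 2)
ss_tot = np.sum((predicted - predicted.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(slope, 2), round(intercept, 2), round(r_squared, 4))
```

With well-matched instruments the slope approaches 1, the intercept approaches 0, and R² approaches 1, which is how the abstract's y = 1.09x + 0.07 with R² = 0.9995 should be read.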

Keywords: Chronic Kidney Disease, Linear Regression, Microfluidics, Urinary Albumin

Procedia PDF Downloads 136
3055 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. It involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology; exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool that assesses cardiomegaly by relating the maximum cardiac diameter to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06.
Among the 54 cases, the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
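The reported sensitivity, specificity, and diagnostic accuracy are consistent with a 2x2 table of 5 true positives, 7 false positives, 4 false negatives, and 38 true negatives; those cell counts are an inference from the reported totals, not stated in the abstract. A minimal sketch reconstructing the metrics from them:

```python
# Hypothetical 2x2 contingency table reconstructed from the reported counts:
# autopsy (heart weight > 450 g) as reference standard, PMCT (CTR > 0.57) as index test.
total = 54          # cases evaluated
autopsy_pos = 9     # hypertrophy on autopsy
pmct_pos = 12       # hypertrophy on PMCT

tp = 5                       # assumed true positives consistent with the reported 55.56% sensitivity
fp = pmct_pos - tp           # PMCT-positive but autopsy-negative
fn = autopsy_pos - tp        # autopsy-positive but PMCT-negative
tn = total - tp - fp - fn    # negative on both

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / total
print(f"{sensitivity:.2%} {specificity:.2%} {accuracy:.2%}")
```

These assumed counts reproduce the abstract's 55.56% / 84.44% / 79.63% exactly.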

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81
3054 Use of a Chagas Urine Nanoparticle Test (Chunap) to Correlate with Parasitemia Levels in T. cruzi/HIV Co-Infected Patients

Authors: Yagahira E. Castro-Sesquen, Robert H. Gilman, Carolina Mejia, Daniel E. Clark, Jeong Choi, Melissa J. Reimer-Mcatee, Rocio Castro, Jorge Flores, Edward Valencia-Ayala, Faustino Torrico, Ricardo Castillo-Neyra, Lance Liotta, Caryn Bern, Alessandra Luchini

Abstract:

Early diagnosis of reactivation of Chagas disease in HIV patients could be lifesaving; however, in Latin America the diagnosis is performed by detection of parasitemia by microscopy, which lacks sensitivity. The aim was to evaluate whether levels of T. cruzi antigens in urine, determined by Chunap (Chagas urine nanoparticle test), correlate with parasitemia levels in T. cruzi/HIV co-infected patients. T. cruzi antigens in the urine of HIV patients (N=55: 31 T. cruzi infected and 24 T. cruzi serology negative) were concentrated using hydrogel particles and quantified by Western blot and a calibration curve. The percentage of Chagas-positive patients determined by Chunap compared to blood microscopy, qPCR, and ELISA was 100% (6/6), 95% (18/19), and 74% (23/31), respectively. Chunap specificity was 91.7%. Linear regression analysis demonstrated a direct relationship between parasitemia levels (determined by qPCR) and urine T. cruzi antigen concentrations (p<0.001). A cut-off of >10⁵ pg was chosen to identify patients with reactivation of Chagas disease (6/6). Urine antigen concentration was significantly higher among patients with CD4+ lymphocyte counts below 200/µL (p=0.045). Chunap shows potential for early detection of reactivation and, with appropriate adaptation, can be used for monitoring Chagas disease status in T. cruzi/HIV co-infected patients.
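The decision rule above reduces to a simple threshold test, assuming the reported cut-off is 10⁵ pg of urinary T. cruzi antigen. The patient identifiers and antigen values below are hypothetical:

```python
# Illustrative reactivation call using the reported urine antigen cut-off (>1e5 pg).
CUTOFF_PG = 1e5

# Hypothetical quantified antigen loads per patient, in pg
samples = {"patient_A": 2.3e5, "patient_B": 8.1e3, "patient_C": 4.7e5}

# Flag reactivation when the antigen load exceeds the cut-off
reactivation = {pid: conc > CUTOFF_PG for pid, conc in samples.items()}
print(reactivation)
```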

Keywords: antigenuria, Chagas disease, Chunap, nanoparticles, parasitemia, poly N-isopropylacrylamide (NIPAm)/trypan blue particles (polyNIPAm/TB), reactivation of Chagas disease

Procedia PDF Downloads 377
3053 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but at the same time, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, 30 information security experts were interviewed to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%.
Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 30
3052 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interfaces or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee an on-line detection of the damage, i.e., delamination in the viscoelastic bonding of the described specimen during frequency-monitored end-of-life testing.
For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios and comparing them to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in the ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
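The inverse identification step, minimizing the misfit between simulated and measured frequency responses, can be illustrated on a one-term generalized Maxwell (Prony) model. All parameter values below are hypothetical, and the "measured" curve is synthesized from known parameters rather than taken from the paper's shaker tests:

```python
import numpy as np
from scipy.optimize import least_squares

# One-term generalized Maxwell (Prony) storage modulus:
# E'(w) = E_inf + E1 * (w*tau)^2 / (1 + (w*tau)^2)   (all parameters hypothetical)
def storage_modulus(params, w):
    e_inf, e1, tau = params
    wt2 = (w * tau) ** 2
    return e_inf + e1 * wt2 / (1 + wt2)

w = np.logspace(0, 4, 50)                  # angular frequencies, rad/s
true = np.array([1.0e6, 4.0e6, 1.0e-2])   # parameters used to synthesize the "measured" curve
measured = storage_modulus(true, w)

# Inverse identification: minimize the difference between simulated and "measured" curves
fit = least_squares(
    lambda p: storage_modulus(p, w) - measured,
    x0=[5.0e5, 1.0e6, 1.0e-3],
    bounds=([0, 0, 1e-6], [1e8, 1e8, 1.0]),
)
print(fit.x)
```

The study fits full FE-simulated FRFs rather than a closed-form modulus, but the optimization structure (parameter vector, residual between model and measurement, bounded least squares) is the same idea.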

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 205
3051 A Compact Wearable Slot Antenna for LTE and WLAN Applications

Authors: Haider K. Raad

Abstract:

In this paper, a compact, wide-band, ultra-thin, and flexible slot antenna intended for wearable applications is presented. The antenna is designed to provide Wireless Local Area Network (WLAN) and Long Term Evolution (LTE) connectivity. The design exhibits a relatively wide bandwidth (1600-3500 MHz, within the -6 dB impedance bandwidth limit). The antenna is positioned on a 33 mm x 30 mm flexible substrate with a thickness of 50 µm. Antenna properties, such as the far-field radiation patterns and the scattering parameter S11, are provided. The compact, thin, and flexible design, along with excellent radiation characteristics, is deemed suitable for integration into flexible and wearable devices.

Keywords: wearable electronics, slot antenna, LTE, WLAN

Procedia PDF Downloads 234
3050 Design of Bacterial Pathogens Identification System Based on Scattering of Laser Beam Light and Classification of Binned Plots

Authors: Mubashir Hussain, Mu Lv, Xiaohan Dong, Zhiyang Li, Bin Liu, Nongyue He

Abstract:

Detection and classification of microbes have a vast range of applications in biomedical engineering, especially in the detection, characterization, and quantification of bacterial contaminants. For the identification of pathogens, different techniques are emerging in the field of biomedical engineering. The latest technology uses light scattering, capable of identifying different pathogens without any need for biochemical processing. In the Bacterial Pathogens Identification System (BPIS), a laser beam passes through the sample and light scatters off it. An assembly of photodetectors surrounds the sample at different angles to detect the scattered light. The algorithm of the system consists of two parts: (a) library files and (b) a comparator. Library files contain data of known species of bacterial microbes in the form of binned plots, while the comparator compares data of an unknown sample with the library files. For an unknown bacterial sample, the highest voltage values are stored in the form of peaks and arranged in 3D histograms to find the frequency of occurrence. The resulting data are compared with the library files of known bacterial species. If the sample data match a library file of a known bacterial species, the sample is identified as that microbe. An experiment was performed to identify three different bacterial species: Enterococcus faecalis, Pseudomonas aeruginosa, and Escherichia coli. Applying the algorithm with library files of the given samples yielded promising results. This system is potentially applicable to several biomedical areas, especially those related to cell morphology.
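A possible sketch of the binned-plot comparator described above: normalized 3-D histograms of peak voltages serve as library files, and an unknown sample is assigned to the nearest library entry. The voltage distributions are synthetic placeholders, not measured scattering data:

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_plot(peaks, bins=6):
    # peaks: (n, 3) array of highest voltage values (normalized to [0, 1])
    # from three photodetector channels; returned as a normalized 3-D histogram
    hist, _ = np.histogramdd(peaks, bins=bins, range=[(0, 1)] * 3)
    return hist / hist.sum()

# Hypothetical library files: each species maps to a binned plot built from known samples
library = {
    "E. faecalis": binned_plot(rng.beta(2, 5, size=(500, 3))),
    "P. aeruginosa": binned_plot(rng.beta(5, 2, size=(500, 3))),
    "E. coli": binned_plot(rng.beta(3, 3, size=(500, 3))),
}

# Comparator: nearest library entry by Euclidean distance between histograms
sample = binned_plot(rng.beta(5, 2, size=(400, 3)))   # unknown sample, same family as P. aeruginosa
best = min(library, key=lambda name: np.linalg.norm(library[name] - sample))
print(best)
```

The real system matches against measured photodetector patterns; the beta distributions here merely give each "species" a distinct histogram shape.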

Keywords: microbial identification, laser scattering, peak identification, binned plots classification

Procedia PDF Downloads 150
3049 Hydrothermal Synthesis of ZIF-7 Crystals and Their Composite ZIF-7/CS Membranes for Water/Ethanol Separation

Authors: Kai-Sheng Ji, Yi-Feng Lin

Abstract:

The pervaporation process for solvent and water separation has attracted research attention due to its lower energy consumption compared with conventional distillation processes. The membranes used for the pervaporation approach should exhibit high flux and separation factors. In this study, the ZIF-7 crystal particles were successfully incorporated into chitosan (CS) membranes to form ZIF-7/CS mixed-matrix membranes. The as-prepared ZIF-7/CS mixed-matrix membranes were used to separate mixtures of water/ethanol at 25℃ in the pervaporation process. The mixed-matrix membranes with different ZIF-7 wt% incorporation showed better separation efficiency than the pristine CS membranes because of the smaller pore size of the mixed-matrix membranes. The separation factor and the flux of the ZIF-7/CS membranes clearly exceed the upper limit of the previously reported CS-based and mixed-matrix membranes.

Keywords: pervaporation, chitosan, ZIF-7, membrane separation

Procedia PDF Downloads 430
3048 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi

Abstract:

This document presents an approach that uses compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure. This signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). Once the signal has been encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of the detected damage using the reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
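The sparse-encoding step can be illustrated with a plain matching pursuit over an orthonormal DCT dictionary (the QAM mapping and the network hops are omitted). The signal and dictionary below are synthetic stand-ins, not the paper's guided-wave data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Dictionary: orthonormal DCT-II atoms (rows of D)
k = np.arange(n)
D = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
D[0] /= np.sqrt(2)

# "Recorded" signal: combination of 3 atoms plus small noise
coeffs_true = np.zeros(n)
coeffs_true[[5, 17, 42]] = [1.0, -0.7, 0.4]
signal = D.T @ coeffs_true + 0.01 * rng.normal(size=n)

# Matching pursuit: greedily pick the atom most correlated with the residual
residual, support = signal.copy(), []
for _ in range(3):
    corr = D @ residual
    idx = int(np.argmax(np.abs(corr)))
    support.append(idx)
    residual = residual - corr[idx] * D[idx]
print(sorted(support))
```

Only the few recovered (index, coefficient) pairs need to be modulated and transmitted, which is the compression the abstract exploits.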

Keywords: data compression, ultrasonic communication, guided waves, FEM analysis

Procedia PDF Downloads 124
3047 Array Type Miniaturized Ultrasonic Sensors for Detecting Sinkhole in the City

Authors: Won Young Choi, Kwan Kyu Park

Abstract:

Recently, road depressions occurring in urban areas have been found to differ, in both cause and generation mechanism, from the sinkholes occurring in limestone areas. The main causes of sinkholes occurring in city centers are the loss of soil due to the damage of old underground buried materials and groundwater discharge due to large underground excavation works. Sinkholes in urban areas are mostly detected using Ground Penetrating Radar (GPR). However, because GPR is based on electromagnetic waves, it is challenging to implement a compact system and to detect watery conditions. Although many ultrasonic underground detection studies have been conducted, near-ground detection (several tens of centimeters to several meters) has so far been realized as bulk systems using geophones as receivers. The goal of this work is to fabricate a miniaturized sinkhole detecting system based on low-cost ultrasonic transducers with a 40 kHz resonant frequency, high transmission pressure, and high receiving sensitivity. Motivated by biomedical ultrasonic imaging methods, we detect air layers below ground surfaces such as asphalt through the pulse-echo method. To improve image quality using multiple channels, a linear array system is implemented, and images are acquired by the classical synthetic aperture imaging method. We present a successful feasibility test of the multi-channel sinkhole detector based on ultrasonic transducers. In this work, we present and analyze image results obtained by both single-channel pulse-echo imaging and synthetic aperture imaging.
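The synthetic aperture step can be sketched as classical delay-and-sum focusing. The array geometry, sampling rate, sound speed, and idealized point-scatterer traces below are hypothetical, not the 40 kHz hardware described above:

```python
import numpy as np

# Toy delay-and-sum synthetic aperture focusing for an 8-element linear array.
c = 343.0            # assumed propagation speed (air), m/s
fs = 1.0e6           # sampling rate, Hz
elements = np.linspace(-0.07, 0.07, 8)   # element x-positions, m
target = np.array([0.02, 0.30])          # point scatterer at (x, z), m

# Simulate pulse-echo traces: each element fires and receives its own echo
n_samples = 4000
traces = np.zeros((len(elements), n_samples))
for i, ex in enumerate(elements):
    dist = np.hypot(target[0] - ex, target[1])   # one-way distance
    idx = int(round(2 * dist / c * fs))          # round-trip delay in samples
    traces[i, idx] = 1.0

# Delay-and-sum image over a grid of candidate pixels
xs = np.linspace(-0.05, 0.05, 41)
zs = np.linspace(0.25, 0.35, 41)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        for i, ex in enumerate(elements):
            d = np.hypot(x - ex, z)
            j = int(round(2 * d / c * fs))
            if j < n_samples:
                image[iz, ix] += traces[i, j]

# The brightest pixel should coincide with the scatterer position
iz, ix = np.unravel_index(np.argmax(image), image.shape)
print(xs[ix], zs[iz])
```

Summing each channel at its own round-trip delay is what lets a low-cost multi-channel array outperform a single pulse-echo channel.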

Keywords: road depression, sinkhole, synthetic aperture imaging, ultrasonic transducer

Procedia PDF Downloads 144
3046 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity, improving the monitoring of natural processes with naturally occurring radioactive tracers, and also serving the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is useful for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geo-materials by coupling it with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 151
3045 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating computing paradigm shift in computer technology, resulting in spectacular expansion in the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource sharing architecture. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and provision of computing environments. However, from a security viewpoint, clouds introduce an added level of risk, weakening protection mechanisms and making privacy, data security, and on-demand service harder to guarantee. Issues of trust, confidentiality, and integrity are elevated due to the multi-tenant resource sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability of providing the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access private data. Integrity protects the data from being fabricated by an unauthorized user. Therefore, to assure the provision of a secure cloud, a roadmap or model is needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors that make cloud security critical, along with mitigation strategies; and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. This model uses a reduced number of features. A comparison with other classifiers is also presented.
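The feature-reduction idea, training a Random Forest and keeping only the most important features, can be sketched on synthetic data; the paper's intrusion dataset and exact feature set are not reproduced here, so only 3 of 20 generated features carry signal by construction:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an intrusion dataset: only features 2, 7, and 11 are informative
rng = np.random.default_rng(42)
X = rng.normal(size=(3000, 20))
y = ((X[:, 2] + X[:, 7] - X[:, 11]) > 0).astype(int)   # 1 = "attack", 0 = "normal"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
full = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)

# Keep only the most important features, then retrain a smaller model
top = np.argsort(full.feature_importances_)[-3:]
small = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr[:, top], y_tr)
acc = small.score(X_te[:, top], y_te)
print(sorted(top.tolist()), round(acc, 3))
```

Ranking by `feature_importances_` and retraining is one standard way to get an accurate classifier that "uses a reduced number of features", as the abstract claims for its model.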

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 320
3044 Foot-and-Mouth Virus Detection in Asymptomatic Dairy Cows without Foot-and-Mouth Disease Outbreak

Authors: Duanghathai Saipinta, Tanittian Panyamongkol, Witaya Suriyasathaporn

Abstract:

Animal management aims to provide a suitable environment for animals, allowing maximal productivity. Prevention of disease is an important part of animal management. Foot-and-mouth disease (FMD) is a highly contagious viral disease in cattle and an economically important animal disease worldwide. Monitoring the FMD virus on farms is a useful management practice for the prevention of FMD outbreaks. A recent publication indicated that samples collected by nasal swab can be used for monitoring FMD in symptomatic cows. Therefore, the objective of this study was to detect the FMD virus in asymptomatic dairy cattle using nasal swab samples during the absence of an FMD outbreak. The study was conducted from December 2020 to June 2021 using 185 dairy cattle without clinical signs of FMD in Chiang Mai Province, Thailand. Cows were selected at random, nasal mucosal swabs were used to collect samples from the selected cows, and the samples were evaluated for the presence of FMD viruses using a real-time RT-PCR assay. In total, the FMD virus was detected in 4.9% of the dairy cattle, on 2 dairy farms in Mae-on (8 samples; 9.6%) and 1 farm in the Chai-Prakan district (1 sample; 1.2%). Interestingly, both farms in Mae-on experienced an FMD outbreak within 6 months of this detection. This indicates that the FMD virus present in asymptomatic cattle might be related to a subsequent outbreak of FMD. The outbreak demonstrates the presence of the virus in the environment. In conclusion, monitoring of FMD can be performed by nasal swab collection. Further investigation is needed to show whether the FMD virus present in asymptomatic cattle could be the cause of a subsequent FMD outbreak.

Keywords: cattle, foot-and-mouth disease, nasal swab, real-time RT-PCR assay

Procedia PDF Downloads 232
3043 Aerosol Chemical Composition in Urban Sites: A Comparative Study of Lima and Medellin

Authors: Guilherme M. Pereira, Kimmo Teinïla, Danilo Custódio, Risto Hillamo, Célia Alves, Pérola de C. Vasconcellos

Abstract:

South American large cities often present serious air pollution problems, and their atmospheric composition is influenced by a variety of emission sources. The South American Emissions, Megacities, and Climate (SAEMC) project has focused on the study of emissions and their influence on climate in the largest South American cities, and it also included Lima (Peru) and Medellin (Colombia), sites where few studies of this kind have been done. Lima is a coastal city with more than 8 million inhabitants and the second largest city in South America. Medellin, the second largest city in Colombia, has 2.5 million inhabitants and is situated in a valley. Samples were collected on quartz fiber filters in high-volume samplers (Hi-Vol) over 24-hour sampling periods, during intensive campaigns at both sites in July 2010. Several species were determined in the aerosol samples of Lima and Medellin: organic and elemental carbon (OC and EC) by thermal-optical analysis; the biomass burning tracers levoglucosan (Lev), mannosan (Man), and galactosan (Gal) by high-performance anion exchange ion chromatography with mass spectrometric detection; and water-soluble ions by ion chromatography. The average particulate matter was similar for both campaigns: PM10 concentrations were above the level recommended by the World Health Organization (50 µg m⁻³, daily limit) in 40% of the samples in Medellin and in 15% of the samples in Lima. The average total ion concentration was higher in Lima (17450 ng m⁻³ in Lima and 3816 ng m⁻³ in Medellin), as were the average concentrations of sodium and chloride; these species also correlated better (Pearson's coefficient = 0.63), suggesting a greater influence of marine aerosol at this site due to its coastal location. Sulphate concentrations were also much higher at the Lima site, which may be explained by a greater influence of marine-derived sulphate.
However, the OC, EC, and monosaccharide average concentrations were higher at the Medellin site; this may be due to the lower dispersion of pollutants caused by the site's location and a larger influence of biomass burning sources. The average levoglucosan concentration was 95 ng m⁻³ for Medellin and 16 ng m⁻³ for Lima, and OC was well correlated with levoglucosan in Medellin (Pearson's coefficient = 0.86), suggesting a greater influence of biomass burning on the organic aerosol at this site. The Lev/Man ratio, often related to the type of biomass burned, was close to 18, similar to that observed in previous studies at biomass-burning-impacted sites in the Amazon region; backward trajectories also suggested the transport of aerosol from that region. Biomass burning appears to have a larger influence on air quality in Medellin, in addition to vehicular emissions, while Lima showed a larger influence of marine aerosol during the study period.
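The two diagnostic quantities used above, the Lev/Man ratio and the Pearson correlation between OC and levoglucosan, can be computed as follows. The daily concentration values are hypothetical stand-ins chosen to mimic the reported ratio near 18:

```python
import numpy as np

# Hypothetical daily levoglucosan (Lev) and mannosan (Man) concentrations, ng/m3,
# and organic carbon (OC), ug/m3
lev = np.array([60.0, 85.0, 120.0, 95.0, 140.0, 70.0])
man = np.array([3.4, 4.6, 6.8, 5.2, 7.9, 3.8])
oc = np.array([4.1, 5.6, 7.9, 6.3, 9.4, 4.8])

# Lev/Man ratio: diagnostic of the type of biomass burned
lev_man_ratio = (lev / man).mean()

# Pearson correlation between OC and levoglucosan: strength of the biomass burning
# contribution to the organic aerosol
pearson = np.corrcoef(lev, oc)[0, 1]
print(round(lev_man_ratio, 1), round(pearson, 2))
```

A high OC-Lev correlation, as reported for Medellin (0.86), is what supports attributing much of the organic aerosol there to biomass burning.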

Keywords: aerosol transport, atmospheric particulate matter, biomass burning, SAEMC project

Procedia PDF Downloads 263
3042 The Dangers of Attentional Inertia in the Driving Task

Authors: Catherine Thompson, Maryam Jalali, Peter Hills

Abstract:

The allocation of visual attention is critical when driving and anything that limits attention will have a detrimental impact on safety. Engaging in a secondary task reduces the amount of attention directed to the road because drivers allocate resources towards this task, leaving fewer resources to process driving-relevant information. Yet the dangers associated with a secondary task do not end when the driver returns their attention to the road. Instead, the attentional settings adopted to complete a secondary task may persist to the road, affecting attention, and therefore affecting driver performance. This 'attentional inertia' effect was investigated in the current work. Forty drivers searched for hazards in driving video clips while their eye-movements were recorded. At varying intervals they were instructed to attend to a secondary task displayed on a tablet situated to their left-hand side. The secondary task consisted of three separate computer games that induced horizontal, vertical, and random eye movements. Visual search and hazard detection in the driving clips were compared across the three conditions of the secondary task. Results showed that the layout of information in the secondary task, and therefore the allocation of attention in this task, had an impact on subsequent search in the driving clips. Vertically presented information reduced the wide horizontal spread of search usually associated with accurate driving and had a negative influence on the detection of hazards. The findings show the additional dangers of engaging in a secondary task while driving. The attentional inertia effect has significant implications for semi-autonomous and autonomous vehicles in which drivers have greater opportunity to direct their attention away from the driving task.

Keywords: attention, eye-movements, hazard perception, visual search

Procedia PDF Downloads 165
3041 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes a dominant means of observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
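The 'base accuracy' figures quoted above (e.g., 68% across 25 genera) are per-class accuracies averaged with equal weight per class. A minimal sketch of that computation, using invented prediction records rather than the authors' data:

```python
from collections import defaultdict

def per_class_accuracy(records):
    """records: list of (true_label, predicted_label) pairs.
    Returns {label: accuracy}, computed separately for each true class."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for true, pred in records:
        totals[true] += 1
        hits[true] += (pred == true)
    return {label: hits[label] / totals[label] for label in totals}

def base_accuracy(records):
    """Mean of the per-class accuracies (each class weighted equally)."""
    acc = per_class_accuracy(records)
    return sum(acc.values()) / len(acc)

# Hypothetical predictions for two genera (not the authors' data)
records = [
    ("Carcharhinus", "Carcharhinus"), ("Carcharhinus", "Sphyrna"),
    ("Sphyrna", "Sphyrna"), ("Sphyrna", "Sphyrna"),
]
print(per_class_accuracy(records))  # {'Carcharhinus': 0.5, 'Sphyrna': 1.0}
print(base_accuracy(records))       # 0.75
```

Averaging per class rather than per image keeps rare genera from being swamped by abundant ones.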

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 121
3040 Evaluation of Commercials by Psychological Changes in Consumers’ Physiological Characteristics

Authors: Motoki Seguchi, Fumiko Harada, Hiromitsu Shimakawa

Abstract:

Many local companies in the countryside carefully produce and sell products, including crafts and foods made with traditional methods. These companies are likely to use commercials to advertise their products. However, it is difficult for companies to judge whether the commercials they create are having an impact on consumers. Therefore, to create effective commercials, this study researches what kinds of gimmicks in commercials affect which kinds of consumers. This study proposes a method for extracting psychological change points from the physiological characteristics of consumers while they are watching commercials and for estimating the gimmicks in the commercial that affect consumer engagement. In this method, change point detection is applied to pupil size to estimate gimmicks that affect consumers' emotional engagement, and to electrodermal activity (EDA) to estimate gimmicks that affect cognitive engagement. A questionnaire is also used to estimate the commercials that influence behavioral engagement. As a result of estimating the gimmicks that influence consumer engagement using this method, it was found that there are some common features among the gimmicks. To influence cognitive engagement, it was useful to include flashback scenes, the message to be conveyed, the company's name, and the company's logo as gimmicks. Flashback scenes and story climaxes were also found to be useful in influencing emotional engagement. Furthermore, the use of storytelling commercials may or may not be useful, depending on which consumers are desired to take which behaviors. The method also estimated the gimmicks that influence each consumer target and found that the useful gimmicks differ slightly between students and working adults. By using this method, companies can understand which gimmicks in a commercial affect which form of consumer engagement.
Therefore, the results of this study can be used as a reference for the gimmicks that should be included in commercials when companies create them in the future.
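Change point detection on a signal such as pupil size can be illustrated with a minimal least-squares mean-shift detector. This is a simplification (single change point, offline), and the signal values below are invented for illustration, not taken from the study:

```python
def best_change_point(signal):
    """Single change-point detection: choose the split that minimises the
    total within-segment sum of squared deviations from the segment mean."""
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(signal)):
        cost = sse(signal[:k]) + sse(signal[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical pupil-size trace (mm): baseline, then a sustained dilation
pupil = [3.0, 3.1, 2.9, 3.0, 3.1, 3.0, 3.8, 3.9, 4.0, 3.9, 3.8, 4.0]
print(best_change_point(pupil))  # 6
```

Multiple-change variants of this criterion (e.g., PELT or binary segmentation) extend the same idea to whole viewing sessions.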

Keywords: change point detection, estimating engagement, physiological characteristics, psychological changes, watching commercials

Procedia PDF Downloads 186
3039 Investigation of Leptospira Infection in Stray Animals in Thailand: Leptospirosis Risk Reduction in Human

Authors: Ruttayaporn Ngasaman, Saowakon Indouang, Usa Chethanond

Abstract:

Leptospirosis is a zoonosis of public health concern in Thailand. Humans and animals are often infected by contact with contaminated water. Infected animals play an important role in transmitting leptospira to humans and other hosts via urine. In humans, it can cause a wide range of symptoms, some of which may present as a mild flu-like illness including fever, vomiting, and jaundice. Without treatment, leptospirosis can lead to kidney damage, meningitis, liver failure, respiratory distress, and even death. The prevalence of leptospirosis in stray animals in Thailand is unknown. The aim of this study was to investigate leptospira infection in stray animals, including dogs and cats, in Songkhla province, Thailand. A total of 434 blood samples were collected from 370 stray dogs and 64 stray cats during a population control program from 2014 to 2018. A screening test using latex agglutination for the detection of antibodies against Leptospira interrogans in serum samples showed 29.26% (127/434) positive: 120 positive samples from stray dogs and 7 from stray cats. Detection by polymerase chain reaction specific to the LipL32 gene of Leptospira interrogans showed 1.61% (7/434) positive, with stray cats (5/64) showing a higher prevalence than stray dogs (2/370). Although the rate of active infection detected was low, seroprevalence was high. This result indicates that the stray animals were not actively infected at the time of sample collection but had been infected previously or were in a latent period of infection. They may act as a reservoir for domestic animals and humans that share the same environment. In order to prevent and reduce the risk of leptospira infection in humans, stray animals should receive health checks, vaccination, and disease treatment.
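The reported prevalences are simple proportions of positives. A short sketch that reproduces them from the stated counts and adds 95% Wilson confidence intervals (the choice of interval is ours, not the authors'):

```python
import math

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Counts reported above: 127/434 seropositive (LAT), 7/434 PCR-positive
for label, pos, n in [("LAT seroprevalence", 127, 434),
                      ("PCR prevalence", 7, 434)]:
    lo, hi = wilson_ci(pos, n)
    print(f"{label}: {100 * pos / n:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")
```

The Wilson interval behaves better than the normal approximation for small counts such as 7/434.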

Keywords: leptospirosis, stray animals, risk reduction, Thailand

Procedia PDF Downloads 133
3038 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides and acid gases. In order to solve this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate the 1D premixed flame based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results to experimental data. By solving the conservation equations, several global reduced mechanisms (with 2, 9, and 12 steps) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratio from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity, and the concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. Variation of the inlet velocity and tube geometry led to increases in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratio 0.5-0.9). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and varying equivalence ratios with hydrogen addition. NO production is reduced because the combustion happens in a leaner state, which helps in solving environmental problems.
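The lean-to-rich sweep described above is parameterized by the equivalence ratio. A minimal sketch of its definition on a molar basis, using methane as the example fuel (the flow values are illustrative, not from the paper):

```python
def equivalence_ratio(n_fuel, n_air, stoich_air_per_fuel):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric, molar basis.
    phi < 1 is lean, phi = 1 stoichiometric, phi > 1 rich."""
    return (n_fuel / n_air) * stoich_air_per_fuel

# Methane: CH4 + 2 (O2 + 3.76 N2) -> CO2 + 2 H2O + 7.52 N2,
# so 2 * 4.76 = 9.52 mol of air per mol of fuel at stoichiometry.
STOICH_AIR_CH4 = 2 * 4.76

print(equivalence_ratio(1.0, 9.52, STOICH_AIR_CH4))   # stoichiometric (~1.0)
print(equivalence_ratio(1.0, 19.04, STOICH_AIR_CH4))  # lean (~0.5)
```

Diluting the fuel with hydrogen changes the stoichiometric air requirement of the blend, which is why the flammability limit shifts with hydrogen addition.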

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 114
3037 Multiple Etiologies and Incidences of Co-Infections in Childhood Diarrhea in a Hospital Based Screening Study in Odisha, India

Authors: Arpit K. Shrivastava, Nirmal K. Mohakud, Subrat Kumar, Priyadarshi S. Sahu

Abstract:

Acute diarrhea is one of the major causes of morbidity and mortality among children less than five years of age. Multiple etiologies have been implicated in the infectious gastroenteritis that causes acute diarrhea. In our study, fecal samples (n=165) were collected from children (<5 years) presenting with symptoms of acute diarrhea. Samples were screened for viral, bacterial, and parasitic etiologies: Rotavirus, Adenovirus, Diarrhoeagenic Escherichia coli (EPEC, EHEC, STEC, O157, O111), Shigella spp., Salmonella spp., Vibrio cholerae, Cryptosporidium spp., and Giardia spp. Overall, 57% of children below 5 years of age with acute diarrhea were positive for at least one infectious etiology. Diarrhoeagenic Escherichia coli was the major etiological agent (29.09%), followed by Rotavirus (24.24%), Shigella (21.21%), Adenovirus (5.45%), Cryptosporidium (2.42%), and Giardia (0.60%). Among the different DEC strains, EPEC was detected significantly more often in children <2 years than in the >2 years age group (p = 0.001). Concurrent infections with two or more pathogens were observed in 47 of 165 (28.48%) cases, with a predominant incidence in <2-year-old children (66.66%) compared to children in the 2 to 5 years age group. Co-infection of Rotavirus with Shigella was the most frequent combination, detected in 17.94% of cases, followed by Rotavirus with EPEC (15.38%) and Shigella with STEC (12.82%). Detection of multiple infectious etiologies and diagnosis of the right causative agent(s) can immensely help in better management of acute childhood diarrhea. In the future, more studies focusing on the detection of cases with concurrent infections should be carried out, as we believe that the etiological agents might be complementing each other's strategies of pathogenesis, resulting in severe diarrhea.
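The age-group comparison for EPEC (p = 0.001) is the kind of result a 2x2 chi-square test yields. A self-contained sketch with invented counts, since the abstract does not report the underlying table:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]] (no continuity
    correction), with the p-value for 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df: P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: EPEC-positive vs EPEC-negative, <2 years vs >2 years
chi2, p = chi_square_2x2(30, 70, 8, 57)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

The 1-df p-value uses the identity between the chi-square(1) distribution and the square of a standard normal variable.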

Keywords: children, co-infection, infectious diarrhea, Odisha

Procedia PDF Downloads 336
3036 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System

Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin

Abstract:

The most robust and economical method for laboratory diagnosis of TB is to identify acid-fast bacilli (AFB) under acid-fast staining, despite the disadvantages of low sensitivity and labor-intensiveness. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective was to evaluate such an automated system for the identification of AFB. A total of 5,930 smears were enrolled in this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. Among a total of 1,726 AFB smears, the automated system's accuracy, sensitivity, and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142); however, the automated system achieved 74.6% (106/142), which is significantly higher, and this is the first controlled trial of such an automated microscope system for TB smear testing. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, the system is capable of remote access over the internet and can be deployed in areas with limited medical resources.
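The accuracy, sensitivity, and specificity quoted above follow directly from confusion-matrix counts. A sketch that reproduces them from the reported numbers (57 true positives, 8 false negatives, 1,593 true negatives, 68 false positives):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Counts implied by the figures reported above (1,726 smears in total)
acc, sens, spec = diagnostic_metrics(tp=57, fn=8, tn=1593, fp=68)
print(f"accuracy {acc:.1%}, sensitivity {sens:.1%}, specificity {spec:.1%}")
```

Note that 57 + 8 = 65 and 1,593 + 68 = 1,661, matching the denominators in the abstract.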

Keywords: TB smears, automated microscope, artificial intelligence, medical imaging

Procedia PDF Downloads 229
3035 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing, and biological analyses has proven a promising avenue in recent years. Cholesterol has aroused considerable interest on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol levels and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during the immobilization procedure, and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with a deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized, and then characterized by SEM, p-XRD, and Raman spectroscopy. The SEM micrographs showed the formation of tubular CCNT structures with diameters below 100 nm. The XRD pattern showed two predominant peaks at 25.2° and 43.8°, corresponding to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm-1 (G band), related to the vibration of sp2-bonded carbon, and one at 1350 cm-1 (D band), arising from the vibrations of sp3-bonded carbon.
A nonenzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), and it is suitable for working with a 50 µL volume like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of the experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was found to be linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM−1 cm−2, a lower detection limit of 0.017 µM, and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial value after 30 days.
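The sensitivity (calibration slope) and detection limit reported above can be sketched as follows. The calibration points and blank standard deviation are invented, and the 3.3 sigma/slope rule is a common convention rather than necessarily the authors' exact procedure:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(sigma_blank, slope):
    """Common LOD estimate: 3.3 * (SD of blank response) / calibration slope."""
    return 3.3 * sigma_blank / slope

# Hypothetical calibration points: concentration (uM) vs peak current (uA)
conc = [1, 10, 20, 30, 40, 50]
curr = [0.15, 1.5, 3.0, 4.5, 6.0, 7.5]

slope, intercept = linear_fit(conc, curr)
print(f"sensitivity (slope) = {slope:.3f} uA/uM")
print(f"LOD = {limit_of_detection(sigma_blank=0.0008, slope=slope):.4f} uM")
```

With these invented inputs the LOD comes out near the 0.017 µM order of magnitude reported for the real sensor.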

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 282
3034 Groundwater Quality Monitoring in the Shoush Suburbs, Khouzestan Province, Iran

Authors: Mohammad Tahsin Karimi Nezhad, Zaynab Shadbahr, Ali Gholami

Abstract:

In recent years, many attempts have been made to assess groundwater contamination by nitrates worldwide. The assessment of spatial and temporal variations of the physico-chemical parameters of water is necessary to manage water quality. The objectives of the study were to evaluate the spatial variability and temporal changes of hydrochemical factors by sampling water from 24 wells in the Shoush City suburbs. The analysis was conducted for the whole area and for different land use and geological classes. In addition, the variability of nitrate concentration with descriptive parameters such as sampling depth, dissolved oxygen, and on-ground nitrogen loadings was also investigated. The results showed that nitrate concentrations did not exceed the standard limit (50 mg/l). The EC of the water samples ranged from 900 to 1200 µS/cm, TDS from 775 to 830 mg/l, and pH from 5.6 to 9.

Keywords: groundwater, GIS, water quality, Iran

Procedia PDF Downloads 431
3033 Retrospective Evaluation of Vector-borne Infections in Cats Living in Germany (2012-2019)

Authors: I. Schäfer, B. Kohn, M. Volkmann, E. Müller

Abstract:

Introduction: Blood-feeding arthropods transmit parasitic, bacterial, and viral pathogens to domestic animals and wildlife. Vector-borne infections are gaining significance due to increased travel, the import of domestic animals from abroad, and the changing climate in Europe. Aims of the study: The main objective of this retrospective study was to assess the prevalence of vector-borne infections in cats for which a ‘Feline Travel Profile’ had been conducted. Material and Methods: This retrospective study included test results from cats for which a ‘Feline Travel Profile’ established by LABOKLIN had been requested by veterinarians between April 2012 and December 2019. This profile comprises direct detection methods via polymerase chain reaction (PCR) for Hepatozoon spp. and Dirofilaria spp., as well as indirect detection methods via immunofluorescence antibody test (IFAT) for Ehrlichia spp. and Leishmania spp. The profile was expanded to include an IFAT for Rickettsia spp. from July 2015 onwards. The prevalence of the different vector-borne infectious agents was calculated. Results: A total of 602 cats were tested using the ‘Feline Travel Profile’. Positive test results were as follows: Rickettsia spp. IFAT 54/442 (12.2%), Ehrlichia spp. IFAT 68/602 (11.3%), Leishmania spp. IFAT 21/602 (3.5%), Hepatozoon spp. PCR 51/595 (8.6%), and Dirofilaria spp. PCR 1/595 cats (0.2%). Co-infections with more than one pathogen were detected in 22/602 cats. Conclusions: 170/602 cats (28.2%) tested positive for at least one vector-borne pathogen. Infections with multiple pathogens were detected in 3.7% of the cats. The data emphasize the importance of considering vector-borne infections as potential differential diagnoses in cats.

Keywords: arthropod-transmitted infections, feline vector-borne infections, Germany, laboratory diagnostics

Procedia PDF Downloads 166
3032 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge

Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti

Abstract:

Structural systems are often subject to degradation processes due to different kinds of phenomena, such as unexpected loadings, ageing of the materials, and fatigue cycles. This is especially true for bridges, for which safety evaluation is crucial for planning maintenance. This paper discusses the experimental evaluation of stiffness reduction from frequency changes due to a uniform damage scenario. For this purpose, a 1:4 scaled bridge has been built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab linked to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies have been identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. After that, a set of loading cycles was applied to the bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies of the bridge were identified before and after the occurrence of the damage corresponding to each load step. The behavior of breathing cracks and their effect on the natural frequencies were taken into account in the analytical calculations. By using an exponential function derived from a large number of experimental tests reported in the literature, it was possible to predict the stiffness reduction from the frequency variation measurements. During the load tests, crack opening and midspan vertical displacement were also monitored.
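The link between frequency shift and stiffness loss can be illustrated for the idealized single-degree-of-freedom case. This is a simplification of the approach above, which uses an empirically derived exponential function rather than this textbook relation:

```python
def stiffness_ratio(f_damaged, f_undamaged):
    """For f = sqrt(k/m) / (2*pi) with unchanged mass m, the stiffness ratio
    follows from the frequency ratio: k_d / k_0 = (f_d / f_0) ** 2."""
    return (f_damaged / f_undamaged) ** 2

# Example: a 10% drop in the first natural frequency (values are illustrative)
ratio = stiffness_ratio(9.0, 10.0)
print(f"stiffness retained: {ratio:.2f}, stiffness lost: {1 - ratio:.2f}")
```

The quadratic relation explains why even modest frequency shifts indicate substantial stiffness loss: a 10% frequency drop corresponds to roughly 19% loss of stiffness.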

Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis

Procedia PDF Downloads 184
3031 Mutation Analysis of the ATP7B Gene in 43 Vietnamese Wilson’s Disease Patients

Authors: Huong M. T. Nguyen, Hoa A. P. Nguyen, Mai P. T. Nguyen, Ngoc D. Ngo, Van T. Ta, Hai T. Le, Chi V. Phan

Abstract:

Wilson’s disease (WD) is an autosomal recessive disorder of copper metabolism caused by mutations in the copper-transporting P-type ATPase gene (ATP7B). The mechanism of this disease is the failure of hepatic excretion of copper into bile, which leads to copper deposits in the liver and other organs. The ATP7B gene is located on the long arm of chromosome 13 (13q14.3). This study aimed to investigate gene mutations in Vietnamese patients with WD and to make presymptomatic diagnoses for their family members. Forty-three WD patients and their 65 siblings were screened for ATP7B gene mutations. Genomic DNA was extracted from peripheral blood samples; the 21 exons and exon-intron boundaries of the ATP7B gene were analyzed by direct sequencing. We identified four novel mutations ([R723=; H724Tfs*34], V1042Cfs*79, D1027H, and IVS6+3A>G) among a total of 20 detected mutations, which together accounted for 87.2% of the mutations found. Mutation S105* occurred at a high rate (32.6%) in this study. The hotspot regions of ATP7B were found in exons 2, 16, and 8 and intron 14, in 39.6%, 11.6%, 9.3%, and 7% of patients, respectively. Among nine homozygous/compound heterozygous siblings of the patients with WD, three individuals were determined to be asymptomatic by screening for the probands’ mutations; they would begin treatment after diagnosis. In conclusion, 20 different mutations were detected in 43 WD patients, four of which were novel: [R723=; H724Tfs*34], V1042Cfs*79, D1027H, and IVS6+3A>G. The mutation S105* is the most prevalent and can be considered a biomarker for use in a rapid detection assay for the diagnosis of WD. Exons 2, 8, and 16 and intron 14 should be screened first for WD patients in Vietnam. Based on the risk profile for WD, genetic testing of presymptomatic patients is also useful for diagnosis and treatment.

Keywords: ATP7B gene, mutation detection, presymptomatic diagnosis, Vietnamese Wilson’s disease

Procedia PDF Downloads 380
3030 Optimal Planning of Dispatchable Distributed Generators for Power Loss Reduction in Unbalanced Distribution Networks

Authors: Mahmoud M. Othman, Y. G. Hegazy, A. Y. Abdelaziz

Abstract:

This paper proposes a novel heuristic algorithm that aims to determine the best size and location of distributed generators in unbalanced distribution networks. The proposed heuristic algorithm can deal with planning cases where power loss is to be optimized without violating the system's practical constraints. The distributed generation units in the proposed algorithm are modeled as voltage-controlled nodes with the flexibility to be converted to constant-power-factor nodes in case of reactive power limit violations. The proposed algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results obtained show the effectiveness of the proposed algorithm.
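The conversion from a voltage-controlled (PV) node to a constant-power-factor (PQ) node on reactive limit violation can be sketched as a simple rule. The variable names and limits here are illustrative only, not taken from the paper:

```python
def enforce_q_limit(q_required, q_min, q_max):
    """Model a DG unit as a voltage-controlled (PV) node, converting it to a
    constant-power (PQ) node when the reactive power limit is violated."""
    if q_min <= q_required <= q_max:
        return "PV", q_required          # limit respected: keep voltage control
    q_clamped = max(q_min, min(q_required, q_max))
    return "PQ", q_clamped               # limit hit: fix Q at the violated bound

# Hypothetical reactive power requests (per-unit), limits +/- 0.5 pu
print(enforce_q_limit(0.3, -0.5, 0.5))   # ('PV', 0.3)
print(enforce_q_limit(0.8, -0.5, 0.5))   # ('PQ', 0.5)
```

In a power-flow loop this check runs each iteration, so a unit that hits its limit stops regulating voltage and instead injects its limiting reactive power.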

Keywords: distributed generation, heuristic approach, optimization, planning

Procedia PDF Downloads 525
3029 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. With patients being required to maintain a constant lying posture, which can lead to bedsores, the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating patient privacy. Therefore, in this study, we propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component generates large vibrations during roll-over. Thus, analysis of the body-motion component facilitates detection of a roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the root mean square (RMS) value of the time series of the body-motion component, calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and the bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass while removing the noise component.
The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and data sets were 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. The recorded data were divided into 48 segments of 10 s each, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate large amplitudes. However, clear differences between decubitus changes and the roll-over motion could not be confirmed. Extant camera-based approaches have a disadvantage in terms of patient privacy. The proposed method, however, demonstrates precise detection of patient roll-over tendencies without violating privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since in this paper we could not confirm clear differences between decubitus changes and the roll-over motion, future studies could utilize the respiration and pulse components.
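The segment-RMS thresholding described above can be sketched directly. The sampling rate, threshold, and signal values below are invented for illustration (the study used 100 Hz sampling and 10 s segments):

```python
import math

def rms(values):
    """Root mean square of a sequence of samples."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def detect_rollover(signal, fs, segment_s=10, threshold=1.0):
    """Split the body-motion signal into fixed-length segments, compute the
    RMS of each, and return the indices of segments exceeding the threshold."""
    seg_len = int(fs * segment_s)
    flagged = []
    for i, start in enumerate(range(0, len(signal) - seg_len + 1, seg_len)):
        if rms(signal[start:start + seg_len]) > threshold:
            flagged.append(i)
    return flagged

# Hypothetical 30 s trace at 10 Hz: quiet, roll-over burst, quiet again
quiet, burst = [0.1] * 100, [3.0] * 100
print(detect_rollover(quiet + burst + quiet, fs=10))  # [1]
```

The threshold would in practice be tuned per sensor installation, since mattress and bed-frame coupling change the absolute signal level.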

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 121
3028 Detection and Quantification of Ochratoxin A in Food by Aptasensor

Authors: Moez Elsaadani, Noel Durand, Brice Sorli, Didier Montet

Abstract:

Governments and international bodies are trying to improve food safety systems to prevent, reduce, or avoid the increase of foodborne diseases. This food risk is one of the major concerns for humanity. Contamination by mycotoxins is a threat to the health and lives of humans and animals. One of the most common mycotoxins contaminating feed and foodstuffs is Ochratoxin A (OTA), a secondary metabolite produced by Aspergillus and Penicillium strains. OTA has a chronic toxic effect and has proved to be mutagenic, nephrotoxic, teratogenic, immunosuppressive, and carcinogenic. On the other hand, because of their high stability, specificity, and affinity, and their easy chemical synthesis, aptamer-based methods are applied to OTA biosensing as an alternative to traditional analytical techniques. In this work, five aptamers were tested to confirm, qualitatively and quantitatively, their binding with OTA. At the same time, three different analytical methods were tested and compared based on their ability to detect and quantify OTA. The best protocol established to separate free OTA from bound OTA involved an ultrafiltration method in green coffee solution. OTA was quantified by HPLC-FLD to calculate the binding percentage of all five aptamers. One aptamer (the most effective, with 87% binding with OTA) was selected as our biorecognition element, in order to study its electrical response (variation of electrical properties) in the presence of OTA and enable pairing with a radio frequency identification (RFID) device. This device, characterized by its low cost, speed, and simple wireless information transmission, will combine molecular sensors for mycotoxins (aptamers) with an electronic device that links detection and quantification and makes the information available to operators.
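The binding percentage obtained from the HPLC-FLD measurements is a simple mass balance between total and free OTA after ultrafiltration. A sketch with invented concentrations (not the authors' data):

```python
def binding_percentage(total_ota, free_ota):
    """Fraction of OTA bound by the aptamer, from measured concentrations:
    bound = total - free, since ultrafiltration retains the aptamer-OTA
    complex and lets free OTA pass."""
    return 100 * (total_ota - free_ota) / total_ota

# Hypothetical concentrations in ng/mL
print(binding_percentage(total_ota=100.0, free_ota=13.0))  # 87.0
```

A result of 87% here mirrors the binding level reported for the best-performing aptamer.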

Keywords: aptamer, aptasensor, detection, Ochratoxin A

3027 Telemedicine Services in Ophthalmology: A Review of Studies

Authors: Nasim Hashemi, Abbas Sheikhtaheri

Abstract:

Telemedicine is the use of telecommunication and information technologies to provide health care services to people in remote areas, services that would otherwise often not be consistently available in distant rural communities. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Thus, teleophthalmology can overcome geographical barriers and improve the quality, access, and affordability of eye health care services. Since teleophthalmology has been widely applied in recent years, the aim of this study was to determine its different applications worldwide. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases, in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included with no time restriction; the search covered publications up to 2015. Finally, 70 articles were reviewed. We classified the results by the 'type of eye problems covered' and the 'type of telemedicine services'. Based on the review, from the perspective of health care levels, there are three levels of eye health care: primary, secondary, and tertiary eye care. From the perspective of eye care services, the main application of teleophthalmology in primary eye care was the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus, and age-related macular degeneration. The main application of teleophthalmology in secondary and tertiary eye care was the screening of eye problems, e.g., diabetic retinopathy, astigmatism, and glaucoma.
Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology services worldwide. Real-time, store-and-forward, and hybrid methods were the main modes of communication, chosen according to the IT infrastructure between sending and receiving centers. For specialists, the most important benefits of teleophthalmology were early detection of serious age-related ophthalmic diseases in the population, screening of eye disease processes, consultation in emergency cases, and comprehensive eye examination. For patients, the main advantages were the cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services, and access to specialist opinions. Teleophthalmology brings valuable secondary and tertiary care to remote areas. Therefore, applying teleophthalmology for detection, treatment, and screening purposes, and expanding its use to new applications such as eye surgery, will be a key tool to promote public health and integrate eye care into primary health care.

Keywords: applications, telehealth, telemedicine, teleophthalmology
