Search results for: rumor detection
Paper Count: 3474


2244 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection by integrating alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adapters. In the alignment step, the reads were accurately mapped to the reference genome, and the subsequent variant calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. Its modularity and flexibility enable customization and adaptation to various datasets and research settings. Further optimization and validation are necessary to enhance performance and applicability across diverse populations.
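
A minimal sketch of the alignment, variant calling, and annotation steps named above, assuming paired-end FASTQ input and locally installed command-line tools; the file names, adapter file, and tool flags are illustrative placeholders rather than the authors' exact settings.

```python
# Sketch of the FastQC -> Trimmomatic -> BWA -> bcftools stages of the pipeline.
# Assumes the reference genome has already been indexed (bwa index, samtools faidx).
import subprocess

def run(cmd):
    """Run one pipeline step and fail loudly if it errors."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

ref = "reference.fa"
r1, r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

run(f"fastqc {r1} {r2}")                                    # quality check
run("trimmomatic PE "                                       # adapter/quality trimming
    f"{r1} {r2} r1_p.fq.gz r1_u.fq.gz r2_p.fq.gz r2_u.fq.gz "
    "ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:36")
run(f"bwa mem {ref} r1_p.fq.gz r2_p.fq.gz | "                # alignment + sorting
    "samtools sort -o sample.sorted.bam -")
run("samtools index sample.sorted.bam")
run(f"bcftools mpileup -f {ref} sample.sorted.bam | "        # variant calling
    "bcftools call -mv -Ov -o sample.vcf")
# Annotation (e.g. with ANNOVAR's table_annovar.pl) would follow, flagging variants
# in Lynch Syndrome-associated mismatch repair genes such as MLH1, MSH2, MSH6, PMS2.
```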

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 80
2243 A Differential Detection Method for Chip-Scale Spin-Exchange Relaxation Free Atomic Magnetometer

Authors: Yi Zhang, Yuan Tian, Jiehua Chen, Sihong Gu

Abstract:

The chip-scale spin-exchange relaxation free (SERF) atomic magnetometer makes use of millimeter-scale vapor cells micro-fabricated by the Micro-electromechanical Systems (MEMS) technique and the SERF mechanism, resulting in high spatial resolution and high sensitivity. It is useful for biomagnetic imaging, including magnetoencephalography and magnetocardiography. In a prevailing scheme, a circularly polarized on-resonance laser beam is adopted for both pumping and probing the atomic polarization, and the magnetic-field-sensitive signal is extracted from the enhancement of the transmitted laser intensity as the atomic polarization increases on zero-field level-crossing resonance. The scheme is very suitable for integration; however, laser amplitude modulation (AM) noise and laser frequency-modulation-to-amplitude-modulation (FM-AM) noise are superimposed on the photon shot noise, reducing the signal-to-noise ratio (SNR). To suppress AM and FM-AM noise, this paper puts forward a novel scheme that adopts circularly polarized on-resonance light for pumping and a linearly polarized frequency-detuned laser for probing. The transmitted beam is divided into transmission and reflection beams by a polarization analyzer, with the angle between the analyzer's transmission polarization axis and the frequency-detuned laser polarization direction set to 45°. The magnetic-field-sensitive signal is extracted from the polarization rotation of the frequency-detuned laser, which increases the intensity difference between the two beams as the atomic polarization increases. Therefore, the AM and FM-AM noise in the two beams is common-mode and can be almost entirely canceled by differential detection. We have carried out an experiment to study our scheme. The experiment reveals that the noise in the differential signal is obviously smaller than that in each individual beam. The scheme is promising for developing more sensitive chip-scale magnetometers.
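
A minimal sketch of the balanced-detection principle behind this scheme, assuming ideal optics, a small optical rotation angle φ, and the analyzer at 45°; this is a simplification, not the authors' full model.

```latex
% Transmitted and reflected intensities behind the analyzer set at 45°, for a probe
% beam of power I_0 whose polarization is rotated by a small angle \varphi:
I_{t} = \tfrac{I_{0}}{2}\bigl(1 + \sin 2\varphi\bigr), \qquad
I_{r} = \tfrac{I_{0}}{2}\bigl(1 - \sin 2\varphi\bigr)
% Differential detection: AM and FM-AM fluctuations enter through I_0 and are common
% to both beams, so they largely cancel in the difference near \varphi \approx 0,
% while the magnetic-field-dependent rotation signal survives:
I_{t} - I_{r} = I_{0}\sin 2\varphi \approx 2 I_{0}\varphi
```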

Keywords: atomic magnetometer, chip scale, differential detection, spin-exchange relaxation free

Procedia PDF Downloads 171
2242 Molecular Detection of Leishmania from the Phlebotomus Genus: Tendency towards Leishmaniasis Regression in Constantine, North-East of Algeria

Authors: K. Frahtia, I. Mihoubi, S. Picot

Abstract:

Leishmaniasis is a group of parasitic diseases with varied clinical expression caused by flagellate protozoa of the Leishmania genus. These diseases are transmitted to humans and animals by the bite of a vector insect, the female sandfly. Among the groups of dipteran disease vectors, Phlebotominae occupy a prime position and play a significant role in human pathology, such as leishmaniasis, which affects nearly 350 million people worldwide. The vector control operation launched by health services throughout the country appears to be effective since, although the prevalence of the disease remains high, especially in rural areas, leishmaniasis appears to be declining in Algeria. In this context, this study mainly concerns the molecular detection of Leishmania from the vector. Furthermore, a molecular diagnosis was also made on skin samples taken from patients in the region of Constantine, located in the North-East of Algeria. Concerning the vector, 5858 sandflies were captured, including 4360 males and 1498 females. Male specimens were identified based on their morphological characteristics. The morphological identification highlighted the presence of the Phlebotomus genus with a prevalence of 93%, against 7% represented by the Sergentomyia genus. Regarding the identified species, P. perniciosus was the most abundant, with 59.4% of the identified male population, followed by P. longicuspis with 24.7% of the specimens. P. perfiliewi was poorly represented with 6.7% of specimens, followed by P. papatasi with 2.2% and S. dreyfussi with 1.5%. Concerning skin samples, 45/79 (56.96%) collected samples were found positive by real-time PCR. This rate appears to be in sharp decline compared to previous years (alert peak of 30,227 cases in 2005). Concerning the detection of Leishmania from sandflies by RT-PCR, the results show that 3 of the 60 genus-level PCRs performed were positive, with melting temperatures corresponding to that of the reference strain (84.1 ± 0.4 °C for L. infantum). This proves that the vectors were parasitized. On the other hand, species-level identification by RT-PCR did not give any results. This could be explained by the presence of an insufficient amount of leishmanial DNA in the vector, and it therefore supports the hypothesis of the regression of leishmaniasis in Constantine.

Keywords: Algeria, molecular diagnostic, phlebotomus, real time PCR

Procedia PDF Downloads 273
2241 Understanding Jordanian Women's Values and Beliefs Related to Prevention and Early Detection of Breast Cancer

Authors: Khlood F. Salman, Richard Zoucha, Hani Nawafleh

Abstract:

Introduction: Jordan ranks fourth in breast cancer prevalence after Lebanon, Bahrain, and Kuwait. Considerable evidence shows that cultural, ethnic, and economic differences influence a woman's practice of early detection and prevention of breast cancer. Objectives: To understand women's health beliefs and values in relation to early detection of breast cancer, and to explore the impact of these beliefs on their decisions regarding reluctance or acceptance of early detection measures such as mammogram screening. Design: A qualitative focused ethnography was used to collect data for this study. Settings: The study was conducted in the second largest city, surrounded by a large rural area, in Ma'an, Jordan. Participants: A total of twenty-seven women with no history of breast cancer, aged 18 and older, who had prior experience with health providers and were willing to share elements of personal health beliefs related to breast health within the larger cultural context. The participants were recruited using the snowball method and word of mouth. Data collection and analysis: A short questionnaire was designed to collect socio-demographic data (SDQ) from all participants. A semi-structured interview guide was used to elicit data through interviews with the informants. NVivo 10, a data management program, was utilized to assist with data analysis. Leininger's four phases of qualitative data analysis were used as a guide: 1) collecting and documenting raw data, 2) identifying descriptors and categories according to the domains of inquiry and research questions, with emic and etic data coded for similarities and differences, 3) identifying patterns and performing contextual analysis, discovering saturation of ideas and recurrent patterns, and 4) identifying themes, theoretical formulations, and recommendations. Findings: Three major themes emerged within the cultural and religious context: 1. fear, denial, embarrassment, and lack of knowledge were common perceptions of Ma'ani women regarding breast health and screening mammography; 2. health care professionals in Jordan were not quick to offer information and education about breast cancer and screening; and 3. willingness to learn about breast health and cancer prevention. Conclusion: The study indicated that the disparities between the infrastructure and resourcing in rural and urban areas of Jordan, knowledge deficits related to breast cancer, and lack of education about breast health may impact women's decisions to go for mammogram screening. Cultural beliefs, fear, and embarrassment, as well as providers' lack of focus on breast health, were significant barriers to practicing breast health. Health providers and policy makers should provide resources for the establishment of health education programs regarding breast cancer early detection and mammography screening. Nurses should play a major role in delivering health education about breast health in general and breast cancer in particular. Culturally appropriate health awareness messages can be used in creating educational programs that can be employed at the national level.

Keywords: breast health, beliefs, cultural context, ethnography, mammogram screening

Procedia PDF Downloads 300
2240 Development of Sulfite Biosensor Based on Sulfite Oxidase Immobilized on 3-Aminopropyltriethoxysilane Modified Indium Tin Oxide Electrode

Authors: Pawasuth Saengdee, Chamras Promptmas, Ting Zeng, Silke Leimkühler, Ulla Wollenberger

Abstract:

Sulfite has been used as a versatile preservative to limit microbial growth and to control the taste in some foods and beverages. However, it has been reported to cause a wide spectrum of severe adverse reactions. Therefore, it is important to determine the amount of sulfite in food and beverages to ensure consumer safety. An efficient electrocatalytic biosensor for sulfite detection was developed by immobilizing human sulfite oxidase (hSO) on a 3-aminopropyltriethoxysilane (APTES) modified indium tin oxide (ITO) electrode. Cyclic voltammetry was employed to investigate the electrochemical characteristics of the hSO-modified ITO electrode for various pretreatment and binding conditions. Amperometry was also utilized to demonstrate the current responses of the sulfite sensor toward sodium sulfite in an aqueous solution at a potential of 0 V (vs. Ag/AgCl, 1 M KCl). The proposed sulfite sensor has a linear range between 0.5 and 2 mM with a correlation coefficient of 0.972. An additional polymer layer of PVA was then introduced to extend the linear range of the sulfite sensor and protect the enzyme. The linear range of the sensor with 5% PVA coverage increases from 2.8 to 20 mM with a correlation coefficient of 0.983. In addition, the stability of the sulfite sensor with 5% PVA coverage extends to 14 days when kept in 0.5 mM Tris buffer, pH 7.0, at 4 °C. Therefore, this sensor could be applied for the detection of sulfite in real samples, especially in food and beverages.
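
A minimal sketch of the amperometric calibration implied above: fit a straight line to (sulfite concentration, steady-state current) pairs, report the correlation coefficient, and invert the line to quantify an unknown sample; the data points below are illustrative, not the measured values.

```python
import numpy as np

conc_mM = np.array([0.5, 1.0, 1.5, 2.0])          # sodium sulfite standards
current_nA = np.array([12.1, 23.8, 36.4, 47.9])   # hypothetical sensor responses

slope, intercept = np.polyfit(conc_mM, current_nA, 1)
r = np.corrcoef(conc_mM, current_nA)[0, 1]
print(f"sensitivity = {slope:.1f} nA/mM, intercept = {intercept:.1f} nA, r = {r:.3f}")

# An unknown sample is then quantified by inverting the calibration line:
unknown_current = 30.0
print(f"estimated sulfite = {(unknown_current - intercept) / slope:.2f} mM")
```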

Keywords: sulfite oxidase, bioelectrocatalysis, indium tin oxide, direct electrochemistry, sulfite sensor

Procedia PDF Downloads 231
2239 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrated that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
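
A minimal sketch of an LSTM-autoencoder anomaly detector of the kind described above, assuming fixed-length windows of multivariate water-quality readings; the layer sizes, placeholder data, and threshold rule are illustrative, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

timesteps, n_features = 24, 4          # e.g. hourly pH, turbidity, DO, conductivity
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),                          # encoder
    tf.keras.layers.RepeatVector(timesteps),
    tf.keras.layers.LSTM(32, return_sequences=True),   # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(256, timesteps, n_features).astype("float32")  # placeholder data
model.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)

# Windows whose reconstruction error exceeds a threshold learned on normal data
# are flagged as anomalies in the purification process.
recon = model.predict(x_train, verbose=0)
errors = np.mean((recon - x_train) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", int(np.sum(errors > threshold)))
```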

Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring

Procedia PDF Downloads 152
2238 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial examples attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find the perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions that are detected by the R-CNN and resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model could drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
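
A minimal sketch of the FGSM perturbation applied to one detected region, assuming a PyTorch image classifier; the stand-in model, label, and epsilon are placeholders, and the cropping and re-insertion of R-CNN regions described above is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for the attacked CNN
model.eval()

def fgsm(region, label, eps=0.03):
    """Return the adversarial version of a (1, 3, H, W) region tensor."""
    region = region.clone().requires_grad_(True)
    loss = F.cross_entropy(model(region), label)
    loss.backward()
    # Move each pixel one step of size eps in the direction that increases the loss.
    adv = region + eps * region.grad.sign()
    return adv.clamp(0, 1).detach()

region = torch.rand(1, 3, 224, 224)       # resized detection region (placeholder)
label = torch.tensor([207])               # originally predicted class (placeholder)
adv_region = fgsm(region, label)
print("max pixel change:", (adv_region - region).abs().max().item())
```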

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 193
2237 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help monitor large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital tomato images collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images with a color representation that highlights the injuries on the plant. The new images contain only green, red, or black pixels, depending on whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage on tomato leaves caused by late blight.
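
A minimal sketch of the per-pixel scheme described above: one MLP votes for "healthy leaf" and another for "injured leaf" from RGB+HSL features, and the two outputs are combined into a green/red/black repainting. The feature values, labels, and network sizes are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 6))                 # per-pixel features: R, G, B, H, S, L
y_healthy = (X[:, 1] > 0.5).astype(int)   # placeholder labels: "greenish" pixels
y_injured = (X[:, 0] > 0.6).astype(int)   # placeholder labels: "reddish" pixels

healthy_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y_healthy)
injured_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y_injured)

def repaint(pixel_features):
    """Combine the two networks: green = healthy, red = injured, black = background."""
    h = healthy_net.predict(pixel_features)
    i = injured_net.predict(pixel_features)
    colors = np.zeros((len(pixel_features), 3), dtype=np.uint8)   # default: black
    colors[h == 1] = [0, 255, 0]
    colors[i == 1] = [255, 0, 0]          # injury overrides healthy where both fire
    return colors

print(repaint(X[:5]))
```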

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 330
2236 Fusion Models for Cyber Threat Defense: Integrating Clustering, Random Forests, and Support Vector Machines against Windows Malware

Authors: Azita Ramezani, Atousa Ramezani

Abstract:

In the ever-escalating landscape of Windows malware, the necessity for pioneering defense strategies becomes undeniable. This study introduces an avant-garde approach fusing the capabilities of clustering, random forests, and support vector machines (SVM) to combat the intricate web of cyber threats. Our fusion model achieves a staggering accuracy of 98.67% and an equally formidable F1 score of 98.68%, a testament to its effectiveness in the realm of Windows malware defense. By deciphering the intricate patterns within malicious code, our model not only raises the bar for detection precision but also redefines the paradigm of cybersecurity preparedness. This breakthrough underscores the potential embedded in the fusion of diverse analytical methodologies and signals a paradigm shift in fortifying against the relentless evolution of Windows malicious threats. As we traverse the dynamic cybersecurity terrain, this research serves as a beacon illuminating the path toward a resilient future where innovative fusion models stand at the forefront of cyber threat defense.
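
A minimal sketch of one way to fuse clustering, random forests, and an SVM for malware classification; the cluster-distance features and soft-voting ensemble are assumptions, since the abstract does not specify the fusion scheme, and the data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(1)
X = rng.random((600, 20))                       # placeholder static-analysis features
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)       # placeholder benign/malware labels

# Unsupervised step: append distances to K cluster centers as extra features.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
X_aug = np.hstack([X, km.transform(X)])

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
fusion = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    voting="soft",
)
fusion.fit(X_tr, y_tr)
pred = fusion.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.4f}  f1={f1_score(y_te, pred):.4f}")
```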

Keywords: fusion models, cyber threat defense, windows malware, clustering, random forests, support vector machines (SVM), accuracy, f1-score, cybersecurity, malicious code detection

Procedia PDF Downloads 72
2235 Bridging Urban Planning and Environmental Conservation: A Regional Analysis of Northern and Central Kolkata

Authors: Tanmay Bisen, Aastha Shayla

Abstract:

This study introduces an advanced approach to tree canopy detection in urban environments and a regional analysis of Northern and Central Kolkata that delves into the intricate relationship between urban development and environmental conservation. Leveraging high-resolution drone imagery from diverse urban green spaces in Kolkata, we fine-tuned the deep forest model to enhance its precision and accuracy. Our results, characterized by an impressive Intersection over Union (IoU) score of 0.90 and a mean average precision (mAP) of 0.87, underscore the model's robustness in detecting and classifying tree crowns amidst the complexities of aerial imagery. This research not only emphasizes the importance of model customization for specific datasets but also highlights the potential of drone-based remote sensing in urban forestry studies. The study investigates the spatial distribution, density, and environmental impact of trees in Northern and Central Kolkata. The findings underscore the significance of urban green spaces in metropolitan cities, emphasizing the need for sustainable urban planning that integrates green infrastructure for ecological balance and human well-being.
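
A minimal sketch of the Intersection over Union (IoU) metric reported above, for a pair of axis-aligned tree-crown bounding boxes given as (xmin, ymin, xmax, ymax); the example coordinates are illustrative.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted crown vs. a hand-labelled crown.
print(round(iou((10, 10, 60, 60), (15, 12, 62, 58)), 3))
```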

Keywords: urban greenery, advanced spatial distribution analysis, drone imagery, deep learning, tree detection

Procedia PDF Downloads 57
2234 Detection of Some Drugs of Abuse from Fingerprints Using Liquid Chromatography-Mass Spectrometry

Authors: Ragaa T. Darwish, Maha A. Demellawy, Haidy M. Megahed, Doreen N. Younan, Wael S. Kholeif

Abstract:

Testing for drugs of abuse is essential in order to confirm drug misuse. Several analytical approaches have been developed for the detection of drugs of abuse in pharmaceutical and common biological samples, but few methodologies have been created to identify them from fingerprints. Liquid Chromatography-Mass Spectrometry (LC-MS) plays a major role in this field. The current study aimed at assessing the possibility of detecting some drugs of abuse (tramadol, clonazepam, and phenobarbital) from fingerprints using LC-MS in drug abusers. The aim was extended to assess the possibility of detecting the above-mentioned drugs in the fingerprints of drug handlers up to three days after handling the drugs. The study was conducted on randomly selected adult individuals who were either drug abusers seeking treatment at centers for drug dependence in Alexandria, Egypt, or normal volunteers who were asked to handle the different studied drugs (drug handlers). Informed consent was obtained from all individuals. Participants were classified into 3 groups: a control group of 50 normal individuals (neither abusing nor handling drugs); a drug abuser group of 30 individuals who abused tramadol, clonazepam, or phenobarbital (10 individuals for each drug); and a drug handler group of 50 individuals who touched either the powder of the drugs of abuse, tramadol, clonazepam, or phenobarbital (10 individuals for each drug), or the powder of the control substances, acetylsalicylic acid and acetaminophen (10 individuals for each drug), which are of similar appearance (white powder) and might be used in the adulteration of drugs of abuse. Samples were taken from the handler individuals for three consecutive days for the same individual. The diagnosis of drug abusers was based on the current Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and urine screening tests using an immunoassay technique. Preliminary drug screening tests of urine samples were also done for the drug handler and control groups to indicate the presence or absence of the studied drugs of abuse. Fingerprints of all participants were then taken on a filter paper previously soaked with methanol, to be analyzed by LC-MS using a SCIEX Triple Quad or QTRAP 5500 System. The concentration of drugs in each sample was calculated using the regression equations between concentration in ng/ml and the peak area of each reference standard. All fingerprint samples from drug abusers showed positive LC-MS results for the tested drugs, while all samples from the control individuals showed negative results. A significant relationship was noted between the concentration of the drugs and the duration of abuse. Tramadol, clonazepam, and phenobarbital were also successfully detected from the fingerprints of drug handlers up to 3 days after handling the drugs. The mean concentration of the chosen drugs of abuse among the handler group decreased as the number of days since handling increased.

Keywords: drugs of abuse, fingerprints, liquid chromatography–mass spectrometry, tramadol

Procedia PDF Downloads 123
2233 On the Use of Machine Learning for Tamper Detection

Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode

Abstract:

The attack surface on computing devices is becoming very sophisticated, driven by the sheer increase in interconnected devices, reaching 50B in 2025, which makes it easier for adversaries to have direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or those used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering. It consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks, and identify new types of malicious behaviors. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and a further four forms of physical attacks. In addition, a systematic threat modeling analysis and security validation were carried out, which indicated that the proposed solution provides better protection against threats including information leakage, loss of data, and disruption of operation.

Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT

Procedia PDF Downloads 154
2232 High-Resolution ECG Automated Analysis and Diagnosis

Authors: Ayad Dalloo, Sulaf Dalloo

Abstract:

Electrocardiogram (ECG) recording is prone to complications on analysis by physicians due to noise and artifacts, creating ambiguity that can lead to errors of diagnosis. Such drawbacks may be overcome with the advent of high-resolution methods, such as discrete wavelet analysis and digital signal processing (DSP) techniques. This ECG signal analysis is implemented in three stages: ECG preprocessing, feature extraction, and classification, with the aim of realizing high-resolution ECG diagnosis and improved detection of abnormal conditions in the heart. The preprocessing stage involves removing spurious artifacts (noise) due to factors such as muscle contraction, motion, and respiration. ECG features are extracted by applying DSP and the suggested sloping-method techniques. These measured features represent the peak amplitude values and intervals of the P, Q, R, S, R', and T waves on the ECG, and other features such as ST elevation, QRS width, heart rate, electrical axis, and the QR and QT intervals. The classification is performed using these extracted features and the criteria for cardiovascular diseases. The ECG diagnostic system is successfully applied to 12-lead ECG recordings for 12 cases. The system is provided with information to enable it to diagnose 15 different diseases. The physician's and computer's diagnoses are compared with 90% agreement, with respect to the physician's diagnosis, and the time taken for diagnosis is 2 seconds. All of these operations are programmed in the MATLAB environment.
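
A minimal sketch of one feature-extraction step implied above: locating R peaks in a single ECG lead and deriving the heart rate. The synthetic signal and thresholds are illustrative, and the paper's own sloping-method details are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 500                                            # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63             # crude synthetic QRS-like spikes at 1.2 Hz

# R peaks: tall, well-separated maxima (refractory distance ~0.3 s).
peaks, _ = find_peaks(ecg, height=0.6, distance=int(0.3 * fs))
rr_intervals = np.diff(peaks) / fs                  # R-R intervals in seconds
heart_rate = 60.0 / rr_intervals.mean()
print(f"{len(peaks)} beats detected, heart rate ~ {heart_rate:.0f} bpm")
```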

Keywords: ECG diagnostic system, QRS detection, ECG baseline removal, cardiovascular diseases

Procedia PDF Downloads 297
2231 Steel Bridge Coating Inspection Using Image Processing with Neural Network Approach

Authors: Ahmed Elbeheri, Tarek Zayed

Abstract:

Steel bridge deterioration has been one of the major problems in North America in recent years, mainly attributed to difficult weather conditions. Steel bridges suffer fatigue cracks and corrosion, which necessitate immediate inspection. Visual inspection is the most common technique for steel bridge inspection, but it depends on the inspector's experience, conditions, and work environment. Many Non-destructive Evaluation (NDE) models have therefore been developed that use non-destructive technologies to be more accurate, reliable, and less dependent on human judgment. Non-destructive techniques such as the eddy current method, the radiographic method (RT), the ultrasonic method (UT), infrared thermography, and laser technology have been used. Digital image processing will be used for corrosion detection as an alternative to visual inspection. Different models have used grey-level and color digital images for processing. However, color images proved to be better, as the color of the rust distinguishes it from the different backgrounds. The detection of rust is an important process, as it is the first warning of corrosion and a sign of coating erosion. To decide which steel element should be repainted and how urgent the repair is, the percentage of rust should be calculated. In this paper, an image processing approach is developed to detect corrosion and its severity. Two models were developed: the first to detect rust and the second to quantify the rust percentage.
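
A minimal sketch of a color-based rust quantification of the kind described above: threshold reddish-brown pixels in HSV space and report their share of the image. The image path and HSV bounds are illustrative assumptions, not the calibrated values of the two models.

```python
import cv2
import numpy as np

img = cv2.imread("bridge_element.jpg")              # hypothetical inspection photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough reddish-brown band; real thresholds would be tuned on labelled samples.
lower = np.array([0, 60, 40])
upper = np.array([25, 255, 200])
rust_mask = cv2.inRange(hsv, lower, upper)

rust_percent = 100.0 * np.count_nonzero(rust_mask) / rust_mask.size
print(f"estimated rust coverage: {rust_percent:.1f}%")
```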

Keywords: steel bridge, bridge inspection, steel corrosion, image processing

Procedia PDF Downloads 306
2230 Carbon-Nanodots Modified Glassy Carbon Electrode for the Electroanalysis of Selenium in Water

Authors: Azeez O. Idris, Benjamin O. Orimolade, Potlako J. Mafa, Alex T. Kuvarega, Usisipho Feleni, Bhekie B. Mamba

Abstract:

We report a simple and low-cost method for the electrochemical detection of Se(IV) using carbon nanodots (CNDTs) prepared from oat. The carbon nanodots were synthesised by a green and facile approach and characterised using scanning electron microscopy, high-resolution transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, and Raman spectroscopy. The CNDTs were used to fabricate an electrochemical sensor for the quantification of Se(IV) in water. The modification of a glassy carbon electrode (GCE) with carbon nanodots led to an increase in the electroactive surface area of the electrode, which enhances the redox peak current of [Fe(CN)₆]³⁻/⁴⁻ in comparison to the bare GCE. Using square wave voltammetry, a detection limit of 0.05 ppb and a quantification limit of 0.167 ppb were obtained under the optimised parameters: a deposition potential of -200 mV, 0.1 M HNO₃ electrolyte, an electrodeposition time of 60 s, and pH 1. The results further revealed that the GCE-CNDT was not susceptible to many interfering cations except Cu(II), Pb(II), and Fe(II). The sensor fabrication involves a one-step electrode modification, and the sensor was used to detect Se(IV) in a real water sample, with the result obtained in agreement with the inductively coupled plasma technique. Overall, the electrode offers a cheap, fast, and sensitive way of detecting selenium in environmental matrices.

Keywords: carbon nanodots, square wave voltammetry, nanomaterials, selenium, sensor

Procedia PDF Downloads 92
2229 Ischemic Stroke Detection in Computed Tomography Examinations

Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina

Abstract:

Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used due to its wide availability and rapid diagnosis. Detection depends on the size and severity of the lesions and the time between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed in two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with a window width between 80 and 100 Hounsfield Units. We used different image processing techniques such as morphological filters, the discrete wavelet transform, and Fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region. These subjective results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters actually improve the ischemic areas for subjective evaluations. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tended to be smaller than those obtained by the algorithm. These results show the importance of computer-aided diagnosis software to assist neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
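
A minimal sketch of Fuzzy C-means clustering applied to CT pixel intensities (in Hounsfield Units) to separate candidate ischemic tissue from normal parenchyma; the synthetic intensities, number of clusters, and fuzziness parameter m are illustrative, not the study's settings.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, n_iter=100, seed=0):
    """x: (N,) 1-D data. Returns cluster centers and an (N, c) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-9
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return centers, u

pixels = np.concatenate([np.random.normal(32, 2, 500),    # hypothetical normal-tissue HU
                         np.random.normal(24, 2, 120)])   # hypothetical ischemic HU
centers, u = fuzzy_cmeans(pixels)
labels = u.argmax(axis=1)
print("cluster centers (HU):", np.round(centers, 1))
```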

Keywords: ischemic stroke, image processing, CT scans, Fuzzy C-means

Procedia PDF Downloads 369
2228 An Experimental Study on the Optimum Installation of Fire Detectors for Early-Stage Fire Detection in Rack-Type Warehouses

Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi

Abstract:

Rack-type warehouses differ from general buildings in the kinds, amount, and arrangement of stored goods, so their fire risk is different from that of other buildings. The fire pattern of rack-type warehouses also differs with the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, but the development of the fire is closely related to the kinds of stored materials and the storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection time may be delayed because there are fewer occupants than in office and commercial buildings. If the fire detectors installed in a rack-type warehouse are unsuitable, a warehouse fire may grow into a major fire because of delayed detection. In this paper, we studied which kinds of fire detectors are optimal for the early detection of rack-type warehouse fires through real-scale fire tests. The fire detectors used in the tests were rate-of-rise type, fixed type, photoelectric type, and aspirating type detectors. We suggest an optimum fire detection method for rack-type warehouses based on the response characteristics and a comparative analysis of the fire detectors.

Keywords: fire detector, rack, response characteristic, warehouse

Procedia PDF Downloads 747
2227 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, so automatic methods based on machine learning are the only practical alternatives. Previous works have performed sentiment analysis over social media in different ways, i.e., in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are fine-tuned to perform text classification after further unsupervised pre-training on masked language modeling (MLM) tasks. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity. Naturally, in-domain distances are small, and between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, and its out-of-domain performance on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
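
A minimal sketch of the Maximum Mean Discrepancy (MMD) domain-similarity measure mentioned above, computed with an RBF kernel between sentence embeddings of the source (Twitter) and target (Gab) domains; the random vectors stand in for real embeddings and the bandwidth is an illustrative choice.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=0.5):
    """Biased estimate of squared MMD between samples x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (200, 32))          # placeholder Twitter embeddings
target = rng.normal(0.3, 1.0, (200, 32))          # placeholder Gab embeddings
print(f"MMD^2(source, target) = {mmd2(source, target):.4f}")   # larger = less similar
```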

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 107
2226 Noninvasive Disease Diagnosis through Breath Analysis Using DNA-functionalized SWNT Sensor Array

Authors: W. J. Zhang, Y. Q. Du, M. L. Wang

Abstract:

Noninvasive diagnostics of diseases via breath analysis has attracted considerable scientific and clinical interest for many years and has become more and more promising with the rapid advancement of nanotechnology and biotechnology. The volatile organic compounds (VOCs) in exhaled breath, which are mainly blood borne, provide particularly valuable information about individuals' physiological and pathophysiological conditions. Additionally, breath analysis is noninvasive, real-time, painless, and agreeable to patients. We have developed a wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) for the detection of a number of physiological indicators in breath. Eight DNA sequences were used to functionalize SWNT sensors to detect trace amounts of methanol, benzene, dimethyl sulfide, hydrogen sulfide, acetone, and ethanol, which are indicators of heavy smoking, excessive drinking, and diseases such as lung cancer, breast cancer, cirrhosis, and diabetes. Our tests indicated that DNA-functionalized SWNT sensors exhibit great selectivity, sensitivity, reproducibility, and repeatability. Furthermore, different molecules can be distinguished through pattern recognition enabled by this sensor array. Thus, the DNA-SWNT sensor array has great potential to be applied in chemical or biomolecular detection for the noninvasive diagnostics of diseases and health monitoring.

Keywords: breath analysis, diagnosis, DNA-SWNT sensor array, noninvasive

Procedia PDF Downloads 348
2225 PPB-Level H₂ Gas Sensor Based on Porous Ni-MOF-Derived NiO@CuO Nanoflowers for Superior Sensing Performance

Authors: Shah Sufaid, Hussain Shahid, Tianyan You, Liu Guiwu, Qiao Guanjun

Abstract:

Nickel oxide (NiO) is an optimal material for the precise detection of hydrogen (H₂) gas due to its high catalytic activity and low resistivity. However, the gas response kinetics of H₂ molecules at the NiO surface are limited by its solid structure, leading to a diminished gas response value and slow electron-hole transport. Herein, NiO@CuO nanoflowers (NFs) with a porous sharp-tip and nanosphere morphology were successfully synthesized by using a metal-organic framework (MOF) as a precursor. The fabricated porous 2 wt% NiO@CuO NFs present outstanding selectivity towards H₂ gas, including a high response value (170 toward 20 ppm at 150 °C), higher than that of porous Ni-MOF (6), a low detection limit (300 ppb) with a notable response (21), short response and recovery times (40/63 s at 300 ppb and 100/167 s at 20 ppm), and exceptional long-term stability and repeatability. Furthermore, an understanding of the NiO@CuO sensor's functioning in a real environment has been obtained by studying the impact of relative humidity as well. The boosted hydrogen sensing properties may be attributed to the synergistic effects of several factors, including the p-p heterojunction at the interface between the NiO and CuO nanoflowers. In particular, the porous Ni-MOF-derived structure combined with the chemical sensitization effect of NiO on the rough surface of the CuO nanospheres is examined. This research presents an effective method for the development of Ni-MOF-derived metal oxide semiconductor (MOS) heterostructures with well-defined morphology and composition, suitable for gas sensing applications.

Keywords: NiO@CuO NFs, metal organic framework, porous structure, H₂, gas sensing

Procedia PDF Downloads 47
2224 Experimental Device for Fluorescence Measurement by Optical Fiber Combined with Dielectrophoretic Sorting in Microfluidic Chips

Authors: Jan Jezek, Zdenek Pilat, Filip Smatlo, Pavel Zemanek

Abstract:

We present a device that combines fluorescence spectroscopy with fiber optics and dielectrophoretic micromanipulation in PDMS (poly(dimethylsiloxane)) microfluidic chips. The device allows high-speed detection (on the order of kHz) of the fluorescence signal collected from the sample by an inserted optical fiber, e.g. from a micro-droplet flow in a microfluidic chip, or even from liquid flowing in a transparent capillary. The device uses a laser diode at a wavelength suitable for excitation of fluorescence, excitation and emission filters, optics for focusing the laser radiation into the optical fiber, and a highly sensitive fast photodiode for detection of fluorescence. The device is combined with on-chip dielectrophoretic sorting of micro-droplets according to their fluorescence intensity. The electrodes are created by lift-off technology on a glass substrate, or by using channels filled with a soft metal alloy or an electrolyte. This device has found use in the screening of enzymatic reactions and the sorting of individual fluorescently labelled microorganisms. The authors acknowledge the support from the Grant Agency of the Czech Republic (GA16-07965S) and the Ministry of Education, Youth and Sports of the Czech Republic (LO1212), together with the European Commission (ALISI No. CZ.1.05/2.1.00/01.0017).

Keywords: dielectrophoretic sorting, fiber optics, laser, microfluidic chips, microdroplets, spectroscopy

Procedia PDF Downloads 719
2223 Using MALDI-TOF MS to Detect Environmental Microplastics (Polyethylene, Polyethylene Terephthalate, and Polystyrene) within a Simulated Tissue Sample

Authors: Kara J. Coffman-Rea, Karen E. Samonds

Abstract:

Microplastic pollution is an urgent global threat to our planet and human health. Microplastic particles have been detected in our food, water, and atmosphere, and found within human stool, placenta, and lung tissue. However, most spectrometric microplastic detection methods require chemical digestion, which can alter or destroy microplastic particles and makes it impossible to acquire information about their in-situ distribution. MALDI-TOF MS (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) is an analytical method using a soft ionization technique that can be used for polymer analysis. This method provides a valuable opportunity to acquire information regarding the in-situ distribution of microplastics while minimizing the destructive element of chemical digestion. In addition, MALDI-TOF MS allows for expanded analysis of the microplastics, including detection of specific additives that may be present within them. MALDI-TOF MS is particularly sensitive to sample preparation and has not yet been used to analyze environmental microplastics within their specific location (e.g., biological tissues, sediment, water). In this study, microplastics were created using polyethylene gloves, polystyrene micro-foam, and polyethylene terephthalate cable sleeving. Plastics were frozen using liquid nitrogen and ground to obtain small fragments. An artificial tissue was created using a cellulose sponge as scaffolding coated with a MaxGel Extracellular Matrix to simulate human lung tissue. Optimal preparation techniques (e.g., matrix, cationization reagent, solvent, mixing ratio, laser intensity) were first established for each specific polymer type. The artificial tissue sample was subsequently spiked with microplastics, and specific polymers were detected using MALDI-TOF MS. This study presents a novel method for the detection of environmental polyethylene, polyethylene terephthalate, and polystyrene microplastics within a complex sample. The results of this study provide an effective method that can be used in future microplastics research and can aid in determining the potential threats they pose to environmental and human health.

Keywords: environmental plastic pollution, MALDI-TOF MS, microplastics, polymer identification

Procedia PDF Downloads 259
2222 Image Processing Techniques for Surveillance in Outdoor Environments

Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.

Abstract:

This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.
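
A minimal sketch of the hand-raise check described above using MediaPipe pose landmarks: a hand is considered raised when the wrist appears above the shoulder (smaller normalized y coordinate). The image path is a placeholder, and this sketch does not reproduce the paper's full real-time video loop.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def hand_raised(image_bgr):
    """Return True if either wrist is detected above its shoulder."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return False
        lm = results.pose_landmarks.landmark
        left_up = lm[mp_pose.PoseLandmark.LEFT_WRIST].y < lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y
        right_up = lm[mp_pose.PoseLandmark.RIGHT_WRIST].y < lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y
        return left_up or right_up

frame = cv2.imread("frame.jpg")                   # hypothetical surveillance frame
print("hand raised:", hand_raised(frame))
```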

Keywords: computer vision, pose detection, facial recognition, number plate extraction, machine learning, real-time analysis, OCR, database management

Procedia PDF Downloads 27
2221 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys. Computed tomography (CT) is one of the most significant medical image modalities. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of liver tumors. A segmentation method for the detection of tumors from CT scans is proposed. A Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region of interest (ROI) based method may help characterize different features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple ROI-based feature selection and classification are performed. Obtaining relevant features for the Support Vector Machine (SVM) classifier is important for better generalization performance. The proposed system helps to improve classification performance, which is why a significant reduction in the number of features used can be observed. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful in saving human lives.
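
A minimal sketch of the preprocessing and segmentation steps named above: Gaussian denoising followed by adaptive thresholding of a CT liver slice, with connected components as candidate ROIs. The file name, kernel size, and threshold parameters are illustrative placeholders.

```python
import cv2

slice_gray = cv2.imread("liver_ct_slice.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.GaussianBlur(slice_gray, (5, 5), sigmaX=1.0)

# Adaptive threshold: each pixel is compared with the mean of its 51x51 neighbourhood.
mask = cv2.adaptiveThreshold(denoised, 255,
                             cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 51, 2)

# Candidate tumor regions (ROIs) can then be extracted as connected components and
# described by feature values for the SVM classifier.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(255 - mask)
print("candidate regions:", n_labels - 1)
```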

Keywords: computed tomography (CT), multiple region of interest(ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 509
2220 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that will allow the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses – dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters that is possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimensions of the defects allowed.
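
A minimal sketch of a wavelet-based defect map: decompose the plate image, keep only the strong detail coefficients, and reconstruct a mask of candidate defects. The wavelet family, level, image path, and thresholds are illustrative choices, not the tuned values described above.

```python
import numpy as np
import pywt
import cv2

img = cv2.imread("limestone_plate.png", cv2.IMREAD_GRAYSCALE).astype(float)

coeffs = pywt.wavedec2(img, wavelet="db2", level=2)
approx, details = coeffs[0], coeffs[1:]

# Zero out the approximation and weak details so only sharp local anomalies remain.
thresholded = [np.zeros_like(approx)]
for (cH, cV, cD) in details:
    t = 2.0 * np.std(cD)
    thresholded.append(tuple(np.where(np.abs(c) > t, c, 0.0) for c in (cH, cV, cD)))

defect_response = np.abs(pywt.waverec2(thresholded, wavelet="db2"))
defect_mask = defect_response > defect_response.mean() + 3 * defect_response.std()
print("defect pixels:", int(defect_mask.sum()))
```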

Keywords: automatic detection, defects, fracture lines, wavelets

Procedia PDF Downloads 249
2219 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 66
2218 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm incorporating the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because of the invariance of the LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the outputs of the first layer and further eliminate false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate a very competitive performance compared to previous models.
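
A minimal sketch of the block-level descriptor used in the first codebook layer: an LBP histogram (illumination-tolerant texture) concatenated with the mean RGB values of each block. The block size, LBP parameters, and image path are illustrative choices.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def block_descriptor(block_bgr, P=8, R=1):
    """LBP histogram + mean color of one image block, as a single feature vector."""
    gray = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    mean_rgb = block_bgr.reshape(-1, 3).mean(axis=0)      # mean of the color channels
    return np.concatenate([hist, mean_rgb])

frame = cv2.imread("frame.jpg")                           # hypothetical video frame
block = frame[0:16, 0:16]                                 # one 16x16 block
print(block_descriptor(block).shape)                      # (P + 2) LBP bins + 3 color means
```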

Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change

Procedia PDF Downloads 218
2217 Comparing Different Frequency Ground Penetrating Radar Antennas for Tunnel Health Assessment

Authors: Can Mungan, Gokhan Kilic

Abstract:

Structural engineers and tunnel owners have good reason to attach importance to the assessment and inspection of tunnels. Regular inspection is necessary to maintain and monitor the health of the structure, not only at the present time but throughout its life cycle. Detection of flaws within the structure, such as corrosion and the formation of cracks within its internal elements, can go a long way to ensuring that the structure maintains its integrity over the course of its life. Other issues that may be detected earlier through regular assessment include tunnel surface delamination and corrosion of the rebar. One advantage of new technology such as ground penetrating radar (GPR) is the early detection of imperfections. This study will discuss and present the effectiveness of GPR as a tool for assessing the structural integrity of a heavily used tunnel. GPR is used with antennae of various frequencies and application methods (2 GHz and 500 MHz GPR antennae). The paper will attempt to produce a greater understanding of structural defects and identify the correct tool for such purposes. Conquest View with 3D scanning capabilities was involved throughout the analysis, reporting, and interpretation of the results. This study will illustrate GPR mapping and its effectiveness in providing information of value when it comes to rebar position (lower and upper reinforcement). It will also show how such techniques can detect structural features that would otherwise remain unseen, as well as moisture ingress.

Keywords: tunnel, GPR, health monitoring, moisture ingress, rebar position

Procedia PDF Downloads 120
2216 Evaluation of Beam Structure Using Non-Destructive Vibration-Based Damage Detection Method

Authors: Bashir Ahmad Aasim, Abdul Khaliq Karimi, Jun Tomiyama

Abstract:

Material aging is one of the vital issues in the civil, mechanical, and aerospace engineering communities. The durability and reliability of concrete, the most widely used construction material in the world, is a focal point in civil engineering. For a few decades, researchers have been able to present algorithms that can evaluate a structure globally rather than locally, without harming its serviceability or interfering with traffic. These algorithms provide different methods for evaluating structures non-destructively. In this paper, a non-destructive vibration-based damage detection method is adopted to evaluate two concrete beams, one in a healthy state while the second contains a crack near its bottom face. The study discusses how damage in a structure affects the modal parameters (natural frequency, mode shape, and damping ratio), which are functions of its physical properties (mass, stiffness, and damping). The assessment is carried out by acquiring the natural frequency of the sound beam; next, the vibration response is recorded from the cracked beam, and finally both results are compared to determine the variation in the natural frequencies of the two beams. The study concludes that damage can be detected using the vibration characteristics of a structural member, considering the decline observed in the natural frequency of the cracked beam.
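
A minimal sketch of the comparison described above: estimate the dominant natural frequency of each beam from its free-vibration response via an FFT peak and report the decline. The synthetic signals and frequencies are illustrative only, not the measured beam data.

```python
import numpy as np

def natural_frequency(signal, fs):
    """Return the frequency (Hz) of the largest spectral peak above DC."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

fs = 1000
t = np.arange(0, 5, 1 / fs)
healthy = np.exp(-0.5 * t) * np.sin(2 * np.pi * 42.0 * t)   # hypothetical intact beam
cracked = np.exp(-0.5 * t) * np.sin(2 * np.pi * 39.5 * t)   # hypothetical cracked beam

f_h, f_c = natural_frequency(healthy, fs), natural_frequency(cracked, fs)
print(f"healthy: {f_h:.1f} Hz, cracked: {f_c:.1f} Hz, drop: {100 * (f_h - f_c) / f_h:.1f}%")
```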

Keywords: concrete beam, natural frequency, non-destructive testing, vibration characteristics

Procedia PDF Downloads 112
2215 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events like forest fires, which is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. A WSN based on TinyOS and nesC for capturing and transmitting a variety of sensor information with controlled source, data rates, and duration, and for recording/displaying activity traces, is presented. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes divergent readings in the reported data, which alters the usual data distribution. Basically, SD consists of a metric on the Cumulative Distribution Function (CDF). SD is designed to be invariant to day-to-day changes of temperature, changes due to the surrounding environment, and normal changes in weather, which preserve the data locality. Evaluation shows that the SD sensitivity is quadratic with respect to an increase in sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations and some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss the cases of false negatives and false positives and their impact on decision reliability.
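
A minimal sketch of a similarity distance between the empirical CDF of currently sensed temperatures and a reference CDF; the exact metric used by the paper is not specified here, so a simple integrated CDF difference is assumed, and the readings are placeholders.

```python
import numpy as np

def empirical_cdf(samples, grid):
    samples = np.sort(samples)
    return np.searchsorted(samples, grid, side="right") / len(samples)

def similarity_distance(current, reference):
    """Integrated absolute difference between the two empirical CDFs."""
    grid = np.linspace(min(current.min(), reference.min()),
                       max(current.max(), reference.max()), 200)
    return np.trapz(np.abs(empirical_cdf(current, grid) - empirical_cdf(reference, grid)), grid)

rng = np.random.default_rng(0)
reference = rng.normal(25.0, 1.0, 500)        # normal-day temperature readings (placeholder)
current = rng.normal(29.0, 3.0, 50)           # readings while a fire heats some nodes
print(f"SD = {similarity_distance(current, reference):.2f}  (alarm if above a tuned threshold)")
```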

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 263