Search results for: one-shot object detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4479


2889 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in present-day NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous work has performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised ways. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa can be adapted in an unsupervised manner by further pre-training on the masked language modeling (MLM) task before being fine-tuned for text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting varying degrees of extremism in online social media. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity; in-domain distances are expected to be small and between-domain distances large. Previous findings show that a masked language model further pre-trained on a mixture of source- and target-domain posts gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Self-supervised learning has recently received a lot of attention, as it is applicable when labeled data are scarce. A few works have already applied self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance because it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner; in the next step, a semi-supervised classifier is trained on the combination of the labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
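
As an illustration of how one of the cross-domain similarity measures mentioned above could be computed, the sketch below evaluates an RBF-kernel Maximum Mean Discrepancy between two sets of post embeddings; it is a minimal sketch, not the authors' code, and the embedding arrays and kernel bandwidth are placeholder assumptions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel values between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    """Squared maximum mean discrepancy between two sets of embeddings."""
    return (rbf_kernel(source, source, gamma).mean()
            + rbf_kernel(target, target, gamma).mean()
            - 2.0 * rbf_kernel(source, target, gamma).mean())

# Placeholder embeddings standing in for Twitter (source) and Gab (target) posts.
rng = np.random.default_rng(0)
twitter_emb = rng.normal(size=(100, 64))
gab_emb = rng.normal(loc=0.3, size=(100, 64))
print("MMD^2 (Twitter vs. Gab):", round(mmd2(twitter_emb, gab_emb, gamma=0.05), 4))
```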

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 106
2888 Developing a Web-Based Tender Evaluation System Based on Fuzzy Multi-Attributes Group Decision Making for Nigerian Public Sector Tendering

Authors: Bello Abdullahi, Yahaya M. Ibrahim, Ahmed D. Ibrahim, Kabir Bala

Abstract:

Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent, and more prone to manipulation and error. The advent of the Internet and the World Wide Web has led to the development of numerous e-Tendering systems that address some of the problems associated with the manual paper-based tendering system. However, most of these systems rarely support the evaluation of tenders, and where they do, it is mostly based on a single decision maker, which is not suitable for public sector tendering, where, for the sake of objectivity, transparency, and fairness, the evaluation must be conducted by a tender evaluation committee. Currently, in Nigeria, the public tendering process in general, and the evaluation of tenders in particular, are largely conducted using manual paper-based processes. Automating these manual processes can help enhance the proficiency of public sector tendering in Nigeria. This paper is part of a larger study to develop an electronic tendering system that supports the whole tendering lifecycle based on Nigerian procurement law. Specifically, this paper presents the design and implementation of the part of the system that supports group evaluation of tenders based on a technique called fuzzy multi-attributes group decision making. The system was developed using object-oriented methodologies and the Unified Modelling Language and hypothetically applied to the evaluation of technical and financial proposals submitted by bidders. The system was validated by professionals with extensive experience in public sector procurement. The results of the validation showed that the system, called NPS-eTender, has an average rating of 74% with respect to correct and accurate modelling of the existing manual tendering domain and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Thus, based on the results of the validation, the automation of the evaluation process to support a tender evaluation committee is achievable and can lead to a more proficient public sector tendering system.
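
The abstract does not spell out the aggregation details, so the sketch below is only a hedged illustration of one common fuzzy multi-attribute group decision making formulation: triangular fuzzy ratings from several committee members are averaged, weighted by criterion, and defuzzified to rank tenders. The ratings, weights, and defuzzification rule are illustrative assumptions, not the NPS-eTender implementation.

```python
import numpy as np

# Triangular fuzzy ratings (l, m, u) given by three evaluators to two tenders
# on two criteria; all numbers and weights below are illustrative only.
ratings = {
    "Tender A": [[(5, 7, 9), (7, 9, 10)],   # evaluator 1: criterion 1, criterion 2
                 [(3, 5, 7), (5, 7, 9)],    # evaluator 2
                 [(5, 7, 9), (5, 7, 9)]],   # evaluator 3
    "Tender B": [[(3, 5, 7), (3, 5, 7)],
                 [(5, 7, 9), (3, 5, 7)],
                 [(3, 5, 7), (5, 7, 9)]],
}
criterion_weights = np.array([0.6, 0.4])    # technical vs. financial (assumed split)

def aggregate(tender):
    fuzz = np.array(ratings[tender], dtype=float)        # (evaluators, criteria, 3)
    group = fuzz.mean(axis=0)                             # average over the committee
    l, m, u = (group * criterion_weights[:, None]).sum(axis=0)
    return (l + 4 * m + u) / 6.0                          # graded-mean defuzzification

scores = {t: aggregate(t) for t in ratings}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```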

Keywords: e-Tendering, e-Procurement, group decision making, tender evaluation, tender evaluation committee, UML, object-oriented methodologies, system development

Procedia PDF Downloads 261
2887 A Saturation Attack Simulation on a Navy Warship Based on Discrete-Event Simulation Models

Authors: Yawei Liang

Abstract:

The threat from cruise missiles is among the most dangerous considerations for a warship in the modern era: anti-ship cruise missiles are fast, accurate, and extremely destructive. In this paper, the goal was to use an object-oriented environment to program a simulation of a scenario in which a lone frigate is attacked by a wave of missiles fired at given intervals. The parameters of the simulation are modified to examine the relationships between different variables in the situation, and an analysis is performed on various aspects of the defending ship’s equipment. Finally, the results are presented, along with a brief discussion.
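
As a hedged, minimal illustration of the kind of discrete-event model described above (not the paper's simulation), the sketch below queues missile arrivals at fixed intervals against a single defensive engagement channel; the interval, engagement time, and kill probability are invented parameters.

```python
import heapq
import random

def simulate(n_missiles=8, interval=10.0, engage_time=12.0, p_kill=0.8, seed=1):
    """Count 'leakers' when a lone defender engages one missile at a time."""
    random.seed(seed)
    events = [(i * interval, "arrival", i) for i in range(n_missiles)]
    heapq.heapify(events)
    defender_free_at, leakers = 0.0, 0
    while events:
        t, kind, idx = heapq.heappop(events)
        if kind == "arrival":
            start = max(t, defender_free_at)           # wait for the defensive channel
            defender_free_at = start + engage_time
            heapq.heappush(events, (defender_free_at, "engagement_end", idx))
        else:
            if random.random() > p_kill:
                leakers += 1                            # missile survives the engagement
    return leakers

print("Leakers:", simulate())
```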

Keywords: discrete event simulation, Monte Carlo simulation, naval resource management, weapon-target allocation/assignment

Procedia PDF Downloads 93
2886 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method

Authors: A.R. Eskandari, M.R. Eskandari

Abstract:

A complete 2D identification algorithm for multiple dielectric objects immersed in air is presented. The technique consists of initially retrieving the shape and position of the scattering object using the linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm offers high computational speed and efficiency and can be generalized to any scatterer structure. The method is also robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.
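
The linear sampling method mentioned above tests, for each sampling point z, whether the far-field equation has an approximate solution of moderate norm and uses 1/||g_z|| as an indicator of the scatterer's support. The sketch below shows only the mechanics of a Tikhonov-regularized indicator; the far-field matrix here is a random placeholder rather than measured or simulated scattering data, and the discretization is an assumption.

```python
import numpy as np

def lsm_indicator(F, directions, points, k=2 * np.pi, alpha=1e-3):
    """Tikhonov-regularized linear sampling indicator 1/||g_z|| for each point z.

    F          : (n, n) discretized far-field operator (placeholder here)
    directions : (n, 2) unit observation directions x_hat
    points     : (m, 2) sampling grid points z
    """
    U, s, Vh = np.linalg.svd(F)
    indicators = []
    for z in points:
        rhs = np.exp(-1j * k * directions @ z)          # far field of a point source at z
        coeff = (s / (s ** 2 + alpha)) * (U.conj().T @ rhs)
        g = Vh.conj().T @ coeff                         # regularized solution of F g = rhs
        indicators.append(1.0 / np.linalg.norm(g))
    return np.array(indicators)

n = 32
theta = 2 * np.pi * np.arange(n) / n
directions = np.c_[np.cos(theta), np.sin(theta)]
F = np.random.default_rng(0).normal(size=(n, n)) * 0.1 + 0j   # placeholder far-field data
grid = np.array([[x, y] for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)])
print(lsm_indicator(F, directions, grid).round(3))
```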

Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)

Procedia PDF Downloads 387
2885 The Evolution and Driving Forces Analysis of Urban Spatial Pattern in Tibet Based on Archetype Theory

Authors: Qiuyu Chen, Bin Long, Junxi Yang

Abstract:

Located in the southwest, on the "roof of the world", Tibet is the origin center of Tibetan culture. Lhasa, Shigatse, and Gyantse are three famous historical and cultural cities in Tibet. They have always been prominent political, economic, and cultural cities and have accumulated the unique aesthetic orientation and value consciousness of Tibet's urban construction. "Archetype" usually refers to the theoretical origin of things, a precipitate of the collective unconscious. Archetype theory fundamentally explores the dialectical relationship between image expression, original form, and behavior mode. By abstracting and describing typical phenomena or imagery of the archetype object, one can observe the essence of objects and explore the ways in which object phenomena arise. Applying archetype theory to the field of urban planning helps in gaining insight into, evaluating, and restructuring the complex and ever-changing internal structural units of cities. According to existing field investigations, the Dzong, temple, Linka, and traditional residential systems are important structural units that constitute the urban space of Lhasa, Shigatse, and Gyantse. This article applies the thinking method of archetype theory: starting from the imagery expression of the urban spatial pattern, it uses technologies such as ArcGIS, Depthmap, and computer vision to descriptively identify the spatial representation and plane relationships of the three cities from remote sensing images and historical maps. Based on historical records, the spatial characteristics of the cities in different historical periods are interpreted in a hierarchical manner, attempting to clarify the origin of the formation and evolution of urban pattern imagery from the perspectives of the geopolitical environment, social structure, religious theory, etc., and to expose the growth laws and key driving forces of the cities. The research results can provide technical and material support for important activities such as urban restoration, spatial intervention, and the promotion of transformation in the region.

Keywords: archetype theory, urban spatial imagery, original form and pattern, behavioral driving force, Tibet

Procedia PDF Downloads 64
2884 Noninvasive Disease Diagnosis through Breath Analysis Using DNA-functionalized SWNT Sensor Array

Authors: W. J. Zhang, Y. Q. Du, M. L. Wang

Abstract:

Noninvasive diagnostics of diseases via breath analysis has attracted considerable scientific and clinical interest for many years and has become more and more promising with the rapid advancement of nanotechnology and biotechnology. The volatile organic compounds (VOCs) in exhaled breath, which are mainly blood-borne, provide particularly valuable information about individuals’ physiological and pathophysiological conditions. Additionally, breath analysis is noninvasive, real-time, painless, and agreeable to patients. We have developed a wireless sensor array based on single-stranded DNA (ssDNA)-decorated single-walled carbon nanotubes (SWNT) for the detection of a number of physiological indicators in breath. Eight DNA sequences were used to functionalize SWNT sensors to detect trace amounts of methanol, benzene, dimethyl sulfide, hydrogen sulfide, acetone, and ethanol, which are indicators of heavy smoking, excessive drinking, and diseases such as lung cancer, breast cancer, cirrhosis, and diabetes. Our tests indicated that DNA-functionalized SWNT sensors exhibit great selectivity, sensitivity, reproducibility, and repeatability. Furthermore, different molecules can be distinguished through pattern recognition enabled by this sensor array. Thus, the DNA-SWNT sensor array has great potential for chemical or biomolecular detection in the noninvasive diagnostics of diseases and in health monitoring.
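
The pattern recognition step mentioned above is not specified in detail, so the sketch below is a hedged stand-in: response vectors across the eight-sensor array are projected with PCA and classified with k-nearest neighbours. The fingerprints and noise levels are synthetic placeholders, not measured sensor data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 8-sensor array: each analyte is assumed to produce a
# characteristic response "fingerprint" across the eight ssDNA-SWNT sensors.
rng = np.random.default_rng(42)
analytes = ["acetone", "ethanol", "benzene"]
fingerprints = rng.uniform(0.2, 1.0, size=(len(analytes), 8))
X = np.vstack([fp + rng.normal(scale=0.05, size=(40, 8)) for fp in fingerprints])
y = np.repeat(analytes, 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pca = PCA(n_components=3).fit(X_tr)            # project fingerprints to 3 components
clf = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(X_tr), y_tr)
print("Hold-out accuracy:", clf.score(pca.transform(X_te), y_te))
```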

Keywords: breath analysis, diagnosis, DNA-SWNT sensor array, noninvasive

Procedia PDF Downloads 348
2883 PPB-Level H₂ Gas-Sensor Based on Porous Ni-MOF Derived NiO@CuO Nanoflowers for Superior Sensing Performance

Authors: Shah Sufaid, Hussain Shahid, Tianyan You, Liu Guiwu, Qiao Guanjun

Abstract:

Nickel oxide (NiO) is an optimal material for the precise detection of hydrogen (H₂) gas due to its high catalytic activity and low resistivity. However, the gas-response kinetics of H₂ molecules at the NiO surface are limited by its solid structure, leading to a diminished gas response value and slow electron-hole transport. Herein, NiO@CuO nanoflowers (NFs) with porous sharp-tip and nanosphere morphology were successfully synthesized using a metal-organic framework (MOF) as a precursor. The fabricated porous 2 wt% NiO@CuO NFs present outstanding selectivity towards H₂ gas, including a high response value (170 toward 20 ppm at 150 °C), much higher than that of porous Ni-MOF (6), a low detection limit (300 ppb) with a notable response (21), short response and recovery times (40/63 s at 300 ppb and 100/167 s at 20 ppm), and exceptional long-term stability and repeatability. Furthermore, the impact of relative humidity was examined to understand how the NiO@CuO sensor functions in a real environment. The boosted hydrogen sensing properties may be attributed to the synergistic effects of several factors, including the p-p heterojunction at the interface between the NiO and CuO nanoflowers. In particular, the porous Ni-MOF-derived structure combined with the chemical sensitization effect of NiO on the rough surface of the CuO nanospheres is examined. This research presents an effective method for the development of Ni-MOF-derived metal oxide semiconductor (MOS) heterostructures with well-defined morphology and composition, suitable for gas sensing applications.
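
Response values and response/recovery times such as those quoted above are typically extracted from the sensor's resistance trace. The sketch below is illustrative only: it assumes a p-type-style response definition (Rg/Ra) and a 90% criterion for response and recovery times, applied to a synthetic trace rather than the reported measurements.

```python
import numpy as np

def response_metrics(t, R, t_on, t_off):
    """Response ratio and 90% response/recovery times from a resistance trace.

    Uses a common p-type convention (response = Rg / Ra); definitions vary
    between papers, so treat this as an illustrative assumption.
    """
    Ra = R[t < t_on].mean()                      # baseline resistance in air
    Rg = R[(t >= t_on) & (t < t_off)].max()      # resistance under the target gas
    response = Rg / Ra

    rise = (t >= t_on) & (t < t_off)
    t_resp = t[rise][R[rise] >= Ra + 0.9 * (Rg - Ra)][0] - t_on
    fall = t >= t_off
    t_rec = t[fall][R[fall] <= Rg - 0.9 * (Rg - Ra)][0] - t_off
    return response, t_resp, t_rec

# Synthetic trace: gas exposure starts at t = 100 s and ends at t = 300 s.
t = np.linspace(0, 500, 5001)
R = 1.0 + 4.0 * (1 - np.exp(-(t - 100) / 30)) * ((t >= 100) & (t < 300))
R[t >= 300] = 1.0 + (R[t < 300][-1] - 1.0) * np.exp(-(t[t >= 300] - 300) / 40)
print(response_metrics(t, R, t_on=100, t_off=300))
```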

Keywords: NiO@CuO NFs, metal organic framework, porous structure, H₂, gas sensing

Procedia PDF Downloads 45
2882 Experimental Device for Fluorescence Measurement by Optical Fiber Combined with Dielectrophoretic Sorting in Microfluidic Chips

Authors: Jan Jezek, Zdenek Pilat, Filip Smatlo, Pavel Zemanek

Abstract:

We present a device that combines fluorescence spectroscopy with fiber optics and dielectrophoretic micromanipulation in PDMS (poly(dimethylsiloxane)) microfluidic chips. The device allows high-speed detection (on the order of kHz) of the fluorescence signal collected from the sample by an inserted optical fiber, e.g. from a micro-droplet flow in a microfluidic chip or even from liquid flowing in a transparent capillary. The device uses a laser diode at a wavelength suitable for excitation of fluorescence, excitation and emission filters, optics for focusing the laser radiation into the optical fiber, and a highly sensitive fast photodiode for detection of fluorescence. The device is combined with dielectrophoretic sorting on a chip for sorting micro-droplets according to their fluorescence intensity. The electrodes are created by lift-off technology on a glass substrate, or by using channels filled with a soft metal alloy or an electrolyte. This device has found its use in the screening of enzymatic reactions and the sorting of individual fluorescently labelled microorganisms. The authors acknowledge the support from the Grant Agency of the Czech Republic (GA16-07965S) and the Ministry of Education, Youth and Sports of the Czech Republic (LO1212), together with the European Commission (ALISI No. CZ.1.05/2.1.00/01.0017).

Keywords: dielectrophoretic sorting, fiber optics, laser, microfluidic chips, microdroplets, spectroscopy

Procedia PDF Downloads 719
2881 Using MALDI-TOF MS to Detect Environmental Microplastics (Polyethylene, Polyethylene Terephthalate, and Polystyrene) within a Simulated Tissue Sample

Authors: Kara J. Coffman-Rea, Karen E. Samonds

Abstract:

Microplastic pollution is an urgent global threat to our planet and human health. Microplastic particles have been detected within our food, water, and atmosphere, and found within the human stool, placenta, and lung tissue. However, most spectrometric microplastic detection methods require chemical digestion which can alter or destroy microplastic particles and makes it impossible to acquire information about their in-situ distribution. MALDI TOF MS (Matrix-assisted laser desorption ionization-time of flight mass spectrometry) is an analytical method using a soft ionization technique that can be used for polymer analysis. This method provides a valuable opportunity to both acquire information regarding the in-situ distribution of microplastics and also minimizes the destructive element of chemical digestion. In addition, MALDI TOF MS allows for expanded analysis of the microplastics including detection of specific additives that may be present within them. MALDI TOF MS is particularly sensitive to sample preparation and has not yet been used to analyze environmental microplastics within their specific location (e.g., biological tissues, sediment, water). In this study, microplastics were created using polyethylene gloves, polystyrene micro-foam, and polyethylene terephthalate cable sleeving. Plastics were frozen using liquid nitrogen and ground to obtain small fragments. An artificial tissue was created using a cellulose sponge as scaffolding coated with a MaxGel Extracellular Matrix to simulate human lung tissue. Optimal preparation techniques (e.g., matrix, cationization reagent, solvent, mixing ratio, laser intensity) were first established for each specific polymer type. The artificial tissue sample was subsequently spiked with microplastics, and specific polymers were detected using MALDI-TOF-MS. This study presents a novel method for the detection of environmental polyethylene, polyethylene terephthalate, and polystyrene microplastics within a complex sample. Results of this study provide an effective method that can be used in future microplastics research and can aid in determining the potential threats to environmental and human health that they pose.

Keywords: environmental plastic pollution, MALDI-TOF MS, microplastics, polymer identification

Procedia PDF Downloads 256
2880 Image Processing Techniques for Surveillance in Outdoor Environment

Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.

Abstract:

This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.
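
As a hedged sketch of the pose-detection component (not the authors' code), the snippet below uses MediaPipe Pose on a webcam stream and flags a raised hand when the right wrist landmark sits above the right shoulder; the rule and thresholds are illustrative assumptions.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            shoulder = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
            if wrist.y < shoulder.y:              # image y grows downward
                cv2.putText(frame, "Hand raised", (30, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```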

Keywords: computer vision, pose detection, facial recognition, number plate extraction, machine learning, real-time analysis, OCR, database management

Procedia PDF Downloads 26
2879 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys. Computed Tomography (CT) is one of the most significant of these modalities. This paper uses CT liver images to study automatic computer-aided techniques for calculating the volume of liver tumors. A segmentation method for detecting tumors from CT scans is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region-of-interest (ROI)-based method helps to characterize feature differences and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, feature selection is inherently difficult. For better performance, a novel system is introduced in which multiple ROI-based feature selection and classification are performed. Obtaining relevant features for the Support Vector Machine (SVM) classifier is important for good generalization performance. The proposed system improves classification performance while using a significantly reduced set of features. Diagnosing liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors can help save lives.
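
A minimal sketch of the denoising-plus-thresholding stage described above is given below; it runs on a synthetic slice, and Otsu's automatic threshold stands in for the paper's adaptive thresholding step, so treat it as illustrative rather than the proposed system.

```python
import numpy as np
from skimage import filters, measure

# Synthetic "CT slice": noisy background with one bright circular lesion.
rng = np.random.default_rng(0)
slice_img = rng.normal(0.2, 0.05, size=(256, 256))
rr, cc = np.ogrid[:256, :256]
slice_img[(rr - 128) ** 2 + (cc - 140) ** 2 < 20 ** 2] += 0.5

denoised = filters.gaussian(slice_img, sigma=2)       # Gaussian denoising
thresh = filters.threshold_otsu(denoised)             # automatic threshold (stand-in)
mask = denoised > thresh
labels = measure.label(mask)                          # connected components
regions = sorted(measure.regionprops(labels), key=lambda r: r.area, reverse=True)
if regions:
    print("Largest candidate region: %d pixels" % regions[0].area)
```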

Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 509
2878 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects are the gauge used to classify the quality grade of the stone products. The tuning of parameters possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.
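
A hedged sketch of the wavelet step is shown below: the image is decomposed, only strong detail coefficients are kept, and the reconstruction serves as a defect map. The wavelet choice, threshold rule, and synthetic plate image are assumptions for illustration, not the paper's tuned parameters.

```python
import numpy as np
import pywt

# Synthetic limestone plate: nearly uniform surface with one dark fracture line.
rng = np.random.default_rng(0)
plate = np.full((256, 256), 0.7) + rng.normal(scale=0.01, size=(256, 256))
plate[100:110, 60:180] = 0.3

coeffs = pywt.wavedec2(plate, "db2", level=3)
approx, details = coeffs[0], coeffs[1:]
defect_coeffs = [np.zeros_like(approx)]          # drop the smooth approximation
for (cH, cV, cD) in details:
    k = 4.0                                      # keep only strong detail responses
    defect_coeffs.append(tuple(np.where(np.abs(c) > k * np.std(c), c, 0.0)
                               for c in (cH, cV, cD)))

defect_map = np.abs(pywt.waverec2(defect_coeffs, "db2"))[:256, :256]
print("Flagged pixels:", int((defect_map > 0.05).sum()))
```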

Keywords: automatic detection, defects, fracture lines, wavelets

Procedia PDF Downloads 248
2877 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
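
As a hedged illustration of the kind of indicator-based model the review describes (not any specific published system), the sketch below trains a random forest on synthetic per-milking features; the feature names, data, and labels are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic per-milking records; feature names are assumed for illustration.
rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "milk_yield_kg": rng.normal(12, 3, n),
    "conductivity_mS": rng.normal(5.0, 0.8, n),
    "fat_pct": rng.normal(4.0, 0.6, n),
    "protein_pct": rng.normal(3.3, 0.3, n),
    "scc_thousands": rng.lognormal(4.5, 0.8, n),     # somatic cell count
})
# Synthetic label: higher conductivity and SCC make mastitis more likely.
risk = 0.6 * (data["conductivity_mS"] - 5.0) + 0.002 * (data["scc_thousands"] - 100)
labels = (risk + rng.normal(0, 0.3, n) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, data, labels, cv=5).mean().round(3))
```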

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 65
2876 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm incorporating the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to reinforce the precision of the outputs of the first layer and further eliminate false positives. As a result, the proposed approach can greatly improve the accuracy under circumstances of dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate a very competitive performance compared to previous models.
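
A hedged sketch of the block-level texture cue is given below: each block's uniform-LBP histogram is compared against a stored background histogram by histogram intersection. The block size, histogram comparison, and threshold are illustrative assumptions rather than the paper's full codebook model.

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R, BLOCK = 8, 1.0, 16

def block_lbp_hist(gray):
    """Per-block normalized histograms of uniform LBP codes."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    h, w = gray.shape
    hists = {}
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            patch = lbp[y:y + BLOCK, x:x + BLOCK]
            hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2), density=True)
            hists[(y, x)] = hist
    return hists

def foreground_blocks(bg_hists, cur_hists, thresh=0.7):
    # Histogram intersection below `thresh` marks the block as changed.
    return [k for k in cur_hists
            if np.minimum(bg_hists[k], cur_hists[k]).sum() < thresh]

rng = np.random.default_rng(1)
background = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame = background.copy()
frame[16:32, 16:32] = 128                      # flat region replaces textured background
print(foreground_blocks(block_lbp_hist(background), block_lbp_hist(frame)))
```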

Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change

Procedia PDF Downloads 217
2875 Use of Pragmatic Cues for Word Learning in Bilingual and Monolingual Children

Authors: Isabelle Lorge, Napoleon Katsos

Abstract:

BACKGROUND: Children growing up in a multilingual environment face challenges related to the need to monitor the speaker’s linguistic abilities, more frequent communication failures, and having to acquire a large number of words in a limited amount of time compared to monolinguals. As a result, bilingual learners may develop different word learning strategies, rely more on some strategies than others, and engage cognitive resources such as theory of mind and attention skills in different ways. HYPOTHESIS: The goal of our study is to investigate whether multilingual exposure leads to improvements in the ability to use pragmatic inference for word learning, i.e., to use speaker cues to derive their referring intentions, often by overcoming lower level salience effects. The speaker cues we identified as relevant are (a) use of a modifier with or without stress (‘the WET dax’ prompting the choice of the referent which has a dry counterpart), (b) referent extension (‘this is a kitten with a fep’ prompting the choice of the unique rather than shared object), (c) referent novelty (choosing novel action rather than novel object which has been manipulated already), (d) teacher versus random sampling (assuming the choice of specific examples for a novel word to be relevant to the extension of that new category), and finally (e) emotional affect (‘look at the figoo’ uttered in a sad or happy voice) . METHOD: To this end, we implemented on a touchscreen computer a task corresponding to each of the cues above, where the child had to pick the referent of a novel word. These word learning tasks (a), (b), (c), (d) and (e) were adapted from previous word learning studies. 113 children have been tested (54 reception and 59 year 1, ranging from 4 to 6 years old) in a London primary school. Bilingual or monolingual status and other relevant information (age of onset, proficiency, literacy for bilinguals) is ascertained through language questionnaires from parents (34 out of 113 received to date). While we do not yet have the data that will allow us to test for effect of bilingualism, we can already see that performances are far from approaching ceiling in any of the tasks. In some cases the children’s performances radically differ from adults’ in a qualitative way, which means that there is scope for quantitative and qualitative effects to arise between language groups. The findings should contribute to explain the puzzling speed and efficiency that bilinguals demonstrate in acquiring competence in two languages.

Keywords: bilingualism, pragmatics, word learning, attention

Procedia PDF Downloads 139
2874 Financial Inclusion in Indonesia and Its Challenges

Authors: Yen Sun, Pariang Siagian

Abstract:

The aim of this paper is to examine the progress of financial inclusion in Indonesia. The object of this paper is micro enterprises (MEs), and the methodology used is qualitative, based on surveys and questionnaires. The results show that 20% of MEs still have no banking facilities at all, and about 78% of MEs still use their own capital to run their businesses. Furthermore, personal characteristics such as gender and education are factors that can explain financial inclusion. In general, MEs need banking products and services; however, there are still barriers that hinder them from being financially included. The main barrier they face is marketing exclusion: they lack information about banking products and services, since banks' marketing strategies are not disseminated clearly through various media.

Keywords: financial inclusion, financial exclusion, micro enterprises, Indonesia

Procedia PDF Downloads 394
2873 Comparing Different Frequency Ground Penetrating Radar Antennas for Tunnel Health Assessment

Authors: Can Mungan, Gokhan Kilic

Abstract:

Structural engineers and tunnel owners have good reason to attach importance to the assessment and inspection of tunnels. Regular inspection is necessary to maintain and monitor the health of the structure, not only at the present time but throughout its life cycle. Detection of flaws within the structure, such as corrosion and the formation of cracks within its internal elements, can go a long way towards ensuring that the structure maintains its integrity over the course of its life. Other issues that may be detected earlier through regular assessment include tunnel surface delamination and corrosion of the rebar. One advantage of new technology such as ground penetrating radar (GPR) is the early detection of imperfections. This study aims to discuss and demonstrate the effectiveness of GPR as a tool for assessing the structural integrity of a heavily used tunnel. GPR is used with antennae of various frequencies and application methods (2 GHz and 500 MHz GPR antennae). The paper attempts to produce a greater understanding of structural defects and to identify the correct tool for such purposes. Conquest View with 3D scanning capabilities was used throughout the analysis, reporting, and interpretation of the results. This study illustrates GPR mapping and its effectiveness in providing valuable information on rebar position (lower and upper reinforcement). It also shows how such techniques can detect structural features that would otherwise remain unseen, as well as moisture ingress.

Keywords: tunnel, GPR, health monitoring, moisture ingress, rebar position

Procedia PDF Downloads 119
2872 Evaluation of Beam Structure Using Non-Destructive Vibration-Based Damage Detection Method

Authors: Bashir Ahmad Aasim, Abdul Khaliq Karimi, Jun Tomiyama

Abstract:

Material aging is one of the vital issues for the civil, mechanical, and aerospace engineering communities. The sustenance and reliability of concrete, the most widely used construction material in the world, is the focal point in civil engineering. For a few decades, researchers have been able to present algorithms that can evaluate a structure globally rather than locally, without harming its serviceability or interfering with traffic. These algorithms enable different methods for evaluating structures non-destructively. In this paper, a non-destructive vibration-based damage detection method is adopted to evaluate two concrete beams, one in a healthy state while the second contains a crack near its bottom face. The study discusses how damage in a structure affects the modal parameters (natural frequency, mode shape, and damping ratio), which are functions of its physical properties (mass, stiffness, and damping). The assessment is first carried out to acquire the natural frequency of the sound beam. Next, the vibration response is recorded from the cracked beam. Eventually, both results are compared to find the variation in the natural frequencies of the two beams. The study concludes that damage can be detected using the vibration characteristics of a structural member, considering the decline in the natural frequency of the cracked beam.
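
A minimal sketch of the frequency comparison described above follows: the dominant natural frequency of each beam is estimated from the FFT peak of its vibration record. The damped-sine signals are synthetic stand-ins for measured responses, and the frequencies are invented.

```python
import numpy as np

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)

def dominant_frequency(signal, fs):
    """Frequency of the largest FFT peak (DC bin excluded)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

healthy = np.exp(-0.5 * t) * np.sin(2 * np.pi * 48.0 * t)
cracked = np.exp(-0.5 * t) * np.sin(2 * np.pi * 44.5 * t)   # stiffness loss lowers f_n

f_healthy = dominant_frequency(healthy, fs)
f_cracked = dominant_frequency(cracked, fs)
print("Healthy: %.1f Hz, cracked: %.1f Hz, drop: %.1f%%"
      % (f_healthy, f_cracked, 100 * (f_healthy - f_cracked) / f_healthy))
```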

Keywords: concrete beam, natural frequency, non-destructive testing, vibration characteristics

Procedia PDF Downloads 112
2871 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events such as forest fires. This is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. A WSN based on TinyOS and nesC is presented for capturing and transmitting a variety of sensor information with controlled sources, data rates, and durations, and for recording/displaying activity traces. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes diverging opinions in the reported data, which alters the usual data distribution. Essentially, SD consists of a metric on the Cumulative Distribution Function (CDF). SD is designed to be invariant to day-to-day changes in temperature, changes due to the surrounding environment, and normal changes in weather, which preserve the data locality. Evaluation shows that SD sensitivity is quadratic with respect to an increase in sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss false negatives and false positives and their impact on decision reliability.
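
The abstract defines SD as a metric on the CDF; the sketch below is one hedged reading of such a distance (a Kolmogorov-Smirnov-style maximum CDF gap) between a reference distribution and the current window of readings. The data and alarm threshold are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def cdf_distance(reference, current):
    """Maximum absolute gap between the empirical CDFs of two samples."""
    grid = np.sort(np.concatenate([reference, current]))
    cdf_ref = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    cdf_cur = np.searchsorted(np.sort(current), grid, side="right") / len(current)
    return np.max(np.abs(cdf_ref - cdf_cur))

rng = np.random.default_rng(0)
reference = rng.normal(25, 2, size=1000)          # normal-day temperature readings
normal_window = rng.normal(25.5, 2, size=60)      # ordinary daily drift
fire_window = rng.normal(40, 5, size=60)          # readings near an ignition

for name, window in [("normal", normal_window), ("fire", fire_window)]:
    d = cdf_distance(reference, window)
    print(f"{name}: SD = {d:.2f}, alarm = {d > 0.5}")
```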

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 262
2870 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: Farmesk Abubaker, Francesco Tortorici, Marco Capogni, Concetta Sutera, Vincenzo Bellini

Abstract:

This project concerns the detection efficiency of the portable triple-to-double coincidence ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitting or pure electron-capture radionuclides. The dependence of the simulated TDCR detection efficiency, computed with the Geant4 Monte Carlo code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitting radionuclides such as 3H and 14C, for which this dependence is relevant. The results of this analysis can be used to select the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones measured by the ENEA portable TDCR detector for some pure beta-emitting radionuclides. This analysis improved the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the fields of nuclear medicine and the nuclear energy industry.
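
For context on what the kB factor controls, the sketch below evaluates Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx), for a few stopping-power values; the numbers are placeholders rather than tabulated data or the Geant4 configuration used in the paper.

```python
import numpy as np

def quenched_light(dE_dx, kB, S=1.0):
    """Scintillation light per unit path length according to Birks' law."""
    return S * dE_dx / (1.0 + kB * dE_dx)

dE_dx = np.array([0.2, 2.0, 20.0])      # stopping power, MeV cm^2/g (illustrative)
for kB in (0.007, 0.010, 0.015):        # illustrative kB values, g/(MeV cm^2)
    print(f"kB = {kB:.3f}:", np.round(quenched_light(dE_dx, kB), 3))
```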

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter

Procedia PDF Downloads 148
2869 Remote Assessment and Change Detection of GreenLAI of Cotton Crop Using Different Vegetation Indices

Authors: Ganesh B. Shinde, Vijaya B. Musande

Abstract:

Cotton crop identification based on timely information has significant advantages for food, economic, and environmental planning. Owing to these advantages, the accurate detection of cotton crop regions using a supervised learning procedure is a challenging problem in remote sensing. Classifiers applied directly to the image play a major role here, but the results are not very satisfactory. To further improve the effectiveness, a variety of vegetation indices have been proposed in the literature, and the major challenge is to find the best vegetation index for cotton crop identification within the proposed methodology. Accordingly, fuzzy c-means clustering is combined with a neural network trained by Levenberg-Marquardt for cotton crop classification. To evaluate the proposed method, five LISS-III satellite images were taken, and the experimentation was done with six vegetation indices: Simple Ratio, Normalized Difference Vegetation Index, Enhanced Vegetation Index, Green Atmospherically Resistant Vegetation Index, Wide-Dynamic Range Vegetation Index, and Green Chlorophyll Index. Along with these indices, the Green Leaf Area Index is also considered for investigation. From the research outcome, the Green Atmospherically Resistant Vegetation Index outperformed all other indices, reaching an average accuracy of 95.21%.
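
Several of the indices listed above are simple per-pixel band ratios; the sketch below computes the Simple Ratio, NDVI, and Green Chlorophyll Index from placeholder reflectance arrays (the remaining index formulas and the LISS-III band handling are omitted here).

```python
import numpy as np

def safe_div(a, b):
    # Avoid division by zero on masked or empty pixels.
    return np.where(b != 0, a / np.where(b == 0, 1, b), 0.0)

def simple_ratio(nir, red):
    return safe_div(nir, red)

def ndvi(nir, red):
    return safe_div(nir - red, nir + red)

def green_chlorophyll_index(nir, green):
    return safe_div(nir, green) - 1.0

# Placeholder reflectance bands standing in for extracted LISS-III data.
rng = np.random.default_rng(0)
red, green, nir = (rng.uniform(0.05, 0.6, size=(4, 4)) for _ in range(3))
print("NDVI:\n", np.round(ndvi(nir, red), 2))
print("SR mean:", simple_ratio(nir, red).mean().round(2),
      "| GCI mean:", green_chlorophyll_index(nir, green).mean().round(2))
```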

Keywords: Fuzzy C-Means clustering (FCM), neural network, Levenberg-Marquardt (LM) algorithm, vegetation indices

Procedia PDF Downloads 318
2868 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addressed the early detection of the leaf scald, red rot, and eyespot diseases of sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for the automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from a sugarcane field in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for the automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
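
A hedged sketch of a VGG-based classifier of the kind described is shown below: a frozen VGG16 backbone with a small classification head. The number of classes, image size, and the commented dataset path are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # healthy, leaf scald, red rot, eyespot (assumed)
IMG_SIZE = (224, 224)

backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                        input_shape=IMG_SIZE + (3,))
backbone.trainable = False                        # keep ImageNet features frozen

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use an image dataset, e.g. (hypothetical path):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "sugarcane/train", image_size=IMG_SIZE, batch_size=32)
# model.fit(train_ds, epochs=10)
```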

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 116
2867 Detection of Hepatitis B by the Use of Artificial Intelligence

Authors: Shizra Waris, Bilal Shoaib, Munib Ahmad

Abstract:

Background: The use of clinical decision support systems (CDSSs) may improve chronic disease management, which requires regular visits to multiple health professionals, treatment monitoring, disease control, and patient behavior modification. The objective of this survey is to determine whether these CDSSs improve the processes of chronic care, including the diagnosis, treatment, and monitoring of diseases. Although artificial intelligence is not a new idea, it has been widely documented as a new technology in computer science, and numerous areas such as education, business, medicine, and industry have made use of it. Methods: The survey covers articles extracted from relevant databases, using search terms related to information technology and viral hepatitis and published between 2000 and 2016. Results: Overall, 80% of the studies asserted the benefit provided by information technology (IT); 75% of the studies asserted benefits in the medical domain; 25% of the studies did not clearly define the added benefit of IT. The current state of CDSSs requires many improvements to support the management of liver diseases such as HCV infection, liver fibrosis, and cirrhosis. Conclusion: We conclude that the proposed model gives an earlier and more accurate prediction of hepatitis B and works as a promising tool for predicting hepatitis B from clinical laboratory data.

Keywords: detection, hepatitis, observation, diseases

Procedia PDF Downloads 157
2866 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics

Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer

Abstract:

Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly-sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but has very weak signals and hence is unsuitable for trace level analysis. Enhancement via localized optical fields (surface plasmons resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single molecule detection. This enhancement can be tuned by manipulation of the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yield a considerable SERS enhancement. This enables each single unit to function as an isolated sensor. Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement, while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale, will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS

Procedia PDF Downloads 346
2865 Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification

Authors: Venkatesh Pulletikurthi, Karthik B. Ariyur, Luciano Castillo

Abstract:

System identification from turbulent wakes provides a tactical advantage in preparing for and predicting the trajectory of an opponent's movements. A low-order modeling technique, proper orthogonal decomposition (POD), is used to identify the object from its wake pattern and is compared with a pre-trained image recognition neural network (NN) that classifies the wake patterns into objects. It is demonstrated that POD-based low-order modeling predicts the objects better than the pre-trained NN by ~30%.
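
A minimal sketch of POD on a snapshot matrix is given below: the mean flow is removed, the SVD yields spatial modes and modal coefficients, and the leading modes form the low-order representation that a downstream classifier could consume. The wake snapshots here are synthetic placeholders.

```python
import numpy as np

# Synthetic snapshot matrix: each column is one wake measurement snapshot.
rng = np.random.default_rng(0)
n_points, n_snapshots = 400, 60
x = np.linspace(0, 2 * np.pi, n_points)[:, None]
t = np.linspace(0, 4 * np.pi, n_snapshots)[None, :]
snapshots = np.sin(3 * x) * np.cos(2 * t) + 0.5 * np.sin(7 * x) * np.sin(5 * t)
snapshots += 0.05 * rng.normal(size=snapshots.shape)

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = s ** 2 / (s ** 2).sum()
r = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1   # modes for 99% of the energy
coeffs = np.diag(s[:r]) @ Vt[:r]                         # low-order representation
print(f"{r} POD modes capture 99% of the fluctuation energy; "
      f"coefficient matrix shape: {coeffs.shape}")
```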

Keywords: the bluff body wakes, low-order modeling, neural network, system identification

Procedia PDF Downloads 180
2864 The Design of Intelligent Classroom Management System with Raspberry PI

Authors: Sathapath Kilaso

Abstract:

Attendance checking in the classroom records students' attendance in order to support learning activities in the classroom. Although the teaching trend in the 21st century is student-centered learning, with the lecturer's duty being to mentor and give advice, classroom learning is still important in order to let students interact with their classmates and the lecturer, and for specific subjects where in-class learning is needed. The system prototype, developed by applying microcontroller technology and embedded systems together with the "Internet of Things" trend and the WebSocket technique, allows the lecturer to be alerted immediately whenever the data is updated.

Keywords: arduino, embedded system, classroom, raspberry PI

Procedia PDF Downloads 374
2863 Rapid Detection and Differentiation of Camel Pox, Contagious Ecthyma and Papilloma Viruses in Clinical Samples of Camels Using a Multiplex PCR

Authors: A. I. Khalafalla, K. A. Al-Busada, I. M. El-Sabagh

Abstract:

Pox and pox-like diseases of camels are a group of exanthematous skin conditions that have become increasingly important economically. They may be caused by three distinct viruses: camelpox virus (CMPV), camel contagious ecthyma virus (CCEV), and camel papillomavirus (CAPV). These diseases are difficult to differentiate based on clinical presentation during disease outbreaks. Molecular methods such as PCR targeting species-specific genes have been developed and used to identify CMPV and CCEV, but not simultaneously in a single tube. Recently, multiplex PCR has gained a reputation as a convenient diagnostic method with cost- and time-saving benefits. In the present communication, we describe the development, optimization, and validation of a multiplex PCR assay able to detect the genomes of the three viruses simultaneously in one single test, allowing rapid and efficient molecular diagnosis. The assay was developed based on the evaluation and combination of published and new primer sets, and was applied to the detection of the viruses in 110 tissue samples. The method showed high sensitivity, and the specificity was confirmed by PCR-product sequencing. In conclusion, this rapid, sensitive, and specific assay is considered a useful method for identifying three important viruses in specimens from camels and as part of a molecular diagnostic regime.

Keywords: multiplex PCR, diagnosis, pox and pox-like diseases, camels

Procedia PDF Downloads 468
2862 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for point-source and Marinelli-beaker geometries. In the case of a Marinelli beaker filled with water, the efficiency for the 59 keV low-energy photons of Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
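
One plausible reading of the scaling step, sketched below purely for illustration, estimates the efficiency at another energy by multiplying the measured 662 keV efficiency by the ratio of simulated efficiencies; the numerical values are placeholders, not the paper's MCNP5 results.

```python
# Placeholder simulated efficiencies (MCNP5-style values) and a lab measurement.
simulated = {59: 0.052, 662: 0.021, 1170: 0.013, 1330: 0.012}
measured_662 = 0.020

def estimated_efficiency(energy_keV):
    # Scale the measured Cs-137 efficiency by the simulated efficiency ratio.
    return measured_662 * simulated[energy_keV] / simulated[662]

for e in (59, 1170, 1330):
    print(f"{e} keV: estimated efficiency = {estimated_efficiency(e):.4f}")
```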

Keywords: MCNP5, MonteCarlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 117
2861 Fraud Detection in Credit Cards with Machine Learning

Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf

Abstract:

Online transactions have increased dramatically in this new 'social-distancing' era, and with them, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries such as insurance and banking. These frauds include leaking sensitive information related to the credit card, which can be easily misused. With the government also pushing online transactions, e-commerce is booming, but due to increasing fraud in online payments, e-commerce businesses are suffering a great loss of trust from their customers. These companies find credit card fraud to be a big problem. People have started using online payment options and thus are becoming easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms: we have used decision tree, XGBoost, k-nearest neighbour, logistic regression, random forest, and SVM classifiers on a dataset of online credit card transactions. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score and calculate the accuracy score for each model to identify which algorithm can be used to detect fraud.
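
A hedged sketch of the evaluation loop described above is given below using scikit-learn on a synthetic, imbalanced stand-in for a credit card dataset; only two of the listed classifiers are shown, and the data are not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for credit card transactions (3% fraud).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "| accuracy:", round(accuracy_score(y_te, pred), 3),
          "| F1:", round(f1_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))
```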

Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k nearest neighbour, random forest, XGBOOST, logistic regression, support vector machine

Procedia PDF Downloads 148
2860 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is beneficial for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include necessary measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep learning-based face recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the client's form during exercise. Experienced coaches and personal trainers can tell a client's limit from their facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection for subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as their effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require history data for an individual client, but a client's history data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.
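
As a hedged sketch of the landmark-detection building block (not the proposed system), the snippet below extracts facial landmarks from a single image with MediaPipe Face Mesh; downstream expression grading and muscle-group tracking would consume these coordinates, and "client.jpg" is a hypothetical input path.

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

image = cv2.imread("client.jpg")               # hypothetical input image path
if image is None:
    raise SystemExit("client.jpg not found")

with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print("Detected", len(landmarks), "facial landmarks")
    # Normalized (x, y, z) of the first few landmarks as an example feature vector.
    print([(round(lm.x, 3), round(lm.y, 3), round(lm.z, 3)) for lm in landmarks[:5]])
else:
    print("No face detected")
```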

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 134