Search results for: detection and estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5078

4658 Sensing Mechanism of Nano-Toxic Ions Using Quartz Crystal Microbalance

Authors: Chanho Park, Juneseok You, Kuewhan Jang, Sungsoo Na

Abstract:

Detection techniques for nano-toxic materials are strongly imperative, because nano-toxic materials can harm human health and the environment as their engineering applications grow rapidly. In the present work, we report a DNA-immobilized quartz crystal microbalance (QCM) based sensor for the detection of nano-toxic materials such as silver ions, Hg2+, etc., achieved by functionalizing the quartz crystal with a target-specific DNA. Since the mass of a target ion is comparable to that of an atom, the mass change caused by target binding to the DNA on the quartz crystal is so small that it is practically difficult to detect the ions at low concentrations. In our study, we demonstrate fast and in situ detection of nano-toxic materials using a quartz crystal microbalance. We report the label-free and highly sensitive detection of the silver ion, a typical nano-toxic material, by using the QCM and a silver-specific DNA. The detection is based on measuring the frequency shift of the quartz crystal caused by the formation of cytosine-Ag+-cytosine binding. It is shown that the QCM-measured frequency shift of the silver-specific DNA enables the capture of silver ions below 100 pM. The results suggest that DNA-based detection opens a new avenue for the development of a practical water-testing sensor.
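
For readers unfamiliar with QCM mass sensing, the standard Sauerbrey relation makes the "small mass change" point concrete. Below is a minimal sketch, assuming a 5 MHz AT-cut crystal and a hypothetical captured mass; the abstract itself does not specify these values.

```python
import math

# Sauerbrey relation: delta_f = -2 * f0^2 * delta_m / (A * sqrt(rho_q * mu_q))
# Quartz constants (CGS units)
RHO_Q = 2.648        # density of quartz, g/cm^3
MU_Q = 2.947e11      # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_shift(delta_m_g: float, f0_hz: float = 5e6, area_cm2: float = 1.0) -> float:
    """Frequency shift (Hz) for an added rigid mass delta_m (g) on the crystal."""
    return -2.0 * f0_hz**2 * delta_m_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# Hypothetical example: 10 ng of Ag+ captured by the immobilized DNA layer
print(f"{sauerbrey_shift(10e-9):.2f} Hz")  # about -0.57 Hz for a 5 MHz crystal
```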

Keywords: nano-toxic ions, quartz crystal microbalance, frequency shift, target-specific DNA

Procedia PDF Downloads 300
4657 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become a major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool like an Intrusion Detection System (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing the false-positive rate. The existing intrusion detection techniques have the limitation of using raw data sets for classification. The classifier may be confused by redundant features, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select features based on their sensitivity. Eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rates.
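
The abstract outlines a multi-stage pipeline (feature transformation, sensitivity-based selection, then classification). A minimal sketch of just the PCA-to-SVM stage is shown below, using scikit-learn with synthetic data standing in for KDD'99 records; the LDA/LBP/PSO stages and the actual dataset are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for KDD'99-style records: 41 raw features, binary label
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))
y = (X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA keeps the most 'sensitive' directions (largest eigenvalues of the
# covariance matrix); the SVM then classifies in the reduced feature space.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel='rbf'))
clf.fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.3f}")
```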

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)

Procedia PDF Downloads 345
4656 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and information on the auxiliary variable alone is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, by using a suitable sampling design, and information regarding both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared to the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how one can use data collected in the first phase at the stages of estimation of the parameter, stratification, selection of the sample, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of the estimation of variance is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other purposes. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for the estimation of confidence intervals, the testing of hypotheses, or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
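
As a concrete illustration of the two-phase idea, the following sketch computes the classical double-sampling ratio estimator with a textbook variance estimator, assuming simple random sampling at both phases and ignoring finite-population corrections; it is not the unified estimator class studied in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# First phase: large sample, only the cheap auxiliary variable x is observed.
n1 = 1000
x1 = rng.gamma(shape=4.0, scale=5.0, size=n1)
xbar1 = x1.mean()

# Second phase: subsample of the first phase; both x and y are observed.
n2 = 100
idx = rng.choice(n1, size=n2, replace=False)
x2 = x1[idx]
y2 = 2.5 * x2 + rng.normal(scale=4.0, size=n2)   # study variable, correlated with x

# Double-sampling ratio estimator of the population mean of y
R = y2.mean() / x2.mean()
ybar_rd = R * xbar1

# Textbook variance estimate (SRS at both phases, fpc ignored):
#   v = s_d^2 * (1/n2 - 1/n1) + s_y^2 / n1, with d_i = y_i - R * x_i
d = y2 - R * x2
v = d.var(ddof=1) * (1/n2 - 1/n1) + y2.var(ddof=1) / n1
print(f"estimate = {ybar_rd:.2f}, std. error = {np.sqrt(v):.2f}")
```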

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 565
4655 An Efficient Clustering Technique for Copy-Paste Attack Detection

Authors: N. Chaitawittanun, M. Munlin

Abstract:

Due to the rapid advancement of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. Many digital images are edited for a specific purpose and are difficult to distinguish from their originals. We propose a clustering method to detect copy-move image forgery in JPEG, BMP, TIFF, and PNG files. The process starts by reducing the color of the photos. Then, we use a clustering technique to partition the measured block data by the Hausdorff distance. The results show that the proposed method is capable of inspecting an image file and correctly identifying the forgery.
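
The abstract gives few implementation details; the sketch below illustrates one plausible reading of the core step, comparing color-reduced image blocks via the Hausdorff distance (SciPy's directed_hausdorff), with the block size and threshold as illustrative parameters.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def block_points(block: np.ndarray) -> np.ndarray:
    """Coordinates of dark pixels in a block after color reduction."""
    return np.argwhere(block < 128)

def copy_move_candidates(gray: np.ndarray, bsize: int = 32, thresh: float = 2.0):
    """Return pairs of block positions whose pixel sets nearly match (possible copy-paste)."""
    h, w = gray.shape
    blocks = [(r, c, block_points(gray[r:r+bsize, c:c+bsize]))
              for r in range(0, h - bsize + 1, bsize)
              for c in range(0, w - bsize + 1, bsize)]
    pairs = []
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            pi, pj = blocks[i][2], blocks[j][2]
            if len(pi) and len(pj):
                # symmetric Hausdorff distance between the two point sets
                d = max(directed_hausdorff(pi, pj)[0], directed_hausdorff(pj, pi)[0])
                if d < thresh:
                    pairs.append((blocks[i][:2], blocks[j][:2]))
    return pairs
```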

Keywords: image detection, forgery image, copy-paste, attack detection

Procedia PDF Downloads 315
4653 Green Synthesis of Silver Nanoparticles by Olive Leaf Extract: Application in the Colorimetric Detection of Fe3+ Ions

Authors: Nasibeh Azizi Khereshki

Abstract:

Olive leaf (OL) extract was utilized as a green reducing agent for the biogenic synthesis of silver nanoparticles (Ag NPs) for the first time in this study, and its performance was then evaluated for the colorimetric detection of Fe3+ in different media. Several analytical methods were used to characterize the nanosensor. The effective sensing parameters were optimized by central composite design (CCD) combined with response surface methodology (RSM). Then, the prepared material's applicability to antibacterial use and to optical chemical sensing for the naked-eye detection of Fe3+ ions in aqueous solutions was evaluated. Furthermore, OL-Ag NPs-loaded paper strips were successfully applied to the colorimetric visualization of Fe3+. The colorimetric probe based on OL-AgNPs showed excellent selectivity and sensitivity towards Fe3+ ions, with an LOD and LOQ of 0.81 μM and 2.7 μM, respectively. In addition, the developed method was applied to detect Fe3+ ions in real water samples and validated at a 95% confidence level against a reference spectroscopic method.
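
The reported LOQ/LOD ratio (2.7/0.81 ≈ 3.3) is consistent with the common 3σ/10σ convention. The snippet below reproduces the figures with hypothetical blank noise and calibration slope chosen for illustration; the paper's actual calibration data are not given here.

```python
# LOD/LOQ under the common 3-sigma / 10-sigma convention:
#   LOD = 3 * sigma_blank / slope,  LOQ = 10 * sigma_blank / slope
sigma_blank = 0.0027   # hypothetical std. dev. of blank absorbance readings
slope = 0.010          # hypothetical calibration slope, absorbance per uM

lod = 3 * sigma_blank / slope
loq = 10 * sigma_blank / slope
print(f"LOD = {lod:.2f} uM, LOQ = {loq:.2f} uM, ratio = {loq/lod:.2f}")
# the ratio is always 10/3 ~ 3.33, consistent with the reported 0.81 and 2.7 uM
```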

Keywords: Ag NPs, colorimetric detection, Fe(III) ions, green synthesis, olive leaves

Procedia PDF Downloads 50
4653 Early Detection of Breast Cancer in Digital Mammograms Based on Image Processing and Artificial Intelligence

Authors: Sehreen Moorat, Mussarat Lakho

Abstract:

A method of artificial intelligence using digital mammogram data is proposed in this paper for the detection of breast cancer. Many researchers have developed techniques for the early detection of breast cancer; early diagnosis helps to save many lives. The detection of breast cancer through mammography is an effective method which detects the cancer before it can be felt and increases the survival rate. In this paper, we propose an image processing technique for enhancing the image to detect the graphical table data and markings. Texture features based on the Gray-Level Co-occurrence Matrix and intensity-based features are extracted from the selected region. For classification, a neural-network-based supervised classifier system is used which can discriminate between benign and malignant cases. Sixty-eight digital mammograms were used to train the classifier. The obtained results show that automated detection of breast cancer is beneficial for early diagnosis and increases the survival rates of breast cancer patients. The proposed system will help radiologists in the better interpretation of breast cancer.
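
A minimal sketch of the described feature extraction, using scikit-image's gray-level co-occurrence matrix utilities on a hypothetical region of interest; the distances, angles, and chosen properties are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region: np.ndarray) -> dict:
    """GLCM texture features for a selected region of a mammogram (uint8 grayscale)."""
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()
             for p in ('contrast', 'homogeneity', 'energy', 'correlation')}
    feats['mean_intensity'] = region.mean()   # simple intensity-based feature
    return feats

# Hypothetical region of interest
roi = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(glcm_features(roi))
```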

Keywords: medical imaging, cancer, processing, neural network

Procedia PDF Downloads 237
4652 Deep Learning and Accurate Performance Measure Processes for Cyber Attack Detection among Web Logs

Authors: Noureddine Mohtaram, Jeremy Patrix, Jerome Verny

Abstract:

As an enormous number of online services have been developed into web applications, security problems based on web applications are becoming more serious. Most intrusion detection systems rely on each request to find a cyber-attack rather than on user behavior, and these systems can only protect web applications against known vulnerabilities rather than certain zero-day attacks. In order to detect new attacks, we analyze the HTTP protocol logs of web servers and divide the traffic into two categories: normal requests and malicious attacks. On the other hand, the quality of the results obtained by deep learning (DL) in various areas of big data provides an important motivation to apply it to cybersecurity. Deep learning for attack detection in cybersecurity has the potential to be a robust tool, from small transformations to new attacks, due to its capability to extract higher-level features. This research takes a new approach, applying deep learning to cybersecurity, to classify these two categories in order to eliminate attacks and protect the web servers of the defense sector, which encounters different web traffic compared to other sectors (such as e-commerce, web apps, etc.). The results show that by using a machine learning method, a higher accuracy rate and a lower false-alarm rate can be achieved.

Keywords: anomaly detection, HTTP protocol, logs, cyber attack, deep learning

Procedia PDF Downloads 183
4651 A High Performance Piano Note Recognition Scheme via Precise Onset Detection and Segmented Short-Time Fourier Transform

Authors: Sonali Banrjee, Swarup Kumar Mitra, Aritra Acharyya

Abstract:

A piano note recognition method is proposed by the authors in this paper. The authors use a comprehensive method for the onset detection of each note present in a piano piece, followed by a segmented short-time Fourier transform (STFT) for the identification of the notes. The performance evaluation of the proposed method is carried out in harsh noisy environments by adding different levels of additive white Gaussian noise (AWGN), with different signal-to-noise ratios (SNR), to the original signal, and evaluating the note detection error rate (NDER) of piano pieces consisting of different numbers of notes at different SNR levels. The NDER is found to remain within 15% for all piano pieces under consideration when the SNR is kept above 8 dB.
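
A rough sketch of the described pipeline (AWGN at a chosen SNR, onset detection, then a per-segment STFT) using librosa; the file name, FFT size, and the simple dominant-bin pitch estimate are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
import librosa

y, sr = librosa.load('piano_piece.wav', sr=None)  # hypothetical recording

def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Add white Gaussian noise at the requested SNR (dB)."""
    p_sig = np.mean(signal**2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return signal + np.random.normal(scale=np.sqrt(p_noise), size=signal.shape)

y_noisy = add_awgn(y, snr_db=10.0)

# Onset detection, then an STFT over each inter-onset segment
onsets = librosa.onset.onset_detect(y=y_noisy, sr=sr, units='samples')
for start, end in zip(onsets, list(onsets[1:]) + [len(y_noisy)]):
    seg = y_noisy[start:end]
    S = np.abs(librosa.stft(seg, n_fft=4096))
    # dominant frequency bin -> crude candidate fundamental of the note
    f0 = librosa.fft_frequencies(sr=sr, n_fft=4096)[S.mean(axis=1).argmax()]
    print(f"note onset at {start/sr:.2f}s, dominant frequency ~ {f0:.1f} Hz")
```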

Keywords: AWGN, onset detection, piano note, STFT

Procedia PDF Downloads 144
4650 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the equivalent circuit model's performance, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. In general, increasing the model order improves the model accuracy. However, a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency-domain estimation is carried out by first estimating the frequencies of the input and output by subspace decomposition, then choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman-filter-based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source is chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests were carried out on a 2.6 Ah LiFePO₄ battery.
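
As one example of the estimators mentioned, here is a generic recursive least-squares (RLS) update applied to a hypothetical ARX regression standing in for a discretized second-order ECM; the regressor construction and noise levels are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.999):
    """One recursive least-squares update with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + (phi.T @ P @ phi).item())      # gain
    theta = theta + K.ravel() * (y - (phi.T @ theta).item())
    P = (P - K @ phi.T @ P) / lam                       # covariance update
    return theta, P

# Hypothetical ARX form of a discretized second-order ECM:
#   v_k = a1*v_{k-1} + a2*v_{k-2} + b0*i_k + b1*i_{k-1} + b2*i_{k-2}
rng = np.random.default_rng(0)
true_theta = np.array([1.2, -0.3, 0.05, 0.02, 0.01])
i = rng.normal(size=500)            # excitation current (HPPC-like pulses)
v = np.zeros(500)
for k in range(2, 500):
    v[k] = true_theta @ np.array([v[k-1], v[k-2], i[k], i[k-1], i[k-2]]) \
           + 1e-3 * rng.normal()

theta, P = np.zeros(5), 1e3 * np.eye(5)
for k in range(2, 500):
    phi = np.array([v[k-1], v[k-2], i[k], i[k-1], i[k-2]])
    theta, P = rls_step(theta, P, phi, v[k])
print(np.round(theta, 3))           # converges toward true_theta
```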

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 120
4649 An Erudite Technique for Face Detection and Recognition Using Curvature Analysis

Authors: S. Jagadeesh Kumar

Abstract:

Face detection and recognition is an authoritative technology for image database management, video surveillance, and human-computer interaction (HCI). Face recognition is a rapidly maturing method that has been extensively deployed in forensics, such as criminal identification, secure access control, and custodial security. This paper recommends a technique using curvature analysis (CA) that has a lower incidence of false positives, operates in different lighting environments, and removes the artifacts introduced during image acquisition by the ring correction in polar coordinates (RCP) method. The technique applies mean and median filtering to remove these artifacts, working in polar coordinates during image acquisition. Experimental results for face detection and recognition confirm decent performance even under diagonal orientation and pose variation.

Keywords: curvature analysis, ring correction in polar coordinate method, face detection, face recognition, human computer interaction

Procedia PDF Downloads 262
4648 Lead-Time Estimation Approach Using the Process Capability Index

Authors: Abdel-Aziz M. Mohamed

Abstract:

This research proposes a methodology to estimate the customer order lead time in a supply chain based on the process capability index. Both the case when the process output is normally distributed and the case when it is not are considered. The relationships between the system capability indices in both service and manufacturing applications, delivery system reliability, and the percentage of orders delivered after their promised due dates are presented. The proposed method can be used to examine the current process's capability to deliver orders within the promised lead time. If the system is found to be incapable, the method can be used to help revise the current lead time to a proper value according to the service reliability level selected by management. Numerical examples and a case study describing the lead-time estimation methodology and testing the system's capability to deliver orders before their promised due dates are presented.
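
For the normal case, the link between a promised due date and delivery reliability reduces to a z-score calculation. The sketch below uses hypothetical lead-time parameters; the paper's specific capability index definitions are not reproduced.

```python
from scipy.stats import norm

mu, sigma = 12.0, 2.0      # hypothetical lead-time mean and std. dev. (days)
due = 15.0                 # promised due date (days after order placement)

# Probability an order is delivered on time under normality
reliability = norm.cdf((due - mu) / sigma)
late_pct = 100 * (1 - reliability)
print(f"on-time probability = {reliability:.3f}, late orders = {late_pct:.1f}%")

# If management selects 99% service reliability, revise the promised lead time:
target = 0.99
revised = mu + norm.ppf(target) * sigma
print(f"revised promised lead time = {revised:.1f} days")
```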

Keywords: lead-time estimation, process capability index, delivery system reliability, statistical analysis, service achievement index, service quality

Procedia PDF Downloads 539
4647 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are time-to-event, so-called survival, data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a causal sense. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we propose modified estimating equations. After intuitive estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample characteristics of the estimators are illustrated via simulation studies and the Stanford heart transplant data. To sum up the study, the bias of the covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, an expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias raised in the model due to the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 103
4646 A Review of Intelligent Fire Management Systems to Reduce Wildfires

Authors: Nomfundo Ngombane, Topside E. Mathonsi

Abstract:

Remote sensing and satellite imaging have been widely used to detect wildfires; nevertheless, these technologies present some limitations for early wildfire detection, as they are greatly influenced by weather conditions and can miss small fires. Fires need to have spread a few kilometers for the technologies to provide accurate detection. The South African Advanced Fire Information System uses MODIS (Moderate Resolution Imaging Spectroradiometer) for satellite imaging. MODIS has limitations, as it can miss small fires and can fall short in validating fire vulnerability. Thus, in future work, a machine learning algorithm will be designed and implemented for the early detection of wildfires. A simulator will be used to evaluate the effectiveness of the proposed solution, and the results of the simulation will be presented.

Keywords: moderate resolution imaging spectroradiometer, advanced fire information system, machine learning algorithm, detection of wildfires

Procedia PDF Downloads 63
4645 Facility Detection from Image Using Mathematical Morphology

Authors: In-Geun Lim, Sung-Woong Ra

Abstract:

As high-resolution satellite images have become available, many studies have been carried out to exploit these images in various fields. This paper proposes a method based on mathematical morphology for extracting a 'horse's hoof shaped object'. The proposed method enables an automatic object detection system to rapidly track the object of interest in a large satellite image. Mathematical morphology operates on binary images, so this method is very simple. It can therefore easily extract the 'horse's hoof shaped object' from images that have indistinct edges around the tracked object and that differ in quality depending on the imaging location, time, and environment. Using the proposed method, by which the 'horse's hoof shaped object' can be rapidly extracted, the performance of an automatic object detection system can be improved dramatically.
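
A minimal sketch of the idea using a morphological opening from SciPy: the structuring element acts as the shape template, so only regions that can contain it survive. The 'hoof' mask here is a hypothetical stand-in for the paper's actual template.

```python
import numpy as np
from scipy import ndimage

def extract_shape(binary_img: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Keep only regions that can contain the template shape (morphological opening)."""
    # Opening = erosion followed by dilation with the same structuring element:
    # it removes everything the template does not fit inside.
    return ndimage.binary_opening(binary_img, structure=template)

# Hypothetical 'horse's hoof shaped' structuring element (U-shaped mask)
hoof = np.array([[1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 1, 1, 1, 1]], dtype=bool)

img = np.zeros((200, 200), dtype=bool)
img[50:54, 50:55] = hoof        # planted target, survives the opening
img[120, 120] = True            # isolated noise pixel, removed by the opening
print(extract_shape(img, hoof).sum())
```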

Keywords: facility detection, satellite image, object, mathematical morphology

Procedia PDF Downloads 356
4644 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
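
The saddle-point test reduces to the sign of the Hessian discriminant of a fitted quadratic. Below is a minimal sketch, with the least-squares fit done in NumPy over a synthetic saddle patch; the patch size and the simple boolean test are illustrative, not the paper's full pipeline.

```python
import numpy as np

def is_x_corner(patch: np.ndarray) -> bool:
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + g to a grayscale patch and
    test whether the critical point is a saddle (an X-corner)."""
    h, w = patch.shape
    ys, xs = np.mgrid[:h, :w]
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    a, b, c, d, e, g = np.linalg.lstsq(A, z, rcond=None)[0]
    # Hessian [[2a, b], [b, 2c]]; saddle point iff its determinant is negative
    return 4*a*c - b*b < 0

# Synthetic X-corner: two opposing dark and two opposing bright quadrants
ys, xs = np.mgrid[-5:6, -5:6]
patch = (xs * ys).astype(float)     # classic saddle surface z = x*y
print(is_x_corner(patch))           # True
```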

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 382
4643 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as plurigaussian and turning bands, demonstrated more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 30
4642 Analysis of Facial Expressions with Amazon Rekognition

Authors: Kashika P. H.

Abstract:

The development of computer vision systems has been greatly aided by the efficient and precise detection of objects in images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, employing technology to tackle this issue is exceedingly challenging. In the past few years, the use of deep learning algorithms for object detection has dramatically expanded. One of the key issues in the realm of image recognition is the recognition and detection of certain notable people in randomly acquired photographs. Face recognition provides a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep-learning-based API, this article aims to recognize the various faces of people and their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific person in a vast image dataset. We have chosen the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this difficulty.
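
A minimal sketch of the kind of call involved, using the boto3 Rekognition DetectFaces API on a hypothetical local image; the region, file name, and emotion summary are illustrative choices, not the paper's setup.

```python
import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')

with open('photo.jpg', 'rb') as f:          # hypothetical input image
    image_bytes = f.read()

# Detect faces and request the full set of facial descriptors
response = rekognition.detect_faces(Image={'Bytes': image_bytes},
                                    Attributes=['ALL'])

for face in response['FaceDetails']:
    box = face['BoundingBox']
    top_emotion = max(face['Emotions'], key=lambda e: e['Confidence'])
    print(f"face at {box}, dominant emotion: "
          f"{top_emotion['Type']} ({top_emotion['Confidence']:.1f}%)")
```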

Keywords: Amazon Rekognition, API, deep learning, computer vision, face detection, text detection

Procedia PDF Downloads 82
4641 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. Although current research details existing computer vision and deep learning algorithms, their methodologies, and individual results, it also details the challenges faced by these algorithms, the resources needed to operate them, and the shortcomings experienced when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. This paper uses a pre-trained LaneNet model to detect lane and non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100k dataset and then on a custom dataset generated locally. The selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is an older one whose infrastructure and lane markings reflect its age. The performance of the proposed method is evaluated on the custom dataset and compared to its performance on the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 75
4640 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph

Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao

Abstract:

As buffer overflows are among the most dangerous vulnerabilities, their effective detection is extremely necessary. Traditional detection methods are not accurate enough and consume excessive resources in today's complex and enormous code environments. To resolve these problems, we propose a method for buffer overflow detection based on the Abstract syntax tree, Control flow graph, and Data dependency graph (BodeACD) for C/C++ programs with source code. First, BodeACD constructs function samples of buffer overflows available on GitHub and represents them as code representation sequences, which fuse the control flow, data dependencies, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns through deep learning for vulnerability detection. The experimental results show that BodeACD increases precision and recall by 6.3% and 8.5%, respectively, compared with the latest methods, and can effectively improve vulnerability detection while reducing the false-positive and false-negative rates.

Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning

Procedia PDF Downloads 148
4639 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm

Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene

Abstract:

Anomaly detection is one of the essential mechanisms for controlling and reducing production loss, especially in today's smart manufacturing. Quick anomaly detection aids in reducing the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to quickly detect anomalies in the production process in real time. The GRU is used as a feature detector and the RF as a classifier on the features produced by the GRU. The model was tested on various synthetic and real-world datasets against benchmark methods. The results show that the proposed GRU-RF outperforms the benchmark methods with the shortest time taken to detect anomalies in the production process. Based on this investigation, the proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries.
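
A minimal sketch of the GRU-as-feature-detector, RF-as-classifier split using Keras and scikit-learn on synthetic windows; the layer sizes, window shape, and placeholder labels are assumptions, not the paper's tuned configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Synthetic multivariate time-series windows: (samples, timesteps, sensors)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50, 8)).astype('float32')
y = rng.integers(0, 2, size=1000)            # 1 = anomalous window (placeholder labels)

# GRU as the feature detector
inputs = tf.keras.Input(shape=(50, 8))
features = tf.keras.layers.GRU(32)(inputs)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(features)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Random Forest classifies using the GRU's learned features
encoder = tf.keras.Model(inputs, features)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(encoder.predict(X, verbose=0), y)
```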

Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest

Procedia PDF Downloads 85
4638 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event, or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for the sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for this new bot detection model distinguishes it from previous bot detection models. We achieve 94% accuracy classifying content as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
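
A minimal sketch of the sentiment-feature idea using the Hugging Face transformers pipeline as a stand-in for the paper's Google BERT setup; the example tweets and the signed-score feature encoding are illustrative assumptions.

```python
from transformers import pipeline

# Pre-trained BERT-family sentiment model as a topic-independent feature extractor
sentiment = pipeline('sentiment-analysis')   # default English sentiment model

tweets = [
    "Just tried the new release, absolutely love it!",
    "BUY NOW!!! Best deal ever, link in bio, don't miss out!!!",
]

# One feature per tweet: signed sentiment score, usable alongside other
# account-level features in a downstream bot-vs-human classifier
for t in tweets:
    r = sentiment(t)[0]
    score = r['score'] if r['label'] == 'POSITIVE' else -r['score']
    print(f"{score:+.3f}  {t[:40]}")
```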

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 96
4637 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with the situation where existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted to determine more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with others, three different criteria are adopted. The results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data but also has satisfactory and stable performance in terms of the total estimation error under these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 335
4636 Data-Centric Anomaly Detection with Diffusion Models

Authors: Sheldon Liu, Gordon Wang, Lei Liu, Xuefeng Liu

Abstract:

Anomaly detection, also referred to as one-class classification, plays a crucial role in identifying product images that deviate from the expected distribution. This study introduces Data-centric Anomaly Detection with Diffusion Models (DCADDM), presenting a systematic strategy for data collection and further diversifying the data with image generation via diffusion models. The algorithm addresses data collection challenges in real-world scenarios and points toward data augmentation through the integration of generative AI capabilities. The paper explores the generation of normal images using diffusion models. The experiments demonstrate that with only 30% of the original number of normal images, modeling in an unsupervised setting with state-of-the-art approaches can achieve equivalent performance. With the addition of images generated via diffusion models (equivalent to 10% of the original dataset size), the proposed algorithm achieves better or equivalent anomaly localization performance.

Keywords: diffusion models, anomaly detection, data-centric, generative AI

Procedia PDF Downloads 60
4635 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which can degenerate into life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73 to 0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using the 12-lead ECG. Given the progress in artificial intelligence (AI), we develop a deep ECG model that can recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. Cardiologists confirmed that 68 of the 88 reports were AF and the others were not; the precision of AF detection by the ECG device was about 0.77. (3) A parallel four-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on the limb-lead and chest-lead ECGs. Results: The results indicate that this model performs better at AF detection than the traditional computer interpretation of the ECG device on the 88 test samples, with a PPV of 0.94, sensitivity of 0.98, and specificity of 0.80. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
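
A minimal sketch of a parallel four-layer 1-D CNN over separate limb-lead and chest-lead inputs, in Keras; the filter counts, kernel size, and 500 Hz / 10 s input shape are assumptions, as the abstract does not specify them.

```python
import tensorflow as tf

def branch(inputs):
    """Four stacked Conv1D blocks, one branch per lead group."""
    x = inputs
    for filters in (16, 32, 64, 128):
        x = tf.keras.layers.Conv1D(filters, kernel_size=7, activation='relu',
                                   padding='same')(x)
        x = tf.keras.layers.MaxPooling1D(2)(x)
    return tf.keras.layers.GlobalAveragePooling1D()(x)

# Parallel branches: 6 limb leads and 6 chest leads, 5000 samples each (10 s at 500 Hz)
limb = tf.keras.Input(shape=(5000, 6), name='limb_leads')
chest = tf.keras.Input(shape=(5000, 6), name='chest_leads')
merged = tf.keras.layers.concatenate([branch(limb), branch(chest)])
out = tf.keras.layers.Dense(1, activation='sigmoid', name='af_probability')(merged)

model = tf.keras.Model([limb, chest], out)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision(name='ppv'),
                       tf.keras.metrics.Recall(name='sensitivity')])
model.summary()
```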

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 96
4634 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks

Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner

Abstract:

Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when in fact they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices, such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices as well as provide a comprehensive platform for developing detection solutions. Methodology: The USBWARE framework is aimed at the accurate detection of both known and unknown USB-based attacks by a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by, or already developed within, our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE's dynamic and up-to-date attack repository will strengthen the capabilities of the detection framework. The attack repository's infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to maintain the detection framework's updatability by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it has several characteristics that are crucial for this type of cyber-security detection solution. Specifically, the USBWARE framework should be novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proofs of concept showed that our detection modules can be used for the efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and of detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.

Keywords: USB, device, cyber security, attack, detection

Procedia PDF Downloads 372
4633 Phillips Curve Estimation in an Emerging Economy: Evidence from Sub-National Data of Indonesia

Authors: Harry Aginta

Abstract:

Using the Phillips curve framework, this paper seeks new empirical evidence on the relationship between inflation and output in a major emerging economy. By exploiting sub-national data, the contribution of this paper is threefold. First, it resolves the issue of using on-target national inflation rates, which potentially weakens the inflation-output nexus. This is very relevant for Indonesia, as its central bank has adopted an inflation targeting framework based on national consumer price index (CPI) inflation. Second, the study tests the relevance of the mining sector in output gap estimation. The test for the mining sector is important to control for the effects of mining regulation and the nominal effects of coal prices on real economic activity. Third, the paper applies panel econometric methods, incorporating regional variation that helps improve the model estimation. The results confirm the strong presence of a Phillips curve in Indonesia. A positive output gap, which reflects excess demand, raises the inflation rate. In addition, the elasticity of the output gap is higher if the mining sector is excluded from the output gap estimation. Besides inflation adaptation, the dynamics of the exchange rate and international commodity prices are also found to affect inflation significantly. The results are robust to alternative measurements of the output gap.
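
A minimal sketch of a sub-national Phillips curve regression with region fixed effects and clustered standard errors, using statsmodels; the file and column names are placeholders, not the paper's actual dataset or specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sub-national panel: region, period, CPI inflation, output gap,
# exchange-rate depreciation, and international commodity-price inflation
df = pd.read_csv('regional_panel.csv')       # placeholder file name

# Hybrid Phillips curve with region fixed effects:
#   inflation ~ lagged inflation + output gap + external factors + region dummies
model = smf.ols(
    'inflation ~ inflation_lag + output_gap + fx_depreciation '
    '+ commodity_inflation + C(region)',
    data=df,
).fit(cov_type='cluster', cov_kwds={'groups': df['region']})
print(model.summary())
```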

Keywords: Phillips curve, inflation, Indonesia, panel data

Procedia PDF Downloads 103
4632 Applications of Hyperspectral Remote Sensing: A Commercial Perspective

Authors: Tuba Zahra, Aakash Parekh

Abstract:

Hyperspectral remote sensing refers to the imaging of objects or materials in narrow, conspicuous spectral bands. Hyperspectral images (HSI) enable the extraction of spectral signatures for the objects or materials observed. These images contain information about the reflectance of each pixel across the electromagnetic spectrum. Hyperspectral imaging enables the acquisition of data simultaneously in hundreds of spectral bands with narrow bandwidths and can provide detailed, contiguous spectral curves that traditional multispectral sensors cannot offer. The contiguous, narrow bandwidths of hyperspectral data facilitate the detailed surveying of Earth's surface features, which would otherwise not be possible with the relatively coarse bandwidths acquired by other types of imaging sensors, and provide significantly higher spectral and spatial resolution. Several use cases represent the commercial applications of hyperspectral remote sensing. Each use case represents just one of the ways that hyperspectral satellite imagery can support operational efficiency in the respective vertical. Some use cases are specific to VNIR bands, while others are specific to SWIR bands. This paper discusses the commercially viable use cases that are significant for HSI application areas such as agriculture, mining, oil and gas, defense, and environment and climate, to name a few. In theory there are any number of use cases for each application area, but an attempt has been made to streamline the use cases according to economic feasibility and commercial viability, and to present a review of the literature from this perspective. Some of the specific use cases in agriculture are crop species (sub-variety) detection, soil health mapping, pre-symptomatic crop disease detection, invasive species detection, crop condition optimization, yield estimation, and supply chain monitoring at scale. Similarly, each of the industry verticals has a specific commercially viable use case, which is discussed in the paper in detail.

Keywords: agriculture, mining, oil and gas, defense, environment and climate, hyperspectral, VNIR, SWIR

Procedia PDF Downloads 55
4631 Frequency Analysis of Minimum Ecological Flow and Gage Height in Indus River Using Maximum Likelihood Estimation

Authors: Tasir Khan, Yejuan Wan, Kalim Ullah

Abstract:

A hydrological frequency analysis was conducted to estimate the minimum flow elevation of the Indus River in Pakistan in order to protect the ecosystem. The maximum likelihood estimation (MLE) technique is used to find the best-fitted distribution for minimum ecological flows at nine stations on the Indus River. Four distributions commonly used in hydrological frequency analysis are fitted at all sites: the Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), and Pearson Type 3 (PE3) distributions. The performance of these distributions is compared using goodness-of-fit tests, such as the Kolmogorov-Smirnov test, the Anderson-Darling test, and the chi-square test. The study concludes that under the MLE method, GEV and GPA are the most suitable distributions, which can be effectively applied at all the proposed sites. The quantiles are estimated for return periods from 5 to 1000 years using MLE, which is a robust method for larger sample sizes. The results of these analyses can be used for water resources research, including water quality management, designing irrigation systems, determining downstream flow requirements for hydropower, and assessing the impact of long-term drought on the country's aquatic system.
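
A minimal sketch of the GEV branch of this workflow with SciPy: maximum likelihood fitting, a Kolmogorov-Smirnov check, and return-period quantiles, on a synthetic low-flow series standing in for a gauging-station record. For minima, the T-year quantile is taken at the 1/T non-exceedance level.

```python
import numpy as np
from scipy import stats

# Hypothetical annual minimum-flow series at one gauging station (m^3/s)
rng = np.random.default_rng(0)
flows = stats.genextreme.rvs(c=0.1, loc=200, scale=40, size=60, random_state=rng)

# Maximum likelihood fit of the GEV distribution
c, loc, scale = stats.genextreme.fit(flows)

# Kolmogorov-Smirnov goodness of fit against the fitted GEV
ks = stats.kstest(flows, 'genextreme', args=(c, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# T-year low-flow quantiles: for minima, use the 1/T non-exceedance level
for T in (5, 50, 100, 1000):
    print(f"{T:>4}-year quantile: {stats.genextreme.ppf(1/T, c, loc, scale):.1f}")
```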

Keywords: minimum ecological flow, frequency distribution, Indus River, maximum likelihood estimation

Procedia PDF Downloads 58
4630 A Case Study of Deep Learning for Disease Detection in Crops

Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell

Abstract:

In precision agriculture, one of the main tasks is the automated detection of diseases in crops. Machine learning algorithms have been studied in recent decades for such tasks in view of their potential for improving the economic outcomes that automated disease detection may attain over crop fields. The latest generation of deep learning convolutional neural networks has presented significant results in the area of image classification. Accordingly, this work tested the implementation of a deep learning convolutional neural network architecture for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the requirements of the algorithm, implemented with a deep learning framework. Two test scenarios were deployed. The first scenario trained a neural network on images extracted from a controlled environment, while the second used images from both the field and the controlled environment. The results evaluate the generalisation capacity of the neural networks with respect to the two types of images presented, and yielded a general classification accuracy of 59% in scenario 1 and 96% in scenario 2.
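
A minimal sketch of a typical augmentation strategy with Keras; the specific transforms, directory name, and image size are assumptions, since the paper does not detail its augmentation settings.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline assumed to stand in for the paper's unspecified strategy:
# each crop image is randomly rotated, shifted, zoomed, and flipped so that the
# network sees more variation than the raw controlled-environment photos offer.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2,
)

train = datagen.flow_from_directory('crop_images/', target_size=(224, 224),
                                    class_mode='categorical', subset='training')
val = datagen.flow_from_directory('crop_images/', target_size=(224, 224),
                                  class_mode='categorical', subset='validation')
```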

Keywords: convolutional neural networks, deep learning, disease detection, precision agriculture

Procedia PDF Downloads 234
4629 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation with inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve the rank deficient inverse problem is suggested. A multi-physics system whose rank deficiency is caused by response correlation is treated. Impeding information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed; feature selection with singular value decomposition (SVD) is used for the grouping. Next, BN inference is used for sequential conditional estimation according to the group hierarchy. The directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 289