Search results for: fault detection and classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5524

4744 Facile Synthesis of CuO Nanosheets on Cu Foil for H2O2 Detection

Authors: Yu-Kuei Hsu, Yan-Gu Lin

Abstract:

A facile and simple fabrication of copper(II) oxide (CuO) nanosheets on copper foil as a nanoelectrode for H2O2 sensing applications is proposed in this study. The CuO nanosheets were formed spontaneously at room temperature by immersing the copper foil in 0.1 M NaOH aqueous solution for 48 h. SEM revealed a sheet-like morphology several tens of nanometers in thickness and ~500 nm in width, and structural analysis by XRD and Raman spectroscopy confirmed the nanosheets to be monoclinic-phase CuO. The directly grown CuO nanosheet film is mechanically stable and offers an excellent electrochemical sensing platform. The CuO nanosheet electrode shows an excellent electrocatalytic response to H2O2, with significantly lower overpotentials for its oxidation and reduction, and also exhibits a fast response and high sensitivity for the amperometric detection of H2O2. The spontaneously grown CuO nanosheet electrode is readily applicable to other analytes and has great potential in electrochemical detection.

Keywords: CuO, nanosheets, H2O2 detection, Cu foil

Procedia PDF Downloads 278
4743 Random Subspace Ensemble of CMAC Classifiers

Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi

Abstract:

The rapid growth of domains whose data have a large number of features but only a limited number of samples has made it difficult to construct strong classifiers. Reducing the dimensionality of the feature space therefore becomes an essential step in the classification task. The random subspace method (or attribute bagging) is an ensemble approach in which each base learner is trained on a subset of the features. In the present paper, we introduce a Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), in which each CMAC is trained on a subset of the features, and we use this model for the classification task. To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance.
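
As a rough illustration of the random subspace idea described above, the sketch below trains an ensemble in which each base learner only sees a random subset of the features. CMAC networks are not available in scikit-learn, so a small MLP is used purely as a stand-in base learner, and the bundled breast-cancer dataset replaces the UCI datasets used in the paper.

```python
# Minimal random subspace ensemble sketch with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Random subspace method: no sample bootstrapping, each base learner
# sees only a random 50% subset of the features.
rse = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),  # "base_estimator" in older scikit-learn
    n_estimators=25,
    bootstrap=False,          # use all samples
    bootstrap_features=False,
    max_features=0.5,         # random feature subset per base learner
    random_state=0,
)

print("Mean CV accuracy:", cross_val_score(rse, X, y, cv=5).mean())
```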

Keywords: classification, random subspace, ensemble, CMAC neural network

Procedia PDF Downloads 313
4742 Hydrothermal Alteration and Mineralization of Cisarua, Nanggung District, Bogor Regency, West Java, Indonesia

Authors: A. Asaga, N. I. Basuki

Abstract:

The research area is located in Cisarua, Bogor Regency, West Java, and covers about 12.8 km². The area belongs to the mining region of PT Aneka Tambang Tbk. The purpose of this research is to study the geological conditions, the alteration types and patterns, and the type of mineralization. The geomorphology of the research area is at a young to mature stage and can be divided into the Ciparigi Parasite Volcanic Cone Unit, the Ciparigi Caldera Valley Unit, the Ciparigi Caldera Rim Hill Unit, and the Pongkor Volcanic Hill. The stratigraphy of the research area consists of Laharic Breccia (Pliocene), Pyroclastic Breccia, Lapilli Tuff, Flow Tuff, Fall Tuff, and Andesite Lava (Pleistocene). Based on the mineral composition, a change in magma composition from rhyolitic to andesitic is interpreted. The geological structures in the research area were caused by NE-SW and N-S stress directions; they are the Ciparay Right Strike-Slip Fault (Pliocene), the Cisarua Right Strike-Slip Fault, the G. Singa Left Strike-Slip Fault, and the Cinyuncung Right Strike-Slip Fault (Pleistocene). Weak to strong hydrothermal alteration occurs in the research area, comprising a Chlorite ± Smectite ± Halloysite Zone, a Smectite - Illite - Quartz Zone, a Smectite - Kaolinite - Illite - Chlorite Zone, and a Smectite - Chlorite - Calcite - Quartz Zone. The distribution and assemblage of the alteration minerals are controlled by lithology and by the Pleistocene geological structures. Mineralization produced ore minerals including pyrite, marcasite, chalcopyrite, sphalerite, galena, and chalcocite. Calcite and quartz veins show colloform, comb, and crystalline textures. The hydrothermal alteration assemblages, ore minerals, and cavity-filling textures suggest that the mineralization in the research area is of the low-sulphidation epithermal type.

Keywords: Pongkor, hydrothermal alteration, epithermal, geochemistry

Procedia PDF Downloads 386
4741 Risk Analysis of Leaks from a Subsea Oil Facility Based on Fuzzy Logic Techniques

Authors: Belén Vinaixa Kinnear, Arturo Hidalgo López, Bernardo Elembo Wilasi, Pablo Fernández Pérez, Cecilia Hernández Fuentealba

Abstract:

The expanded use of risk assessment in legislative and corporate decision-making has increased the role of expert judgement in providing data for safety-related decisions. Expert judgements are required at most steps of risk assessment: hazard identification, risk estimation, risk evaluation, and analysis of options. This paper presents a fault tree analysis (FTA), a probabilistic failure analysis, applied to oil leakage in a subsea production system. In standard FTA, the failure probabilities of system components are treated as exact values when evaluating the failure probability of the top event. However, data for estimating component failure rates are persistently scarce in the drilling industry, so fuzzy set theory can be used to address this issue. The aim of this paper is to examine leaks from the Zafiro West subsea oil facility by using fuzzy fault tree analysis (FFTA). The research thus makes theoretical and practical contributions to maritime safety and environmental protection. FTA has also traditionally been an effective strategy for identifying hazards in nuclear installations and the power industry.
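
The following sketch illustrates how fuzzy failure probabilities can be propagated through a fault tree. It uses triangular fuzzy numbers and the usual component-wise gate approximations; the event names, tree structure and probability values are hypothetical and are not taken from the Zafiro West study.

```python
# Minimal fuzzy fault tree evaluation with triangular fuzzy numbers (low, mid, high).
import numpy as np

def fuzzy_and(events):
    """AND gate: component-wise product of triangular fuzzy probabilities."""
    return np.prod(np.array(events), axis=0)

def fuzzy_or(events):
    """OR gate: 1 - prod(1 - p), applied component-wise."""
    return 1.0 - np.prod(1.0 - np.array(events), axis=0)

# Basic events as (low, mid, high) fuzzy failure probabilities (illustrative values)
valve_failure    = (0.010, 0.020, 0.030)
seal_degradation = (0.005, 0.010, 0.020)
corrosion        = (0.002, 0.004, 0.008)

# Hypothetical top event "subsea leak": corrosion OR (valve failure AND seal degradation)
top_event = fuzzy_or([corrosion, fuzzy_and([valve_failure, seal_degradation])])
print("Fuzzy probability of top event (low, mid, high):", top_event)
```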

Keywords: expert judgment, probability assessment, fault tree analysis, risk analysis, oil pipelines, subsea production system, drilling, quantitative risk analysis, leakage failure, top event, off-shore industry

Procedia PDF Downloads 175
4740 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is in demand in many computer vision applications, e.g., video surveillance, traffic management and medical imaging. Processing such video applications requires high computational power, so the optimal solution is the collaboration of a CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of a High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration: CPU utilization drops and the frame rate reaches 60 fps for a 1080p full-HD input video stream.

Keywords: high level synthesis, canny edge detection, hardware accelerators, computer vision

Procedia PDF Downloads 461
4739 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network

Authors: Li Qingjian, Li Ke, He Chun, Huang Yong

Abstract:

In this paper, a method combining a deep belief network with a self-organizing neural network is proposed to classify targets. The method mainly addresses the high nonlinearity of hyperspectral images, the high dimensionality of the samples, and the difficulty of designing a classifier. The main features of the original data are extracted by the deep belief network; during feature extraction, labelled samples are added to fine-tune the network and enrich the extracted characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method effectively reduces the spectral dimensionality of the data while preserving most of the raw information, alleviates the long training times that traditional clustering and deep learning algorithms suffer when labelled samples are scarce, and improves classification accuracy and robustness. Data simulations show that the proposed network structure achieves higher classification precision when only a small number of labelled samples are available.

Keywords: DBN, SOM, pattern classification, hyperspectral, data compression

Procedia PDF Downloads 324
4738 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video

Authors: Nidhal K. Azawi, John M. Gauch

Abstract:

Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and a deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.

Keywords: colonoscopy classification, feature extraction, image alignment, machine learning

Procedia PDF Downloads 239
4737 Predicting Groundwater Areas Using Data Mining Techniques: Groundwater in Jordan as Case Study

Authors: Faisal Aburub, Wael Hadi

Abstract:

Data mining is the process of extracting useful or hidden information from a large database. The extracted information can be used to discover relationships among features, where data objects are grouped according to logical relationships, or to assign unseen objects to one of the predefined groups. In this paper, we investigate four well-known data mining algorithms for predicting groundwater areas in Jordan: Support Vector Machines (SVMs), Naïve Bayes (NB), K-Nearest Neighbor (kNN) and Classification Based on Association Rules (CBA). The experimental results indicate that the SVM algorithm outperformed the other algorithms in terms of classification accuracy, precision and F1 measure, using datasets of groundwater areas collected from the Jordanian Ministry of Water and Irrigation.
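
A minimal sketch of this kind of classifier comparison is shown below, assuming scikit-learn. The Jordanian groundwater dataset is not reproduced here, so a bundled toy dataset stands in, and CBA is omitted because it has no scikit-learn implementation.

```python
# Cross-validated comparison of SVM, Naive Bayes and kNN on a stand-in dataset.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "NB":  GaussianNB(),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "precision_macro", "f1_macro"])
    print(name,
          round(scores["test_accuracy"].mean(), 3),
          round(scores["test_precision_macro"].mean(), 3),
          round(scores["test_f1_macro"].mean(), 3))
```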

Keywords: classification, data mining, evaluation measures, groundwater

Procedia PDF Downloads 260
4736 Plagiarism Detection for Flowchart and Figures in Texts

Authors: Ahmadu Maidorawa, Idrissa Djibo, Muhammad Tella

Abstract:

This paper presents a method for detecting flowchart and figure plagiarism based on image-processing shape features and multimedia retrieval. The method retrieves flowcharts with ranked similarity according to different matching sets. Plagiarism is a well-known phenomenon in the academic arena, and copying other people's work is considered a serious offense that needs to be checked. Many plagiarism detection systems, such as Turnitin, have been developed to provide these checks. Most, if not all, discard figures and charts before checking for plagiarism, which creates a loophole that people can exploit: figures and charts can be plagiarized easily without current plagiarism systems detecting it. Very few papers discuss flowchart plagiarism detection; therefore, there is a need to develop a system that will detect plagiarism in figures and charts.

Keywords: flowchart, multimedia retrieval, figures similarity, image comparison, figure retrieval

Procedia PDF Downloads 444
4735 Investigation of Utilizing L-Band Horn Antenna in Landmine Detection

Authors: Ahmad H. Abdelgwad, Ahmed A. Nashat

Abstract:

Landmine detection is an important yet challenging problem that remains to be solved. Ground Penetrating Radar (GPR) is a powerful and rapidly maturing technology for subsurface threat identification. The detection methodology of GPR depends mainly on the contrast between the dielectric properties of the searched target and those of its surrounding soil. This contrast produces a partial reflection of the electromagnetic pulses that are transmitted into the soil and then collected by the GPR. One of the most critical hardware components for the performance of GPR is the antenna system. The current paper explores the design and simulation of a pyramidal horn antenna operating at L-band frequencies (1-2 GHz) to detect a landmine. A prototype model of the GPR system setup is developed to simulate a full-wave analysis of the electromagnetic fields in different soil types. The contrast in dielectric permittivity between the landmine and the sandy soil is the most important parameter for detecting the presence of a landmine. The L-band horn antenna proved to be well suited for the investigation of landmine detection.

Keywords: full wave analysis, ground penetrating radar, horn antenna design, landmine detection

Procedia PDF Downloads 201
4734 Spatio-Temporal Assessment of Urban Growth and Land Use Change in Islamabad Using Object-Based Classification Method

Authors: Rabia Shabbir, Sheikh Saeed Ahmad, Amna Butt

Abstract:

Rapid land use changes have taken place in Islamabad, the capital city of Pakistan, over the past decades due to accelerated urbanization and industrialization. In this study, land use changes in the metropolitan area of Islamabad were observed through the combined use of GIS and satellite remote sensing over a period of 15 years. High-resolution Google Earth images from 2000-2015 were downloaded, and an object-based classification method was applied for accurate classification using eCognition software. Information on urban settlements, industrial areas, barren land, agricultural areas, vegetation, water, and transportation infrastructure was extracted. The results showed that the city experienced spatial expansion, rapid urban growth, land use change and expanding transportation infrastructure. The study concluded that the integration of GIS and remote sensing is an effective approach for analyzing the spatial pattern of urban growth and land use change.

Keywords: land use change, urban growth, Islamabad, object-based classification, Google Earth, remote sensing, GIS

Procedia PDF Downloads 141
4733 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion and environmental changes, resulting in low-quality videos and affecting feature extraction and description efficiency. Moreover, traditional text detection methods cannot be directly applied to lecture videos, so robust feature extraction methods dedicated to this specific video genre are required for accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. Moment functions are used for robust and effective feature extraction; two distinct types of moments are used, orthogonal and non-orthogonal. Zernike and pseudo-Zernike moments are used as orthogonal moments, whereas Hu moments are used as non-orthogonal ones. Their expressivity and description efficiency are reported and discussed. The results show that, in general, the orthogonal moments achieve higher accuracy than the non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
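
The sketch below illustrates moment-based feature extraction for candidate text regions using the non-orthogonal Hu moments available in OpenCV; Zernike and pseudo-Zernike moments would require an additional library, and the file name and the final text/non-text classifier are placeholders.

```python
# Hu-moment feature extraction for candidate regions of a lecture-video frame.
import cv2
import numpy as np

def hu_features(region_gray):
    """Return the 7 log-scaled Hu moment invariants of a grayscale patch."""
    m = cv2.moments(region_gray)
    hu = cv2.HuMoments(m).flatten()
    # log scaling is customary because the raw values span many orders of magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

frame = cv2.imread("lecture_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    feats = hu_features(frame[y:y + h, x:x + w])
    # feats would then be fed to a text / non-text classifier
```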

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 133
4732 Analyzing Tools and Techniques for Classification In Educational Data Mining: A Survey

Authors: D. I. George Amalarethinam, A. Emima

Abstract:

Educational Data Mining (EDM) is one of the newest topics to emerge in recent years, and it is concerned with developing methods for analyzing various types of data gathered from the educational sphere. EDM methods and techniques with machine learning algorithms are used to extract meaningful and usable information from huge databases. For scientists and researchers, realistic applications of machine learning in EDM offer new frontiers and present new problems. One of the most important research areas in EDM is predicting student success. Prediction algorithms and techniques must be developed to forecast students' performance, which helps tutors and institutions boost the level of student performance. This paper examines various classification techniques used as prediction methods and the data mining tools used in EDM.

Keywords: classification technique, data mining, EDM methods, prediction methods

Procedia PDF Downloads 105
4731 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the control domain that refers to different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC) and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant specific and the application needs a large investment, which requires an analysis of the expected benefits before the control is implemented. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in plant performance due to advanced control. In this research, such an exercise is undertaken to assess the needs of an APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulations in order to assess the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; and (3) to develop appropriate performance indices (PI) to compare the performance of different controllers and to develop a novel way to present the tuning map of a controller. These objectives were achieved by applying a PID controller and a particular type of MPC, dynamic matrix control (DMC), to a multi-tank process simulated in Loop-Pro. The controller performance was then evaluated while changing the controller parameters. This evaluation was based on indices related to the difference between the set point and the process variable, in order to compare both controllers. The same principle was applied to continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in Matlab; for these processes, custom programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, using appropriate indices, a predictive controller can improve the performance of the control loop significantly.
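
As an illustration of the performance indices mentioned above, the sketch below computes common error-integral indices (IAE, ISE, ITAE) from a logged set point and process variable; the response curves are synthetic and the paper's exact index definitions may differ.

```python
# Error-integral performance indices for comparing controller responses.
import numpy as np

def performance_indices(t, setpoint, pv):
    e = setpoint - pv
    dt = np.gradient(t)
    iae  = np.sum(np.abs(e) * dt)        # integral of absolute error
    ise  = np.sum(e**2 * dt)             # integral of squared error
    itae = np.sum(t * np.abs(e) * dt)    # time-weighted absolute error
    return iae, ise, itae

t = np.linspace(0, 50, 501)
sp = np.ones_like(t)                     # unit step set point
pv_pid = 1 - np.exp(-t / 5)              # synthetic "PID" response
pv_mpc = 1 - np.exp(-t / 2)              # synthetic "MPC/DMC" response

print("PID (IAE, ISE, ITAE):", performance_indices(t, sp, pv_pid))
print("MPC (IAE, ISE, ITAE):", performance_indices(t, sp, pv_mpc))
```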

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)

Procedia PDF Downloads 393
4730 Morphological Processing of Punjabi Text for Sentiment Analysis of Farmer Suicides

Authors: Jaspreet Singh, Gurvinder Singh, Prabhsimran Singh, Rajinder Singh, Prithvipal Singh, Karanjeet Singh Kahlon, Ravinder Singh Sawhney

Abstract:

Morphological evaluation of Indian languages is one of the burgeoning fields in the area of Natural Language Processing (NLP). The evaluation of a language is an eminent task in the era of information retrieval and text mining, and the extraction and classification of knowledge from text can be exploited for sentiment analysis and morphological evaluation. This study coalesces morphological evaluation and sentiment analysis for the task of classifying farmer suicide cases reported in the Punjab state of India. The pre-processing of Punjabi text involves morphological evaluation and normalization of Punjabi word tokens, followed by training of the proposed model using deep learning classification on Punjabi text extracted from online Punjabi news reports. The class-wise accuracies of sentiment prediction for the four negatively oriented classes of farmer suicide cases are 93.85%, 88.53%, 83.3%, and 95.45%, respectively. The overall accuracy of sentiment classification obtained using the proposed framework on 275 Punjabi text documents is found to be 90.29%.

Keywords: deep neural network, farmer suicides, morphological processing, punjabi text, sentiment analysis

Procedia PDF Downloads 300
4729 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, owing to the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights, and the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively; however, the proposed method selects 158 features, while the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically when only the first few features are used: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA and Salinas A data sets. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to specific purposes, and then use only the corresponding sensor bands to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
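
A minimal sketch of the generalized RBF kernel with one bandwidth per feature, and of a simple within/between separability score built from it, is given below; the genetic-algorithm tuning of the bandwidths is not reproduced, and the separability formula is only illustrative of the idea.

```python
# Generalized RBF kernel with per-feature bandwidths and a toy separability score.
import numpy as np

def generalized_rbf(X, Y, bandwidths):
    """k(x, y) = exp(-0.5 * sum_d ((x_d - y_d) / sigma_d)^2)."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-0.5 * np.sum((diff / bandwidths) ** 2, axis=2))

def class_separability(X, y, bandwidths):
    """Larger is better: mean within-class similarity minus mean between-class similarity."""
    within, between = [], []
    classes = np.unique(y)
    for i, ci in enumerate(classes):
        Xi = X[y == ci]
        within.append(generalized_rbf(Xi, Xi, bandwidths).mean())
        for cj in classes[i + 1:]:
            between.append(generalized_rbf(Xi, X[y == cj], bandwidths).mean())
    return np.mean(within) - np.mean(between)

# Synthetic stand-in for a handful of spectral bands
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
sigma = np.array([1.0, 1.0, 5.0, 5.0, 5.0])   # small sigma = important band
print("Separability:", class_separability(X, y, sigma))
# The reciprocals 1/sigma act as band weights; a GA would search for the
# sigma vector that maximizes this separability.
```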

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 250
4728 Personal Information Classification Based on Deep Learning in Automatic Form Filling System

Authors: Shunzuo Wu, Xudong Luo, Yuanxiu Liao

Abstract:

Recently, the rapid development of deep learning has allowed artificial intelligence (AI) to penetrate many fields, replacing manual work there. In particular, AI systems have become a research focus in the field of office automation. To meet real needs in office automation, in this paper we develop an automatic form filling system. Specifically, it uses two classical neural network models and several word embedding models to classify various relevant information elicited from the Internet. When training the neural network models, we use less noisy and balanced data. We conduct a series of experiments to test our system, and the results show that it can achieve better classification results.

Keywords: artificial intelligence and office, NLP, deep learning, text classification

Procedia PDF Downloads 169
4727 Fault Diagnosis of Squirrel-Cage Induction Motor by a Neural Network Multi-Models

Authors: Yahia Kourd, N. Guersi, D. Lefebvre

Abstract:

In this paper, we propose to study fault diagnosis in squirrel-cage induction motors using MLP neural networks. We use neural models of the healthy and faulty behaviour in order to detect and isolate faults in the machine. In the first part of this work, we created a neural model for the healthy state using Matlab and a motor located at LGEB, by acquiring input and output data from this machine. We then detect faults in the machine by residual generation. These residuals are not sufficient to isolate the existing faults, so we propose additional neural networks to represent the faulty behaviours. From the analysis of these residuals and the choice of a threshold, we propose a method capable of performing the detection and diagnosis of some faults in asynchronous machines with a squirrel-cage rotor.
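
The residual-generation step can be sketched as below: a neural model of the healthy machine predicts the expected outputs, and a fault is flagged when the residual between measured and predicted outputs exceeds a threshold. The model, data and threshold here are placeholders rather than the paper's motor measurements.

```python
# Residual-based fault detection with a neural model of the healthy machine.
import numpy as np
from sklearn.neural_network import MLPRegressor

# healthy_model: maps motor inputs (e.g. voltages) to expected outputs (e.g. currents)
healthy_model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)

X_train = np.random.rand(500, 3)                  # stand-in input measurements
y_train = X_train @ np.array([1.0, 0.5, -0.2])    # stand-in healthy behaviour
healthy_model.fit(X_train, y_train)

def detect_fault(u, y_measured, threshold=0.1):
    """Flag a fault when the residual exceeds a fixed threshold."""
    residual = y_measured - healthy_model.predict(u)
    return np.abs(residual) > threshold, residual

u_new = np.random.rand(10, 3)
y_faulty = u_new @ np.array([1.0, 0.5, -0.2]) + 0.3   # offset simulates a fault
flags, residuals = detect_fault(u_new, y_faulty)
print(flags)
# Isolation would compare the residuals against those of additional neural
# models trained on each known faulty behaviour.
```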

Keywords: faults diagnosis, neural networks, multi-models, squirrel-cage induction motor

Procedia PDF Downloads 614
4726 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps, and we use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature; LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it also facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real Reference Energy Disaggregation Dataset (REDD), computing confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
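
Since DTW is central to the identification step, a minimal dynamic time warping distance is sketched below for matching a detected power segment against a stored appliance signature; the two sequences are synthetic stand-ins for 1/60 Hz power readings.

```python
# Classic O(n*m) dynamic time warping distance with absolute-difference local cost.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

segment   = np.array([0, 5, 120, 118, 119, 121, 4, 0], dtype=float)   # observed power segment (W)
signature = np.array([0, 118, 120, 120, 119, 0], dtype=float)         # stored appliance signature (W)
print("DTW distance:", dtw_distance(segment, signature))
```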

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 60
4725 Fault Diagnosis in Induction Motors Using the Discrete Wavelet Transform

Authors: Khaled Yahia

Abstract:

This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) to analyse the current Park's vector modulus (CPVM), inter-turn short-circuit fault diagnosis can be achieved. The method is based on the decomposition of the CPVM signal, from which wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail coefficient of known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
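
The severity indicator can be sketched as follows with PyWavelets: decompose a CPVM signal with the DWT and take the energy share of a detail band as a fault severity factor. The synthetic signal, the 'db8' wavelet and the decomposition level are assumptions, not the paper's exact settings.

```python
# DWT decomposition of a synthetic CPVM signal and energy of a detail band.
import numpy as np
import pywt

fs = 10_000                          # sampling frequency (Hz)
t = np.arange(0, 1, 1 / fs)
# A healthy CPVM is roughly constant; here an oscillation is added after t = 0.5 s
# purely to mimic a fault-related component (illustrative only).
cpvm = 10 + 0.4 * np.sin(2 * np.pi * 100 * t) * (t > 0.5)

coeffs = pywt.wavedec(cpvm, "db8", level=6)   # [cA6, cD6, cD5, ..., cD1]

def band_energy(c):
    return float(np.sum(np.asarray(c) ** 2))

total_detail_energy = sum(band_energy(c) for c in coeffs[1:])
fsf = band_energy(coeffs[1]) / total_detail_energy   # share of energy in cD6
print("Fault severity factor (illustrative):", round(fsf, 4))
```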

Keywords: induction motors (IMs), inter-turn short-circuits diagnosis, discrete wavelet transform (DWT), current park’s vector modulus (CPVM)

Procedia PDF Downloads 548
4724 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM alone, Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
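
A minimal sketch of the information-gain-plus-SVM pipeline is shown below, using mutual information as the information-gain estimate and cross-validated SVM classification; a bundled toy dataset stands in for the four-city AQI data.

```python
# Information-gain-style feature selection followed by an SVM, under cross-validation.
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=16),   # keep the 16 most informative features
    SVC(kernel="rbf", C=10.0),
)
print("IG + SVM mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```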

Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation

Procedia PDF Downloads 218
4723 Improving Early Detection, Diagnosis And Intervention For Children With Autism Spectrum Disorder: A Cross-sectional Survey In China

Authors: Yushen Dai, Tao Deng, Miaoying Chen, Baoqin Huang, Yan Ji, Yongshen Feng, Shaofei Liu, Dongmei Zhong, Tao Zhang, Lifeng Zhang

Abstract:

Background: Detection and diagnosis are prerequisites for early intervention in the care of children with Autism Spectrum Disorder (ASD); however, few studies have focused on this topic. Aim: This study aims to characterize the timing from symptom detection to intervention in children with ASD and to identify potential predictors of early detection, diagnosis, and intervention. Methods and procedures: A cross-sectional survey was conducted with 314 parents of children with ASD in Guangzhou, China. Outcomes and results: This study found that most children (76.24%) were diagnosed within one year after detection, and 25.8% of them did not receive an intervention after diagnosis. Predictors of ASD diagnosis included ASD-related symptoms identified at a younger age, more serious symptoms, and initial symptoms involving abnormal development and sensory anomalies. ASD-related symptoms observed at an older age, initial symptoms involving social deficits, sensory anomalies, and no language impairment, parents serving as the primary caregivers, lower family income, and less use of social support increased the odds of a time lag between detection and diagnosis. Children whose fathers had a lower level of education were less likely to receive an intervention. Conclusions and implications: The study described the time to detection, diagnosis, and intervention for children with ASD. The findings suggest that ASD-related symptoms, the age at which symptoms first become a concern, the primary caregiver's role, the father's educational level, and the family's economic status should be considered when offering support to improve early detection, diagnosis, and intervention. Helping children and their families take full advantage of support is also important.

Keywords: autism spectrum disorder, child, detection, diagnosis, intervention, social support

Procedia PDF Downloads 73
4722 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack a Convolutional Neural Network (CNN) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks a Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. Then, we find the perturbation of the regions that best fools the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to get a new region image that looks similar to the original to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model could drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
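
The FGSM perturbation step can be sketched as follows in PyTorch: an extracted, resized region is moved by epsilon in the direction of the sign of the loss gradient. The classifier here is a generic untrained torchvision model standing in for the network inside the attacked detector, and region extraction and re-insertion are only indicated in comments.

```python
# Minimal FGSM perturbation of a (resized) region crop.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # untrained stand-in classifier
model.eval()

def fgsm_perturb(region, true_label, epsilon=0.03):
    region = region.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(region), true_label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss
    adversarial = region + epsilon * region.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# A region already extracted from the detector output and resized to 224x224
region = torch.rand(1, 3, 224, 224)
label = torch.tensor([3])
adv_region = fgsm_perturb(region, label)
# adv_region would then be pasted back into the original image before
# re-running the detector on the attacked image.
```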

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 171
4721 Improvement of Brain Tumors Detection Using Markers and Boundaries Transform

Authors: Yousif Mohamed Y. Abdallah, Mommen A. Alkhir, Amel S. Algaddal

Abstract:

This was an experimental study of brain segmentation in MRI images using edge detection and morphology filters. Each brain MRI film was scanned using a digitizer scanner and then processed using an image processing program (MATLAB), in which the segmentation was studied. The scanned image was saved in TIFF format to preserve image quality. Brain tissue can be easily detected in an MRI image if the object has sufficient contrast from the background. We used edge detection and basic morphology tools to detect the brain. The segmentation steps for the MRI images using detection and morphology filters were: reading the image, detecting the entire brain, dilating the image, filling interior gaps inside the image, removing objects connected to the border, and smoothing the object (brain). The results showed that an alternative way to display the segmented object is to place an outline around the segmented brain. These filtering approaches can help remove unwanted background information and increase the diagnostic information of brain MRI.
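
The paper's pipeline is MATLAB-based; the sketch below mirrors the described steps (edge detection, dilation, hole filling, removal of border-connected objects, smoothing, outline overlay) using scikit-image equivalents, with a placeholder file name and illustrative structuring-element sizes.

```python
# Morphology-based brain segmentation sketch with scikit-image.
import numpy as np
from skimage import io, feature, morphology, segmentation
from scipy import ndimage as ndi

mri = io.imread("brain_slice.tif", as_gray=True)   # hypothetical scanned film

edges   = feature.canny(mri, sigma=2.0)                          # detect the brain outline
dilated = morphology.binary_dilation(edges, morphology.disk(3))  # connect broken edges
filled  = ndi.binary_fill_holes(dilated)                         # fill interior gaps
cleared = segmentation.clear_border(filled)                      # drop border-connected objects
brain   = morphology.binary_erosion(cleared, morphology.disk(3)) # smooth the object

# Overlay an outline of the segmented brain on the original image
outline = segmentation.find_boundaries(brain)
overlay = np.dstack([mri, mri, mri])
overlay[outline] = [1.0, 0.0, 0.0]
io.imsave("brain_outline.png", (overlay * 255).astype(np.uint8))
```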

Keywords: improvement, brain, matlab, markers, boundaries

Procedia PDF Downloads 499
4720 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: Moschos Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the random forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
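
A minimal sketch of pixel-based random forest classification is given below; the band stack and reference labels are synthetic placeholders for the Landsat 8 / Sentinel-2 data and the ancillary geospatial layers used in the study.

```python
# Pixel-based random forest classification of a multispectral image (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

n_pixels, n_bands, n_classes = 20_000, 7, 10
X = np.random.rand(n_pixels, n_bands)                 # reflectance per band, per pixel
y = np.random.randint(0, n_classes, size=n_pixels)    # labels from reference data

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# For a full scene, reshape the (rows, cols, bands) cube to (rows*cols, bands),
# call rf.predict, and reshape back to (rows, cols) to obtain the LULC map.
```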

Keywords: classification, land use/land cover, mapping, random forest

Procedia PDF Downloads 110
4719 Flap Structure Geometry in Breakthrough Structure: A Case Study from the Southern Tunisian Atlas Example, Orbata Anticline

Authors: Soulef Amamria, Mohamed Sadok Bensalem, Mohamed Ghanmi

Abstract:

The structural and sedimentological study of fault-related folds in the southern Tunisian Atlas reveals a special geometry of the gravitational structures. This distinct geometry is observable in the example of a flap structure in Jebel Ben Zannouch, with the formation of a stuck syncline. The geometry can be explained by the mechanism of major thrusting in the Orbata anticline, at the western extremity of the Gafsa chains, with asymmetrical flank dips and hinge-migration kinematics. These kinematics were originally controlled by the breakthrough structure; the study of this special geometry of the gravity flap structure depends on the sedimentation domain, shortening ratios, and erosion speed. This study constitutes one of the complete examples of kinematic model validation at the field scale.

Keywords: fault-related-folds, southern Tunisian Atlas, flap structure, breakthrough

Procedia PDF Downloads 76
4718 Feasibility of Weakly Interacting Massive Particles as Dark Matter Candidates: Exploratory Study on The Possible Reasons for Lack of WIMP Detection

Authors: Sloka Bhushan

Abstract:

Dark matter constitutes a majority of the matter in the universe, yet very little is known about it due to its extreme lack of interaction with regular matter through the fundamental forces. Weakly Interacting Massive Particles, or WIMPs, have been considered one of the strongest candidates for dark matter due to their promising theoretical properties; however, various endeavors to detect these elusive particles have failed. This paper explores the particles which may be WIMPs and the detection techniques being employed to detect them (such as underground detectors, LHC experiments, and so on). There is a special focus on the reasons for the lack of detection of WIMPs so far, and on the possibility that limits in detection are a reason for the lack of physical evidence of their existence. This paper also explores possible inconsistencies within the WIMP particle theory as a reason for the lack of physical detection, with a brief review of possible solutions and alternatives to these inconsistencies. Additionally, the paper reviews the supersymmetry theory and the possibility of the supersymmetric neutralino (a possible WIMP particle) being detectable. Lastly, a review of alternative candidates for dark matter, such as axions and MACHOs, is conducted. The exploratory study in this paper is conducted through a series of literature reviews.

Keywords: dark matter, particle detection, supersymmetry, weakly interacting massive particles

Procedia PDF Downloads 118
4717 Selective Circular Dichroism Sensor Based on the Generation of Quantum Dots for Cadmium Ion Detection

Authors: Pradthana Sianglam, Wittaya Ngeontae

Abstract:

A new approach for the fabrication of a cadmium ion (Cd2+) sensor is demonstrated. The detection principle is based on the in-situ generation of cadmium sulfide quantum dots (CdS QDs) in the presence of a chiral thiol-containing compound and detection by circular dichroism (CD) spectroscopy. Basically, CdS QDs can be generated in the presence of Cd2+, sulfide ions and suitable capping compounds. In addition, a strong CD signal can be recorded if the generated QDs possess chirality (imparted by the chiral capping molecule). Thus, the degree of CD signal change depends on the number of generated CdS QDs, which can be related to the concentration of Cd2+ when the other components are in excess. In this work, we use a mixture of cysteamine (Cys) and L-penicillamine (LPA) as the capping molecules. A strong CD signal is observed when the solution contains sodium sulfide, Cys, LPA, and Cd2+, and the CD signal is linearly related to the concentration of Cd2+. This approach shows excellent selectivity towards the detection of Cd2+ compared with other cations. The proposed CD sensor provides a detection limit of around 70 µM and can be used with real water samples with satisfactory results.

Keywords: circular dichroism sensor, quantum dots, enantiomer, in-situ generation, chemical sensor, heavy metal ion

Procedia PDF Downloads 352
4716 Terrain Classification for Ground Robots Based on Acoustic Features

Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow

Abstract:

The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and Gaussian mixture models and feed-forward neural networks for classification. We analyze the system's performance by comparing our proposed techniques with other features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
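
One of the investigated routes (MFCC features with Gaussian mixture models) can be sketched as below: one GMM is fitted per terrain class and a new audio chunk is assigned to the class with the highest likelihood. The file paths and GMM sizes are placeholders.

```python
# MFCC + per-class GMM terrain classification sketch.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

# One GMM per terrain class, trained on recordings of that terrain
terrain_models = {}
for terrain in ["gravel", "grass", "asphalt"]:
    frames = mfcc_frames(f"train_{terrain}.wav")                # hypothetical files
    terrain_models[terrain] = GaussianMixture(n_components=8).fit(frames)

def classify_chunk(path):
    frames = mfcc_frames(path)
    # score() returns the average log-likelihood per frame
    scores = {t: gmm.score(frames) for t, gmm in terrain_models.items()}
    return max(scores, key=scores.get)

print(classify_chunk("unknown_chunk.wav"))
```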

Keywords: acoustic features, autonomous robots, feature extraction, terrain classification

Procedia PDF Downloads 347
4715 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of tomato digital images collected directly in the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for identifying healthy regions of the tomato leaf, while the other identifies injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images have only green, red or black pixels, according to whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in detecting and estimating the level of damage on tomato leaves caused by late blight.
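
The two-network pixel classification and repainting step can be sketched as below, with one MLP voting for healthy-leaf pixels and another for injured-leaf pixels; the per-pixel features, training labels and network sizes are placeholders rather than the study's actual data.

```python
# Two-network pixel classification and repainting sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Per-pixel features: RGB plus HSL channels (6 values); labels from annotated images
X_train = np.random.rand(5000, 6)
y_healthy = np.random.randint(0, 2, 5000)   # 1 = healthy leaf pixel
y_injured = np.random.randint(0, 2, 5000)   # 1 = injured leaf pixel

healthy_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_healthy)
injured_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_injured)

def repaint(pixels):
    """pixels: (n, 6) feature array -> (n, 3) repainted RGB values."""
    p_healthy = healthy_net.predict(pixels).astype(bool)
    p_injured = injured_net.predict(pixels).astype(bool)
    out = np.zeros((len(pixels), 3))          # default: black background
    out[p_healthy & ~p_injured] = [0, 1, 0]   # green for healthy leaf
    out[p_injured] = [1, 0, 0]                # red for injured leaf
    return out

print(repaint(np.random.rand(10, 6)))
```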

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 312