Search results for: efficiency classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3504


3114 Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM

Authors: Hadeer R. M. Tawfik, Rania A. K. Birry, Amani A. Saad

Abstract:

Eyes are considered to be the most sensitive and important organs of the human body; thus, any eye disorder affects the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that assigned the images to three classes: normal, early, and advanced stage. A comparison between the two classifiers, the support vector machine (SVM) and the artificial neural network (ANN), was made on the same dataset of 120 eye images. It was concluded that SVM gave better results than ANN: the SVM success rate was 96.8% accuracy, whereas the ANN success rate was 92.3% accuracy.
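
As a rough illustration of this kind of pipeline (not the authors' implementation), the following Python sketch extracts 2D discrete wavelet features and compares an SVM against an ANN; the Log Gabor stage is omitted, and the image array and labels are placeholders.

```python
# Illustrative sketch: DWT-based features with SVM vs. ANN classifiers.
# The images array and labels below are hypothetical placeholders.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def dwt_features(img):
    """Summarize a grayscale eye image by statistics of its 2D DWT sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.array(feats)

# images: (n_samples, H, W) grayscale array; labels: 0=normal, 1=early, 2=advanced
images = np.random.rand(120, 128, 128)          # placeholder data
labels = np.random.randint(0, 3, size=120)      # placeholder labels

X = np.array([dwt_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

svm = SVC(kernel='rbf').fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("ANN accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```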

Keywords: Cataract, classification, detection, feature extraction, grading, log-gabor, neural networks, support vector machines, wavelet.

3113 Environmental Efficiency of Electric Power Industry of the United States: A Data Envelopment Analysis Approach

Authors: Alexander Y. Vaninsky

Abstract:

The importance of the environmental efficiency of the electric power industry stems from high demand for energy combined with global warming concerns. It is especially essential for the world's largest economies, such as that of the United States. The paper introduces a Data Envelopment Analysis (DEA) model of environmental efficiency using indicators of fossil fuel utilization, emissions rate, and electric power losses. Using DEA is advantageous in this situation over other approaches due to its nonparametric nature. The paper analyzes data for the period 1990-2006 by comparing actual yearly levels in each dimension with the best values of the partial indicators for the period. Positive factors of efficiency include the decline in emissions rates starting in 2000 and in electric power losses starting in 2004, together with the increasing trend of fuel utilization starting in 1999. As a result, the dynamics of environmental efficiency have been positive since 2002. The main concern is the decline in fossil fuel utilization in 2006. This negative change should be reversed to comply with ecological and economic requirements.

Keywords: Environmental efficiency, electric power industry, DEA, United States.

3112 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become a major security threat. Hence, it is important to impede such intrusions. The hindrance of such intrusions entirely relies on their detection, which is the primary concern of any security tool such as an intrusion detection system (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques have the limitation of using the raw dataset for classification. The classifier may get confused due to redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select features based on their sensitivity. Eigenvalues can be used to determine the sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification purposes, the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) Cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rates.
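
A minimal sketch of one branch of the described pipeline, assuming scikit-learn and placeholder data in place of the KDD'99 records: raw features are projected into principal-component space with PCA and classified by an SVM.

```python
# Illustrative sketch: PCA-based feature transformation followed by SVM
# classification on a KDD'99-style feature matrix (placeholder data here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 41)               # 41 raw features, as in KDD'99
y = np.random.randint(0, 2, size=1000)     # 0 = normal, 1 = attack (placeholder)

# Project raw features into principal-component space, keeping the most
# sensitive directions (largest eigenvalues), then classify with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel='rbf'))
scores = cross_val_score(model, X, y, cv=5)
print("Mean detection accuracy:", scores.mean())
```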

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).

3111 Wave Atom Transform Based Two Class Motor Imagery Classification

Authors: Nebi Gedik

Abstract:

Electroencephalography (EEG) investigations of brain-computer interfaces are based on the electrical signals resulting from neural activity in the brain. In this paper, a method for classifying motor imagery EEG signals is proposed. The suggested method classifies EEG signals into two classes using the wave atom transform; the transform coefficients are assessed to create the feature set. Classification is done with SVM and k-NN algorithms, with and without feature selection. For feature selection, t-test approaches are utilized. The approach is tested on the BCI Competition III dataset IIIa.
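
A minimal sketch of the feature selection and classification stages, assuming placeholder coefficient features in place of actual wave atom transform outputs (which are not available in common Python libraries):

```python
# Illustrative sketch: t-test-based feature selection on transform coefficients,
# followed by SVM and k-NN classification (two-class motor imagery setting).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 500)            # placeholder transform-coefficient features
y = np.random.randint(0, 2, size=200)   # two motor imagery classes (placeholder)

# Keep the coefficients whose class means differ most significantly.
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
selected = np.argsort(pvals)[:30]
Xs = X[:, selected]

for name, clf in [("SVM", SVC()), ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, Xs, y, cv=5).mean())
```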

Keywords: motor imagery, EEG, wave atom transform, SVM, k-NN, t-test

3110 Person Identification by Using AR Model for EEG Signals

Authors: Gelareh Mohammadi, Parisa Shoushtari, Behnam Molaee Ardekani, Mohammad B. Shamsollahi

Abstract:

A direct connection between the ElectroEncephaloGram (EEG) and the genetic information of individuals has been investigated by neurophysiologists and psychiatrists since the 1960s, and it opens a new research area in science. This paper focuses on person identification based on features extracted from the EEG, which can show a direct connection between the EEG and the genetic information of subjects. In this work, the full EO EEG signals of healthy individuals are estimated by an autoregressive (AR) model, and the AR parameters are extracted as features. For feature vector constitution, two methods have been proposed: in the first method, the extracted parameters of each channel are used as a feature vector in the classification step, which employs a competitive neural network; in the second method, a combination of different channel parameters is used as a feature vector. Correct classification scores in the range of 80% to 100% reveal the potential of our approach for person classification/identification and are in agreement with previous research showing evidence that the EEG signal carries genetic information. The novelty of this work lies in the combination of AR parameters and the network type (competitive network) that we have used. A comparison between the first and the second approach implies a preference for the second one.
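
An illustrative sketch of the feature extraction, assuming placeholder EEG data: AR coefficients are estimated per channel via the Yule-Walker equations and concatenated (the paper's second method), with a simple nearest-centroid rule standing in for the competitive neural network.

```python
# Illustrative sketch: AR(p) coefficients per EEG channel as identity features,
# classified with a nearest-centroid rule (a stand-in for the paper's
# competitive neural network). Data shapes and labels are hypothetical.
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score

def ar_coeffs(x, order=8):
    """Estimate AR coefficients via the Yule-Walker equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order] / len(x)
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

# eeg: (n_subjects * n_trials, n_channels, n_samples), placeholder data
eeg = np.random.randn(90, 4, 512)
subject_id = np.repeat(np.arange(9), 10)

# Feature vector = concatenated AR parameters of all channels (second method).
X = np.array([np.concatenate([ar_coeffs(ch) for ch in trial]) for trial in eeg])
print(cross_val_score(NearestCentroid(), X, subject_id, cv=5).mean())
```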

Keywords: Person Identification, Autoregressive Model, EEG, Neural Network

3109 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition, and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active region of interest (AROI). Furthermore, it is needed to compute spatial metric relationships of diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point, based on constructing a virtual grid of hexagon cells and then locating every point beneath them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest-neighbor query points Φ are fetched by searching the hexagons holistically. Searching is repeated until an AROI Φ is expected. If any point Υ is located, then searching starts in the nearest hexagons in a circular way. The first hexagon is considered level 0 (L0) and the surrounding hexagons level 1 (L1). If Υ is located in L1, then the search continues in the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of minimizing the time complexity required for searching the neighbors; in turn, the efficiency of classification is improved sufficiently.
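
A rough sketch of the idea, under an assumed cell size, point set, and pointy-top axial hexagon layout (none of which are specified in the abstract): points are bucketed into hexagon cells, and the search expands ring by ring (L0, L1, L2, ...) with one confirming ring once a candidate is found.

```python
# Illustrative sketch: bucketing points into a virtual hexagon grid and searching
# outward level by level (L0, L1, L2, ...) for the nearest neighbor of a query.
import math
from collections import defaultdict

SIZE = 1.0  # hexagon cell size (assumed)

def hex_cell(x, y):
    """Map a point to axial hexagon-grid coordinates (pointy-top layout)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / SIZE
    r = (2 * y / 3) / SIZE
    cx, cz = q, r                       # cube rounding
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

def build_grid(points):
    grid = defaultdict(list)
    for p in points:
        grid[hex_cell(*p)].append(p)
    return grid

def nearest(grid, query, max_level=20):
    """Search ring by ring (L0, L1, ...) for the nearest stored point to `query`."""
    q0, r0 = hex_cell(*query)
    best, best_d, found_level = None, float('inf'), None
    for level in range(max_level + 1):
        for dq in range(-level, level + 1):
            for dr in range(-level, level + 1):
                if (abs(dq) + abs(dr) + abs(dq + dr)) // 2 != level:
                    continue                        # keep only cells on this ring
                for p in grid.get((q0 + dq, r0 + dr), []):
                    d = math.dist(p, query)
                    if d < best_d:
                        best, best_d = p, d
        if found_level is None and best is not None:
            found_level = level                     # candidate found at this level
        elif found_level is not None and level > found_level:
            break                                   # one confirming ring searched
    return best

points = [(0.2, 1.5), (3.7, 2.1), (5.0, 0.4)]
grid = build_grid(points)
print(nearest(grid, (3.0, 2.0)))
```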

Keywords: Hexagon cells, k-nearest neighbors, Nearest Neighbor, Pattern recognition, Query pattern, Virtual grid

3108 Evolving a Fuzzy Rule-Base for Image Segmentation

Authors: A. Borji, M. Hamidi

Abstract:

A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim is to automatically produce a fuzzy system for color classification and image segmentation with the fewest rules and the minimum error rate. Particle swarm optimization is a subclass of evolutionary algorithms that has been inspired by the social behavior of fish, bees, birds, and other creatures that live together in colonies. We use the comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions because it discourages premature convergence. Each particle of the swarm encodes a set of fuzzy rules. During evolution, a population member tries to maximize a fitness criterion, which here is a high classification rate and a small number of rules. Finally, the particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Our results using this method for soccer field image segmentation in RoboCup contests show 89% performance. Less computational load is needed when using this method compared with other methods such as ANFIS, because it generates a smaller number of fuzzy rules. The large training dataset and its variety make the proposed method invariant to illumination noise.

Keywords: Comprehensive learning particle swarm optimization, fuzzy classification.

3107 Comparative Study Using Weka for Red Blood Cells Classification

Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best classification rates achieved are 97%, 98%, and 79% for the Support Vector Machine, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
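
An illustrative sketch of the comparative workflow using scikit-learn as a stand-in for WEKA and placeholder feature vectors; the RBF neural network is approximated here by an MLP, since scikit-learn has no direct RBF-network classifier.

```python
# Illustrative sketch: comparing three classifiers on placeholder RBC features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(300, 12)             # geometrical + textural features (placeholder)
y = np.random.randint(0, 2, size=300)   # 0 = normal, 1 = abnormal (anemic)

classifiers = {
    "SVM": SVC(kernel='rbf'),
    "RBF-style network (MLP stand-in)": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```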

Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.

3106 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III, or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
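
A minimal sketch of the GLCM-plus-SFS portion of the pipeline, assuming scikit-image and scikit-learn with placeholder ROI images; the FOS, GLRLM, and Laws' filter features are omitted.

```python
# Illustrative sketch: GLCM texture features, sequential forward selection (SFS),
# and SVM classification on placeholder PET ROI images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in older scikit-image
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(img8):
    glcm = graycomatrix(img8, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation', 'dissimilarity')
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

images = (np.random.rand(42, 64, 64) * 255).astype(np.uint8)   # placeholder PET ROIs
stage = np.random.randint(0, 3, size=42)                       # I-II, III, IV (placeholder)

X = np.array([glcm_features(im) for im in images])
svm = SVC(kernel='linear')
sfs = SequentialFeatureSelector(svm, n_features_to_select=5, direction='forward')
Xs = sfs.fit_transform(X, stage)
print("Stage accuracy:", cross_val_score(svm, Xs, stage, cv=5).mean())
```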

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

3105 Moment Invariants in Image Analysis

Authors: Jan Flusser

Abstract:

This paper presents a survey of object recognition/classification methods based on image moments. We review various types of moments (geometric moments, complex moments) and moment-based invariants with respect to various image degradations and distortions (rotation, scaling, affine transform, image blurring, etc.) which can be used as shape descriptors for classification. We explain a general theory of how to construct these invariants and also show a few of them in explicit form. We review efficient numerical algorithms that can be used for moment computation and demonstrate practical examples of using moment invariants in real applications.
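
As a small worked example (not from the paper), the following NumPy sketch computes geometric central moments of a binary shape and the first scale-normalized rotation invariant:

```python
# Illustrative sketch: geometric central moments of a binary shape image and a
# simple normalized (scale-invariant) moment, computed with NumPy.
import numpy as np

def central_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def normalized_moment(img, p, q):
    """Scale-normalized central moment eta_pq = mu_pq / mu_00^((p+q)/2 + 1)."""
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** ((p + q) / 2 + 1)

img = np.zeros((64, 64))
img[20:40, 10:50] = 1.0                      # a simple rectangular shape
# First Hu-type rotation invariant: eta20 + eta02
phi1 = normalized_moment(img, 2, 0) + normalized_moment(img, 0, 2)
print("phi1 =", phi1)
```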

Keywords: Object recognition, degraded images, moments, moment invariants, geometric invariants, invariants to convolution, moment computation.

3104 Improving Automotive Efficiency through Lean Management Tools: A Case Study

Authors: Raed EL-Khalil, Hussein Zeaiter

Abstract:

Managing and improving efficiency in the current highly competitive global automotive industry demands that companies adopt leaner and more flexible systems. During the past 20 years, the domestic automotive industry in North America has been focusing on establishing new management strategies in order to meet market demands. The lean management process, also known as the Toyota Production System (TPS) or lean manufacturing, encompasses tools and techniques established to provide the best quality product with the fastest lead time at the lowest cost. The following paper presents a study focused on improving labor efficiency at one of the Big Three (Ford, GM, Chrysler LLC) domestic automotive facilities in North America. The objective of the study was to utilize several lean management tools in order to optimize the efficiency and utilization levels at the "Pre-Marriage" chassis area in a truck manufacturing and assembly facility. Utilizing three different lean tools (i.e., standardization of work, the 7 wastes, and 5S), this research was able to improve efficiency by 51%, utilization by 246%, and reduce operations by 14%. The return on investment calculated based on the improvements made was 284%.

Keywords: Lean Manufacturing, Standardized Work, Operation Efficiency and Utilization, Operations Management.

3103 Systematic Approach for Energy-Supply-Orientated Production Planning

Authors: F. Keller, G. Reinhart

Abstract:

The efficient and economic allocation of resources is one of the main goals in the field of production planning and control. Nowadays, a new variable gains importance throughout the planning process: energy. Energy efficiency has already been widely discussed in the literature, but with a strong focus on reducing the overall amount of energy used in production. This paper provides a brief systematic approach showing how energy-supply orientation can be used for energy-cost-efficient production planning, thus combining the ideas of energy efficiency and energy flexibility.

Keywords: Production planning and control, energy, efficiency, flexibility.

3102 Production Planning and Measuring Method for Non Patterned Production System Using Stock Cutting Model

Authors: S. Homrossukon, D. Aromstain

Abstract:

Simple methods to plan and measure a non-patterned production system are developed from the basic definition of working efficiency. Processing time is assigned as the variable and used to write the equation of production efficiency. This equation is then used extensively to develop the planning method for the production of interest using a one-dimensional stock cutting model. The application of the developed method shows that production efficiency and production planning can be determined effectively.

Keywords: Production Planning, Parallel Machine, Production Measurement, Cutting and Packing.

3101 Classification Algorithms in Human Activity Recognition using Smartphones

Authors: Mohd Fikri Azli bin Abdullah, Ali Fahmi Perwira Negara, Md. Shohel Sayeed, Deok-Jai Choi, Kalaiarasi Sonai Muthu

Abstract:

Rapid advancement in computing technology is bringing computers and humans to be seamlessly integrated in the future. The emergence of the smartphone has driven the computing era towards ubiquitous and pervasive computing. Recognizing human activity has garnered a lot of interest and has raised significant research concerns in identifying contextual information useful for human activity recognition. Not only unobtrusive to users in daily life, the smartphone has embedded built-in sensors capable of sensing contextual information about its users, supported by a wide range of network connections. In this paper, we discuss the classification algorithms used in smartphone-based human activity recognition. Existing technologies pertaining to smartphone-based research in human activity recognition are highlighted and discussed. Our paper also presents our findings and opinions to formulate improvement ideas for current research trends. Understanding research trends will enable researchers to have a clearer research direction and a common vision of the latest smartphone-based human activity recognition area.

Keywords: Classification algorithms, Human Activity Recognition (HAR), Smartphones

3100 Texture Feature Extraction using Slant-Hadamard Transform

Authors: M. J. Nassiri, A. Vafaei, A. Monadjemi

Abstract:

The classification of random and natural textures is still one of the biggest challenges in the field of image processing and pattern recognition. In this paper, texture feature extraction using the Slant-Hadamard Transform (SHT) was studied and compared to other signal-processing-based texture classification schemes. A parametric SHT was also introduced and employed for natural texture feature extraction. We show that a subtly modified parametric SHT can outperform the ordinary Walsh-Hadamard transform and the discrete cosine transform. Experiments were carried out on a subset of VisTex random natural texture images using a kNN classifier.
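
A minimal sketch of the comparison baseline, assuming placeholder texture patches: the ordinary Walsh-Hadamard transform of image blocks yields band-energy features for a kNN classifier (the Slant-Hadamard variant itself is not shown).

```python
# Illustrative sketch: 2D Walsh-Hadamard transform of image blocks as texture
# features with a k-NN classifier, on placeholder texture patches.
import numpy as np
from scipy.linalg import hadamard
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

N = 32
H = hadamard(N) / np.sqrt(N)

def wht_features(block):
    coeffs = H @ block @ H.T                 # separable 2D transform
    # energy in a few coarse frequency bands as the feature vector
    return np.array([np.sum(coeffs[:8, :8] ** 2),
                     np.sum(coeffs[8:, :8] ** 2),
                     np.sum(coeffs[:8, 8:] ** 2),
                     np.sum(coeffs[8:, 8:] ** 2)])

blocks = np.random.rand(200, N, N)           # placeholder texture patches
labels = np.random.randint(0, 4, size=200)   # texture classes (placeholder)

X = np.array([wht_features(b) for b in blocks])
print(cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=5).mean())
```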

Keywords: Texture Analysis, Slant Transform, Hadamard, DCT.

3099 Extraction of Symbolic Rules from Artificial Neural Networks

Authors: S. M. Kamruzzaman, Md. Monirul Islam

Abstract:

Although backpropagation ANNs generally predict better than decision trees for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained the way those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm. A four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. The extracted rules are comparable with those of other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as the breast cancer, iris, diabetes, and season classification problems, demonstrate the effectiveness of the proposed approach with good generalization ability.

Keywords: Backpropagation, clustering algorithm, constructive algorithm, continuous activation function, pruning algorithm, rule extraction algorithm, symbolic rules.

3098 Superior Performances of the Neural Network on the Masses Lesions Classification through Morphological Lesion Differences

Authors: U. Bottigli, R. Chiarucci, B. Golosio, G. L. Masala, P. Oliva, S. Stumbo, D. Cascio, F. Fauci, M. Glorioso, M. Iacomi, R. Magro, G. Raso

Abstract:

The purpose of this work is to develop an automatic classification system that could be useful to radiologists in breast cancer investigation. The software has been designed in the framework of the MAGIC-5 collaboration. In an automatic classification system, the suspicious regions with a high probability of containing a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by features based generally on morphological lesion differences. A study of the feature space representation is made, and several classifiers are tested to distinguish the pathological regions from the healthy ones. The results, provided in terms of sensitivity and specificity, are presented through ROC (Receiver Operating Characteristic) curves. In particular, the best performances are obtained with neural networks in comparison with K-Nearest Neighbours and the Support Vector Machine: the Radial Basis Function network supplies the best results, with an area under the ROC curve of 0.89 ± 0.01, but similar results are obtained with the Probabilistic Neural Network and a Multi-Layer Perceptron.
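
An illustrative sketch of the ROC-based evaluation, assuming scikit-learn classifiers and placeholder ROI features (the paper's PNN and RBF networks are approximated by generic models):

```python
# Illustrative sketch: comparing classifiers on placeholder ROI features via the
# area under the ROC curve.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(400, 10)                  # morphological ROI features (placeholder)
y = np.random.randint(0, 2, size=400)        # 1 = pathological, 0 = healthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=7),
    "SVM": SVC(probability=True),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    score = m.predict_proba(X_te)[:, 1]
    print(name, "AUC =", roc_auc_score(y_te, score))
```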

Keywords: Neural Networks, K-Nearest Neighbours, Support Vector Machine, Computer Aided Detection

3097 Wavelet-Based Classification of Myocardial Ischemia, Arrhythmia, Congestive Heart Failure and Sleep Apnea

Authors: Santanu Chattopadhyay, Gautam Sarkar, Arabinda Das

Abstract:

This paper presents wavelet-based classification of various heart diseases. Electrocardiogram signals of different heart patients have been studied. The statistical nature of electrocardiogram signals for different heart diseases has been compared with the statistical nature of electrocardiograms of normal persons. In this study, four heart diseases have been considered: Myocardial Ischemia (MI), Congestive Heart Failure (CHF), Arrhythmia, and Sleep Apnea. The statistical nature of the electrocardiograms in each case has been characterized in terms of the kurtosis values of two types of wavelet coefficients: approximate and detail. Nine wavelet decomposition levels have been considered in each case. Kurtosis corresponding to both approximate and detail coefficients has been computed for decomposition levels one to nine. Based on significant differences, a few decomposition levels have been chosen and then used for classification.
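
A minimal sketch of the feature computation, assuming a placeholder ECG record and the db4 wavelet: kurtosis of the approximation and detail coefficients is collected over nine decomposition levels.

```python
# Illustrative sketch: kurtosis of approximation and detail wavelet coefficients
# over nine decomposition levels as ECG features (placeholder signal and wavelet).
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(ecg, wavelet='db4', levels=9):
    feats = []
    signal = ecg
    for _ in range(levels):
        cA, cD = pywt.dwt(signal, wavelet)
        feats.append((kurtosis(cA), kurtosis(cD)))   # approximate & detail kurtosis
        signal = cA                                  # descend to the next level
    return np.array(feats)                           # shape: (levels, 2)

ecg = np.random.randn(4096)                          # placeholder ECG record
features = wavelet_kurtosis_features(ecg)
print(features)
```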

Keywords: Arrhythmia, congestive heart failure, discrete wavelet transform, electrocardiogram, myocardial ischemia, sleep apnea.

3096 DEA Method for Evaluation of EU Performance

Authors: M. Staníčková

Abstract:

The paper deals with an application of quantitative analysis, the Data Envelopment Analysis (DEA) method, to the performance evaluation of the European Union Member States in the reference years 2000 and 2011. The main aim of the paper is to measure efficiency changes over the reference years, to analyze the level of productivity in individual countries based on the DEA method, and to classify the EU Member States into homogeneous units (clusters) according to the efficiency results. The theoretical part is devoted to the fundamental basis of performance theory and the methodology of DEA. The empirical part is aimed at measuring the degree of productivity and the level of efficiency changes of the evaluated countries by the basic DEA model, the CCR CRS model, and a specialized DEA approach, the Malmquist index, which measures the change of technical efficiency and the movement of the production possibility frontier. Here, the DEA method becomes a suitable tool for establishing the competitive or uncompetitive position of each country because not only one factor is evaluated, but a set of different factors that determine the degree of economic development.
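
For reference, an input-oriented CCR (constant returns to scale) efficiency score can be computed per country by linear programming; the sketch below uses SciPy and placeholder input/output data, not the paper's dataset, and does not include the Malmquist index.

```python
# Illustrative sketch: input-oriented CCR (constant returns to scale) efficiency
# scores via linear programming, on placeholder country input/output data.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                        # minimize theta
        A_in = np.hstack([-X[[o]].T, X.T])                 # sum_j lam_j x_ij <= theta * x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])        # sum_j lam_j y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

X = np.random.rand(27, 3) + 0.5      # e.g. labour, capital, energy inputs (placeholder)
Y = np.random.rand(27, 2) + 0.5      # e.g. output indicators (placeholder)
print(np.round(ccr_efficiency(X, Y), 3))
```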

Keywords: CCR CRS model, cluster analysis, DEA method, efficiency, EU, Malmquist index, performance.

3095 The Relative Efficiency of Parameter Estimation in Linear Weighted Regression

Authors: Baoguang Tian, Nan Chen

Abstract:

A new relative efficiency for the linear model in the reference is introduced into linear weighted regression, and its upper and lower bounds are proposed. In the linear weighted regression model, for the best linear unbiased estimator of the mean matrix with respect to the least-squares estimator, two new relative efficiencies are given, and their upper and lower bounds are also studied.

Keywords: Linear weighted regression, Relative efficiency, Mean matrix, Trace.

3094 Joint Use of Factor Analysis (FA) and Data Envelopment Analysis (DEA) for Ranking of Data Envelopment Analysis

Authors: Reza Nadimi, Fariborz Jolai

Abstract:

This article combines two techniques, data envelopment analysis (DEA) and factor analysis (FA), for data reduction in decision making units (DMUs). Data envelopment analysis (DEA), a popular linear programming technique, is useful for comparatively rating the operational efficiency of decision making units (DMUs) based on their deterministic (not necessarily stochastic) input-output data. Factor analysis techniques have been proposed as data reduction and classification techniques, which can be applied within the DEA technique to reduce input-output data. Numerical results reveal that the new approach shows good consistency in ranking with DEA.

Keywords: Effectiveness, Decision Making, Data Envelopment Analysis, Factor Analysis

3093 Multi-labeled Data Expressed by a Set of Labels

Authors: Tetsuya Furukawa, Masahiro Kuzunishi

Abstract:

Collected data must be organized to be utilized efficiently, and hierarchical classification of data is an efficient approach to organizing it. When data is classified into multiple categories or annotated with a set of labels, users request multi-labeled data by giving a set of labels. There are several interpretations of the data expressed by a set of labels. This paper discusses which data is expressed by a set of labels by introducing orders on sets of labels, and shows that there are four types of orders, characterized by whether the labels of the expressed data include every label of the given set of labels within the range of the set. Desirable properties of the orders, namely that data is also expressed by a higher set of labels and that different sets of labels express different data, are discussed.

Keywords: Classification Hierarchies, Multi-labeled Data, Multiple Classification, Orders of Sets of Labels

3092 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. The neural network requires input features that represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
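
A rough sketch of the pipeline with scikit-learn's FastICA standing in for the JADE algorithm (named as a substitution, since JADE is not available in scikit-learn), using placeholder data and component labels:

```python
# Illustrative sketch: separate EEG into independent components (FastICA stand-in
# for JADE), describe each component by AR coefficients, classify components as
# ocular artifact vs. neural activity with a feed-forward network, and reconstruct.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from scipy.linalg import solve_toeplitz

def ar_coeffs(x, order=6):
    x = x - x.mean()
    r = np.correlate(x, x, 'full')[len(x) - 1:len(x) + order] / len(x)
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

eeg = np.random.randn(19, 2000)                       # channels x samples (placeholder)
ica = FastICA(n_components=19, random_state=0)
sources = ica.fit_transform(eeg.T).T                  # independent components

features = np.array([ar_coeffs(s) for s in sources])
labels = np.random.randint(0, 2, size=19)             # 1 = artifact (placeholder labels)
fnn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(features, labels)

# Zero out components classified as artifacts, then reconstruct the clean EEG.
keep = fnn.predict(features) == 0
clean = ica.inverse_transform((sources * keep[:, None]).T).T
```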

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

3091 Application of Computational Intelligence for Sensor Fault Detection and Isolation

Authors: A. Jabbari, R. Jedermann, W. Lang

Abstract:

The new idea of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in transportation systems. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent improvements in artificial neural network (ANN) studies have led to their use for some FDI purposes. In this paper, the application of new probabilistic neural network features for data approximation and data classification is considered for plausibility checking in temperature measurement. For this purpose, a two-phase FDI mechanism was considered for residual generation and evaluation.

Keywords: Fault detection and Isolation, Neural network, Temperature measurement, measurement approximation and classification.

3090 Human Action Recognition System Based on Silhouette

Authors: S. Maheswari, P. Arockia Jansi Rani

Abstract:

Human actions are recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping, and walking. Human action recognition requires some prior knowledge about actions, namely motion estimation and foreground and background estimation. A region of interest (ROI) is extracted to identify the human in the frame. Then, the optical flow technique is used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. From experiments on the Weizmann database, it is found that the proposed method offers high accuracy.
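
An illustrative sketch of the silhouette and motion-feature stages with OpenCV, assuming a hypothetical video file; the similarity-measure classification against action templates is omitted.

```python
# Illustrative sketch: silhouette extraction by background subtraction and motion
# features from dense optical flow. The video path is a hypothetical placeholder.
import cv2
import numpy as np

cap = cv2.VideoCapture("action_clip.avi")      # placeholder video file
bg = cv2.createBackgroundSubtractorMOG2()
prev_gray, features = None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = bg.apply(frame)                     # human silhouette / ROI mask
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        inside = mask > 0
        # simple per-frame motion descriptor restricted to the silhouette
        features.append([mag[inside].mean() if inside.any() else 0.0,
                         ang[inside].mean() if inside.any() else 0.0])
    prev_gray = gray

features = np.array(features)                  # later matched against action templates
```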

Keywords: Background subtraction, human silhouette, optical flow, classification.

3089 Effect of Environmental Conditions on Energy Efficiency of AAC-based Building Envelopes

Authors: V. Koci, J. Madera, R. Cerny

Abstract:

Calculations of the energy efficiency of several AAC-based building envelopes under different climatic conditions are presented. Expanded polystyrene and hydrophobic and hydrophilic mineral wools are assumed as the thermal insulating materials. The computations are accomplished using the computer code HEMOT, developed at the Department of Materials Engineering, Faculty of Civil Engineering, Czech Technical University in Prague. The climatic data for Athens, Kazan, Oslo, Prague, and Reykjavík are obtained using METEONORM software.

Keywords: climatic conditions, computational simulation, energy efficiency, thermal insulation

3088 Classifying Biomedical Text Abstracts based on Hierarchical 'Concept' Structure

Authors: Rozilawati Binti Dollah, Masaki Aono

Abstract:

Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles should be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. Then, we enrich the OHSUMED disease hierarchy before adapting it to the ontology alignment process for finding probable concepts or categories. Subsequently, we compute the cosine similarity between the vector of probable concepts (in the "enriched" OHSUMED disease hierarchy) and the vector in the Medline abstract disease hierarchies. Finally, we assign a category to each new Medline abstract based on the similarity score. The results obtained from the experiments show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
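
A minimal sketch of the similarity-based category assignment, assuming TF-IDF vectors and hypothetical concept descriptions in place of the enriched OHSUMED hierarchy:

```python
# Illustrative sketch: assigning a category to a new abstract by cosine similarity
# between its TF-IDF vector and the vectors of concept (category) descriptions.
# The concept texts and the abstract are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {
    "cardiovascular diseases": "heart failure myocardial infarction hypertension ...",
    "respiratory tract diseases": "asthma bronchitis pneumonia lung ...",
    "neoplasms": "tumor carcinoma cancer metastasis ...",
}
new_abstract = "We report a case of acute myocardial infarction treated with ..."

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(concepts.values()) + [new_abstract])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = list(concepts)[sims.argmax()]
print("Assigned category:", best, "score:", sims.max())
```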

Keywords: Biomedical literature, hierarchical text classification, ontology alignment, text mining.

3087 A Hybrid Feature Subset Selection Approach based on SVM and Binary ACO. Application to Industrial Diagnosis

Authors: O. Kadri, M. D. Mouss, L.H. Mouss, F. Merah

Abstract:

This paper proposes a novel hybrid algorithm for feature selection based on a binary ant colony and SVM. The final subset selection is attained through the elimination of features that produce noise or are strictly correlated with other already selected features. Our algorithm can improve classification accuracy with a small and appropriate feature subset. The proposed algorithm is easily implemented and, because it uses a simple filter, its computational complexity is very low. The performance of the proposed algorithm is evaluated on a real rotary cement kiln dataset. The results show that our algorithm outperforms existing algorithms.

Keywords: Binary Ant Colony algorithm, Support Vector Machine, feature selection, classification.

3086 Electronic Nose Based On Metal Oxide Semiconductor Sensors as an Alternative Technique for the Spoilage Classification of Oat Milk

Authors: A. Deswal, N. S. Deora, H. N. Mishra

Abstract:

The aim of the present study was to develop a rapid electronic nose method for the online quality control of oat milk. Analysis by electronic nose and bacteriological measurements were performed to analyze the spoilage kinetics of oat milk samples stored at room temperature and under refrigerated conditions for up to 15 days. Principal component analysis (PCA), Discriminant Factorial Analysis (DFA), and Soft Independent Modelling by Class Analogy (SIMCA) classification techniques were used to differentiate the oat milk samples on different days. The total plate count (bacteriological method) was selected as the reference method to consistently train the electronic nose system. The e-nose was able to differentiate between oat milk samples of varying microbial load. The results obtained by total viable bacterial counts showed that the shelf life of oat milk stored at room temperature and under refrigerated conditions was 20 hours and 13 days, respectively. The models built classified the oat milk samples into "unspoiled" and "spoiled" based on the total microbial population.

Keywords: Electronic-nose, bacteriological, shelf-life, classification.

3085 Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Authors: Jesús Everardo Olguín Tiznado, Rafael García Martínez, Claudia Camargo Wilson, Juan Andrés López Barreras, Everardo Inzunza González, Javier Ordorica Villalvazo

Abstract:

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of this methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation experiments showed that the efficiency of the gain function and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
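
As an illustrative sketch (not the authors' experiment), the following code fits a first-order model around the current point and follows the steepest-ascent direction with a given step size on a hypothetical noisy test function:

```python
# Illustrative sketch: first-order response surface fitting and steepest ascent
# with different step sizes; the test function, noise level and step sizes are
# hypothetical placeholders.
import numpy as np

def response(x, rng):
    """Noisy test function playing the role of the unknown process."""
    return -np.sum((x - 3.0) ** 2) + rng.normal(scale=0.5)

def steepest_ascent(step, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                                   # starting operating point
    for _ in range(n_iter):
        # 2^2 factorial design around the current point to fit a first-order model
        design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)
        ys = np.array([response(x + 0.5 * d, rng) for d in design])
        coeffs = np.linalg.lstsq(np.c_[np.ones(4), design], ys, rcond=None)[0]
        grad = coeffs[1:]
        x = x + step * grad / np.linalg.norm(grad)    # move along the MMS U/D direction
    return np.linalg.norm(x - 3.0)                    # distance to the true optimum

for step in (0.1, 0.5, 1.0):
    print(f"step={step}: distance to optimum = {steepest_ascent(step):.3f}")
```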

Keywords: RSM, dependent variable, independent variables, efficiency, simulation
