Search results for: Gaussian process classification model with multiclass
11949 Statistical Wavelet Features, PCA, and SVM Based Approach for EEG Signals Classification
Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh
Abstract:
The study of the electrical signals produced by neural activities of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach. The proposed approach is used to classify the EEG signal into two classes: epileptic seizure or not. In the proposed approach, we start with extracting the features by applying Discrete Wavelet Transform (DWT) in order to decompose the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using Support Vector Machine (SVM). The experiments are performed on a real and standard dataset, and a very high level of classification accuracy is obtained.
Keywords: Discrete Wavelet Transform, Electroencephalogram, Pattern Recognition, Principal Component Analysis, Support Vector Machine.
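As a rough illustration of the kind of pipeline this abstract describes (not the authors' exact implementation), the following sketch extracts DWT sub-band statistics with PyWavelets, reduces them with PCA, and classifies with an SVM. The wavelet, decomposition level, chosen statistics, and the synthetic EEG epochs are assumptions.

```python
# Minimal sketch: DWT sub-band statistics -> PCA -> SVM (illustrative assumptions only).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Mean, standard deviation, and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    feats = []
    for c in coeffs:
        feats.extend([np.mean(c), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

# Placeholder EEG epochs (n_epochs, n_samples) and labels: 0 = non-seizure, 1 = seizure
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 1024))
y = rng.integers(0, 2, size=40)

X = np.vstack([dwt_features(s) for s in X_raw])
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```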
11948 Fusion of Colour and Depth Information to Enhance Wound Tissue Classification
Authors: Darren Thompson, Philip Morrow, Bryan Scotney, John Winder
Abstract:
Patients with diabetes are susceptible to chronic foot wounds which may be difficult to manage and slow to heal. Diagnosis and treatment currently rely on the subjective judgement of experienced professionals. An objective method of tissue assessment is required. In this paper, a data fusion approach was taken to wound tissue classification. The supervised Maximum Likelihood and unsupervised Multi-Modal Expectation Maximisation algorithms were used to classify tissues within simulated wound models by weighting the contributions of both colour and 3D depth information. It was found that, at low weightings, depth information could show significant improvements in classification accuracy when compared to classification by colour alone, particularly when using the maximum likelihood method. However, larger weightings were found to have an entirely negative effect on accuracy.
Keywords: Classification, data fusion, diabetic foot, stereophotogrammetry, tissue colour.
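A much-simplified sketch of the fusion idea: per-class Gaussian maximum-likelihood scores are computed separately for colour and depth and combined with a weight on the depth term. The weighting scheme, synthetic tissue data, and two-class setup are illustrative assumptions, not the paper's formulation.

```python
# Simplified weighted colour/depth fusion with per-class Gaussian ML scores (illustrative only).
import numpy as np

def fit_gaussians(X, y):
    """Fit a mean and (regularised) covariance per class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
    return params

def log_likelihood(x, mean, cov):
    d = x - mean
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

def classify(colour, depth, colour_params, depth_params, w=0.1):
    """Combine colour and depth log-likelihoods, with weight w on the depth contribution."""
    classes = list(colour_params)
    scores = [log_likelihood(colour, *colour_params[c]) + w * log_likelihood(depth, *depth_params[c])
              for c in classes]
    return classes[int(np.argmax(scores))]

# Synthetic wound-model pixels: RGB colour (3D) and depth (1D) for two tissue classes
rng = np.random.default_rng(1)
colour_X = np.vstack([rng.normal(m, 0.1, (50, 3)) for m in ([0.8, 0.2, 0.2], [0.9, 0.9, 0.3])])
depth_X = np.vstack([rng.normal(m, 0.05, (50, 1)) for m in ([0.2], [0.5])])
labels = np.repeat([0, 1], 50)

cp, dp = fit_gaussians(colour_X, labels), fit_gaussians(depth_X, labels)
print(classify(np.array([0.82, 0.21, 0.19]), np.array([0.21]), cp, dp, w=0.1))
```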
11947 Machine Learning Methods for Flood Hazard Mapping
Authors: S. Zappacosta, C. Bove, M. Carmela Marinelli, P. di Lauro, K. Spasenovic, L. Ostano, G. Aiello, M. Pietrosanto
Abstract:
This paper proposes a neural network approach for assessing flood hazard mapping. The core of the model is a machine learning component fed by frequency ratios, namely statistical correlations between flood event occurrences and a selected number of topographic properties. The classification capability was compared with the flood hazard mapping of the River Basin Plans (Piani Assetto Idrogeologico, abbreviated as PAI) designed by the Italian Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale), which encode four increasing flood hazard levels. The study area of Piemonte, an Italian region, has been considered without loss of generality. The frequency ratios may be used as a standalone block to model the flood hazard mapping. Nevertheless, combining them with a neural network improves the classification power by several percentage points, and the approach may be proposed as a basic tool to model the flood hazard map in a wider scope.
Keywords: flood modeling, hazard map, neural networks, hydrogeological risk, flood risk assessment
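The sketch below illustrates the general idea of feeding frequency-ratio features for map cells into a small neural network that predicts a hazard level. The topographic factors, the synthetic frequency ratios, and the PAI-style labels are hypothetical placeholders, not the authors' data.

```python
# Illustrative sketch: frequency-ratio features per map cell -> MLP hazard-level classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 500
# Hypothetical frequency ratios for elevation, slope, and distance-to-river classes
fr = rng.uniform(0.2, 3.0, size=(n_cells, 3))
# Hypothetical hazard labels (0..3) loosely tied to the frequency ratios
hazard = np.clip((fr.mean(axis=1) + rng.normal(0, 0.3, n_cells)).round().astype(int) - 1, 0, 3)

X_tr, X_te, y_tr, y_te = train_test_split(fr, hazard, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("hazard-level accuracy:", net.score(X_te, y_te))
```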
11946 Rapid Study on Feature Extraction and Classification Models in Healthcare Applications
Authors: S. Sowmyayani
Abstract:
The advancement of computer-aided design helps the medical and security forces. Some applications include biometric recognition, elderly fall detection, face recognition, cancer recognition, tumor recognition, etc. This paper deals with different machine learning algorithms that are more generically used for any healthcare system. The problems most focused on are classification and regression. With the rise of big data, machine learning has become particularly important for solving problems. Machine learning uses two types of techniques: supervised learning and unsupervised learning. The former trains a model on known input and output data and predicts future outputs. Classification and regression are supervised learning techniques. Unsupervised learning finds hidden patterns in input data; clustering is one such unsupervised learning technique. The above-mentioned models are discussed briefly in this paper.
Keywords: Supervised learning, unsupervised learning, regression, neural network.
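The supervised/unsupervised distinction summarized above can be made concrete in a few lines of scikit-learn; the dataset and models below are arbitrary examples chosen only for illustration.

```python
# Toy contrast: supervised classification and regression vs. unsupervised clustering.
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Supervised: trained on known inputs and outputs
print("classification:", KNeighborsClassifier().fit(X, y).score(X, y))
print("regression R^2:", LinearRegression().fit(X[:, :3], X[:, 3]).score(X[:, :3], X[:, 3]))

# Unsupervised: finds hidden structure from inputs only
print("cluster labels:", KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)[:10])
```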
11945 An Efficient Hamiltonian for Discrete Fractional Fourier Transform
Authors: Sukrit Shankar, Pardha Saradhi K., Chetana Shanta Patsa, Jaydev Sharma
Abstract:
Fractional Fourier Transform, which is a generalization of the classical Fourier Transform, is a powerful tool for the analysis of transient signals. Discrete Fractional Fourier Transform Hamiltonians have been proposed in the past with varying degrees of correlation between their eigenvectors and Hermite Gaussian functions. In this paper, we propose a new Hamiltonian for the discrete Fractional Fourier Transform and show that the eigenvectors of the proposed matrix have a higher degree of correlation with the Hermite Gaussian functions. Also, the proposed matrix is shown to give better Fractional Fourier responses for various transform orders and different signals.
Keywords: Fractional Fourier Transform, Hamiltonian, eigenvectors, discrete Hermite Gaussians.
11944 Effect of Personality Traits on Classification of Political Orientation
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
Today, there is a large number of political transcripts available on the Web to be mined and used for statistical analysis and product recommendations. As these online political resources are used for various purposes, automatically determining the political orientation of these transcripts becomes crucial. The methodologies used by machine learning algorithms to perform an automatic classification are based on different features that fall under categories such as linguistic, personality, etc. Considering the ideological differences between Liberals and Conservatives, this paper studies the effect of personality traits on political orientation classification. The experiments in this study were based on the correlation between LIWC features and the Big Five personality traits. Several experiments were conducted using the Convote U.S. Congressional-Speech dataset with seven benchmark classification algorithms. The different methodologies were applied to several LIWC feature sets consisting of between 8 and 64 features correlated with the five personality traits. The experiments showed Neuroticism to be the most differentiating personality trait for classification of political orientation. At the same time, it was observed that the personality-trait-based classification methodology gives results that are better than, or comparable to, those of related work.
Keywords: Politics, personality traits, LIWC, machine learning.
11943 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: Active contour, Bayesian, echocardiographic image, feature vector.
11942 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses, which contains the object of classification with a probability of not less than the specified value. Thus, the classification is represented by a set of hypothetical classes: the smaller the cardinality of this discrete set of hypothetical classes, the higher the classification accuracy. Experiments have shown that if the cardinality of the classifiers ensemble is increased, then the cardinality of this set of hypothetical classes is reduced. The problem of guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant in multichannel classification of target events in C-OTDR monitoring systems. Results of the practical use of the suggested approach for accuracy control in C-OTDR monitoring systems are presented.
Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble.
11941 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection
Authors: Yaojun Wang, Yaoqing Wang
Abstract:
Stock selection is an important decision-making problem. Many machine learning and data mining technologies are employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock's investment value and the market timing. In this paper, we present a hybrid system that includes both for stock selection. This system uses a case-based reasoning (CBR) model to perform the stock classification and a decision-tree model to help with market timing and stock selection. The experiments show that the performance of this hybrid system is better than that of other techniques with regard to classification accuracy, average return, and the Sharpe ratio.
Keywords: Case-based reasoning, decision tree, stock selection, machine learning.
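A rough sketch of the hybrid idea follows: a nearest-neighbour retrieval step stands in for case-based reasoning to classify a stock's investment value, and a decision tree stands in for the market-timing rule. The feature names, labels, and data are entirely hypothetical.

```python
# Illustrative CBR (kNN proxy) + decision-tree hybrid for stock selection (hypothetical data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical fundamentals per stock: P/E, ROE, debt ratio
fundamentals = rng.normal(size=(200, 3))
value_label = (fundamentals[:, 1] - fundamentals[:, 2] > 0).astype(int)  # 1 = good value

# Hypothetical market features per day: momentum, volatility
market = rng.normal(size=(300, 2))
timing_label = (market[:, 0] > 0).astype(int)  # 1 = favourable timing

cbr = KNeighborsClassifier(n_neighbors=5).fit(fundamentals, value_label)
timer = DecisionTreeClassifier(max_depth=3).fit(market, timing_label)

def select(stock_features, today_market):
    """Buy only if the retrieved cases say 'good value' and the tree says 'good timing'."""
    return bool(cbr.predict([stock_features])[0] and timer.predict([today_market])[0])

print(select(fundamentals[0], market[0]))
```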
11940 Neural Network-Based Control Strategies Applied to a Fed-Batch Crystallization Process
Authors: P. Georgieva, S. Feyo de Azevedo
Abstract:
This paper is focused on issues of process modeling and two model-based control strategies for a fed-batch sugar crystallization process applying the concept of artificial neural networks (ANNs). The control objective is to force the operation to follow an optimal supersaturation trajectory. It is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. Two control alternatives are considered: model predictive control (MPC) and feedback linearizing control (FLC). Adequate ANN process models are first built as part of the controller structures. The MPC algorithm outperforms the FLC approach with respect to satisfactory reference tracking and smooth control action. However, MPC is computationally much more involved since it requires an online numerical optimization, while for FLC an analytical control solution was determined.
Keywords: Artificial neural networks, nonlinear model control, process identification, crystallization process.
11939 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.
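A minimal matching-pursuit sketch over a small Gabor dictionary is shown below, producing the sparse "weight space" representation described above. The dictionary parameters, toy signal, and stopping rule are illustrative assumptions; the learned (autoencoder) dictionary of the paper is replaced here by fixed Gabor atoms.

```python
# Minimal matching pursuit over a Gabor dictionary (illustrative parameters and signal).
import numpy as np

def gabor_dictionary(n, centers, freqs, sigma=16.0):
    t = np.arange(n)
    atoms = []
    for t0 in centers:
        for f in freqs:
            g = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * t)
            atoms.append(g / np.linalg.norm(g))
    return np.array(atoms)  # shape (n_atoms, n)

def matching_pursuit(x, D, n_iter=20):
    """Greedily pick the atom most correlated with the residual and subtract it."""
    residual = x.copy()
    weights = np.zeros(len(D))
    for _ in range(n_iter):
        corr = D @ residual
        k = int(np.argmax(np.abs(corr)))
        weights[k] += corr[k]
        residual -= corr[k] * D[k]
    return weights, residual

n = 256
t = np.arange(n)
clean = np.exp(-((t - 100) ** 2) / 512) * np.cos(2 * np.pi * 0.05 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(n)

D = gabor_dictionary(n, centers=range(0, n, 16), freqs=np.linspace(0.01, 0.2, 12))
w, r = matching_pursuit(noisy, D, n_iter=30)
print("non-zero atoms:", np.count_nonzero(w), "residual energy:", float(r @ r))
```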
11938 Fusion of ETM+ Multispectral and Panchromatic Texture for Remote Sensing Classification
Authors: Mahesh Pal
Abstract:
This paper proposes to use ETM+ multispectral data and the panchromatic band, as well as texture features derived from the panchromatic band, for land cover classification. Four texture features, including one 'internal texture' and three GLCM-based textures, namely correlation, entropy, and inverse difference moment, were used in combination with ETM+ multispectral data. Two data sets involving combinations of multispectral data, the panchromatic band, and its texture were used, and the results were compared with those obtained by using multispectral data alone. A decision tree classifier, with and without boosting, was used to classify the different datasets. Results from this study suggest that the dataset consisting of the panchromatic band, four of its texture features, and multispectral data was able to increase the classification accuracy by about 2%. In comparison, a boosted decision tree was able to increase the classification accuracy by about 3% with the same dataset.
Keywords: Internal texture, GLCM, decision tree, boosting, classification accuracy.
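The sketch below illustrates GLCM texture extraction for a panchromatic window followed by a boosted decision tree. The window size, GLCM parameters, and synthetic textures are assumptions; note that `graycomatrix`/`graycoprops` are named `greycomatrix`/`greycoprops` in older scikit-image releases, and homogeneity is used here as a stand-in for the inverse difference moment.

```python
# GLCM texture features + boosted decision stumps on synthetic texture windows (illustrative).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import AdaBoostClassifier

def glcm_features(window, levels=32):
    q = (window / window.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [graycoprops(glcm, "correlation")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0],  # proxy for inverse difference moment
            entropy]

rng = np.random.default_rng(0)
# Two synthetic "land cover" texture classes: smooth versus noisy windows
smooth = [np.abs(rng.normal(100, 2, (16, 16))) for _ in range(40)]
rough = [np.abs(rng.normal(100, 25, (16, 16))) for _ in range(40)]
X = np.array([glcm_features(w) for w in smooth + rough])
y = np.repeat([0, 1], 40)

clf = AdaBoostClassifier(n_estimators=50, random_state=0)  # boosted shallow trees
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```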
11937 Hybrid Reliability-Similarity-Based Approach for Supervised Machine Learning
Authors: Walid Cherif
Abstract:
Data mining has, over recent years, seen big advances because of the spread of the internet, which generates a tremendous volume of data every day, and also because of the immense advances in technologies which facilitate the analysis of these data. In particular, classification techniques are a subdomain of data mining which determines to which group each data instance belongs within a given dataset. They are used to classify data into different classes according to desired criteria. Generally, a classification technique is either statistical or machine learning based, and each type of these techniques has its own limits. Nowadays, data are becoming increasingly heterogeneous; consequently, current classification techniques encounter many difficulties. This paper defines new measure functions to quantify the resemblance between instances and then combines them in a new approach which differs from existing algorithms by its reliability computations. Results of the proposed approach exceeded those of the most common classification techniques, with an F-measure exceeding 97% on the Iris dataset.
Keywords: Data mining, knowledge discovery, machine learning, similarity measurement, supervised classification.
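In the spirit described above (but without the paper's reliability computation), the following simplified similarity-based classifier assigns each instance to the class whose training instances it most resembles under a hand-defined similarity function; the similarity function and the Iris split are illustrative choices.

```python
# Simplified similarity-based classification on Iris (not the paper's exact method).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

def similarity(a, b, scale):
    """Resemblance in (0, 1]: 1 when identical, decaying with scaled distance."""
    return np.exp(-np.linalg.norm((a - b) / scale))

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scale = X_tr.std(axis=0)

def predict(x):
    scores = {c: np.mean([similarity(x, t, scale) for t in X_tr[y_tr == c]])
              for c in np.unique(y_tr)}
    return max(scores, key=scores.get)

pred = np.array([predict(x) for x in X_te])
print("accuracy:", (pred == y_te).mean())
```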
11936 Orthogonal Functions Approach to LQG Control
Authors: B. M. Mohan, Sanjeeb Kumar Kar
Abstract:
In this paper, a unified approach via block-pulse functions (BPFs) or shifted Legendre polynomials (SLPs) is presented to solve the linear-quadratic-Gaussian (LQG) control problem. Also, a recursive algorithm is proposed to solve the above problem via BPFs. By using the elegant operational properties of orthogonal functions (BPFs or SLPs), these computationally attractive algorithms are developed. To demonstrate the validity of the proposed approaches, a numerical example is included.
Keywords: Linear quadratic Gaussian control, linear quadratic estimator, linear quadratic regulator, time-invariant systems, orthogonal functions, block-pulse functions, shifted Legendre polynomials.
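A small numerical sketch of the block-pulse machinery underlying such approaches: a signal is expanded over m BPFs and integration reduces to multiplication by the operational matrix P = (h/2)I + hU, where U is the strictly upper-triangular matrix of ones. The example below (integrating f(t) = t on [0, 1]) is an illustration of the operational property, not the paper's LQG algorithm.

```python
# Block-pulse function expansion and the operational matrix of integration (illustrative).
import numpy as np

def bpf_coefficients(f, m, T=1.0):
    """Average of f over each block — the BPF expansion coefficients."""
    edges = np.linspace(0, T, m + 1)
    return np.array([np.mean(f(np.linspace(a, b, 50))) for a, b in zip(edges[:-1], edges[1:])])

def integration_matrix(m, T=1.0):
    h = T / m
    return (h / 2) * np.eye(m) + h * np.triu(np.ones((m, m)), k=1)

m = 32
c = bpf_coefficients(lambda t: t, m)            # BPF expansion of f(t) = t
integral = c @ integration_matrix(m)            # BPF expansion of its integral, t^2/2
t_mid = (np.arange(m) + 0.5) / m
print("max error vs t^2/2:", np.max(np.abs(integral - t_mid ** 2 / 2)))
```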
11935 Estimation Model of Dry Docking Duration Using Data Mining
Authors: Isti Surjandari, Riara Novita
Abstract:
Maintenance is one of the most important activities in the shipyard industry. However, it is sometimes not supported by adequate services from the shipyard, and inaccuracy in estimating the duration of ship maintenance is still common. This makes estimation of ship maintenance duration crucial. This study uses a Data Mining approach, i.e., CART (Classification and Regression Tree), to estimate the duration of ship maintenance that is limited to dock works, which is known as dry docking. By using the volume of dock works as an input to estimate the maintenance duration, four classes of dry docking duration were obtained, each with a different linear model and job criteria. These linear models can then be used to estimate the duration of dry docking based on the job criteria.
Keywords: Classification and regression tree (CART), data mining, dry docking, maintenance duration.
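An illustrative sketch of the two-stage idea: a regression tree splits dry dockings into duration classes by work volume, and a linear model is then fitted within each leaf. The data, the number of classes, and the relation between volume and duration are hypothetical.

```python
# CART-style duration classes with a per-class linear model (hypothetical data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
work_volume = rng.uniform(10, 500, size=(300, 1))                 # hypothetical dock-work volume
duration = 5 + 0.08 * work_volume[:, 0] + rng.normal(0, 2, 300)   # duration in days

tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(work_volume, duration)
leaf = tree.apply(work_volume)

models = {}
for leaf_id in np.unique(leaf):
    mask = leaf == leaf_id
    models[leaf_id] = LinearRegression().fit(work_volume[mask], duration[mask])

new_job = np.array([[250.0]])
leaf_id = tree.apply(new_job)[0]
print("estimated dry docking duration (days):", models[leaf_id].predict(new_job)[0])
```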
11934 Modeling and Simulation of a Serial Production Line with Constant Work-In-Process
Authors: Mehmet Savsar
Abstract:
This paper presents a model for an unreliable production line, which is operated according to demand with constant work-in-process (CONWIP). A simulation model is developed based on the discrete model, and several case problems are analyzed using the model. The model is utilized to optimize storage space capacities at intermediate stages and the number of kanbans at the last stage, which is used to trigger production at the first stage. Furthermore, the effects of several line parameters on the production rate are analyzed using design of experiments.
Keywords: Production line simulator, push-pull system, JIT system, constant WIP, machine failures.
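A minimal CONWIP sketch in SimPy is shown below: a fixed pool of cards caps work-in-process, and a new job is released into a three-station line only when a card is free. Processing-time distributions, arrival rates, machine failures, and buffer details are simplified assumptions, not the paper's model.

```python
# Minimal CONWIP line in SimPy: card pool limits WIP; card returns when a job finishes.
import simpy
import random

N_CARDS = 5
PROCESS_TIMES = [1.0, 1.5, 1.2]   # assumed mean processing time per station
completed = 0

def job(env, stations, cards):
    global completed
    with cards.request() as card:          # acquire a CONWIP card before release
        yield card
        for i, station in enumerate(stations):
            with station.request() as slot:
                yield slot
                yield env.timeout(random.expovariate(1.0 / PROCESS_TIMES[i]))
        completed += 1                      # card is returned when the with-block exits

def source(env, stations, cards):
    while True:
        yield env.timeout(random.expovariate(1.0))   # demand arrivals
        env.process(job(env, stations, cards))

random.seed(0)
env = simpy.Environment()
stations = [simpy.Resource(env, capacity=1) for _ in PROCESS_TIMES]
cards = simpy.Resource(env, capacity=N_CARDS)
env.process(source(env, stations, cards))
env.run(until=1000)
print("throughput (jobs per unit time):", completed / 1000)
```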
11933 Classification of Radio Communication Signals using Fuzzy Logic
Authors: Zuzana Dideková, Beata Mikovičová
Abstract:
Characterization of radio communication signals aims at automatic recognition of different characteristics of radio signals in order to detect their modulation type, central frequency, and level. Our purpose is to apply techniques used in image processing in order to extract pertinent characteristics. To the individual analyses, we add several rules for checking the consistency of hypotheses using fuzzy logic. This allows taking into account the ambiguity and uncertainty that may remain after the extraction of individual characteristics. The aim is to improve the process of radio communications characterization.
Keywords: Fuzzy classification, fuzzy inference system, radio communication signals, telecommunications.
11932 An SVM based Classification Method for Cancer Data using Minimum Microarray Gene Expressions
Authors: R. Mallika, V. Saravanan
Abstract:
This paper presents a novel method for improving classification performance for cancer classification with very little microarray gene expression data. The method employs classification with individual gene ranking and gene subset ranking. For selection and classification, the proposed method uses the same classifier. The method is applied to three publicly available cancer gene expression datasets from lymphoma, liver, and leukaemia studies. Three different classifiers, namely Support Vector Machines-one against all (SVM-OAA), K nearest neighbour (KNN), and Linear Discriminant Analysis (LDA), were tested, and the results indicate an improvement in the performance of the SVM-OAA classifier, with satisfactory results on all three datasets when compared with the other two classifiers.
Keywords: Support vector machines-one against all, cancer classification, linear discriminant analysis, K nearest neighbour, microarray gene expression, gene pair ranking.
11931 A Novel Approach to Fault Classification and Fault Location for Medium Voltage Cables Based on Artificial Neural Network
Authors: H. Khorashadi-Zadeh, M. R. Aghaebrahimi
Abstract:
A novel application of the neural network approach to fault classification and fault location for medium voltage cables is demonstrated in this paper. Different faults on a protected cable should be classified and located correctly. This paper presents the use of neural networks as a pattern classifier algorithm to perform these tasks. The proposed scheme is insensitive to variation of different parameters such as fault type, fault resistance, and fault inception angle. Studies show that the proposed technique is able to offer high accuracy in both the fault classification and fault location tasks.
Keywords: Artificial neural networks, cable, fault location and fault classification.
11930 Gene Expression Data Classification Using Discriminatively Regularized Sparse Subspace Learning
Authors: Chunming Xu
Abstract:
Sparse representation, which can represent high dimensional data effectively, has been successfully used in computer vision and pattern recognition problems. However, it doesn't consider the label information of data samples. To overcome this limitation, we develop a novel dimensionality reduction algorithm, namely discriminatively regularized sparse subspace learning (DR-SSL), in this paper. The proposed DR-SSL algorithm can not only make use of the sparse representation to model the data, but can also effectively employ the label information to guide the procedure of dimensionality reduction. In addition, the presented algorithm can effectively deal with the out-of-sample problem. The experiments on gene-expression data sets show that the proposed algorithm is an effective tool for dimensionality reduction and gene-expression data classification.
Keywords: Sparse representation, dimensionality reduction, label information, sparse subspace learning, gene-expression data classification.
11929 Bayesian Networks for Earthquake Magnitude Classification in an Early Warning System
Authors: G. Zazzaro, F.M. Pisano, G. Romano
Abstract:
During the last decades, worldwide researchers have dedicated efforts to developing machine-based seismic Early Warning systems, aiming at reducing the huge human losses and economic damages. The elaboration time of seismic waveforms is to be reduced in order to increase the time interval available for the activation of safety measures. This paper suggests a Data Mining model able to correctly and quickly estimate the dangerousness of the running seismic event. Several thousand seismic recordings of Japanese and Italian earthquakes were analyzed, and a model was obtained by means of a Bayesian Network (BN), which was tested on just the first recordings of seismic events in order to reduce the decision time; the test results were very satisfactory. The model was integrated within an Early Warning System prototype able to collect and elaborate data from a seismic sensor network, estimate the dangerousness of the running earthquake, and take the decision of activating the warning promptly.
Keywords: Bayesian Networks, Decision Support System, magnitude classification, seismic Early Warning System.
11928 Feature Selection Methods for an Improved SVM Classifier
Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp
Abstract:
Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG), and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
Keywords: Feature selection, learning with kernels, Support Vector Machine, classification.
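A sketch of the kind of comparison described above on a public text corpus: keep the top-k features either by a univariate score (information gain is approximated here by mutual information) or by the magnitude of linear-SVM weights, then retrain. The corpus (downloaded on first run), k, and classifier settings are illustrative choices, not the paper's setup.

```python
# Univariate (IG-style) vs. SVM-weight feature selection for text categorization (illustrative).
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

cats = ["sci.space", "rec.autos"]
data = fetch_20newsgroups(subset="train", categories=cats, remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=5000).fit_transform(data.data)
y = data.target
k = 500

# Univariate selection (information-gain style score)
X_ig = SelectKBest(mutual_info_classif, k=k).fit_transform(X, y)

# SVM-based selection: keep the features with the largest |weight| of a linear SVM
w = np.abs(LinearSVC(max_iter=5000).fit(X, y).coef_[0])
X_svmfs = X[:, np.argsort(w)[-k:]]

for name, Xs in [("IG", X_ig), ("SVM_FS", X_svmfs)]:
    print(name, cross_val_score(LinearSVC(max_iter=5000), Xs, y, cv=3).mean())
```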
11927 Line Balancing in the Hard Disk Drive Process Using Simulation Techniques
Authors: Teerapun Saeheaw, Nivit Charoenchai, Wichai Chattinnawat
Abstract:
A simulation model is an easy way to build up models that represent real-life scenarios, to identify bottlenecks, and to enhance system performance. Using a valid simulation model may give several advantages in creating a better manufacturing design in order to improve system performance. This paper presents the results of implementing a simulation model to design the hard disk drive manufacturing process by applying line balancing to improve both productivity and quality of the hard disk drive process. The line balancing resulted in an 86% decrease in work in process; output increased by an average of 80%, average time in the system decreased by 86%, and waiting time decreased by 90%.
Keywords: Line balancing, Arena, hard disk drive process, simulation, work in process (WIP).
11926 Discovering Complex Regularities by Adaptive Self Organizing Classification
Authors: A. Faro, D. Giordano, F. Maiorana
Abstract:
Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and be able to perform sophisticated analysis. In this article, we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and support the entire data mining process up to results visualization. A graphical representation helps the user to find a strategy to optimize the classification by adding, moving, or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data that report the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
Keywords: Unsupervised classification, Kohonen networks, macroeconomics, Visual data mining, cluster interpretation.
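A bare-bones Kohonen self-organizing map in NumPy, in the spirit of the tool described above; the grid size, learning schedule, and the synthetic country trade profiles are illustrative assumptions.

```python
# Minimal Kohonen SOM for unsupervised clustering of trade profiles (illustrative assumptions).
import numpy as np

def train_som(X, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(X), 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.1
            # best-matching unit and neighbourhood update
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Hypothetical country trade profiles: [import share, export share, trade balance]
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.2, 0.8, 0.6], 0.05, (20, 3)),
               rng.normal([0.7, 0.3, -0.4], 0.05, (20, 3))])
W = train_som(X, grid=(3, 3))
bmus = [np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (3, 3)) for x in X]
print("map cells used:", sorted(set(bmus)))
```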
11925 Classification of Earthquake Distribution in the Banda Sea Collision Zone with Point Process Approach
Authors: Henry J. Wattimanela, Udjianna S. Pasaribu, Nanang T. Puspito, Sapto W. Indratno
Abstract:
The Banda Sea Collision Zone (BSCZ) is the result of the interaction and convergence of the Indo-Australian, Eurasian, and Pacific plates. It is located in eastern Indonesia and has very high seismic activity. In this research, we calculate the rate (λ) and Mean Square Error (MSE) and, from these results, classify the earthquake distribution in the BSCZ with a point process approach. A chi-square test is used to determine the type of earthquake distribution in the sub-regions of the BSCZ. The data used in this research are earthquakes with a magnitude ≥ 6 SR for the period 1964-2013, sourced from BMKG Jakarta. This research is expected to help the Moluccas Province and surrounding local governments in preparing spatial planning documents related to disaster management.
Keywords: Banda Sea Collision Zone, earthquakes, mean square error, Poisson distribution, chi-square test.
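The point-process calculations named above can be sketched as follows: estimate the annual rate λ of M ≥ 6 events in a sub-region, then use a chi-square goodness-of-fit test to check whether the yearly counts are consistent with a Poisson distribution. The counts below are synthetic placeholders for the BMKG catalogue.

```python
# Rate estimation, MSE, and a Poisson goodness-of-fit chi-square test (synthetic counts).
import numpy as np
from scipy.stats import poisson, chisquare

rng = np.random.default_rng(0)
yearly_counts = rng.poisson(2.1, size=50)           # events per year, 1964-2013 (synthetic)

lam = yearly_counts.mean()                           # rate estimate, lambda
mse = np.mean((yearly_counts - lam) ** 2)            # mean square error about the rate
print(f"lambda = {lam:.2f} events/year, MSE = {mse:.2f}")

# Observed vs expected frequencies of counts 0, 1, ..., >=5 under Poisson(lambda)
bins = np.arange(6)
observed = np.array([(yearly_counts == k).sum() for k in bins[:-1]] + [(yearly_counts >= 5).sum()])
probs = np.append(poisson.pmf(bins[:-1], lam), 1 - poisson.cdf(4, lam))
expected = probs * len(yearly_counts)
stat, p = chisquare(observed, f_exp=expected, ddof=1)  # one parameter (lambda) estimated
print(f"chi-square = {stat:.2f}, p-value = {p:.3f}")
```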
11924 Analysis of Classifications of Unsolicited Bulk Emails
Authors: Jatinderkumar R. Saini, Apurva A. Desai
Abstract:
In recent times, the problem of Unsolicited Bulk Email (UBE), commonly known as spam email, has grown at a tremendous rate. We present an analysis and survey of the classifications of UBE in various research works. There are many research instances of classification between spam and non-spam emails, but very few research instances are available for the classification of spam emails per se. This paper does not intend to assert that some UBE classification is better than the others, nor does it propose any new classification, but it bemoans the lack of harmony on the number and definition of categories proposed by different researchers. The paper also elaborates on factors like the intent of the spammer, the content of UBE, and ambiguity in the different categories proposed in related research works on classifications of UBE.
Keywords: E-mail, scams, spam email, Unsolicited Bulk Email (UBE).
11923 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach
Authors: Mukesh Kumar Shah, Tushar Gupta
Abstract:
An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions through an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, a Greedy Selection to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a crossover operation. The strategy used in the proposed algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.
11922 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm
Authors: M. Analoui, M. Fadavi Amiri
Abstract:
The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. By this approach, the number of features used in classification can be substantially reduced.
Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).
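A compact sketch of the approach described above: a genetic algorithm evolves a weight vector that scales each feature before k-NN classification, with cross-validated accuracy as the fitness. The population size, crossover/mutation scheme, and dataset are illustrative choices, not the paper's settings.

```python
# GA-evolved feature weights evaluated with a k-NN classifier (illustrative settings).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X = (X - X.mean(0)) / X.std(0)                      # normalize before weighting
rng = np.random.default_rng(0)

def fitness(w):
    """Cross-validated k-NN accuracy on the weighted features."""
    return cross_val_score(KNeighborsClassifier(5), X * w, y, cv=3).mean()

pop = rng.random((20, X.shape[1]))
for gen in range(25):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best half
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(X.shape[1]) < 0.5          # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.05, X.shape[1])  # small mutation
        children.append(np.clip(child, 0, 1))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best accuracy:", fitness(best), "near-zero weights:", int((best < 0.05).sum()))
```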
11921 Ensemble Learning with Decision Tree for Remote Sensing Classification
Authors: Mahesh Pal
Abstract:
In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification is focused on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE, and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free data set suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier; the highest classification accuracy of 87.43% was achieved by the boosted decision tree. A comparison of results with the noisy data set suggests that the bagging, DECORATE, and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades and a classification accuracy of 79.7% is achieved, which is even lower than that achieved (i.e. 80.02%) by the unboosted decision tree classifier.
Keywords: Ensemble learning, decision tree, remote sensing classification.
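A sketch of this kind of comparison using scikit-learn's ensemble wrappers around a decision tree; DECORATE is not available in scikit-learn and is omitted, and the dataset, noise injection, and settings are illustrative rather than the study's land cover data.

```python
# Single tree vs. bagging, random subspace, and boosting on clean and label-noisy data (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
rng = np.random.default_rng(0)
y_noisy = y.copy()
flip = rng.random(len(y)) < 0.2                      # 20% label noise
y_noisy[flip] = rng.integers(0, 4, flip.sum())

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "random subspace": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                         max_features=0.5, bootstrap=False, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for labels, tag in [(y, "clean"), (y_noisy, "noisy")]:
    for name, model in models.items():
        print(tag, name, round(cross_val_score(model, X, labels, cv=3).mean(), 3))
```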
11920 Conceptual Method for Flexible Business Process Modeling
Authors: Adla Bentellis, Zizette Boufaïda
Abstract:
Nowadays, the pace of business change is such that, increasingly, new functionality has to be realized and reliably installed in a matter of days, or even hours. Consequently, more and more business processes are prone to continuous change. The objective of the research in progress is to use the MAP model in a conceptual modeling method for flexible and adaptive business processes. This method can be used to capture the flexibility dimensions of a business process; it takes inspiration from the modularity concept in the object-oriented paradigm to establish a hierarchical construction of the BP modeling. Its intent is to provide flexible modeling that allows companies to quickly adapt their business processes.
Keywords: Business process, business process modeling, flexibility, MAP model.