Search results for: Kernel Methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4075

4075 Application of Formal Methods for Designing a Separation Kernel for Embedded Systems

Authors: Kei Kawamorita, Ryouta Kasahara, Yuuki Mochizuki, Kenichiro Noguchi

Abstract:

A separation-kernel-based operating system (OS) has been designed for use in secure embedded systems by applying formal methods to the design of the separation-kernel part. The separation kernel is a small OS kernel that provides an abstract distributed environment on a single CPU. The design of the separation kernel was verified using two formal methods, the B method and the Spin model checker. A newly designed semi-formal method, the extended state transition method, was also applied. An OS comprising the separation-kernel part and additional OS services on top of the separation kernel was prototyped on the Intel IA-32 architecture. The development and testing of a prototype embedded application, a point-of-sale application, on the prototype OS demonstrated that the proposed architecture and the use of formal methods to design its kernel part are effective for achieving a secure embedded system with a high-assurance separation kernel.

Keywords: B method, embedded systems, extended state transition, formal methods, separation kernel, Spin.

4074 Online Prediction of Nonlinear Signal Processing Problems Based Kernel Adaptive Filtering

Authors: Hamza Nejib, Okba Taouali

Abstract:

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares (KLMS) and kernel recursive least squares (KRLS), for predicting new outputs of nonlinear signal processing problems. Both methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions that transform the observed data from the input space to a high-dimensional feature space; this idea is known as the kernel trick. KAF thus amounts to developing adaptive filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and to determine which of them is better adapted.
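
To make the KAF update concrete, the following minimal Python sketch implements a basic KLMS filter on a synthetic noisy sine series. It illustrates the general KLMS recursion (prediction as a kernel expansion over past inputs, with an LMS-style coefficient added per new sample); it is not the authors' code, and the step size, kernel width, embedding length and data are placeholders.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two input vectors
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def klms_predict(x, centers, coeffs, sigma=1.0):
    # The KAF model: a linear combination of kernels centred at past inputs
    return sum(a * gaussian_kernel(c, x, sigma) for c, a in zip(centers, coeffs))

def klms_train(X, d, eta=0.2, sigma=1.0):
    """One online pass of KLMS: each new sample adds a kernel centre whose
    coefficient is the step size times the instantaneous prediction error."""
    centers, coeffs, errors = [], [], []
    for x_t, d_t in zip(X, d):
        y_t = klms_predict(x_t, centers, coeffs, sigma) if centers else 0.0
        e_t = d_t - y_t
        centers.append(x_t)
        coeffs.append(eta * e_t)
        errors.append(e_t)
    return centers, coeffs, errors

# Toy usage: one-step-ahead prediction of a noisy sine from a 5-sample window
rng = np.random.default_rng(0)
t = np.arange(400)
s = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)
X = np.array([s[i:i + 5] for i in range(len(s) - 5)])   # sliding input windows
d = s[5:]                                               # targets: the next sample
centers, coeffs, errors = klms_train(X, d)
print("last-sample squared error:", errors[-1] ** 2)
```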

Keywords: KLMS, KRLS, KAF, online prediction, signal processing, RKHS, kernel methods.

4073 Adaptive Kernel Filtering Used in Video Processing

Authors: Rasmus Engholm, Eva B. Vedel Jensen, Henrik Karstoft

Abstract:

In this paper we present a noise reduction filter for video processing. It is based on the recently proposed two-dimensional steering kernel, extended to three dimensions and further augmented to suit the spatial-temporal domain of video processing. Two alternative filters are proposed: the time-symmetric kernel and the time-asymmetric kernel. The first reduces the noise within single scenes, but to handle the problems at scene shifts the asymmetric kernel is introduced. The performance of both is tested on simulated data and on a real video sequence, together with the existing steering kernel. The proposed kernels improve the root mean squared error (RMSE) on video material compared to the original steering kernel method.

Keywords: Adaptive image filtering, noise reduction, kernel methods, video processing.

4072 Face Recognition using Features Combination and a New Non-linear Kernel

Authors: Essam Al Daoud

Abstract:

To improve the classification rate of face recognition, a features combination and a novel non-linear kernel are proposed. The feature vector concatenates local binary patterns at three different radii and Gabor wavelet features. The Gabor features are the mean, standard deviation and skew of each scaling and orientation parameter. The aim of the new kernel is to combine the power of kernel methods with an optimal balance between the features. To verify the effectiveness of the proposed method, numerous methods are tested on four datasets comprising various emotions, orientations, configurations, expressions and lighting conditions. Empirical results show the superiority of the proposed technique when compared to other methods.

Keywords: Face recognition, Gabor wavelet, LBP, non-linear kernel.

4071 Comparative Studies of Support Vector Regression between Reproducing Kernel and Gaussian Kernel

Authors: Wei Zhang, Su-Yan Tang, Yi-Fan Zhu, Wei-Ping Wang

Abstract:

Support vector regression (SVR) has been regarded as a state-of-the-art method for approximation and regression. The importance of the kernel function, the so-called admissible support vector kernel (SV kernel) in SVR, has motivated many studies on its composition. The Gaussian kernel (RBF) is regarded as the "best" choice of SV kernel by non-experts in SVR, although there is no evidence, apart from its superior performance in some practical applications, to support this statement. It is well known that the reproducing kernel (R.K) is also an SV kernel that possesses many important properties, e.g. positive definiteness, the reproducing property, and the possibility of composing complex R.Ks from simpler ones. However, there are a limited number of R.Ks with explicit forms and consequently few quantitative comparison studies in practice. In this paper, two R.Ks, i.e. SV kernels, composed by the sum and product of a translation-invariant kernel in a Sobolev space are proposed. An exploratory study of the performance of SVR based on a general R.K is presented through a systematic comparison with RBF using multiple criteria and synthetic problems. The results show that the R.K is an equivalent or even better SV kernel than RBF for problems with more input variables (more than 5, especially more than 10) and higher nonlinearity.
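
As an illustration of plugging a reproducing kernel into SVR alongside the RBF kernel, the sketch below uses scikit-learn's SVR with a callable kernel. The first-order Sobolev kernel k(x, y) = 0.5·exp(-|x - y|), taken as a product over dimensions, is only a stand-in example of a translation-invariant R.K, not the specific sum/product kernels proposed in the paper; the synthetic regression problem and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def sobolev_rk(X, Y):
    """Product over dimensions of the 1-D first-order Sobolev reproducing kernel
    k(x, y) = 0.5 * exp(-|x - y|), used here as an example translation-invariant R.K."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for d in range(X.shape[1]):
        K *= 0.5 * np.exp(-np.abs(X[:, d, None] - Y[None, :, d]))
    return K

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 6))                    # 6 input variables
y = np.sin(3 * X[:, 0]) * np.exp(X[:, 1]) + 0.1 * rng.standard_normal(300)
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

for name, kern in [("RBF", "rbf"), ("Sobolev R.K", sobolev_rk)]:
    model = SVR(kernel=kern, C=10.0, epsilon=0.01).fit(X_tr, y_tr)
    print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```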

Keywords: admissible support vector kernel, reproducing kernel, reproducing kernel Hilbert space, support vector regression.

4070 Finger Vein Recognition using PCA-based Methods

Authors: Sepehr Damavandinejadmonfared, Ali Khalili Mobarakeh, Mohsen Pashna, Jiangping Gou, Sayedmehran Mirsafaie Rizi, Saba Nazari, Shadi Mahmoodi Khaniabadi, Mohamad Ali Bagheri

Abstract:

In this paper a novel algorithm is proposed to improve the accuracy of finger vein recognition. The performances of Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Kernel Entropy Component Analysis (KECA) in this algorithm are validated and compared with each other in order to determine which one is the most appropriate in terms of finger vein recognition.
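
A minimal sketch of this kind of comparison, using scikit-learn's PCA and KernelPCA followed by a 1-NN classifier; the data here are random stand-ins for finger-vein images, and KECA is omitted because it requires a custom ranking of eigenpairs by their Renyi-entropy contribution, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 256))      # stand-in for flattened finger-vein images
y = np.repeat(np.arange(20), 10)         # 20 subjects, 10 images each

reducers = {
    "PCA":         PCA(n_components=30),
    "KPCA (poly)": KernelPCA(n_components=30, kernel="poly", degree=2),
    "KPCA (rbf)":  KernelPCA(n_components=30, kernel="rbf", gamma=1e-3),
}
for name, reducer in reducers.items():
    Z = reducer.fit_transform(X)                        # feature extraction step
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), Z, y, cv=5).mean()
    print(f"{name}: 1-NN cross-validated accuracy = {acc:.3f}")
```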

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Kernel Entropy Component Analysis (KECA).

4069 Kernel’s Parameter Selection for Support Vector Domain Description

Authors: Mohamed EL Boujnouni, Mohamed Jedra, Noureddine Zahid

Abstract:

Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, in which one uses balls defined in the feature space in order to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to selecting the kernel's parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while at the same time minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.
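
Such a selection criterion can be evaluated entirely from Gram matrices, since both the distance between class centres and the within-class variance in feature space are expressible through kernel sums. The sketch below scores a set of candidate Gaussian widths this way; combining the two terms as a simple difference is an assumption made for illustration and may differ from the exact trade-off used in the paper, and the two-class data are synthetic.

```python
import numpy as np

def rbf_gram(X, Y, sigma):
    D2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2.0 * sigma ** 2))

def center_distance_criterion(X_normal, X_abnormal, sigma):
    """Between-class centre distance minus within-class variances, both computed
    in feature space using only Gram matrices (the kernel trick)."""
    Knn = rbf_gram(X_normal, X_normal, sigma)
    Kaa = rbf_gram(X_abnormal, X_abnormal, sigma)
    Kna = rbf_gram(X_normal, X_abnormal, sigma)
    dist2 = Knn.mean() - 2.0 * Kna.mean() + Kaa.mean()   # ||m_normal - m_abnormal||^2
    var_n = np.diag(Knn).mean() - Knn.mean()             # within-class scatter (normal)
    var_a = np.diag(Kaa).mean() - Kaa.mean()             # within-class scatter (abnormal)
    return dist2 - (var_n + var_a)

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(100, 2))
X_abnormal = rng.normal(3.0, 1.0, size=(30, 2))
scores = {s: center_distance_criterion(X_normal, X_abnormal, s) for s in (0.1, 0.5, 1.0, 2.0, 5.0)}
print("criterion per sigma:", {s: round(v, 3) for s, v in scores.items()})
print("selected sigma:", max(scores, key=scores.get))
```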

Keywords: Gravity centers, Kernel’s parameter, Support Vector Domain Description, Variance.

4068 On Discretization of Second-order Derivatives in Smoothed Particle Hydrodynamics

Authors: R. Fatehi, M.A. Fayazbakhsh, M.T. Manzari

Abstract:

Discretization of spatial derivatives is an important issue in meshfree methods, especially when the derivative terms contain non-linear coefficients. In this paper, various methods used for the discretization of second-order spatial derivatives are investigated in the context of Smoothed Particle Hydrodynamics. Three popular forms (i.e. "double summation", "second-order kernel derivation", and "difference scheme") are studied using the one-dimensional unsteady heat conduction equation. To assess these schemes, the transient response to a step-function initial condition is considered. Due to the parabolic nature of the heat equation, one can expect smooth and monotone solutions. It is shown in this paper, however, that regardless of the type of kernel function used and the size of the smoothing radius, the double summation discretization form leads to non-physical oscillations that persist in the solution. The results also show that when a second-order kernel derivative is used, a high-order kernel function must be employed such that the distance of the inflection point from the origin of the kernel function is less than the nearest particle distance. Otherwise, solutions may exhibit oscillations near discontinuities, unlike the "difference scheme", which unconditionally produces monotone results.
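
For reference, a minimal sketch of the "difference scheme" (Brookshaw-type) second-derivative estimate applied to 1-D heat conduction with a step initial condition; the Gaussian smoothing kernel, particle spacing and time step are illustrative choices, and boundary (particle-deficiency) corrections are ignored.

```python
import numpy as np

def gaussian_w_grad(dx, h):
    # Derivative of the 1-D Gaussian smoothing kernel W with respect to dx = x_i - x_j
    w = np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))
    return -2.0 * dx / h ** 2 * w

def sph_laplacian_difference(x, T, m, rho, h):
    """'Difference scheme' estimate of d2T/dx2 at each particle:
       2 * sum_j (m_j/rho_j) (T_i - T_j) (x_i - x_j) dW_ij / |x_i - x_j|^2."""
    lap = np.zeros_like(T)
    for i in range(len(x)):
        dx = x[i] - x
        mask = np.abs(dx) > 1e-12            # skip the particle itself
        dw = gaussian_w_grad(dx[mask], h)
        lap[i] = 2.0 * np.sum(m[mask] / rho[mask] * (T[i] - T[mask]) * dx[mask] * dw / dx[mask] ** 2)
    return lap

# Explicit time stepping of 1-D heat conduction with a step initial condition
n, alpha = 200, 1.0
x = np.linspace(0.0, 1.0, n)
dx_p = x[1] - x[0]
m, rho, h = np.full(n, dx_p), np.ones(n), 1.5 * dx_p
T = np.where(x < 0.5, 1.0, 0.0)
dt = 0.2 * dx_p ** 2 / alpha
for _ in range(200):
    T = T + dt * alpha * sph_laplacian_difference(x, T, m, rho, h)
print("solution stays monotone:", bool(np.all(np.diff(T) <= 1e-9)))
```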

Keywords: Heat conduction, Meshfree methods, Smoothed Particle Hydrodynamics (SPH), Second-order derivatives.

4067 Adaptive Kernel Principal Analysis for Online Feature Extraction

Authors: Mingtao Ding, Zheng Tian, Haixia Xu

Abstract:

The batch nature of standard kernel principal component analysis (KPCA) limits the method in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.

Keywords: adaptive method, kernel principal component analysis, online extraction, recursive algorithm

4066 A New Composition Method of Admissible Support Vector Kernel Based on Reproducing Kernel

Authors: Wei Zhang, Xin Zhao, Yi-Fan Zhu, Xin-Jian Zhang

Abstract:

The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, is the reason Support Vector Machines (SVM) have been successfully applied in many fields, e.g. classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K) is a useful kernel function that possesses many properties, e.g. positive definiteness, the reproducing property, and the composition of complex R.Ks by simple operations. There are two popular ways to compute an R.K with explicit form. One is to construct and solve a specific differential equation with boundary values, whose drawback is that it cannot provide a unified form of R.K. The other uses a piecewise integral of the Green function associated with a differential operator L. The latter benefits the computation of an R.K with a unified explicit form and theoretical analysis, but such studies are relatively recent and practical computations are few. In this paper, a new algorithm for computing an R.K is presented. It can obtain the unified explicit form of the R.K in a general reproducing kernel Hilbert space. It avoids constructing and solving complex differential equations manually and enables an automatic, flexible and rigorous computation for more general RKHSs. In order to validate that the R.K computed by the algorithm can be used well in SVM, some illustrative examples and a comparison between the R.K and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K is close or slightly superior to that of RBF.

Keywords: admissible support vector kernel, reproducing kernel, reproducing kernel Hilbert space, Green function, support vector regression.

4065 A Bayesian Kernel for the Prediction of Protein- Protein Interactions

Authors: Hany Alashwal, Safaai Deris, Razib M. Othman

Abstract:

Understanding protein functions is a major goal in the post-genomic era. Proteins usually work in the context of other proteins and rarely function alone. Therefore, it is highly relevant to study the interaction partners of a protein in order to understand its function. Machine learning techniques have been widely applied to predict protein-protein interactions. Kernel functions play an important role in a successful machine learning technique. Choosing the appropriate kernel function can lead to better accuracy in a binary classifier such as the support vector machine. In this paper, we describe a Bayesian kernel for the support vector machine to predict protein-protein interactions. The Bayesian kernel can improve the classifier performance by incorporating the probability characteristics of the available experimental protein-protein interaction data compiled from different sources. In addition, the probabilistic output from the Bayesian kernel can assist biologists in conducting further research on the highly predicted interactions. The results show that the accuracy of the classifier is improved using the Bayesian kernel compared to the standard SVM kernels, which implies that protein-protein interactions can be predicted with better accuracy using the Bayesian kernel.

Keywords: Bioinformatics, Protein-protein interactions, Bayesian Kernel, Support Vector Machines.

4064 A Comparison of the Nonparametric Regression Models using Smoothing Spline and Kernel Regression

Authors: Dursun Aydin

Abstract:

This paper studies the use of nonparametric models for Gross National Product data in Turkey and the Stanford heart transplant data. Two nonparametric techniques, smoothing spline and kernel regression, are discussed. The main goal is to compare these techniques for prediction with nonparametric regression models. According to the results of the numerical studies, it is concluded that smoothing spline regression estimators are better than those of kernel regression.
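
A small sketch contrasting the two estimators on synthetic data: a smoothing spline via SciPy's UnivariateSpline and a Nadaraya-Watson kernel regression written directly in NumPy. The data, smoothing factor and bandwidth are placeholders rather than the GNP or heart-transplant settings used in the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)        # noisy synthetic observations
x_grid = np.linspace(0, 10, 200)

spline = UnivariateSpline(x, y, s=len(x) * 0.3 ** 2)      # smoothing spline estimator
y_spline = spline(x_grid)
y_kernel = nadaraya_watson(x, y, x_grid, bandwidth=0.4)   # kernel regression estimator

truth = np.sin(x_grid)
print("smoothing spline MSE:  ", np.mean((y_spline - truth) ** 2))
print("kernel regression MSE: ", np.mean((y_kernel - truth) ** 2))
```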

Keywords: Kernel regression, Nonparametric models, Prediction, Smoothing spline.

4063 Generalization Kernel for Geopotential Approximation by Harmonic Splines

Authors: Elena Kotevska

Abstract:

This paper presents a generalization kernel for gravitational potential determination by harmonic splines. It was shown in [10] that the gravitational potential can be approximated using a kernel represented as a Newton integral over the real Earth body. On the other hand, the theory of geopotential approximation by harmonic splines uses spherically oriented kernels. The purpose of this paper is to show that in the spherical case both kernels have the same type of representation, which leads us to the conclusion that the kernel represented as a Newton integral over the real Earth body can be considered a generalization of spherical harmonic kernels to real geometries.

Keywords: Geopotential, Reproducing Kernel, Approximation, Regular Surface

4062 Weighted k-Nearest-Neighbor Techniques for High Throughput Screening Data

Authors: K. Kozak, M. Kozak, K. Stapor

Abstract:

The k-nearest neighbors (kNN) method is a simple but effective method of classification. In this paper we present an extended version of this technique for chemical compounds used in High Throughput Screening, in which the distances of the nearest neighbors are taken into account. Our algorithm uses kernel weight functions as guidance for the process of defining activity in screening data. The proposed kernel weight function aims to combine properties of the graphical structure and molecular descriptors of screening compounds. We apply the modified kNN method to several experimental data sets from biological screens. The experimental results confirm the effectiveness of the proposed method.
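
A minimal sketch of a kernel-weighted kNN vote; it uses a standard tricube distance weight on synthetic descriptor vectors, whereas the paper's weight function additionally encodes graph-structure information about the compounds.

```python
import numpy as np

def tricube(u):
    # Tricube kernel weight: close neighbours weigh more, weight is zero beyond u = 1
    return np.where(np.abs(u) < 1, (1 - np.abs(u) ** 3) ** 3, 0.0)

def weighted_knn_predict(X_train, y_train, x, k=5):
    """Classify x by a kernel-weighted vote among its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)
    idx = order[:k]
    scale = d[order[k]] + 1e-12 if len(d) > k else d[idx[-1]] + 1e-12
    w = tricube(d[idx] / scale)
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# Toy screening-like data: "active"/"inactive" compounds described by 4 descriptors
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(1.5, 1.0, (50, 4))])
y = np.array(["inactive"] * 50 + ["active"] * 50)
x_query = rng.normal(1.4, 1.0, 4)
print("predicted activity:", weighted_knn_predict(X, y, x_query, k=7))
```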

Keywords: biological screening, kernel methods, KNN, QSAR

4061 Kernel Matching versus Inverse Probability Weighting: A Comparative Study

Authors: Andy Handouyahia, Tony Haddad, Frank Eaton

Abstract:

A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods for estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
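
To illustrate the two estimators being compared, the sketch below computes an average treatment effect on the treated (ATT) on simulated data with both inverse probability weighting and Gaussian kernel matching on the estimated propensity score. The data-generating process, bandwidth and logistic propensity model are illustrative assumptions, not the HRSDC evaluation setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
X = rng.standard_normal((n, 3))                              # observed covariates
p_true = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
D = rng.binomial(1, p_true)                                   # programme participation
Y = 1.0 * D + X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(n)   # true effect = 1.0

ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]    # estimated propensity score
t, c = D == 1, D == 0

# IPW (ATT form): comparison-group outcomes re-weighted by ps/(1-ps)
att_ipw = Y[t].mean() - np.average(Y[c], weights=ps[c] / (1.0 - ps[c]))

def att_kernel_matching(ps, Y, t, c, h=0.06):
    """Each treated unit is matched to a Gaussian-kernel-weighted average of controls."""
    cf = np.empty(int(t.sum()))
    for i, p_i in enumerate(ps[t]):
        w = np.exp(-0.5 * ((ps[c] - p_i) / h) ** 2)
        cf[i] = np.average(Y[c], weights=w)
    return (Y[t] - cf).mean()

print("ATT, inverse probability weighting:", round(att_ipw, 3))
print("ATT, kernel matching:              ", round(att_kernel_matching(ps, Y, t, c), 3))
```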

Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.

4060 Composite Kernels for Public Emotion Recognition from Twitter

Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang

Abstract:

The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives regarding certain topics online. Our research aims at using natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with the linear kernel to simultaneously exploit both the tree representation and the distributed emotion keyword representation, analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
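
A composite kernel of this kind can be realized by mixing two Gram matrices and feeding the result to an SVM with a precomputed kernel, as in the sketch below. The "tree kernel" side is replaced here by an RBF on a synthetic structure-feature view, since real tree kernels operate on parse trees that are not reproduced in this listing; the mixing weight and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def linear_gram(A, B):
    # Linear kernel on the content (emotion keyword) representation
    return A @ B.T

def rbf_gram(A, B, gamma=0.5):
    # Stand-in for the tree kernel: an RBF on a second "structure" feature view
    D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

rng = np.random.default_rng(9)
X_content = np.vstack([rng.normal(-1, 1, (80, 10)), rng.normal(1, 1, (80, 10))])
X_struct = np.vstack([rng.normal(0.0, 1, (80, 4)), rng.normal(0.8, 1, (80, 4))])
y = np.array([0] * 80 + [1] * 80)
idx = rng.permutation(160)
tr, te = idx[:120], idx[120:]

lam = 0.6   # mixing weight between the two kernels
K_train = lam * rbf_gram(X_struct[tr], X_struct[tr]) + (1 - lam) * linear_gram(X_content[tr], X_content[tr])
K_test = lam * rbf_gram(X_struct[te], X_struct[tr]) + (1 - lam) * linear_gram(X_content[te], X_content[tr])

clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y[tr])
print("held-out accuracy with the composite kernel:", (clf.predict(K_test) == y[te]).mean())
```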

Keywords: Public emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining.

4059 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and can thus be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
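
A bare-bones kernel k-means sketch showing how cluster assignments can be computed from the Gram matrix alone; it omits the weighting and spatial constraints that the paper adds, and the data are synthetic blobs rather than climate records.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Plain kernel k-means: squared feature-space distances to each cluster centre
    are computed from the Gram matrix K alone."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, n_clusters, n)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            members = labels == c
            if not members.any():
                continue
            m = members.sum()
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/m) sum_{j in c} K_ij + (1/m^2) sum_{j,l in c} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(axis=1) / m
                          + K[np.ix_(members, members)].sum() / m ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * D2)                         # Gaussian kernel Gram matrix
labels = kernel_kmeans(K, n_clusters=2)
print("cluster sizes:", np.bincount(labels))
```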

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

4058 Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG) and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
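
For concreteness, a small sketch of Information Gain scoring for binary term features, one of the three selection methods evaluated; the toy document-term matrix and the choice of keeping the top-scoring terms are illustrative assumptions.

```python
import numpy as np

def information_gain(X_terms, y):
    """IG(term) = H(y) - H(y | term present/absent) for binary term-occurrence features."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())
    h_y = entropy(y)
    gains = np.zeros(X_terms.shape[1])
    for j in range(X_terms.shape[1]):
        present = X_terms[:, j] > 0
        cond = 0.0
        for split in (present, ~present):
            if split.any():
                cond += split.mean() * entropy(y[split])
        gains[j] = h_y - cond
    return gains

# Toy document-term data: only the first two terms carry the class signal
rng = np.random.default_rng(5)
y = rng.integers(0, 2, 500)
X = rng.integers(0, 2, (500, 6))
X[:, 0] = y                                             # perfectly informative term
X[:, 1] = np.where(rng.random(500) < 0.8, y, 1 - y)     # noisy but informative term
ig = information_gain(X, y)
print("IG per term:", np.round(ig, 3))
print("terms kept (top 3):", np.argsort(ig)[::-1][:3])
```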

Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, Classification.

4057 Study of Effective Moisture Diffusivity of Oak Acorn

Authors: Habibeh Nalbandi, Sadegh Seiiedlou, Hamid R. Ghasemzadeh, Naser Hamdami

Abstract:

The purpose of the present work was to study the drying kinetics of the whole acorn and its kernel at different drying air temperatures and to determine their effective moisture diffusivity. The results indicated that the drying time of the whole acorn was 442, 206 and 188 min at air temperatures of 65, 75 and 85 °C, respectively. At the same temperatures, the drying time of the kernel was 131, 56 and 76 min. The results showed that the reduction in drying time obtained by increasing the drying air temperature was not significant under all conditions. The effective moisture diffusivity of the whole acorn and the kernel increased with increasing air temperature from 65 to 75 °C. However, a further increase in air temperature led to a decrease in this property for the acorn kernel. The critical temperature of acorn drying was about 75 °C, at which the acorn kernel had the highest effective moisture diffusivity.

Keywords: Critical temperature, Drying kinetics, Moisture diffusivity, Oak acorn.

4056 Waste Lubricating Oil Treatment by Adsorption Process Using Different Adsorbents

Authors: Nabil M. Abdel-Jabbar, Essam A.H. Al Zubaidy, Mehrab Mehrvar

Abstract:

The re-refining of waste lubricating oil by an adsorption process using different adsorbent materials was investigated. Adsorbent materials such as an oil adsorbent, egg shell powder, date palm kernel powder, and acid-activated date palm kernel powder were used. The adsorption process over a fixed amount of adsorbent at ambient conditions was investigated. The adsorption/extraction process was able to reduce the asphaltenic and metallic contaminants in the waste oil to lower values. It was found that the date palm kernel powder with a contact time of 4 h gave the best conditions for treating the waste oil. The recovered solvent could also be reused. It was also found that the activated bentonite gave the best physical properties, followed by the date palm kernel powder.

Keywords: activated bentonite, egg shell powder, date palm kernel powder, used oil treatment, used oil characteristics.

4055 On the Efficient Implementation of a Serial and Parallel Decomposition Algorithm for Fast Support Vector Machine Training Including a Multi-Parameter Kernel

Authors: Tatjana Eitrich, Bruno Lang

Abstract:

This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as in shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and an improvement in overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.

Keywords: Support Vector Machine Training, Multi-Parameter Kernels, Shared Memory Parallel Computing, Large Data.

4054 An Iterative Algorithm for KLDA Classifier

Authors: D.N. Zheng, J.X. Wang, Y.N. Zhao, Z.H. Yang

Abstract:

Linear discriminant analysis (LDA) can readily be generalized into a nonlinear form, kernel LDA (KLDA), by using kernel functions. However, KLDA often leads to a generalized eigenvalue problem that is singular. To avoid this complication, this paper proposes an iterative algorithm for two-class KLDA. The proposed KLDA is used as a nonlinear discriminant classifier, and the experiments show that it has a performance comparable to SVM.

Keywords: Linear discriminant analysis (LDA), kernel LDA (KLDA), conjugate gradient algorithm, nonlinear discriminant classifier.

4053 Evaluating some Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine feature selection (called SVM_FS) and Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which involves longer vectors, at quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).

Keywords: Feature selection, learning with kernels, support vector machine, genetic algorithms, classification.

4052 A New Algorithm for Determining the Leading Coefficient in the Parabolic Equation

Authors: Shiping Zhou, Minggen Cui

Abstract:

This paper investigates the inverse problem of determining the unknown time-dependent leading coefficient in a parabolic equation using the usual conditions of the direct problem and an additional condition. An algorithm is developed for solving the inverse problem numerically using the technique of space decomposition in a reproducing kernel space. The leading coefficient can be obtained by solving a lower triangular linear system. Numerical experiments are presented to show the efficiency of the proposed method.

Keywords: parabolic equations, coefficient inverse problem, reproducing kernel.

4051 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis

Authors: Amir Hajian, Sepehr Damavandinejadmonfared

Abstract:

In this paper the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is of importance especially when it comes to real-world applications and the usage of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA (polynomial, Gaussian, and Laplacian kernels) in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).

4050 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems

Authors: N. Georgoulopoulos, A. Hatzopoulos, K. Karamitsios, K. Kotrotsios, A. I. Metsai

Abstract:

Current server systems are responsible for critical applications that run on different infrastructures, such as the cloud, physical machines, and virtual machines. A common challenge that these systems face is the various hardware faults that may occur due to high load, among other reasons, which translate into errors resulting in malfunctions or even server downtime. The most important hardware parts causing most of the errors are the CPU, RAM, and the hard drive (HDD). In this work, we investigate selected CPU, RAM, and HDD errors, observed or simulated in kernel ring buffer log files from GNU/Linux servers. Moreover, a severity characterization is given for each error type. Understanding these errors is crucial for the efficient analysis of kernel logs that are usually utilized for monitoring servers and diagnosing faults. In addition, to support the previous analysis, we present possible ways of simulating hardware errors in RAM and HDD, aiming to facilitate the testing of methods for detecting and tackling the above issues on a server running GNU/Linux.
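
A possible shape for such a log analysis is sketched below: kernel ring-buffer lines are matched against regular expressions for CPU (MCE), RAM (EDAC) and disk (I/O) error messages and tagged with a severity. The patterns and severity labels are illustrative examples only, not the paper's characterization or an exhaustive catalogue of GNU/Linux error strings.

```python
import re

# Illustrative patterns and severities only; real deployments would use a fuller catalogue.
ERROR_PATTERNS = [
    (re.compile(r"mce: \[Hardware Error\]|Machine check events logged", re.I), "CPU", "critical"),
    (re.compile(r"EDAC .*\bUE\b", re.I), "RAM", "critical"),
    (re.compile(r"EDAC .*\bCE\b", re.I), "RAM", "warning"),
    (re.compile(r"I/O error, dev|ata\d+\.\d+: failed command", re.I), "HDD", "critical"),
    (re.compile(r"Buffer I/O error", re.I), "HDD", "warning"),
]

def classify_kernel_log(lines):
    """Scan kernel ring buffer lines (e.g. `dmesg` output) and tag hardware error lines."""
    findings = []
    for line in lines:
        for pattern, component, severity in ERROR_PATTERNS:
            if pattern.search(line):
                findings.append((component, severity, line.strip()))
                break
    return findings

sample = [
    "[  201.123456] mce: [Hardware Error]: Machine check events logged",
    "[  305.654321] EDAC MC0: 1 CE memory read error on CPU_SrcID#0_Channel#1_DIMM#0",
    "[  410.111111] blk_update_request: I/O error, dev sda, sector 123456",
    "[  500.000000] usb 1-1: new high-speed USB device number 2",
]
for component, severity, line in classify_kernel_log(sample):
    print(f"{severity:8s} {component:4s} {line}")
```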

Keywords: Hardware errors, kernel logs, GNU/Linux servers, RAM, HDD, CPU.

4049 Flame Kernel Growth and Related Effects of Spark Plug Electrodes: Fluid Motion Interaction in an Optically Accessible DISI Engine

Authors: A. Schirru, A. Irimescu, S. Merola, A. d’Adamo, S. Fontanesi

Abstract:

One of the aspects that are usually neglected during the design phase of an engine is the effect of the spark plug on the flow field inside the combustion chamber. Because of the difficulties in the experimental investigation of the mutual interaction between flow alteration and early flame kernel convection effect inside the engine combustion chamber, CFD-3D simulation is usually exploited in such cases. Experimentally speaking, a particular type of engine has to be used in order to directly observe the flame propagation process. In this study, a double electrode spark plug was fitted into an optically accessible engine and a high-speed camera was used to capture the initial stages of the combustion process. Both the arc and the kernel phases were observed. Then, a morphologic analysis was carried out and the position of the center of mass of the flame, relative to the spark plug position, was calculated. The crossflow orientation was chosen for the spark plug and the kernel growth process was observed for different air-fuel ratios. It was observed that during a normal cycle the flow field between the electrodes tends to transport the arc deforming it. Because of that, the kernel growth phase takes place away from the electrodes and the flame propagates with a preferential direction dictated by the flow field.

Keywords: Combustion, Kernel growth, optically accessible engine, spark-ignition engine, spark plug orientation.

4048 A Kernel Classifier using Linearised Bregman Iteration

Authors: K. A. D. N. K Wimalawarne

Abstract:

In this paper we introduce a novel kernel classifier based on an iterative shrinkage algorithm developed for compressive sensing. We have adopted Bregman iteration with soft and hard shrinkage functions and a generalized hinge loss for solving the l1-norm minimization problem for classification. Our experimental results on face recognition and digit classification, using SVM as the benchmark, show that our method has an error rate close to that of SVM but does not perform better than SVM. We have found that the soft shrinkage method gives higher accuracy and, in some situations, more sparseness than the hard shrinkage method.
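
A minimal sketch of a linearized Bregman iteration with soft shrinkage applied to a kernel expansion: it fits the coefficients of K·alpha ≈ y with an l1-promoting update rather than the generalized hinge loss used in the paper, and the RBF kernel, step size, threshold and data are illustrative placeholders.

```python
import numpy as np

def rbf_gram(X, Y, gamma=0.5):
    D2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

def soft_shrink(v, mu):
    # Soft-thresholding (shrinkage) operator
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(K, y, mu=1.0, n_iter=500):
    """Sparse coefficients alpha with K @ alpha ≈ y via linearized Bregman iteration
    with soft shrinkage (an l1-style fit, not the paper's generalized hinge loss)."""
    delta = 1.0 / np.linalg.norm(K, 2) ** 2     # step size from the spectral norm
    v = np.zeros(K.shape[0])
    alpha = np.zeros(K.shape[0])
    for _ in range(n_iter):
        v += K.T @ (y - K @ alpha)              # Bregman residual update
        alpha = delta * soft_shrink(v, mu)      # shrinkage keeps the expansion sparse
    return alpha

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-1.0, 1.0, (60, 2)), rng.normal(1.0, 1.0, (60, 2))])
y = np.array([-1.0] * 60 + [1.0] * 60)
K = rbf_gram(X, X)
alpha = linearized_bregman(K, y)
pred = np.sign(K @ alpha)
print("training accuracy:", (pred == y).mean(), "| nonzero coefficients:", int((alpha != 0).sum()))
```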

Keywords: Compressive sensing, Bregman iteration, generalised hinge loss, sparse, kernels, shrinkage functions.

4047 Genetic Algorithms and Kernel Matrix-based Criteria Combined Approach to Perform Feature and Model Selection for Support Vector Machines

Authors: A. Perolini

Abstract:

Feature and model selection are at the center of attention of many research efforts because of their impact on classifiers' performance. Both selections are usually performed separately, but recent developments suggest using a combined GA-SVM approach to perform them simultaneously. This approach improves the performance of the classifier by identifying the best subset of variables and the optimal parameter values. Although GA-SVM is an effective method, it is computationally expensive, thus a rougher method can be considered. This paper investigates a joint approach of Genetic Algorithms and kernel matrix criteria to perform feature and model selection simultaneously for SVM classification problems. The purpose of this research is to improve the classification performance of SVM through an efficient approach, the Kernel Matrix Genetic Algorithm method (KMGA).
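
One well-known kernel matrix criterion that can stand in for expensive SVM training inside such a search is kernel-target alignment; the sketch below scores a few hand-picked (feature subset, gamma) candidates with it. The GA loop is omitted, and using alignment as the criterion is an assumption for illustration, since the paper may use a different kernel matrix measure.

```python
import numpy as np

def rbf_gram(X, gamma):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

def kernel_target_alignment(K, y):
    """Alignment between the Gram matrix K and the ideal kernel y y^T:
    <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)
    return float((K * Y).sum() / (np.linalg.norm(K, "fro") * np.linalg.norm(Y, "fro")))

rng = np.random.default_rng(7)
X_informative = np.vstack([rng.normal(-1.0, 1.0, (80, 2)), rng.normal(1.0, 1.0, (80, 2))])
X_noise = rng.standard_normal((160, 3))
X = np.hstack([X_informative, X_noise])        # features 0-1 carry the signal, 2-4 are noise
y = np.array([-1.0] * 80 + [1.0] * 80)

# A GA would evolve (feature subset, gamma) candidates; here a few are scored directly.
candidates = [((0, 1, 2, 3, 4), 0.1), ((0, 1), 0.1), ((2, 3, 4), 0.1), ((0, 1), 1.0)]
for feats, gamma in candidates:
    score = kernel_target_alignment(rbf_gram(X[:, list(feats)], gamma), y)
    print(f"features={feats}, gamma={gamma}: alignment={score:.3f}")
```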

Keywords: Feature and model selection, Genetic Algorithms, Support Vector Machines, kernel matrix.

4046 Face Recognition with PCA and KPCA using Elman Neural Network and SVM

Authors: Hossein Esbati, Jalil Shirazi

Abstract:

In this paper, in order to categorize the ORL database face pictures, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) methods are used together with Elman neural network and Support Vector Machine (SVM) classification methods. The Elman network, as a recurrent neural network, is proposed for modeling storage systems and is also used to review the effect of the number of PCA components on the system's categorization precision rate and on the categorization time for the database pictures. The categorization stages are conducted with various numbers of components, and the obtained results of the Elman neural network categorization and the support vector machine are compared. In the optimal case, a recognition accuracy of 97.41% is obtained.

Keywords: Face recognition, Principal Component Analysis, Kernel Principal Component Analysis, Neural network, Support Vector Machine.
