A Robust LS-SVM Regression

Authors: József Valyon, Gábor Horváth

Abstract:

In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector nonlinearly into a high-dimensional kernel space, where the solution is calculated from a linear equation set. Compared to the original SVM, which requires solving a quadratic programming task, LS-SVM greatly simplifies the computation, but unfortunately the sparseness of the standard SVM is lost. A further problem is that the LS-SVM estimate is optimal only if the training samples are corrupted by Gaussian noise. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation that achieves a sparse and robust estimate.
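For concreteness, the baseline LS-SVM regression step that the abstract refers to can be sketched in a few lines. The following is a minimal illustration, not the authors' sparse and robust formulation: it assumes an RBF kernel, uses numpy, and all function names and parameter values (gamma, sigma) are illustrative.

    import numpy as np

    def rbf_kernel(X1, X2, sigma=1.0):
        # Gaussian (RBF) kernel matrix between sample sets X1 (m x d) and X2 (n x d).
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        # Solve the standard LS-SVM dual linear system
        #   [ 0       1^T       ] [  b  ]   [ 0 ]
        #   [ 1   K + I / gamma ] [alpha] = [ y ]
        # (the paper's robust variant modifies this system; this is the baseline).
        n = X.shape[0]
        K = rbf_kernel(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]  # bias b, dual coefficients alpha

    def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
        # f(x) = sum_i alpha_i * k(x, x_i) + b
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b

    # Example: regression on a noisy sinc function.
    rng = np.random.default_rng(0)
    X = np.linspace(-5, 5, 50).reshape(-1, 1)
    y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(50)
    b, alpha = lssvm_fit(X, y)
    y_hat = lssvm_predict(X, alpha, b, X)

Because every training sample receives a nonzero alpha in this linear system, the resulting estimate is dense, and the least-squares loss is sensitive to non-Gaussian noise; these are precisely the two shortcomings the paper's sparse and robust formulation addresses.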

Keywords: Support Vector Machines, Least Squares Support Vector Machines, Regression, Sparse Approximation.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1333746

