Modeling Language for Constructing Solvers in Machine Learning: Reductionist Perspectives

Authors: Tsuyoshi Okita

Abstract:

Designing an efficient algorithm for a given specific problem has traditionally been the focus of study. However, an orthogonal alternative, called reduction, has emerged: for a given specific problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language that supports this reduction approach in order to build solvers quickly. We show three examples drawn from a wide range of learning problems. The benefit is fast prototyping of algorithms for new problems. Note that our formal modeling language is not intended to provide an efficient notation for data mining applications, but to assist a designer who develops solvers in machine learning.
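The paper's formal modeling language is not reproduced here, but the underlying reduction idea can be illustrated with a minimal, hypothetical sketch: a multiclass learning problem is converted into several binary subproblems (one-vs-rest), each solved by a plug-in binary learner, and the subproblem solutions are recombined into a multiclass solver. The function names and the toy centroid learner below are illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch of a reduction (NOT the paper's modeling language):
# multiclass classification is reduced to one binary subproblem per class.

def one_vs_rest_train(examples, labels, train_binary):
    """Reduce multiclass training to one binary subproblem per class.

    train_binary(xs, ys) is a hypothetical plug-in binary learner that
    returns a scoring function for membership in the positive class.
    """
    classes = sorted(set(labels))
    return {c: train_binary(examples, [1 if y == c else 0 for y in labels])
            for c in classes}

def one_vs_rest_predict(models, x):
    """Recombine the binary subproblem solutions into a multiclass answer."""
    return max(models, key=lambda c: models[c](x))

# Toy binary learner for 1-D inputs: score by distance to the class centroid.
def centroid_learner(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y == 1]
    mu = sum(pos) / len(pos)
    return lambda x: -abs(x - mu)  # higher score = closer to the centroid

models = one_vs_rest_train([0.1, 0.2, 1.0, 1.1, 2.0],
                           ["a", "a", "b", "b", "c"],
                           centroid_learner)
print(one_vs_rest_predict(models, 1.05))  # → b
```

Swapping `centroid_learner` for any other binary learner changes the subproblem solver without touching the reduction itself, which is the separation of concerns the reduction perspective aims at.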

Keywords: Formal language, statistical inference problem, reduction.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1084364

