Meta Random Forests
Authors: Praveen Boinee, Alessandro De Angelis, Gian Luca Foresti
Abstract:
Leo Breiman's Random Forests (RF) is a recent development in tree-based classifiers and has quickly proven to be one of the most important algorithms in the machine learning literature. It has shown robust and improved classification results on standard data sets. Ensemble learning algorithms such as AdaBoost and Bagging have been the subject of active research and have shown improved classification results on several benchmark data sets, mostly with decision trees as their base classifiers. In this paper we apply these meta-learning techniques to random forests: we build ensembles of random forests and evaluate them on standard data sets from the UCI repository. We compare the original random forest algorithm with its ensemble counterparts and discuss the results.
Keywords: Random Forests (RF), ensembles, UCI.
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1330977
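To make the experimental setup concrete, the sketch below shows one way to build the two "meta" ensembles the abstract describes: a bagged ensemble and an AdaBoost ensemble that both use a random forest as the base classifier, compared against a plain random forest by cross-validation on a UCI-style data set. This is a minimal illustration, not the authors' code: the use of scikit-learn, the iris data set, and all parameter values are assumptions made for the example, and the paper's actual data sets and settings may differ.

# Minimal sketch (assumed scikit-learn >= 1.2; older versions use
# base_estimator instead of estimator): bagging and boosting with
# random forests as base classifiers, versus a plain random forest.
from sklearn.datasets import load_iris
from sklearn.ensemble import (
    AdaBoostClassifier,
    BaggingClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import cross_val_score

# Illustrative UCI-style data set; the paper uses several UCI data sets.
X, y = load_iris(return_X_y=True)

# Baseline: the original random forest algorithm.
rf = RandomForestClassifier(n_estimators=50, random_state=0)

# Bagged random forests: each bootstrap replicate trains its own forest.
bagged_rf = BaggingClassifier(
    estimator=RandomForestClassifier(n_estimators=10, random_state=0),
    n_estimators=10,
    random_state=0,
)

# Boosted random forests: AdaBoost reweights examples between forests.
boosted_rf = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=10, random_state=0),
    n_estimators=10,
    random_state=0,
)

for name, model in [("RF", rf), ("Bagged RF", bagged_rf), ("Boosted RF", boosted_rf)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

Boosting a forest this way is only possible because the base learner accepts per-sample weights; the ensemble sizes above are kept small purely to keep the example cheap to run.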
References:
[1] Breiman, L.: Random Forests. Technical Report, University of California, Berkeley, 2001.
[2] http://www.stat.berkeley.edu/users/breiman/RandomForests/cc_home.htm#intro
[3] Breiman, L.: Looking Inside the Black Box. Wald Lecture II, Department of Statistics, University of California, Berkeley, 2002.
[4] Robnik-Šikonja, M.: Improving Random Forests. In J.-F. Boulicaut et al. (Eds.): ECML 2004, LNAI 3210, Springer, Berlin, 2004, pp. 359-370.
[5] Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Classification and regression trees. Wadsworth Inc., Belmont, California, 1984.
[6] J. Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco, 1993.
[7] Igor Kononenko. Estimating attributes: analysis and extensions of Relief. In Luc De Raedt and Francesco Bergadano, editors, Machine Learning: ECML-94, pages 171-182. Springer Verlag, Berlin, 1994.
[8] Igor Kononenko. On biases in estimating multi-valued attributes. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-95), pages 1034-1040. Morgan Kaufmann, 1995.
[9] Thomas G. Dietterich, Michael Kearns, and Yishay Mansour. Applying the weak learning framework to understand and improve C4.5. In Lorenza Saitta, editor, Machine Learning: Proceedings of the Thirteenth International Conference (ICML-96), pages 96-103. Morgan Kaufmann, San Francisco, 1996.
[10] Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Classification and regression trees. Wadsworth Inc., Belmont, California, 1984.
[11] http://www.ics.uci.edu/~mlearn/MLRepository.html
[12] J.R. Quinlan. Bagging, Boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), 1996.
[13] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. In Douglas H. Fisher, editor, Machine Learning: Proceedings of the Fourteenth International Conference (ICML-97), pages 322-330. Morgan Kaufmann, 1997.
[14] Breiman, L.: Bagging Predictors. Machine Learning (1996) 24:123-140.
[15] Carney, J., Cunningham, P.: The NeuralBAG algorithm: optimizing generalization performance in Bagged Neural Networks. In: Verleysen, M. (ed.): Proceedings of the 7th European Symposium on Artificial Neural Networks (1999), pp. 35-40.
[16] Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: Proceedings of the 13th International Conference on Machine Learning (1996) 148-156.
[17] Skurichina, M., Duin, R.P.W.: Bagging, Boosting and the Random Subspace Method for Linear Classifiers. Pattern Analysis and Applications, Vol. 5, No. 2 (2002) 121-135.