Combining Bagging and Additive Regression

Authors: Sotiris B. Kotsiantis

Abstract:

Bagging and boosting are among the most popular resampling ensemble methods; they generate and combine a diverse set of regression models using the same learning algorithm as base learner. Boosting algorithms are generally considered stronger than bagging on noise-free data, but there is strong empirical evidence that bagging is much more robust than boosting in noisy settings. For this reason, in this work we build an ensemble that averages the predictions of a bagging ensemble and a boosting ensemble, each containing 10 sub-learners. On standard benchmark datasets we compare this ensemble with plain bagging and boosting ensembles of 25 sub-learners, and the proposed ensemble achieves better accuracy.
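
The combination scheme described above can be sketched in a few lines of scikit-learn code. This is only an illustrative sketch, not the authors' implementation: the California housing dataset, the depth-3 regression tree base learner, and AdaBoost.R2 as the boosting variant are assumptions made for the example, and the parameter names shown assume scikit-learn 1.2 or later.

    # Illustrative sketch (assumed setup, not the paper's original experiment):
    # average a bagging ensemble and a boosting ensemble of 10 sub-learners each,
    # and compare against plain 25-learner bagging and boosting ensembles.
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import AdaBoostRegressor, BaggingRegressor, VotingRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    X, y = fetch_california_housing(return_X_y=True)
    base = DecisionTreeRegressor(max_depth=3)  # assumed weak base regressor

    bagging10 = BaggingRegressor(estimator=base, n_estimators=10, random_state=0)
    boosting10 = AdaBoostRegressor(estimator=base, n_estimators=10, random_state=0)

    # Equal-weight averaging of the two 10-learner ensembles' predictions.
    combined = VotingRegressor([("bagging", bagging10), ("boosting", boosting10)])

    candidates = {
        "bagging (25 learners)": BaggingRegressor(estimator=base, n_estimators=25, random_state=0),
        "boosting (25 learners)": AdaBoostRegressor(estimator=base, n_estimators=25, random_state=0),
        "averaged bagging+boosting (10+10)": combined,
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name}: mean R^2 = {scores.mean():.3f}")

The equal-weight average is the simplest way to combine the two component ensembles; the relative performance on any particular dataset will of course depend on its noise level, which is the trade-off the abstract highlights.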

Keywords: Regressors, statistical learning.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1062832

