Performance Comparison and Evaluation of AdaBoost and SoftBoost Algorithms on Generic Object Recognition

Authors: Doaa Hegazy, Joachim Denzler


SoftBoost is a recently proposed boosting algorithm that trades off the size of the achieved classification margin against generalization performance. This paper presents a performance evaluation of the SoftBoost algorithm on the generic object recognition problem, using an appearance-based generic object recognition model. The evaluation experiments are carried out on a challenging object recognition benchmark, assessing performance under different degrees of label noise and comparing against the well-known AdaBoost algorithm. The results indicate that SoftBoost is preferable when the training data is known to contain a high degree of label noise; otherwise, AdaBoost achieves better performance.

Keywords: SoftBoost algorithm, AdaBoost algorithm, Generic object recognition.
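For readers unfamiliar with the AdaBoost baseline evaluated above, the core reweighting loop can be sketched as follows. This is a minimal illustrative implementation with one-dimensional decision stumps as weak learners (the paper itself uses an appearance-based recognition model, not these stumps); all function names and the exhaustive threshold search are illustrative choices, not the authors' code.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=20):
    """Minimal AdaBoost sketch with threshold-stump weak learners.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # start with uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # exhaustively pick the stump with lowest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        err = max(best_err, 1e-10)  # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak learner
        j, thr, pol = best
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)  # misclassified samples gain weight
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(X.shape[0])
    for j, thr, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)
```

The aggressive exponential reweighting of misclassified samples is exactly what makes plain AdaBoost sensitive to label noise, which is the behavior the paper's comparison with SoftBoost probes.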



