Multi-Sensor Target Tracking Using Ensemble Learning
Authors: Bhekisipho Twala, Mantepu Masetshaba, Ramapulana Nkoana
Abstract:
Multiple classifier systems combine several individual classifiers to deliver a final classification decision. However, an increasingly debated question is whether such systems can outperform the single best classifier and, if so, which form of multiple classifier system yields the greatest benefit. In addition, multi-target tracking using multiple sensors is an important research field in mobile technologies and military applications. In this paper, several multiple classifier systems are evaluated in terms of their ability to predict a system's failure or success on multi-sensor target tracking tasks, using the Bristol Eden Project dataset. Experimental and simulation results show that the human activity identification system can fulfil the requirements of target tracking, owing to the improved sensor classification performance of multiple classifier systems, with ensembles constructed by boosting achieving the highest accuracy rates.
Keywords: Single classifier, machine learning, ensemble learning, multi-sensor target tracking.
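The boosting idea behind the best-performing ensembles above can be illustrated with a minimal sketch. The code below is not the paper's implementation and does not use the Eden Project data; it is a self-contained AdaBoost of one-dimensional decision stumps on a hypothetical toy dataset, showing how iteratively re-weighting misclassified points lets a weighted vote of weak classifiers separate data that no single stump can.

```python
import math

# Hypothetical toy 1-D dataset with labels in {-1, +1}. The label pattern
# (+ + + - - - + +) is not monotone in x, so no single threshold stump is
# perfect, but a boosted ensemble of stumps classifies it exactly.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
y = [+1, +1, +1, -1, -1, -1, +1, +1]

def stump_predict(x, thresh, polarity):
    """Weak classifier: predict `polarity` below the threshold, else its negation."""
    return polarity if x < thresh else -polarity

def best_stump(X, y, w):
    """Pick the (threshold, polarity) pair with the lowest weighted error."""
    best = None
    for thresh in [x + 0.5 for x in X]:
        for polarity in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(xi, thresh, polarity) != yi)
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(X, y, rounds):
    """Train a boosted ensemble: each round fits a stump to re-weighted data."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, thresh, polarity)
    for _ in range(rounds):
        err, thresh, polarity = best_stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0) on a perfect stump
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, thresh, polarity))
        # Re-weight: misclassified points gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thresh, polarity))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Final decision: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return +1 if score >= 0 else -1

# The single best stump misclassifies 2 of 8 points (75% accuracy),
# while the 5-round boosted ensemble fits the training set perfectly.
single_err, _, _ = best_stump(X, y, [1.0 / len(X)] * len(X))
ensemble = adaboost(X, y, rounds=5)
accuracy = sum(predict(ensemble, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On this toy data the weighted vote reaches 100% training accuracy after a few rounds even though each individual stump is only a weak learner, which is the effect the abstract credits for the boosting-based systems' higher accuracy rates.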