Human Interactive E-learning Systems using Head Posture Images

Authors: Yucel Ugurlu

Abstract:

This paper presents a novel approach to human interactive e-learning systems using head posture images. Students' face and hair information is used to detect human presence and estimate gaze direction. We then define a human-computer interaction level and evaluate this definition using ten students and seventy different posture images. The experimental results show that head posture images provide sufficient information to increase human-computer interaction in e-learning systems.
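The presence-detection and gaze-estimation steps described above can be sketched as skin-color segmentation followed by a coarse centroid-based gaze cue. The sketch below is illustrative only: the CbCr thresholds (a commonly cited skin-color box) and the centroid heuristic are assumptions, not the values or the LabVIEW implementation used in the paper.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3, 0-255) to Y, Cb, Cr planes."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-colored pixels.

    The CbCr box is a widely used illustrative threshold, not the
    paper's calibrated values."""
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def estimate_presence_and_gaze(img, presence_thresh=0.05):
    """Detect a face-like region and derive a coarse left/center/right
    gaze cue from the horizontal offset of the skin centroid."""
    mask = skin_mask(img.astype(float))
    if mask.mean() < presence_thresh:      # too few skin pixels: no one there
        return False, None
    cols = np.nonzero(mask)[1]             # column indices of skin pixels
    offset = cols.mean() / img.shape[1] - 0.5
    if offset < -0.1:
        gaze = "left"
    elif offset > 0.1:
        gaze = "right"
    else:
        gaze = "center"
    return True, gaze
```

In a full pipeline one would also use the hair region (as the abstract mentions) and a trained classifier rather than fixed thresholds; this sketch only shows the overall shape of the computation.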

Keywords: E-learning, image segmentation, human-presence, gaze-direction, human-computer interaction, LabVIEW

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1086293

