An Efficient Algorithm for Motion Detection Based Facial Expression Recognition using Optical Flow

Authors: Ahmad R. Naghsh-Nilchi, Mohammad Roshanzamir

Abstract:

One popular approach to recognizing facial expressions such as happiness, sadness, and surprise is based on the deformation of facial features. Motion vectors that describe these deformations can be obtained from the optical flow. In this approach, emotions are detected by comparing the resulting set of motion vectors with standard deformation templates caused by facial expressions. In this paper, a new method is introduced to compute the degree of likeness so that a decision can be made based on the importance of the vectors obtained from an optical flow approach. To find the vectors, the efficient optical flow method developed by Gautama and VanHulle [17] is used. The suggested method has been evaluated on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes facial expressions in 94% of the case studies. The results also show that only a small number of image frames (three frames) is sufficient to detect facial expressions with a success rate of about 83.3%. This is a significant improvement over the available methods.
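The template-comparison idea described above can be sketched in code. The similarity measure below (a cosine similarity weighted by the template's per-pixel vector magnitudes, so that strongly deforming regions count more) is an illustrative assumption standing in for the paper's importance-based likeness measure, not its exact formulation; the expression templates are toy data.

```python
import numpy as np

np.random.seed(0)

def expression_similarity(flow, template):
    """Magnitude-weighted cosine similarity between an observed optical-flow
    field and a deformation template (illustrative stand-in for the paper's
    importance-weighted likeness measure). Both arrays have shape (H, W, 2),
    holding a (dx, dy) motion vector per pixel."""
    weights = np.linalg.norm(template, axis=-1)      # importance of each template vector
    dots = np.sum(flow * template, axis=-1)          # per-pixel dot product
    norms = np.linalg.norm(flow, axis=-1) * np.linalg.norm(template, axis=-1)
    # Avoid division by zero where either vector vanishes.
    cos = np.divide(dots, norms, out=np.zeros_like(dots), where=norms > 1e-9)
    return np.sum(weights * cos) / max(weights.sum(), 1e-9)

def classify(flow, templates):
    """Return the expression label whose deformation template best matches."""
    return max(templates, key=lambda name: expression_similarity(flow, templates[name]))

# Toy templates: 'surprise' moves the brow region upward, 'happiness' the
# mouth region sideways. The observed flow is a noisy surprise pattern.
H, W = 8, 8
surprise = np.zeros((H, W, 2)); surprise[:3, :, 1] = -1.0
happiness = np.zeros((H, W, 2)); happiness[5:, :, 0] = 1.0
observed = surprise + 0.1 * np.random.randn(H, W, 2)
print(classify(observed, {"surprise": surprise, "happiness": happiness}))
```

In practice the flow field would come from the phase-based estimator of [17] and the templates from averaged training sequences; the weighting by template magnitude is one simple way to emphasize the more informative vectors.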

Keywords: Facial expression, Facial features, Optical flow, Motion vectors.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1063056


References:


[1] C. Darwin, "The Expression of the Emotions in Man and Animals," John Murray, 1872, reprinted by University of Chicago Press, 1965.
[2] A.W. Young and H.D. Ellis (Eds.), "Handbook of Research on Face Processing," Elsevier Science Publishers, 1989.
[3] Cohn-Kanade AU-Coded Facial Expression Database. Available: http://vasc.ri.cmu.edu/idb/html/face/facial_expression/
[4] G.J. Edwards, T.F. Cootes, and C.J. Taylor, "Face Recognition Using Active Appearance Models," Proc. European Conf. Computer Vision, vol. 2, pp. 581-595, 1998.
[5] H. Hong, H. Neven, and C. von der Malsburg, "Online Facial Expression Recognition Based on Personalized Galleries," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 354-359, 1998.
[6] M.J. Black and Y. Yacoob, "Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion," Int'l J. Computer Vision, vol. 25, no. 1, pp. 23-48, 1997.
[7] J.F. Cohn, A.J. Zlochower, J.J. Lien, and T. Kanade, "Feature-Point Tracking by Optical Flow Discriminates Subtle Differences in Facial Expression," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 396-401, 1998.
[8] J.N. Bassili, "Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face," Journal of Personality and Social Psychology, vol. 37, pp. 2049-2058, 1979.
[9] Y. Yacoob and L. Davis, "Recognizing human facial expressions from long image sequences using optical flow," IEEE Trans. Pattern Anal. Machine Intell., vol. 16, no. 6, pp. 636-642, 1994.
[10] M. Bartlett, P. Viola, T. Sejnowski, L. Larsen, J. Hager, and P. Ekman, "Classifying facial action," in D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, 1996.
[11] T. Otsuka and J. Ohya, "Recognizing multiple persons' facial expressions using HMM based on automatic extraction of significant frames from image sequence," in Proc. Int. Conf. on Image Processing, pp. 546-549, 1997.
[12] X. Chen and T. Huang, "Facial expression recognition: A clustering-based approach," Pattern Recognition Letters, Elsevier Science, pp. 1295-1302, 2003.
[13] P. Ekman, W. Friesen, "Unmasking the face," Prentice-Hall, 1975.
[14] Y. Zhang, Q. Ji, "Facial Expression Understanding in Image Sequences Using Dynamic and Active Visual Information Fusion," Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), 2003.
[15] C. Morimoto, D. Koons, A. Amir, and M. Flickner, "Frame-rate pupil detector and gaze tracker," in Proc. of the IEEE ICCV99 Frame-Rate Workshop, Kerkyra, Greece, Sep. 1999.
[16] Z. Zhu, Q. Ji, K. Fujimura, and K. Lee, "Combining Kalman filtering and mean shift for real time eye tracking under active IR illumination," in Proc. Int'l Conf. Pattern Recognition, Aug. 2002.
[17] T. Gautama and M.M. VanHulle, "A phase-based approach to the estimation of the optical flow field using spatial filtering," IEEE Trans. Neural Networks, vol. 13, no. 5, September 2002.
[18] J.L. Barron, D.J. Fleet, and S. Beauchemin, "Performance of optical flow techniques," International Journal of Computer Vision, vol. 12, no. 1, pp. 43-77, 1994.
[19] D.J. Fleet and A.D. Jepson, "Computation of Component Image Velocity from Local Phase Information," Int. J. Comput. Vision, vol. 5, no. 1, pp. 77-104, 1990.
[20] B.D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," in Proc. 7th International Joint Conference on Artificial Intelligence (IJCAI), pp. 674-679, 1981.