Video-Based Face Recognition Based On State-Space Model

Authors: Cheng-Chieh Chiang, Yi-Chia Chan, Greg C. Lee

Abstract:

This paper proposes a video-based framework for face recognition that identifies which faces appear in a video sequence. Our basic idea resembles a tracking task: we track a set of candidate identities over time according to the visual features of the face images observed in the video frames. We therefore formulate video-based face recognition with a state-space model, dividing the problem into two parts: a likelihood measure and a transition measure. The likelihood measure recognizes whose face is currently observed in a video frame, for which two-dimensional linear discriminant analysis (2DLDA) is employed. The transition measure estimates the probability of moving from an incorrect recognition at the previous stage to the correct person at the current stage. Moreover, extra nodes associated with head nodes are incorporated into the proposed state-space model. Experimental results are provided to demonstrate the robustness and efficiency of the proposed approach.
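The two measures above combine naturally in a recursive Bayesian filter over person identities: at each frame, the belief is propagated by the transition probabilities and reweighted by the observation likelihood. The sketch below illustrates this recursion under simplifying assumptions; the Gaussian-style likelihood, the per-person template vectors, and all function names are illustrative choices, not the authors' exact 2DLDA-based formulation.

```python
import numpy as np

def recognize_sequence(frames, templates, transition, prior):
    """Forward filtering of a belief over person identities.

    frames     : list of feature vectors, one per video frame
    templates  : (n_persons, d) gallery feature vectors, one per person
    transition : (n_persons, n_persons) row-stochastic transition matrix,
                 transition[i, j] = P(identity j at t | identity i at t-1)
    prior      : (n_persons,) initial belief over identities
    """
    belief = np.asarray(prior, dtype=float)
    for x in frames:
        # Likelihood: similarity of the observed face to each person's
        # template (a stand-in for the paper's 2DLDA-based measure).
        dist = np.linalg.norm(templates - x, axis=1)
        likelihood = np.exp(-0.5 * dist ** 2)
        # Predict with the transition model, update with the likelihood,
        # then renormalise so the belief stays a probability distribution.
        belief = likelihood * (transition.T @ belief)
        belief /= belief.sum()
    return belief
```

The recognized identity after the last frame is then `np.argmax(belief)`; because the transition term smooths over frames, a single badly recognized frame is less likely to flip the final decision than in frame-by-frame classification.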

Keywords: 2DLDA, face recognition, state-space model, likelihood measure, transition measure.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1088632

