Gait Biometric for Person Re-Identification

Authors: Lavanya Srinivasan

Abstract:

Biometric identification recognizes a person from unique traits such as fingerprints, iris, ear shape, and voice, all of which require the subject's cooperation and often physical contact. Gait biometrics instead identify a person by extracting features of their movement, and their main advantage is that gait can be captured at a distance without any physical contact. In this work, gait biometrics are used for person re-identification. Each subject walking naturally is compared with the same subject walking with a bag, a coat, and a case, recorded using long-wave infrared, short-wave infrared, medium-wave infrared, and visible cameras, in both rural and urban environments. The pre-processing pipeline comprises person detection with You Only Look Once, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The moving features extracted from the Gait Entropy Image are reduced in dimensionality with Principal Component Analysis and recognized using different classifiers. The comparative results show that Linear Discriminant Analysis outperforms the other classifiers, reaching 95.8% for the visible camera on the rural dataset and 94.8% for the long-wave infrared camera on the urban dataset.
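
As a rough sketch of the pipeline just described, the Python snippet below assumes aligned binary silhouettes are already available (i.e., after YOLO-based person detection, background subtraction, and silhouette extraction). The helper names, the 50-component PCA setting, and the use of scikit-learn are illustrative assumptions rather than the authors' implementation: it averages the silhouettes of a gait cycle into a per-pixel foreground probability, takes the per-pixel Shannon entropy to form the Gait Entropy Image, reduces the flattened images with PCA, and classifies them with Linear Discriminant Analysis.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gait_entropy_image(silhouettes):
    """Gait Entropy Image from aligned binary silhouettes of one gait cycle.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    The silhouettes are averaged into a per-pixel foreground probability,
    then the per-pixel Shannon entropy of that probability is computed.
    """
    p = np.clip(silhouettes.mean(axis=0), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))    # (H, W) entropy map

def fit_pipeline(geni_images, labels, n_components=50):
    """Reduce flattened Gait Entropy Images with PCA, then fit an LDA classifier.

    n_components=50 is an illustrative choice; it must not exceed the number
    of training samples or the number of pixels per image.
    """
    X = np.asarray([g.ravel() for g in geni_images])
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

def predict_identity(pca, lda, geni_image):
    """Re-identify a subject from a single probe Gait Entropy Image."""
    return lda.predict(pca.transform(geni_image.ravel()[None, :]))[0]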

Keywords: biometric, gait, silhouettes, You Only Look Once

