Human Action Recognition System Based on Silhouette

Authors: S. Maheswari, P. Arockia Jansi Rani

Abstract:

Human actions are recognized directly from video sequences. The objective of this work is to recognize various human actions such as run, jump and walk. Human action recognition requires some prior knowledge about the actions, namely motion estimation and foreground/background estimation. A region of interest (ROI) is extracted to identify the human in each frame. An optical flow technique is then used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. Experiments on the Weizmann database show that the proposed method achieves high accuracy.

Keywords: Background subtraction, human silhouette, optical flow, classification.
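
The abstract describes a three-stage pipeline: foreground/background estimation to obtain the human silhouette (ROI), optical flow to extract motion vectors within it, and similarity-measure-based classification. The sketch below is only an illustration of that kind of pipeline, not the authors' implementation: it uses OpenCV's MOG2 background subtractor and Farneback dense optical flow as stand-ins for the paper's background-subtraction and Horn-Schunck steps, and the function names, histogram bin count and nearest-neighbour distance are assumptions made for the example.

```python
# Illustrative sketch (assumed design, not the paper's code):
# silhouette mask -> optical flow inside the mask -> flow-orientation
# histogram per video -> nearest-neighbour classification.
import cv2
import numpy as np

N_BINS = 8  # assumed number of flow-orientation bins


def action_descriptor(video_path):
    """Accumulate a normalised histogram of optical-flow orientations
    (weighted by magnitude) inside the foreground silhouette."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_gray = None
    hist = np.zeros(N_BINS, dtype=np.float64)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Foreground/background estimation -> binary human silhouette (ROI).
        mask = bg.apply(frame) > 127

        if prev_gray is not None and mask.any():
            # Dense optical flow between consecutive frames
            # (Farneback here as a stand-in for Horn-Schunck).
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

            # Keep only the motion vectors that fall inside the silhouette.
            h, _ = np.histogram(ang[mask], bins=N_BINS,
                                range=(0, 2 * np.pi), weights=mag[mask])
            hist += h
        prev_gray = gray

    cap.release()
    return hist / (np.linalg.norm(hist) + 1e-9)


def classify(test_desc, train_descs, train_labels):
    """Similarity-measure classification: nearest neighbour by
    Euclidean distance between video descriptors."""
    dists = [np.linalg.norm(test_desc - d) for d in train_descs]
    return train_labels[int(np.argmin(dists))]
```

In use, each labelled training clip (run, jump, walk, ...) would be converted once into a descriptor with action_descriptor, and a test clip would be assigned the label of its most similar training descriptor via classify.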

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1125521

