Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize hand gestures. Detection and recognition are concepts of Artificial Intelligence (AI), with applications in Human-Computer Interaction (HCI), expert systems (ES), and related fields. One such application is sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech impairments, and communication barriers arise when these communities interact with others. This research aims to build a hand gesture recognition system for sign language interpretation in Lesotho's Sesotho and English languages, helping to bridge the communication gap faced by these communities. The system comprises several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection, the process of identifying an object, is performed with Haar cascade classifiers accelerated by Canny pruning. Canny pruning applies the Canny edge detector, an optimal edge detection algorithm, to discard image regions containing too few edges before the cascade is evaluated. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid of the hand region to assist detection. Recognition, the process of gesture classification, is performed by template matching on each hand gesture in real time. The system was tested in a series of experiments. The results show that time, distance, and lighting affect the detection rate and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera, and under the lighting conditions considered, the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system for sign language interpretation.
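For illustration, a minimal EmguCV (C#) sketch of the cascade detection stage follows. It is not the authors' code: it assumes EmguCV 3.x/4.x and a hypothetical trained hand cascade file, hand_cascade.xml. Legacy OpenCV exposed Canny pruning through the CV_HAAR_DO_CANNY_PRUNING flag; modern wrappers apply this pruning internally rather than as a public option, so the explicit Canny call below only illustrates the edge map the pruning relies on.

```csharp
// Minimal sketch, not the paper's implementation. Assumes EmguCV 3.x/4.x and
// a hypothetical trained hand cascade file "hand_cascade.xml".
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

class HandDetector
{
    static void Main()
    {
        // Load a frame and convert to grayscale for the cascade.
        using (Mat frame = CvInvoke.Imread("frame.jpg", ImreadModes.Color))
        using (Mat gray = new Mat())
        using (var cascade = new CascadeClassifier("hand_cascade.xml"))
        {
            CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
            CvInvoke.EqualizeHist(gray, gray);

            // Canny pruning: legacy OpenCV's CV_HAAR_DO_CANNY_PRUNING ran a
            // Canny edge pass and skipped windows with too few edges. Modern
            // wrappers handle this internally; the explicit pass here only
            // shows the edge map that drives the pruning decision.
            using (Mat edges = new Mat())
                CvInvoke.Canny(gray, edges, 100, 200);

            Rectangle[] hands = cascade.DetectMultiScale(
                gray,
                scaleFactor: 1.1,   // image pyramid step between scales
                minNeighbors: 4,    // overlapping hits required to accept
                minSize: new Size(40, 40));

            foreach (Rectangle r in hands)
                Console.WriteLine($"Hand candidate at {r}");
        }
    }
}
```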
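The skin detection stage can be sketched in the same spirit, under stated assumptions: a YCrCb threshold rule with common literature bounds (Cr in [133, 173], Cb in [77, 127]) stands in for the paper's skin model, the background subtraction step is elided, and the convex hull and centroid are computed from the largest skin contour.

```csharp
// Minimal sketch of the skin-segmentation stage. The Cr/Cb bounds are common
// literature values, not the paper's; background subtraction is omitted.
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static class SkinStage
{
    // Computes the convex hull and centroid of the largest skin blob.
    public static void Analyze(Mat frame)
    {
        using (Mat ycrcb = new Mat())
        using (Mat mask = new Mat())
        using (var contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.CvtColor(frame, ycrcb, ColorConversion.Bgr2YCrCb);

            // Threshold skin pixels: Y unconstrained, Cr in [133,173], Cb in [77,127].
            CvInvoke.InRange(ycrcb,
                new ScalarArray(new MCvScalar(0, 133, 77)),
                new ScalarArray(new MCvScalar(255, 173, 127)),
                mask);

            CvInvoke.FindContours(mask, contours, null,
                RetrType.External, ChainApproxMethod.ChainApproxSimple);

            // Keep the largest contour as the hand candidate.
            int best = -1; double bestArea = 0;
            for (int i = 0; i < contours.Size; i++)
            {
                double a = CvInvoke.ContourArea(contours[i]);
                if (a > bestArea) { bestArea = a; best = i; }
            }
            if (best < 0) return;

            using (var hull = new VectorOfPoint())
            {
                CvInvoke.ConvexHull(contours[best], hull);

                // Centroid from image moments: (M10/M00, M01/M00).
                MCvMoments m = CvInvoke.Moments(contours[best]);
                var centroid = new Point((int)(m.M10 / m.M00), (int)(m.M01 / m.M00));
                Console.WriteLine($"Hull points: {hull.Size}, centroid: {centroid}");
            }
        }
    }
}
```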
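Finally, a hedged sketch of the template matching classifier: each stored sign template is scored against the detected hand region with normalized cross-correlation, and the highest-scoring template wins. The file names below are hypothetical placeholders.

```csharp
// Minimal sketch of template-matching classification, assuming one grayscale
// template image per sign; file names are hypothetical.
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

static class SignMatcher
{
    // Scores one template against the detected hand region and returns the
    // normalized correlation score.
    public static double Score(Mat handRegion, Mat template)
    {
        using (Mat result = new Mat())
        {
            // Normalized cross-correlation tolerates uniform brightness changes.
            CvInvoke.MatchTemplate(handRegion, template, result,
                TemplateMatchingType.CcoeffNormed);

            double min = 0, max = 0;
            Point minLoc = new Point(), maxLoc = new Point();
            CvInvoke.MinMaxLoc(result, ref min, ref max, ref minLoc, ref maxLoc);
            return max;   // in [-1, 1]; closer to 1 means a better match
        }
    }

    static void Main()
    {
        using (Mat hand = CvInvoke.Imread("hand_roi.png", ImreadModes.Grayscale))
        using (Mat templA = CvInvoke.Imread("sign_A.png", ImreadModes.Grayscale))
        {
            double s = Score(hand, templA);
            Console.WriteLine($"Sign 'A' score: {s:F3}");
        }
    }
}
```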
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1316863
[18] M. Ghaziasgar, J. Connan, and A. B. Bagula, “Enhanced adaptive skin detection with contextual tracking feedback,” in Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), 2016. IEEE, 2016, pp. 1–6.