Hand Gesture Recognition: Sign to Voice System (S2V)

Authors: Oi Mean Foong, Tan Jung Low, Satrio Wibowo

Abstract:

Hand gestures are among the typical methods used in sign language for non-verbal communication. They are most commonly used by people with hearing or speech impairments to communicate among themselves or with hearing people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for end users. This paper presents a system prototype that automatically recognizes sign language to help hearing people communicate more effectively with the hearing or speech impaired. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
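The classification stage described in the abstract can be sketched as a small feed-forward network trained by backpropagation. Everything concrete below is an assumption for illustration only: the 64-dimensional features, the hidden-layer size, the learning rate, and the synthetic "gesture" data are hypothetical stand-ins, since the paper's actual inputs are hand gestures captured from video.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FeedForwardNet:
    """One-hidden-layer feed-forward network trained by backpropagation
    (squared-error loss, sigmoid units). A sketch, not the paper's code."""

    def __init__(self, n_in, n_hidden, n_out, lr=1.0):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)       # hidden activations
        self.o = sigmoid(self.h @ self.W2 + self.b2)  # output activations
        return self.o

    def train_step(self, X, Y):
        n = X.shape[0]
        o = self.forward(X)
        d_o = (o - Y) * o * (1.0 - o)                 # delta rule at outputs
        d_h = (d_o @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * self.h.T @ d_o / n       # mean-gradient updates
        self.b2 -= self.lr * d_o.mean(axis=0)
        self.W1 -= self.lr * X.T @ d_h / n
        self.b1 -= self.lr * d_h.mean(axis=0)
        return float(np.mean((o - Y) ** 2))

    def predict(self, X):
        return np.argmax(self.forward(X), axis=1)

# Toy stand-in for gesture features: each of 3 sign classes is a noisy
# copy of a random 64-dimensional prototype vector (hypothetical data).
n_classes, n_features = 3, 64
prototypes = rng.normal(0.0, 1.0, (n_classes, n_features))
labels = np.repeat(np.arange(n_classes), 30)
X = prototypes[labels] + rng.normal(0.0, 0.2, (len(labels), n_features))
Y = np.eye(n_classes)[labels]

net = FeedForwardNet(n_features, 16, n_classes, lr=1.0)
losses = [net.train_step(X, Y) for _ in range(300)]
accuracy = float(np.mean(net.predict(X) == labels))
```

On well-separated synthetic classes like these, the loss falls steadily and the network classifies the training gestures reliably; the paper's real task additionally involves segmenting two-sequence signs from video before classification.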

Keywords: Hand gesture detection, neural network, sign language, sequence detection.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1072317

