A Motion Dictionary for Real-Time Recognition of the Sign Language Alphabet Using Dynamic Time Warping and an Artificial Neural Network
Abstract: Computational recognition of sign languages aims to promote greater social and digital inclusion of deaf people through computer interpretation of their language. This article presents a model for recognizing two global parameters of sign languages: hand configuration and hand movement. Hand motion is captured with infrared technology, and the hand joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron (MLP) neural network classifies hand configurations, while Dynamic Time Warping (DTW) recognizes hand motion. In addition to the recognition method, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this approach can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as the case study. The model presented in this paper achieved a recognition rate of 80.4%.
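The abstract pairs an MLP classifier for static hand configurations with DTW matching of motion trajectories against a motion dictionary. A minimal sketch of that DTW matching step is shown below; the function names, the 3-D point format, and the nearest-template classification rule are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of trajectory matching with Dynamic Time Warping (DTW).
# Assumes each captured motion is a sequence of (x, y, z) joint positions;
# the "motion dictionary" maps sign labels to template trajectories.
import math

def dtw_distance(seq_a, seq_b):
    """DTW distance between two trajectories of 3-D points."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal accumulated cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # Euclidean step cost
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]

def classify_motion(motion, dictionary):
    """Return the sign whose dictionary template is DTW-closest to `motion`."""
    return min(dictionary, key=lambda sign: dtw_distance(motion, dictionary[sign]))
```

Because DTW warps the time axis, a sign performed faster or slower than its template still aligns with a low cost, which is what makes it suitable for real-time motion comparison here.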
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.2363175