An Exploratory Survey Questionnaire to Understand What Emotions Are Important and Difficult to Communicate for People with Dysarthria and Their Methodology of Communicating

Authors: Lubna Alhinti, Heidi Christensen, Stuart Cunningham

Abstract:

People with speech disorders may rely on augmentative and alternative communication (AAC) technologies to help them communicate. However, the limitations of current AAC technologies act as barriers to their optimal use in daily communication settings. The ability to communicate effectively depends on a number of factors that are not limited to the intelligibility of the spoken words. In fact, non-verbal cues play a critical role in the correct comprehension of messages, and having to rely on verbal communication alone, as is the case with current AAC technology, may contribute to problems in communication. This is especially true for people’s ability to express their feelings and emotions, which are communicated in large part through non-verbal cues. This paper focuses on understanding more about the non-verbal communication ability of people with dysarthria, with the overarching aim of this research being to improve AAC technology by allowing people with dysarthria to better communicate emotions. Preliminary survey results are presented that give an understanding of how people with dysarthria convey emotions, which emotions are important for them to get across, which emotions are difficult for them to convey, and whether there is a difference in communicating emotions when speaking to familiar versus unfamiliar people.

Keywords: Alternative and augmentative communication technology, dysarthria, speech emotion recognition, VIVOCA.

