Online Collaborative Learning System Using Speech Technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32797

Authors: Sid-Ahmed. Selouani, Tang-Ho Lê, Chadia Moghrabi, Benoit Lanteigne, Jean Roy

Abstract:

This paper presents the Learn IN Context (LINC) system, a Web-based learning tool designed for, and currently used in, institutional courses delivered in mixed-mode learning, which combines face-to-face and distance approaches to education. LINC supports both collaborative and competitive learning. To provide both learners and tutors with a more natural way to interact with e-learning applications, a conversational interface has been included in LINC. Accordingly, the components and essential features of LINC+, the voice-enhanced version of LINC, are described. We report evaluation experiments of LINC/LINC+ in the real-use context of a computer programming course taught at the Université de Moncton (Canada). The findings show that when the learning material is delivered as a collaborative, voice-enabled presentation, the majority of learners are satisfied with this new medium and report that it does not negatively affect their cognitive load.
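The abstract does not describe how LINC+'s conversational interface is implemented. As a hedged illustration only, a voice-enabled learning interface of this kind typically routes recognized utterances to application actions, with the speech recognizer's output feeding a command dispatcher and responses rendered back through speech synthesis. A minimal sketch of such a dispatcher (all command names and actions are hypothetical, not taken from the paper):

```python
# Hypothetical sketch of a voice-command dispatcher for a voice-enabled
# e-learning interface. The commands and action names are illustrative;
# the actual LINC+ architecture is not specified in this abstract.

def dispatch(utterance: str) -> str:
    """Map a recognized utterance to a navigation action name."""
    commands = {
        "next slide": "advance_presentation",
        "previous slide": "rewind_presentation",
        "repeat": "replay_tts_segment",
        "open chat": "open_collaboration_panel",
    }
    text = utterance.strip().lower()
    # When no command matches, return a clarification action that a
    # text-to-speech engine would speak back to the learner.
    return commands.get(text, "ask_clarification")

print(dispatch("Next slide"))  # advance_presentation
print(dispatch("hello"))       # ask_clarification
```

In a real deployment the `utterance` string would come from an automatic speech recognizer, and the returned action would drive both the presentation and a synthesized spoken response.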

Keywords: E-learning, Knowledge Network, Speech recognition, Speech synthesis.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1330273

