Search results for: mouthing
Paper Count: 2

2 Mouthing Patterns in Indian Sign Language

Authors: Neha Kulshreshtha

Abstract:

This paper examines the patterns of 'mouthing', a non-manual marker, and its distribution in Indian Sign Language (ISL). Linguistic research on ISL is an emerging field in which much remains to be done; the limited work to date focuses on the structure of ISL in terms of physical or manual markers, so a study of mouthing patterns offers insight into the distribution of this particular non-manual marker. Data were collected from native ISL users through techniques that capture natural signing, such as storytelling and informal conversations. The aim of the study is to identify the various situations in which mouthing is used. Mouthing is not always an articulation of the word as spoken in the surrounding local languages, so the paper also asks whether mouthing patterns in ISL are influenced by a local language, are independent of any such influence, or show both tendencies. Mouthing patterns have been studied in many sign languages, and an investigation of ISL will reveal whether it patterns with those languages.

Keywords: Indian sign language, mouthing, non-manual marker, spoken language influence

Procedia PDF Downloads 214
1 Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language

Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay

Abstract:

Facial expressions are important components of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language remain scarce. In this manuscript, we introduce an annotated, sequenced facial expression dataset in the context of sign language, comprising over 3,000 facial images extracted from the daily news and weather forecasts of the public TV station PHOENIX. Unlike the majority of existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with varying head poses, orientations, and movements. In addition, in the majority of images the signers are mouthing words, which makes the data more challenging. To annotate the dataset, we consider primary, secondary, and tertiary dyads of seven basic emotions: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also include a "None" class for images whose facial expression cannot be described by any of these emotions. Although we present FePh as a facial expression dataset of signers in sign language, it has wider applications in gesture recognition and Human-Computer Interaction (HCI) systems.

Keywords: annotated facial expression dataset, gesture recognition, sequenced facial expression dataset, sign language recognition

Procedia PDF Downloads 125
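
Based solely on the label scheme described in the FePh abstract above, the following is a minimal, hypothetical Python sketch of how one frame's annotation might be represented and validated. The emotion names and the "None" class come from the abstract; the function name, the frame identifier, and the reading of "primary, secondary, and tertiary dyads" as small unordered label sets are illustrative assumptions, not the authors' release code.

```python
# Minimal sketch (not the authors' code) of FePh-style labels, based on
# the abstract above: seven basic emotions plus a standalone "None" class.
from typing import FrozenSet

BASIC_EMOTIONS: FrozenSet[str] = frozenset(
    {"sad", "surprise", "fear", "angry", "neutral", "disgust", "happy"}
)
NONE_CLASS = "None"  # used when no basic emotion fits the face

def validate_annotation(labels: FrozenSet[str]) -> None:
    """Accept one frame's labels: the "None" class alone, a single basic
    emotion, or a dyad (pair) of basic emotions. Treating dyads as
    unordered pairs is an assumption made for this sketch."""
    if labels == frozenset({NONE_CLASS}):
        return  # "None": no basic emotion describes the expression
    if not 1 <= len(labels) <= 2:
        raise ValueError("expected a single emotion or a dyad of two")
    unknown = labels - BASIC_EMOTIONS
    if unknown:
        raise ValueError(f"unknown emotion labels: {sorted(unknown)}")

# Hypothetical usage with an invented frame identifier:
frame_labels = {"phoenix_frame_00042": frozenset({"happy", "surprise"})}
for frame_id, labels in frame_labels.items():
    validate_annotation(labels)  # raises on malformed annotations
    print(frame_id, sorted(labels))
```

A set-based representation fits the abstract's multi-label description: a dyad such as {"happy", "surprise"} stays one annotation rather than two competing single labels.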