Search results for: children speech
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3717

3717 A Two-Stage Adaptation towards Automatic Speech Recognition System for Malay-Speaking Children

Authors: Mumtaz Begum Mustafa, Siti Salwah Salim, Feizal Dani Rahman

Abstract:

Automatic Speech Recognition (ASR) systems have recently been used to assist children in language acquisition, as they can detect and decode the human speech signal. Despite the benefits offered by ASR, there is a lack of ASR systems for Malay-speaking children, in part because no continuous speech database exists for these target users. Although cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as very few children's speech databases are available to serve as a source model. In this research, we propose a two-stage adaptation for developing an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (first stage) and cross-age adaptation (second stage). In the first stage, a well-known speech database that is phonetically rich and balanced is adapted to a medium-sized database of Malay adult speech using supervised MLLR. The second stage uses the acoustic model generated by the first adaptation, with a small database of the target users as the target data. We measured the performance of the proposed technique using word error rate and compared it with a conventional benchmark adaptation. The two-stage adaptation proposed in this research achieves better recognition accuracy than the benchmark adaptation in recognizing children's speech.
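The word error rate used as the evaluation metric in this abstract is standard word-level edit-distance arithmetic. A minimal illustrative sketch (the example sentences are hypothetical, not drawn from the study's database):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference
    length, computed via word-level Levenshtein distance (dynamic programming)."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

A system transcribing "a b c d" as "a x c" makes one substitution and one deletion against four reference words, giving a WER of 0.5.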

Keywords: Automatic Speech Recognition System, children speech, adaptation, Malay

Procedia PDF Downloads 357
3716 TeleMe Speech Booster: Web-Based Speech Therapy and Training Program for Children with Articulation Disorders

Authors: C. Treerattanaphan, P. Boonpramuk, P. Singla

Abstract:

Frequent, continuous speech training has proven to be a necessary part of a successful speech therapy process, but the constraints of traveling time and work commitments are key obstacles, especially for individuals living in remote areas or for dependent children with working parents. To help ameliorate speech difficulties with ample guidance from speech therapists, a website has been developed that supports speech therapy and training for people with articulation disorders in standard Thai. This web-based program can record the speech training exercises of each trainee. The recordings are stored in a database so the speech therapist can investigate, evaluate, compare, and track every trainee's progress in detail. Trainees can request live discussions via video conference call when needed. Communication through this web-based program facilitates therapy and reduces training time compared with walk-in training or appointments. It also allows people with articulation disorders to practice speech lessons whenever and wherever is convenient for them, which can lead to a more regular training process.

Keywords: web-based remote training program, Thai speech therapy, articulation disorders, speech booster

Procedia PDF Downloads 342
3715 Recognition by the Voice and Speech Features of the Emotional State of Children by Adults and Automatically

Authors: Elena E. Lyakso, Olga V. Frolova, Yuri N. Matveev, Aleksey S. Grigorev, Alexander S. Nikolaev, Viktor A. Gorodnyi

Abstract:

The study of children's emotional sphere as a function of age and psychoneurological state is of great importance for the design of educational programs for children and for their social adaptation. Atypical development may be accompanied by impairments or specificities of the emotional sphere. To study how the emotional state is reflected in the voice and speech features of children, a perceptual study with adult listeners and automatic recognition experiments were conducted. Speech of children with typical development (TD), with Down syndrome (DS), and with autism spectrum disorders (ASD), aged 6-12 years, was recorded. To elicit emotional speech, model situations were created, including a dialogue between the child and the experimenter containing questions that can evoke various emotional states, and play with a standard set of toys. The questions and toys were selected taking into account the child's age, developmental characteristics, and speech skills. For the perceptual experiment, test sequences containing speech material from 30 children (TD, DS, and ASD) were created. The listeners were 100 adults (age 19.3 ± 2.3 years), whose task was to label the child's emotional state as "comfort – neutral – discomfort" while listening to the test material. Spectrographic analysis of the speech signals was conducted. For automatic recognition of the emotional state, 6594 speech files containing children's speech were prepared. Automatic recognition of the three states "comfort – neutral – discomfort" was performed using acoustic features extracted automatically with the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) and the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS). The results showed that the emotional state was recognized least reliably from the speech of TD children (comfort: 58% correct answers; discomfort: 56%). Listeners recognized discomfort better in children with ASD and DS (78% of answers) than comfort (70% and 67% for children with DS and ASD, respectively). The neutral state was recognized better from the speech of children with ASD (67%) than from the speech of children with DS (52%) or TD children (54%). Using the GeMAPSv01b feature set, the accuracy of automatic recognition of emotional states was 0.687 for children with ASD, 0.725 for children with DS, and 0.641 for TD children. Using the eGeMAPSv01b feature set, the corresponding accuracies were 0.671, 0.717, and 0.631. Different models showed similar results, with emotional states recognized better from the speech of children with DS than from the speech of children with ASD. The state of comfort was determined automatically best from the speech of TD children (precision 0.546) and children with ASD (0.523), and discomfort from children with DS (0.504). The data on how adults recognize children's emotional states from speech may be used when recruiting staff to work with children with atypical development. The automatic recognition data can be used to create alternative communication systems and automatic human-computer interfaces for social-emotional learning. Acknowledgment: This work was financially supported by the Russian Science Foundation (project 18-18-00063).
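The per-group accuracy and per-class figures reported above reduce to simple counting over label/prediction pairs. A minimal sketch of the two measures (the toy labels below are illustrative, not the study's data):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of items whose predicted state matches the true state."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_recall(y_true, y_pred):
    """Recall for each emotional state: correct / total, per true class."""
    hits, totals = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return {c: hits[c] / totals[c] for c in totals}

states_true = ["comfort", "comfort", "neutral", "discomfort"]
states_pred = ["comfort", "neutral", "neutral", "discomfort"]
```

With these four toy items, overall accuracy is 0.75 while recall for "comfort" alone is only 0.5, which is why emotion-recognition studies typically report both overall and per-state figures.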

Keywords: autism spectrum disorders, automatic recognition of speech, child’s emotional speech, Down syndrome, perceptual experiment

Procedia PDF Downloads 156
3714 Analysis of Linguistic Disfluencies in Bilingual Children’s Discourse

Authors: Sheena Christabel Pravin, M. Palanivelan

Abstract:

Speech disfluencies are common in spontaneous speech. The primary purpose of this study was to distinguish linguistic disfluencies from stuttering disfluencies in bilingual Tamil–English (TE) speaking children. The secondary purpose was to determine whether their disfluencies are mediated by native-language dominance and/or an early onset of developmental stuttering in childhood. A detailed study was carried out to identify the prosodic and acoustic features that uniquely characterize the disfluent regions of speech. This paper focuses on statistical modeling of repetitions, prolongations, pauses, and interjections in a speech corpus of bilingual spontaneous utterances, in English and Tamil, from school-going children. Two classifiers, Hidden Markov Models (HMM) and the Multilayer Perceptron (MLP), a class of feed-forward artificial neural network, were compared in the classification of disfluencies. The classifier results document the patterns of disfluency in spontaneous speech samples of school-aged children and distinguish between Children Who Stutter (CWS) and Children with Language Impairment (CLI). The ability of the models to classify the disfluencies was measured in terms of F-measure, Recall, and Precision.
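The three evaluation measures named in this abstract are defined from true-positive, false-positive, and false-negative counts for a chosen class. A minimal sketch (the CWS/CLI toy labels are illustrative only):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Precision, recall, and F-measure for one target class (e.g. 'CWS')."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F-measure: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For labels ["CWS", "CWS", "CLI", "CLI"] and predictions ["CWS", "CLI", "CWS", "CLI"], the CWS class has one true positive, one false positive, and one false negative, so all three measures equal 0.5.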

Keywords: bi-lingual, children who stutter, children with language impairment, hidden markov models, multi-layer perceptron, linguistic disfluencies, stuttering disfluencies

Procedia PDF Downloads 177
3713 Setswana Speech Rhythm Development in High-Socioeconomic Status Setswana-English Bilingual Children

Authors: Boikanyego Sebina

Abstract:

The present study investigates the effects of socioeconomic status (SES) and bilingualism on the Setswana speech rhythm of typically developing Batswana (citizen) children aged 6-7 years, born and residing in Botswana. Botswana has a diglossic Setswana/English language setting, in which English is the dominant, high-status language in educational and public contexts. Generally, children from low-SES backgrounds have lower linguistic and cognitive profiles than their age-matched peers from high-SES backgrounds. A greater understanding of these variables would allow educators to distinguish underdeveloped language skills due to impairment from those due to environmental factors, so that children can be enrolled in language development programs specific to their needs. There are 20 participants: 10 high-SES, private English-medium educated, early sequential Setswana-English bilingual children, taught full-time in English (L2) from the age of 3 years, for whom English has become dominant; and 10 low-SES children educated in public schools, for whom English is a learner language, i.e., L1 Setswana is dominant. The aim is to see whether SES and bilingualism have affected the Setswana speech rhythm of children in either group. The study primarily uses semi-spontaneous speech based on the telling of a wordless picture storybook. A questionnaire elicits the language use patterns of the children and their parents, the education level of the parents, and the school each child attends. A comparison of rhythm shows that children from high SES have lower durational variability than those from low SES, which may suggest an underdeveloped rhythm. In conclusion, the results of the present study run against the notion that children from high SES outperform those from low SES in linguistic development.
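The abstract does not name the durational-variability metric it compares. The normalized Pairwise Variability Index (nPVI) is one metric commonly used in speech-rhythm research; the sketch below assumes that choice, with hypothetical interval durations:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over successive interval
    durations (e.g. vocalic intervals, in ms). Each adjacent pair
    contributes |a - b| normalized by the pair's mean; higher values
    indicate greater durational variability."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)
```

Perfectly even timing (e.g. 100, 100, 100 ms) yields an nPVI of 0, while strong alternation between long and short intervals pushes the index upward, so a lower nPVI corresponds to the lower durational variability reported for the high-SES group.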

Keywords: bilingualism, Setswana English, socio-economic status, speech-rhythm

Procedia PDF Downloads 13
3712 Preliminary Study of the Phonological Development in Three and Four Year Old Bulgarian Children

Authors: Tsvetomira Braynova, Miglena Simonska

Abstract:

The article presents the results of research on phonological processes in three- and four-year-old children. For the purpose of the study, an author's test was developed and administered to 120 children. The study included three areas of assessment: at the word level (96 words), at the level of sentence repetition (10 sentences), and at the level of generating the child's own speech from a picture (15 pictures). The test also provides additional information about the articulation errors of the assessed children. The main purpose of the study is to analyze all phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors children make are sound substitution, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. All examined children were identified with an articulation disorder of the bilabial lambdacism type. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis shows that the more words a child can repeat in the repetition task, the more words they can be expected to produce in the sentence-generation task. The results of this study show that the word-naming task provides sufficient and representative information to assess the child's phonology.
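The correlation between repeated-speech length and generated-speech length described above is a straightforward Pearson correlation over per-child averages. A minimal sketch (the score lists are hypothetical, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-child mean utterance lengths (words)
repeated = [4, 6, 7, 9, 11]
generated = [3, 5, 8, 9, 12]
```

For these toy values, `pearson_r(repeated, generated)` is close to 1, the pattern the abstract reports: children who repeat more words also generate more.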

Keywords: assessment, phonology, articulation, speech-language development

Procedia PDF Downloads 140
3711 Childhood Apraxia of Speech and Autism: Interaction Influences and Treatment

Authors: Elad Vashdi

Abstract:

Speech deficits are commonly found among children diagnosed with autism, both in the clinical field and, more recently, in research. One of the DSM-V criteria notes a speech delay (delay in, or total lack of, the development of spoken language) but does not explain its cause. A common perception among professionals and families is that the inability to talk results from the autism itself. Autism, however, names a syndrome that merely describes a phenomenon and is defined behaviorally; since it is not yet based on a physiological gold standard, one cannot infer the nature of a deficit from the name of the syndrome. A large retrospective study (n=270), which included children with motor speech difficulties, was conducted in Israel. The study analyzed entry evaluations in a private clinic during the years 2006-2013, with the data extracted from the evaluation reports. A high percentage of the children (60%) were diagnosed with autism, demonstrating the strong relationship between autism and motor speech problems. This also supports recent research findings on the occurrence of Childhood Apraxia of Speech (CAS) among children with ASD. Only a small percentage of the participants (10%) were diagnosed with CAS, even though their verbal deficits fit well the guidelines for CAS diagnosis set by ASHA in 2007. This fact raises questions regarding the diagnostic procedure in Israel. The understanding that CAS may commonly co-occur with autism, and can have a remarkable influence on the course of early development, should guide the diagnostic procedure. CAS can explain the nature of the speech problem in some autistic children and guide treatment more accurately. Calculating a prevalence of CAS that includes the comorbidity with ASD yields new numbers and suggests treating the CAS population differently.

Keywords: childhood apraxia of speech, Autism, treatment, speech

Procedia PDF Downloads 245
3710 Complications and Outcomes of Cochlear Implantation in Children Younger than 12 Months: A Multicenter Study

Authors: Alimohamad Asghari, Ahmad Daneshi, Mohammad Farhadi, Arash Bayat, Mohammad Ajalloueyan, Marjan Mirsalehi, Mohsen Rajati, Seyed Basir Hashemi, Nader Saki, Ali Omidvari

Abstract:

Evidence suggests that cochlear implantation (CI) is a beneficial approach for improving auditory and speech skills in children with severe to profound hearing loss. However, it remains controversial whether implantation in children under 12 months is safe and effective compared with implantation in older children. The present study aimed to determine whether children's age affects surgical complications and auditory and speech development. This multicenter study enrolled 86 children who underwent CI surgery at <12 months of age (group A) and 362 children who underwent implantation between 12 and 24 months of age (group B). The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores were determined pre-implantation and at one and two years post-implantation. Four complications (overall rate: 4.65%; three minor) occurred in group A, and 12 complications (overall rate: 4.41%; nine minor) occurred in group B. We found no statistically significant difference in the complication rates between the groups (p>0.05). The mean SIR and CAP scores improved over time following CI activation in both groups; however, we did not find significant differences in CAP and SIR scores between the groups at any time point. Cochlear implantation is a safe and efficient procedure in children younger than 12 months, providing substantial auditory and speech benefits comparable to those in children implanted at 12 to 24 months of age. Furthermore, surgical complications in younger children are similar to those in children undergoing CI at an older age.
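The abstract does not state which statistical test was used to compare the two complication rates. A two-proportion z-test (normal approximation) is one standard choice; the sketch below applies it, under that assumption, to the reported counts of 4/86 and 12/362:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled,
    normal approximation). Returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value
```

With the reported counts, the p-value comes out well above 0.05, consistent with the abstract's conclusion that complication rates do not differ significantly between the age groups.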

Keywords: cochlear implant, infant, complications, outcome

Procedia PDF Downloads 69
3709 A Comparative Study on Vowel Articulation in Malayalam Speaking Children Using Cochlear Implant

Authors: Deepthy Ann Joy, N. Sreedevi

Abstract:

Hearing impairment (HI) at an early age, when identified before the onset of language development, can reduce its negative effect on the speech and language development of children. Early rehabilitation is very important for improving speech production in children with HI. Besides conventional hearing aids, cochlear implants are used in the rehabilitation of children with HI. However, delays in the acquisition of speech and language milestones persist in children with a cochlear implant (CI) and are reflected in speech sound errors, which carry the temporal and spectral characteristics of speech. Hence, acoustic analysis of speech sounds provides a better representation of speech production skills in children with CI. The present study investigated the acoustic characteristics of vowels in Malayalam-speaking children with a cochlear implant. The participants were 20 Malayalam-speaking children between four and seven years of age: an experimental group of 10 children with CI and a control group of 10 typically developing children. Acoustic analysis was carried out for five short vowels (/a/, /i/, /u/, /e/, /o/) and five long vowels (/a:/, /i:/, /u:/, /e:/, /o:/) in word-initial position. The responses were recorded and analyzed for vowel duration, the duration ratio of short and long vowels, formant frequencies (F₁ and F₂), and the Formant Centralization Ratio (FCR), computed as (F₂u+F₂a+F₁i+F₁u)/(F₂i+F₁a). Vowel durations were higher in the experimental group than in the control group for all vowels except /u/, and the short-to-long duration ratio was also higher in the experimental group except for /i/. F₁ was higher in the experimental group for all vowels, with variability observed in the F₂ values. FCR was higher in the experimental group, indicating vowel centralization. However, independent t-tests revealed no significant group difference on any parameter, suggesting that the spectral and temporal measures in children with CI moved toward the normal range. This result emphasizes the significance of early rehabilitation in children with hearing impairment. The role of rehabilitation-related aspects, which can be incorporated clinically to improve speech therapy services for children with CI, is also discussed in detail.
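The FCR formula quoted in this abstract can be computed directly from the corner-vowel formant values. A minimal sketch, using illustrative (not measured) formant frequencies in Hz:

```python
def fcr(f1a, f1i, f1u, f2a, f2i, f2u):
    """Formant Centralization Ratio: (F2u + F2a + F1i + F1u) / (F2i + F1a).
    A more centralized (reduced) vowel space raises the numerator formants
    and lowers the denominator formants, so higher FCR = more centralization."""
    return (f2u + f2a + f1i + f1u) / (f2i + f1a)

# Illustrative well-dispersed vowel space: /a/ 800/1300, /i/ 300/2300, /u/ 350/800
dispersed = fcr(800, 300, 350, 1300, 2300, 800)
# Illustrative centralized space: formants pulled toward the middle
centralized = fcr(700, 400, 450, 1400, 1900, 1000)
```

With these toy values `centralized > dispersed`, matching the abstract's interpretation of the higher FCR found in the CI group.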

Keywords: acoustics, cochlear implant, Malayalam, vowels

Procedia PDF Downloads 108
3708 The Effect of Speech-Shaped Noise and Speaker’s Voice Quality on First-Grade Children’s Speech Perception and Listening Comprehension

Authors: I. Schiller, D. Morsomme, A. Remacle

Abstract:

Children’s ability to process spoken language develops until the late teenage years. At school, where efficient spoken-language processing is key to academic achievement, listening conditions are often unfavorable: high background noise and a poor teacher's voice are typical sources of interference. These factors can be assumed to affect primary school children in particular, because their language and literacy skills are still developing. While it is generally accepted that background noise and an impaired voice impede spoken-language processing, there is an increasing need to analyze their impact within specific linguistic areas. Against this background, the aim of the study was to investigate the effect of speech-shaped noise and an imitated dysphonic voice on first-grade primary school children's speech perception and sentence comprehension. Via headphones, 5- to 6-year-old children recruited within the French-speaking community of Belgium performed a minimal-pair discrimination task and a sentence-picture matching task. Stimuli were presented randomly in four experimental conditions: (1) normal voice / no noise, (2) normal voice / noise, (3) impaired voice / no noise, and (4) impaired voice / noise. The primary outcome measure was task score: how did performance vary with listening condition? Preliminary results for speech perception and sentence comprehension will be presented and carefully interpreted in the light of past findings. This study supports our understanding of children's language processing skills under adverse conditions, and the results shall serve as a starting point for probing new measures to optimize children's learning environments.

Keywords: impaired voice, sentence comprehension, speech perception, speech-shaped noise, spoken language processing

Procedia PDF Downloads 157
3707 Auditory and Language Skills Development after Cochlear Implantation in Children with Multiple Disabilities

Authors: Tamer Mesallam, Medhat Yousef, Ayna Almasaad

Abstract:

BACKGROUND: Cochlear implantation (CI) in children with additional disabilities can be a fundamental and supportive intervention. Although CI may have positive impacts on children with multiple disabilities, such as better communication skills, development, and quality of life, the families of these children describe the post-implant habilitation effort as a burden. OBJECTIVE: To investigate the outcomes of CI in children with different co-disabilities, using the Meaningful Auditory Integration Scale (MAIS) and the Meaningful Use of Speech Scale (MUSS) as outcome measures. METHODS: The study sample comprised 25 hearing-impaired children with a co-disability who received a cochlear implant. An age- and gender-matched control group of 25 cochlear-implanted children without any other disability was also included. The participants' auditory skills and speech outcomes were assessed using the MAIS and MUSS tests. RESULTS: There was a statistically significant difference between the two groups on the outcome measures; however, the outcomes of some multiple-disability subgroups were comparable to the control group. Around 40% of the participants with co-disabilities advanced in their method of communication from a behavioral to an oral mode. CONCLUSION: Cochlear-implanted children with multiple disabilities showed variable degrees of auditory and speech outcomes, with the degree of benefit depending on the type of co-disability. Long-term follow-up is recommended for these children.

Keywords: children with disabilities, cochlear implants, hearing impairment, language development

Procedia PDF Downloads 83
3706 Play-Based Approaches to Stimulate Language

Authors: Sherri Franklin-Guy

Abstract:

The emergence of language in young children has been well documented, and play-based activities that support its continued development have long been utilized in clinic-based settings. Speech-language pathologists use such activities to stimulate the production of language in children with speech and language disorders via modeling and elicitation tasks. This presentation will examine the importance of play in the development of language in young children, including social and pragmatic communication. Implications for clinicians and educators will be discussed.

Keywords: language development, language stimulation, play-based activities, symbolic play

Procedia PDF Downloads 202
3705 Clinical Profile of Oral Sensory Abilities in Developmental Dysarthria

Authors: Swapna N., Deepthy Ann Joy

Abstract:

Motor speech disorders are among the major causes of communication disorders in the pediatric population. These disorders, which affect the motor aspects of the speech articulators, can adversely affect children's communication abilities during the developmental period. Because the motor aspects depend on the sensory abilities of children with motor speech disorders, oral sensorimotor evaluation is an important component of their assessment. While the importance of the oral motor examination is well established, the sensory assessment of the oral structures has received less focus. One of the most common motor speech disorders in children is developmental dysarthria. The present study aimed to assess oro-sensory abilities in children with developmental dysarthria (CDD). The control group consisted of 240 children between four and eight years of age, divided into four subgroups (4-4.11, 5-5.11, 6-6.11, and 7-7.11 years). The experimental group consisted of 15 children in the same age range diagnosed with developmental dysarthria secondary to cerebral palsy. Oro-sensory aspects such as response to touch, temperature, taste, texture, and orofacial sensitivity were evaluated and profiled using the ‘Oral Sensorimotor Evaluation Protocol – Children’, developed by the authors. The oro-sensory section of the protocol was administered, and a clinical profile of the oro-sensory abilities of typically developing children and CDD was obtained for each sensory ability. The oro-sensory abilities of speech articulators such as the lips, tongue, palate, jaw, and cheeks were assessed in detail and scored. The results indicated that the experimental group had poorer scores on oro-sensory aspects such as light static touch, kinetic touch, deep pressure, vibration, and double simultaneous touch. However, the experimental group performed similarly to the control group on a few aspects, such as temperature, taste, texture, and orofacial sensitivity. In contrast to the oro-motor abilities, which have received the most attention, the variation in the oro-sensory abilities of the experimental and control groups is highlighted and discussed in the present study, emphasizing the need to assess oro-sensory abilities, in addition to oro-motor abilities, in children with developmental dysarthria.

Keywords: cerebral palsy, developmental dysarthria, orosensory assessment, touch

Procedia PDF Downloads 129
3704 Prosody Generation in Neutral Speech Storytelling Application Using Tilt Model

Authors: Manjare Chandraprabha A., S. D. Shirbahadurkar, Manjare Anil S., Paithne Ajay N.

Abstract:

This paper proposes intonation modeling for prosody generation in neutral speech for Marathi (a language spoken in Maharashtra, India) storytelling applications. Audio storytelling devices are now very popular with children. We propose a tilt model for stressed words in Marathi for speech modification: the tilt model predicts the modification in tone of neutral speech, and a GMM is used to identify the stressed words to be modified.
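The Tilt intonation model referenced here parameterizes each pitch event by its rise and fall amplitudes and durations. A minimal sketch of the core tilt parameter, under the standard formulation of Taylor's Tilt model (the numeric inputs are illustrative, not from the paper):

```python
def tilt(a_rise, a_fall, d_rise, d_fall):
    """Tilt parameter of an intonation event: the average of amplitude
    tilt and duration tilt. Ranges from -1 (pure F0 fall) through 0
    (equal rise and fall) to +1 (pure F0 rise)."""
    t_amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall))
    t_dur = (d_rise - d_fall) / (d_rise + d_fall)
    return (t_amp + t_dur) / 2
```

An event with a 30 Hz rise over 0.2 s and no fall gives a tilt of +1; a symmetric rise-fall accent gives 0, so a single scalar summarizes the shape of each stressed-word pitch event.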

Keywords: tilt model, fundamental frequency, statistical parametric speech synthesis, GMM

Procedia PDF Downloads 352
3703 Influence of Auditory Visual Information in Speech Perception in Children with Normal Hearing and Cochlear Implant

Authors: Sachin, Shantanu Arya, Gunjan Mehta, Md. Shamim Ansari

Abstract:

The cross-modal influence of visual information on speech perception is illustrated by the McGurk effect: the illusion of hearing the syllable /ta/ when a listener hears one syllable, e.g. /pa/, while watching a synchronized video recording of another, e.g. /ka/. The McGurk effect is an excellent tool for investigating multisensory integration in speech perception in both normal-hearing and hearing-impaired populations. As the visual cue is unaffected by noise, individuals with hearing impairment rely on visual cues more than normal listeners do. However, when non-congruent visual and auditory cues are processed together, audiovisual interaction seems to occur differently in normal-hearing persons and persons with hearing impairment. Therefore, this study observes the audiovisual interaction in speech perception in cochlear implant users and compares it with that of normal-hearing children. Auditory stimuli were routed through a calibrated clinical audiometer in a sound-field condition, and visual stimuli were presented on a laptop screen placed at a distance of 1 m at 0 degrees azimuth. If at least 3 of 4 presentations yielded a fused response, the McGurk effect was considered present. The congruent audiovisual stimuli /pa/ /pa/ and /ka/ /ka/ were perceived correctly as ‘‘pa’’ and ‘‘ka,’’ respectively, by both groups. For the non-congruent stimuli /da/ /pa/, 23 of 35 children with normal hearing and 9 of 35 children with a cochlear implant showed fusion of the sounds, i.e., the McGurk effect was present. For the non-congruent stimulus /pa/ /ka/, 25 of 35 children with normal hearing and 8 of 35 children with a cochlear implant showed fusion. Children who had used a cochlear implant for less than three years did not exhibit fusion, i.e., the McGurk effect was absent in this group. To conclude, the results demonstrate that consistent fusion of visual with auditory information for speech perception is shaped by experience with bimodal spoken language during early life. When auditory experience with speech is mediated by a cochlear implant, the likelihood of acquiring bimodal fusion is increased and depends greatly on the age of implantation. These results strongly support the need to screen children's hearing and to provide cochlear implants and aural rehabilitation as early as possible.
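The scoring rule in this abstract (the McGurk effect is "present" when at least 3 of 4 presentations elicit the fused percept) is a simple threshold over per-trial responses. A minimal sketch, assuming /ta/ as the fused percept for audio /pa/ + visual /ka/:

```python
def mcgurk_present(responses, fusion="ta", criterion=3):
    """Score one child: True if at least `criterion` of the presented
    trials elicited the fusion percept (per the abstract's 3-of-4 rule)."""
    return sum(r == fusion for r in responses) >= criterion
```

A child responding ["ta", "ta", "ta", "pa"] across the four presentations is scored as showing the McGurk effect; a child responding ["ta", "pa", "pa", "ka"] is not.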

Keywords: cochlear implant, congruent stimuli, McGurk effect, non-congruent stimuli

Procedia PDF Downloads 272
3702 Low-Income African-American Fathers' Gendered Relationships with Their Children: A Study Examining the Impact of Child Gender on Father-Child Interactions

Authors: M. Lim Haslip

Abstract:

This quantitative study explores the correlation between child gender and father-child interactions. The author analyzes data from videotaped interactions between African-American fathers and their toddler sons or daughters to explain how African-American fathers and toddlers interact and whether these interactions differ by child gender. The purpose of this study is to investigate the research question: 'How, if at all, do fathers’ speech and gestures differ when interacting with their two-year-old sons versus daughters during free play?' The objectives are to describe how child gender affects African-American fathers' verbal communication, to examine how fathers gesture and speak to their toddler by gender, and to guide interventions in early language development for low-income African-American families and their children. The study involves a sample of 41 low-income African-American fathers and their 24-month-old toddlers. The videotape data are used to observe 10-minute father-child interactions during free play. The study uses already transcribed and coded data provided by Dr. Meredith Rowe, whose own study examined the impact of African-American fathers' verbal input on their children's language development. The Child Language Data Exchange System (CHILDES), created to study conversational interactions, was used for transcription and coding of the videotape data. The findings focus on the quantity, diversity, and complexity of speech and the quantity of gesture, as reflected in vocabulary usage, the number of spoken words, the length of utterances, and the number of object points observed during father-toddler interactions in a free-play setting. This study will help intervention and prevention scientists understand early language development in the African-American population, contribute to knowledge of the role of African-American fathers' interactions in their children's language development, and guide interventions for the early language development of African-American children.

Keywords: parental engagement, early language development, African-American families, quantity of speech, diversity of speech, complexity of speech, quantity of gesture

Procedia PDF Downloads 77
3701 EEG and ABER Abnormalities in Children with Speech and Language Delay

Authors: Bharati Mehta, Manish Parakh, Bharti Bhandari, Sneha Ambwani

Abstract:

Speech and language delay (SLD) is commonly seen as a co-morbidity in children with severe, resistant focal and generalized, syndromic and symptomatic epilepsies. It is, however, not clear whether epilepsy contributes to, or is a mere association in, the pathogenesis of SLD. It is also acknowledged that Auditory Brainstem Evoked Responses (ABER), besides being used to evaluate hearing thresholds, aid in the prognostication of neurological disorders and of abnormalities in the hearing pathway in the brainstem. There is no circumscribed or surrogate neurophysiologic laboratory marker to adjudge the extent of SLD. The current study was designed to evaluate abnormalities in Electroencephalography (EEG) and ABER in children with SLD who do not have an overt hearing deficit or autism. 94 children in the age group of 2-8 years with predominant SLD and without any gross motor developmental delay, head injury, gross hearing disorder, cleft lip/palate or autism were selected. Standard video Electroencephalography using the international 10-20 system, and ABER after click stimuli at intensities from 110 dB down to 40 dB, were performed in all children. EEG was abnormal in 47.9% (n=45; 36 boys and 9 girls) of the children. Among the children with abnormal EEG, 64.5% (n=29) had an abnormal background, 57.8% (n=27) had generalized interictal epileptiform discharges (IEDs), 20% (n=9) had focal epileptiform discharges exclusively from the left side, and 33.3% (n=15) had multifocal IEDs occurring either in isolation or associated with generalized abnormalities. In ABER, surprisingly, the peak latencies for waves I, III & V, the inter-peak latencies I-III, I-V and III-V, and the wave amplitude ratio V/I were found to be within normal limits in both ears of all the children. Thus, the current study indicates that generalized IEDs on EEG are seen with higher frequency in children with SLD, and that focal IEDs in these children are seen exclusively in the left hemisphere.
It may be that, even with generalized EEG abnormalities present in these children, left hemispheric abnormalities as part of this generalized dysfunction are responsible for the speech and language dysfunction. The current study also suggests that ABER need not be routinely recommended as a diagnostic or prognostic tool in children with SLD without a frank hearing deficit or autism, thus reducing the burden on electrophysiologists and laboratories and saving time and financial resources.

Keywords: ABER, EEG, speech and language delay

Procedia PDF Downloads 483
3700 Robust Noisy Speech Identification Using Frame Classifier Derived Features

Authors: Punnoose A. K.

Abstract:

This paper presents an approach to identifying noisy speech recordings using a multi-layer perceptron (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean and noisy speech at the frame level. Appropriate density functions are fitted to the softmax probabilities of clean and noisy speech, and a function based on the ratio of the softmax probability density of noisy speech to that of clean speech is formulated. This phoneme-independent score is weighted with phoneme-specific weights to make the scoring more robust. Simple thresholding is then used to separate the noisy speech recordings from the clean ones. The approach is benchmarked on standard databases, with a focus on precision.
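A minimal sketch of the scoring scheme the abstract describes: Gaussian fits to the softmax maxima, a log density ratio per frame, phoneme-specific weighting, and a threshold. The fitted parameters, frame posteriors, phoneme labels, and weights below are all invented for illustration; the paper's actual densities and features are not specified here.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density function fitted to the softmax-probability distributions."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def frame_score(p, noisy_fit, clean_fit):
    """Log ratio of noisy-speech to clean-speech density at softmax maximum p."""
    return math.log(gaussian_pdf(p, *noisy_fit) / gaussian_pdf(p, *clean_fit))

def recording_score(frames, noisy_fit, clean_fit, weights):
    """Phoneme-weighted average of per-frame scores; frames = (phoneme, p) pairs."""
    total = sum(weights[ph] * frame_score(p, noisy_fit, clean_fit) for ph, p in frames)
    return total / sum(weights[ph] for ph, _ in frames)

# Illustrative fitted (mean, std) pairs: clean frames cluster near confident
# posteriors, while noise flattens them toward lower softmax maxima.
clean_fit = (0.9, 0.08)
noisy_fit = (0.6, 0.15)

frames = [("a", 0.55), ("k", 0.62), ("a", 0.58)]   # hypothetical frame posteriors
weights = {"a": 1.0, "k": 1.5}                      # hypothetical phoneme weights
score = recording_score(frames, noisy_fit, clean_fit, weights)
is_noisy = score > 0.0  # simple thresholding, as in the abstract
```

A recording whose frames sit closer to the noisy-speech density gets a positive score and is flagged; the threshold itself would be tuned on labeled data.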

Keywords: noisy speech identification, speech pre-processing, noise robustness, feature engineering

Procedia PDF Downloads 87
3699 Imprecise Vowel Articulation in Down Syndrome: An Acoustic Study

Authors: Anitha Naittee Abraham, N. Sreedevi

Abstract:

Individuals with Down syndrome (DS) have relatively better expressive language compared to other individuals with intellectual disabilities. Reduced speech intelligibility is one of the major concerns for this group due to their anatomical and physiological differences. The study investigated the vowel articulation of Malayalam-speaking children with DS in the age range of 5-10 years. The vowel production of 10 children with DS was compared with that of typically developing children in the same age range. Vowels were extracted from 3 words with the corner vowels /a/, /i/ and /u/ in the word-initial position, using Praat (version 5.3.23) software. Acoustic analysis was based on the vowel space area (VSA), the formant centralization ratio (FCR) and F2i/F2u. The findings revealed increased formant values for the control group except for F2a and F2u. Also, the experimental group had a higher FCR and lower VSA and F2i/F2u values, suggestive of imprecise vowel articulation due to restricted tongue movements. The results of the independent t-test revealed a significant difference in F1a, F2i, F2u, VSA, FCR and F2i/F2u values between the experimental and control groups. These findings support the fact that children with DS have imprecise vowel articulation that interferes with overall speech intelligibility. Hence it is essential to target oromotor skills to enhance speech intelligibility, which in turn benefits the social and vocational domains of these individuals.
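The three acoustic metrics named above follow standard formulas: the triangular VSA is the area of the /i/-/a/-/u/ triangle in the F1-F2 plane (shoelace rule), and the FCR is the ratio popularized by Sapir and colleagues, which rises as vowels centralize. A small sketch with illustrative formant values, not data from the study:

```python
# Corner-vowel metrics: VSA, FCR, and the F2i/F2u ratio used in the abstract.

def vowel_space_area(f1i, f2i, f1a, f2a, f1u, f2u):
    """Triangular VSA for /i/, /a/, /u/ via the shoelace formula, in Hz^2."""
    return abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a)) / 2.0

def formant_centralization_ratio(f1i, f2i, f1a, f2a, f1u, f2u):
    """FCR = (F2u + F2a + F1i + F1a) / (F2i + F1u); larger means more centralized."""
    return (f2u + f2a + f1i + f1a) / (f2i + f1u)

# Illustrative child-like corner-vowel formants in Hz (not the study's data).
f1i, f2i = 350.0, 2800.0   # /i/
f1a, f2a = 900.0, 1500.0   # /a/
f1u, f2u = 380.0, 1000.0   # /u/

vsa = vowel_space_area(f1i, f2i, f1a, f2a, f1u, f2u)        # 475500.0 Hz^2
fcr = formant_centralization_ratio(f1i, f2i, f1a, f2a, f1u, f2u)
f2_ratio = f2i / f2u                                         # 2.8
```

Centralized (imprecise) vowels shrink the VSA and the F2i/F2u ratio while pushing the FCR up, which matches the direction of the group differences reported in the abstract.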

Keywords: Down syndrome, FCR, vowel articulation, vowel space

Procedia PDF Downloads 144
3698 Speech and Language Therapists’ Advice for Multilingual Children with Developmental Language Disorders

Authors: Rudinë Fetahaj, Flaka Isufi, Kristina Hansson

Abstract:

While evidence shows that multilingualism is rising in most European countries, the focus of Speech and Language Therapy (SLT) is unfortunately still monolingualism. Furthermore, there is sparse information on how the needs of multilingual children with language disorders such as Developmental Language Disorder (DLD) are being met, and on which factors affect the intervention approach of SLTs when treating DLD. This study aims to examine the relationship between the number of languages SLTs speak, their years of experience, and their length of education, and the advice they give to parents of multilingual children with DLD regarding which language to speak. This is a cross-sectional study in which a survey was completed online by 2608 SLTs across Europe; the data come from a 2017 COST-action project. IBM SPSS 28 was used, and descriptive analysis, correlation, and Kruskal-Wallis tests were performed. SLTs mainly advise the parents of multilingual children with DLD to speak their native language at home. Results showed a non-significant moderate positive correlation between SLTs' years of experience and their advice regarding the native language, whereas language status and length of education showed no correlation with the advice SLTs give to parents.

Keywords: quantitative study, developmental language disorders, multilingualism, speech and language therapy, children, European context

Procedia PDF Downloads 52
3697 An Analysis of Illocutionary Acts in Martin Luther King Jr.'s Propaganda Speech Entitled 'I Have a Dream'

Authors: Mahgfirah Firdaus Soberatta

Abstract:

Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings. We can use words to do different things, for example asserting, advising, promising, giving opinions, expressing hopes, etc. Propaganda is an attempt to produce stable patterns of behavior and to adapt individuals to everyday life; it also exerts lasting control over the thoughts and attitudes of individuals in social settings. In this research, the writer discusses the speech acts in a propaganda speech delivered by Martin Luther King Jr. at the Lincoln Memorial in Washington on August 28, 1963. 'I Have a Dream' is a public speech delivered by the American civil rights activist MLK, in which he calls for an end to racism in the USA. In this research, the writer uses Searle's theory to analyze the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech. The writer uses a qualitative, descriptive method, because the research aims to describe and explain the types of illocutionary speech acts used by Martin Luther King Jr. in his propaganda speech. The findings indicate that there are five types of speech acts in Martin Luther King Jr.'s speech. MLK also used direct and indirect speech acts in his propaganda speech; however, direct speech acts are the dominant type. It is hoped that this research is useful for readers seeking to enrich their knowledge in the pragmatic field of speech acts.

Keywords: speech act, propaganda, Martin Luther King Jr., speech

Procedia PDF Downloads 407
3696 Effect of Early Therapeutic Intervention for Children with Autism Spectrum Disorders: A Quasi-Experimental Design

Authors: Sultana Razia

Abstract:

The purpose of this study was to investigate the effect of early therapeutic intervention for children with autism spectrum disorder. Participants were 63 children with autism spectrum disorder from the Autism Corner of a selected rehabilitation center in Bangladesh. The hypothesis of the study was that participants would demonstrate significant improvement in social skills, speech and sensory skills following a 3-month intensive therapeutic protocol. This study included children aged 18 to 36 months who were receiving occupational therapy and speech and language therapy from the autism center. They were primarily screened using the M-CHAT; children with other physical disabilities or medical conditions were excluded. A 3-month intervention of 6 sessions per week, each a minimum of 45 minutes of one-to-one interaction, followed by parent-led structured home-based therapy, was provided. The results indicated that early intensive therapeutic intervention improves understanding, social skills and sensory skills. It can be concluded that early therapeutic intervention has a positive effect on autism spectrum disorder.

Keywords: M-CHAT, ASD, sensory checklist, OT

Procedia PDF Downloads 16
3695 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those prone to struggle with speech perception in noise. Therefore, this research investigates two groups vulnerable to these environmental effects, namely children with a mild to moderate hearing loss (MMHL) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and a number of receptive language and cognitive measures (auditory working memory, auditory attention), and correlations between these were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of the correlations is discussed. In the bilinguals study, sequential bilingual children's listening comprehension, speech-in-noise perception, listening effort and release from masking were evaluated under a number of ecologically valid acoustic scenarios in order to pinpoint the extent of the 'native language benefit' for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 299
3694 A Profile of the Patients at the Hearing and Speech Clinic at the University of Jordan: A Retrospective Study

Authors: Maisa Haj-Tas, Jehad Alaraifi

Abstract:

The significance of the study: This retrospective study examined the speech and language profiles of patients who received clinical services at the University of Jordan Hearing and Speech Clinic (UJ-HSC) from 2009 to 2014. The UJ-HSC is located in the capital Amman and was established in the late 1990s. It is the first hearing and speech clinic in Jordan and one of the first speech and hearing clinics in the Middle East. This clinic provides services to an annual average of 2000 patients who are diagnosed with different communication disorders. Examining the speech and language profiles of patients in this clinic could provide insight into the most common disorders seen in patients who attend similar clinics in Jordan. It could also provide information about community awareness of the role of speech therapists in the management of speech and language disorders. Methodology: The researchers examined the clinical records of 1140 patients (797 males and 343 females) who received clinical services at the UJ-HSC between 2009 and 2014. The main variables examined in the study were disorder type and gender. Participants were divided into four age groups: children, adolescents, adults, and older adults. The examined disorders were classified as speech disorders, language disorders, or dysphagia (i.e., swallowing problems), and further classified as childhood language impairments, articulation disorders, stuttering, cluttering, voice disorders, aphasia, and dysphagia. Results: The results indicated that the prevalence of language disorders was the highest (50.7%), followed by speech disorders (48.3%) and dysphagia (0.9%).
The majority of patients seen at the UJ-HSC were diagnosed with childhood language impairments (47.3%), followed by articulation disorders (21.1%), stuttering (16.3%), voice disorders (12.1%), aphasia (2.2%), dysphagia (0.9%), and cluttering (0.2%). As for gender, the majority of patients seen at the clinic were males for all disorders except voice disorders and cluttering. Discussion: The results of the present study indicate that the majority of examined patients were diagnosed with childhood language impairments. Based on this result, the researchers suggest that there seems to be a high prevalence of childhood language impairments among children in Jordan compared to other types of speech and language disorders. The researchers also suggest that there is a need for further examination of the actual prevalence data on speech and language disorders in Jordan. The fact that many of the children seen at the UJ-HSC were brought to the clinic either as a result of parental concern or teacher referral indicates that there seems to be an increased awareness among parents and teachers about the services speech pathologists can provide in the assessment and treatment of childhood speech and language disorders. The small percentage of other disorders (i.e., stuttering, cluttering, dysphagia, aphasia, and voice disorders) seen at the UJ-HSC may indicate limited awareness in the local community about the role of speech pathologists in the assessment and treatment of these disorders.

Keywords: clinic, disorders, language, profile, speech

Procedia PDF Downloads 283
3693 Effects of Therapeutic Horseback Riding in Speech and Communication Skills of Children with Autism

Authors: Aristi Alopoudi, Sofia Beloka, Vassiliki Pliogou

Abstract:

Autism is a complex neuro-developmental disorder with a variety of difficulties in many aspects such as social interaction, communication skills and verbal communication (speech). The aim of this study was to examine the impact of therapeutic horseback riding on improving the verbal and communication skills of children diagnosed with autism over 16 sessions. The researcher examined whether the expression of speech, the use of vocabulary, semantics, pragmatics, echolalia and communication skills were influenced by therapeutic horseback riding as the frequency of the sessions increased. The researcher observed two primary-school-aged children with autism, in a two-case observation design, during 16 therapeutic horseback riding sessions (one riding session per week). Compared to baseline, at the end of the 16th therapeutic session, therapeutic horseback riding had increased both verbal skills such as vocabulary, semantics, pragmatics and formation of sentences, and communication skills such as eye contact, greeting, participation in dialogue and spontaneous speech. It was noticeable that echolalia remained stable. Increased frequency of therapeutic horseback riding was beneficial for significant improvement in verbal and communication skills. More specifically, from the first to the last riding session there was a great increase in vocabulary, semantics, and formation of sentences. Pragmatics reached a lower level than semantics, but the same level as the correct usage of the first person (for example, 'I give a hug') and the echolalia used for it. A great increase in spontaneous speech was noticed. Eye contact improved to a lower level, and there was a slow but important rise in greeting as well as participation in dialogue. Last but not least, this is among the first studies of therapeutic horseback riding to examine verbal communication and communication skills in autistic children.
According to the references, therapeutic horseback riding is a therapy with a variety of benefits; this research makes clear that the improvement of verbal speech and communication should be counted among them.

Keywords: autism, communication skills, speech, therapeutic horseback riding

Procedia PDF Downloads 234
3692 The Effect of Online Advertising Speech on Thai Internet Users’ Decision Making

Authors: Panprae Bunyapukkna

Abstract:

This study investigated figures of speech used in fragrance advertising captions on the Internet. The objectives of the study were to find out the frequencies of figures of speech in fragrance advertising captions and the types of figures of speech most commonly applied in captions. The relation between figures of speech and fragrance was also examined in order to analyze how figures of speech were used to represent fragrance. Thirty-five fragrance advertisements were randomly selected from the Internet. Content analysis was applied in order to consider the relation between figures of speech and fragrance. The results showed that figures of speech were found in almost every fragrance advertisement except one Lancôme advertisement: thirty-four fragrance advertising captions used at least one figure of speech. Metaphor was the most frequently found and most frequently applied figure in fragrance advertising captions, followed by alliteration, rhyme, simile and personification, and hyperbole.

Keywords: advertising speech, fragrance advertisements, figures of speech, metaphor

Procedia PDF Downloads 204
3691 Frequency of Consonant Production Errors in Children with Speech Sound Disorder: A Retrospective-Descriptive Study

Authors: Amulya P. Rao, Prathima S., Sreedevi N.

Abstract:

Speech sound disorders (SSD) are a major concern in the younger population of India, with the highest prevalence rate among the speech disorders. Children with SSD, if not identified and rehabilitated early, are at risk of academic difficulties. This necessitates early identification using screening tools that assess the frequently misarticulated speech sounds. The literature on frequently misarticulated speech sounds is ample in English and other western languages, targeting individuals with various communication disorders. Articulation is language-specific, and there are limited studies reporting the same in Kannada, a Dravidian language. Hence, the present study aimed to identify the frequently misarticulated consonants in Kannada and to examine the error types. A retrospective, descriptive study was carried out using secondary data analysis of 41 participants (34 phonetic type and 7 phonemic type) with SSD in the age range of 3 to 12 years. All the consonants of Kannada were analyzed by considering three words for each speech sound from the Kannada Diagnostic Photo Articulation Test (KDPAT). A picture-naming task was carried out, and responses were audio recorded. The recorded data were transcribed using IPA 2018 broad transcription. A criterion of 2/3 or 3/3 error productions was set to consider a speech sound to be in error. The number of error productions was calculated for each consonant in each participant. Then, the percentage of participants meeting the criterion was documented for each consonant to identify the frequently misarticulated speech sounds. Overall results indicated that the velars /k/ (48.78%) and /g/ (43.90%) were frequently misarticulated, followed by the voiced retroflex /ɖ/ (36.58%) and the trill /r/ (36.58%). The lateral retroflex /ɭ/ was misarticulated by 31.70% of the children with SSD. Dentals (/t/, /n/), bilabials (/p/, /b/, /m/) and the labiodental /v/ were produced correctly by all the participants.
The highly misarticulated velars /k/ and /g/ were frequently substituted by the dentals /t/ and /d/, respectively, or omitted. Participants with SSD-phonemic type had multiple substitutions for one speech sound, whereas those with SSD-phonetic type had consistent single-sound substitutions. Intra- and inter-judge reliability for 10% of the data using Cronbach’s alpha revealed good reliability (0.8 ≤ α < 0.9). Analyzing a larger sample by replicating such studies will validate the present study's results.
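The 2/3-criterion tally described above can be sketched as a small counting routine: a consonant counts as an error for a child if at least 2 of its 3 word productions are wrong, and the reported percentage is the share of children flagged for that consonant. The per-child counts below are invented for illustration, not the study's data.

```python
# Sketch of the error criterion and percentage calculation from the abstract.

def error_flags(productions, criterion=2):
    """productions: {child: {consonant: wrong_count_out_of_3}} -> boolean flags."""
    return {
        child: {c: n >= criterion for c, n in sounds.items()}
        for child, sounds in productions.items()
    }

def percent_misarticulating(flags, consonant):
    """Percentage of children flagged as misarticulating a given consonant."""
    vals = [f[consonant] for f in flags.values()]
    return 100.0 * sum(vals) / len(vals)

# Hypothetical data: wrong productions out of 3 words per consonant.
productions = {
    "child1": {"k": 3, "t": 0},
    "child2": {"k": 2, "t": 1},   # 1/3 wrong does not meet the criterion
    "child3": {"k": 1, "t": 0},
    "child4": {"k": 3, "t": 0},
}
flags = error_flags(productions)
pct_k = percent_misarticulating(flags, "k")  # 3 of 4 children flagged
pct_t = percent_misarticulating(flags, "t")  # none flagged
```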

Keywords: consonant, frequently misarticulated, Kannada, SSD

Procedia PDF Downloads 89
3690 Development of a Non-Intrusive Speech Evaluation Measure Using the S-Transform and LightGBM

Authors: Tusar Kanti Dash, Ganapati Panda

Abstract:

The evaluation of speech quality and intelligibility is critical to the overall effectiveness of speech enhancement algorithms. Several intrusive and non-intrusive measures are employed to calculate these parameters. Non-intrusive evaluation is the most challenging as, very often, the reference clean speech data is not available. In this paper, a novel non-intrusive speech evaluation measure is proposed using audio features derived from the Stockwell transform. These features are used with the Light Gradient Boosting Machine (LightGBM) for the effective prediction of speech quality and intelligibility. The proposed model is analyzed using noisy and reverberant speech from four databases, and the results are compared with standard intrusive evaluation measures. It is observed from the comparative analysis that the proposed model performs better than the standard non-intrusive models.
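The feature-extraction stage rests on the Stockwell (S-) transform, which can be computed efficiently via the FFT: for each frequency row, the spectrum is circularly shifted and multiplied by a frequency-domain Gaussian before an inverse FFT. A minimal numpy sketch of that discrete S-transform follows; the feature aggregation and the LightGBM regression stage described in the abstract are omitted, and the test signal is invented.

```python
import numpy as np

def stockwell_transform(x):
    """Discrete S-transform via the FFT; rows are frequencies 0..N/2, columns time."""
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                # signed frequency indices
    S = np.empty((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                       # zero-frequency row is the signal mean
    for k in range(1, N // 2 + 1):
        window = np.exp(-2.0 * np.pi**2 * m**2 / k**2)  # Gaussian, width scales with f
        S[k, :] = np.fft.ifft(np.roll(X, -k) * window)  # shift spectrum to bin k
    return S

# A pure tone at bin 8 concentrates |S| on row 8 across all time samples.
N, k0 = 128, 8
t = np.arange(N)
sig = np.cos(2 * np.pi * k0 * t / N)
S = stockwell_transform(sig)
row_energy = np.abs(S).mean(axis=1)
```

Statistics of |S| (e.g., per-row means and variances) would then serve as input features to a gradient-boosted regressor trained against intrusive quality and intelligibility scores; that pairing is an assumption about the pipeline, not a detail given in the abstract.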

Keywords: non-intrusive speech evaluation, S-transform, LightGBM, speech quality, intelligibility

Procedia PDF Downloads 222
3689 Annexation (Al-Iḍāfah) in Thariq bin Ziyad’s Speech

Authors: Annisa D. Febryandini

Abstract:

Annexation is a typical construction that is commonly used in the Arabic language. The construction appears in Arabic speeches such as the speech of Thariq bin Ziyad. The speech, one of the most famous in the history of Islam, uses many annexations. This qualitative research paper uses secondary data gathered through the library method. Based on the data, this paper concludes that the speech has two basic structures with some variations, and exhibits several grammatical relationships. Unlike other studies that examine the speech from a sociological perspective, this paper analyzes it linguistically, looking at the structure of its annexations as well as their grammatical relationships.

Keywords: annexation, Thariq bin Ziyad, grammatical relationship, Arabic syntax

Procedia PDF Downloads 276
3688 Motor Speech Profile of Marathi-Speaking Adults and Children

Authors: Anindita Banik, Anjali Kant, Aninda Duti Banik, Arun Banik

Abstract:

Speech is a complex, dynamic and unique motor activity through which we express thoughts and emotions and respond to and control our environment. The aim was to compare selected motor speech parameters and their sub-parameters across typical Marathi-speaking adults and children. The subjects comprised a total of 300, divided into Groups I, II and III, including males and females. All subjects reported no significant medical history and had a rating of 0-1 on the GRBAS scale. The recordings were obtained using three stimuli for the acoustic analysis of diadochokinetic (DDK) rate, second formant transition, and voice and tremor, along with their sub-parameters. These parameters were acoustically analyzed with the Motor Speech Profile software in VisiPitch IV. The statistical analyses applied descriptive statistics and two-way ANOVA. The results showed statistically significant differences across age groups and gender for the aforementioned parameters and their sub-parameters. In DDK, for avp (ms) there was a significant difference only across age groups; for avr (/s), there was a significant difference across both age groups and gender, with rate increasing across age groups. The second formant transition sub-parameter F2 magn (Hz) also showed a statistically significant difference across both age groups and gender; the mean value increased with age, and females had a higher mean than males. For F2 rate (/s), a statistically significant difference was observed across age groups, with the mean value increasing with age. For voice and tremor, MFTR (%) showed a statistically significant difference across age groups and gender, as did RATR (Hz); in other words, the values of MFTR and RATR increased with increasing age.
Thus, this study highlights the variation of motor speech parameters in the typical population, which would serve as a useful benchmark for comparison with individuals with motor speech disorders in assessment and management.
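The two DDK sub-parameters named above, avp (average period, ms) and avr (average rate, syllables/s), are simple functions of the syllable timing: one is the mean interval between syllable onsets and the other its reciprocal. A small sketch using hypothetical onset timestamps for a /pa-ta-ka/ run (not data from the study):

```python
# DDK sub-parameter sketch: avp in milliseconds, avr in syllables per second.

def ddk_avp_ms(onsets):
    """avp: average period between consecutive syllable onsets, in ms."""
    periods = [b - a for a, b in zip(onsets, onsets[1:])]
    return 1000.0 * sum(periods) / len(periods)

def ddk_avr(onsets):
    """avr: average DDK rate in syllables per second (reciprocal of mean period)."""
    return 1000.0 / ddk_avp_ms(onsets)

# Illustrative syllable-onset times in seconds for one repetition run.
onsets = [0.00, 0.18, 0.37, 0.55, 0.74, 0.92]
avp = ddk_avp_ms(onsets)   # mean period in ms
avr = ddk_avr(onsets)      # about 5.4 syllables per second
```

In practice the onsets would come from an intensity- or envelope-based syllable detector; the finding that avr rises with age corresponds to the mean period shrinking as articulatory speed matures.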

Keywords: adult, children, diadochokinetic rate, second formant transition, tremor, voice

Procedia PDF Downloads 270