Search results for: hearing aid output speech
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2916

2916 Efficacy of a Wiener Filter Based Technique for Speech Enhancement in Hearing Aids

Authors: Ajish K. Abraham

Abstract:

The hearing aid is the most fundamental technology employed in the rehabilitation of persons with sensorineural hearing impairment. Hearing in noise remains a major concern for many hearing aid users and thus continues to be a challenging issue for hearing aid designers. Several techniques are currently used to enhance the speech at the hearing aid output, but most of them, when implemented, reduce the intelligibility of the speech signal. Users therefore remain dissatisfied with their ability to comprehend the desired speech amidst noise. The multichannel Wiener filter is widely implemented in binaural hearing aid technology for noise reduction. In this study, a Wiener filter based noise reduction approach was tested in a single-microphone hearing aid setup. The method checks the status of the input speech signal in each frequency band and then selects the relevant noise reduction procedure. Results showed that the Wiener filter based algorithm is capable of enhancing speech even when the input acoustic signal has a very low signal-to-noise ratio (SNR). The performance of the algorithm was compared with that of similar algorithms in terms of the improvement in intelligibility and output SNR at different input SNR levels. The Wiener filter based algorithm provided significant improvements in SNR and intelligibility compared to the other techniques.
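The per-band gain rule described above can be sketched as follows; the band powers, the noise estimate, and the 0.05 gain floor are illustrative assumptions rather than the paper's parameters:

```python
def wiener_gain(noisy_power, noise_power, gain_floor=0.05):
    """Wiener gain for one frequency band: G = P_speech / (P_speech + P_noise),
    with the clean-speech power estimated by subtracting the noise estimate."""
    p_speech = max(noisy_power - noise_power, 0.0)
    denom = p_speech + noise_power
    gain = p_speech / denom if denom > 0.0 else 0.0
    return max(gain, gain_floor)  # a spectral floor limits musical-noise artifacts


def enhance_band_amplitudes(noisy_amps, noisy_powers, noise_powers):
    """Scale each band's amplitude by its Wiener gain."""
    return [a * wiener_gain(p, n)
            for a, p, n in zip(noisy_amps, noisy_powers, noise_powers)]
```

At 0 dB band SNR (noisy power twice the noise power) the gain is 0.5; as the band SNR grows, the gain approaches 1 and the band passes unattenuated, which is why such a rule can help even at low overall input SNR.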

Keywords: hearing aid output speech, noise reduction, SNR improvement, Wiener filter, speech enhancement

Procedia PDF Downloads 224
2915 Influence of Loudness Compression on Hearing with Bone Anchored Hearing Implants

Authors: Anja Kurz, Marc Flynn, Tobias Good, Marco Caversaccio, Martin Kompis

Abstract:

Bone Anchored Hearing Implants (BAHI) are routinely used in patients with conductive or mixed hearing loss, e.g., if conventional air-conduction hearing aids cannot be used. New sound processors and new fitting software now allow parameters such as loudness compression ratios and maximum power output to be adjusted separately. It is currently unclear how the choice of these parameters influences aided speech understanding in BAHI users. In this prospective experimental study, the effects of varying the compression ratio and of lowering the maximum power output of a BAHI were investigated. Twelve experienced adult subjects with mixed hearing loss participated. Four compression ratios (1.0, 1.3, 1.6, 2.0) were tested along with two maximum power output settings, resulting in a total of eight programs. Each participant tested each program for two weeks, and a blinded Latin-square design was used to minimize bias. For each of the eight programs, speech understanding in quiet and in noise was assessed: in quiet, with the Freiburg number test and the Freiburg monosyllabic word test at 50, 65, and 80 dB SPL; in noise, with the Oldenburg sentence test. Speech understanding in quiet and in noise was significantly improved in the aided condition with every program when compared to the unaided condition. However, no significant differences were found between any of the eight programs. In contrast, on a subjective level there was a significant preference for medium compression ratios of 1.3 to 1.6 and for the higher maximum power output.
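A compression ratio and a maximum power output interact roughly as in this static input/output sketch (the gain, kneepoint, and MPO values are hypothetical, not the study's fitting targets):

```python
def aided_output_db(input_db, gain_db=30.0, kneepoint_db=50.0,
                    ratio=1.3, mpo_db=110.0):
    """Static input/output curve: linear gain below the kneepoint, level
    growth reduced by `ratio` above it, and a hard cap at the maximum
    power output (MPO)."""
    if input_db <= kneepoint_db:
        out = input_db + gain_db
    else:
        out = kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio
    return min(out, mpo_db)
```

With ratio=1.0 the curve is linear; raising the ratio flattens level growth above the kneepoint, and lowering mpo_db clips loud inputs earlier, which is exactly the pair of parameters varied across the eight programs.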

Keywords: Bone Anchored Hearing Implant, baha, compression, maximum power output, speech understanding

Procedia PDF Downloads 352
2914 Techniques to Characterize Subpopulations among Hearing Impaired Patients and Its Impact for Hearing Aid Fitting

Authors: Vijaya K. Narne, Gerard Loquet, Tobias Piechowiak, Dorte Hammershoi, Jesper H. Schmidt

Abstract:

BEAR, which stands for Better Hearing Rehabilitation, is a large-scale project in Denmark designed and executed by three national universities, three hospitals, and the hearing aid industry with the aim of improving hearing aid fitting. A total of 1963 hearing-impaired people were included and were segmented into subgroups based on hearing loss, demographics, audiological data, and questionnaire data (the Speech, Spatial and Qualities of Hearing scale [SSQ-12] and the International Outcome Inventory for Hearing Aids [IOI-HA]). With the aim of providing a better hearing aid fit to individual patients, we applied modern machine learning techniques alongside traditional audiogram-based rule systems. Results show that age, speech discrimination scores, and audiogram configuration emerged as important parameters for characterizing subpopulations in the dataset. This attempt to characterize subpopulations reveals a clearer picture of the individual hearing difficulties encountered and of the benefits derived from more individualized hearing aids.
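The segmentation step can be illustrated with a plain Lloyd's k-means over hypothetical patient features; the three dimensions below (age, speech discrimination score, high-frequency pure-tone average) are stand-ins for the BEAR variables, not the project's actual feature set:

```python
def kmeans(points, k, iters=100):
    """Plain Lloyd's algorithm; the first k points seed the centroids."""
    centers = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters


# Hypothetical (age, speech discrimination %, high-frequency PTA dB HL) vectors.
patients = [
    (68, 92, 35), (71, 88, 40), (65, 95, 30),   # milder loss, good discrimination
    (79, 55, 75), (82, 48, 80), (77, 60, 70),   # severe loss, poorer discrimination
]
centers, clusters = kmeans(patients, k=2)
```

In practice the features would be standardized first so that age (years) and PTA (dB HL) contribute on comparable scales; the sketch omits that step for brevity.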

Keywords: hearing loss, audiological data, machine learning, hearing aids

Procedia PDF Downloads 124
2913 Speech Identification Test for Individuals with High-Frequency Sloping Hearing Loss in Telugu

Authors: S. B. Rathna Kumar, Sandya K. Varudhini, Aparna Ravichandran

Abstract:

Telugu is a South-Central Dravidian language spoken in Andhra Pradesh, a southern state of India. The available speech identification tests in Telugu were developed to determine the communication problems of individuals with a flat-frequency hearing loss. These conventional speech audiometric tests provide redundant information when used with individuals with high-frequency sloping hearing loss, because of their better hearing sensitivity in the low- and mid-frequency regions; such tests therefore do not indicate the true nature of the communication problem, and it is quite possible for a person with high-frequency sloping hearing loss to obtain maximum scores on them. Hence, there is a need for speech identification test materials specifically designed to assess the speech identification performance of individuals with high-frequency sloping hearing loss, and the present study aimed to develop such a test in Telugu. Individuals with high-frequency sloping hearing loss have difficulty perceiving voiceless consonants whose spectral energy lies above 1000 Hz. Hence, word lists constructed with phonemes having mid- and high-frequency spectral energy will better estimate speech identification performance for such individuals. The phonemes /k/, /g/, /c/, /ṭ/, /t/, /p/, /s/, /ś/, /ṣ/ and /h/ were preferred for the construction of the words, as their spectral energy is distributed predominantly above 1000 Hz. The present study developed two word lists in Telugu (each containing 25 words) for evaluating the speech identification performance of individuals with high-frequency sloping hearing loss.
The performance of individuals with high-frequency sloping hearing loss was evaluated using both the conventional and the high-frequency word lists under a recorded-voice condition. The results revealed that the developed word lists were more sensitive in identifying the true nature of the communication problem of individuals with high-frequency sloping hearing loss.

Keywords: speech identification test, high-frequency sloping hearing loss, recorded voice condition, Telugu

Procedia PDF Downloads 392
2912 Effect of Noise Reduction Algorithms on Temporal Splitting of Speech Signal to Improve Speech Perception for Binaural Hearing Aids

Authors: Rajani S. Pujar, Pandurangarao N. Kulkarni

Abstract:

Increased temporal masking affects speech perception in persons with sensorineural hearing impairment, especially under adverse listening conditions. This paper presents a cascaded scheme that employs a noise reduction algorithm together with temporal splitting of the speech signal. Earlier investigations have shown that splitting the speech temporally and presenting alternate segments to the two ears helps reduce the effect of temporal masking. In this technique, the speech signal is processed by two fading functions, complementary to each other, and presented to the left and right ears for binaural dichotic presentation. In the present study, a half-cosine signal is used as the fading function, with a crossover gain of 6 dB for perceptual loudness balance. Temporal splitting is combined with a noise reduction algorithm to improve speech perception in background noise; two noise reduction schemes, spectral subtraction and Wiener filtering, are used. Listening tests were conducted on six normal-hearing subjects, with sensorineural loss simulated by adding broadband noise to the speech signal at different signal-to-noise ratios (∞, 3, 0, and -3 dB). Objective evaluation using PESQ was also carried out. The MOS scores for the VCV syllable /asha/ at SNRs of ∞, 3, 0, and -3 dB were 5, 4.46, 4.4, and 4.05, respectively, while the corresponding scores for unprocessed speech were 5, 1.2, 0.9, and 0.65, indicating a significant improvement in perceived speech quality for the proposed scheme compared to unprocessed speech.
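The complementary fades can be sketched as follows; the segment length and the exact half-cosine shape are assumptions, chosen so that both channels pass through an amplitude of 0.5 (about -6 dB) at each crossover:

```python
import math


def dichotic_split(signal, seg_len):
    """Split `signal` into alternating segments for the left and right ears
    using complementary half-cosine fades: while one ear's gain falls from
    1 to 0 over a segment, the other's rises from 0 to 1, and the two
    gains always sum to 1 (each ear hears alternate segments)."""
    left, right = [], []
    for i, x in enumerate(signal):
        seg, pos = divmod(i, seg_len)
        fade = 0.5 * (1.0 + math.cos(math.pi * pos / seg_len))  # 1 -> 0 per segment
        wl = fade if seg % 2 == 0 else 1.0 - fade
        left.append(x * wl)
        right.append(x * (1.0 - wl))
    return left, right
```

Because the two gains are complementary, summing the two channels reconstructs the original signal exactly, so no speech information is lost by the split.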

Keywords: MOS, PESQ, spectral subtraction, temporal splitting, wiener filter

Procedia PDF Downloads 300
2911 Auditory Function in MP3 Users and Association with Hidden Hearing Loss

Authors: Nana Saralidze, Nino Sharashenidze, Zurab Kevanishvili

Abstract:

Hidden hearing loss may occur in humans exposed to prolonged high-level sound. It is the loss of the ability to hear against high-level background noise despite normal hearing in quiet. We compared the hearing of people who regularly listen to personal music players for 3 hours or more with that of people who do not. Forty participants aged 18-30 years were divided into two groups: regular users of music players and people who had never used them. A third group of 15 elders aged 50-55 years was also included. Pure-tone audiometry (125-16000 Hz), the auditory brainstem response (ABR) at 70 dB SPL, and the ability to identify speech in noise (4-talker babble with a 65-dB signal-to-noise ratio, at 80 dB) were measured in all participants. All participants had normal pure-tone audiometry (all thresholds < 25 dB HL). A significant difference between the groups was observed: regular users of personal audio systems correctly identified 53% of words, whereas non-users identified 74% and the elder group 63%. This contributes evidence supporting the presence of hidden hearing loss in humans and demonstrates that speech-in-noise audiometry is an effective method that can be considered the gold standard for detecting hidden hearing loss.

Keywords: mp3 player, hidden hearing loss, speech audiometry, pure tone audiometry

Procedia PDF Downloads 36
2910 Influence of Auditory Visual Information in Speech Perception in Children with Normal Hearing and Cochlear Implant

Authors: Sachin, Shantanu Arya, Gunjan Mehta, Md. Shamim Ansari

Abstract:

The cross-modal influence of visual information on speech perception can be illustrated by the McGurk effect, the illusion of hearing the syllable /ta/ when a listener hears one syllable, e.g., /pa/, while watching a synchronized video recording of another syllable, /ka/. The McGurk effect is an excellent tool for investigating multisensory integration in speech perception in both normal-hearing and hearing-impaired populations. As the visual cue is unaffected by noise, individuals with hearing impairment rely on visual cues more than normal listeners do. However, when non-congruent visual and auditory cues are processed together, audiovisual interaction seems to occur differently in normal-hearing persons and in persons with hearing impairment. Therefore, this study aims to observe audiovisual interaction in speech perception in cochlear implant users and to compare it with that in normal-hearing children. Auditory stimuli were routed through a calibrated clinical audiometer in a sound-field condition, and visual stimuli were presented on a laptop screen placed at a distance of 1 m at 0° azimuth. If at least 3 out of 4 presentations yielded a fusion response, the McGurk effect was considered present. The congruent audiovisual stimuli /pa/-/pa/ and /ka/-/ka/ were perceived correctly as ''pa'' and ''ka,'' respectively, by both groups. For the non-congruent stimulus /da/-/pa/, 23 of 35 children with normal hearing and 9 of 35 children with cochlear implants showed a fusion of sounds, i.e., the McGurk effect was present. For the non-congruent stimulus /pa/-/ka/, 25 of 35 children with normal hearing and 8 of 35 children with cochlear implants showed fusion. The children who had used cochlear implants for less than three years did not exhibit fusion, i.e., the McGurk effect was absent in this subgroup.
To conclude, the results demonstrate that consistent fusion of visual with auditory information for speech perception is shaped by experience with bimodal spoken language during early life. When auditory experience with speech is mediated by a cochlear implant, the likelihood of acquiring bimodal fusion is increased, and it depends greatly on the age at implantation. These results strongly support the need to screen children's hearing capabilities and to provide cochlear implants and aural rehabilitation as early as possible.

Keywords: cochlear implant, congruent stimuli, mcgurk effect, non-congruent stimuli

Procedia PDF Downloads 276
2909 COVID-19’s Effect on Pre-Existing Hearing Loss

Authors: Jonathan A. Mikhail, Arsenio Paez

Abstract:

It is not uncommon for a viral infection to cause hearing loss; many viral infections are associated with sudden-onset, often unilateral, idiopathic sensorineural hearing loss. We conducted an exploratory study with thirty patients aged 50 to 64 with pre-existing hearing loss to evaluate whether COVID-19 was associated with exacerbated hearing loss. We hypothesized that hearing loss would be exacerbated by COVID-19 infection in patients with pre-existing hearing loss. A paired t-test between pure-tone averages (PTAs) at the patient's original diagnosis and at a current, updated audiometric assessment indicated a statistically significant regression in hearing sensitivity (p < .001) following contraction of COVID-19. Speech reception thresholds (SRTs) and word recognition scores (WRSs) were also considered, as was the participants' gender. SRTs in each ear showed a statistically significant change (p = .002 and p < .001), whereas WRSs did not (p = .290 and p = .098). A two-way ANOVA evaluating gender's potential role in exacerbated hearing loss showed no statistically significant effect (p = .214). This study discusses practical implications for clinical and educational pursuits in understanding COVID-19's effect on the auditory system and the need for further evaluation of the virus.
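The paired t-test on the two PTA measurements reduces to the statistic below (a pure-Python sketch; the sample values in the usage are invented, and a p-value would additionally require the t distribution with n-1 degrees of freedom):

```python
import math


def paired_t(before, after):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)) over the
    per-subject differences d_i = after_i - before_i."""
    n = len(before)
    d = [a - b for a, b in zip(after, before)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)
```

For PTAs, a positive t on (post-COVID minus baseline) thresholds in dB HL corresponds to poorer hearing at follow-up, which is the direction of effect the study reports.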

Keywords: audiology, COVID-19, sensorineural hearing loss, otology, auditory research

Procedia PDF Downloads 42
2908 Lip Localization Technique for Myanmar Consonants Recognition Based on Lip Movements

Authors: Thein Thein, Kalyar Myo San

Abstract:

Lip reading systems are among the supportive technologies for hearing-impaired people, elderly people, and non-native speakers. For normal-hearing persons in noisy environments, or in conditions where the audio signal is not available, lip reading techniques can increase understanding of spoken language. Hearing-impaired persons have long used lip reading as an important tool to find out what was said without hearing the voice. Visual speech information is therefore important and has become an active research area. Using visual information from lip movements can improve the accuracy and robustness of a speech recognition system, and the need for lip reading systems is ever increasing for every language. However, recognizing lip movement is a difficult task because the region of interest (ROI) is nonlinear and noisy. Therefore, this paper proposes a method to detect the accurate lip shape and to localize lip movement for automatic lip tracking, using a combination of Otsu's global thresholding technique and the Moore neighborhood tracing algorithm. The proposed method achieves accurate lip localization and tracking, which is useful for speech recognition. In this work, experiments were carried out on automatically localizing the lip shape for Myanmar consonants using only the visual information from lip movements, which is useful for visual speech processing of the Myanmar language.
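Otsu's global thresholding, the first stage of the proposed pipeline, can be sketched over a flat list of grey-level pixel values (contour tracing with the Moore neighborhood would then follow on the resulting binary mask):

```python
def otsu_threshold(pixels, levels=256):
    """Pick the grey level that maximises the between-class variance
    (Otsu's criterion) for a flat list of integer pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = 0        # pixel count of the "dark" class so far
    sum0 = 0.0    # intensity sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1  # class means
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

Pixels at or below the returned level form one class (e.g., the darker lip region under suitable color preprocessing) and the rest form the other, giving the binary image on which the boundary is traced.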

Keywords: lip reading, lip localization, lip tracking, Moore neighborhood tracing algorithm

Procedia PDF Downloads 325
2907 Influence of Hearing Aids on Non-medically Treatable Deafness

Authors: Donatien Niragira

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in technologies aimed at assisting failing hearing. In the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility; however, their benefits vary depending on the quality of the residual hearing. The hearing aid is not a "cure" for deafness and cannot correct all affected hearing abilities; it should be considered an aid to communication. It is not always easy to determine who the best candidates for hearing aids are, and the urge to judge from the audiogram alone should be resisted, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. Pre-fitting counseling is crucial: the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should also be informed about the various models and types of hearing aids, including the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counseling, we are in the presence of a good candidate for amplification. The results found in this study show that amplification significantly improves the quality of audibility in patients; this technology must therefore be made accessible everywhere in the world.

Keywords: auditory prostheses, hearing aids, non-medically treatable deafness

Procedia PDF Downloads 18
2906 Influence of Hearing Aids on Non-Medically Treatable Deafness

Authors: Niragira Donatien

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in technologies aimed at assisting failing hearing. In the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility; however, their benefits vary depending on the quality of the residual hearing. The hearing aid is not a "cure" for deafness and cannot correct all affected hearing abilities; it should be considered an aid to communication. It is not always easy to determine who the best candidates for hearing aids are, and the urge to judge from the audiogram alone should be resisted, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. Pre-fitting counselling is crucial: the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should also be informed about the various models and types of hearing aids, including the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counselling, we are in the presence of a good candidate for amplification. Hearing aids also raise other questions, such as whether one or both ears should be fitted. In short, the results found in this study show that amplification significantly improves the quality of audibility in patients; this technology must therefore be made accessible everywhere in the world so that patients can benefit as it progresses.

Keywords: audiology, influence, hearing, non-medically treatable deafness

Procedia PDF Downloads 18
2905 A Profile of the Patients at the Hearing and Speech Clinic at the University of Jordan: A Retrospective Study

Authors: Maisa Haj-Tas, Jehad Alaraifi

Abstract:

The significance of the study: This retrospective study examined the speech and language profiles of patients who received clinical services at the University of Jordan Hearing and Speech Clinic (UJ-HSC) from 2009 to 2014. The UJ-HSC is located in the capital Amman and was established in the late 1990s. It is the first hearing and speech clinic in Jordan and one of the first speech and hearing clinics in the Middle East. The clinic provides services to an annual average of 2000 patients who are diagnosed with different communication disorders. Examining the speech and language profiles of patients in this clinic could provide insight into the most common disorders seen in patients who attend similar clinics in Jordan. It could also provide information about community awareness of the role of speech therapists in the management of speech and language disorders. Methodology: The researchers examined the clinical records of 1140 patients (797 males and 343 females) who received clinical services at the UJ-HSC between 2009 and 2014. The main variables examined were disorder type and gender. Participants were divided into four age groups: children, adolescents, adults, and older adults. The examined disorders were classified as speech disorders, language disorders, or dysphagia (i.e., swallowing problems), and further classified as childhood language impairments, articulation disorders, stuttering, cluttering, voice disorders, aphasia, and dysphagia. Results: The results indicated that the prevalence of language disorders was the highest (50.7%), followed by speech disorders (48.3%) and dysphagia (0.9%).
The majority of patients seen at the UJ-HSC were diagnosed with childhood language impairments (47.3%), followed consecutively by articulation disorders (21.1%), stuttering (16.3%), voice disorders (12.1%), aphasia (2.2%), dysphagia (0.9%), and cluttering (0.2%). As for gender, the majority of patients seen at the clinic were males for all disorders except voice disorders and cluttering. Discussion: The results of the present study indicate that the majority of the examined patients were diagnosed with childhood language impairments. Based on this result, the researchers suggest that the prevalence of childhood language impairments among children in Jordan appears high compared to other types of speech and language disorders, and that further examination of actual prevalence data on speech and language disorders in Jordan is needed. The fact that many of the children seen at the UJ-HSC were brought to the clinic as a result of either parental concern or teacher referral suggests an increased awareness among parents and teachers of the services speech pathologists can provide in the assessment and treatment of childhood speech and language disorders. The small percentage of other disorders (i.e., stuttering, cluttering, dysphagia, aphasia, and voice disorders) seen at the UJ-HSC may indicate limited awareness in the local community of the role of speech pathologists in the assessment and treatment of these disorders.

Keywords: clinic, disorders, language, profile, speech

Procedia PDF Downloads 287
2904 The Usefulness and Future of Hearing Aids Technologies and Their Impact on Hearing

Authors: Amirreza Razzaghipour Sorkhab

Abstract:

Hearing loss is one of the most common chronic health conditions of older people. Hearing aids are the usual treatment, and they improve quality of life in older adults. Even so, comparatively few older adults with simple, mild-to-moderate, adult-onset sensorineural hearing loss use hearing aids. It should not be expected that more expensive hearing aids always produce better outcomes. Given the importance of quality assurance, approaches to quantifying the success of hearing aid fittings are needed. Studies showed a significant reduction in handicap following 3 weeks of hearing aid use, suggesting the feasibility of using the Hearing Handicap Inventory for the Elderly as an outcome measure for hearing aid success after a brief interval of use. The results showed a significant improvement in quality of life after three months of hearing aid use in all participants, along with improvement in their most important problems, i.e., communication and the exchange of information. Hearing loss can impair the exchange of information and thus reduces quality of life. Hearing aids have progressed meaningfully over the past decade, chiefly owing to the growth of digital technology, and the next decade should see an even greater number of innovations in hearing aid technology. Progress in digital hearing aids will be driven by research advances in fields such as wireless technology, hearing science, and cognitive science. Moreover, emerging trends such as connectivity and individualization will also drive new technology. We hope that the advancement of technology will be sufficient to meet the needs of hearing aid users.

Keywords: hearing loss, hearing aid, hearing aid technology, health

Procedia PDF Downloads 65
2903 The Convolution Recurrent Network of Using Residual LSTM to Process the Output of the Downsampling for Monaural Speech Enhancement

Authors: Shibo Wei, Ting Jiang

Abstract:

Convolutional-recurrent neural networks (CRN) have recently achieved much success in the speech enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained through deconvolution operations that integrate the global information of the speech sequence. However, the feature-space compression may cause a loss of information, so we propose to model the output of each downsampling step with a residual LSTM layer, join it with the output of the corresponding deconvolution layer, and feed the result to the next deconvolution layer; in this way, we aim to integrate the global information of the speech sequence better. The experimental results show that the proposed network model (RES-CRN) achieves better performance than the original CRN, both without residual LSTM and with simply stacked LSTM layers, in terms of the scale-invariant signal-to-noise ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI).
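SI-SNR, the first evaluation metric, can be computed as below (a plain-Python sketch over sample lists; production code would use vectorized tensors and zero-mean signals):

```python
import math


def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB: project the estimate onto the target,
    then compare the energy of that target component with the energy of
    the residual error. Rescaling the estimate does not change the score."""
    dot = sum(e * t for e, t in zip(estimate, target))
    t_energy = sum(t * t for t in target) + eps
    s_target = [dot / t_energy * t for t in target]          # projection onto target
    e_noise = [e - s for e, s in zip(estimate, s_target)]    # residual error
    return 10.0 * math.log10(
        (sum(s * s for s in s_target) + eps) / (sum(x * x for x in e_noise) + eps)
    )
```

The scale invariance is the point: a network that outputs the clean speech at half the amplitude is not penalized, which makes the metric a fair training and evaluation target for enhancement models.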

Keywords: convolutional-recurrent neural networks, speech enhancement, residual LSTM, SI-SNR

Procedia PDF Downloads 168
2902 Speech Enhancement Using Kalman Filter in Communication

Authors: Eng. Alaa K. Satti Salih

Abstract:

Applications such as telecommunications, hands-free communication, and recording need at least one microphone, and the captured signal is usually contaminated by noise and echo. An important application is speech enhancement, which removes the noise and echo picked up by a microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have tried different approaches to improving speech quality by recovering the desired speech signal from noisy observations, especially in mobile communication. In this paper, the speech signal, observed in additive background noise, is reconstructed using the Kalman filter technique: the parameters of the autoregressive (AR) process in the state-space model are estimated, and the output speech signal is obtained in MATLAB. Accurate estimation by the Kalman filter enhances the speech and reduces the noise; the actual and estimated values that produce the reconstructed signals are then compared and discussed.
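The state-space estimation can be illustrated with a scalar Kalman filter; a first-order AR model with hypothetical coefficients stands in here for the higher-order AR speech model estimated in the paper:

```python
def kalman_denoise(observations, a=0.95, q=0.01, r=1.0):
    """Scalar Kalman filter for the model
        x_k = a * x_{k-1} + w_k   (process noise variance q)
        y_k = x_k + v_k           (measurement noise variance r)
    Returns the filtered estimates of x_k given the noisy y_k."""
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for y in observations:
        # Predict step
        x = a * x
        p = a * a * p + q
        # Update step
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

For speech, the AR coefficients (here the single `a`) would themselves be estimated frame by frame, e.g., by linear prediction, before running the filter; the sketch fixes them to keep the recursion itself visible.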

Keywords: autoregressive process, Kalman filter, Matlab, noise speech

Procedia PDF Downloads 312
2901 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying an optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of mean squared error (MSE), signal-to-noise ratio (SNR), and SNR loss. Based on this performance evaluation, the proposed RLS algorithm was found to be the better optimal noise cancellation technique for speech signals.

Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss

Procedia PDF Downloads 446
2900 Phonological Characteristics of Severe to Profound Hearing Impaired Children

Authors: Akbar Darouie, Mamak Joulaie

Abstract:

This study was performed in view of the importance of phonological skill development and its influence on other aspects of language. The objective was to determine some phonological indices in children with hearing impairment and to compare them with those of hearing children. A convenience sample was selected from a rehabilitation center and a kindergarten in Karaj, Iran. Participants consisted of 12 hearing-impaired and 12 hearing children (age range: 5 years and 6 months to 6 years and 6 months). The hearing-impaired children had severe to profound hearing loss; three of them had cochlear implants, and the others wore hearing aids. Conversational speech was recorded, and the first 50 utterances of each child were selected for analysis. The percentage of consonants correct (PCC), the percentage of vowels correct (PVC), initial- and final-consonant omission errors, cluster-consonant omission errors, and syllabic structure variety were compared between the two groups. Data were analyzed with t-tests (SPSS version 16). Comparison of the PCC and PVC averages showed a significant difference between the groups (P < 0.01). There were also significant differences in final-consonant omission errors (P < 0.001), initial-consonant omission errors (P < 0.01), and cluster-consonant omission errors (P < 0.001). In addition, some changes were seen in the syllabic structures of children with hearing impairment compared with the typical group. This study demonstrates some phonological differences in the Farsi language between the two groups of children; it seems, therefore, that clinical practice must take this issue into account.
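PCC and PVC reduce to a proportion over aligned target and produced segments; a toy sketch, assuming the transcriber has already aligned the segments:

```python
def percent_segments_correct(targets, productions):
    """Percentage of target segments realised correctly: this yields PCC
    when the aligned segments are consonants and PVC when they are vowels."""
    correct = sum(1 for t, p in zip(targets, productions) if t == p)
    return 100.0 * correct / len(targets)
```

Omitted segments would be represented as empty productions in the aligned lists, so omission errors lower the same score.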

Keywords: hearing impairment, phonology, vowel, consonant

Procedia PDF Downloads 216
2899 Duration of Isolated Vowels in Infants with Cochlear Implants

Authors: Paris Binos

Abstract:

The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. First isolated vowels appear during the protophonic stage as evidence of increasing motor and linguistic control. Vowel duration is a crucial factor in the transition from prelexical speech to normal adult speech. Despite current knowledge of data for infants with normal hearing, more research is needed to unravel production skills in early implanted children. Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities and had unknown deafness etiology. Prior to implantation the infants had an average unaided hearing loss of 95-110 dB HL, while the post-implantation PTA decreased to 10-38 dB HL. The current research offers a methodology for processing prelinguistic productions based on a combination of acoustical and auditory analyses. Within this methodological framework, duration was measured on wideband spectrograms from the onset of voicing to the end of the vowel, the end being marked by two co-occurring events: 1) the onset of aperiodicity, with a rapid change in amplitude in the waveform, and 2) a loss of formant energy. Cut-off levels of significance were set at 0.05 for all tests. Bonferroni post hoc tests indicated that the difference between the mean vowel durations of infants wearing CIs and their normal hearing peers was significant: the mean vowel duration of the CI group was longer than that of their normal hearing peers (p < 0.001). The current longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data on the Greek language. The weakness in CI performance described above is a challenge for future work in speech processing and CI processing strategies.
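The paper measures duration on wideband spectrograms from voicing onset to the loss of formant energy. As a rough illustration of the idea only, a crude energy-envelope estimator (the frame size and relative threshold here are arbitrary choices, not the authors' spectrographic criteria) could be:

```python
import numpy as np

def vowel_duration(signal, fs, frame_ms=10.0, rel_thresh=0.1):
    """Estimate vowel duration as the span where short-time RMS energy
    stays above a fraction of its peak -- a crude stand-in for the
    spectrogram-based onset/offset criteria described in the paper."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    above = np.where(rms > rel_thresh * rms.max())[0]
    if len(above) == 0:
        return 0.0
    # duration from first to last supra-threshold frame, in seconds
    return (above[-1] - above[0] + 1) * frame / fs
```

On a synthetic token (silence, 0.3 s of periodic signal, silence) the estimator recovers the 0.3 s vowel span.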

Keywords: cochlear implant, duration, spectrogram, vowel

Procedia PDF Downloads 231
2898 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition

Authors: Fawaz S. Al-Anzi, Dia AbuZeina

Abstract:

Speech recognition makes an important contribution to new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field there are several methods for extracting speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for identifying the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to this good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps performed to obtain these coefficients using the HTK toolkit.
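The paper computes MFCCs with the HTK toolkit; the standard pipeline it refers to (pre-emphasis, framing and windowing, power spectrum, mel filterbank, log compression, DCT) can be sketched in NumPy as follows. All parameter values here are common textbook defaults, not HTK's exact configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs, n_fft=512, frame_len=400, hop=160, n_mels=26, n_ceps=13):
    """Textbook MFCC pipeline; returns an (n_frames, n_ceps) matrix."""
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])  # pre-emphasis
    n_frames = 1 + (len(emph) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)                    # windowed frames
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft       # power spectrum
    # triangular mel filterbank between 0 Hz and the Nyquist frequency
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    logmel = np.log(power @ fbank.T + 1e-10)                       # log mel energies
    k, n = np.arange(n_ceps)[:, None], np.arange(n_mels)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * n_mels))           # DCT-II basis
    return logmel @ dct.T
```

For a one-second signal at 16 kHz with these settings, the output has 98 frames of 13 coefficients each.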

Keywords: speech recognition, acoustic features, mel frequency, cepstral coefficients

Procedia PDF Downloads 227
2897 EEG and ABER Abnormalities in Children with Speech and Language Delay

Authors: Bharati Mehta, Manish Parakh, Bharti Bhandari, Sneha Ambwani

Abstract:

Speech and language delay (SLD) is commonly seen as a co-morbidity in children with severe, resistant, focal and generalized, syndromic and symptomatic epilepsies. It is, however, not clear whether epilepsy contributes to, or is a mere association in, the pathogenesis of SLD. It is also acknowledged that Auditory Brainstem Evoked Responses (ABER), besides being used to evaluate hearing thresholds, aid in the prognostication of neurological disorders and of abnormalities in the hearing pathway in the brainstem. There is no circumscribed or surrogate neurophysiologic laboratory marker for judging the extent of SLD. The current study was designed to evaluate abnormalities in Electroencephalography (EEG) and ABER in children with SLD who do not have an overt hearing deficit or autism. 94 children aged 2-8 years with predominant SLD and without any gross motor developmental delay, head injury, gross hearing disorder, cleft lip/palate, or autism were selected. Standard video electroencephalography using the international 10:20 system, and ABER after click stimuli at intensities from 110 dB down to 40 dB, were performed in all children. EEG was abnormal in 47.9% (n=45; 36 boys and 9 girls) of the children. Among the children with abnormal EEG, 64.5% (n=29) had an abnormal background, 57.8% (n=27) had generalized interictal epileptiform discharges (IEDs), 20% (n=9) had focal epileptiform discharges exclusively from the left side, and 33.3% (n=15) had multifocal IEDs occurring in isolation or associated with generalized abnormalities. In ABER, surprisingly, the peak latencies for waves I, III, and V, the inter-peak latencies I-III, I-V, and III-V, and the wave amplitude ratio V/I were found to be within normal limits in both ears of all the children. Thus, in the current study, generalized IEDs on EEG were seen with higher frequency in SLD, and focal IEDs were seen exclusively in the left hemisphere in these children. It is possible that, even with generalized EEG abnormalities present, left hemispheric abnormalities, as part of this generalized dysfunction, are responsible for the speech and language dysfunction. The current study also suggests that ABER need not be routinely recommended as a diagnostic or prognostic tool in children with SLD without a frank hearing deficit or autism, thus reducing the burden on electrophysiologists and laboratories and saving time and financial resources.

Keywords: ABER, EEG, speech, language delay

Procedia PDF Downloads 485
2896 A Comparative Study on Vowel Articulation in Malayalam Speaking Children Using Cochlear Implant

Authors: Deepthy Ann Joy, N. Sreedevi

Abstract:

Hearing impairment (HI) identified at an early age, before the onset of language development, allows intervention that reduces its negative effect on the speech and language development of children. Early rehabilitation is very important for the improvement of speech production in children with HI. Besides conventional hearing aids, cochlear implants are used in the rehabilitation of children with HI. However, delays in the acquisition of speech and language milestones persist in children with a cochlear implant (CI). Delays in speech milestones are reflected in speech sound errors, and these errors reflect the temporal and spectral characteristics of speech. Hence, acoustical analysis of speech sounds provides a better representation of speech production skills in children with CI. The present study aimed at investigating the acoustic characteristics of vowels in Malayalam-speaking children with a cochlear implant. The participants of the study consisted of 20 Malayalam-speaking children aged four to seven years. The experimental group consisted of 10 children with CI, and the control group consisted of 10 typically developing children. Acoustic analysis was carried out for 5 short (/a/, /i/, /u/, /e/, /o/) and 5 long vowels (/a:/, /i:/, /u:/, /e:/, /o:/) in word-initial position. The responses were recorded and analyzed for acoustic parameters such as vowel duration, the ratio of the durations of short and long vowels, formant frequencies (F₁ and F₂), and the Formant Centralization Ratio (FCR), computed using the formula (F₂u+F₂a+F₁i+F₁u)/(F₂i+F₁a). The findings of the present study indicated that vowel durations were higher in the experimental group compared to the control group for all vowels except /u/. The ratio of short to long vowel duration was also higher in the experimental group, except for /i/. Further, F₁ for all vowels was higher in the experimental group, with variability noticed in the F₂ values. FCR was found to be higher in the experimental group, indicating vowel centralization. However, the results of independent t-tests revealed no significant difference between the groups on any of the parameters, suggesting that the spectral and temporal measures in children with CI moved towards the normal range. This result emphasizes the significance of early rehabilitation in children with hearing impairment. The role of rehabilitation-related aspects is also discussed in detail, which can be incorporated clinically for the betterment of speech therapy services for children with CI.
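The FCR formula quoted in the abstract, (F₂u+F₂a+F₁i+F₁u)/(F₂i+F₁a), is straightforward to compute from per-vowel mean formants. A small helper, with an illustrative (made-up) comparison between a dispersed and a centralized vowel space:

```python
def fcr(f1, f2):
    """Formant Centralization Ratio from corner-vowel formants.

    `f1` and `f2` map the corner vowels 'a', 'i', 'u' to mean first and
    second formant frequencies in Hz. Higher FCR values indicate a more
    centralized (compressed) vowel space.
    """
    return (f2['u'] + f2['a'] + f1['i'] + f1['u']) / (f2['i'] + f1['a'])

# Illustrative values only (not data from the study):
dispersed = fcr(f1={'a': 800, 'i': 300, 'u': 350},
                f2={'a': 1300, 'i': 2300, 'u': 900})
centralized = fcr(f1={'a': 600, 'i': 400, 'u': 450},
                  f2={'a': 1400, 'i': 1800, 'u': 1200})
```

A well-dispersed vowel space yields an FCR below 1, while centralization pushes the ratio upward, which is the direction of the group difference reported above.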

Keywords: acoustics, cochlear implant, Malayalam, vowels

Procedia PDF Downloads 112
2895 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, and camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems, lip reading among them. Moreover, it is well known that visual speech information obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. A lip reading system is therefore one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area; the need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, teaching how to pronounce words precisely by identifying the features of lip movement. The proposed research develops a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used to find the class parameters and class numbers in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information from lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application, and it can also be useful for normal hearing persons in noisy environments or in conditions where they need to find out what other people said without hearing their voices.
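Of the feature-extraction stages named above, the 2D-DCT step is the easiest to illustrate. A possible NumPy sketch that keeps the low-frequency coefficients of a grayscale lip region in zig-zag order (the ACM-based ROI extraction is assumed to have happened already, and the coefficient count is an arbitrary choice, not the authors' setting):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def lip_dct_features(roi, n_coeffs=15):
    """2D-DCT of a grayscale lip ROI; the low-frequency top-left
    coefficients (in zig-zag order) form the feature vector."""
    h, w = roi.shape
    # separable 2D-DCT: DCT along columns, then along rows
    D = dct_matrix(h) @ roi @ dct_matrix(w).T
    coords = sorted(((i, j) for i in range(h) for j in range(w)),
                    key=lambda p: (p[0] + p[1], p[0]))   # zig-zag scan
    return np.array([D[i, j] for i, j in coords[:n_coeffs]])
```

Because the basis is orthonormal, the transform preserves energy, and a constant image concentrates all of its energy in the DC coefficient — two quick sanity checks on the implementation.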

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 259
2894 Thoughts Regarding Interprofessional Work between Nurses and Speech-Language-Hearing Therapists in Cancer Rehabilitation: An Approach for Dysphagia

Authors: Akemi Nasu, Keiko Matsumoto

Abstract:

Rehabilitation for cancer requires setting up individual goals for each patient and, in practice, an approach that properly fits the stage of the cancer. In order to cope with daily changes in the patients' condition, the establishment of a good cooperative relationship between nurses and the physiotherapists, occupational therapists, and speech-language-hearing therapists (therapists) is essential. This study focuses on the present state of cooperation between nurses and therapists, especially speech-language-hearing therapists, and aims to elucidate what develops within it. A semi-structured interview was conducted with a speech-language-hearing therapist having practical experience of working in collaboration with nurses. The contents of the interview were transcribed and converted to data, and the data were encoded and categorized with sequentially increasing degrees of abstraction to conduct a qualitative explorative factor analysis. When providing ethical explanations, particular care was taken to ensure that participants would not be subjected to any disadvantages as a result of participating in the study. They were also informed that their privacy would be ensured, that they had the right to decline to participate, and that the results of the study would be announced publicly at an applicable nursing academic conference. This study was approved following application to the ethical committee of the university with which the researchers are affiliated. The survey participant is a female speech-language-hearing therapist in her forties. As a result of the analysis, 6 categories were extracted: 'measures to address appetite and aspiration pneumonia prevention', 'limitation of the care a therapist alone could provide', 'the all-inclusive patient-supportive care provided by nurses', 'expanding the beneficial cooperation with nurses', 'providing education for nurses on the swallowing function utilizing videofluoroscopic examination of swallowing', and 'enhancement of communication, including conferences'. In order to improve team performance, and for the teamwork competency necessary for the provision of safer care, mutual support is essential. As for the cooperation between nurses and therapists, this survey indicates that maturing the cooperation between professionals, in order to improve nursing professionals' knowledge and enhance communication, will lead to an improvement in the quality of rehabilitation for cancer.

Keywords: cancer rehabilitation, nurses, speech-language-hearing therapists, interprofessional work

Procedia PDF Downloads 109
2893 Hearing Conservation Aspects of Soldier’s Exposure to Harmfull Noise within Military Armored Vehicles

Authors: Fink Nir

Abstract:

Soldiers within armored vehicles are exposed to continuous noise reaching levels as high as 120 dB. The use of hearing protection devices (HPDs) may attenuate noise by as much as 25 dB, but the attenuated noise reaching the ear is still harmful and may result in hearing loss. Hearing conservation programs in the military suggest methods to manage the harmful effects of noise. These include noise absorption within vehicles, evaluating HPD performance, limiting exposure time, and providing guidance.
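Limiting exposure time, mentioned above, is usually computed from a criterion level and an exchange rate. As an illustration only (using the NIOSH-style 85 dBA / 3 dB halving convention, which the abstract does not itself specify), the permissible duration at the cited levels can be estimated:

```python
def permissible_hours(level_db, criterion_db=85.0, exchange_db=3.0):
    """Permissible daily exposure duration (hours) under a NIOSH-style
    criterion: 8 h at `criterion_db`, halved for every `exchange_db`
    increase in level."""
    return 8.0 / (2.0 ** ((level_db - criterion_db) / exchange_db))

unprotected = permissible_hours(120.0)        # continuous vehicle noise
protected = permissible_hours(120.0 - 25.0)   # with 25 dB of HPD attenuation
```

Under this convention, 120 dB allows only a few seconds of exposure per day, while 25 dB of attenuation extends the permissible duration to well under an hour — consistent with the abstract's point that attenuated noise can still be harmful.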

Keywords: armored vehicle noise, hearing loss, hearing protection devices, military noise, noise attenuation

Procedia PDF Downloads 112
2892 Healthcare in COVID-19 and It’s Impact on Children with Cochlear Implants

Authors: Amirreza Razzaghipour, Mahdi Khalili

Abstract:

Recommendations from the World Health Organization and the Centers for Disease Control for slowing the spread of the novel COVID-19 include social distancing, frequent handwashing, and covering your mouth when around others. As hearing healthcare specialists, it was important for us to understand the influence of being required to limit social interactions on persons with hearing impairment. We found ourselves delaying cochlear implant (CI) surgeries. All children, and chiefly those with hearing loss, are susceptible to reductions in spoken communication. Hearing devices, such as cochlear implants, provide children with hearing loss access to spoken communication and support language development. When provided early and used consistently, these devices help children with hearing loss to engage in spoken interactions. A cochlear implant is the standard medical-surgical treatment for bilateral severe to profound hearing loss that receives no benefit from a hearing aid. Hearing is one of the most important senses in humans, and pediatric hearing loss constitutes one of the most important public health challenges. Children with hearing loss are identified early and habilitated with hearing aids or cochlear implants (CIs). Suitable care and maintenance, as well as continuous auditory-verbal therapy (AVT), are also essential for the successful attainment of language acquisition. Children with hearing loss pose important challenges to their parents, particularly when there is limited access to their hearing care providers. The disruption in the routine of their hearing and therapy follow-up services has had substantial effects on the children as well as their parents.

Keywords: healthcare, covid-19, cochlear implants, spoken communication, hearing loss

Procedia PDF Downloads 119
2891 Robust Noisy Speech Identification Using Frame Classifier Derived Features

Authors: Punnoose A. K.

Abstract:

This paper presents an approach for identifying noisy speech recordings using a multi-layer perceptron (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean speech and noisy speech at the frame level. Appropriate density functions are fitted to the softmax probabilities of clean and noisy speech, and a function is formulated that takes into account the ratio of the softmax probability density of noisy speech to that of clean speech. This phoneme-independent score is weighted using phoneme-specific weights to make the scoring more robust. Simple thresholding is then used to separate noisy speech recordings from clean ones. The approach is benchmarked on standard databases, with a focus on precision.
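The scoring function described above is not given explicitly in the abstract. One plausible reading, with Gaussian density fits and per-phoneme weights standing in for whatever the authors actually used, is:

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def noisy_score(frame_probs, frame_phonemes, clean_fit, noisy_fit, weights):
    """Per-recording noisiness score: phoneme-weighted mean of the log
    ratio of the noisy-speech to clean-speech density of the MLP's top
    softmax probability for each frame. `clean_fit`/`noisy_fit` are
    (mu, sigma) pairs fitted offline; `weights` maps phoneme labels to
    phoneme-specific weights (default 1.0)."""
    ratios = (gaussian_logpdf(frame_probs, *noisy_fit)
              - gaussian_logpdf(frame_probs, *clean_fit))
    w = np.array([weights.get(p, 1.0) for p in frame_phonemes])
    return float(np.sum(w * ratios) / np.sum(w))

def is_noisy(score, threshold=0.0):
    """Simple thresholding on the recording-level score."""
    return score > threshold
```

With clean frames clustering near high softmax confidence and noisy frames near lower, flatter confidence, the log-ratio score is negative for clean recordings and positive for noisy ones, so a zero threshold separates them in this toy setup.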

Keywords: noisy speech identification, speech pre-processing, noise robustness, feature engineering

Procedia PDF Downloads 94
2890 An Analysis of Illocutionary Acts in Martin Luther King Jr.'s Propaganda Speech Entitled 'I Have a Dream'

Authors: Mahgfirah Firdaus Soberatta

Abstract:

Language cannot be separated from human life. Humans use language to convey ideas, thoughts, and feelings, and we can use words to do different things, for example asserting, advising, promising, giving opinions, or expressing hopes. Propaganda is an attempt to obtain stable behavior and to adapt it to everyone's everyday life; it also exerts lasting control over the thoughts and attitudes of individuals in social settings. In this research, the writer discusses the speech acts in a propaganda speech delivered by Martin Luther King Jr. in Washington at the Lincoln Memorial on August 28, 1963. 'I Have a Dream' is a public speech delivered by the American civil rights activist MLK, in which he calls for an end to racism in the USA. The writer uses Searle's theory to analyze the types of illocutionary speech act used by Martin Luther King Jr. in his propaganda speech, applying a qualitative descriptive method, because the research aims to describe and explain those types. The findings indicate that all five types of speech act appear in Martin Luther King Jr.'s speech. MLK also used both direct and indirect speech acts in his propaganda speech; however, direct speech acts are the dominant type. It is hoped that this research is useful for readers wishing to enrich their knowledge in the pragmatic field of speech acts.

Keywords: speech act, propaganda, Martin Luther King Jr., speech

Procedia PDF Downloads 408
2889 Hearing Aids Maintenance Training for Hearing-Impaired Preschool Children with the Help of Motion Graphic Tools

Authors: M. Mokhtarzadeh, M. Taheri Qomi, M. Nikafrooz, A. Atashafrooz

Abstract:

The purpose of the present study was to investigate the effectiveness of motion graphics, as a learning medium, in training hearing-impaired children in hearing aid maintenance skills. The statistical population of this study consisted of all children with hearing loss in the city of Ahvaz, aged 4 to 7 years. As the sample, 60 children, selected by multistage random sampling, were randomly assigned to two groups: an experimental group (30 children) and a control group (30 children). The research method was experimental, with a pretest-posttest control group design. The intervention consisted of a 2-minute motion graphics clip teaching hearing aid maintenance skills. Data were collected using a 9-question researcher-made questionnaire and analyzed with one-way analysis of covariance. The results showed that training hearing aid maintenance skills with motion graphics was significantly effective for these children. The results of this study can be used by educators, teachers, professionals, and parents to train children with disabilities as well as typical students.

Keywords: hearing aids, hearing aids maintenance skill, hearing impaired children, motion graphics

Procedia PDF Downloads 116
2888 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and of children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those prone to struggle with speech perception in noise. This research therefore investigates two groups vulnerable to these environmental effects, namely children with mild to moderate hearing loss (MMHL) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and a number of receptive language and cognitive measures (auditory working memory, auditory attention), and correlations among these were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of these correlations is discussed. In the bilingual study, sequential bilingual children’s listening comprehension, speech-in-noise perception, listening effort, and release from masking were evaluated under a number of ecologically valid acoustic scenarios, in order to pinpoint the extent of the ‘native language benefit’ for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and the introduction of spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 305
2887 From “Deafness” to “Aphasia”: A Study of Social Retreat of Families With Hearing-Impaired Children. Interviews with 12 Mothers of Hearing-Impaired Children in Beijing

Authors: Yang Jiayue, Liang Junwen

Abstract:

Case interviews with 12 mothers of hearing-impaired children in Beijing found that families with hard-of-hearing children make all-out efforts toward rescue and rehabilitation until they exhaust the family's resources: they travel between workplaces, homes, and rehabilitation institutions every day, and their social relations gradually shrink inward. In a stigmatizing social environment, children with hearing impairment find it difficult to integrate into ordinary communication and gradually retreat into a silent world; their parents also suffer from social discrimination and affiliate stigma and gradually reduce contact with others. Finally, families with hearing-impaired children become invisible people in society; the "deafness" of the children leads the families to a "loss of voice."

Keywords: family with hearing-impaired children, social retreat, social exclusion, stigma

Procedia PDF Downloads 43