Search results for: speech act
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 774

594 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided tools for sigmatism diagnosis. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic evaluation of sibilant pronunciation. The study covers analysis of various speech signal measures, including features proposed in the literature for describing normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM) and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with the selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed to compare sigmatic and normative pronunciation. Statistically significant differences (p < 0.05) are obtained for most of the proposed features between the two groups of children. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. The correspondences found between phoneme feature values and expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
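As a rough illustration of the kind of feature extraction and statistical testing described above, the following is a minimal Python sketch (not the authors' implementation): it computes MFCCs with first and second derivatives plus normalized spectral moments for sibilant segments and compares each feature across the two groups with the Mann-Whitney U test. File names, sampling rate, and the per-segment averaging are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): MFCC + delta features and spectral
# moments for sibilant segments, compared across groups with Mann-Whitney U.
import numpy as np
import librosa
from scipy import stats

def sibilant_features(path, sr=44100):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.concatenate([
        mfcc.mean(axis=1),                                 # 13 MFCC
        librosa.feature.delta(mfcc).mean(axis=1),          # 13 Delta MFCC
        librosa.feature.delta(mfcc, order=2).mean(axis=1)  # 13 Delta-Delta MFCC
    ])
    # Normalized spectral moments of the friction-noise spectrum
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1.0 / sr)
    p = spec / spec.sum()                       # treat spectrum as a distribution
    m1 = np.sum(freqs * p)                      # centroid
    m2 = np.sum(((freqs - m1) ** 2) * p)        # variance
    skew = np.sum(((freqs - m1) ** 3) * p) / m2 ** 1.5
    kurt = np.sum(((freqs - m1) ** 4) * p) / m2 ** 2
    return np.concatenate([feats, [m1, np.sqrt(m2), skew, kurt]])

# Hypothetical file lists for normative and pathological /ʃ/ segments
normative = [sibilant_features(f) for f in ["norm_01.wav", "norm_02.wav"]]
pathological = [sibilant_features(f) for f in ["path_01.wav", "path_02.wav"]]

# Test each feature dimension for a group difference at alpha = 0.05
for i in range(len(normative[0])):
    u, p_val = stats.mannwhitneyu([v[i] for v in normative],
                                  [v[i] for v in pathological])
    if p_val < 0.05:
        print(f"feature {i}: U = {u:.1f}, p = {p_val:.3f}")
```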

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 301
593 Part of Speech Tagging Using a Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part-of-speech (POS) tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. Training and testing data sets are randomly separated from an annotated Nepali corpus. Both methods are applied to the data sets. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach. An accuracy of 95.43% is achieved using the Viterbi algorithm. An error analysis of where the mismatches occurred is discussed in detail.
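For readers unfamiliar with Viterbi decoding over HMM parameters, the following is a generic sketch of the technique (not the author's Nepali implementation); the toy tags, vocabulary, and probabilities are invented for demonstration only.

```python
# Generic Viterbi decoding sketch for HMM-based POS tagging
# (toy transition/emission probabilities; not trained on Nepali data).
import numpy as np

tags = ["NOUN", "VERB"]
vocab = {"ram": 0, "khanchha": 1}          # hypothetical tokens
start = np.log([0.7, 0.3])                 # P(tag | start)
trans = np.log([[0.4, 0.6],                # P(tag_t | tag_{t-1})
                [0.8, 0.2]])
emit = np.log([[0.9, 0.1],                 # P(word | tag)
               [0.2, 0.8]])

def viterbi(words):
    obs = [vocab[w] for w in words]
    n, k = len(obs), len(tags)
    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    score[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for j in range(k):
            cand = score[t - 1] + trans[:, j] + emit[j, obs[t]]
            back[t, j] = int(np.argmax(cand))
            score[t, j] = cand[back[t, j]]
    best = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):          # follow back-pointers
        best.append(back[t, best[-1]])
    return [tags[i] for i in reversed(best)]

print(viterbi(["ram", "khanchha"]))        # e.g. ['NOUN', 'VERB']
```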

Keywords: hidden Markov model, natural language processing, POS tagging, Viterbi algorithm

Procedia PDF Downloads 326
592 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. It extends the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) and by incorporating re-assigned spectrogram methodologies, which enable detailed acoustic analysis and improve the precision of vowel formant estimation. The proposed combined CNN-RNN model, equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, captures intricate dependencies within Italian speech vowels and demonstrates high precision and robustness in landmark detection, yielding a substantial performance improvement over conventional methods. In addition, an advanced temporal modeling approach employs Bayesian temporal encoding to refine the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on the LaMIT database, consisting of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as a front end in a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advancement in Italian speech vowel landmark detection. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels and by establishing a foundation for future advancements in speech signal processing and practical applications requiring robust speech recognition systems.
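The abstract does not give architectural details, but a minimal PyTorch sketch of the combined CNN-RNN idea (a convolutional front end over spectrogram frames followed by a recurrent layer emitting per-frame landmark probabilities) might look like the following; all layer sizes, the bidirectional GRU, and the sigmoid frame-wise output are assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a combined CNN-RNN per-frame landmark detector
# (layer sizes and the use of a bidirectional GRU are assumptions,
# not the architecture reported in the paper).
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    def __init__(self, n_mels=80, hidden=128):
        super().__init__()
        # CNN front end over the (reassigned) spectrogram: (batch, 1, mels, frames)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),             # pool over frequency only
        )
        self.rnn = nn.GRU(32 * (n_mels // 2), hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-frame landmark probability

    def forward(self, spec):                   # spec: (batch, mels, frames)
        x = self.cnn(spec.unsqueeze(1))        # (batch, 32, mels//2, frames)
        x = x.permute(0, 3, 1, 2).flatten(2)   # (batch, frames, 32 * mels//2)
        x, _ = self.rnn(x)
        return torch.sigmoid(self.head(x)).squeeze(-1)  # (batch, frames)

model = CnnRnnLandmarkDetector()
frame_probs = model(torch.randn(2, 80, 200))   # dummy batch of 2 spectrograms
print(frame_probs.shape)                        # torch.Size([2, 200])
```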

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63
591 The Influence of Neural Synchrony on Auditory Middle Latency and Late Latency Responses and Its Correlation with Audiological Profile in Individuals with Auditory Neuropathy

Authors: P. Renjitha, P. Hari Prakash

Abstract:

Auditory neuropathy spectrum disorder (ANSD) is an auditory disorder with normal cochlear outer hair cell function and disrupted auditory nerve function. It results in a unique clinical profile: absent auditory brainstem response (ABR), absent acoustic reflexes, and preserved otoacoustic emissions (OAE) and cochlear microphonics. The lesion site could be at the cochlear inner hair cells, the synapse between the inner hair cells and type I auditory nerve fibers, and/or the auditory nerve itself. However, the literature on synchrony at higher levels of the auditory system is sparse and less well understood. It is of interest whether neural synchrony recovers at higher auditory centers, and whether the level at which the auditory system recovers sufficient synchrony to yield observable evoked response potentials (ERPs) can predict speech perception. In the current study, eight ANSD participants and healthy controls underwent detailed audiological assessment including ABR, auditory middle latency response (AMLR), and auditory late latency response (ALLR). AMLR was recorded for clicks, and ALLR was evoked using 500 Hz and 2 kHz tone bursts. Analysis revealed that the participants could be categorized into three groups: Group I (2/8), in which ALLR was present only for the 2 kHz tone burst; Group II (4/8), in which AMLR was absent and ALLR was present for both stimuli; and Group III (2/8), consisting of individuals with identifiable AMLR and ALLR for all stimuli. The highest speech identification score observed in the ANSD group was 30%, which is considered poor speech perception. Overall, the results indicate that the site of neural synchrony recovery varies across individuals with ANSD. Some individuals show recovery of neural synchrony at the thalamocortical level, while others show it only at the cortical level. Within the ALLR itself, variation across stimuli could again be related to neural synchrony. Nevertheless, none of these patterns could possibly explain the speech perception ability of the individuals. Hence, it can be concluded that neural synchrony as measured by evoked potentials may not be a good clinical predictor of speech perception.

Keywords: auditory late latency response, auditory middle latency response, auditory neuropathy spectrum disorder, correlation with speech identification score

Procedia PDF Downloads 149
590 A Stylistic Analysis of the Short Story ‘The Escape’ by Qaisra Shahraz

Authors: Huma Javed

Abstract:

Stylistics is a broad field concerned with both literature and linguistics, which adds to its significance. This research aims to analyze Qaisra Shahraz's short story 'The Escape' from a stylistic viewpoint. The focus is on three aspects of the short story: grammatical categories, lexical categories, and figures of speech. The research design is both exploratory and descriptive. The analysis of the data shows that the writer has used more nouns than other lexical items in the story, which suggests that the story has a descriptive rather than a narrative style.

Keywords: The Escape, stylistics, grammatical category, lexical category, figure of speech

Procedia PDF Downloads 237
589 Imprecise Vowel Articulation in Down Syndrome: An Acoustic Study

Authors: Anitha Naittee Abraham, N. Sreedevi

Abstract:

Individuals with Down syndrome (DS) have relatively better expressive language compared to other individuals with intellectual disabilities. Reduced speech intelligibility is one of the major concerns for this group due to their anatomical and physiological differences. The study investigated the vowel articulation of Malayalam-speaking children with DS in the age range of 5-10 years. The vowel production of 10 children with DS was compared with that of typically developing children in the same age range. Vowels were extracted from 3 words with the corner vowels /a/, /i/ and /u/ in word-initial position, using Praat (version 5.3.23) software. Acoustic analysis was based on vowel space area (VSA), formant centralization ratio (FCR) and F2i/F2u. The findings revealed increased formant values for the control group except for F2a and F2u. Also, the experimental group had higher FCR and lower VSA and F2i/F2u values, suggestive of imprecise vowel articulation due to restricted tongue movements. The results of the independent t-test revealed a significant difference in F1a, F2i, F2u, VSA, FCR and F2i/F2u values between the experimental and control groups. These findings support the view that children with DS have imprecise vowel articulation that interferes with overall speech intelligibility. Hence it is essential to target oromotor skills to enhance speech intelligibility, which in turn benefits these individuals in social and vocational domains.
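For reference, the derived measures named above can be computed directly from corner-vowel formants. The sketch below uses commonly cited definitions of the triangular vowel space area and the formant centralization ratio; the abstract does not state the exact formulas used, so these definitions and the example formant values are assumptions.

```python
# Sketch of the derived vowel metrics from corner-vowel formants (Hz),
# using commonly cited definitions; the study's exact formulas are assumed.
def vowel_metrics(F1a, F2a, F1i, F2i, F1u, F2u):
    # Triangular vowel space area from the corner vowels /a/, /i/, /u/
    vsa = 0.5 * abs(F1i * (F2a - F2u) + F1a * (F2u - F2i) + F1u * (F2i - F2a))
    # Formant centralization ratio: higher values indicate more centralized vowels
    fcr = (F2u + F2a + F1i + F1u) / (F2i + F1a)
    return {"VSA": vsa, "FCR": fcr, "F2i/F2u": F2i / F2u}

# Illustrative adult-like formant values (not data from the study)
print(vowel_metrics(F1a=850, F2a=1400, F1i=300, F2i=2300, F1u=350, F2u=900))
```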

Keywords: Down syndrome, FCR, vowel articulation, vowel space

Procedia PDF Downloads 185
588 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe

Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran

Abstract:

A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes. However, frequent reports of unauthorised persons gaining access to security safes and removing documents and items indicate that there is still a security gap in the technologies currently used for access control. In this paper, we try to solve this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed through the combination of face and speech pattern recognition, applied in that sequential order. User authentication is achieved through a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale-Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted on two separate web-based servers, and for automatic analysis, the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to grant access to authorised users while denying unauthorised persons access to the security safe.
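The original system combined SIFT-based face matching and MFCC-based speaker verification in MATLAB. The sketch below shows the same sequential face-then-voice gating in Python, substituting a simple DTW comparison of MFCC templates for the trained speech recogniser; file names, thresholds, and the DTW step are assumptions rather than the authors' implementation.

```python
# Hypothetical Python sketch of the sequential face-then-voice check
# (the original system was implemented in MATLAB; names are illustrative).
import cv2
import librosa
import numpy as np

def face_matches(probe_img_path, template_img_path, min_good_matches=30):
    """Stage 1: SIFT keypoint matching between probe face and enrolled template."""
    sift = cv2.SIFT_create()
    probe = cv2.imread(probe_img_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_img_path, cv2.IMREAD_GRAYSCALE)
    _, des_p = sift.detectAndCompute(probe, None)
    _, des_t = sift.detectAndCompute(template, None)
    matches = cv2.BFMatcher().knnMatch(des_p, des_t, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good) >= min_good_matches

def voice_matches(probe_wav, template_wav, max_distance=60.0):
    """Stage 2: compare MFCC templates of the spoken pass-phrase with DTW."""
    def mfcc(path):
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    D, _ = librosa.sequence.dtw(X=mfcc(probe_wav), Y=mfcc(template_wav))
    return D[-1, -1] / D.shape[0] < max_distance   # normalised DTW cost

def grant_access(face_img, voice_wav, enrolled_face, enrolled_voice):
    # Sequential order: voice is only checked if the face stage succeeds.
    return face_matches(face_img, enrolled_face) and voice_matches(voice_wav, enrolled_voice)
```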

Keywords: access control, multimodal biometrics, pattern recognition, security safe

Procedia PDF Downloads 334
587 Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals

Authors: Surbhi Mathur, J. M. Vyas

Abstract:

Although forensic speaker recognition technology has developed rapidly, many problems remain to be solved. The biggest problem arises when cases involving disguised voice samples are submitted for examination and identification. Voice samples of anonymous callers are frequently encountered in crimes involving kidnapping, blackmail, hoax extortion and many more, where the speaker makes a deliberate effort to manipulate their natural voice in order to conceal their identity for fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speaker and thus complicates the process of identification. The objective of this doctoral project is to find out whether definite opinions can be rendered in cases involving disguised speech. This is done by experimentally determining the effects of different disguise forms on personal identification and on the rate of speaker recognition for various voice disguise techniques such as raised pitch, lowered pitch, increased nasality, covering the mouth, constricting the vocal tract, and placing an obstacle in the mouth. The amount of phonetic and acoustic variation between disguised and natural samples of an individual is analyzed and compared using auditory as well as spectrographic analysis.

Keywords: forensic, speaker recognition, voice, speech, disguise, identification

Procedia PDF Downloads 368
586 Human Computer Interaction Using Computer Vision and Speech Processing

Authors: Shreyansh Jain Jeetmal, Shobith P. Chadaga, Shreyas H. Srinivas

Abstract:

The Internet of Things (IoT) is seen as the next major step in the ongoing revolution in the Information Age. It is predicted that in the near future billions of embedded devices will be communicating with each other to perform a plethora of tasks with or without human intervention. One of the major hotbeds of ongoing research activity in IoT is Human-Computer Interaction (HCI). HCI is used to facilitate communication between an intelligent system and a user. An intelligent system typically consists of various sensors, actuators and embedded controllers which communicate with each other to monitor data collected from the environment. Communication from the user to the system is typically done using voice. One of the major ongoing applications of HCI is in home automation as a personal assistant. The prime objective of our project is to implement a use case of HCI for home automation. Our system is designed to detect and recognize the users and personalize the appliances in the house according to their individual preferences. Our HCI system is also capable of responding verbally when certain commands are spoken, such as searching the web for information and controlling appliances. Our system can also monitor the environment in the house, such as air quality and gas leakages, for added safety.

Keywords: human computer interaction, internet of things, computer vision, sensor networks, speech to text, text to speech, android

Procedia PDF Downloads 362
585 The Effect of the Computer-Based Method on Repetitive Behaviors and Communication Skills

Authors: Hoorieh Darvishi, Rezaei

Abstract:

Introduction: This study investigates the efficacy of computer-based interventions for children with Autism Spectrum Disorder, specifically targeting communication deficits and repetitive behaviors. The research evaluates novel software applications designed to enhance narrative capabilities and sensory integration through structured, progressive intervention protocols. Method: The study evaluated two intervention software programs designed for children with autism, focusing on narrative speech and sensory integration. Twelve children aged 5-11 participated in the two-month intervention, attending three 45-minute weekly sessions, with pre- and post-tests measuring speech, communication, and behavioral outcomes. The narrative speech software incorporated 14 stories using the Cohen model. It progressively reduced software assistance as children improved their storytelling abilities, ultimately enabling independent narration. The process involved story comprehension questions and guided story completion exercises. The sensory integration software featured approximately 100 exercises progressing from basic classification to complex cognitive tasks. The program included attention exercises, auditory memory training (advancing from single to four-syllable words), problem-solving, decision-making, reasoning, working memory, and emotion recognition activities. Each module was accompanied by frequency- and pitch-adjusted music that the child enjoys, to enhance learning through multiple sensory channels (visual, auditory, and tactile). Conclusion: The results indicated that the use of these software programs significantly improved communication and narrative speech scores in children, while also reducing scores related to repetitive behaviors. Findings: These findings highlight the positive impact of computer-based interventions on enhancing communication skills and reducing repetitive behaviors in children with autism.

Keywords: autism, narrative speech, Persian, SI, repetitive behaviors, communication

Procedia PDF Downloads 9
584 Developing Communicative Skills in Foreign Languages by Video Tasks

Authors: Ekaterina G. Lipatova

Abstract:

The developing potential of a video task in teaching foreign languages involves opportunities to improve four aspects of the speech production process: listening, reading, speaking and writing. A video presents a sequence of actions realized in logically connected pictures and a verbalized speech flow, which simplifies and stimulates the process of perception. In this way, students' listening skills are developed effectively, as well as intellectual abilities such as synthesizing, analyzing and generalizing information. In terms of teaching capacity, a video task, in our opinion, is more stimulating than a traditional listening task, since it involves the student in the plot of the communicative situation and its emotional background, and potentially makes them react to the gist in cognitive and communicative ways. To be an effective teaching method, the video task should be structured according to the psycholinguistic characteristics of the speech production process; in other words, it should include three phases: before-watching, while-watching and after-watching. The system of tasks for each phase might involve reflecting on the video content in the form of gap-filling, multiple-choice and true-or-false tasks (reading skills), and exercises on expressing an opinion or completing a project (writing and speaking skills). In the before-watching phase, students are invited to adjust their perception mechanism to the topic and the problem of the chosen video with tasks such as 'What do you know about such a problem?', 'Is it new for you?', 'Have you ever faced the situation of…?'. Then we proceed with the lexical and grammatical analysis of the language units that form the body of the speech sample, in order to ease perception and develop the student's lexicon. The goal of the while-watching phase is to build the student's awareness of the problem presented in the video and to challenge their inner attitude towards what they have seen, by identifying mistakes in statements about the video content or making a summary that justifies their understanding. Finally, we move on to the development of their speech skills within the communicative situation they observed and learnt, by stimulating them to find similar ideas in their own backgrounds and present them orally or in written form, or to express their own opinion on the problem. It should be highlighted that a video task should contain an urgent, valid and interesting event related to the student's future profession, since this helps to activate students' cognitive, emotional, verbal and ethical capacities. Also, logically structured video tasks are easily integrated into e-learning systems and give students the opportunity to work with the foreign language on their own.

Keywords: communicative situation, perception mechanism, speech production process, speech skills

Procedia PDF Downloads 245
583 Analyzing Speech Acts in Reddit Posts of Formerly Incarcerated Youths

Authors: Yusra Ibrahim

Abstract:

This study explores the online discourse of justice-involved youths on Reddit, focusing on how anonymity and asynchronicity influence their ability to share and reflect on their incarceration experiences within the "Ask Me Anything" (AMA) community. The study utilizes a quantitative analysis of speech acts to examine the varied communication patterns exhibited by youths and commenters across two AMA threads. The results indicate that, although Reddit is not specifically designed for formerly incarcerated youths, its features provide a supportive environment for them to share their incarceration experiences with non-incarcerated individuals. The level of empathy and support from the audience varies based on the audience's perspectives on incarceration and related traumatic experiences. Additionally, the study identifies a reciprocal relationship in which youths benefit from community support while offering insights into the juvenile justice system and helping the audience understand the experience of incarceration. The study also reveals the culture shock, in both physical and digital environments, that youths experience after release and when using social media platforms and the internet. The study has implications for juvenile justice personnel, policymakers, and researchers in the juvenile justice system.

Keywords: juvenile justice, online discourse, reddit AMA, anonymity, speech acts taxonomy, reintegration, online community support

Procedia PDF Downloads 42
582 Leadership Effectiveness Compared among Three Cultures Using Voice Pitches

Authors: Asena Biber, Ates Gul Ergun, Seda Bulut

Abstract:

The literature contains a large number of studies investigating the relationship between culture and leadership effectiveness. Although giving effective speeches is a vital characteristic for a leader to be perceived as effective, to our knowledge, no research has studied the determinants of a perceived effective leader speech. The aim of this study is to find the effects of both culture and voice pitch on perceptions of the effectiveness of a leader's speech. Our hypothesis is that people from high power distance countries will perceive a leader's speech as effective when the leader's voice pitch is high, compared with people from relatively low power distance countries. The participants of the study were 36 undergraduate students (12 Pakistanis, 12 Nigerians, and 12 Turks) who are studying in Turkey. In national power distance scores, Nigerians ranked first, Turks second and Pakistanis third. There are two independent variables in this study: nationality, with three groups representing three levels of power distance, and the leader's voice pitch, manipulated at high and low levels. The researchers prepared audio recordings to manipulate the high and low voice pitch conditions. A professional whose native language is English read the predetermined speech in the high and low voice pitch conditions. Voice pitch was measured in Hertz (Hz) and decibels (dB). Each nationality group (Pakistani, Nigerian, and Turkish) was divided into groups of six students who listened to either the low or the high pitch condition in the cubicles of the laboratory. Participants were asked to listen to the audio and fill in a questionnaire measuring leadership effectiveness on a response scale ranging from 1 to 5. To determine the effects of nationality and voice pitch on the perceived effectiveness of the leader's speech, a 3 (Pakistani, Nigerian, and Turkish) x 2 (low voice pitch and high voice pitch) two-way between-subjects analysis of variance was carried out. The results indicated that there was no significant main effect of voice pitch and no significant interaction effect on the perceived effectiveness of the leader's speech. However, there was a significant main effect of nationality. Based on the results of Tukey's HSD post-hoc test, only the difference between Pakistanis and Nigerians in the perceived effectiveness of the leader's speech was statistically significant. The results show that the hypothesis of this study was not supported. As limitations of the study, the sample size should be larger, and in further studies the language of the questionnaire and speech should be the participants' native language.
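A 3 x 2 between-subjects ANOVA of this kind can be reproduced with standard statistical tooling. The following is a minimal sketch (with made-up ratings, not the study's data) using statsmodels, followed by Tukey's HSD post-hoc comparison across nationalities.

```python
# Minimal sketch of the 3 (nationality) x 2 (voice pitch) between-subjects
# ANOVA and Tukey HSD post-hoc test; the ratings below are made up.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "nationality": ["Pakistani", "Nigerian", "Turkish"] * 12,
    "pitch": (["high"] * 18) + (["low"] * 18),
    "effectiveness": [3, 4, 2, 4, 5, 3, 3, 4, 2, 4, 4, 3, 3, 5, 2, 4, 4, 3,
                      2, 4, 3, 3, 5, 2, 2, 4, 3, 3, 4, 2, 2, 5, 3, 3, 4, 2],
})

model = ols("effectiveness ~ C(nationality) * C(pitch)", data=df).fit()
print(anova_lm(model, typ=2))                 # main effects and interaction

# Post-hoc comparison of nationalities (collapsing over pitch)
print(pairwise_tukeyhsd(df["effectiveness"], df["nationality"]))
```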

Keywords: culture, leadership effectiveness, power distance, voice pitch

Procedia PDF Downloads 182
581 The Feminine Speech and the Ritual of Death in Albania

Authors: Aida Lamaj

Abstract:

Death is an inevitable phenomenon in our lives, and so is the ritual of death, accompanied by the dirge and the keening performed by people. Keening is a phenomenon common to all peoples; the instances in which the ritual of death and keening coincide, as a special phenomenon of its own, are numerous, given that keening is the outcome of an extremely special emotional state. Even during the ritual of death, every people tries to display its qualities through words, a multitude of characteristics preserved and transmitted zealously from one generation to the next. The ritual of death constitutes an important element of our tradition and, at the same time, material that is always interesting to study in minute detail. In this study, we have limited ourselves to feminine speech, since keening in Albania has generally been carried out by women. Differences and similarities in keening on the national scale, from diachronic and synchronic points of view, can be seen clearly if we compare the Albanian creations in different regions. The similarities and differences within Albanian culture serve as a typical paradigm for studying how the ancient elements of the outlook that Albanians have had on death, history and social organization in these regions have been preserved and transmitted, and above all, how these feelings have been expressed from the linguistic point of view in the typologies of keening and of the whole ritual of death, which clearly show archaic forms as well as new developments. These data have been gathered not only by conducting various surveys but also by closely observing the linguistic behavior of women in Albania during the ritual of death. The study encompasses popular lyric poetry as well as new entries, whereas from the geographic point of view we focus mainly on the southern regions, although examples from other regions where Albanian-speaking people live are also present. The main results of the study show that women use dialect forms, peripheral language elements and descriptive elements much more than men in their speech during the ritual of death.

Keywords: feminine speech in Albania, linguistic characteristics of the dirge, ritual of death, the typologies of keening

Procedia PDF Downloads 162
580 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet the challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered, as opposed to teacher-centered, teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those that are prone to struggle with speech perception in noise. Therefore, this research investigates two groups vulnerable to these environmental effects, namely children with a mild to moderate hearing loss (MMHLs) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception and a number of receptive language and cognitive measures (auditory working memory, auditory attention), and correlations were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of the correlations is discussed. In the bilinguals study, sequential bilingual children's listening comprehension, speech-in-noise perception, listening effort and release from masking were evaluated under a number of ecologically valid acoustic scenarios in order to pinpoint the extent of the 'native language benefit' for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 324
579 Assessment of the Occupancy’s Effect on Speech Intelligibility in Al-Madinah Holy Mosque

Authors: Wasim Orfali, Hesham Tolba

Abstract:

This research investigates the acoustical characteristics of Al-Madinah Holy Mosque. Extensive field measurements were conducted in different locations of Al-Madinah Holy Mosque to characterize its acoustics. Acoustical characteristics are usually evaluated using objective parameters in unoccupied rooms due to practical considerations. However, under normal conditions, room occupancy can alter such characteristics through the additional sound absorption present in the room or the change in signal-to-noise ratio. Based on the acoustic measurements carried out in Al-Madinah Holy Mosque with and without occupancy, and the analysis of these measurements, the existence of acoustical deficiencies has been confirmed.

Keywords: Al-Madinah Holy Mosque, mosque acoustics, speech intelligibility, worship sound

Procedia PDF Downloads 177
578 A Sociolinguistic Study of the Outcomes of Arabic-French Contact in the Algerian Dialect: Tlemcen Speech Community as a Case Study

Authors: R. Rahmoun-Mrabet

Abstract:

It is acknowledged that our style of speaking changes according to a wide range of variables such as gender, setting, the age of both the addresser and the addressee, the conversation topic, and the aim of the interaction. These differences in style are noticeable in monolingual and multilingual speech communities, yet they are more observable in speech communities where two or more codes coexist. The linguistic situation in Algeria reflects a state of bilingualism because of the coexistence of Arabic and French. Nevertheless, like all Arab countries, it is characterized by diglossia, i.e., the coexistence of Modern Standard Arabic (MSA) and Algerian Arabic (AA), the former standing for the 'high variety' and the latter for the 'low variety'. The two varieties are derived from the same source but are used to fulfil distinct functions; that is, MSA is used in the domains of religion, literature, education and formal settings, whereas AA is used in informal settings, in everyday speech. French has strongly affected the Algerian language and culture because of the historical background of Algeria; thus, what can easily be noticed in Algeria is that everyday speech is characterized by code-switching between dialectal Arabic and French or by the use of borrowings. Tamazight is also very present in many regions of Algeria and is the mother tongue of many Algerians, yet it is not used in the west of Algeria, where the study was conducted. The present work, which was conducted in the speech community of Tlemcen, Algeria, aims at depicting some of the outcomes of the contact of Arabic with French, such as code-switching, borrowing and interference. The question asked is whether Algerians are aware of their use of borrowings or not. Three steps are followed in this research: the first is to depict the sociolinguistic situation in Algeria and to describe the linguistic characteristics of the dialect of Tlemcen, which are specific to this city. The second is concerned with data collection. Data have been collected from 57 informants who were given questionnaires and who were then classified according to their age, gender and level of education. Information has also been collected through observation and note-taking. The third step is devoted to analysis. The results obtained reveal that most Algerians are aware of their use of borrowings. The present work clarifies how words are borrowed from French and then adapted to Arabic. It also illustrates the way in which singular words inflect into the plural. The results expose the main characteristics of borrowing as opposed to code-switching. The study also clarifies how interference occurs at the level of nouns, verbs and adjectives.

Keywords: bilingualism, borrowing, code-switching, interference, language contact

Procedia PDF Downloads 276
577 Psycholinguistic Analysis on Stuttering Treatment through Systemic Functional Grammar in Tom Hooper’s The King’s Speech

Authors: Nurvita Wijayanti

Abstract:

The movie The King's Speech is based on a true story about an English king who suffers from stuttering and receives treatment from a therapist so that he can reduce the high frequency of his stuttering. The treatment uses a unique approach that applies linguistic principles. This study shows how language works significantly in treating the stuttering sufferer using a psychological approach. A linguistic study is therefore carried out to analyze the treatment activity. Halliday's Systemic Functional Grammar is used as the main approach in this study, along with a qualitative descriptive method. The study finds that the therapist, though using an orthodox approach, applies the psycholinguistic method to overcome the king's stuttering.

Keywords: psycholinguistics, stuttering, systemic functional grammar, treatment

Procedia PDF Downloads 252
576 The Role of Media Relations in the Brand Image: Case Study in Three Brands of the Automobile Industry

Authors: Rosa Sobreira, Paula Arriscado

Abstract:

Marketers are aware that media relations is an important, and also cheaper, touch point for bringing their products and brands to the consumer. They recognize the role of journalists as moderators and transformers of public opinion, and they realize their influence on brand image. They also know that readers, listeners, viewers and internet users 'believe' what they read, hear and see in the news more than what appears in an advertisement. The study is focused on the automotive industry and analyses the news published about three brands that share industrial facilities and components. We wanted to understand the role of the information created by the brands' media teams in the journalists' work, and its impact on the management, activation and differentiation of the brands and their products' attributes and benefits. Based on a qualitative methodology, the analysis focused on press news, comparing media coverage and its 'narratives' about the three cars from different brands. The results point to the fact that journalists readily integrate the brands' discourse about their products. In the case of this study, we found that apart from describing the many similarities between the three cars, the media discourse also 'struggled' to reveal the attributes that differentiate them. This interpretation of the results helps us to understand the 'marriage' between branding and media. We also believe this paper lets us understand how journalists, through news, adopt the discourse of the brands.

Keywords: brand management, media relations, differentiation, positioning

Procedia PDF Downloads 225
575 Auditory Function in MP3 Users and Association with Hidden Hearing Loss

Authors: Nana Saralidze, Nino Sharashenidze, Zurab Kevanishvili

Abstract:

Hidden hearing loss may occur in humans exposed to prolonged high-level sound. It is the loss of the ability to hear in high-level background noise while having normal hearing in quiet. We compared the hearing of people who regularly listen to personal music players for 3 hours or more and those who do not. Forty participants aged 18-30 years were divided into two groups: regular users of music players and people who had never used them. A third group of older adults aged 50-55 years comprised 15 participants. Pure-tone audiometry (125-16000 Hz), auditory brainstem response (ABR) (70 dB SPL), and the ability to identify speech in noise (4-talker babble with a 65-dB signal-to-noise ratio at 80 dB) were measured in all participants. All participants had normal pure-tone audiometry (all thresholds < 25 dB HL). A significant difference between groups was observed: regular users of personal audio systems correctly identified 53% of words, whereas the non-users identified 74% and the older group 63%. This contributes evidence supporting the presence of hidden hearing loss in humans and demonstrates that speech-in-noise audiometry is an effective method that can be considered the gold standard for detecting hidden hearing loss.

Keywords: mp3 player, hidden hearing loss, speech audiometry, pure tone audiometry

Procedia PDF Downloads 74
574 An Exploratory Study of the Effects of Head Movement on Engagement within a Telepresence Environment

Authors: B. S. Bamoallem, A. J. Wodehouse, G. M. Mair

Abstract:

Communication takes place not only through speech but also by means of gestures such as facial expressions, gaze, head movements, hand movements and body posture; yet, despite rapid development, communication platforms still lack this type of behavior. We believe communication platforms need to fully support this verbal and non-verbal behavior in order to make interactions more engaging and more efficient. In this study we decided to focus our research on the head rather than any other body part, as it is a rich source of information for speech-related movement. Thus, we aim to investigate the value of incorporating head movements into the use of telepresence robots as communication platforms; this will be done by investigating a system that reproduces head movement manually as closely as possible.

Keywords: engagement, nonverbal behaviours, head movements, face-to-face interaction, telepresence robot

Procedia PDF Downloads 455
573 Efficacy of Phonological Awareness Intervention for People with Language Impairment

Authors: I. Wardana Ketut, I. Suparwa Nyoman

Abstract:

This study investigated the form and characteristics of the speech sounds produced by three Balinese subjects who had recovered from aphasia, and intervened in their language impairment from both linguistic and neuronal points of view. The failure in judging speech sounds was caused by impairment of the motor cortex, indicating lesions in the left-hemisphere language zone. Articulation phenomena took the form of phoneme deletion, replacement or assimilation in individual words, and of meaning building in anomic aphasia. Therefore, the Balinese sound patterns were elicited by showing pictures to the subjects and recorded in order to recognize which individual consonants or vowels they produced unclearly and to find out how the sound disorder occurred. The physiology of sound production by the subjects' speech organs could show not only the accuracy of articulation but also the severity of the lesion they suffered from. The subjects' speech sounds were investigated, classified and analyzed to determine how impaired the lingual units were, and observed to clarify weaknesses in sound characteristics occurring in either place or manner of articulation. Many fricative and stop consonants were replaced by glottal or palatal sounds because cranial nerves such as the facial, trigeminal and hypoglossal nerves were impaired after the stroke. The phonological intervention was applied through a technique called phonemic articulation drill, and an examination was conducted to determine whether any change had been obtained. The findings showed that some weak articulations became clearer sounds and that simple language meaning was conveyed. The hierarchy of functional brain regions played an important role in language formulation and processing. From these findings, it can be emphasized that this study supports the view that the role of the right hemisphere in recovery from aphasia is associated with functional brain reorganization.

Keywords: aphasia, intervention, phonology, stroke

Procedia PDF Downloads 196
572 Testifying in Court as a Victim of Crime for Persons with Little or No Functional Speech: Vocabulary Implications

Authors: Robyn White, Juan Bornman, Ensa Johnson

Abstract:

People with disabilities are at a high risk of becoming victims of crime. Individuals with little or no functional speech (LNFS) face an even higher risk. One way of reducing the risk of remaining a victim of crime is to face the alleged perpetrator in court as a witness; it is therefore important for a person with LNFS who has been a victim of crime to have the vocabulary required to testify in court. The aim of this study was to identify and describe the core and fringe legal vocabulary required by illiterate victims of crime, who have little or no functional speech, to testify in court as witnesses. A mixed-method exploratory sequential design consisting of two distinct phases was used to address the aim of the research. The first phase was of a qualitative nature and included two different data sources, namely in-depth semi-structured interviews and focus group discussions. The overall aim of this phase was to identify and describe core and fringe legal vocabulary and to develop a measurement instrument based on these results. Results from Phase 1 were used in Phase 2, the quantitative phase, during which the measurement instrument (a custom-designed questionnaire) was socially validated. The results produced six distinct vocabulary categories that represent the legal core vocabulary and 99 words that represent the legal fringe vocabulary. The findings suggested that communication boards should be individualised to the person and the specific crime. It is believed that the vocabulary lists developed in this study act as a valid and reliable springboard from which communication boards can be developed. Recommendations were therefore made to develop an Alternative and Augmentative Communication Resource Tool Kit to assist the legal justice system.

Keywords: augmentative and alternative communication, person with little or no functional speech, sexual crimes, testifying in court, victim of crime, witness competency

Procedia PDF Downloads 480
571 Speech and Language Therapists' Advice for Multilingual Children with Developmental Language Disorders

Authors: Rudinë Fetahaj, Flaka Isufi, Kristina Hansson

Abstract:

While evidence shows that multilingualism is rising in most European countries, the focus of speech and language therapy (SLT) is unfortunately still monolingual. Furthermore, there is sparse information on how the needs of multilingual children with language disorders such as Developmental Language Disorder (DLD) are being met and which factors affect the intervention approach of SLTs when treating DLD. This study aims to examine the relationship and correlation between the number of languages SLTs speak, their years of experience, and their length of education, and the advice they give to parents of multilingual children with DLD regarding which language should be spoken. This is a cross-sectional study in which a survey was completed online by 2608 SLTs across Europe; the data come from a 2017 COST Action project. IBM SPSS 28 was used to perform descriptive analyses, correlations and the Kruskal-Wallis test. SLTs mainly advise the parents of multilingual children with DLD to speak their native language at home. Besides years of experience, language status and level of education showed no association with the type of advice SLTs give. Results showed a non-significant moderate positive correlation between SLTs' years of experience and their advice regarding the native language, whereas language status and length of education showed no correlation with the advice SLTs give to parents.

Keywords: quantitative study, developmental language disorders, multilingualism, speech and language therapy, children, European context

Procedia PDF Downloads 81
570 Refusal Speech Acts in French Learners of Mandarin Chinese

Authors: Jui-Hsueh Hu

Abstract:

This study investigated various models of refusal speech acts among three target groups: French learners of Mandarin Chinese (FM), Taiwanese native Mandarin speakers (TM), and native French speakers (NF). The refusal responses were analyzed in terms of their options, frequencies, and sequences and the contents of their semantic formulas. This study also examined differences in refusal strategies, as determined by social status and social distance, among the three groups. The difficulties of refusal speech acts encountered by FM were then generalized. The results indicated that Mandarin instructors of French learners should focus on the different reasons for the pragmatic failure of these learners and should assist them in mastering refusal speech acts that rely on abundant cultural information. In this study, refusal strategies were mainly classified according to the research of Beebe et al. (1990). Discourse completion questionnaires were collected from TM, FM, and NF, and their responses were compared to determine how refusal strategies differed among the groups. This study not only emphasized the dissimilarities in refusal strategies between native Mandarin speakers and second-language Mandarin learners but also used NF as a control group. The results demonstrated that, regarding overall strategies, FM were biased toward NF in terms of strategy choice, order, and content, resulting in pragmatic transfer; under the influence of social factors such as 'social status' and 'social distance', the strategy choices of FM were still closer to those of NF, and the phenomenon of pragmatic transfer in FM was revealed. Regarding the refusal difficulties among the three groups, the F-test in the analysis of variance revealed that statistical significance was achieved for Role-Playing Items 13 and 14 (p < 0.05), showing a difference in the average level of refusal difficulty between the participants. After multiple comparisons, however, it was found that item 13 (an unacquainted junior colleague of the opposite sex requesting contact details) was significantly more difficult for NF than for TM and FM, while item 14 (contact details requested by an unacquainted classmate of the opposite sex) was significantly more difficult to refuse for NF than for TM. This study summarized the pragmatic language errors that FM most often make, including the misuse or absence of modal words, hedging expressions, and empty words at the end of sentences, as the reasons for pragmatic failures. The common sociopragmatic failures of FM include inaccurately gauging the appropriate level of directness and formality.

Keywords: French Mandarin, interlanguage refusal, pragmatic transfer, speech acts

Procedia PDF Downloads 254
569 Comparison Study of Machine Learning Classifiers for Speech Emotion Recognition

Authors: Aishwarya Ravindra Fursule, Shruti Kshirsagar

Abstract:

At the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests, applied to SER. The research employs four datasets: CREMA-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features such as zero crossing rate (ZCR), chroma STFT, mel-frequency cepstral coefficients (MFCC), root mean square (RMS) value, and the mel spectrogram. These features are used to train and evaluate the models' ability to recognize eight types of emotions from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the random forest algorithm demonstrated superior performance, achieving approximately 79% accuracy. This suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques. The findings hold promise for the development of more precise emotion recognition systems in the future. This abstract provides a succinct overview of the paper's content, methods, and results.
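The feature extraction and classification pipeline described above can be approximated with librosa and scikit-learn. The following is a minimal sketch rather than the authors' exact configuration: file paths and labels are placeholders, and the features are simple per-clip averages.

```python
# Minimal SER sketch: per-clip averaged features (ZCR, chroma, MFCC, RMS,
# mel spectrogram) fed to a random forest; paths/labels are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def clip_features(path):
    y, sr = librosa.load(path, sr=22050)
    feats = [
        librosa.feature.zero_crossing_rate(y),
        librosa.feature.chroma_stft(y=y, sr=sr),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40),
        librosa.feature.rms(y=y),
        librosa.feature.melspectrogram(y=y, sr=sr),
    ]
    return np.concatenate([f.mean(axis=1) for f in feats])

# Hypothetical lists of audio files and their emotion labels
files = ["clip_0001.wav", "clip_0002.wav"]        # ...
labels = ["happy", "sad"]                          # ...

X = np.vstack([clip_features(f) for f in files])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```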

Keywords: comparison, ML classifiers, KNN, decision tree, SVM, random forest, logistic regression, ensemble classifiers

Procedia PDF Downloads 45
568 The Mirage of Progress? A Longitudinal Study of Japanese Students’ L2 Oral Grammar

Authors: Robert Long, Hiroaki Watanabe

Abstract:

This longitudinal study examines the grammatical errors in Japanese university students' dialogues with a native speaker over an academic year. The L2 interactions of 15 Japanese speakers were taken from the JUSFC2018 corpus (April/May 2018) and the JUSFC2019 corpus (January/February). The corpora were based on a self-introduction monologue and a three-question dialogue; however, this study examines the grammatical accuracy found in the dialogues. The research questions focused on whether there was a significant difference in grammatical accuracy between the first interview session in 2018 and the second one the following year, specifically regarding errors in clauses per 100 words, global and local errors, and specific errors related to parts of speech. The investigation also examined which forms showed the least improvement or had worsened. Descriptive statistics showed that error-free clauses per 100 words decreased slightly, while clauses with errors per 100 words increased by one clause. Global errors showed a significant decline, while local errors increased from 97 to 158 errors. For errors related to parts of speech, a t-test confirmed there was a significant difference between the two speech corpora, with a higher error frequency in the 2019 corpus. These data highlight the difficulty students have in editing their own output.

Keywords: clause analysis, global vs. local errors, grammatical accuracy, L2 output, longitudinal study

Procedia PDF Downloads 132
567 Functional Outcome of Speech, Voice and Swallowing Following Excision of Glomus Jugulare Tumor

Authors: B. S. Premalatha, Kausalya Sahani

Abstract:

Background: Glomus jugulare tumors arise within the jugular foramen and are commonly seen in females, particularly on the left side. Surgical excision of the tumor may cause lower cranial nerve deficits. Cranial nerve involvement produces hoarseness of voice, slurred speech, and dysphagia, along with other physical symptoms, thereby affecting the quality of life of these individuals. Though oncological clearance is mainly emphasized while treating these individuals, little importance is given to their communication, voice and swallowing problems, which play a crucial part in daily functioning. Objective: To examine the voice, speech and swallowing outcomes of subjects following excision of a glomus jugulare tumor. Methods: Two female subjects aged 56 and 62 years, who presented with complaints of change in voice, inability to swallow and reduced clarity of speech following surgery for a left glomus jugulare tumor, were the participants of the study. Their surgical information revealed multiple cranial nerve palsies involving the left facial nerve, the left superior and recurrent branches of the vagus nerve, the left pharyngeal nerve, the left soft palate, and the left hypoglossal and vestibular nerves. Functional outcomes of voice, speech and swallowing were evaluated by perceptual and objective assessment procedures. Assessment included the examination of oral structures and functions, assessment of dysarthria using the Frenchay Dysarthria Assessment, cranial nerve functions and swallowing functions. MDVP and Dr. Speech software were used to evaluate the acoustic parameters and the quality of voice, respectively. Results: The study revealed that both subjects, subsequent to excision of the glomus jugulare tumor, showed a varied picture of affected oral structures and functions, articulation, voice and swallowing. The cranial nerve assessment showed impairment of the vagus, hypoglossal, facial and glossopharyngeal nerves. Voice examination indicated vocal cord paralysis associated with a breathy voice quality, weak voluntary cough, reduced pitch and loudness range, and poor respiratory support. Perturbation parameters such as jitter and shimmer were affected, along with the s/z ratio, indicative of vocal fold pathology. Reduced maximum phonation duration (MPD) of vowels indicated disturbed coordination between the respiratory and laryngeal systems. Hypernasality was found to be a prominent feature that reduced speech intelligibility. Imprecise articulation was seen in both subjects, as the hypoglossal nerve was affected following surgery. Injury to the vagus, hypoglossal, glossopharyngeal and facial nerves disturbed the function of swallowing, and all phases of the swallow were affected. Aspiration was observed before and during the swallow, confirming oropharyngeal dysphagia. All subsystems were affected as per the Frenchay Dysarthria Assessment, signifying a diagnosis of flaccid dysarthria. Conclusion: There is observable communication and swallowing difficulty following excision of a glomus jugulare tumor. Even with complete resection, extensive rehabilitation may be necessary due to significant lower cranial nerve dysfunction. The findings of the present study stress the need for the involvement of a speech and swallowing therapist for pre-operative counseling and assessment of functional outcomes.

Keywords: functional outcome, glomus jugulare tumor excision, multiple cranial nerve impairment, speech and swallowing

Procedia PDF Downloads 252
566 An Early Attempt at Artificial Intelligence-Assisted Oral Language Practice and Assessment

Authors: Paul Lam, Kevin Wong, Chi Him Chan

Abstract:

Constant practice and accurate, immediate feedback are the keys to improving students' speaking skills. However, traditional oral examinations often fail to provide such opportunities to students. Traditional, face-to-face oral assessment is often time-consuming: attending to the oral needs of one student often leads to the neglect of others. Hence, teachers can only provide limited opportunities and feedback to students. Moreover, students' incentive to practice is also reduced by their anxiety and shyness about speaking the new language. A mobile app was developed that uses artificial intelligence (AI) to provide immediate feedback on students' speaking performance, as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase students' learning opportunities, as they can now practice more without the need for a teacher's presence. Secondly, the automatic feedback provided by the AI would enhance students' motivation to practice, as there is an instant evaluation of their performance. Lastly, students should feel less anxious and shy compared to practicing orally directly in front of teachers. Technically, the program makes use of speech-to-text functions to generate feedback for students. To be specific, the software analyzes students' oral input through a speech-to-text AI engine and then cleans up the results to the point where they can be compared with the target text. English teachers were invited to pilot the mobile app and asked for their feedback. Preliminary trials indicated that the approach has limitations: much of the users' pronunciation was automatically corrected by the speech recognition function, as intelligent guessing is already integrated into many such systems. Nevertheless, teachers are confident that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend AI-assisted assessment to other language skills, such as writing, reading, and listening.
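The comparison step described above (cleaning up the speech-to-text result and matching it against the target text) is essentially a word-level alignment. A minimal sketch of such a scorer, using standard word error rate computed via edit distance (not the app's actual scoring code), is shown below.

```python
# Minimal sketch of scoring an ASR transcript against the target sentence
# with word error rate (edit distance); not the app's actual scoring code.
def word_error_rate(target: str, hypothesis: str) -> float:
    ref = target.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / max(len(ref), 1)

target = "the quick brown fox jumps over the lazy dog"
asr_output = "the quick brown fox jump over a lazy dog"
print(f"WER: {word_error_rate(target, asr_output):.2f}")   # 0.22
```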

Keywords: artificial Intelligence, mobile learning, oral assessment, oral practice, speech-to-text function

Procedia PDF Downloads 103
565 Auditory and Language Skills Development after Cochlear Implantation in Children with Multiple Disabilities

Authors: Tamer Mesallam, Medhat Yousef, Ayna Almasaad

Abstract:

BACKGROUND: Cochlear implantation (CI) in children with additional disabilities can be a fundamental and supportive intervention. Although there may be positive impacts of CI on children with multiple disabilities, such as better outcomes in communication skills, development, and quality of life, the families of those children report that the post-implant habilitation effort is a burden. OBJECTIVE: To investigate the outcomes of CI in children with different co-disabilities using the Meaningful Auditory Integration Scale (MAIS) and the Meaningful Use of Speech Scale (MUSS) as outcome measurement tools. METHODS: The study sample comprised 25 hearing-impaired children with a co-disability who received cochlear implantation. An age- and gender-matched control group of 25 cochlear-implanted children without any other disability was also included. The participants' auditory skills and speech outcomes were assessed using the MAIS and MUSS tests. RESULTS: There was a statistically significant difference in the outcome measures between the two groups. However, the outcomes of some multiple-disability subgroups were comparable to those of the control group. Around 40% of the participants with co-disabilities advanced in their method of communication from a behavioral to an oral mode. CONCLUSION: Cochlear-implanted children with multiple disabilities showed variable degrees of auditory and speech outcomes. The degree of benefit depends on the type of co-disability. Long-term follow-up is recommended for these children.

Keywords: children with disabilities, cochlear implants, hearing impairment, language development

Procedia PDF Downloads 119