Search results for: fluent speech
971 Two-Dimensional Modeling of Spent Nuclear Fuel Using FLUENT
Authors: Imane Khalil, Quinn Pratt
Abstract:
In a nuclear reactor, an array of fuel rods containing stacked uranium dioxide pellets clad with Zircaloy is the heat source for a thermodynamic cycle of energy conversion from heat to electricity. After fuel is used in a nuclear reactor, the assemblies are stored underwater in a spent nuclear fuel pool at the nuclear power plant while heat generation and radioactive decay rates decrease, before they are placed in packages for dry storage or transportation. A computational model of a Boiling Water Reactor spent fuel assembly is built using FLUENT, the computational fluid dynamics package. Heat transfer simulations were performed on the two-dimensional 9x9 spent fuel assembly to predict the maximum cladding temperature for different inputs to the FLUENT model. Uncertainty quantification is used to predict the heat transfer and the maximum temperature profile inside the assembly.
Keywords: spent nuclear fuel, conduction, heat transfer, uncertainty quantification
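A minimal sketch of the sampling-based uncertainty quantification idea mentioned in the abstract, assuming a hypothetical surrogate in place of an actual FLUENT run (the function, parameter ranges, and distributions below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_max_clad_temp(decay_heat_w, fill_gas_conductivity, pool_temp_c):
    """Hypothetical stand-in for a FLUENT run: maps uncertain inputs to a
    peak cladding temperature (deg C). A real study would call the CFD solver here."""
    return pool_temp_c + 0.08 * decay_heat_w / fill_gas_conductivity

n_samples = 10_000
# Assumed input distributions, for illustration only.
decay_heat = rng.normal(500.0, 50.0, n_samples)     # W per assembly
conductivity = rng.uniform(0.15, 0.30, n_samples)   # W/(m*K)
pool_temp = rng.normal(40.0, 2.0, n_samples)        # deg C

t_max = surrogate_max_clad_temp(decay_heat, conductivity, pool_temp)
print(f"mean peak cladding temperature: {t_max.mean():.1f} C")
print(f"95th percentile: {np.percentile(t_max, 95):.1f} C")
```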
Procedia PDF Downloads 220
970 Automatic Assignment of Geminate and Epenthetic Vowel for Amharic Text-to-Speech System
Authors: Tadesse Anberbir, Felix Bankole, Tomio Takara, Girma Mamo
Abstract:
In the development of a text-to-speech synthesizer, automatic derivation of correct pronunciation from the grapheme form of a text is a central problem. Deriving phonological features which are not shown in the orthography is particularly challenging. In the Amharic language, geminates and epenthetic vowels are crucial for proper pronunciation, but neither is shown in the orthography. In this paper, we proposed and integrated a morphological analyzer into an Amharic Text-to-Speech system, mainly to predict geminate and epenthetic vowel positions, and prepared a duration modeling method. The Amharic Text-to-Speech system (AmhTTS) is a parametric and rule-based system that adopts a cepstral method and uses a source-filter model for speech production and a Log Magnitude Approximation (LMA) filter as the vocal tract filter. The naturalness of the system after employing the duration modeling was evaluated by a sentence listening test, and we achieved an average Mean Opinion Score (MOS) of 3.4 (68%), which is moderate. By modeling the duration of geminates and controlling the locations of epenthetic vowels, we are able to synthesize good quality speech. Our system is mainly suitable to be customized for other Ethiopian languages with limited resources.
Keywords: Amharic, gemination, speech synthesis, morphology, epenthesis
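A minimal sketch of rule-based duration control of the kind described, assuming hypothetical base durations and lengthening/shortening factors (the numbers, phone labels, and rule form are illustrative assumptions, not the paper's actual model):

```python
# Hypothetical phone sequence with gemination/epenthesis marks, e.g. produced by a
# morphological analyzer: ('t', {'geminate': True}), ('ɨ', {'epenthetic': True}), ...
BASE_DURATION_MS = {'t': 80, 'ɨ': 60, 'a': 110, 'm': 75}   # assumed base durations
GEMINATE_FACTOR = 1.8        # assumed lengthening applied to geminate consonants
EPENTHETIC_FACTOR = 0.5      # assumed shortening applied to epenthetic vowels

def assign_durations(phones):
    """Return (phone, duration_ms) pairs after applying the simple duration rules."""
    out = []
    for phone, feats in phones:
        dur = BASE_DURATION_MS[phone]
        if feats.get('geminate'):
            dur *= GEMINATE_FACTOR
        if feats.get('epenthetic'):
            dur *= EPENTHETIC_FACTOR
        out.append((phone, dur))
    return out

print(assign_durations([('t', {'geminate': True}), ('ɨ', {'epenthetic': True}), ('m', {})]))
```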
Procedia PDF Downloads 87
969 Systemic Functional Grammar Analysis of Barack Obama's Second Term Inaugural Speech
Authors: Sadiq Aminu, Ahmed Lamido
Abstract:
This research studies Barack Obama’s second inaugural speech using Halliday’s Systemic Functional Grammar (SFG). SFG is a text grammar which describes how language is used, so that the meaning of the text can be better understood. The primary source of data in this research work is Barack Obama’s second inaugural speech, which was obtained from the internet. The analysis of the speech was based on the ideational and textual metafunctions of Systemic Functional Grammar. Specifically, the researcher analyses the Process Types and Participants (ideational) and the Theme/Rheme (textual). It was found that the material process (process of doing) was the most frequently used ‘Process type’, and ‘We’, which refers to the people of America, was the most frequently used ‘Theme’. Application of the SFG theory, therefore, gives a better meaning to Barack Obama’s speech.
Keywords: ideational, metafunction, rheme, textual, theme
Procedia PDF Downloads 159
968 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System
Authors: John Lorenzo Bautista, Yoon-Joong Kim
Abstract:
This paper presents the development of a Filipino speech recognition tool using the HTK System. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used in both training and testing the system by estimating the parameters for phonetic HMM-based (Hidden Markov Model) acoustic models. Experiments on different mixture weights were incorporated in the study. The phoneme-level, word-based recognition of a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing a phoneme alignment, and 87.19% for the increased Gaussian-mixture weight model. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.
Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition
Procedia PDF Downloads 480
967 Automatic Speech Recognition Systems Performance Evaluation Using Word Error Rate Method
Authors: João Rato, Nuno Costa
Abstract:
Human verbal communication is a two-way process that requires mutual understanding. This kind of communication, also called dialogue, can take place not only between human agents but also between humans and machines. Interaction between man and machine by means of natural language plays an important role in improving communication between the two. To assess the performance of several speech recognition systems, this document reports the results of tests evaluated with the Word Error Rate method, together with background information on man-machine communication systems. From this work, conclusions were drawn regarding the speech recognition systems, notably their poor performance at interpreting speech in noisy environments.
Keywords: automatic speech recognition, man-machine conversation, speech recognition, spoken dialogue systems, word error rate
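A minimal sketch of the Word Error Rate computation referred to above, using the standard Levenshtein alignment between a reference and a hypothesis transcript (the example sentences are illustrative):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn the light on", "turn light off"))  # 0.5
```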
Procedia PDF Downloads 322
966 Multi-Granularity Feature Extraction and Optimization for Pathological Speech Intelligibility Evaluation
Authors: Chunying Fang, Haifeng Li, Lin Ma, Mancai Zhang
Abstract:
Speech intelligibility assessment is an important measure to evaluate the functional outcomes of surgical and non-surgical treatment, speech therapy and rehabilitation. The assessment of pathological speech plays an important role in assisting the experts. Pathological speech is usually non-stationary and mutable. In this paper, we describe a multi-granularity combined feature scheme that is optimized by a hierarchical visual method. First, pathological features are extracted at different granularity levels: a basic acoustic feature set (BAFS), local spectral characteristics (Mel s-transform cepstrum coefficients, MSCC) and nonlinear dynamic characteristics based on chaotic analysis. Then, a radar chart and the F-score are used to optimize the features through hierarchical visual fusion. The feature set could be reduced from 526 to 96 dimensions. The experimental results show that the new feature set, classified with a support vector machine (SVM), gives the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation.
Keywords: pathological speech, multi-granularity feature, MSCC (Mel s-transform cepstrum coefficients), F-score, radar chart
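A minimal sketch of F-score-based feature ranking of the kind used for the dimensionality reduction above, computed on a hypothetical two-class feature matrix (the data, label assignment, and the cut-off at 96 dimensions applied here are illustrative assumptions):

```python
import numpy as np

def f_scores(X, y):
    """Fisher-style F-score for each feature column of X given binary labels y."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 526))           # hypothetical 526-dimensional feature vectors
y = rng.integers(0, 2, size=40)          # hypothetical intelligibility labels
scores = f_scores(X, y)
top96 = np.argsort(scores)[::-1][:96]    # keep the 96 highest-scoring dimensions
print(top96[:10])
```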
Procedia PDF Downloads 283
965 Status of Communication and Swallowing Therapy in Patient with a Tracheostomy
Authors: Ya-Hui Wang
Abstract:
A lower speech therapy rate for tracheostomized patients was noted in comparison with previous research. This study aims to shed light on the referral status of speech therapy for these patients in Taiwan, by analyzing the size and key characteristics of the tracheostomized in-patient population. Method: We analyzed National Healthcare Insurance data (The Collaboration Center of Health Information Application, CCHIA) from Jan 1, 2010 to Dec 31, 2010. Result: over age 3, the number of tracheostomized in-patients increases with age. A high service loading was observed in the North region in comparison with other regions. Only 4.87% of the tracheostomized in-patients were referred for speech therapy, 1.9% for swallowing examination, and 2.5% for communication evaluation.
Keywords: refer, speech therapy, training, rehabilitation
Procedia PDF Downloads 440
964 Visual Speech Perception of Arabic Emphatics
Authors: Maha Saliba Foster
Abstract:
Speech perception has been recognized as a bi-sensory process involving the auditory and visual channels. Compared to the auditory modality, the contribution of the visual signal to speech perception is not very well understood. Studying how the visual modality affects speech recognition can have pedagogical implications in second language learning, as well as clinical application in speech therapy. The current investigation explores the potential effect of visual speech cues on the perception of Arabic emphatics (AEs). The corpus consists of 36 minimal pairs, each containing two contrasting consonants, an AE versus a non-emphatic (NE). Movies of four Lebanese speakers were edited to allow perceivers a partial view of facial regions: lips only, lips-cheeks, lips-chin, lips-cheeks-chin, and lips-cheeks-chin-neck. In the absence of any auditory information and relying solely on visual speech, perceivers were above chance at correctly identifying AEs or NEs across vowel contexts; moreover, the models were able to predict the probability of perceivers’ accuracy in identifying some of the COIs produced by certain speakers; additionally, results showed an overlap between the measurements selected by the computer and those selected by human perceivers. The lack of a significant face effect on the perception of AEs seems to point to the lips, present in all of the videos, as the most important and often sufficient facial feature for emphasis recognition. Future investigations will aim at refining the analyses of visual cues used by perceivers by using Principal Component Analysis and including the time evolution of facial feature measurements.
Keywords: Arabic emphatics, machine learning, speech perception, visual speech perception
Procedia PDF Downloads 306
963 Design and Analysis of a Clustered Nozzle Configuration and Comparison of Its Thrust
Authors: Abdul Hadi Butt, Asfandyar Arshad
Abstract:
The purpose of this paper is to study the thrust variation in different configurations of clustered nozzles. It involves the design and analysis of clustered configurations of nozzles using ANSYS Fluent. Clustered nozzles with different configurations are simulated and compared on the basis of effective exhaust thrust. The mixing length for the flow interaction is also calculated. The clustered configurations are further analyzed at different altitudes. An optimum value of the thrust among the different configurations is proposed at the end of the comparison.
Keywords: CD nozzle, cluster, thrust, fluent, ANSYS
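A minimal sketch of the standard nozzle thrust relation that such a comparison rests on, F = ṁ·Ve + (Pe − Pa)·Ae (momentum thrust plus pressure thrust), with illustrative numbers not taken from the paper; a real cluster comparison relies on the CFD solution because plume interaction between nozzles changes the effective thrust:

```python
def nozzle_thrust(m_dot, v_exit, p_exit, p_ambient, a_exit):
    """Thrust = momentum term + pressure term (SI units)."""
    return m_dot * v_exit + (p_exit - p_ambient) * a_exit

# Illustrative single-nozzle case; a naive cluster estimate sums each nozzle's contribution.
single = nozzle_thrust(m_dot=2.5, v_exit=2200.0, p_exit=70e3, p_ambient=101e3, a_exit=0.02)
cluster_of_four = 4 * single
print(f"single: {single/1e3:.2f} kN, cluster of four: {cluster_of_four/1e3:.2f} kN")
```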
Procedia PDF Downloads 401
962 Speech Perception by Monolingual and Bilingual Dravidian Speakers under Adverse Listening Conditions
Authors: S. B. Rathna Kumar, Sale Kranthi, Sandya K. Varudhini
Abstract:
The precise perception of spoken language is influenced by several variables, including the listener’s native language, the distance between speaker and listener, reverberation and background noise. When noise is present in an acoustic environment, it masks the speech signal, resulting in a reduction in the redundancy of the acoustic and linguistic cues of speech. There is strong evidence that bilinguals face difficulty in speech perception in their second language compared with monolingual speakers under adverse listening conditions such as the presence of background noise. This difficulty persists even for speakers who are highly proficient in their second language and is greater in those who have learned the second language later in life. The present study aimed to assess the performance of monolingual (Telugu speaking) and bilingual (Tamil as first language and Telugu as second language) speakers on a Telugu speech perception task in quiet and noisy environments. The results indicated that both groups performed similarly in both quiet and noisy environments. The findings of the present study are not in accordance with those of previous studies, which report poorer speech perception by bilingual speakers in their second language under adverse listening conditions such as noise, compared with monolinguals.
Keywords: monolingual, bilingual, second language, speech perception, quiet, noise
Procedia PDF Downloads 389
961 Dual-Channel Multi-Band Spectral Subtraction Algorithm Dedicated to a Bilateral Cochlear Implant
Authors: Fathi Kallel, Ahmed Ben Hamida, Christian Berger-Vachon
Abstract:
In this paper, a speech enhancement algorithm based on the Multi-Band Spectral Subtraction (MBSS) principle is evaluated for Bilateral Cochlear Implant (BCI) users. Specifically, a dual-channel noise power spectral estimation algorithm using the Power Spectral Densities (PSD) and Cross Power Spectral Densities (CPSD) of the observed signals is studied. The enhanced speech signal is obtained using the Dual-Channel Multi-Band Spectral Subtraction ‘DC-MBSS’ algorithm. For performance evaluation, an objective speech assessment test relying on the Perceptual Evaluation of Speech Quality (PESQ) score is performed to determine the optimal number of frequency bands needed in the DC-MBSS algorithm. In order to evaluate speech intelligibility, subjective listening tests are performed with 3 deafened BCI patients. Experimental results obtained using the French Lafon database corrupted by additive babble noise at different Signal-to-Noise Ratios (SNR) showed that the DC-MBSS algorithm improves speech understanding for single and multiple interfering noise sources.
Keywords: speech enhancement, spectral subtraction, noise estimation, cochlear implant
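A minimal single-band sketch of the spectral subtraction principle underlying DC-MBSS, assuming a noise power spectrum already estimated from noise-only frames; the frame length, over-subtraction factor, and spectral floor are illustrative assumptions, and the paper's dual-channel, multi-band noise estimator is not reproduced here:

```python
import numpy as np

def spectral_subtraction(frame, noise_psd, alpha=2.0, beta=0.01):
    """Enhance one time-domain frame by subtracting an estimated noise power spectrum."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    power = np.abs(spec) ** 2
    clean_power = np.maximum(power - alpha * noise_psd, beta * power)  # spectral floor
    enhanced = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))      # keep noisy phase
    return np.fft.irfft(enhanced, n=len(frame))

# Illustrative use: noise PSD averaged over noise-only frames of the same length.
frame_len = 256
noise_frames = np.random.randn(20, frame_len) * 0.1
noise_psd = np.mean(np.abs(np.fft.rfft(noise_frames * np.hanning(frame_len), axis=1)) ** 2, axis=0)
noisy_frame = np.sin(2 * np.pi * 440 * np.arange(frame_len) / 16000) + 0.1 * np.random.randn(frame_len)
enhanced = spectral_subtraction(noisy_frame, noise_psd)
```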
Procedia PDF Downloads 549
960 The Combination of the Mel Frequency Cepstral Coefficients, Perceptual Linear Prediction, Jitter and Shimmer Coefficients for the Improvement of Automatic Recognition System for Dysarthric Speech
Authors: Brahim Fares Zaidi
Abstract:
Our work aims to improve our automatic recognition system for dysarthric speech, based on Hidden Markov Models and the Hidden Markov Model Toolkit (HTK), to help people with pronunciation problems. We applied two speech parameterization techniques, based on Mel Frequency Cepstral Coefficients and Perceptual Linear Prediction, and concatenated them with jitter and shimmer coefficients in order to increase the recognition rate for dysarthric speech. For our tests, we used the NEMOURS database, which contains speakers with dysarthria and normal speakers.
Keywords: ARSDS, HTK, HMM, MFCC, PLP
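A minimal sketch of building a combined feature vector of the kind described, assuming the librosa library is available for MFCC and F0 extraction; the jitter and shimmer values below are crude frame-level approximations for illustration, not the exact perturbation measures used in the paper, and PLP extraction is omitted:

```python
import numpy as np
import librosa

def combined_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # utterance-level MFCCs

    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    periods = 1.0 / f0[voiced & ~np.isnan(f0)]
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)      # relative period perturbation

    rms = librosa.feature.rms(y=y)[0]
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)             # crude amplitude perturbation

    return np.concatenate([mfcc, [jitter, shimmer]])                   # 15-dimensional vector
```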
Procedia PDF Downloads 108
959 Freedom of Speech, Dissent and the Right to be Governed By Consensus are Inherent Rights Under Classical Islamic Law
Authors: Ziyad Motala
Abstract:
It is often proclaimed by leaders in Muslim majority countries that Islamic law does not permit dissent against a ruler. This paper will evaluate and discuss freedom of speech and dissent as found in concrete prophetic examples during the time of the Prophet Muhammad. It will further look at the examples and practices during the time of the four Noble Caliphs, the immediate successors to the Prophet Muhammad. It will argue that the positivist position of absolute obedience to a ruler is inconsistent with the prophetic tradition. The examples of the Prophet and his immediate four successors (whose lessons Sunni Islam considers to be a source of Islamic law) are among the earliest examples of freedom of speech and dissent in human history. That tradition frowned upon an inert and uninvolved citizenry. It will conclude with lessons for modern-day Muslim majority countries, arguing with empirical evidence that freedom of speech, dissent and the right to be governed by consensus rather than coercion are fundamental requisites of Islamic law.
Keywords: Islamic law, democracy, freedom of speech, right to dissent
Procedia PDF Downloads 75
958 Formulating a Definition of Hate Speech: From Divergence to Convergence
Authors: Avitus A. Agbor
Abstract:
Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others one's own beliefs and practices, which are taken to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invective and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and without legal scholarship, the notion of “hate speech” remains a conundrum: a phrase that is more easily explained through experiences than by propounding a watertight definition that captures the entire essence and nature of what it is. The problem is further compounded by a few factors: first, within the international human rights framework, the notion of hate speech is not used. In limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union in which the notion has been carefully delineated, and a much clearer picture of what constitutes hate speech is now provided. The legal architecture in domestic legal systems clearly shows differences in approaches and regulation, making matters more difficult. In short, what may be hate speech in one legal system may very well be acceptable legal speech in another legal system. Lastly, the cornucopia of academic voices on the issue of hate speech exudes the divergence thereon. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.
Keywords: hate speech, international human rights law, international criminal law, freedom of expression
Procedia PDF Downloads 76
957 Effect Analysis of an Improved Adaptive Speech Noise Reduction Algorithm in Online Communication Scenarios
Authors: Xingxing Peng
Abstract:
With the development of society, there are more and more online communication scenarios such as teleconferencing and online education. In conference communication, the quality of voice communication is very important, and noise may greatly reduce the communication effectiveness for participants. Therefore, voice noise reduction has an important impact on scenarios such as voice calls. This research focuses on the key technologies of the sound transmission process. The purpose is to maintain the audio quality as far as possible so that the listener can hear a clearer and smoother sound. To address the problem that traditional speech enhancement algorithms are not ideal when dealing with non-stationary noise, an adaptive speech noise reduction algorithm is studied in this paper. Traditional noise estimation methods are mainly used to deal with stationary noise. Here, we study the spectral characteristics of different noise types, especially the characteristics of non-stationary burst noise, and design a noise estimator module to deal with non-stationary noise. Noise features are extracted from non-speech segments, and the noise estimation module is adjusted in real time according to different noise characteristics. This adaptive algorithm can enhance speech according to different noise characteristics, improving the performance of traditional algorithms on non-stationary noise so as to achieve a better enhancement effect. The experimental results show that the proposed algorithm is effective and can better adapt to different types of noise, so as to obtain a better speech enhancement effect.
Keywords: speech noise reduction, speech enhancement, self-adaptation, Wiener filter algorithm
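A minimal sketch of the adaptive idea described above: a noise-spectrum estimate updated recursively on frames judged to be non-speech, feeding a Wiener-type gain (the smoothing constant, the energy-based detection rule, and the frame handling are illustrative assumptions, not the paper's design):

```python
import numpy as np

class AdaptiveNoiseReducer:
    def __init__(self, n_bins, smoothing=0.9):
        self.noise_psd = np.full(n_bins, 1e-6)
        self.smoothing = smoothing

    def process_frame(self, spectrum):
        power = np.abs(spectrum) ** 2
        # Crude energy-based detection: treat low-energy frames as noise-only and
        # update the noise estimate recursively (real systems use better detectors).
        if power.mean() < 2.0 * self.noise_psd.mean():
            self.noise_psd = self.smoothing * self.noise_psd + (1 - self.smoothing) * power
        snr = np.maximum(power / self.noise_psd - 1.0, 0.0)   # a-posteriori SNR estimate
        gain = snr / (snr + 1.0)                               # Wiener-type gain
        return gain * spectrum
```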
Procedia PDF Downloads 59
956 Analysis of Interleaving Scheme for Narrowband VoIP System under Pervasive Environment
Authors: Monica Sharma, Harjit Pal Singh, Jasbinder Singh, Manju Bala
Abstract:
In a Voice over Internet Protocol (VoIP) system, the speech signal is degraded when passed through the network layers. The speech signal is processed through a best-effort IP network, which introduces network degradations including delay, packet loss and jitter. Packet loss is the major cause of degradation in VoIP signal quality; even a single lost packet may generate audible distortion in the decoded speech signal. In addition to these network degradations, the quality of the speech signal is also affected by environmental noises and coder distortions. The signal quality of the VoIP system is improved through the interleaving technique. The performance of the system is evaluated for various types of noise at different network conditions. The performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measurement for the narrowband signal.
Keywords: VoIP, interleaving, packet loss, packet size, background noise
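A minimal sketch of the block interleaving idea: packets are written into a matrix row by row and transmitted column by column, so a burst of consecutive network losses is spread into isolated gaps after de-interleaving (the block size is an illustrative choice):

```python
def interleave(packets, rows=4, cols=4):
    """Reorder a block of rows*cols packets: write row-wise, read column-wise."""
    assert len(packets) == rows * cols
    return [packets[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(packets, rows=4, cols=4):
    # Writing column-wise and reading row-wise undoes the interleaving.
    return interleave(packets, rows=cols, cols=rows)

block = list(range(16))            # 16 consecutive speech frames
sent = interleave(block)
# Suppose a burst loss wipes out 3 consecutive transmitted packets:
received = [p if i not in (5, 6, 7) else None for i, p in enumerate(sent)]
print(deinterleave(received))      # the lost frames end up spread far apart
```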
Procedia PDF Downloads 479
955 Voice Commands Recognition of Mentor Robot in Noisy Environment Using HTK
Authors: Khenfer-Koummich Fatma, Hendel Fatiha, Mesbahi Larbi
Abstract:
This paper presents an approach based on Hidden Markov Models (HMM) using HTK tools. The goal is to create a man-machine interface with a voice recognition system that allows the operator to tele-operate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands spoken in two languages: French and Arabic. The recognition rate obtained is the same for both languages, Arabic and French, on the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added with a Signal-to-Noise Ratio (SNR) equal to 30 dB: the Arabic speech recognition rate is 69% and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when the noise is added.
Keywords: voice command, HMM, TIMIT, noise, HTK, Arabic, speech recognition
Procedia PDF Downloads 382
954 Speech Rhythm Variation in Languages and Dialects: F0, Natural and Inverted Speech
Authors: Imen Ben Abda
Abstract:
Languages have been classified into different rhythm classes: 'stress-timed' languages are exemplified by English, 'syllable-timed' languages by French and 'mora-timed' languages by Japanese. However, to the best of our knowledge, acoustic studies have not been unanimous in strictly establishing which rhythm category a given language belongs to and have failed to show empirical evidence for isochrony. Perception seems to be a good approach to categorizing languages into different rhythm classes. This study, within the scope of experimental phonetics, includes an account of different perceptual experiments using cues from natural and inverted speech, as well as pitch extracted from the speech data. It is an attempt to categorize speech rhythm over a large set of Arabic (Tunisian, Algerian, Lebanese and Moroccan) and English dialects (Welsh, Irish, Scottish and Texan) as well as other languages such as Chinese, Japanese, French, and German. Listeners managed to classify the different languages and dialects into different rhythm classes using suprasegmental cues, mainly rhythm and pitch (F0). They also perceived rhythmic differences even among languages and dialects belonging to the same rhythm class. This may show that there are different subclasses within very broad rhythmic typologies.
Keywords: F0, inverted speech, mora-timing, rhythm variation, stress-timing, syllable-timing
Procedia PDF Downloads 527
953 Effects of Exposing Learners to Speech Acts in the German Teaching Material Schritte International: The Case of Requests
Authors: Wan-Lin Tsai
Abstract:
The speech act of requesting is an important issue in the field of language learning and teaching because we cannot avoid making requests in our daily life. This study examined whether or not the subjects, who were freshmen majoring in German at Wenzao University of Languages, were able to use the linguistic forms which they had learned from their course book Schritte International to make appropriate requests, through dialogue completion tasks (DCT). The results revealed that the majority of the subjects were unable to use the forms to make appropriate requests in German due to the lack of explicit instruction. Furthermore, Chinese interference was observed in the students' productions. Explicit instruction in speech acts is strongly recommended.
Keywords: Chinese interference, German pragmatics, German teaching, make appropriate requests in German, speech act of requesting
Procedia PDF Downloads 466
952 The Speech Acts of Selected Classroom Encounters: Analyzing the Speech Acts of a Career Technology Lesson
Authors: Michael Amankwaa Adu
Abstract:
This study investigates the speech acts employed by a Career Technology teacher during classroom interactions in a junior high school. While much research exists on speech acts in language teaching, little attention has been given to technical subjects. This has created a gap in understanding how teachers of non-language subjects utilize speech acts in classroom communication. This study aims to analyze the types and frequencies of speech acts used by a Career Technology teacher during three key classroom encounters: lesson introduction, content delivery, and classroom management. Using a mixed-methods approach, the study examines 113 utterances from the teacher's lesson, categorizing them into four primary speech act types: directives, assertives, expressives, and commissives. Directives emerged as the most dominant form, accounting for 59.3% of the utterances, followed by assertives (20.4%), expressives (14.2%), and commissives (6.2%). No declarations were observed. The study demonstrates how the teacher uses directives to manage student behavior and assertives to reinforce information. Expressives are used sparingly but play a role in motivating or disciplining students, while commissives help establish classroom rules and set expectations. The findings contribute to understanding classroom interaction strategies in non-language subjects, offering insights that could inform teacher training and curriculum development. The study underscores the importance of effective communication in technical subjects and suggests ways in which language teaching techniques might be integrated into other subject areas.
Keywords: classroom management, directives, speech acts, technical subjects, assertives
Procedia PDF Downloads 21
951 Childhood Apraxia of Speech and Autism: Interaction Influences and Treatment
Authors: Elad Vashdi
Abstract:
It is common to find speech deficits among children diagnosed with autism, both in the clinical field and, more recently, in research. One of the DSM-V criteria suggests a speech delay (delay in, or total lack of, the development of spoken language) but does not explain its cause. A common perception among professionals and families is that the inability to talk results from the autism. Autism is a name for a syndrome which only describes a phenomenon and is defined behaviorally. Since it is not yet based on a physiological gold standard, one cannot conclude the nature of a deficit based on the name of the syndrome. A wide retrospective study (n = 270) which included children with motor speech difficulties was conducted in Israel. The study analyzed entry evaluations in a private clinic during the years 2006-2013. The data were extracted from the reports. A high percentage of children diagnosed with autism (60%) was found. This result demonstrates the strong relationship between autism and motor speech problems. It also supports recent research findings on the occurrence of childhood apraxia of speech (CAS) among children with ASD. Only a small percentage of the participants in this research (10%) were diagnosed with CAS, even though their verbal deficits fitted well the guidelines for CAS diagnosis set by ASHA in 2007. This fact raises questions regarding the diagnostic procedure in Israel. The understanding that CAS might commonly co-occur with autism and can have a remarkable influence on the course of early development should be a guiding tool within the diagnostic procedure. CAS can explain the nature of the speech problem among some autistic children and guide treatment in a more accurate way. Calculating the prevalence of CAS to include the comorbidity with ASD reveals new numbers and suggests treating the CAS population differently.
Keywords: childhood apraxia of speech, Autism, treatment, speech
Procedia PDF Downloads 275
950 Speech Motor Processing and Animal Sound Communication
Authors: Ana Cleide Vieira Gomes Guimbal de Aquino
Abstract:
Sound communication is present in most vertebrates, from fish, mainly in species that live in murky waters, to some species of reptiles, anuran amphibians, birds, and mammals, including primates. There are, in fact, relevant similarities between human language and animal sound communication, and among these similarities are the vocalizations called calls. The first specific call in human babies is crying, which has a characteristic prosodic contour and is motivated most of the time by the need for food; it shapes the infant-caregiver interaction, communicating needs and food requests and helping to guarantee the survival of the species. The present work aims to articulate speech processing in the motor context with aspects of the project entitled 'Emotional states and vocalization: a comparative study of the prosodic contours of crying in human and non-human animals'. First, concepts of speech motor processing and general aspects of speech evolution will be presented in order to relate these two approaches to animal sound communication.
Keywords: speech motor processing, animal communication, animal behaviour, language acquisition
Procedia PDF Downloads 89
949 Morphosyntactic Abilities in Speakers with Broca’s Aphasia: A Preliminary Examination
Authors: Mile Vuković, Lana Jerkić Rajić
Abstract:
Introduction: Broca's aphasia is a non-fluent type of aphasic syndrome, which is primarily manifested by impairment of language production. In connected speech, patients with this type of aphasia produce short sentences in which they often omit function words and morphemes or choose inadequate forms. Aim: This research was conducted to examine the morphosyntactic abilities of people with Broca's aphasia, comparing them with neurologically healthy subjects without a language disorder. Method: The sample included 15 patients with Broca's post-stroke aphasia who had relatively intact auditory comprehension. The diagnosis of aphasia was based on the Boston Diagnostic Aphasia Examination. The control group comprised 16 neurologically healthy subjects without data on the presence of disorders in speech and language development. The patients' mother tongue was Serbian. The new Serbian Morphosyntactic Abilities Test (SMAT) was used. Descriptive (frequency, percentage, mean, SD, min, max) and inferential (Mann-Whitney U-test) statistics were used in data processing. Results: We noticed statistically significant differences between people with Broca's aphasia and neurotypical subjects on the SMAT (U = 1.500, z = -4.982, p = 0.000). The results showed that people with Broca's aphasia achieved low scores on the SMAT, regardless of age (ρ = -0.045, p = 0.873) and time post onset (ρ = 0.330, p = 0.229). Conclusion: Preliminary results show that the SMAT has the potential to detect morphosyntactic deficits in Serbian speakers with Broca's aphasia.
Keywords: Broca’s aphasia, morphosyntactic abilities, agrammatism, Serbian language
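A minimal sketch of the kind of inferential comparison reported above, using SciPy's Mann-Whitney U test on hypothetical SMAT scores (the numbers below are invented for illustration and are not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical SMAT scores, for illustration only.
broca_group = [12, 15, 9, 20, 14, 11, 18, 10, 13, 16, 8, 17, 12, 14, 9]
control_group = [38, 40, 39, 37, 41, 36, 40, 39, 38, 42, 37, 40, 39, 38, 41, 40]

u_stat, p_value = mannwhitneyu(broca_group, control_group, alternative='two-sided')
print(f"U = {u_stat:.3f}, p = {p_value:.4f}")
```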
Procedia PDF Downloads 72
948 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography
Authors: B. Shukir, H. Woo, P. Barzo, D. Kis
Abstract:
Preoperative brain mapping in tumors involving the speech areas plays an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold-standard method to localize cortical speech areas preoperatively, but its availability in clinical routine is limited. Diffusion-MRI-based probabilistic tractography is available from routine head MRI and can be used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. 15 patients with a left frontal tumor were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA) to localize the cortical areas which have speech functions: 1, frontal through FA; 2, frontal with FA; 3, temporal to FA; 4, temporal with FA connections were determined. Thresholds of 1%, 5%, 10% and 15% were applied. At each level, the number of frontal and temporal regions identified by fMRI and by tractography was determined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.6 ± 31.4% and 67.15 ± 23.12%, and specificity was 87.2 ± 10.4% and 75.6 ± 11.37% for frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique to localize cortical speech areas. However, its results are not yet robust enough for the neurosurgeon to rely on during the operation.
Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography
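A minimal sketch of the sensitivity and specificity computation used in such a comparison, treating the fMRI-identified regions as the reference and scoring the tractography result against them (the region sets below are hypothetical):

```python
def sensitivity_specificity(fmri_positive, tract_positive, all_regions):
    """Treat fMRI-identified regions as ground truth and score the tractography result."""
    tp = len(fmri_positive & tract_positive)
    fn = len(fmri_positive - tract_positive)
    fp = len(tract_positive - fmri_positive)
    tn = len(all_regions - fmri_positive - tract_positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical sets of left-frontal atlas region indices for one patient.
all_regions = set(range(76))
fmri_positive = {3, 7, 12, 15, 20}
tract_positive = {3, 7, 12, 22, 30}
sens, spec = sensitivity_specificity(fmri_positive, tract_positive, all_regions)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```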
Procedia PDF Downloads 166
947 Mood Choices and Modality Patterns in Donald Trump’s Inaugural Presidential Speech
Authors: Mary Titilayo Olowe
Abstract:
The controversies that trailed the political campaign and eventual choice of Donald Trump as the American president are so great that expectations are high as to what the content of his inaugural speech will portray. Given the fact that language is a dynamic vehicle for expressing intentions, the speech needs to be objectively assessed so as to access its content in the manner intended, through the three strands of meaning postulated by Systemic Functional Grammar (SFG): the ideational, the interpersonal and the textual. The focus of this paper, however, is on the interpersonal meaning, which deals with how language exhibits social roles and relationships. This paper, therefore, attempts to analyse President Donald Trump’s inaugural speech to elicit the interpersonal meaning in it. The analysis is done from the perspective of mood and modality, which are housed in SFG. Results of the mood choices, which are basically declarative, reveal an information-centered speech, while the frequent choice of the modal verb operator ‘will’ shows President Donald Trump’s ability to establish an equal and reliant relationship with his audience, i.e., the Americans. In conclusion, the appeal of the speech to different levels of interpersonal meaning is largely responsible for its overall effectiveness.
Keywords: interpersonal, modality, mood, systemic functional grammar
Procedia PDF Downloads 224
946 Speech Identification Test for Individuals with High-Frequency Sloping Hearing Loss in Telugu
Authors: S. B. Rathna Kumar, Sandya K. Varudhini, Aparna Ravichandran
Abstract:
Telugu is a south-central Dravidian language spoken in Andhra Pradesh, a southern state of India. The available speech identification tests in Telugu have been developed to determine the communication problems of individuals having a flat frequency hearing loss. These conventional speech audiometric tests would provide redundant information when used on individuals with high-frequency sloping hearing loss because of better hearing sensitivity in the low- and mid-frequency regions. Hence, conventional speech identification tests do not indicate the true nature of the communication problem of individuals with high-frequency sloping hearing loss. It is highly possible that a person with a high-frequency sloping hearing loss may get maximum scores if conventional speech identification tests are used. Hence, there is a need to develop speech identification test materials that are specifically designed to assess the speech identification performance of individuals with high-frequency sloping hearing loss. The present study aimed to develop a speech identification test for individuals with high-frequency sloping hearing loss in Telugu. Individuals with high-frequency sloping hearing loss have difficulty in the perception of voiceless consonants whose spectral energy is above 1000 Hz. Hence, word lists constructed with phonemes having mid- and high-frequency spectral energy will better estimate speech identification performance for such individuals. The phonemes /k/, /g/, /c/, /ṭ/, /t/, /p/, /s/, /ś/, /ṣ/ and /h/ are preferred for the construction of words, as these phonemes have spectral energy distributed predominantly in the frequencies above 1000 Hz. The present study developed two word lists in Telugu (each word list contained 25 words) for evaluating the speech identification performance of individuals with high-frequency sloping hearing loss. The performance of individuals with high-frequency sloping hearing loss was evaluated using both the conventional and the high-frequency word lists under a recorded voice condition. The results revealed that the developed word lists were more sensitive in identifying the true nature of the communication problem of individuals with high-frequency sloping hearing loss.
Keywords: speech identification test, high-frequency sloping hearing loss, recorded voice condition, Telugu
Procedia PDF Downloads 419
945 A Corpus-Based Contrastive Analysis of Directive Speech Act Verbs in English and Chinese Legal Texts
Authors: Wujian Han
Abstract:
In the process of human interaction and communication, speech act verbs are considered to be the most active component and the main means for information transmission, and are also taken as an indication of the structure of linguistic behavior. The theoretical value and practical significance of such everyday built-in metalanguage have long been recognized. This paper, which is part of a bigger study, aims to provide useful insights for a more precise and systematic application to speech act verb translation between English and Chinese, especially with regard to the degree to which generic integrity is maintained in the practice of translating legal documents. In this study, the corpus, i.e. Chinese legal texts and their English translations, English legal texts, ordinary Chinese texts, and ordinary English texts, serves as a testing ground for examining contrastively the usage of English and Chinese directive speech act verbs in the legal genre. The scope of this paper is relatively wide and essentially covers all directive speech act verbs which are used in ordinary English and Chinese, such as order, command, request, prohibit, threaten, advise, warn and permit. The researcher, by combining the corpus methodology with a contrastive perspective, explored a range of characteristics of English and Chinese directive speech act verbs, including their semantic, syntactic and pragmatic features, and then contrasted them in a structured way. It has been found that there are similarities between English and Chinese directive speech act verbs in the legal genre, such as similar semantic components between English speech act verbs and their translation equivalents in Chinese, and formal and accurate usage of English and Chinese directive speech act verbs in legal contexts. But notable differences have been identified between their usage in the original Chinese and English legal texts, such as valency patterns and frequency of occurrence. For example, the subjects of some directive speech act verbs are very frequently omitted in Chinese legal texts, but this is not the case in English legal texts. One practicable method to achieve adequacy and conciseness in speech act verb translation from Chinese into English in the legal genre is to repeat the subjects or the message where there is such a discrepancy, and vice versa. In addition, translation effects such as overuse and underuse of certain directive speech act verbs are also found in the translated English texts compared to the original English texts. Legal texts constitute a particularly valuable material for the study of speech act verbs. Building up such a contrastive picture of Chinese and English speech act verbs in legal language would yield results of value and interest to legal translators and students of language for legal purposes, and have practical application to legal translation between English and Chinese.
Keywords: contrastive analysis, corpus-based, directive speech act verbs, legal texts, translation between English and Chinese
Procedia PDF Downloads 499
944 Subband Coding and Glottal Closure Instant (GCI) Using SEDREAMS Algorithm
Authors: Harisudha Kuresan, Dhanalakshmi Samiappan, T. Rama Rao
Abstract:
In modern telecommunication applications, locating Glottal Closure Instants (GCIs) is important, and they are estimated directly from the speech waveform. Here, we study GCI detection using the Speech Event Detection using the Residual Excitation And a Mean-based Signal (SEDREAMS) algorithm. Speech coding uses parameter estimation and audio signal processing techniques to model the speech signal, combined with generic data compression algorithms to represent the resulting model in a compact bit stream. This paper proposes a sub-band coder (SBC), a type of transform coding, and evaluates its performance for GCI detection using SEDREAMS. In an SBC, the speech signal is divided into two or more frequency bands, and each sub-band signal is coded individually. After processing, the sub-bands are recombined to form the output signal, whose bandwidth covers the whole frequency spectrum. The signal is decomposed into low- and high-frequency components, and decimation and interpolation in the frequency domain are performed. The proposed structure significantly reduces error, and precise locations of Glottal Closure Instants (GCIs) are found using the SEDREAMS algorithm.
Keywords: SEDREAMS, GCI, SBC, GOI
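A minimal sketch of the mean-based signal at the heart of SEDREAMS, assuming a window of roughly 1.75 mean pitch periods as commonly described for the algorithm; GCI positions are then refined around the minima of this signal using the linear-prediction residual, a step omitted here:

```python
import numpy as np

def mean_based_signal(speech, sr, mean_f0=120.0):
    """Sliding windowed mean of the speech signal; its oscillation follows the pitch cycle."""
    period = sr / mean_f0
    half = int(round(1.75 * period / 2))          # window spans ~1.75 mean pitch periods
    win = np.blackman(2 * half + 1)
    win /= win.sum()
    return np.convolve(speech, win, mode='same')

def candidate_gci_intervals(mbs):
    """Local minima of the mean-based signal mark intervals expected to contain one GCI each."""
    return np.where((mbs[1:-1] < mbs[:-2]) & (mbs[1:-1] < mbs[2:]))[0] + 1
```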
Procedia PDF Downloads 356
943 Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model
Authors: Khenfer Koummich Fatma, Hendel Fatiha, Mesbahi Larbi
Abstract:
This paper presents an approach based on Hidden Markov Models (HMM) using HTK tools. The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands pronounced in two languages: French and Arabic. The recognition rate obtained is the same for both languages, Arabic and French, on the neutral words. However, there is a slight difference in favor of the Arabic speech when Gaussian white noise is added with a Signal-to-Noise Ratio (SNR) equal to 30 dB; in this case, the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when the noise is added.
Keywords: Arabic speech recognition, Hidden Markov Model (HMM), HTK, noise, TIMIT, voice command
Procedia PDF Downloads 386
942 Google Translate: AI Application
Authors: Shaima Almalhan, Lubna Shukri, Miriam Talal, Safaa Teskieh
Abstract:
Since artificial intelligence is a rapidly evolving topic that has had a significant impact on technical growth and innovation, this paper examines people's awareness, use, and engagement with the Google Translate application. To see how familiar users are with the app and its features, quantitative and qualitative research was conducted. The findings revealed that consumers have a high level of confidence in the application, benefit considerably from this sort of innovation, and find that it makes communication more convenient.
Keywords: artificial intelligence, google translate, speech recognition, language translation, camera translation, speech to text, text to speech
Procedia PDF Downloads 154