Search results for: speech impairment
932 An Event-Related Potential Investigation of Speech-in-Noise Recognition in Native and Nonnative Speakers of English
Authors: Zahra Fotovatnia, Jeffery A. Jones, Alexandra Gottardo
Abstract:
Speech communication often occurs in environments where noise conceals part of a message. Listeners must compensate for the missing auditory information by picking up distinct acoustic cues and by using semantic and sentential context to recreate the speaker’s intended message. This situation is more challenging in a nonnative than in a native language. On the other hand, early bilinguals are expected to show an advantage over late bilingual and monolingual speakers of a language because of their better executive functioning. In this study, monolingual English speakers were compared with early and late nonnative speakers of English to understand speech-in-noise (SIN) processing and its underlying neurobiological features. Auditory mismatch negativities (MMNs) were recorded using a double-oddball paradigm in response to a minimal pair differing in the middle vowel (beat/bit) at Wilfrid Laurier University in Ontario, Canada. The results did not show any significant structural or electroneural differences across groups. However, vocabulary knowledge correlated positively with performance on tests that measured SIN processing in participants who learned English after age 6. Moreover, their performance on the test correlated negatively with the integral area amplitudes in the left superior temporal gyrus (STG). In addition, the STG was engaged before the inferior frontal gyrus (IFG) in noise-free and low-noise test conditions in all groups. We infer that the pre-attentive processing of words engages the temporal lobes earlier than the fronto-central areas and that vocabulary knowledge aids the nonnative perception of degraded speech.
Keywords: degraded speech perception, event-related brain potentials, mismatch negativities, brain regions
Procedia PDF Downloads 107
931 Protective Effect of the Histamine H3 Receptor Antagonist DL77 in Behavioral Cognitive Deficits Associated with Schizophrenia
Authors: B. Sadek, N. Khan, D. Łażewska, K. Kieć-Kononowicz
Abstract:
The effects of the non-imidazole histamine H3 receptor (H3R) antagonist DL77 on MK801-induced cognitive deficits associated with schizophrenia (CDS) were investigated in adult male rats, using the passive avoidance paradigm (PAP) and the novel object recognition (NOR) task, with donepezil (DOZ) as a reference drug. The results show that acute systemic administration of DL77 (2.5, 5, and 10 mg/kg, i.p.) significantly improved MK801-induced (0.1 mg/kg, i.p.) memory deficits in the PAP. The ameliorating activity of DL77 (5 mg/kg, i.p.) in MK801-induced deficits was partly reversed when rats were pretreated with the centrally acting H2R antagonist zolantidine (ZOL, 10 mg/kg, i.p.) or with the antimuscarinic antagonist scopolamine (SCO, 0.1 mg/kg, i.p.), but not with the CNS-penetrant H1R antagonist pyrilamine (PYR, 10 mg/kg, i.p.). Moreover, the memory-enhancing effect of DL77 (5 mg/kg, i.p.) in MK801-induced memory deficits in the PAP was strongly reversed when rats were pretreated with a combination of ZOL (10 mg/kg, i.p.) and SCO (1.0 mg/kg, i.p.). Furthermore, the significant ameliorative effect of DL77 (5 mg/kg, i.p.) on MK801-induced long-term memory (LTM) impairment in the NOR test was comparable to the memory-enhancing effect provided by DOZ, and was abrogated when animals were pretreated with the histamine H3R agonist R-(α)-methylhistamine (RAMH, 10 mg/kg, i.p.). However, DL77 (5 mg/kg, i.p.) failed to provide a procognitive effect on MK801-induced short-term memory (STM) impairment in the NOR test. In addition, DL77 (5 mg/kg) did not alter the anxiety levels or locomotor activity of animals naive to the elevated plus maze (EPM), demonstrating that the improved performance with DL77 (5 mg/kg) in the PAP or NOR is unrelated to changes in emotional responding or spontaneous locomotor activity. These results provide evidence for the potential of H3Rs in the treatment of neurodegenerative disorders related to impaired memory function, e.g., CDS.
Keywords: histamine H3 receptor, antagonist, learning, memory impairment, passive avoidance paradigm, novel object recognition
Procedia PDF Downloads 203
930 Using Speech Emotion Recognition as a Longitudinal Biomarker for Alzheimer’s Diseases
Authors: Yishu Gong, Liangliang Yang, Jianyu Zhang, Zhengyu Chen, Sihong He, Xusheng Zhang, Wei Zhang
Abstract:
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide and is characterized by cognitive decline and behavioral changes. People living with Alzheimer’s disease often find it hard to complete routine tasks. However, there are limited objective assessments that aim to quantify the difficulty of certain tasks for AD patients compared to non-AD people. In this study, we propose to use speech emotion recognition (SER), especially the frustration level, as a potential biomarker for quantifying the difficulty patients experience when describing a picture. We build an SER model using data from the IEMOCAP dataset and apply the model to the DementiaBank data to detect the AD/non-AD group difference and perform longitudinal analysis to track the AD disease progression. Our results show that the frustration level detected from the SER model can possibly be used as a cost-effective tool for objective tracking of AD progression in addition to the Mini-Mental State Examination (MMSE) score.
Keywords: Alzheimer’s disease, speech emotion recognition, longitudinal biomarker, machine learning
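The longitudinal analysis described above reduces, at its simplest, to estimating a per-subject trend in model-estimated frustration over repeated recording sessions. A minimal sketch of that step, with entirely hypothetical visit times and scores (not data from IEMOCAP or DementiaBank):

```python
import numpy as np

def frustration_slope(visit_times, frustration_scores):
    """Fit a least-squares line to one subject's frustration scores
    over time and return the slope (change per unit time)."""
    slope, _intercept = np.polyfit(visit_times, frustration_scores, deg=1)
    return slope

# Hypothetical subject: frustration rises over four annual visits.
times = [0, 1, 2, 3]               # years since baseline
scores = [0.20, 0.32, 0.41, 0.55]  # model-estimated frustration level
print(round(frustration_slope(times, scores), 3))
```

A rising slope in an AD subject, compared against the non-AD group's slopes, would be the longitudinal signal the abstract refers to.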
Procedia PDF Downloads 113
929 Teaching Pragmatic Coherence in Literary Text: Analysis of Chimamanda Adichie’s Americanah
Authors: Joy Aworo-Okoroh
Abstract:
Literary texts are mirrors of real-life situations. Thus, authors choose the linguistic items that best encode their intended meanings and messages. However, words mean more than they seem: the meaning of words is not static but dynamic, as they constantly enter into relationships within a context. Literary texts can only be meaningful if all pragmatic cues are identified and interpreted. Drawing upon Teun van Dijk's theory of local pragmatic coherence, it is established that words enter into relations in a text, that these relations account for sequential speech acts in the text, and that comprehension of the text depends on the interpretation of these relations. To show the relevance of pragmatic coherence in literary text analysis, ten conversations were selected from Americanah in order to give a clear idea of the pragmatic relations used. The conversations were analysed to identify the speech act and epistemic relations inherent in them, and a close analysis of the structure of the conversations was also carried out. It was discovered that justification is the most commonly used relation and that the meaning of the text depends on interpreting the pragmatic coherence of these instances. The study concludes that to teach literature in English effectively, pragmatic coherence should be incorporated, as words mean more than they say.
Keywords: pragmatic coherence, epistemic coherence, speech act, Americanah
Procedia PDF Downloads 136
928 Deep-Learning to Generation of Weights for Image Captioning Using Part-of-Speech Approach
Authors: Tiago do Carmo Nogueira, Cássio Dener Noronha Vinhal, Gélson da Cruz Júnior, Matheus Rudolfo Diedrich Ullmann
Abstract:
Generating automatic image descriptions through natural language is a challenging task. Image captioning is the task of coherently describing an image by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures: Convolutional Neural Networks (CNNs) extract the characteristics of the images, and Recurrent Neural Networks (RNNs) generate the descriptive sentences. However, cutting-edge approaches still suffer from generating incorrect captions and from error accumulation in the decoder. To address this problem, we propose a model based on the encoder-decoder structure that introduces a module generating a weight for each word, according to its importance in forming the sentence, using part-of-speech (PoS) tags. The results demonstrate that our model surpasses state-of-the-art models.
Keywords: gated recurrent units, caption generation, convolutional neural network, part-of-speech
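The paper's weighting module is learned, but the underlying idea — scoring caption words by PoS-dependent weights so that content words count more than function words — can be sketched with an illustrative tag set and hand-picked weights (both are assumptions for illustration, not the authors' learned values):

```python
# Toy PoS-based word weighting: content-bearing tags (nouns, verbs,
# adjectives) get larger weights than function words when scoring a
# candidate caption. Tags and weights here are illustrative only.
POS_WEIGHTS = {"NOUN": 1.0, "VERB": 0.8, "ADJ": 0.6, "DET": 0.1, "ADP": 0.1}

def weighted_caption_score(tagged_caption, word_scores):
    """Combine per-word confidence scores using PoS-dependent weights."""
    total, weight_sum = 0.0, 0.0
    for (word, tag), score in zip(tagged_caption, word_scores):
        w = POS_WEIGHTS.get(tag, 0.3)  # default weight for other tags
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

caption = [("a", "DET"), ("dog", "NOUN"), ("runs", "VERB")]
scores = [0.9, 0.7, 0.6]
print(round(weighted_caption_score(caption, scores), 3))
```

In the actual model the weights would be produced by a trained module rather than a fixed table, but the aggregation step is of this shape.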
Procedia PDF Downloads 102
927 Complications and Outcomes of Cochlear Implantation in Children Younger than 12 Months: A Multicenter Study
Authors: Alimohamad Asghari, Ahmad Daneshi, Mohammad Farhadi, Arash Bayat, Mohammad Ajalloueyan, Marjan Mirsalehi, Mohsen Rajati, Seyed Basir Hashemi, Nader Saki, Ali Omidvari
Abstract:
Evidence suggests that cochlear implantation (CI) is a beneficial approach for improving auditory and speech skills in children with severe to profound hearing loss. However, it remains controversial whether implantation in children <12 months is safe and effective compared to older children. The present study aimed to determine whether children's age affects surgical complications and auditory and speech development. The current multicenter study enrolled 86 children who underwent CI surgery at <12 months of age (group A) and 362 children who underwent implantation between 12 and 24 months of age (group B). The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores were determined pre-implantation and one and two years post-implantation. Four complications (overall rate: 4.65%; three minor) occurred in group A, and 12 complications (overall rate: 4.41%; nine minor) occurred in group B. We found no statistically significant difference in complication rates between the groups (p>0.05). The mean SIR and CAP scores improved over time following CI activation in both groups; however, we did not find significant differences in CAP and SIR scores between the groups across the different time points. Cochlear implantation is a safe and efficient procedure in children younger than 12 months, providing substantial auditory and speech benefits comparable to those in children undergoing implantation at 12 to 24 months of age. Furthermore, surgical complications in younger children are similar to those in children undergoing CI at an older age.
Keywords: cochlear implant, infant, complications, outcome
Procedia PDF Downloads 108
926 Oral Grammatical Errors of Arabic as Second Language (ASL) Learners: An Applied Linguistic Approach
Authors: Sadeq Al Yaari, Fayza Al Hammadi, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari, Salah Al Yami
Abstract:
Background: When Arabic grammatical issues are considered in light of applied linguistic investigations of Arabic as a Second Language (ASL) learners, a fundamental issue arises concerning the production of speech in Arabic: the oral grammatical errors committed by ASL learners. Aims: Using manual rating as well as a computational analytic methodology to test a corpus of recorded speech by ASL learners, this study aims to find the areas of difficulty in learning Arabic grammar. More specifically, it examines how and why ASL learners make grammatical errors in their oral speech. Methods: Naturalistic tape recordings were collected from four (4) ASL learners who ranged in age from 23 to 30. All participants had completed an intensive two-year Arabic program, and a 20-minute speech sample was recorded for each participant. The collected corpus was then rated against standard Arabic grammar, a rating that involved three processes: description, analysis, and assessment. Conclusions: The outcomes of the issues addressed in this paper can be summarized in the fact that ASL learners face many grammatical difficulties when studying Arabic: word order, tenses and aspects, function words, subject-verb agreement, verb form, active-passive voice, global and local errors, and process-based errors including addition, omission, substitution, or a combination of any of them.
Keywords: grammar, error, oral, Arabic, second language, learner, applied linguistics
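The process-based error categories named in the conclusions (addition, omission, substitution) can be illustrated with a token-level alignment against a reference sentence. The sketch below uses Python's difflib and a schematic English example rather than the study's Arabic corpus:

```python
import difflib

def classify_errors(reference_tokens, learner_tokens):
    """Label token-level differences between a reference sentence and a
    learner's sentence as substitution, omission, or addition."""
    ops = difflib.SequenceMatcher(None, reference_tokens, learner_tokens).get_opcodes()
    errors = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "replace":
            errors.append(("substitution", reference_tokens[i1:i2], learner_tokens[j1:j2]))
        elif tag == "delete":
            errors.append(("omission", reference_tokens[i1:i2], []))
        elif tag == "insert":
            errors.append(("addition", [], learner_tokens[j1:j2]))
    return errors

# Schematic example: the learner omits a determiner and substitutes a verb form.
ref = ["the", "boy", "reads", "the", "book"]
learner = ["boy", "read", "the", "book"]
print(classify_errors(ref, learner))
```

Against a transcribed corpus, the same alignment would be run per utterance and the labels tallied to quantify which error types dominate.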
Procedia PDF Downloads 45
925 English Learning Speech Assistant Speak Application in Artificial Intelligence
Authors: Albatool Al Abdulwahid, Bayan Shakally, Mariam Mohamed, Wed Almokri
Abstract:
Artificial intelligence has infiltrated every part of our lives and every field we can think of, and with continuing technical developments, artificial intelligence applications are becoming ever more prevalent. We chose ELSA Speak because it is a fine example of such applications: a smartphone application, free to download on both iOS and Android, that utilizes artificial intelligence to help non-native English speakers pronounce words and phrases like a native speaker, as well as to enhance their English skills. It employs speech-recognition technology that helps the application assess its users' pronunciation; this remarkable feature distinguishes ELSA from other voice-recognition applications and increases its effectiveness. This study focused on evaluating the ELSA Speak application by testing its degree of effectiveness based on survey questions. The results of the questionnaire were variable: the majority of participants strongly agreed that ELSA has helped them enhance their pronunciation skills, but a few participants were unconfident about the application's ability to assist them in their learning journey.
Keywords: ELSA Speak application, artificial intelligence, speech-recognition technology, language learning, English pronunciation
Procedia PDF Downloads 106
924 Auditory Function in Hypothyroidism as Compared to Controls
Authors: Mrunal Phatak
Abstract:
Introduction: Thyroid hormone is important for the normal function of the auditory system, and hearing impairment can occur insidiously in subclinical hypothyroidism. The present study was undertaken with the aim of evaluating audiological tests, namely tuning fork tests, pure tone audiometry, brainstem auditory evoked potentials (BAEPs), and auditory reaction time (ART), in hypothyroid women and in age- and sex-matched controls, to evaluate the effect of thyroid hormone on hearing. The objective of the study was to investigate hearing status via the audiological profile in hypothyroidism (group 1) and in healthy controls (group 2), to compare the audiological profile between these groups, and to find the correlation of TSH, T3, and T4 levels with the above parameters. Material and methods: A total sample of 124 women aged 30 to 50 years was recruited and divided into a Cases group comprising 62 newly diagnosed hypothyroid women and a Control group of 62 women with normal thyroid profiles. Otoscopic examination, tuning fork tests, pure tone audiometry (PTA), brainstem auditory evoked potential (BAEP), and auditory reaction time (ART) tests were done in both ears, i.e., in a total of 248 ears across all subjects. Results: By BAEPs, hearing impairment was detected in a total of 64 ears (51.61%). A significant increase was seen in Wave V latency, IPL I-V, and IPL III-V, and a decrease was seen in the amplitudes of Waves I and V in both ears in the cases. A positive correlation of Wave V latency of the right and left ears is seen with TSH levels (p < 0.001), and a negative correlation with T3 (p > 0.05) and with T4 (p < 0.01). A negative correlation of Wave V amplitude of the right and left ears is seen with TSH levels (p < 0.001), and a significant positive correlation is seen with T3 and T4. Pure tone audiometry showed hearing impairment of conductive (31.29%), sensorineural (36.29%), and mixed (15.32%) types.
Hearing loss was mild in 65.32% of ears and moderate in 17.74% of ears. Pure tone averages (PTA) were significantly higher in cases than in controls in both ears. A significant positive correlation of PTA of the right and left ears is seen with TSH levels (p < 0.05), and a negative correlation with T3 and T4. A significant increase in HF ART and LF ART is seen in cases as compared to controls. A positive correlation of ART at high and low frequencies is seen with TSH levels, and a negative correlation with T3 and T4 (p > 0.05). Conclusion: The abnormal BAEPs in hypothyroid women suggest an impaired central auditory pathway; the BAEP abnormalities are indicative of a nonspecific injury in the bulbo-ponto-mesencephalic centers. The results of the auditory investigations suggest a causal relationship between hypothyroidism and hearing loss. The site of the lesion in the auditory pathway is probably at several levels, namely in the middle ear and at cochlear and retrocochlear sites. Prolonged ART also suggests an impairment in central processing mechanisms. The present study concludes that the probable reason for hearing impairment in hypothyroidism may be delayed impulse conduction in the acoustic nerve up to the level of the midbrain (IPL I-V, III-V), particularly the inferior colliculus (Wave V), together with impairment in central processing mechanisms, as shown by prolonged ART.
Keywords: hypothyroidism, deafness, pure tone audiometry, brain stem auditory evoked potential
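The reported associations (e.g., TSH levels with Wave V latency) are linear correlations of the Pearson kind. A minimal sketch with invented illustrative values, not the study's data:

```python
import numpy as np

# Hypothetical data: TSH (mIU/L) and BAEP Wave V latency (ms) for a few
# subjects; the values are illustrative, not taken from the study.
tsh = np.array([4.8, 7.2, 9.5, 12.1, 15.3, 18.0])
wave_v_latency = np.array([5.6, 5.7, 5.9, 6.0, 6.2, 6.3])

# Pearson correlation coefficient: positive r means latency rises with TSH.
r = np.corrcoef(tsh, wave_v_latency)[0, 1]
print(f"Pearson r = {r:.3f}")
```

In practice one would use a routine that also returns a significance level (e.g., scipy.stats.pearsonr) to obtain the p-values quoted in the abstract.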
Procedia PDF Downloads 38
923 Auditory Profile Function in Hypothyroidism
Authors: Mrunal Phatak, Suvarna Raut
Abstract:
Introduction: Thyroid hormone is important for the normal function of the auditory system, and hearing impairment can occur insidiously in subclinical hypothyroidism. The present study was undertaken with the aim of evaluating audiological tests, namely tuning fork tests, pure tone audiometry, brainstem auditory evoked potentials (BAEPs), and auditory reaction time (ART), in hypothyroid women and in age- and sex-matched controls, so as to evaluate the effect of thyroid hormone on hearing. The objective of the study was to investigate hearing status via the audiological profile in hypothyroidism (group 1) and in healthy controls (group 2), to compare the audiological profile between these groups, and to find the correlation of TSH, T3, and T4 levels with the above parameters. Material and methods: A total sample of 124 women aged 30 to 50 years was recruited and divided into a Cases group comprising 62 newly diagnosed hypothyroid women and a Control group of 62 women with normal thyroid profiles. Otoscopic examination, tuning fork tests, pure tone audiometry (PTA), brainstem auditory evoked potential (BAEP), and auditory reaction time (ART) tests were done in both ears, i.e., in a total of 248 ears across all subjects. Results: By BAEPs, hearing impairment was detected in a total of 64 ears (51.61%). A significant increase was seen in Wave V latency, IPL I-V, and IPL III-V, and a decrease was seen in the amplitudes of Waves I and V in both ears in the cases. A positive correlation of Wave V latency of the right and left ears is seen with TSH levels (p < 0.001), and a negative correlation with T3 (p > 0.05) and with T4 (p < 0.01). A negative correlation of Wave V amplitude of the right and left ears is seen with TSH levels (p < 0.001), and a significant positive correlation is seen with T3 and T4. Pure tone audiometry showed hearing impairment of conductive (31.29%), sensorineural (36.29%), and mixed (15.32%) types.
Hearing loss was mild in 65.32% of ears and moderate in 17.74% of ears. Pure tone averages (PTA) were significantly higher in cases than in controls in both ears. A significant positive correlation of PTA of the right and left ears is seen with TSH levels (p < 0.05), and a negative correlation with T3 and T4. A significant increase in HF ART and LF ART is seen in cases as compared to controls. A positive correlation of ART at high and low frequencies is seen with TSH levels, and a negative correlation with T3 and T4 (p > 0.05). Conclusion: The abnormal BAEPs in hypothyroid women suggest an impaired central auditory pathway; the BAEP abnormalities are indicative of a nonspecific injury in the bulbo-ponto-mesencephalic centres. The results of the auditory investigations suggest a causal relationship between hypothyroidism and hearing loss. The site of the lesion in the auditory pathway is probably at several levels, namely in the middle ear and at cochlear and retrocochlear sites. Prolonged ART also suggests an impairment in central processing mechanisms. The present study concludes that the probable reason for hearing impairment in hypothyroidism may be delayed impulse conduction in the acoustic nerve up to the level of the midbrain (IPL I-V, III-V), particularly the inferior colliculus (Wave V), together with impairment in central processing mechanisms, as shown by prolonged ART.
Keywords: deafness, pure tone audiometry, brain stem auditory evoked potential, hypothyroidism
Procedia PDF Downloads 132
922 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers, so using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, and camera settings, and due to the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems, one of which is lip reading: the technique of comprehensively understanding underlying speech by processing the movement of the lips. It is well known that visual information about speech obtained through lip reading is very useful for human speech recognition, so lip reading systems are among the supportive technologies for hearing-impaired or elderly people and are an active research area, with the need for such systems ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)).
The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA), together with an Active Contour Model (ACM), will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used to find the class parameters and class number in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, and the results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will serve hearing-impaired persons as a language learning application, and it can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what other people said without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two-Dimensional Discrete Cosine Transform (2D-DCT)
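The 2D-DCT feature extraction step named above can be sketched as follows; the naive transform and the choice to keep only a small low-frequency block of coefficients are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

def dct2(block):
    """Naive orthonormal 2-D DCT-II, as used to compact the energy of a
    grayscale lip-region image into a few low-frequency coefficients."""
    def dct_matrix(k):
        n = np.arange(k)[:, None]
        u = np.arange(k)[None, :]
        c = np.sqrt(2.0 / k) * np.cos(np.pi * (n + 0.5) * u / k)
        c[:, 0] = 1.0 / np.sqrt(k)  # DC basis has a different scale
        return c
    cn = dct_matrix(block.shape[0])
    cm = dct_matrix(block.shape[1])
    return cn.T @ block @ cm

def lip_feature_vector(image, keep=4):
    """Flatten the top-left (low-frequency) keep x keep DCT coefficients
    into a feature vector for a downstream classifier such as an SVM."""
    coeffs = dct2(image.astype(float))
    return coeffs[:keep, :keep].ravel()

# Toy 8x8 "lip region": a constant image puts all energy in the DC term.
img = np.full((8, 8), 10.0)
feats = lip_feature_vector(img, keep=2)
print(feats)
```

In the full pipeline these DCT features would be computed on the ACM-localized lip region, reduced with LDA, and passed to the SVM classifier.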
Procedia PDF Downloads 286
921 Conspiracy Theory in Discussions of the Coronavirus Pandemic in the Gulf Region
Authors: Rasha Salameh
Abstract:
In light of the tense relationship between Saudi Arabia and Iran, this research paper sheds light on Al-Arabiya's reporting of the coronavirus in the Gulf. Particularly because most of the cases, in the beginning, were coming from Iran, some programs on this Saudi channel embraced a conspiracy theory, and hate speech was used in talking about and discussing the topic. The results of these discussions are detailed in this paper as percentages of the research sample, which comprises five programs on the Al-Arabiya channel: ‘DNA’, ‘Marraya’ (Mirrors), ‘Panorama’, ‘Tafaolcom’ (Your Interaction), and ‘Diplomatic Street’, over the period between January 19, the date of the first case in Iran, and April 10, 2020. The research shows the use of a conspiracy theory in the programs, in addition to some professional violations. The surveyed sample also shows that the matter receded as the Arab Gulf states became preoccupied with the successively increasing cases that have appeared there since the start of the pandemic. The results indicate that hate speech was present in the sample at a rate of 98.1% and that most of the programs that dealt with the Iranian issue under the corona pandemic on Al-Arabiya used the conspiracy theory, at a rate of 75.5%.
Keywords: Al-Arabiya, Iran, Corona, hate speech, conspiracy theory, politicization of the pandemic
Procedia PDF Downloads 136
920 Reduced Lung Volume: A Possible Cause of Stuttering
Authors: Shantanu Arya, Sachin Sakhuja, Gunjan Mehta, Sanjay Munjal
Abstract:
Stuttering may be defined as a speech disorder affecting the fluency domain of speech, characterized by covert features like word substitution, omission, and circumlocution, and overt features like prolongations of sounds and syllables, blocks, etc. Many etiologies have been postulated to explain stuttering based on various experiments and research. Breathlessness has also been reported by many individuals with stuttering, for which breathing exercises are generally advised; however, no studies have objectively evaluated pulmonary capacity or, further, the efficacy of those breathing exercises. The Pulmonary Function Test (PFT), which evaluates parameters like Forced Vital Capacity, Peak Expiratory Flow Rate, and Forced Expiratory Flow Rate, can be used to study the pulmonary behavior of individuals with stuttering. The study aimed: a) to identify speech motor and physiologic behaviours associated with stuttering by administering the PFT; b) to recognize possible reasons for an association between speech motor behaviour and stuttering severity. In this regard, PFTs were administered to individuals who reported signs and symptoms of stuttering and showed abnormal scores on the Stuttering Severity Index. Parameters like Forced Vital Capacity, Forced Expiratory Volume, Peak Expiratory Flow Rate (L/min), and Forced Expiratory Flow Rate (L/min) were evaluated and correlated with Stuttering Severity Index scores. Results showed a significant decrease in these parameters (lower than normal scores) in individuals with established stuttering. A strong correlation was also found between the degree of stuttering and the degree of decrease in pulmonary volumes. Thus, it is evident that fluent speech requires the strong support of lung pressure and the requisite volumes. Further research demonstrating the efficacy of abdominal breathing exercises in this regard is needed.
Keywords: forced expiratory flow rate, forced expiratory volume, forced vital capacity, peak expiratory flow rate, stuttering
Procedia PDF Downloads 275
919 The Analysis of Deceptive and Truthful Speech: A Computational Linguistic Based Method
Authors: Seham El Kareh, Miramar Etman
Abstract:
Recently, detecting liars and extracting features which distinguish them from truth-tellers have been the focus of a wide range of disciplines. To the authors' best knowledge, most of the work has been done on facial expressions and body gestures, but only a few works have examined the language used by liars and truth-tellers. This paper sheds light on four axes. The first axis copes with building an audio corpus of deceptive and truthful speech by Egyptian Arabic speakers. The second axis focuses on examining the human perception of lies and demonstrating the need for computational linguistic-based methods to extract the features which characterize truthful and deceptive speech. The third axis is concerned with building a linguistic analysis program that can extract from the corpus the inter- and intra-linguistic cues for deceptive and truthful speech; the program built here is based on selected categories from the Linguistic Inquiry and Word Count (LIWC) program. Our results demonstrated that Egyptian Arabic speakers, when lying, preferred to use first-person pronouns and the present tense over the past tense, and their lies lacked second-person pronouns; when telling the truth, they preferred verbs related to motion and nouns related to time. The results also showed that a larger dataset is needed to establish the significance of words related to emotions and numbers.
Keywords: Egyptian Arabic corpus, computational analysis, deceptive features, forensic linguistics, human perception, truthful features
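The LIWC-style analysis described above amounts to counting, per transcript, the share of tokens falling into predefined word categories. A minimal sketch with made-up English category lists (the real LIWC dictionaries are licensed, far larger, and the study's corpus is Egyptian Arabic):

```python
# Minimal LIWC-style category counter. Categories and word lists are
# illustrative stand-ins, not the actual LIWC dictionaries.
CATEGORIES = {
    "first_person": {"i", "me", "my", "we", "our"},
    "second_person": {"you", "your"},
    "motion_verbs": {"go", "went", "walk", "run", "move"},
    "time_nouns": {"day", "hour", "morning", "yesterday"},
}

def category_rates(text):
    """Return each category's share of the total token count."""
    tokens = text.lower().split()
    total = len(tokens)
    return {cat: sum(t in words for t in tokens) / total
            for cat, words in CATEGORIES.items()}

rates = category_rates("I went to the market yesterday morning with my brother")
print(rates)
```

Comparing such rates between the deceptive and truthful portions of a corpus is what yields findings like the pronoun and tense preferences reported here.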
Procedia PDF Downloads 206
918 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children
Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura
Abstract:
Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. Diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic evaluation of sibilant pronunciation. The study covers analysis of various speech signal measures, including features proposed in the literature for the description of normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM) and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed to compare sigmatism and normative pronunciation. Statistically significant differences between the two groups are obtained for most of the proposed features at p < 0.05. All spectral moments and fricative formants appear to be distinctive between pathology and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools.
Correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the use of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification
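The normalized spectral moments described above can be computed directly from a magnitude spectrum, and group differences checked with the same Mann-Whitney U test the study applies. A minimal Python sketch (all numeric values are invented for illustration and are not the study's corpus data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def spectral_moments(magnitude, freqs):
    """First four normalized spectral moments of a magnitude spectrum:
    centroid, spread (std), skewness, kurtosis."""
    p = magnitude / magnitude.sum()          # treat the spectrum as a distribution
    centroid = np.sum(freqs * p)
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))
    skew = np.sum(((freqs - centroid) ** 3) * p) / spread ** 3
    kurt = np.sum(((freqs - centroid) ** 4) * p) / spread ** 4
    return centroid, spread, skew, kurt

# Toy comparison of one feature (spectral centroid, Hz) across two groups.
rng = np.random.default_rng(0)
normative = rng.normal(6500, 400, 30)     # hypothetical centroids, normative sibilants
pathological = rng.normal(5200, 600, 30)  # hypothetical centroids, sigmatism
stat, p_value = mannwhitneyu(normative, pathological)
print(p_value < 0.05)  # significant group difference on this toy data
```

The same test would be run once per feature in the 51-dimensional vector.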
Procedia PDF Downloads 301
917 Part of Speech Tagging Using Statistical Approach for Nepali Text
Authors: Archit Yajnik
Abstract:
Part-of-speech tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. Training and testing data sets are randomly separated from an annotated Nepali corpus, and both methods are applied to them. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach, achieving an accuracy of 95.43%. An error analysis of the mismatched tags is discussed in detail.
Keywords: hidden Markov model, natural language processing, POS tagging, Viterbi algorithm
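The Viterbi decoding step can be sketched with a toy HMM. The tag set, vocabulary, and all probabilities below are invented for illustration; they are not estimates from the Nepali corpus:

```python
import numpy as np

# Toy HMM POS tagger: hidden states are tags, observations are words.
tags = ["NOUN", "VERB"]
words = ["ram", "runs"]
start = np.array([0.7, 0.3])              # P(tag at t=0)
trans = np.array([[0.4, 0.6],             # P(tag_t | tag_{t-1})
                  [0.8, 0.2]])
emit = np.array([[0.9, 0.1],              # P(word | tag)
                 [0.2, 0.8]])

def viterbi(obs):
    """Most likely tag sequence for a list of word indices."""
    T = len(obs)
    delta = np.zeros((T, len(tags)))           # best path probability so far
    psi = np.zeros((T, len(tags)), dtype=int)  # backpointers
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * trans * emit[:, obs[t]]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):              # follow backpointers
        path.append(int(psi[t][path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi([0, 1]))  # tags "ram runs" -> ['NOUN', 'VERB']
```

In practice the probability tables are estimated by counting over the annotated training split.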
Procedia PDF Downloads 329
916 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the legacy software. The integration incorporates reassigned spectrogram methodologies, enabling detailed acoustic analysis and, in particular, more precise vowel formant estimation; this yields a substantial performance improvement in landmark detection compared to conventional methods. On the deep learning side, the combined CNN-RNN model is equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing it to capture intricate dependencies within Italian speech vowels and making it highly adaptable in the distinctive feature domain. Furthermore, an advanced temporal modeling approach employs Bayesian temporal encoding to refine the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology.
Upon rigorous testing on the LaMIT database of speech recorded in a silent room by four Italian native speakers, the landmark detector achieves a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks occur in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as a front end in a speech recognition system. The integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advancement in Italian speech vowel landmark detection. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for improving landmark detection accuracy in Italian speech vowels, establishing a foundation for future advances in speech signal processing and for practical applications across domains requiring robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
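True and false detection rates of the kind reported above are typically computed by matching detected landmark times to reference annotations within a tolerance window. A minimal sketch; the function name, the 20 ms tolerance, and the times are illustrative assumptions, not details from the paper:

```python
def score_landmarks(detected, reference, tol=0.02):
    """Match detected landmark times (seconds) to reference times within a
    tolerance window; each reference landmark may match at most once."""
    matched = set()
    hits = 0
    for d in detected:
        for i, r in enumerate(reference):
            if i not in matched and abs(d - r) <= tol:
                matched.add(i)
                hits += 1
                break
    true_rate = hits / len(reference)                    # proportion of refs found
    false_rate = (len(detected) - hits) / len(detected)  # spurious detections
    return true_rate, false_rate

ref = [0.10, 0.35, 0.62, 0.90]
det = [0.11, 0.36, 0.61, 0.75]  # three close hits, one spurious detection
print(score_landmarks(det, ref))
```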
Procedia PDF Downloads 63
915 The Influence of Neural Synchrony on Auditory Middle Latency and Late Latency Responses and Its Correlation with Audiological Profile in Individuals with Auditory Neuropathy
Authors: P. Renjitha, P. Hari Prakash
Abstract:
Auditory neuropathy spectrum disorder (ANSD) is an auditory disorder with normal cochlear outer hair cell function and disrupted auditory nerve function. It results in a unique clinical profile: absent auditory brainstem response (ABR), absent acoustic reflex, and the presence of otoacoustic emissions (OAE) and cochlear microphonics. The lesion site could be the cochlear inner hair cells, the synapse between the inner hair cells and type I auditory nerve fibers, and/or the auditory nerve itself. However, literature on synchrony at higher levels of the auditory system is sporadic, and the topic is less well understood. It would be interesting to see whether there is recovery of neural synchrony at higher auditory centers, and whether the level at which the auditory system recovers adequate synchrony, to the extent of producing observable evoked response potentials (ERPs), can predict speech perception. In the current study, eight ANSD participants and healthy controls underwent detailed audiological assessment including ABR, auditory middle latency response (AMLR), and auditory late latency response (ALLR). AMLR was recorded for clicks, and ALLR was evoked using 500 Hz and 2 kHz tone bursts. Analysis revealed that the participants could be categorized into three groups. Group I (2/8), where ALLR was present only for the 2 kHz tone burst. Group II (4/8), where AMLR was absent and ALLR was seen for both stimuli. Group III (2/8) consisted of individuals with identifiable AMLR and ALLR for all stimuli. The highest speech identification score observed in the ANSD group was 30%, which is considered poor speech perception. Overall, the results indicate that the site of neural synchrony recovery could vary across individuals with ANSD. Some individuals show recovery of neural synchrony at the thalamocortical level, while others show it only at the cortical level. Within the ALLR itself there could be variation across stimuli, which again could be related to neural synchrony.
Nevertheless, none of these patterns could possibly explain the speech perception ability of the individuals. Hence, it could be concluded that neural synchrony as measured by evoked potentials is not a good clinical predictor of speech perception.
Keywords: auditory late latency response, auditory middle latency response, auditory neuropathy spectrum disorder, correlation with speech identification score
Procedia PDF Downloads 149
914 A Stylistic Analysis of the Short Story ‘The Escape’ by Qaisra Shahraz
Authors: Huma Javed
Abstract:
Stylistics is a broad term concerned with both literature and linguistics, which adds to its significance. This research aims to analyze Qaisra Shahraz's short story ‘The Escape’ from a stylistic viewpoint. The focus of the study is on three aspects of the short story: the grammatical category, the lexical category, and figures of speech. The research design is both exploratory and descriptive. The analysis of the data shows that the writer has used more nouns in the story than other lexical items, which suggests that the story has a descriptive rather than narrative style.
Keywords: The Escape, stylistics, grammatical category, lexical category, figure of speech
Procedia PDF Downloads 237
913 Imprecise Vowel Articulation in Down Syndrome: An Acoustic Study
Authors: Anitha Naittee Abraham, N. Sreedevi
Abstract:
Individuals with Down syndrome (DS) have relatively better expressive language compared to other individuals with intellectual disabilities. Reduced speech intelligibility is one of the major concerns for this group due to their anatomical and physiological differences. The study investigated the vowel articulation of Malayalam-speaking children with DS in the age range of 5-10 years. The vowel production of 10 children with DS was compared with that of typically developing children in the same age range. Vowels were extracted from 3 words with the corner vowels /a/, /i/ and /u/ in the word-initial position, using Praat (version 5.3.23) software. Acoustic analysis was based on vowel space area (VSA), formant centralization ratio (FCR) and F2i/F2u. The findings revealed increased formant values for the control group except for F2a and F2u. Also, the experimental group had higher FCR, lower VSA, and lower F2i/F2u values, suggestive of imprecise vowel articulation due to restricted tongue movements. The results of the independent t-test revealed a significant difference in F1a, F2i, F2u, VSA, FCR and F2i/F2u values between the experimental and control groups. These findings support the fact that children with DS have imprecise vowel articulation that interferes with overall speech intelligibility. Hence it is essential to target oromotor skills to enhance speech intelligibility, which in turn benefits the social and vocational functioning of these individuals.
Keywords: Down syndrome, FCR, vowel articulation, vowel space
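The three metrics can be computed from the corner-vowel formants. The sketch below uses the standard triangular VSA formula and a commonly used FCR definition; the formant values are illustrative, not the study's data:

```python
def vowel_metrics(F1i, F2i, F1a, F2a, F1u, F2u):
    """Triangular vowel space area over the corner vowels /i a u/, the
    formant centralization ratio (FCR), and F2i/F2u; formants in Hz."""
    # Shoelace formula for the area of the /i a u/ triangle in F1-F2 space.
    vsa = abs(F1i * (F2a - F2u) + F1a * (F2u - F2i) + F1u * (F2i - F2a)) / 2
    # FCR: centralizing formants in the numerator, decentralizing in the denominator.
    fcr = (F2u + F2a + F1i + F1u) / (F2i + F1a)
    return vsa, fcr, F2i / F2u

# Illustrative values for a typically developing child (not study data):
vsa, fcr, ratio = vowel_metrics(F1i=400, F2i=3000, F1a=1000, F2a=1800,
                                F1u=450, F2u=1100)
print(round(vsa), round(fcr, 2), round(ratio, 2))  # 540000 0.94 2.73
```

Higher FCR and lower VSA and F2i/F2u, as reported for the DS group, all indicate a more centralized (compressed) vowel space.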
Procedia PDF Downloads 186
912 Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe
Authors: Babatunde Olumide Olawale, Oyebode Olumide Oyediran
Abstract:
A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes. However, frequent reports of unauthorised persons gaining access to security safes in order to remove documents and items indicate that security gaps remain in the technologies currently used for access control. In this paper, we try to solve this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed by the combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted in two separate web-based servers, and for automatic analysis of our work, the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to grant access to authorised users while denying unauthorised persons access to the security safe.
Keywords: access control, multimodal biometrics, pattern recognition, security safe
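The sequential decision logic (face first, then voice) can be sketched independently of the SIFT and MFCC matchers. Everything below — the function name, thresholds, and match scores — is a hypothetical illustration of the fusion order described above, not the authors' implementation:

```python
def authenticate(face_score, voice_score,
                 face_threshold=0.80, voice_threshold=0.75):
    """Sequential fusion: the voice stage is evaluated only if the face
    stage passes, mirroring the face-then-speech order of the system."""
    if face_score < face_threshold:
        return "denied: face mismatch"      # voice stage never runs
    if voice_score < voice_threshold:
        return "denied: voice mismatch"
    return "access granted"

print(authenticate(0.91, 0.82))  # both modalities pass
print(authenticate(0.91, 0.40))  # face passes, voice fails
print(authenticate(0.55, 0.99))  # rejected before the voice stage
```

A design benefit of the sequential order is that the cheaper or faster modality can screen out impostors before the second matcher is invoked at all.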
Procedia PDF Downloads 335
911 Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals
Authors: Surbhi Mathur, J. M. Vyas
Abstract:
Although forensic speaker recognition technology has developed rapidly, many problems remain to be solved. The biggest problem arises when cases involving disguised voice samples are submitted for examination and identification. Voice samples of this type from anonymous callers are frequently encountered in crimes involving kidnapping, blackmail, hoax extortion and more, where the speaker makes a deliberate effort to manipulate their natural voice in order to conceal their identity for fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speaker and thus complicates the process of identification. The sole objective of this doctoral project is to determine whether definite opinions can be rendered in cases involving disguised speech. To this end, the effects of different disguise forms on personal identification, and the speaker recognition rate for various voice disguise techniques (raised pitch, lowered pitch, increased nasality, covering the mouth, constricting the vocal tract, an obstacle in the mouth, etc.), are determined experimentally by analyzing and comparing the amount of phonetic and acoustic variation between artificial (disguised) and natural samples of an individual, using auditory as well as spectrographic analysis.
Keywords: forensic, speaker recognition, voice, speech, disguise, identification
Procedia PDF Downloads 368
910 Effectiveness of Impairment Specified Muscle Strengthening Programme in a Group of Disabled Athletes
Authors: A. L. I. Prasanna, E. Liyanage, S. A. Rajaratne, K. P. A. P. Kariyawasam, A. A. J. Rajaratne
Abstract:
Maintaining or improving the muscle strength of the injured body part is essential to optimize performance among disabled athletes. General conditioning and strengthening exercises might be ineffective if not sufficiently intense or targeted for each participant's specific impairment. Specific strengthening programmes, targeted at the affected body part, are essential to improve the strength of impaired muscles, and the increase in strength helps reduce the impact of disability. Methods: The muscle strength of the hip, knee and ankle joints was assessed in a group of randomly selected disabled athletes using the Medical Research Council (MRC) grading. Those having muscle strength of grade 4 or less (24 in number) were selected for this study and given a custom-made exercise program designed to strengthen their hip, knee or ankle joint musculature, according to the muscle or group of muscles affected. The effectiveness of the strengthening program was assessed after a period of 3 months. Results: Statistical analysis was done using Minitab 16 statistical software. A Mann-Whitney U test was used to compare the strength of each muscle group before and after the exercise programme. A significant difference was observed after the three-month strengthening program for knee flexors (left and right) (P = 0.0889, 0.0312), hip flexors (left and right) (P = 0.0312, 0.0466), hip extensors (left and right) (P = 0.0478, 0.0513), ankle plantar flexors (left and right) (P = 0.0466, 0.0423) and right ankle dorsiflexors (P = 0.0337). No significant difference in strength was observed after the strengthening program in the knee extensors (left and right), hip abductors (left and right) and left ankle dorsiflexors. Conclusion: Impairment-specific exercise programmes appear to be beneficial for disabled athletes, significantly improving the muscle strength of the affected joints.
Keywords: muscle strengthening programme, disabled athletes, physiotherapy, rehabilitation sciences
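A before/after comparison of this kind can be run with SciPy's Mann-Whitney U implementation. The MRC-style strength scores below are invented for illustration; they are not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical MRC-style strength grades (0-5) for one muscle group,
# before and after a three-month programme (12 athletes per condition).
before = [3, 3, 4, 3, 4, 3, 4, 3, 3, 4, 3, 4]
after  = [4, 4, 5, 4, 5, 4, 5, 4, 4, 5, 4, 5]

# Two-sided Mann-Whitney U test, as used in the study.
stat, p = mannwhitneyu(before, after, alternative="two-sided")
print(p < 0.05)  # significant improvement on this toy data
```

The test is repeated once per muscle group (knee flexors, hip flexors, and so on), which is how the per-group P values in the abstract arise.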
Procedia PDF Downloads 357
909 Human Computer Interaction Using Computer Vision and Speech Processing
Authors: Shreyansh Jain Jeetmal, Shobith P. Chadaga, Shreyas H. Srinivas
Abstract:
The Internet of Things (IoT) is seen as the next major step in the ongoing revolution of the Information Age. It is predicted that in the near future billions of embedded devices will be communicating with each other to perform a plethora of tasks with or without human intervention. One of the major hotbeds of ongoing research activity in IoT is Human Computer Interaction (HCI). HCI is used to facilitate communication between an intelligent system and a user. An intelligent system typically comprises various sensors, actuators and embedded controllers which communicate with each other to monitor data collected from the environment. Communication from the user to the system is typically done using voice. One of the major applications of HCI is in home automation as a personal assistant. The prime objective of our project is to implement a use case of HCI for home automation. Our system is designed to detect and recognize the users and personalize the appliances in the house according to their individual preferences. Our HCI system is also capable of speaking with the user when certain commands are spoken, such as searching the web for information and controlling appliances. Our system can also monitor the environment in the house, such as air quality and gas leakages, for added safety.
Keywords: human computer interaction, internet of things, computer vision, sensor networks, speech to text, text to speech, android
Procedia PDF Downloads 362
908 The Effect of the Base Computer Method on Repetitive Behaviors and Communication Skills
Authors: Hoorieh Darvishi, Rezaei
Abstract:
Introduction: This study investigates the efficacy of computer-based interventions for children with Autism Spectrum Disorder, specifically targeting communication deficits and repetitive behaviors. The research evaluates novel software applications designed to enhance narrative capabilities and sensory integration through structured, progressive intervention protocols. Method: The study evaluated two intervention software programs designed for children with autism, focusing on narrative speech and sensory integration. Twelve children aged 5-11 participated in the two-month intervention, attending three 45-minute weekly sessions, with pre- and post-tests measuring speech, communication, and behavioral outcomes. The narrative speech software incorporated 14 stories using the Cohen model. It progressively reduced software assistance as children improved their storytelling abilities, ultimately enabling independent narration. The process involved story comprehension questions and guided story completion exercises. The sensory integration software featured approximately 100 exercises progressing from basic classification to complex cognitive tasks. The program included attention exercises, auditory memory training (advancing from single to four-syllable words), problem-solving, decision-making, reasoning, working memory, and emotion recognition activities. Each module was accompanied by frequency- and pitch-adjusted music that the child enjoys, to enhance learning through multiple sensory channels (visual, auditory, and tactile). Conclusion: The results indicated that the use of these software programs significantly improved communication and narrative speech scores in children, while also reducing scores related to repetitive behaviors.
Findings: These findings highlight the positive impact of computer-based interventions on enhancing communication skills and reducing repetitive behaviors in children with autism.
Keywords: autism, narrative speech, persian, SI, repetitive behaviors, communication
Procedia PDF Downloads 12
907 Developing Communicative Skills in Foreign Languages by Video Tasks
Authors: Ekaterina G. Lipatova
Abstract:
The developing potential of a video task in teaching foreign languages involves opportunities to improve four aspects of the speech production process: listening, reading, speaking and writing. A video presents a sequence of actions, realized in logically connected pictures and a verbalized speech flow, which simplifies and stimulates the process of perception. In this way students' listening skills are developed effectively, as are intellectual abilities such as synthesizing, analyzing and generalizing information. In terms of teaching capacity, a video task is, in our opinion, more stimulating than traditional listening, since it draws the student into the plot of the communicative situation and its emotional background and potentially makes them react to the gist in cognitive and communicative ways. To be an effective teaching method, the video task should be structured according to the psycholinguistic characteristics of the speech production process; in other words, it should include three phases: before watching, while watching and after watching. The system of tasks provided for each phase might involve responding to the video content in the form of gap-filling tasks, multiple choice and true-or-false tasks (reading skills), or exercises on expressing an opinion and completing a project (writing and speaking skills). In the before-watching phase, we ask the students to adjust their perception to the topic and problem of the chosen video with tasks such as 'What do you know about this problem?', 'Is it new to you?', 'Have you ever faced a situation of…?'. We then proceed with lexical and grammatical analysis of the language units that form the body of the speech sample, to ease perception and develop the students' lexicon.
The goal of the while-watching phase is to build the students' awareness of the problem presented in the video and challenge their attitude towards what they have seen, by identifying mistakes in statements about the video content or by making a summary that justifies their understanding. Finally, we move on to the development of their speech skills within the communicative situation they have observed and learnt, stimulating them to search for similar ideas in their own backgrounds and present them orally or in written form, or to express their own opinion on the problem. It should be highlighted that a video task should feature a current, valid and interesting event related to the future profession of the students, since this helps activate their cognitive, emotional, verbal and ethical capacities. Also, logically structured video tasks are easily integrated into e-learning systems and give students the opportunity to work with the foreign language on their own.
Keywords: communicative situation, perception mechanism, speech production process, speech skills
Procedia PDF Downloads 245
906 Analyzing Speech Acts in Reddit Posts of Formerly Incarcerated Youths
Authors: Yusra Ibrahim
Abstract:
This study explores the online discourse of justice-involved youths on Reddit, focusing on how anonymity and asynchronicity influence their ability to share and reflect on their incarceration experiences within the "Ask Me Anything" (AMA) community. The study utilizes a quantitative analysis of speech acts to examine the varied communication patterns exhibited by youths and commenters across two AMA threads. The results indicate that, although Reddit is not specifically designed for formerly incarcerated youths, its features provide a supportive environment for them to share their incarceration experiences with non-incarcerated individuals. The level of empathy and support from the audience varies based on the audience's perspectives on incarceration and related traumatic experiences. Additionally, the study identifies a reciprocal relationship in which youths benefit from community support while offering insights into the juvenile justice system and helping the audience understand the experience of incarceration. The study also reveals the culture shocks, in both physical and digital environments, that youths experience after release and when using social media platforms and the internet. The study has implications for juvenile justice personnel, policymakers, and researchers in the juvenile justice system.
Keywords: juvenile justice, online discourse, reddit AMA, anonymity, speech acts taxonomy, reintegration, online community support
Procedia PDF Downloads 42
905 Leadership Effectiveness Compared among Three Cultures Using Voice Pitches
Authors: Asena Biber, Ates Gul Ergun, Seda Bulut
Abstract:
Based on the literature, there are a large number of studies investigating the relationship between culture and leadership effectiveness. Although giving effective speeches is a vital characteristic for a leader to be perceived as effective, to our knowledge there is no research studying the determinants of perceived effective leader speech. The aim of this study is to find the effects of both culture and voice pitch on perceptions of the effectiveness of a leader's speech. Our hypothesis is that people from high power distance countries will perceive a leader's speech as effective when the leader's voice pitch is high, compared with people from relatively low power distance countries. The participants of the study were 36 undergraduate students (12 Pakistanis, 12 Nigerians, and 12 Turks) who are studying in Turkey. In national power distance scores, Nigerians ranked first, Turks second and Pakistanis third. There are two independent variables in this study: nationality, with three groups representing three levels of power distance, and the leader's voice pitch, manipulated at high and low levels. The researchers prepared an audio recording to manipulate the high and low voice pitch conditions: a professional whose native language is English read a predetermined speech in both conditions. Voice pitch was measured using Hertz (Hz) and decibels (dB). Each nationality group (Pakistani, Nigerian, and Turkish) was divided into groups of six students who listened to either the low or the high pitch condition in the cubicles of the laboratory. Participants were asked to listen to the audio and fill in a questionnaire measuring leadership effectiveness on a response scale ranging from 1 to 5. To determine the effects of nationality and voice pitch on the perceived effectiveness of the leader's speech, a 3 (Pakistani, Nigerian, and Turkish) x 2 (low voice pitch and high voice pitch) between-subjects analysis of variance was carried out.
The results indicated that there was no significant main effect of voice pitch and no significant interaction effect on the perceived effectiveness of the leader's speech. However, there was a significant main effect of nationality. Based on the results of Tukey's HSD post-hoc test, only the difference in perceived effectiveness of the leader's speech between Pakistanis and Nigerians was statistically significant. These results show that the hypothesis of this study was not supported. As limitations of the study, it is important to note that the sample size should be bigger. Also, in further studies the language of the questionnaire and the speech should be in the participants' native language.
Keywords: culture, leadership effectiveness, power distance, voice pitch
Procedia PDF Downloads 182
904 Effects of Vitexin on Scopolamine-Induced Memory Impairment in Rats
Authors: Mehdi Sheikhi, Marjan Nassiri-Asl, Esmail Abbasi, Mahsa Shafiee
Abstract:
Various synthetic derivatives of natural flavonoids are known to have neuroactive properties. The present study aimed to investigate the effects of vitexin (5,7,4'-trihydroxyflavone-8-glucoside), a flavonoid found in tartary buckwheat sprouts, the wheat leaf phenolome, Mimosa pudica Linn and Passiflora spp., on scopolamine-induced memory impairment in rats. To achieve this goal, we assessed the effects of vitexin on memory retrieval in the presence or absence of scopolamine using a step-through passive avoidance trial. In the first part of the study, vitexin (25, 50, and 100 μM) was administered intracerebroventricularly (i.c.v.) before the acquisition trials. In the second part, vitexin, at the same doses, was administered before scopolamine (10 μg, i.c.v.) and before the acquisition trials. During the retention tests, vitexin (100 μM) in the absence of scopolamine significantly increased the step-through latencies compared to scopolamine. In addition, vitexin (100 μM) significantly reversed the shorter step-through latencies induced by scopolamine (P < 0.05). These results indicate that vitexin has a potential role in enhancing memory retrieval. A possible mechanism is modulation of cholinergic receptors; however, other mechanisms may be involved in its effects under acute exposure.
Keywords: flavonoid, memory retrieval, passive avoidance, scopolamine, vitexin
Procedia PDF Downloads 352
903 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study
Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu
Abstract:
Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to providing clinical services during the COVID-19 epidemic. As a result, tele-assessment of dysphagia increased in India. Aim: This study aimed to identify challenges faced by Indian SLPs while providing tele-assessment to individuals with dysphagia during the COVID-19 outbreak from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature, lasted from 2020 to 2021, and enrolled participants who met its inclusion and exclusion criteria. Based on sample size calculations, it was decided to recruit roughly 246 people. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals verified the questionnaire content for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via social media platforms such as e-mail and WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-evaluation of dysphagia were noted from the oromotor examination onwards.
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties in doing the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs reported that online assessments were inefficient and time-consuming, and a larger percentage stated that they would not recommend tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the epidemic. When it came to the amount of food, the majority proposed a small amount. Apart from difficulty positioning the patient for assessment and obtaining less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to assess online. For patients on NPO, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though most SLPs could detect aspiration by coughing, many found it difficult to discern a gurgly voice quality after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain importance in India.
Keywords: dysphagia, teleassessment, challenges, Indian SLP
Procedia PDF Downloads 136