Search results for: speech recognition performance
14497 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to L1 Native Speakers of L2 and Explores Whether Bilinguals Will Follow Target L2 Patterns in Speech and Co-speech
Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan
Abstract:
Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on nonverbal representation of events in gesture during the act of speaking (i.e., “thinking for speaking”). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies—mostly with small sample sizes—suggest inconclusive findings: some show greater achievement of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions because they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts. English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre=he is running; circle arms next to body conveying running; path-only: el cruza el puente=he crosses the bridge; trace finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish—a pattern that was also largely evident in co-speech gesture. Bilinguals’ production in L2 English, however, followed the patterns of the target language in speech—with greater preference for conflated packaging—but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of the L2 has more limited effects on L2 gestures, and that mastery of native-like patterns in L2 gesture might take longer than native-like L2 speech patterns.
Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis
Procedia PDF Downloads 76
14496 Cognitive Semantics Study of Conceptual and Metonymical Expressions in Johnson's Speeches about COVID-19
Authors: Hussain Hameed Mayuuf
Abstract:
The study is an attempt to investigate the conceptual metonymies used in political discourse about COVID-19. Thus, this study tries to analyze and investigate how the conceptual metonymies in Johnson's speeches about coronavirus are constructed. It aims at identifying how metonymies are relevant to understanding the messages in Boris Johnson's speeches, finding out how conceptual blending theory (CBT) can help people understand the messages in political speech about COVID-19, and, lastly, pointing out the kinds of integration networks that are common in political speech. The study is based on the hypotheses that conceptual blending theory is a powerful tool for investigating the intended messages in Johnson's speeches, and that there are different processes of blending networks and conceptual mapping that enable listeners to identify the messages in political speech. This study presents a qualitative and quantitative analysis of four speeches about COVID-19 delivered by Boris Johnson. The selected data have been tackled from the cognitive-semantic perspective by adopting Conceptual Blending Theory as the model for analysis. The study concludes that CBT is applicable to the analysis of metonymies in political discourse: its mechanisms enable listeners to analyze and understand these speeches, and listeners can identify and understand the hidden messages in Biden's and Johnson's discourse about COVID-19 by using different conceptual networks. Finally, it is concluded that double-scope networks are the most common type of blending of metonymies in political speech.
Keywords: cognitive, semantics, conceptual, metonymical, COVID-19
Procedia PDF Downloads 128
14495 Foot Recognition Using Deep Learning for Knee Rehabilitation
Authors: Rakkrit Duangsoithong, Jermphiphut Jaruenpunyasak, Alba Garcia
Abstract:
The use of foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system is intended to capture a patient image in a controlled room and background to recognize the foot in limited views. However, such a system can be inconvenient for monitoring knee exercises at home. In order to overcome these problems, this paper proposes to use a deep learning method based on Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with traditional classification methods using Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy, but with higher complexity, in recognizing foot images from online databases than the traditional classification methods.
Keywords: foot recognition, deep learning, knee rehabilitation, convolutional neural network
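For illustration, here is a minimal sketch of the HOG-plus-SVM baseline named above, written with scikit-image and scikit-learn; the image size, HOG parameters, and random stand-in data are assumptions, not the paper's setup.
```python
# Hedged sketch of a HOG + SVM baseline; `images` and `labels` are random stand-ins.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def hog_features(images):
    # Describe each grayscale image by its histogram of oriented gradients.
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))          # placeholder foot images
labels = rng.integers(0, 2, size=100)       # placeholder foot / non-foot labels

X = hog_features(images)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("HOG+SVM accuracy:", clf.score(X_test, y_test))
```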
Procedia PDF Downloads 161
14494 Specified Human Motion Recognition and Unknown Hand-Held Object Tracking
Authors: Jinsiang Shaw, Pik-Hoe Chen
Abstract:
This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-training database model for motion recognition or for the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine whether a user's hand is approaching or departing from an object. It uses a background subtraction method to separate the human and the object from the background, and employs behavior features to effectively interpret human object-grabbing actions. With an object’s histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object's trajectory can be recorded and the object itself can be located. This technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm with an accuracy of 83% for shoplifting detection.
Keywords: automatic tracking, back projection, motion recognition, shoplifting
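A minimal sketch of histogram back projection tracking in the spirit described above, using OpenCV's classic CamShift pipeline; the video path and the initial object window are hypothetical placeholders, not the authors' configuration.
```python
# Hedged sketch: hue-histogram back projection with CamShift tracking.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80                # assumed initial object window
roi = frame[y:y+h, x:x+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# The hue histogram characterizes the object independently of scale and shape.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back projection: per-pixel likelihood of belonging to the object.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, (x, y, w, h) = cv2.CamShift(back_proj, (x, y, w, h), term)
    print("object at", (x, y, w, h))         # trajectory point to record
```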
Procedia PDF Downloads 333
14493 The Effect of the Base Computer Method on Repetitive Behaviors and Communication Skills
Authors: Hoorieh Darvishi, Rezaei
Abstract:
Introduction: This study investigates the efficacy of computer-based interventions for children with Autism Spectrum Disorder (ASD), specifically targeting communication deficits and repetitive behaviors. The research evaluates novel software applications designed to enhance narrative capabilities and sensory integration through structured, progressive intervention protocols. Method: The study evaluated two intervention software programs designed for children with autism, focusing on narrative speech and sensory integration. Twelve children aged 5-11 participated in the two-month intervention, attending three 45-minute weekly sessions, with pre- and post-tests measuring speech, communication, and behavioral outcomes. The narrative speech software incorporated 14 stories using the Cohen model. It progressively reduced software assistance as children improved their storytelling abilities, ultimately enabling independent narration. The process involved story comprehension questions and guided story completion exercises. The sensory integration software featured approximately 100 exercises progressing from basic classification to complex cognitive tasks. The program included attention exercises, auditory memory training (advancing from single to four-syllable words), problem-solving, decision-making, reasoning, working memory, and emotion recognition activities. Each module was accompanied by frequency- and pitch-adjusted music that the child enjoys, to enhance learning through multiple sensory channels (visual, auditory, and tactile). Conclusion: The results indicated that the use of these software programs significantly improved communication and narrative speech scores in children, while also reducing scores related to repetitive behaviors. Findings: These findings highlight the positive impact of computer-based interventions on enhancing communication skills and reducing repetitive behaviors in children with autism.
Keywords: autism, communication skills, repetitive behaviors, sensory integration
Procedia PDF Downloads 9
14492 Complications and Outcomes of Cochlear Implantation in Children Younger than 12 Months: A Multicenter Study
Authors: Alimohamad Asghari, Ahmad Daneshi, Mohammad Farhadi, Arash Bayat, Mohammad Ajalloueyan, Marjan Mirsalehi, Mohsen Rajati, Seyed Basir Hashemi, Nader Saki, Ali Omidvari
Abstract:
Evidence suggests that Cochlear Implantation (CI) is a beneficial approach for improving auditory and speech skills in children with severe to profound hearing loss. However, it remains controversial whether implantation in children <12 months is safe and effective compared to older children. The present study aimed to determine whether children's age affects surgical complications and auditory and speech development. The current multicenter study enrolled 86 children who underwent CI surgery at <12 months of age (group A) and 362 children who underwent implantation between 12 and 24 months of age (group B). The Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scores were determined pre-implantation and "one year" and "two years" post-implantation. Four complications (overall rate: 4.65%; three minor) occurred in group A and 12 complications (overall rate: 4.41%; nine minor) occurred in group B. We found no statistically significant difference in the complication rates between the groups (p>0.05). The mean SIR and CAP scores improved over time following CI activation in both groups. However, we did not find significant differences in CAP and SIR scores between the groups across different time points. Cochlear implantation is a safe and efficient procedure in children younger than 12 months, providing substantial auditory and speech benefits comparable to children undergoing implantation at 12 to 24 months of age. Furthermore, surgical complications in younger children are similar to those of children undergoing CI at an older age.
Keywords: cochlear implant, infant, complications, outcome
Procedia PDF Downloads 108
14491 The Investigation of Women Civil Engineers’ Identity Development through the Lens of Recognition Theory
Authors: Hasan Sungur, Evrim Baran, Benjamin Ahn, Aliye Karabulut Ilgu, Chris Rehmann, Cassandra Rutherford
Abstract:
Engineering identity contributes to the professional and educational persistence of women engineers. A crucial factor contributing to the development of engineering identity is recognition. Those without adequate recognition often do not succeed in positively building their identities. This research draws on Honneth’s recognition theory to identify factors impacting women civil engineers’ feelings of recognition as civil engineers. A survey was composed and distributed to 330 female alumni who graduated from the Department of Civil, Construction, and Environmental Engineering at Iowa State University in the last ten years. The survey items include demographics, perceptions of the identity of civil engineering, and factors that influence the recognition of civil engineering identities, such as views of society and family. Descriptive analysis of the survey responses revealed that perceptions of civil engineering varied widely. Participants’ definitions of civil engineering included the terms construction, design, and infrastructure. Almost half of the participants reported that the major reason for studying civil engineering was their interest in the subject matter, and most reported that they were proud to be civil engineers. Many study participants reported that their parents see them as civil engineers. Treatment by institutions and the workplace was also considered to have a significant impact on the recognition of women civil engineers. Almost half of the participants reported that they felt isolated or ignored at work because of their gender. This research emphasizes the importance of recognition for the development of the civil engineering identity of women.
Keywords: civil engineering, gender, identity, recognition
Procedia PDF Downloads 255
14490 Cultural-Creative Design with Language Figures of Speech
Authors: Wei Chen Chang, Ming Yu Hsiao
Abstract:
A commodity functions as a kind of sign; how designers construct meaning, how users interpret it in use, and how a design effectively conveys its message have always been important issues in design education. Cultural-creative design refers to signifying cultural heritage through product design. In terms of Peirce’s semiotic triangle (signifying elements-object-interpretant), the signifying elements are the outcomes of design, the object is the cultural heritage, and the interpretant is the positioning and description of the product design. How to elaborate the positioning, design, and development of a product is a narrative issue of the interpretant, and how to shape the signifying elements of a product by modifying and adapting styles is a rhetorical matter. This study investigated the rhetoric of the elements signifying products in order to develop a rhetoric model with cultural style. Figures of speech are a rhetorical device in narrative. By adapting figures of speech to the interpretant, this study developed the rhetorical context of cultural style by narrative means. In this two-phase study, phase I defines the figures of speech and phase II analyzes existing cultural-creative products in terms of figures of speech to develop a rhetoric-of-style model. We expect it to serve as a reference for the future development of cultural-creative design.
Keywords: cultural-creative design, cultural-creative products, figures of speech, Peirce’s semiotic triangle, rhetoric of style model
Procedia PDF Downloads 372
14489 Application of Signature Verification Models for Document Recognition
Authors: Boris M. Fedorov, Liudmila P. Goncharenko, Sergey A. Sybachin, Natalia A. Mamedova, Ekaterina V. Makarenkova, Saule Rakhimova
Abstract:
In modern economic conditions, the question of whether a signature on digital documents can be correctly recognized, in order to verify an expression of will or confirm a certain operation, is highly relevant. Processing is further complicated by the dynamic variability of each individual's signature, as well as by the way the information must be handled, since a signature constitutes biometric data. The article discusses the use of artificial intelligence models to improve the quality of signature confirmation in document recognition. An analysis of several possible options for using such a model is carried out. The results of the study show that it is possible to correctly determine the authenticity of a signature even on small samples.
Keywords: signature recognition, biometric data, artificial intelligence, neural networks
Procedia PDF Downloads 148
14488 Audio-Visual Recognition Based on Effective Model and Distillation
Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin
Abstract:
In recent years, audio-visual recognition has shown great potential in strong-noise environments. Existing audio-visual recognition methods have explored ResNet backbones and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting its application in engineering. On the other hand, feature merging also introduces interference in high-noise environments. In order to solve these problems, we propose an effective framework with bidirectional distillation. First, in consideration of its good feature-extraction performance, we chose the lightweight EfficientNet model as our spatial feature extractor. Secondly, self-distillation was applied to learn more information from the raw data. Finally, we propose a bidirectional distillation in decision-level fusion. Our experimental results are based on a multi-modal dataset from 24 volunteers. The lipreading accuracy of our framework was increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in a high-noise environment compared with an audio-only recognition system.
Keywords: lipreading, audio-visual, EfficientNet, distillation
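A minimal sketch of what decision-level bidirectional distillation can look like: each branch's softened logits teach the other. The temperature, weighting, and loss form here are assumptions, not the authors' published formulation.
```python
# Hedged sketch of a bidirectional distillation loss between two branches.
import torch
import torch.nn.functional as F

def bidirectional_distillation_loss(audio_logits, visual_logits, labels, T=2.0, alpha=0.5):
    # Supervised cross-entropy for both branches.
    ce = F.cross_entropy(audio_logits, labels) + F.cross_entropy(visual_logits, labels)
    # KL in both directions; the "teacher" side of each term is detached.
    kl_av = F.kl_div(F.log_softmax(audio_logits / T, dim=1),
                     F.softmax(visual_logits.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    kl_va = F.kl_div(F.log_softmax(visual_logits / T, dim=1),
                     F.softmax(audio_logits.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    return ce + alpha * (kl_av + kl_va)

audio = torch.randn(8, 10)    # dummy logits: 8 samples, 10 classes
visual = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(bidirectional_distillation_loss(audio, visual, labels))
```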
Procedia PDF Downloads 134
14487 Naïve Bayes: A Classical Approach for the Epileptic Seizures Recognition
Authors: Bhaveek Maini, Sanjay Dhanka, Surita Maini
Abstract:
Electroencephalography (EEG) is used worldwide to classify epileptic seizures. Identifying an epileptic seizure through manual EEG analysis is a crucial and demanding task for the neurologist, as it takes a great deal of effort and time. The risk of human error is always high in EEG, as acquiring the signals requires manual intervention. Disease diagnosis using machine learning (ML) has been continuously explored since its inception, and where large numbers of datasets have to be analyzed, ML acts as a boon for doctors. In this research paper, the authors propose two different ML models, i.e., logistic regression (LR) and Naïve Bayes (NB), to predict epileptic seizures based on general parameters. These two techniques are applied to the epileptic seizure recognition dataset available on the UCI ML repository. The algorithms are implemented with an 80:20 train-test split (80% for training and 20% for testing), and the performance of the models was validated by 10-fold cross-validation. The proposed study achieves accuracies of 81.87% and 95.49% for LR and NB, respectively.
Keywords: epileptic seizure recognition, logistic regression, Naïve Bayes, machine learning
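A minimal sketch of the described pipeline (80:20 split, 10-fold cross-validation, LR vs. NB) with scikit-learn; the random stand-in matrix below only mimics the shape of the UCI data and should be replaced by the actual features and labels.
```python
# Hedged sketch: LR and NB compared under an 80:20 split and 10-fold CV.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 178))        # placeholder EEG feature rows
y = rng.integers(0, 2, size=500)       # placeholder seizure / non-seizure labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("LR", LogisticRegression(max_iter=1000)), ("NB", GaussianNB())]:
    model.fit(X_tr, y_tr)
    cv = cross_val_score(model, X_tr, y_tr, cv=10)   # 10-fold cross-validation
    print(name, "test acc:", model.score(X_te, y_te), "CV mean:", cv.mean())
```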
Procedia PDF Downloads 61
14486 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract:
We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and detail and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as this sharpening technique can improve the performance of a CNN model, even when the CNN model is relatively simple.
Keywords: facial expression recognition, image preprocessing, deep learning, CNN
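For reference, a minimal sketch of the unsharp-mask preprocessing step: subtract a Gaussian-blurred copy from the image to boost edges before feeding it to the CNN. The sigma and amount are assumptions, not the paper's values.
```python
# Hedged sketch of an unsharp mask with OpenCV.
import cv2
import numpy as np

def unsharp_mask(img, sigma=1.0, amount=0.5):
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    # sharpened = (1 + amount) * img - amount * blurred
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

img = np.random.randint(0, 256, (48, 48), dtype=np.uint8)  # FER-2013-sized dummy
sharp = unsharp_mask(img)
```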
Procedia PDF Downloads 143
14485 Automatic Music Score Recognition System Using Digital Image Processing
Authors: Yuan-Hsiang Chang, Zhong-Xian Peng, Li-Der Jeng
Abstract:
Music has always been an integral part of humans’ daily lives. But for most people, reading a musical score and turning it into melody is not easy. This study aims to develop an automatic music score recognition system using digital image processing, which can be used to read and analyze musical score images automatically. The technical approaches included: (1) staff region segmentation; (2) image preprocessing; (3) note recognition; and (4) accidental and rest recognition. Digital image processing techniques (e.g., horizontal/vertical projections, connected component labeling, morphological processing, template matching, etc.) were applied according to the musical notes, accidentals, and rests in staff notation. Preliminary results showed that our system could achieve detection and recognition rates of 96.3% and 91.7%, respectively. In conclusion, we presented an effective automated musical score recognition system that could be integrated with a media player to play music/songs given input images of a musical score. Ultimately, this system could also be incorporated into applications for mobile devices as a learning tool, such that users could learn to play music/songs.
Keywords: connected component labeling, image processing, morphological processing, optical musical recognition
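As a concrete example of the horizontal-projection step used for staff region segmentation: rows whose black-pixel counts spike are staff-line candidates. The threshold fraction below is an assumption.
```python
# Hedged sketch of staff-line detection by horizontal projection.
import numpy as np

def staff_line_rows(binary_img, frac=0.8):
    # binary_img: 2-D array with 1 for black (ink) pixels, 0 for background.
    row_sums = binary_img.sum(axis=1)                 # horizontal projection
    return np.where(row_sums > frac * binary_img.shape[1])[0]

page = np.zeros((100, 200), dtype=np.uint8)
page[[10, 20, 30, 40, 50], :] = 1                     # five synthetic staff lines
print(staff_line_rows(page))                          # -> [10 20 30 40 50]
```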
Procedia PDF Downloads 419
14484 An Exploratory Survey Questionnaire to Understand What Emotions Are Important and Difficult to Communicate for People with Dysarthria and Their Methodology of Communicating
Authors: Lubna Alhinti, Heidi Christensen, Stuart Cunningham
Abstract:
People with speech disorders may rely on augmentative and alternative communication (AAC) technologies to help them communicate. However, the limitations of current AAC technologies act as barriers to their optimal use in daily communication settings. The ability to communicate effectively relies on a number of factors that are not limited to the intelligibility of the spoken words. In fact, non-verbal cues play a critical role in the correct comprehension of messages, and having to rely on verbal communication only, as is the case with current AAC technology, may contribute to problems in communication. This is especially true for people’s ability to express their feelings and emotions, which are communicated to a large extent through non-verbal cues. This paper focuses on understanding more about the non-verbal communication ability of people with dysarthria, the overarching aim of this research being to improve AAC technology by allowing people with dysarthria to better communicate emotions. Preliminary survey results are presented that give an understanding of how people with dysarthria convey emotions, which emotions are important for them to get across, which emotions are difficult for them to convey, and whether there is a difference in communicating emotions when speaking to familiar versus unfamiliar people.
Keywords: alternative and augmentative communication technology, dysarthria, speech emotion recognition, VIVOCA
Procedia PDF Downloads 164
14483 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
Keywords: recognition, CNN, Yi character, divergence
Procedia PDF Downloads 164
14482 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism
Authors: Gobinathan Devathasan, Kezia Devathasan
Abstract:
Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism, there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model which may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as our life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that we are capable of three syllable-consonants and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca’s two areas, and the supplementary motor cortex. Using the Argand diagram and Riemann projection, we theorize that the Euclidean three-dimensional synaptic neuronal circuits of speech are quantized to coherent waves, and then decoherence takes place at area 6 (spherical representation). In this quantum state complex, 3-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech is elevated to quantum interaction, unlike in cetaceans and birds, to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could now be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (8); this further explains the second primary feature in autism: lack of eye contact. The third feature, repetitive finger gestures, whose control areas are located adjacent to the speech/motor areas, represents actual attempts to communicate with the autistic child, akin to sign language for the deaf.
Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech
Procedia PDF Downloads 195
14481 Characterising the Processes Underlying Emotion Recognition Deficits in Adolescents with Conduct Disorder
Authors: Nayra Martin-Key, Erich Graf, Wendy Adams, Graeme Fairchild
Abstract:
Children and adolescents with Conduct Disorder (CD) have been shown to demonstrate impairments in emotion recognition, but it is currently unclear whether this deficit is related to specific emotions or whether it represents a global deficit in emotion recognition. An emotion recognition task with concurrent eye-tracking was employed to further explore this relationship in a sample of male and female adolescents with CD. Participants made emotion categorization judgements for presented dynamic and morphed static facial expressions. The results demonstrated that males with CD, and to a lesser extent, females with CD, displayed impaired facial expression recognition in general, whereas callous-unemotional (CU) traits were linked to specific problems in sadness recognition in females with CD. A region-of-interest analysis of the eye-tracking data indicated that males with CD exhibited reduced fixation times for the eye-region of the face compared to typically-developing (TD) females, but not TD males. Females with CD did not show reduced fixation to the eye-region of the face relative to TD females. In addition, CU traits did not influence CD subjects’ attention to the eye-region of the face. These findings suggest that the emotion recognition deficits found in CD males, the worst performing group in the behavioural tasks, are partly driven by reduced attention to the eyes.
Keywords: attention, callous-unemotional traits, conduct disorder, emotion recognition, eye-region, eye-tracking, sex differences
Procedia PDF Downloads 321
14480 A Motion Dictionary for Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network
Authors: Marcio Leal, Marta Villamil
Abstract:
Computational recognition of sign languages aims to allow greater social and digital inclusion of deaf people through computer interpretation of their language. This article presents a model for recognizing two of the global parameters of sign languages: hand configurations and hand movements. Hand motion is captured through infrared technology, and the hand joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the sign recognition method itself, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this technology can be used to translate any sign from any sign dictionary, Brazilian Sign Language (Libras) was used as the case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%.
Keywords: artificial neural network, computer vision, dynamic time warping, infrared, sign language recognition
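For illustration, a minimal Dynamic Time Warping sketch between two hand-motion trajectories, each a sequence of 3-D joint positions; the Euclidean frame cost and the dummy trajectories are assumptions for the example only.
```python
# Hedged sketch of DTW distance between two variable-speed trajectories.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame Euclidean cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

sign = np.cumsum(np.random.randn(40, 3), axis=0)   # captured 3-D trajectory
template = sign[::2]                                # same sign, performed faster
print(dtw_distance(sign, template))                 # stays small despite speed change
```
DTW's appeal here is exactly this invariance: two performances of the same sign at different speeds still align to a low distance.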
Procedia PDF Downloads 217
14479 Offline Signature Verification in Punjabi Based On SURF Features and Critical Point Matching Using HMM
Authors: Rajpal Kaur, Pooja Choudhary
Abstract:
Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an imposter. Signature recognition systems can be categorized as offline (static) and online (dynamic). This paper presents a SURF-feature-based recognition system for offline signatures that is trained with low-resolution scanned signature images. The signature of a person is an important biometric attribute which can be used to authenticate human identity. A signature can be handled as an image and recognized using computer vision and HMM techniques. With modern computers, there is a need to develop fast algorithms for signature recognition. Multiple techniques have been defined for signature recognition, leaving considerable scope for research. In this paper, off-line (static) signature recognition and verification using SURF features with an HMM is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on parameters extracted from the signature using various image processing techniques. The off-line signature verification and recognition system is implemented on the MATLAB platform. The work has been tested and found suitable for its purpose. The proposed method performs better than other recently proposed methods.
Keywords: offline signature verification, offline signature recognition, signatures, SURF features, HMM
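Although the paper's implementation is in MATLAB, the SURF feature-extraction step can be sketched in Python for illustration. Note that SURF is patented and ships with opencv-contrib-python rather than core OpenCV; the file path and Hessian threshold below are assumptions.
```python
# Hedged sketch of SURF keypoint extraction from a scanned signature.
import cv2

img = cv2.imread("signature.png", cv2.IMREAD_GRAYSCALE)   # hypothetical scan
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)
# Each descriptor is a 64-dimensional vector; per-signature sequences of these
# can then be fed to an HMM for verification, as the paper describes.
print(len(keypoints), "keypoints,", descriptors.shape, "descriptor matrix")
```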
Procedia PDF Downloads 384
14478 Employability Potential of Differently Abled in the Indian Apparel Industry
Authors: Gunjita Shami, Noopur Anand
Abstract:
A 50-day pilot run was undertaken to test the employability potential of people with visual and hearing-and-speech impairments. Various roles in an apparel manufacturing set-up, such as spreading fabric for cutting, folding, sealing and labeling cartons, pasting size barcode stickers on packed garments, and removing tickets from garments in the finishing stage, were studied. Performance was quantified from timesheets for all the days, and the improvement per day was computed. Final-day output was compared with that of able-bodied workers. For example, in the carton-making activity, on day one a visually impaired worker was making one box every three minutes, which improved to four boxes per minute on day 28, an improvement of 91.6%, or about 3.6% per day, which was comparable to the able-bodied seasoned workers, who were making 5 boxes per minute. The performance of persons with hearing and speech impairment in the finishing department was 10% higher than that of able-bodied seasoned workers in the same process. Overall, across all the activities, the differently abled showed a day-to-day improvement of 65%, while the able-bodied displayed an improvement of 52%. On the first day, the performance of able-bodied workers was 75% better than that of the differently abled, while on the 50th day it was only 20% better. The performance of persons with disabilities was therefore found to be comparable to that of able-bodied persons. The results, though on a small scale, showed great promise for the employment of persons with disability in the apparel industry. Armed with these promising results, a full-scale study has been undertaken to identify the roles suitable for particular kinds of disability in apparel production, the work-aids required to assist the differently abled in improving performance, and the measures to be undertaken to make the production floor 'friendlier' for them. The results discussed in this paper open doors for integrating the differently abled into a world projected and assumed to be only for the able-bodied.
Keywords: apparel sector, differently abled, employability, performance, work-aid
Procedia PDF Downloads 149
14477 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot-matrix image as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration-date images, we developed a TrueType Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3287 images, with 658 synthetic images for testing, representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in the Arabic dot-matrix format, using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to other text recognition tasks, such as text classification and sentiment analysis.
Keywords: computer vision, pattern recognition, optical character recognition, deep learning
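A minimal sketch of the synthetic-data step: rendering expiry-date strings from a custom dot-matrix TTF with Pillow. The font filename, canvas size, and placement are hypothetical assumptions; only the yyyy/mm/dd format comes from the text.
```python
# Hedged sketch of synthetic expiry-date image generation from a TTF.
from PIL import Image, ImageDraw, ImageFont

def render_date(date_str, font_path="arabic_dot_matrix.ttf", size=32):
    font = ImageFont.truetype(font_path, size)      # hypothetical custom font
    img = Image.new("L", (220, 48), color=255)      # white grayscale canvas
    ImageDraw.Draw(img).text((10, 8), date_str, font=font, fill=0)
    return img

img = render_date("2025/07/14")                     # yyyy/mm/dd, as in the paper
img.save("sample_expiry.png")
```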
Procedia PDF Downloads 93
14476 Analysis of Linguistic Disfluencies in Bilingual Children’s Discourse
Authors: Sheena Christabel Pravin, M. Palanivelan
Abstract:
Speech disfluencies are common in spontaneous speech. The primary purpose of this study was to distinguish linguistic disfluencies from stuttering disfluencies in bilingual Tamil–English (TE) speaking children. The secondary purpose was to determine whether their disfluencies are mediated by native language dominance and/or by an early onset of developmental stuttering in childhood. A detailed study was carried out to identify the prosodic and acoustic features that uniquely represent the disfluent regions of speech. This paper focuses on statistical modeling of repetitions, prolongations, pauses, and interjections in a speech corpus encompassing spontaneous bilingual utterances from school-going children in English and Tamil. Two classifiers, Hidden Markov Models (HMM) and the Multilayer Perceptron (MLP), a class of feed-forward artificial neural network, were compared in the classification of disfluencies. The results of the classifiers document the patterns of disfluency in spontaneous speech samples of school-aged children so as to distinguish between Children Who Stutter (CWS) and Children with Language Impairment (CLI). The ability of the models to classify the disfluencies was measured in terms of F-measure, recall, and precision.
Keywords: bilingual, children who stutter, children with language impairment, hidden Markov models, multilayer perceptron, linguistic disfluencies, stuttering disfluencies
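A minimal sketch of the MLP side of this comparison, scored with precision, recall, and F-measure via scikit-learn; the feature dimensionality, hidden-layer size, and random stand-in data are assumptions.
```python
# Hedged sketch: MLP disfluency classification with precision/recall/F-measure.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 13))       # placeholder prosodic/acoustic features
y = rng.integers(0, 2, size=300)     # placeholder labels: 0 = CWS, 1 = CLI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
p, r, f, _ = precision_recall_fscore_support(y_te, mlp.predict(X_te), average="binary")
print(f"precision={p:.2f} recall={r:.2f} F-measure={f:.2f}")
```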
Procedia PDF Downloads 217
14475 Recognition of Grocery Products in Images Captured by Cellular Phones
Authors: Farshideh Einsele, Hassan Foroosh
Abstract:
In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately defined using well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use all of a character's black pixels as our feature vector. Classification is performed with a minimum-distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
Keywords: camera-based OCR, feature extraction, document image processing, grocery products
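As a worked example of the minimum-distance idea over raw character pixels: each class is summarized by its mean binary image, and a test character is assigned to the nearest class mean. The image size and random placeholder data are assumptions; the maximum-likelihood weighting from the paper is omitted here.
```python
# Hedged sketch of a minimum-distance (nearest class mean) classifier.
import numpy as np

def train_means(X, y):
    # X: (n_samples, n_pixels) flattened binary character images.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, means):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

rng = np.random.default_rng(2)
X = (rng.random((200, 32 * 32)) > 0.5).astype(float)   # placeholder characters
y = rng.integers(0, 10, size=200)                      # placeholder class ids
means = train_means(X, y)
print("predicted class:", classify(X[0], means))
```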
Procedia PDF Downloads 406
14474 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conducted experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective at human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
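To make the Dirichlet process mixture idea concrete, here is a minimal sketch using scikit-learn's variational BayesianGaussianMixture, which prunes unused components automatically. The paper's full model additionally couples this mixture to an HMM over time, which this sketch omits; the truncation level, feature dimensionality, and data are assumptions.
```python
# Hedged sketch of a DP mixture fit with variational inference.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
features = rng.normal(size=(400, 6))   # placeholder spatio-temporal descriptors

dpm = BayesianGaussianMixture(
    n_components=20,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
).fit(features)
# Components with non-negligible weight are the ones the data actually supports.
print("active components:", np.sum(dpm.weights_ > 0.01))
```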
Procedia PDF Downloads 353
14473 A High Performance Piano Note Recognition Scheme via Precise Onset Detection and Segmented Short-Time Fourier Transform
Authors: Sonali Banrjee, Swarup Kumar Mitra, Aritra Acharyya
Abstract:
A piano note recognition method is proposed by the authors in this paper. The authors use a comprehensive method for onset detection of each note present in a piano piece, followed by a segmented short-time Fourier transform (STFT) for the identification of the piano notes. The performance of the proposed method has been evaluated in different harsh noisy environments by adding different levels of additive white Gaussian noise (AWGN), with different signal-to-noise ratios (SNR), to the original signal, and evaluating the note detection error rate (NDER) of different piano pieces consisting of different numbers of notes at different SNR levels. The NDER is found to remain within 15% for all piano pieces under consideration when the SNR is kept above 8 dB.
Keywords: AWGN, onset detection, piano note, STFT
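A minimal sketch of this evaluation setup: add white Gaussian noise at a target SNR, then take a segmented STFT of the noisy signal. The sine tone stands in for a piano note, and the STFT segment length is an assumption; only the 8 dB operating point comes from the abstract.
```python
# Hedged sketch: AWGN at a target SNR followed by a segmented STFT.
import numpy as np
from scipy.signal import stft

def add_awgn(x, snr_db):
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + np.sqrt(noise_power) * np.random.randn(len(x))

fs = 44100
t = np.arange(fs) / fs
note = np.sin(2 * np.pi * 440.0 * t)        # stand-in for one piano note (A4)
noisy = add_awgn(note, snr_db=8)            # the paper's 8 dB operating point
f, seg_times, Z = stft(noisy, fs=fs, nperseg=4096)
peak_bin = np.argmax(np.abs(Z).mean(axis=1))
print("dominant frequency:", f[peak_bin], "Hz")
```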
Procedia PDF Downloads 160
14472 An Erudite Technique for Face Detection and Recognition Using Curvature Analysis
Authors: S. Jagadeesh Kumar
Abstract:
Face detection and recognition is an important technology for image database management, video surveillance, and human-computer interaction (HCI). Face recognition is a rapidly developing method that has been extensively deployed in forensics, for tasks such as criminal identification, secure access, and custodial security. This paper proposes a technique using curvature analysis (CA) that has a lower incidence of false positives, operates in different lighting environments, and removes the artifacts introduced during image acquisition by means of the ring correction in polar coordinates (RCP) method. The technique applies mean and median filtering to remove the artifacts, but operates in polar coordinates during image acquisition. Experimental results for face detection and recognition confirm decent performance even under diagonal orientation and pose variation.
Keywords: curvature analysis, ring correction in polar coordinate method, face detection, face recognition, human computer interaction
Procedia PDF Downloads 286
14471 Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation
Authors: Ksenia Meshkova
Abstract:
With an increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining metadata. This process means not just recognition of an object or its movement, but recognition of the entire scene, as opposed to separate frames, with timeline segmentation as the final result. Labeling datasets is time-consuming; besides, attributing characteristics to particular scenes is clearly difficult due to their nature. In this article, we consider the application of autoencoders to unsupervised scene recognition and clustering based on interpretable features. Further, we focus on the particular types of autoencoders relevant to our study. We take a look at the specificity of deep learning in relation to information theory and rate-distortion theory, and describe solutions addressing the poor interpretability of deep learning in media content processing. In conclusion, we present the results of a custom framework, based on autoencoders, capable of the scene recognition studied above, with highlight generation resulting from this recognition. We do not describe in detail the mathematics of how neural networks work, but clarify the necessary concepts and pay attention to important nuances.
Keywords: neural networks, computer vision, representation learning, autoencoders
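For illustration, a minimal convolutional autoencoder whose bottleneck vector can serve as a scene embedding for later clustering; the input size, layer widths, and embedding dimension are assumptions, not the framework's actual architecture.
```python
# Hedged sketch of a convolutional autoencoder for scene embeddings.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
code = layers.Flatten()(x)
code = layers.Dense(64, name="scene_embedding")(code)   # bottleneck features

x = layers.Dense(16 * 16 * 32, activation="relu")(code)
x = layers.Reshape((16, 16, 32))(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")       # reconstruction objective
encoder = Model(inp, code)   # embeddings feed a clustering step (e.g., k-means)
```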
Procedia PDF Downloads 127
14470 Emotional and Physiological Reaction While Listening to the Speech of Adults Who Stutter
Authors: Xharavina V., Gallopeni F., Ahmeti K.
Abstract:
Stuttered speech is filled with intermittent sound prolongations and/or rapid part-word repetitions. Oftentimes, these aberrant acoustic behaviors are associated with intermittent physical tension and struggle behaviors such as head jerks, arm jerks, finger tapping, excessive eye-blinks, etc. Additionally, the jarring nature of the acoustic and physical manifestations that often accompany moderate-severe stuttering may induce negative emotional responses in listeners, which alters communication between the person who stutters and their listeners. However, research on the influence of negative emotions on communication and on physical reactions is limited. It is therefore necessary to compare the psycho-physiological responses of fluent adults while listening to the speech of adults who speak fluently and adults who stutter. This experimental study involved a total of 104 participants (average age 20 years, SD=2.1), divided into 3 groups. All participants self-reported no impairments in speech, language, or hearing. To explore the responses of the participants, two recorded speech samples were used: the voice of a fluent speaker and the voice of a speaker who stutters. Heartbeat and pulse were measured by the digital blood pressure monitor called 'Tensoval' as physiological responses to the fluent and stuttering samples. Meanwhile, the emotional responses of participants were measured by a self-report questionnaire (Steenbarger, 2001). Results showed an increase in heartbeat during the stuttering speech compared with the fluent sample (p < 0.5). The listeners also self-reported themselves as more alive, unhappy, nervous, repulsive, sad, tense, distracted, and upset when listening to the stuttered words versus the words of the fluent adult (for which they reported experiencing positive emotions). These data support the notion that stuttered speech can produce a psycho-physical reaction in listeners. Speech pathologists should be aware that listeners show intolerable physiological reactions to stuttering that remain visible over time.
Keywords: emotional, physiological, stuttering, fluent speech
Procedia PDF Downloads 142
14469 Speech Acts of Selected Classroom Encounters: Analyzing the Speech Acts of a Career Technology Lesson
Authors: Michael Amankwaa Adu
Abstract:
Effective communication in the classroom plays a vital role in ensuring successful teaching and learning. In particular, the types of language and speech acts teachers use shape classroom interactions and influence student engagement. This study aims to analyze the speech acts employed by a Career Technology teacher in a junior high school. While much research has focused on speech acts in language classrooms, less attention has been given to how these acts operate in non-language subject areas like technical education. The study explores how different types of speech acts—directives, assertives, expressives, and commissives—are used during three classroom encounters: lesson introduction, content delivery, and classroom management. This research seeks to fill the gap in understanding how teachers of non-language subjects use speech acts to manage classroom dynamics and facilitate learning. The study employs a mixed-methods design, combining qualitative and quantitative approaches. Data was collected through direct classroom observation and audio recordings of a one-hour Career Technology lesson. The transcriptions of the lesson were analyzed using John Searle’s taxonomy of speech acts, classifying the teacher’s utterances into directives, assertives, expressives, and commissives. Results show that directives were the most frequently used speech act, accounting for 59.3% of the teacher's utterances. These speech acts were essential in guiding student behavior, giving instructions, and maintaining classroom control. Assertives made up 20.4% of the speech acts, primarily used for stating facts and reinforcing content. Expressives, at 14.2%, expressed emotions such as approval or frustration, helping to manage the emotional atmosphere of the classroom. Commissives were the least used, representing 6.2% of the speech acts, often used to set expectations or outline future actions. No declarations were observed during the lesson. The findings of this study reveal the critical role that speech acts play in managing classroom behavior and delivering content in technical subjects. Directives were crucial for ensuring students followed instructions and completed tasks, while assertives helped in reinforcing lesson objectives. Expressives contributed to motivating or disciplining students, and commissives, though less frequent, helped set clear expectations for students’ future actions. The absence of declarations suggests that the teacher prioritized guiding students over making formal pronouncements. These insights can inform teaching strategies across various subject areas, demonstrating that a diverse use of speech acts can create a balanced and interactive learning environment. This study contributes to the growing field of pragmatics in education and offers practical recommendations for educators, particularly in non-language classrooms, on how to utilize speech acts to enhance both classroom management and student engagement.
Keywords: classroom interaction, pragmatics, speech acts, teacher communication, career technology
Procedia PDF Downloads 20
14468 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can also run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties. The EEG signals were analyzed using the K-Nearest Neighbors (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, each EEG signal is received in real time and translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT), so that the frequency bands in each EEG signal can be observed. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification, recognition of specific classes, and emotion prediction are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. At the edge of AI, EEG-based emotion identification can be employed in applications that could rapidly expand both research and industrial use.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
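A minimal sketch of the feature pipeline described above: FFT band powers plus simple statistics per EEG epoch, followed by a KNN prediction. The band edges are the conventional ones; the sampling rate, epoch length, k, and random stand-in data are assumptions.
```python
# Hedged sketch: FFT band-power features per epoch, then a KNN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def epoch_features(x, fs=250):
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band_powers = [power[(freqs >= lo) & (freqs < hi)].mean()
                   for lo, hi in BANDS.values()]
    return np.array(band_powers + [x.mean(), x.std()])   # band power + stats

rng = np.random.default_rng(4)
epochs = rng.normal(size=(120, 500))          # 120 two-second epochs at 250 Hz
labels = rng.integers(0, 2, size=120)         # placeholder arousal labels
X = np.array([epoch_features(e) for e in epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print("predicted state:", knn.predict(X[:1]))
```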
Procedia PDF Downloads 105