Search results for: voice response recognition
7144 A Weighted Approach to Unconstrained Iris Recognition
Authors: Yao-Hong Tsai
Abstract:
This paper presents a weighted approach to unconstrained iris recognition. Commercial systems are usually characterized by strong acquisition constraints that depend on the subject’s cooperation, which is not always achievable in real-life scenarios. Researchers have therefore focused on relaxing these constraints while maintaining system performance through new techniques. Under large environmental variation, the proposed iris recognition system introduces two main improvements. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to improve the accuracy of the iris features; detection of the iris image is based on the Adaboost algorithm. Secondly, a weighting scheme is designed using Gaussian functions of the distance to the center of the iris. A local binary pattern (LBP) histogram is then applied, with these weights, to texture classification. Experiments showed that the proposed system provides users a more flexible and feasible way to interact with a verification system through iris recognition.
Keywords: authentication, iris recognition, adaboost, local binary pattern
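The Gaussian weighting of LBP histogram contributions described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the helper names (`lbp_codes`, `gaussian_weighted_lbp_hist`) and the choice of a basic 8-neighbour LBP are assumptions.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre pixel."""
    c = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order, each contributing one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def gaussian_weighted_lbp_hist(img, center, sigma):
    """Histogram of LBP codes, each pixel weighted by a Gaussian of its
    distance to the (assumed) iris centre, then normalized."""
    codes = lbp_codes(img)
    ys, xs = np.mgrid[1:img.shape[0] - 1, 1:img.shape[1] - 1]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    hist = np.zeros(256)
    np.add.at(hist, codes.ravel(), w.ravel())
    return hist / hist.sum()
```

Pixels near the assumed iris center thus dominate the texture descriptor, while pixels near the eyelids and boundary contribute less.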
Procedia PDF Downloads 225
7143 Reviewing Image Recognition and Anomaly Detection Methods Utilizing GANs
Authors: Agastya Pratap Singh
Abstract:
This review paper examines the emerging applications of generative adversarial networks (GANs) in the fields of image recognition and anomaly detection. With the rapid growth of digital image data, the need for efficient and accurate methodologies to identify and classify images has become increasingly critical. GANs, known for their ability to generate realistic data, have gained significant attention for their potential to enhance traditional image recognition systems and improve anomaly detection performance. The paper systematically analyzes various GAN architectures and their modifications tailored for image recognition tasks, highlighting their strengths and limitations. Additionally, it delves into the effectiveness of GANs in detecting anomalies in diverse datasets, including medical imaging, industrial inspection, and surveillance. The review also discusses the challenges faced in training GANs, such as mode collapse and stability issues, and presents recent advancements aimed at overcoming these obstacles.
Keywords: generative adversarial networks, image recognition, anomaly detection, synthetic data generation, deep learning, computer vision, unsupervised learning, pattern recognition, model evaluation, machine learning applications
Procedia PDF Downloads 32
7142 Effects of Oxytocin on Neural Response to Facial Emotion Recognition in Schizophrenia
Authors: Avyarthana Dey, Naren P. Rao, Arpitha Jacob, Chaitra V. Hiremath, Shivarama Varambally, Ganesan Venkatasubramanian, Rose Dawn Bharath, Bangalore N. Gangadhar
Abstract:
Objective: Impaired facial emotion recognition is widely reported in schizophrenia. The neuropeptide oxytocin (OXT) is known to modulate brain regions involved in facial emotion recognition, notably the amygdala, in healthy volunteers. However, its effect on the facial emotion recognition deficits seen in schizophrenia is not well explored. In this study, we examined the effect of intranasal OXT on the processing of facial emotions and its neural correlates in patients with schizophrenia. Method: 12 male patients (age = 31.08±7.61 years, education = 14.50±2.20 years) participated in this single-blind, counterbalanced functional magnetic resonance imaging (fMRI) study. All participants underwent three fMRI scans: one at baseline, and one each after a single dose of 24 IU intranasal OXT and intranasal placebo. The order of administration of OXT and placebo was counterbalanced, and subjects were blind to the drug administered. Participants performed a facial emotion recognition task presented in a block design with six alternating blocks of faces and shapes. The faces depicted happy, angry or fearful emotions. The images were preprocessed and analyzed using SPM 12. First-level contrasts comparing recognition of emotions and shapes were modelled at the individual subject level. A group-level analysis was performed using the contrasts generated at the first level to compare the effects of intranasal OXT and placebo. The results were thresholded at uncorrected p < 0.001 with a cluster size of 6 voxels. Results: Compared to placebo, intranasal OXT attenuated activity in the inferior temporal, fusiform and parahippocampal gyri (BA 20), premotor cortex (BA 6), middle frontal gyrus (BA 10) and anterior cingulate gyrus (BA 24), and enhanced activity in the middle occipital gyrus (BA 18), inferior occipital gyrus (BA 19), and superior temporal gyrus (BA 22).
There were no significant differences in emotion recognition accuracy between the baseline (77.3±18.38), oxytocin (82.63±10.92) and placebo (76.62±22.67) conditions. Conclusion: Our results provide further evidence of the modulatory effect of oxytocin in patients with schizophrenia. A single dose of oxytocin resulted in significant changes in the activity of brain regions involved in emotion processing. Future studies need to examine the effectiveness of long-term treatment with OXT for emotion recognition deficits in patients with schizophrenia.
Keywords: recognition, functional connectivity, oxytocin, schizophrenia, social cognition
Procedia PDF Downloads 221
7141 Efficient Feature Fusion for Noise Iris in Unconstrained Environment
Authors: Yao-Hong Tsai
Abstract:
This paper presents an efficient fusion algorithm that combines multiple iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in daily life without the subject’s cooperation. Under large environmental variation, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image that is more stable for subsequent iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. Detection of the iris image is based on the Adaboost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with the weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm improve the performance of a verification system based on iris recognition.
Keywords: image fusion, iris recognition, local binary pattern, wavelet
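A minimal wavelet-based fusion of two registered images, in the spirit of the multi-resolution approach above, might look like the following. This sketch assumes a one-level Haar transform (even-sized inputs), averaged approximation coefficients, and a max-magnitude rule for detail coefficients; the actual wavelet family and fusion rule used by the author are not specified.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform of an even-sized image.
    Returns (LL, LH, HL, HH) coefficient sub-bands."""
    A, B = x[0::2, 0::2], x[0::2, 1::2]
    C, D = x[1::2, 0::2], x[1::2, 1::2]
    return ((A + B + C + D) / 2, (A - B + C - D) / 2,
            (A + B - C - D) / 2, (A - B - C + D) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the coarse approximation,
    keep the stronger (larger-magnitude) detail coefficient per position."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)
```

A useful sanity check of such a scheme is that fusing an image with itself returns the image unchanged.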
Procedia PDF Downloads 367
7140 Online Handwritten Character Recognition for South Indian Scripts Using Support Vector Machines
Authors: Steffy Maria Joseph, Abdu Rahiman V, Abdul Hameed K. M.
Abstract:
Online handwritten character recognition is a challenging field in artificial intelligence. The classification success rate of current techniques decreases when the dataset involves similarity and complexity in stroke styles, number of strokes, and stroke characteristics. Malayalam is a complex South Indian language spoken by about 35 million people, especially in Kerala and the Lakshadweep islands. In this paper, we consider significant feature extraction for the similar stroke styles of Malayalam. The extracted feature set is also suitable for the recognition of other handwritten South Indian languages such as Tamil, Telugu, and Kannada. A classification scheme based on support vector machines (SVM) is proposed to improve the accuracy in classification and recognition of online Malayalam handwritten characters. SVM classifiers are well suited to real-world applications. The contribution of various features to recognition accuracy is analysed, and performance for different SVM kernels is also studied. A graphical user interface has been developed for reading and displaying the characters. Different writing styles are taken for each of the 44 alphabets. Various features are extracted and used for classification after preprocessing of the input data samples. The highest recognition accuracy of 97% is obtained experimentally with the best feature combination and a polynomial kernel in the SVM.
Keywords: SVM, MATLAB, Malayalam, South Indian scripts, online handwritten character recognition
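The role of a polynomial kernel in separating classes that are not linearly separable can be illustrated with a kernel perceptron, a simpler stand-in for the paper's SVM (which was built in MATLAB). The function names and the toy XOR data below are invented for illustration only.

```python
import numpy as np

def poly_kernel(X, Y, degree=2, c=1.0):
    """Polynomial kernel (x.y + c)^degree between rows of X and Y."""
    return (X @ Y.T + c) ** degree

def kernel_perceptron(X, y, degree=2, epochs=50):
    """Train a kernel perceptron: alpha[i] counts mistakes on sample i."""
    K = poly_kernel(X, X, degree)
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        mistakes = 0
        for i in range(len(y)):
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def predict(X_train, y, alpha, X_new, degree=2):
    """Sign of the kernel expansion over the training points."""
    return np.sign((alpha * y) @ poly_kernel(X_train, X_new, degree))
```

The XOR labelling, impossible for a linear decision rule, becomes separable in the degree-2 feature space, which is the same mechanism that lets a polynomial-kernel SVM separate similar stroke patterns.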
Procedia PDF Downloads 576
7139 Voices from Inside and the Power of Art to Transform and Restore
Authors: Karen Miner-Romanoff
Abstract:
Few art programs for incarcerated juveniles exist; however, evaluation results indicate decreased recidivism and fewer behavior problems. This paper reports on an ongoing study of a promising art program for incarcerated adolescents with community exhibits and charitable sale of their work. Voices from Inside, a partnership between Franklin University and the Ohio Department of Youth Services, sponsored three exhibits in 2012, 2013, and 2014. In 2013, youth exhibitor survey results (response rate 47%, 16 of 34) showed that 81% cited as benefits cooperation with others, task completion, and increased self-esteem from public recognition and art sales. Community attendee survey results (response rate 29.5%, 59 of 200) showed positive attitude changes toward juvenile offenders, from 40% to 53%. Qualitative responses were similarly positive. The 2014 youth exhibitor sample was larger (response rate 58%, 29 of 50) and showed that 93% cited positive benefits, including an increase in self-esteem, a decrease in stress, and pride or recognition of the ability to reach a goal from completing, exhibiting, and selling their art to benefit a charity for at-risk youth. This year, the research team was able to conduct ten one-on-one interviews inside the youth facilities, and qualitative responses were even more positive, with one youth explaining, “This art represents my joy, my tears, my pain and my hope.” Community attendee survey results (response rate 50%, 86 of 170) indicated a significant impact on attitudes toward juvenile offenders and their rehabilitative needs, with one attendee stating that the event had an “immense impact for me, bringing into focus the humanity and value these youth still have for us and society.” Future research should include a correlation study to determine the extent to which these art programs reduce behavioral incidents inside the facility and produce a long-term reduction in reoffending rates.
Generally, further study of juvenile offenders’ art for rehabilitation and restorative justice, the power of art to transform, and university-community partnerships implementing art programs for juvenile offenders should continue.
Keywords: art, juvenile, incarcerated, restorative justice
Procedia PDF Downloads 431
7138 Gender Recognition with Deep Belief Networks
Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang
Abstract:
A gender recognition system can tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, and image or video retrieval. In this paper, we present an effective method for the gender classification task on frontal facial images based on deep belief networks (DBNs), which pre-train the model and yield a modest improvement in accuracy. Our experiments show that the pre-training method with DBNs for the gender classification task is feasible and achieves a small improvement in accuracy on the FERET and CAS-PEAL-R1 facial datasets.
Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs
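The greedy layer-wise pre-training behind DBNs rests on training restricted Boltzmann machines (RBMs) one layer at a time. A minimal single-layer Bernoulli RBM trained with one-step contrastive divergence (CD-1) might be sketched as follows; the hyperparameters and the helper name `train_rbm` are assumptions, not the authors' setup.

```python
import numpy as np

def train_rbm(data, n_hidden=16, lr=0.1, epochs=50, seed=0):
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1).
    data: (n_samples, n_visible) binary matrix."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)                     # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)                   # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)                   # negative phase
        # gradient step on the CD-1 approximation of the log-likelihood
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h
```

In a DBN, the hidden activations of one trained RBM become the "visible" data for the next layer, and the stacked weights then initialize a discriminative fine-tuning stage.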
Procedia PDF Downloads 455
7137 Third Language Perception of English Initial Plosives by Mandarin-Japanese Bilinguals
Authors: Rika Aoki
Abstract:
The aim of this paper is to investigate whether being bilingual facilitates or impedes the perception of a third language. The present study conducted a perception experiment in which Mandarin-Japanese bilinguals categorized a voice-onset-time (VOT) continuum into English /b/ or /p/. The results show that early bilinguals were influenced by both Mandarin and Japanese, while late bilinguals behaved in a manner similar to Mandarin monolinguals. Thus, it can be concluded that in the present study, having two languages did not help bilinguals perceive the L3 stop contrast in a native-like manner.
Keywords: bilinguals, perception, third language acquisition, voice-onset-time
Procedia PDF Downloads 292
7136 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems, one of which is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. A lip reading system is therefore one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and two-syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ).
In the proposed system, there are three subsystems. The first is the lip localization system, which localizes the lips in the digital inputs. The next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. The final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used to find the class parameters and class numbers for the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information from lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application. It can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what was said by other people without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
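The 2D-DCT feature extraction step above can be illustrated with a small sketch: an orthonormal DCT-II basis is applied along both axes of an image patch, and the low-frequency block of coefficients is kept as the feature vector. The block size and number of retained coefficients here are assumptions, not values from the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)   # DC row gets the smaller normalization
    return D

def dct2_features(img, keep=8):
    """Separable 2D DCT-II; retain the keep x keep low-frequency block."""
    D = dct_matrix(img.shape[0])
    E = dct_matrix(img.shape[1])
    coeffs = D @ img @ E.T
    return coeffs[:keep, :keep].ravel()
```

Because the matrix is orthonormal, the transform is energy-preserving, and truncating to the low-frequency block discards only fine detail, which is what makes the retained coefficients a compact lip-shape descriptor.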
Procedia PDF Downloads 286
7135 Emotion Recognition Using Artificial Intelligence
Authors: Rahul Mohite, Lahcen Ouarbya
Abstract:
This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. A limitation of these systems is that they require large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and facial emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones that rely solely on either facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. The increasing significance of and demand for facial recognition technology in emotion recognition are also discussed.
Keywords: facial recognition, expression recognition, deep learning, image recognition, facial technology, signal processing, image classification
Procedia PDF Downloads 123
7134 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning
Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond
Abstract:
Human activity recognition (HAR) systems have shown good performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition. However, water-based activity recognition has largely focused on supporting the elite and competitive swimming population, which already has excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support their individual motions to help them. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time window as input can cause performance issues in the machine learning model: the window’s size can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation, then separates the data based on changes in the sensor values. Our system uses a multi-phase segmentation method that pulls all peaks and valleys from each axis of an accelerometer placed on the swimmer’s lower back. This results in high recognition performance using leave-one-subject-out validation in our study of 20 beginner swimmers, with the model optimized on our final dataset achieving an F-score of 0.95.
Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition
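A simplified version of the peak/valley segmentation idea, with moving-average smoothing, might look like this for one accelerometer axis; the window size and the exact two-step procedure of the paper are assumptions here.

```python
import numpy as np

def smooth(signal, window=5):
    """Moving-average smoothing to suppress sensor jitter before extrema search."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def peaks_and_valleys(signal):
    """Indices where the first difference changes sign."""
    d = np.diff(signal)
    peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    valleys = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    return peaks, valleys

def segment(signal, window=5):
    """Cut the raw signal at every smoothed peak or valley: each segment
    then covers roughly one phase of a stroke."""
    s = smooth(signal, window)
    peaks, valleys = peaks_and_valleys(s)
    cuts = np.sort(np.concatenate([peaks, valleys]))
    return [signal[a:b] for a, b in zip(cuts[:-1], cuts[1:])]
```

Unlike a fixed-length window, each resulting segment adapts to the swimmer's own stroke rhythm, which is the point of segmenting on signal extrema rather than on the clock.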
Procedia PDF Downloads 123
7133 Hear My Voice: The Educational Experiences of Disabled Students
Authors: Karl Baker-Green, Ian Woolsey
Abstract:
Historically, a variety of methods have been used to access the student voice within higher education, including module evaluations and informal classroom feedback. Currently, however, the views articulated in student-staff committee meetings carry the most weight and can therefore have the most significant impact on departmental policy. Arguably, these forums are exclusionary, as several students, including those who experience severe anxiety, might feel unable to participate in these face-to-face, large-group activities. Similarly, students who declare a disability but are not in possession of a learning contract are more likely to withdraw from their studies than those whose additional needs have been formally recognised. It is also worth noting that whilst the number of disabled students in higher education has increased in recent years, the percentage of those who have been issued a learning contract has decreased. These issues foreground the need to explore the educational experiences of students with or without a learning contract in order to identify their respective aspirations and needs and therefore help shape education policy. This is in keeping with the ‘Nothing about us without us’ agenda, which recognises that disabled individuals are best placed to understand their own requirements and the most effective strategies to meet these.
Keywords: education, student voice, student experience, student retention
Procedia PDF Downloads 94
7132 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in domain-specific knowledge graphs. Furthermore, owing to several limitations and deficiencies, various approaches to recognizing domain-specific entities and relationships are far from perfect. In particular, named entity recognition in Chinese domains is a critical task for natural language processing applications, but a bottleneck for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, a distant supervised corpus is generated by an entity linking model based on a graph attention neural network; secondly, the generated corpus is used as input to train the distant supervised named entity recognition model and obtain named entities. The linking model is verified on the CCKS2019 entity linking corpus, where its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is better suited to distant supervised named entity recognition tasks. Finally, the framework is applied in the computer domain, and the results show that it can obtain domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
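The first stage, generating a distant supervised corpus, amounts to projecting a dictionary of known entities onto raw text to produce automatic labels. A minimal sketch of such dictionary-based BIO labelling follows; the real framework uses an entity linking model rather than exact string matching, so this is a deliberately simplified stand-in, and the entity types are invented.

```python
def distant_label(tokens, entity_dict):
    """Assign BIO tags by longest match against a dictionary that maps
    known entity strings to their types."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for j in range(len(tokens), i, -1):   # prefer the longest span
            span = " ".join(tokens[i:j])
            if span in entity_dict:
                tags[i] = "B-" + entity_dict[span]
                for k in range(i + 1, j):
                    tags[k] = "I-" + entity_dict[span]
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return tags
```

The resulting silver-standard tags are noisy (unlisted entities stay "O"), which is exactly why the second-stage model is trained to generalize beyond the dictionary.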
Procedia PDF Downloads 95
7131 Make Up Flash: Web Application for the Improvement of Physical Appearance in Images Based on Recognition Methods
Authors: Stefania Arguelles Reyes, Octavio José Salcedo Parra, Alberto Acosta López
Abstract:
This paper presents a web application that improves images through recognition methods. The web application is based on an analysis of picture-based recognition methods that allow improvements to the physical appearance of people posting on social networks. The approach relies on a study of tools that can correct or improve certain features of the face, with the help of a wide collection of user images taken as a reference to build a facial profile. Automatic facial profiling can be achieved with a deeper study of the Object Detection Library. It was possible to improve the initial images with the help of MATLAB and its filtering functions. The user can interact directly with the program and manually adjust their preferences.
Keywords: MATLAB, make up, recognition methods, web application
Procedia PDF Downloads 147
7130 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one closely tied to their lexical and syntactic organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on type-2 fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on singular value decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by adequate type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. T2FHMMs are therefore able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides offering better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov model
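The SVD feature extraction step can be sketched briefly: the singular values of a (possibly non-square) hand image matrix serve as a compact descriptor of its geometric structure. The normalization and the number of retained values below are assumptions, not the paper's exact settings.

```python
import numpy as np

def svd_features(img, k=10):
    """Top-k singular values of the image matrix as a feature vector."""
    s = np.linalg.svd(img, compute_uv=False)   # works for non-square matrices
    s = s / (np.linalg.norm(s) + 1e-12)        # normalize for scale invariance
    out = np.zeros(k)                          # fixed-length output
    out[:min(k, len(s))] = s[:k]
    return out
```

Singular values come out sorted in decreasing order, so the vector concentrates the matrix's dominant structure in its first components, which is what makes it a stable observable for the HMM.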
Procedia PDF Downloads 462
7129 Fine Grained Action Recognition of Skateboarding Tricks
Authors: Frederik Calsius, Mirela Popa, Alexia Briassouli
Abstract:
In the field of machine learning, it is common practice to use benchmark datasets to prove that a method works. The domain of action recognition in videos often uses datasets like Kinetics, Something-Something, UCF-101, and HMDB-51 to report results. Considering the properties of these datasets, none focus solely on very short clips (2 to 3 seconds) and on highly similar, fine-grained actions within one specific domain. This paper investigates how current state-of-the-art action recognition methods perform on a dataset that consists of highly similar, fine-grained actions. To do so, a dataset of skateboarding tricks was created. The analysis highlights both the benefits and the limitations of state-of-the-art methods, while proposing future research directions in the activity recognition domain. The conducted research shows that the best results are obtained by fusing RGB data with OpenPose data for the Temporal Shift Module.
Keywords: activity recognition, fused deep representations, fine-grained dataset, temporal modeling
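The Temporal Shift Module (TSM) mentioned in the results owes its temporal modelling to a simple channel-shift operation along the time axis: a fraction of the channels is shifted one frame back, another fraction one frame forward, and the rest is left untouched. A NumPy sketch of that shift (the non-residual, zero-padded form) is shown below; the fraction of shifted channels is a tunable assumption.

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """x: (T, C, H, W) clip tensor. Shift 1/fold_div of the channels one
    step back in time, another 1/fold_div one step forward, pad with zeros."""
    T, C = x.shape[:2]
    fold = C // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]               # future -> present
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # past -> present
    out[:, 2 * fold:] = x[:, 2 * fold:]          # remaining channels unchanged
    return out
```

After the shift, an ordinary per-frame 2D convolution mixes information from adjacent frames at zero extra FLOP cost, which is what lets TSM model short, fine-grained motions like skateboarding tricks.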
Procedia PDF Downloads 231
7128 Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns
Authors: Sayor Ajfar Aaron, Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz
Abstract:
This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios.
Keywords: artificial intelligence, convolutional neural network, emotion recognition, vocal patterns
Procedia PDF Downloads 57
7127 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. There are three basic steps in character recognition: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, the 8 feature values obtained through the eight-direction chain code frequency extraction method are summed, and this sum is defined as the feature for that block, yielding 16 further features. The number-of-holes feature is used to cluster similar characters. With these features, almost all common Myanmar characters with various font sizes can be recognized. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all features of the input character against the already trained character features.
Keywords: chain code frequency, character recognition, feature extraction, feature matching, segmentation
Procedia PDF Downloads 320
7126 Intelligent Human Pose Recognition Based on EMG Signal Analysis and Machine 3D Model
Authors: Si Chen, Quanhong Jiang
Abstract:
As posture recognition technology matures, human movement information is widely used today in sports rehabilitation, human-computer interaction, medical health, human posture assessment, and other fields. This project proposes to use collection equipment to acquire electromyographic (EMG) data, to map muscle posture changes onto a degree of freedom through data processing, to jointly adjust the data against a three-dimensional muscle model, and thereby to realize basic pose recognition. On this basis, bionic aids or medical rehabilitation equipment can be further developed with the help of robotic arms and cutting-edge technology, an area with a bright future and substantial room for development.
Keywords: pose recognition, 3D animation, electromyography, machine learning, bionics
Procedia PDF Downloads 81
7125 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware are getting smarter, so smartphone-based human activity recognition can become more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, selecting among these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier has been trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
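The feature selection step alluded to above, trimming a 561-feature vector before training an SVM, can be sketched with a simple variance filter followed by correlation ranking. This is only one plausible pipeline; the authors' actual selection method is not specified, and the function name and thresholds are assumptions.

```python
import numpy as np

def select_features(X, y, var_threshold=1e-8, top_k=10):
    """Drop near-constant columns, then rank the rest by absolute
    correlation with the class label. Returns the indices of the
    top_k retained columns, best first."""
    variances = X.var(axis=0)
    keep = np.where(variances > var_threshold)[0]   # variance filter
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep])
    order = np.argsort(-corrs)                      # strongest correlation first
    return keep[order[:top_k]]
```

Filtering out near-constant columns first also avoids the undefined correlations a constant feature would produce, and the reduced feature set shrinks the SVM's training time considerably.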
Procedia PDF Downloads 144
7124 Fight the Burnout: Phase Two of a NICU Nurse Wellness Bundle
Authors: Megan Weisbart
Abstract:
Background/Significance: The Intensive Care Unit (ICU) environment contributes to nurse burnout. Burnout costs include decreased employee compassion, missed workdays, worse patient outcomes, diminished job performance, high turnover, and higher organizational cost. Meaningful recognition, nurturing of interpersonal connections, and mindfulness-based interventions are associated with decreased burnout. The purpose of this quality improvement project was to decrease Neonatal ICU (NICU) nurse burnout using a Wellness Bundle that fosters meaningful recognition, interpersonal connections and includes mindfulness-based interventions. Methods: The Professional Quality of Life Scale Version 5 (ProQOL5) was used to measure burnout before Wellness Bundle implementation, after six months, and will be given yearly for three years. Meaningful recognition bundle items include Online submission and posting of staff shoutouts, recognition events, Nurses Week and Unit Practice Council member gifts, and an employee recognition program. Fostering of interpersonal connections bundle items include: Monthly staff games with prizes, social events, raffle fundraisers, unit blog, unit wellness basket, and a wellness resource sheet. Quick coherence techniques were implemented at staff meetings and huddles as a mindfulness-based intervention. Findings: The mean baseline burnout score of 14 NICU nurses was 20.71 (low burnout). The baseline range was 13-28, with 11 nurses experiencing low burnout, three nurses experiencing moderate burnout, and zero nurses experiencing high burnout. After six months of the Wellness Bundle Implementation, the mean burnout score of 39 NICU nurses was 22.28 (low burnout). The range was 14-31, with 22 nurses experiencing low burnout, 17 nurses experiencing moderate burnout, and zero nurses experiencing high burnout. 
Conclusion: A NICU Wellness Bundle that incorporated meaningful recognition, fostering of interpersonal connections, and mindfulness-based activities was implemented to improve the work environment and decrease nurse burnout. Participation bias and a low baseline response rate may have affected the reliability of the data and necessitate another comparative measure of burnout in one year.
Keywords: burnout, NICU, nurse, wellness
Procedia PDF Downloads 88
7123 Functional Outcome of Speech, Voice and Swallowing Following Excision of Glomus Jugulare Tumor
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Glomus jugulare tumors arise within the jugular foramen and are commonly seen in females, particularly on the left side. Surgical excision of the tumor may cause lower cranial nerve deficits. Cranial nerve involvement produces hoarseness of voice, slurred speech, and dysphagia along with other physical symptoms, thereby affecting the quality of life of these individuals. Though oncological clearance is mainly emphasized while treating these individuals, little importance is given to their communication, voice, and swallowing problems, which play a crucial part in daily functioning. Objective: To examine the voice, speech, and swallowing outcomes of subjects following excision of a glomus jugulare tumor. Methods: Two female subjects aged 56 and 62 years, who presented with complaints of change in voice, inability to swallow, and reduced clarity of speech following surgery for a left glomus jugulare tumor, were the participants of the study. Their surgical information revealed multiple cranial nerve palsies involving the left facial nerve, the left superior and recurrent branches of the vagus nerve, the left pharyngeal, left soft palate, left hypoglossal, and vestibular nerves. Functional outcomes of voice, speech, and swallowing were evaluated by perceptual and objective assessment procedures. Assessment included examination of oral structures and functions, assessment of dysarthria using the Frenchay Dysarthria Assessment, cranial nerve functions, and swallowing functions. The MDVP and Dr. Speech software were used to evaluate acoustic parameters of voice and voice quality, respectively. Results: The study revealed that both subjects, subsequent to excision of the glomus jugulare tumor, showed a varied picture of affected oral structures and functions, articulation, voice, and swallowing functions. The cranial nerve assessment showed impairment of the vagus, hypoglossal, facial, and glossopharyngeal nerves.
Voice examination indicated vocal cord paralysis associated with breathy voice quality, weak voluntary cough, reduced pitch and loudness range, and poor respiratory support. Perturbation parameters such as jitter and shimmer were affected, along with the s/z ratio, indicative of vocal fold pathology. Reduced maximum phonation duration (MPD) of vowels indicated disturbed coordination between the respiratory and laryngeal systems. Hypernasality was found to be a prominent feature, which reduced speech intelligibility. Imprecise articulation was seen in both subjects, as the hypoglossal nerve was affected following surgery. Injury to the vagus, hypoglossal, glossopharyngeal, and facial nerves disturbed the function of swallowing. All phases of the swallow were affected. Aspiration was observed before and during the swallow, confirming oropharyngeal dysphagia. All subsystems were affected as per the Frenchay Dysarthria Assessment, signifying a diagnosis of flaccid dysarthria. Conclusion: There is an observable communication and swallowing difficulty following excision of a glomus jugulare tumor. Even with complete resection, extensive rehabilitation may be necessary due to significant lower cranial nerve dysfunction. The findings of the present study stress the need for the involvement of a speech and swallowing therapist in pre-operative counseling and assessment of functional outcomes.
Keywords: functional outcome, glomus jugulare tumor excision, multiple cranial nerve impairment, speech and swallowing
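The perturbation measures named above have simple definitions: local jitter is the mean absolute difference between consecutive glottal periods divided by the mean period, and local shimmer is the analogous ratio for peak amplitudes. A minimal sketch, with made-up cycle measurements standing in for values a tool like MDVP would extract:

```python
# Local jitter and shimmer from cycle-to-cycle measurements (sketch)

def jitter_local(periods):
    """Mean absolute difference of consecutive periods / mean period, in %."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amps):
    """Mean absolute difference of consecutive peak amplitudes / mean amplitude, in %."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return 100 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

# Hypothetical measurements from a sustained vowel
periods = [0.0080, 0.0081, 0.0079, 0.0082, 0.0080]   # seconds per cycle
amps = [0.71, 0.69, 0.72, 0.70, 0.71]                # normalized peak amplitude
print(round(jitter_local(periods), 2), round(shimmer_local(amps), 2))
```

Elevated values of these ratios, as reported for the two subjects, reflect cycle-to-cycle instability of vocal fold vibration.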
Procedia PDF Downloads 252
7122 Multimodal Employee Attendance Management System
Authors: Khaled Mohammed
Abstract:
This paper presents novel face recognition and identification approaches for the real-time attendance management problem in large companies/factories and government institutions. The proposed system uses the Minimum Ratio (MR) approach for employee identification. Capturing authentic face variability from a sequence of video frames is used for face recognition and makes the system robust against variability in facial features. Experimental results indicated an improvement in the performance of the proposed system over previous approaches of between 2% and 5%. In addition, it halved the processing time compared with previous techniques such as the Extreme Learning Machine (ELM) and the Multi-Scale Structural Similarity index (MS-SSIM). Finally, it achieved an accuracy of 99%.
Keywords: attendance management system, face detection and recognition, live face recognition, minimum ratio
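The video-frame aggregation step can be sketched as follows. This does not reproduce the paper's Minimum Ratio identification; the frame data, threshold, and function names below are illustrative assumptions about how per-frame identifications might be combined into an attendance decision.

```python
# Marking attendance from per-frame recognition results (sketch)
from collections import Counter

def mark_attendance(frame_ids, min_ratio=0.6):
    """Mark an employee present if they are identified in at least
    `min_ratio` of the frames in which any face was recognized."""
    detected = [i for i in frame_ids if i is not None]
    if not detected:
        return set()
    counts = Counter(detected)
    return {emp for emp, n in counts.items() if n / len(detected) >= min_ratio}

# Hypothetical identifications over 10 consecutive video frames
frames = ["emp_042", "emp_042", None, "emp_042", "emp_017",
          "emp_042", "emp_042", None, "emp_042", "emp_042"]
print(mark_attendance(frames))  # prints {'emp_042'}
```

Aggregating over a sequence rather than trusting any single frame is what gives this kind of system robustness to momentary misidentifications.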
Procedia PDF Downloads 155
7121 Human Gait Recognition Using Moment with Fuzzy
Authors: Jyoti Bharti, Navneet Manjhi, M. K.Gupta, Bimi Jain
Abstract:
Reliable gait features are required to extract gait sequences from images. This paper suggests a simple moment-based method for gait identification. Moment values are extracted from different numbers of frames of grayscale and silhouette images from the CASIA database and are used as feature values. A fuzzy logic classifier and a nearest neighbour classifier are used for classification, and both achieved high recognition rates.
Keywords: gait, fuzzy logic, nearest neighbour, recognition rate, moments
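The moment features at the core of the method can be sketched directly: raw image moments and the central moments derived from them (the basis of Hu's invariants). The tiny 5x5 silhouette below is synthetic, not from the CASIA database, and the choice of which central moments to use as features is an assumption.

```python
# Raw and central image moments of a binary silhouette (sketch)
import numpy as np

def raw_moment(img, p, q):
    """M_pq = sum over pixels of x^p * y^q * I(x, y)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((img * x**p * y**q).sum())

def central_moment(img, p, q):
    """mu_pq, computed about the silhouette centroid (translation invariant)."""
    m00 = raw_moment(img, 0, 0)
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((img * (x - cx)**p * (y - cy)**q).sum())

silhouette = np.array([[0, 0, 1, 0, 0],
                       [0, 1, 1, 1, 0],
                       [0, 1, 1, 1, 0],
                       [0, 0, 1, 0, 0],
                       [0, 0, 1, 0, 0]], dtype=float)

features = [central_moment(silhouette, p, q)
            for p, q in [(2, 0), (0, 2), (1, 1)]]  # simple shape descriptors
print(features)
```

Such per-frame feature vectors can then be fed to a nearest neighbour or fuzzy classifier, as in the paper.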
Procedia PDF Downloads 759
7120 A Conglomerate of Multiple Optical Character Recognition Table Detection and Extraction
Authors: Smita Pallavi, Raj Ratn Pranesh, Sumit Kumar
Abstract:
Information representation as tables is a compact and concise method that eases searching, indexing, and storage requirements. Extracting and cloning tables from parsable documents is easier and widely used; however, industry still faces challenges in detecting and extracting tables from OCR (Optical Character Recognition) documents or images. This paper proposes an algorithm that detects and extracts multiple tables from an OCR document. The algorithm uses a combination of image processing techniques, text recognition, and procedural coding to identify distinct tables in the same image and map the text to the corresponding cell in a dataframe, which can then be stored as comma-separated values, a database, an Excel file, or several other usable formats.
Keywords: table extraction, optical character recognition, image processing, text extraction, morphological transformation
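The table-detection step typically begins by locating the ruling lines of the grid. A minimal sketch of that idea: project dark pixels onto rows and columns and treat near-complete runs as ruling lines. Real pipelines usually use morphological opening with long structuring elements instead; this projection variant and the toy image are simplifying assumptions, not the paper's algorithm.

```python
# Finding candidate table ruling lines in a binary image (sketch)
import numpy as np

def ruling_lines(binary, frac=0.8):
    """Return (row indices, column indices) where at least `frac` of the
    pixels are dark -- candidate horizontal/vertical ruling lines."""
    rows = np.where(binary.mean(axis=1) >= frac)[0]
    cols = np.where(binary.mean(axis=0) >= frac)[0]
    return rows.tolist(), cols.tolist()

# Toy 7x7 image of a 2x2 table: 1 = dark (ink), 0 = background
img = np.zeros((7, 7))
img[[0, 3, 6], :] = 1      # three horizontal rules
img[:, [0, 3, 6]] = 1      # three vertical rules

rows, cols = ruling_lines(img)
print(rows, cols)          # prints [0, 3, 6] [0, 3, 6]
```

The intersections of the detected rows and columns delimit the cells, after which recognized text can be mapped into the corresponding dataframe cell.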
Procedia PDF Downloads 145
7119 Conversational Assistive Technology of Visually Impaired Person for Social Interaction
Authors: Komal Ghafoor, Tauqir Ahmad, Murtaza Hanif, Hira Zaheer
Abstract:
Assistive technology has been developed to support visually impaired people in their social interactions. Conversation assistive technology is designed to enhance communication skills, facilitate social interaction, and improve the quality of life of visually impaired individuals. This technology includes speech recognition, text-to-speech features, and other communication devices that enable users to communicate with others in real time. The technology uses natural language processing and machine learning algorithms to analyze spoken language and provide appropriate responses. It also includes features such as voice commands and audio feedback to provide users with a more immersive experience. These technologies have been shown to increase the confidence and independence of visually impaired individuals in social situations and have the potential to improve their social skills and relationships with others. Overall, conversation-assistive technology is a promising tool for empowering visually impaired people and improving their social interactions. One of the key benefits of conversation-assistive technology is that it allows visually impaired individuals to overcome communication barriers that they may face in social situations. It can help them to communicate more effectively with friends, family, and colleagues, as well as strangers in public spaces. By providing a more seamless and natural way to communicate, this technology can help to reduce feelings of isolation and improve overall quality of life. The main objective of this research is to give blind users the capability to move around in unfamiliar environments through a user-friendly device with a face, object, and activity recognition system. The model is evaluated on the accuracy of activity recognition. The device captures the user's front view with a camera, detects objects, recognizes activities, and answers the user's queries.
A local dataset was collected that includes different first-person human activities. The results obtained are the identification of the activities that the VGG-16 model was trained on, such as hugging, shaking hands, talking, walking, and waving.
Keywords: dataset, visually impaired person, natural language process, human activity recognition
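One common way to turn per-frame classifier outputs into a single activity label for a video clip is a majority vote. The sketch below assumes this aggregation strategy; the VGG-16 model itself is not reproduced, and the per-frame predictions are made-up stand-ins for its argmax output.

```python
# Majority vote over per-frame activity predictions (sketch)
from collections import Counter

ACTIVITIES = ["Hugging", "Shaking Hands", "Talking", "Walking", "Waving"]

def clip_label(frame_predictions):
    """Label a video clip by the most frequent per-frame prediction."""
    most_common, _ = Counter(frame_predictions).most_common(1)[0]
    return most_common

# Hypothetical per-frame predictions for one clip
frames = ["Walking", "Walking", "Waving", "Walking", "Walking", "Talking"]
print(clip_label(frames))  # prints Walking
```

Voting over frames smooths out the occasional frame-level misclassification before the result is spoken back to the user.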
Procedia PDF Downloads 60
7118 Image Recognition and Anomaly Detection Powered by GANs: A Systematic Review
Authors: Agastya Pratap Singh
Abstract:
Generative Adversarial Networks (GANs) have emerged as powerful tools in the fields of image recognition and anomaly detection due to their ability to model complex data distributions and generate realistic images. This systematic review explores recent advancements and applications of GANs in both image recognition and anomaly detection tasks. We discuss various GAN architectures, such as DCGAN, CycleGAN, and StyleGAN, which have been tailored to improve accuracy, robustness, and efficiency in visual data analysis. In image recognition, GANs have been used to enhance data augmentation, improve classification models, and generate high-quality synthetic images. In anomaly detection, GANs have proven effective in identifying rare and subtle abnormalities across various domains, including medical imaging, cybersecurity, and industrial inspection. The review also highlights the challenges and limitations associated with GAN-based methods, such as instability during training and mode collapse, and suggests future research directions to overcome these issues. Through this review, we aim to provide researchers with a comprehensive understanding of the capabilities and potential of GANs in transforming image recognition and anomaly detection practices.
Keywords: generative adversarial networks, image recognition, anomaly detection, DCGAN, CycleGAN, StyleGAN, data augmentation
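The core idea behind GAN-based anomaly detectors such as AnoGAN is that a generator trained on normal data can only reconstruct samples near the learned data manifold, so the reconstruction residual serves as an anomaly score. A minimal numpy sketch of just that residual idea, with a stand-in linear "generator" replacing a trained GAN (real systems optimize a latent code against the trained generator and usually add a discriminator feature term):

```python
# Reconstruction-based anomaly scoring with a stand-in generator (sketch)
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))          # stand-in "generator": latent (2) -> data (8)

def reconstruct(x):
    """Best reconstruction of x within the generator's range (least squares)."""
    z, *_ = np.linalg.lstsq(W, x, rcond=None)
    return W @ z

def anomaly_score(x):
    """Residual norm: large when x lies far from the learned data manifold."""
    return float(np.linalg.norm(x - reconstruct(x)))

normal = W @ rng.normal(size=2)              # a sample on the manifold
anomaly = normal + 5 * rng.normal(size=8)    # the same sample, perturbed off-manifold
print(anomaly_score(normal), anomaly_score(anomaly))
```

The on-manifold sample scores near zero while the perturbed sample scores high, which is the separation GAN-based detectors exploit.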
Procedia PDF Downloads 25
7117 Recognition of Cursive Arabic Handwritten Text Using Embedded Training Based on Hidden Markov Models (HMMs)
Authors: Rabi Mouhcine, Amrouch Mustapha, Mahani Zouhir, Mammass Driss
Abstract:
In this paper, we present a system for offline recognition of cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The system is analytical, avoids explicit segmentation, and uses embedded training to build and enhance the character models. Feature extraction, preceded by baseline estimation, yields statistical and geometric features that capture both the peculiarities of the text and the pixel-distribution characteristics of the word image. These features are modelled with hidden Markov models trained by embedded training. Experiments on images from the benchmark IFN/ENIT database show that the proposed system improves recognition.
Keywords: recognition, handwriting, Arabic text, HMMs, embedded training
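The recognition step in such a system scores a feature sequence against each word model and picks the likeliest. A minimal sketch of that scoring with the forward algorithm, which computes P(observations | model). The discrete emissions and tiny two-state models below are simplifying assumptions; the paper uses continuous features and embedded (word-level) training of character models.

```python
# HMM forward-algorithm scoring for word recognition (sketch)
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """P(obs | HMM): pi initial probs, A transition matrix, B emission matrix."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])          # left-to-right topology, as in handwriting HMMs
B_word1 = np.array([[0.9, 0.1],
                    [0.2, 0.8]])    # emits symbol 0 early, symbol 1 late
B_word2 = np.array([[0.1, 0.9],
                    [0.8, 0.2]])    # reversed emission pattern

obs = [0, 0, 1, 1]                  # quantized feature sequence from a word image
scores = {"word1": forward_likelihood(obs, pi, A, B_word1),
          "word2": forward_likelihood(obs, pi, A, B_word2)}
print(max(scores, key=scores.get))  # prints word1
```

In embedded training, the same likelihood machinery is run over whole word images with character models concatenated, so no explicit character segmentation is needed.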
Procedia PDF Downloads 355
7116 Obstacle Detection and Path Tracking Application for Disables
Authors: Aliya Ashraf, Mehreen Sirshar, Fatima Akhtar, Farwa Kazmi, Jawaria Wazir
Abstract:
Vision, the basis for performing navigational tasks, is absent or greatly reduced in visually impaired people, who consequently face many hurdles. To increase the navigational capabilities of visually impaired people, this paper presents a desktop application, ODAPTA. The application uses a camera to capture video of the surroundings, applies various image processing algorithms to extract information about the path and obstacles, tracks the obstacles, and delivers that information to the user through voice commands. Experimental results show that the application works effectively for straight paths in daylight.
Keywords: visually impaired, ODAPTA, Region of Interest (ROI), driver fatigue, face detection, expression recognition, CCD camera, artificial intelligence
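One simple way an obstacle check on a path region of interest (ROI) could work is to flag an obstacle when the edge density inside the ROI exceeds a threshold. ODAPTA's actual algorithms are not described in detail in the abstract, so the gradient heuristic, thresholds, and toy frame below are assumptions for illustration only.

```python
# Edge-density obstacle check on a path ROI (sketch)
import numpy as np

def edge_density(roi, edge_thresh=0.4):
    """Fraction of ROI pixels whose gradient magnitude exceeds edge_thresh."""
    gy, gx = np.gradient(roi.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > edge_thresh).mean())

def obstacle_ahead(frame, roi_slice, density_thresh=0.2):
    """A cluttered (edge-rich) path region suggests an obstacle."""
    return edge_density(frame[roi_slice]) > density_thresh

frame = np.zeros((20, 20))
frame[8:12, 8:12] = 1.0                 # a bright object on the path
roi = (slice(5, 15), slice(5, 15))      # central path region

print(obstacle_ahead(frame, roi))       # prints True
```

A positive result would then be converted into a spoken warning; a smooth, uniform path region yields a low edge density and no alert.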
Procedia PDF Downloads 550
7115 My Voice My Well-Being: A Participatory Research Study with Secondary School Students in Bangladesh
Authors: Saira Hossain
Abstract:
Well-being commonly refers to the concept of a good life. Similarly, student well-being can be understood as the notion of a good life at school. What constitutes a good life at school for students? This emerging question has attracted substantial interest in this area of research. Student well-being is associated not only with a student's socio-emotional and academic development at school but also with success in life after school as an adult. Today, student well-being is a popular agenda for educators, policymakers, teachers, parents, and, most importantly, students. With the emergence of student well-being, students' voices in matters important to them at school are increasingly given priority. However, the coin has another side too. Despite the growing importance of understanding student well-being, it is still an alien concept in countries like Bangladesh. The education system of Bangladesh is highly rigid, centralized, and exam-focused. Students' academic achievement is given the utmost priority at school, whereas their voices, as well as their well-being, are grossly neglected in practice. In this regard, the study set out to explore students' conceptualization of well-being at school in Bangladesh. The study was qualitative. It employed a participatory research approach to elicit the views of 25 secondary school students aged 14-16 in Bangladesh and explore the concept of well-being. Data analysis was conducted using the thematic analysis technique. The results suggested that students conceptualized well-being as a multidimensional concept with multiple domains, including having, being, relating, feeling, thinking, functioning, and striving. The future implications of the study findings are discussed.
Additionally, the study underscores the value of the participatory approach as a research technique for exploring students' opinions in Bangladesh, where a culture of silence surrounds the student voice.
Keywords: Bangladesh, participatory research, secondary school, student well-being
Procedia PDF Downloads 137