Search results for: automated facial recognition
2614 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition
Authors: A. Degale Desta, Tamirat Kebamo
Abstract:
Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of Deep Convolutional Neural Networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings. Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models. Method: EthioCAD, a novel video-based dataset, was created with a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data was processed, with 4.224 KB of frames and 33.485 MB of images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset’s effectiveness, achieving high mean Average Precision (mAP) scores. Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses. Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings. Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities. Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition
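For readers who want to see what the evaluation step looks like in practice, below is a minimal, hypothetical sketch of fine-tuning and validating a YOLOv8 model with the Ultralytics API on a Roboflow-style export such as EthioCAD; the dataset YAML path, model size, and training settings are assumptions, not the authors' published configuration.

```python
# Hypothetical sketch: fine-tuning and evaluating Ultralytics YOLOv8 on a
# Roboflow-exported dataset such as EthioCAD (paths and settings are assumed).
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on the custom classes.
model = YOLO("yolov8n.pt")
model.train(
    data="ethiocad/data.yaml",  # assumed Roboflow export with 18 activity classes
    epochs=100,
    imgsz=640,
)

# Validation reports the metrics quoted in the abstract.
metrics = model.val()
print("mAP50:   ", metrics.box.map50)   # e.g. ~0.90 reported in the paper
print("mAP50-95:", metrics.box.map)
```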
Procedia PDF Downloads 10
2613 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to the increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) has been used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features from each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN network that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on the same. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies. Keywords: sign language recognition, deep learning, convolution neural network, recurrent neural network
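The CNN-plus-RNN design described above can be sketched briefly; the following minimal PyTorch example illustrates the general architecture only (the ResNet-18 backbone, hidden size, and clip length are assumptions, not the authors' Healthcare-SignNet configuration).

```python
# Minimal sketch of a CNN + RNN video classifier: per-frame CNN features
# followed by an LSTM over the frame sequence (sizes are assumed values).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnRnnSignClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d frame embedding
        self.cnn = backbone
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                # clips: (batch, time, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w))   # spatial features per frame
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)                  # temporal dependencies
        return self.head(h_n[-1])                      # class logits per clip

# toy forward pass with 2 clips of 16 frames each
logits = CnnRnnSignClassifier(num_classes=50)(torch.randn(2, 16, 3, 224, 224))
```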
Procedia PDF Downloads 26
2612 Visual Speech Perception of Arabic Emphatics
Authors: Maha Saliba Foster
Abstract:
Speech perception has been recognized as a bi-sensory process involving the auditory and visual channels. Compared to the auditory modality, the contribution of the visual signal to speech perception is not very well understood. Studying how the visual modality affects speech recognition can have pedagogical implications in second language learning, as well as clinical application in speech therapy. The current investigation explores the potential effect of speech visual cues on the perception of Arabic emphatics (AEs). The corpus consists of 36 minimal pairs each containing two contrasting consonants, an AE versus a non-emphatic (NE). Movies of four Lebanese speakers were edited to allow perceivers to have partial view of facial regions: lips only, lips-cheeks, lips-chin, lips-cheeks-chin, lips-cheeks-chin-neck. In the absence of any auditory information and relying solely on visual speech, perceivers were above chance at correctly identifying AEs or NEs across vowel contexts; moreover, the models were able to predict the probability of perceivers’ accuracy in identifying some of the COIs produced by certain speakers; additionally, results showed an overlap between the measurements selected by the computer and those selected by human perceivers. The lack of significant face effect on the perception of AEs seems to point to the lips, present in all of the videos, as the most important and often sufficient facial feature for emphasis recognition. Future investigations will aim at refining the analyses of visual cues used by perceivers by using Principal Component Analysis and including time evolution of facial feature measurements.Keywords: Arabic emphatics, machine learning, speech perception, visual speech perception
Procedia PDF Downloads 306
2611 Noninvasive Evaluation of Acupuncture by Measuring Facial Temperature through Thermal Image
Authors: An Guo, Hieyong Jeong, Tianyi Wang, Na Li, Yuko Ohno
Abstract:
Acupuncture, known as sensory stimulation, has been used to treat various disorders for thousands of years. However, previous studies have not addressed approaches for noninvasive measurement in order to evaluate the therapeutic effect of acupuncture. The purpose of this study is to propose a noninvasive method to evaluate acupuncture by measuring facial temperature through thermal imaging. Three human subjects were recruited in this study. Each subject received acupuncture therapy for 30 mins. Acupuncture needles (Ø0.16 x 30 mm) were inserted into the Baihui point (DU20), Neiguan points (PC6) and Taichong points (LR3), and acupuncture needles (Ø0.18 x 39 mm) were inserted into the Tanzhong point (RN17), Zusanli points (ST36) and Yinlingquan points (SP9). Facial temperature was recorded by an infrared thermometer. The acupuncture therapeutic effect was compared pre- and post-acupuncture. Experimental results demonstrated that facial temperature changed according to the acupuncture therapeutic effect. It was concluded that the proposed method shows high potential for evaluating acupuncture through noninvasive measurement of facial temperature. Keywords: acupuncture, facial temperature, noninvasive evaluation, thermal image
Procedia PDF Downloads 187
2610 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd ZaizuIlyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMMs, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by discarding high-frequency coefficients, applying a rectangular mask to the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by combining several Gaussian functions with different statistical properties, the best feature representation can be modelled as a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested on the FERET data sets and achieves 92% recognition rates. Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
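To make the two-step pipeline concrete, here is an illustrative sketch of blockwise low-frequency spectrum features followed by per-subject GMM likelihood scoring; the block size, mask size, and component count are assumed values, not those used in the paper.

```python
# Illustrative sketch: low-frequency local spectrum features from overlapping
# blocks, then maximum-likelihood identification over per-subject GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

def local_spectrum_features(img, block=16, step=8, keep=4):
    feats = []
    for y in range(0, img.shape[0] - block + 1, step):
        for x in range(0, img.shape[1] - block + 1, step):
            spec = np.fft.fft2(img[y:y + block, x:x + block])
            low = np.abs(spec[:keep, :keep])       # rectangular low-frequency mask
            feats.append(low.ravel())
    return np.array(feats)

def enroll(images, n_components=8):
    """Fit one GMM per enrolled subject from that subject's training images."""
    X = np.vstack([local_spectrum_features(im) for im in images])
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

def identify(img, gmms):
    """Maximum-likelihood decision over the pre-computed subject GMMs."""
    X = local_spectrum_features(img)
    scores = {sid: gmm.score(X) for sid, gmm in gmms.items()}
    return max(scores, key=scores.get)
```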
Procedia PDF Downloads 667
2609 Automatic Music Score Recognition System Using Digital Image Processing
Authors: Yuan-Hsiang Chang, Zhong-Xian Peng, Li-Der Jeng
Abstract:
Music has always been an integral part of humans' daily lives. But for most people, reading a musical score and turning it into a melody is not easy. This study aims to develop an automatic music score recognition system using digital image processing, which can be used to read and analyze musical score images automatically. The technical approaches included: (1) staff region segmentation; (2) image preprocessing; (3) note recognition; and (4) accidental and rest recognition. Digital image processing techniques (e.g., horizontal/vertical projections, connected component labeling, morphological processing, template matching, etc.) were applied according to the musical notes, accidentals, and rests in staff notations. Preliminary results showed that our system could achieve detection and recognition rates of 96.3% and 91.7%, respectively. In conclusion, we presented an effective automated musical score recognition system that could be integrated with a media player to play music/songs given input images of a musical score. Ultimately, this system could also be incorporated in applications for mobile devices as a learning tool, such that a music learner could learn to play music/songs. Keywords: connected component labeling, image processing, morphological processing, optical musical recognition
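As a small illustration of the staff-region segmentation step mentioned in the abstract, the following sketch finds staff-line rows as peaks of the horizontal projection of a binarized score image; the peak-ratio threshold is an assumed value.

```python
# Sketch: staff lines appear as strong peaks in the horizontal projection
# of a binarized score image (threshold ratio is an assumption).
import cv2
import numpy as np

def find_staff_line_rows(score_path, peak_ratio=0.6):
    img = cv2.imread(score_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    projection = binary.sum(axis=1)              # horizontal projection profile
    threshold = peak_ratio * projection.max()
    return np.where(projection > threshold)[0]   # row indices of staff lines

# rows = find_staff_line_rows("score_page.png")
```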
Procedia PDF Downloads 419
2608 Exploring the Efficacy of Nitroglycerin in Filler-Induced Facial Skin Ischemia: A Narrative Review
Authors: Amir Feily, Hazhir Shahmoradi Akram, Mojtaba Ghaedi, Farshid Javdani, Naser Hatami, Navid Kalani, Mohammad Zarenezhad
Abstract:
Background: Filler-induced facial skin ischemia is a potential complication of dermal filler injections that can result in tissue damage and necrosis. Nitroglycerin has been suggested as a treatment option due to its vasodilatory effects, but its efficacy in this context is unclear. Methods: A narrative review was conducted to examine the available evidence on the efficacy of nitroglycerin in filler-induced facial skin ischemia. Relevant studies were identified through a search of electronic databases and manual searching of reference lists. Results: The review found limited evidence supporting the efficacy of nitroglycerin in this context. While there were case reports where the combination of nitroglycerin and hyaluronidase was successful in treating filler-induced facial skin ischemia, there was only one case report where nitroglycerin alone was successful. Furthermore, a rat model did not demonstrate any benefits of nitroglycerin and showed harmful results. Conclusion: The evidence regarding the efficacy of nitroglycerin in filler-induced facial skin ischemia is inconclusive and seems to be against its application. Further research is needed to determine the effectiveness of nitroglycerin alone and in combination with other treatments for this condition. Clinicians should consider limited evidence bases when deciding on treatment options for patients with filler-induced facial skin ischemia.Keywords: nitroglycerin, facial, skin ischemia, fillers, efficacy, narrative review
Procedia PDF Downloads 92
2607 Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing
Authors: Bahaa Eltahawy, Mikko Ylihärsilä, Reino Virrankoski, Esko Petäjä
Abstract:
Sheet metal processing is automated, but the step from product models to production machine control still requires human intervention. This may cause time-consuming bottlenecks in the production process and increase the risk of human errors. In this paper we present a system which automatically recognizes features from the CAD model of the sheet metal product. Using these features, the system produces a complete model of the particular sheet metal product. The model is then used as input for the sheet metal processing machine. The system is currently implemented, capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human errors. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing. Keywords: feature recognition, automation, sheet metal manufacturing, CAD, CAM
Procedia PDF Downloads 354
2606 Highly Realistic Facial Expressions of Anthropomorphic Social Agent as a Factor in Solving the 'Uncanny Valley' Problem
Authors: Daniia Nigmatullina, Vlada Kugurakova, Maxim Talanov
Abstract:
We present a methodology and our plans for anthropomorphic social agent visualization. This includes the creation of a three-dimensional model of the virtual companion's head and its facial expressions. Talking Head is a cross-disciplinary project developing a human-machine interface with cognitive functions. During the creation of a realistic humanoid robot or character, the ‘uncanny valley’ problem may arise. We consider this phenomenon and its possible causes, and we aim to overcome the ‘uncanny valley’ by increasing realism. This article discusses issues that should be considered when creating highly realistic characters (particularly the head), their facial expressions, and speech visualization. Keywords: anthropomorphic social agent, facial animation, uncanny valley, visualization, 3D modeling
Procedia PDF Downloads 290
2605 A Common Automated Programming Platform for Knowledge Based Software Engineering
Authors: Ivan Stanev, Maria Koleva
Abstract:
A common platform for automated programming (CPAP) is defined in detail. Two versions of CPAP are described: cloud-based (including the set of components for classic programming and the set of components for combined programming) and KBASE-based (including the set of components for automated programming and the set of components for ontology programming). Four KBASE products (a module for automated programming of robots, an intelligent product manual, an intelligent document display, and an intelligent form generator) are analyzed, and CPAP contributions to automated programming are presented. Keywords: automated programming, cloud computing, knowledge based software engineering, service oriented architecture
Procedia PDF Downloads 343
2604 Anthropometric Measurements of Facial Proportions in Azerbaijan Population
Authors: Nigar Sultanova
Abstract:
Facial morphology is a constant topic of concern for clinicians. When anthropometric methods were introduced into clinical practice to quantify changes in the craniofacial framework, features distinguishing various ethnic groups were discovered. Normative data of facial measurements are indispensable for precise determination of the degree of deviation from normal. The aim was to establish the reference range of facial proportions in the Azerbaijan population by anthropometric measurements of the craniofacial complex. The study group consisted of 350 healthy young subjects, 175 males and 175 females, 18 to 25 years of age, from 7 different regions of Azerbaijan. The anthropometric examination was performed according to L. Farkas's method with our modification. In order to determine the morphologic characteristics of seven regions of the craniofacial complex, 42 anthropometric measurements were selected. The anthropometric examination included the use of 33 anthropometric landmarks. The 80 indices of facial proportions suggested by Farkas and Munro were calculated: head - 10, face - 23, nose - 23, lips - 9, orbits - 11, ears - 4. The database of the North American white population was used as a reference group. Anthropometric measurements of facial proportions in the Azerbaijan population revealed a significant difference between men and women, in line with sexual dimorphism. In comparison with North American whites, considerable differences in facial proportions were observed in the head, face, orbits, labio-oral, nose, and ear regions. However, in women of the Azerbaijani population, 29 out of 80 proportion indices were similar to the proportions of NAW women. In men of the Azerbaijani population, 27 out of 80 proportion indices did not reveal a statistically significant difference from the proportions of NAW men. Estimation of the reference range of facial proportions in the Azerbaijan population might be helpful for formulating surgical plans in the successful treatment of congenital or post-traumatic facial deformities. Keywords: facial morphology, anthropometry, indices of proportion, measurement
Procedia PDF Downloads 117
2603 Facial Infiltrating Lipomatosis, a Rare Cause of Facial Asymmetry to Be Known: Case Report and Literature Review
Authors: Shantanu Vyas, Neerja Meena
Abstract:
Facial infiltrating lipomatosis is a rare lipomatous lesion, first described by Slavin in 1983. It is a benign pseudotumor pathology corresponding to a non-encapsulated collection of mature adipocytes infiltrating the local tissue, with hyperplasia of the underlying bone leading to a craniofacial deformity. Very few cases have been reported in the literature. We report the case of a 19-year-old female patient who presented with a swelling of the right hemiface progressively evolving since birth. Physical examination revealed facial asymmetry. On palpation, the mass was soft, painless, not compressible, not pulsatile, and not fluctuating. In view of the asymptomatic nature and slow progression of the lesion, a lipomatous tumour, namely lipoma, was suggested. The CT scan shows hyperplastic subcutaneous fat on the right hemiface. In the right jugal and temporal areas, there is a subcutaneous formation of fatty density, poorly delimited, with no detectable peripheral capsule; it merges with the adjacent fat. In the bone window, there was hyperplasia of the underlying bone. Because facial infiltrating lipomatosis is a benign pseudotumor pathology, it can be confused with other disorders, in particular hemifacial hyperplasia. A combination of physical and radiological findings can establish the diagnosis. Surgical treatment is performed for cosmetic purposes. Keywords: cosmetic correction and facial asymmetry, aesthetic results, facial infiltration, surgery
Procedia PDF Downloads 76
2602 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination
Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini
Abstract:
This research work aims to develop a system that will analyze and identify students who indulge in malpractices/suspicious activities during the course of an academic offline examination. Automated video surveillance provides an optimal solution which helps in monitoring the students and identifying any malpractice event immediately. This work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method, cross-checking each student's profile with the database. The presence or absence of the student is also determined in this module by implementing an image registration technique, wherein a grid is formed by considering all the images registered using the frontal camera at the determined positions. The second module detects facial malpractices in which a student gets involved in conversation with another, trying to obtain unauthorized information etc., based on a threshold range evaluated from his/her mouth state, whether open or closed. The third module identifies unauthorized material or gadgets used in the examination hall by training on positive samples of the object through various stages; here, a top-view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activities are identified, thereby reducing the error rate caused by manual monitoring. This work is an improvement over our previously published work on identifying suspicious activities of examinees in an offline examination. Keywords: impersonation, image registration, incrimination, object detection, threshold evaluation
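The mouth-state check used by the second module can be illustrated with a simple mouth-aspect-ratio threshold; in this sketch the landmark extraction is assumed to happen upstream (with any facial-landmark detector), and the threshold value is illustrative rather than the authors' setting.

```python
# Sketch of a mouth open/closed check over mouth landmarks; the 0.5 threshold
# and the landmark source are assumptions for illustration only.
import numpy as np

def mouth_aspect_ratio(mouth_pts):
    """mouth_pts: (N, 2) array of mouth landmarks in image coordinates."""
    pts = np.asarray(mouth_pts, dtype=float)
    width = pts[:, 0].max() - pts[:, 0].min()    # horizontal mouth extent
    height = pts[:, 1].max() - pts[:, 1].min()   # vertical mouth opening
    return height / max(width, 1e-6)

def is_talking(mouth_pts, threshold=0.5):
    return mouth_aspect_ratio(mouth_pts) > threshold
```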
Procedia PDF Downloads 230
2601 Interventions for Children with Autism Using Interactive Technologies
Authors: Maria Hopkins, Sarah Koch, Fred Biasini
Abstract:
Autism is lifelong disorder that affects one out of every 110 Americans. The deficits that accompany Autism Spectrum Disorders (ASD), such as abnormal behaviors and social incompetence, often make it extremely difficult for these individuals to gain functional independence from caregivers. These long-term implications necessitate an immediate effort to improve social skills among children with an ASD. Any technology that could teach individuals with ASD necessary social skills would not only be invaluable for the individuals affected, but could also effect a massive saving to society in treatment programs. The overall purpose of the first study was to develop, implement, and evaluate an avatar tutor for social skills training in children with ASD. “Face Say” was developed as a colorful computer program that contains several different activities designed to teach children specific social skills, such as eye gaze, joint attention, and facial recognition. The children with ASD were asked to attend to FaceSay or a control painting computer game for six weeks. Children with ASD who received the training had an increase in emotion recognition, F(1, 48) = 23.04, p < 0.001 (adjusted Ms 8.70 and 6.79, respectively) compared to the control group. In addition, children who received the FaceSay training had higher post-test scored in facial recognition, F(1, 48) = 5.09, p < 0.05 (adjusted Ms: 38.11 and 33.37, respectively) compared to controls. The findings provide information about the benefits of computer-based training for children with ASD. Recent research suggests the value of also using socially assistive robots with children who have an ASD. Researchers investigating robots as tools for therapy in ASD have reported increased engagement, increased levels of attention, and novel social behaviors when robots are part of the social interaction. The overall goal of the second study was to develop a social robot designed to teach children specific social skills such as emotion recognition. The robot is approachable, with both an animal-like appearance and features of a human face (i.e., eyes, eyebrows, mouth). The feasibility of the robot is being investigated in children ages 7-12 to explore whether the social robot is capable of forming different facial expressions to accurately display emotions similar to those observed in the human face. The findings of this study will be used to create a potentially effective and cost efficient therapy for improving the cognitive-emotional skills of children with autism. Implications and study findings using the robot as an intervention tool will be discussed.Keywords: autism, intervention, technology, emotions
Procedia PDF Downloads 381
2600 Three-Dimensional Measurement and Analysis of Facial Nerve Recess
Authors: Kang Shuo-Shuo, Li Jian-Nan, Yang Shiming
Abstract:
Purpose: The three-dimensional anatomical structure of the facial nerve recess and its relationships were measured on high-resolution temporal bone CT to provide an imaging reference for cochlear implant surgery. Materials and Methods: By analyzing high-resolution CT scans of 160 cases (320 ears) of the temporal bone, the following parameters were measured at the level of the round window niche on axial images: 1. the distance between the facial nerve and the chorda tympani nerve, d1; 2. the distance between the facial nerve and the round window niche, d2; 3. the relative angle between the facial nerve and the round window niche, a; 4. the distance between the midpoint of the facial recess and the round window niche, d3; 5. the relative angle between the midpoint of the facial recess and the round window niche, b. Factors that might influence the anatomy of the facial recess were recorded, including the patient's sex, age, and anatomical variations (e.g., vestibular aqueduct dilation, mastoid pneumatization type, sigmoid sinus advancement, jugular bulb elevation, etc.), and the correlation between these factors and the measured facial recess parameters was analyzed. Result: The mean facial nerve-chorda tympani distance d1 is (3.92 ± 0.26) mm, the mean facial nerve-niche distance d2 is (5.95 ± 0.62) mm, the mean facial nerve-niche angle a is (94.61 ± 9.04)°, the mean recess-niche distance d3 is (6.46 ± 0.63) mm, and the average recess-niche angle b is (113.47 ± 7.83)°. Sex, age, and an anteriorly positioned sigmoid sinus were the three factors affecting the width of the facial recess d1, the angle of the facial nerve relative to the round window niche a, and the angle of the facial recess relative to the round window niche b. Conclusion: High-resolution temporal bone CT before cochlear implantation can show the important anatomical relationships of the facial nerve recess, and the measurement results have clinical reference value for cochlear implantation surgery. Keywords: cochlear implantation, recess of facial nerve, temporal bone CT, three-dimensional measurement
Procedia PDF Downloads 16
2599 Peripheral Facial Nerve Palsy after Lip Augmentation
Authors: Sana Ilyas, Kishalaya Mukherjee, Suresh Shetty
Abstract:
Lip augmentation has become more common in recent years. Patients do not expect to experience facial palsy after having lip augmentation. This poster will present the findings of such a presentation and will discuss the possible pathophysiology and management. (This poster has been published as a paper in Dental Update, June 2022.) Aim: The aim of the study was to explore the link between facial nerve palsy and lip fillers, to explore the literature surrounding facial nerve palsy, and to discuss the case of a patient who presented with facial nerve palsy with seemingly unknown cause. Methodology: There was a thorough assessment of the current literature surrounding the topic. This included published papers in journals through PubMed database searches and printed books on the topic. A case was discussed in detail of a patient presenting with peripheral facial nerve palsy, which she associated with lip augmentation performed a day prior. Results and Conclusion: Even though the pathophysiology may not be clear for this presentation, it is important to highlight uncommon presentations or complications that may occur after treatment. This can help with understanding and managing similar cases, should they arise. It is also important to differentiate cause and association in order to make an accurate diagnosis. This may be difficult if there is little scientific literature. Therefore, further research can help to improve the understanding of the pathophysiology of similar presentations. This poster has been published as a paper in Dental Update, June 2022, and therefore shares a similar conclusion. Keywords: facial palsy, lip augmentation, causation and correlation, dental cosmetics
Procedia PDF Downloads 148
2598 Handwriting Recognition of Gurmukhi Script: A Survey of Online and Offline Techniques
Authors: Ravneet Kaur
Abstract:
Character recognition is a very interesting area of pattern recognition. Over the past few decades, intensive research on character recognition for Roman, Chinese, Japanese, and Indian scripts has been reported. In this paper, a review of handwritten character recognition work on the Indian script Gurmukhi is highlighted. Most of the published papers are summarized, various methodologies are analysed, and their results are reported. Keywords: Gurmukhi character recognition, online, offline, HCR survey
Procedia PDF Downloads 424
2597 Difficulties in the Emotional Processing of Intimate Partner Violence Perpetrators
Authors: Javier Comes Fayos, Isabel RodríGuez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero MartíNez, Luis Moya Albiol
Abstract:
Given the great impact produced by gender-based violence, a comprehensive approach to it seems essential. Consequently, research has focused on risk factors for violent behaviour, linking various psychosocial variables, as well as cognitive and neuropsychological deficits, with the aggressors. However, studies on affective processing are scarce, so the present study investigates possible emotional alterations in men convicted of gender violence. The participants were 51 aggressors, who attended the CONTEXTO program with sentences of less than two years, and 47 men with no history of violence. The sample did not differ in age, socioeconomic level, education, or alcohol and other substance consumption. Anger, alexithymia, and facial recognition of other people's emotions were assessed through the State-Trait Anger Expression Inventory (STAXI-2), the Toronto Alexithymia Scale (TAS-20), and the Reading the Mind in the Eyes test (REM), respectively. Men convicted of gender-based violence showed higher scores on the anger trait and temperament dimensions, as well as on the anger expression index. They also scored higher on alexithymia and on the identification and emotional expression subscales. In addition, they showed greater difficulties in the facial recognition of emotions, as reflected by a lower score on the REM. These results seem to show difficulties in different affective areas in men convicted of gender violence. The deficits are reflected in greater difficulty in identifying and expressing emotions, in processing anger, and in recognizing the emotions of others. All these difficulties have been related to the use of violent behavior. Consequently, it is essential and necessary to include emotional regulation in intervention programs for men who have been convicted of gender-based violence. Keywords: alexithymia, anger, emotional processing, emotional recognition, empathy, intimate partner violence
Procedia PDF Downloads 199
2596 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text
Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert
Abstract:
This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program, which performs automatic text recognition in images. OCR is necessary because optical input devices can only deliver raster graphics as their output. Text recognition refers to the task of recognizing the letters shown in such images and assigning them numerical values in accordance with the usual text encodings (ASCII, Unicode). The particular contribution of this study, conducted using the example of ABBYY FineReader, was confirmed and shown in practice: the improvement of digital text recognition platforms developed for electronic publication. Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies
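ABBYY FineReader is proprietary, so as a stand-in illustration of the same OCR step (raster image in, encoded text out), the sketch below uses the open-source Tesseract engine via pytesseract; this is a swapped-in tool, not the one examined in the article.

```python
# Sketch of the generic OCR step using the open-source Tesseract engine
# (a stand-in for ABBYY FineReader; file name is an assumption).
from PIL import Image
import pytesseract

def recognize_text(image_path, lang="eng"):
    # Tesseract maps glyphs in the raster image to Unicode text.
    return pytesseract.image_to_string(Image.open(image_path), lang=lang)

# print(recognize_text("scanned_page.png"))
```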
Procedia PDF Downloads 168
2595 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks located manually on the face and skull. However, automated measurement over dense points on 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed, and densely calculated FSTT database is crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized by the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI, and birthplace. Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between the north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools function well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation. Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
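The per-vertex face-to-skull distance that MeshLab's Hausdorff filter computes can be approximated with a nearest-neighbour query between the two vertex clouds, as in the sketch below; the file names and the use of the trimesh and scipy libraries are assumptions, and the vertices are expected to share the CT's millimetre coordinate frame.

```python
# Approximate sketch of dense FSTT measurement: distance from every face
# vertex to its closest skull vertex (file names are assumptions).
import numpy as np
import trimesh
from scipy.spatial import cKDTree

face = trimesh.load("face_surface.ply")
skull = trimesh.load("skull_surface.ply")

# Nearest-neighbour distance from each face vertex to the skull surface vertices.
distances, _ = cKDTree(skull.vertices).query(face.vertices)

print("mean FSTT (mm):", distances.mean())
print("max  FSTT (mm):", distances.max())

# Depth-band histogram, analogous to subdividing the depth distribution.
hist, edges = np.histogram(distances, bins=np.arange(0, 35, 5))
print(dict(zip(edges[:-1], hist)))
```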
Procedia PDF Downloads 58
2594 Tick Induced Facial Nerve Paresis: A Narrative Review
Authors: Jemma Porrett
Abstract:
Background: We present a literature review examining the research surrounding tick paralysis resulting in facial nerve palsy. A case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy is also discussed. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we conducted a review of the literature, and we searched the MEDLINE and EMBASE databases for relevant literature published between 1915 and 2020. Utilising the following keywords; 'Ixodes', 'Facial paralysis', 'Tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: Eighteen articles included in the review comprised a total of 48 patients. Patients' ages ranged from one year to 84 years of age. Ten studies estimated the possible duration between a tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. A complete ipsilateral facial nerve palsy was present in 45 patients, notably, in 16 patients, this occurred following tick removal. House-Brackmann classification was utilised in 7 patients; four patients with grade 4, one patient with grade three, and two patients with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average time of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours, but no further follow-up was reported. One patient was lost to follow up, and one article failed to mention any resolution of facial nerve palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.Keywords: facial nerve palsy, tick bite, intra-aural, Australia
Procedia PDF Downloads 113
2593 An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template
Authors: Zhu-Qing Jia, Tao Lin, Tong Zhou
Abstract:
Optical Character Recognition has been extensively utilized, but it is rarely employed specifically for the recognition of electronic components. This paper suggests a highly effective algorithm for appearance identification of integrated circuit components based on existing character recognition methods and analyzes its pros and cons. Keywords: optical character recognition, fuzzy page identification, mutual correlation matrix, confidence self-adaptation
Procedia PDF Downloads 540
2592 Face Sketch Recognition in Forensic Application Using Scale Invariant Feature Transform and Multiscale Local Binary Patterns Fusion
Authors: Gargi Phadke, Mugdha Joshi, Shamal Salunkhe
Abstract:
Facial sketches are used as a crucial clue by criminal investigators for the identification of suspects when only the description given by an eyewitness or victim is available as evidence. A forensic artist develops a sketch from the verbal description given by an eyewitness, showing the facial appearance of the culprit. In this paper, the fusion of the Scale Invariant Feature Transform (SIFT) and multiscale local binary patterns (MLBP) is proposed as a feature for recognizing forensic face sketch images against a gallery of mugshot photos. This work focuses on a comparative analysis of the proposed scheme with existing algorithms under challenges such as illumination change and rotation. Experimental results show that the proposed scheme can lead to better performance on the defined problem. Keywords: SIFT feature, MLBP, PCA, face sketch
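One simple way to realize the SIFT-plus-MLBP fusion described above is to mean-pool the SIFT descriptors and concatenate multi-radius LBP histograms, as in the following sketch; this particular fusion choice and the radii are assumptions, not necessarily the authors' exact scheme.

```python
# Illustrative SIFT + multiscale LBP descriptor fusion for a grayscale face
# or sketch image (pooling and radii are assumed choices).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def fused_descriptor(gray_img, radii=(1, 2, 3), points=8):
    sift = cv2.SIFT_create()
    _, sift_desc = sift.detectAndCompute(gray_img, None)
    sift_part = sift_desc.mean(axis=0) if sift_desc is not None else np.zeros(128)

    lbp_parts = []
    for r in radii:                              # multiscale LBP
        lbp = local_binary_pattern(gray_img, points * r, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=points * r + 2, density=True)
        lbp_parts.append(hist)

    return np.concatenate([sift_part, *lbp_parts])
```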
Procedia PDF Downloads 336
2591 KBASE Technological Framework - Requirements
Authors: Ivan Stanev, Maria Koleva
Abstract:
Automated software development issues are addressed in this paper. Layers and packages of a Common Platform for Automated Programming (CPAP) are defined based on Service Oriented Architecture, cloud computing, knowledge based automated software engineering (KBASE), and a method of automated programming. Tools of seven leading companies (AWS of Amazon, Azure of Microsoft, App Engine of Google, vCloud of VMware, Bluemix of IBM, Helion of HP, OCPaaS of Oracle) are analyzed in the context of CPAP. Based on the results of the analysis, CPAP requirements are formulated. Keywords: automated programming, cloud computing, knowledge based software engineering, service oriented architecture
Procedia PDF Downloads 301
2590 The Effects of Affective Dimension of Face on Facial Attractiveness
Authors: Kyung-Ja Cho, Sun Jin Park
Abstract:
This study examined which affective dimensions affect facial attractiveness. Two orthogonal dimensions, sharp-soft and babyish-mature, were used to rate the levels of facial attractiveness of women in their 20s. This research also investigated sex differences in the effect of the affective dimensions of the face on attractiveness. The test subjects comprised 15 males and 18 females. They viewed 330 photos of women in their 20s and then rated the levels of the affective dimensions of the faces on sharp-soft and babyish-mature scales, and attractiveness on a charmless-charming scale. The response forms were Likert scales, with answers scored from 1 to 9. As a result of multiple regression analysis, subjects rated softer and younger-looking appearances as more attractive. Both male and female subjects showed the same evaluation. This result means that the two affective dimensions have an effect on estimating attractiveness. Keywords: affective dimension of faces, facial attractiveness, sharp-soft, babyish-mature
Procedia PDF Downloads 336
2589 Sports Fans and Non-Interested Public Recognition of the Problems of Sports in Egypt through Caricature
Authors: Alaaeldin Hamdy Ahmed Mohammed
Abstract:
Introduction: This study examines sports fans' and the non-interested public's perception and recognition, through caricatures, of the problems that have negative impacts on Egyptian sports, particularly football. Eight caricature paintings were designed to express eight problems affecting Egyptian sports and their development. These paintings were distributed to two groups: fans and the non-interested public. Methods: The study was limited to eight caricatures representing the eight issues, which are: the impact of halting sports activity on athletes, the effect of disagreement between clubs, fanaticism among members of the ultras of different clubs, the negative impact of the mingling of politics in sports, the negative role of the clubs in the professionalism of promising players, the conflict between the national organizations responsible for sports, fans breaking into the playing fields, and the impact of the lack of planning on the national team. Results: The results showed that both sports fans and those who are not interested in sports recognized the problems that the caricatures refer to and criticize through exaggeration, although the recognition rate was higher for the fans. These caricatures also contributed to their recognition of the danger of the negative impact of these problems on Egyptian sports, particularly football, which is the most popular among Egyptian sports fans. Discussion: This finding echoes the conclusion that caricatures of adults' facial stimuli, which are systematically exaggerated, are distinctive and aid recognition. Keywords: caricature, fans, football, sports
Procedia PDF Downloads 317
2588 To Study the New Invocation of Biometric Authentication Technique
Authors: Aparna Gulhane
Abstract:
Biometrics is the science and technology of measuring and analyzing biological data, and it forms the basis of research in biological measuring techniques for the purpose of people identification and recognition. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns, and hand measurements. Biometric systems are used to authenticate a person's identity; the idea is to use the special characteristics of a person to identify them. This paper presents biometric authentication techniques and their actual deployment potential through the overall invocation of biometric recognition, with independent testing of various biometric authentication products and technologies. Keywords: types of biometrics, importance of biometric, review for biometrics and getting a new implementation, biometric authentication technique
Procedia PDF Downloads 321
2587 Strabismus Detection Using Eye Alignment Stability
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. Currently, many children with strabismus remain undiagnosed until school entry because current automated screening methods have limited success in the preschool age range. A method for strabismus detection using eye alignment stability (EAS) is proposed. This method starts with face detection, followed by facial landmark detection, eye region segmentation, eye gaze extraction, and eye alignment stability estimation. Binarization and morphological operations are performed for segmenting the pupil region from the eye. After finding the EAS, its absolute value is used to differentiate the strabismic eye from the non-strabismic eye. If the value of the eye alignment stability is greater than a particular threshold, then the eyes are misaligned, and if its value is less than the threshold, the eyes are aligned. The method was tested on 175 strabismic and non-strabismic images obtained from Kaggle and Google Photos. The strabismic eye is taken as a positive class, and the non-strabismic eye is taken as a negative class. The test produced a true positive rate of 100% and a false positive rate of 7.69%.Keywords: strabismus, face detection, facial landmarks, eye segmentation, eye gaze, binarization
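The pupil-segmentation step described above (binarization plus morphological operations, with the pupil centre taken as the centroid of the largest dark blob) can be sketched as follows; the abstract does not give the exact formula for the eye-alignment-stability score, so the left/right offset comparison at the end is only an assumed stand-in.

```python
# Sketch of pupil segmentation from a cropped grayscale eye region
# (threshold and kernel size are assumed values).
import cv2
import numpy as np

def pupil_center(eye_gray, thresh=50):
    _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of largest blob

def alignment_offset(eye_gray):
    center = pupil_center(eye_gray)
    if center is None:
        return 0.0
    return center[0] / eye_gray.shape[1] - 0.5        # horizontal offset from eye centre

# assumed stand-in for comparing the two eyes' alignment
# eas = abs(alignment_offset(left_eye) - alignment_offset(right_eye))
```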
Procedia PDF Downloads 76
2586 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement
Authors: Brittany Richardson, Ying Wang
Abstract:
For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health assessments or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include necessary measurements, height, weight, blood pressure, heart rate, body fat, etc. and advanced evaluation, muscle group strength, stability-mobility, and movement evaluation, etc. In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers. And it is challenging to track clients’ progress in the current assessment. Unlike the tradition assessment, in this paper, we present a deep learning based face recognition algorithm for accurate, comprehensive and trackable assessment. Based on the result from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty levels of the current activity for the client or user, furthermore make more comprehensive assessments based on tracking muscle group over time using a designed landmark detection method. The system also includes the function of grading and correcting the form of the clients during exercise. Experienced coaches and personal trainer can tell the clients' limit based on their facial expression and muscle group movements, even during the first several sessions. Similar to this, using a convolution neural network, the system is trained with people’s facial expression to differentiate challenge levels for clients. It uses landmark detection for subtle changes in muscle groups movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region and distal mobility of the glenohumeral joint, as well as distal mobility, and its effect on the kinetic chain. This system integrates data from other fitness assistant devices, including but not limited to Apple Watch, Fitbit, etc. for a improved training and testing performance. The system itself doesn’t require history data for an individual client, but the history data of a client can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plan, execution, progress tracking, and performance.Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments
Procedia PDF Downloads 134
2585 Face Recognition Using Eigen Faces Algorithm
Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale
Abstract:
Face recognition is a technique that can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models for identifying faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for the development of human identity recognition. Test images and training images are taken directly with the camera of the Android device. The test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. Further, this algorithm can be extended to recognize the facial expressions of a person. Recognition can be carried out under widely varying conditions, such as a frontal view or a scaled frontal view of subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and it can give feedback by showing a window with the subject's information from the database and sending an e-mail notification to interested institutions using the Android application. Keywords: face detection, face recognition, eigen faces, algorithm
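A minimal eigenfaces sketch matching the description above: PCA over flattened training faces, then nearest-neighbour matching in the reduced subspace. The component count, whitening, and 1-NN matcher are assumptions rather than details from the paper.

```python
# Eigenfaces sketch: project flattened face images into a PCA subspace and
# match a test face to the nearest stored training face.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(train_imgs, labels, n_components=50):
    X = np.array([im.ravel() for im in train_imgs], dtype=float)
    pca = PCA(n_components=n_components, whiten=True).fit(X)
    return pca, pca.transform(X), np.asarray(labels)

def recognize(test_img, pca, train_proj, labels):
    proj = pca.transform(test_img.ravel()[None, :].astype(float))
    dists = np.linalg.norm(train_proj - proj, axis=1)
    return labels[dists.argmin()]                # identity of the closest match
```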
Procedia PDF Downloads 361