Search results for: facial recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1778

1718 Peripheral Facial Nerve Palsy after Lip Augmentation

Authors: Sana Ilyas, Kishalaya Mukherjee, Suresh Shetty

Abstract:

Lip augmentation has become more common in recent years. Patients do not expect to experience facial palsy after having lip augmentation. This poster presents the findings of such a presentation and discusses the possible pathophysiology and management. (This poster has been published as a paper in Dental Update, June 2022.) Aim: The aim of the study was to explore the link between facial nerve palsy and lip fillers, to explore the literature surrounding facial nerve palsy, and to discuss the case of a patient who presented with facial nerve palsy of seemingly unknown cause. Methodology: A thorough assessment of the current literature surrounding the topic was performed, including papers identified through PubMed database searches and printed books on the topic. A case presentation was discussed in detail of a patient presenting with peripheral facial nerve palsy associated with lip augmentation performed a day prior. Results and Conclusion: Even though the pathophysiology of this presentation may not be clear, it is important to highlight uncommon presentations or complications that may occur after treatment. This can help with understanding and managing similar cases, should they arise. It is also important to differentiate cause from association in order to make an accurate diagnosis, which may be difficult when there is little scientific literature. Further research can therefore help to improve the understanding of the pathophysiology of similar presentations. This poster has been published as a paper in Dental Update, June 2022, and therefore shares a similar conclusion.

Keywords: facial palsy, lip augmentation, causation and correlation, dental cosmetics

Procedia PDF Downloads 115
1717 Handwriting Recognition of Gurmukhi Script: A Survey of Online and Offline Techniques

Authors: Ravneet Kaur

Abstract:

Character recognition is a very interesting area of pattern recognition. Over the past few decades, intensive research on character recognition for Roman, Chinese, Japanese, and Indian scripts has been reported. In this paper, handwritten character recognition work on the Indian script Gurmukhi is reviewed: most of the published papers are summarized, the various methodologies are analysed, and their results are reported.

Keywords: Gurmukhi character recognition, online, offline, HCR survey

Procedia PDF Downloads 394
1716 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text

Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert

Abstract:

This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program, which performs automatic text recognition in images. OCR is necessary because optical input devices can only deliver raster graphics. Text recognition is the task of recognizing the letters shown in an image and assigning each of them the numerical value defined by a standard text encoding (ASCII, Unicode). Using ABBYY FineReader as an example, the authors confirm and demonstrate in practice the improvement achieved by digital text recognition platforms developed for electronic publication.
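
To make the OCR workflow described above concrete, here is a minimal sketch using the open-source Tesseract engine via pytesseract as a stand-in; this is not the ABBYY FineReader API, and the image path is a placeholder.

```python
# Minimal OCR sketch: a raster image is mapped to Unicode text.
# Placeholder stand-in for ABBYY FineReader; "scan.png" is assumed to exist.
from PIL import Image
import pytesseract

image = Image.open("scan.png")               # raster input from an optical device
text = pytesseract.image_to_string(image)    # glyphs -> Unicode characters
print(text)
```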

Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies

Procedia PDF Downloads 133
1715 Difficulties in the Emotional Processing of Intimate Partner Violence Perpetrators

Authors: Javier Comes Fayos, Isabel Rodríguez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero Martínez, Luis Moya Albiol

Abstract:

Given the great impact produced by gender-based violence, a comprehensive approach to it seems essential. Consequently, research has focused on risk factors for violent behaviour, linking various psychosocial variables, as well as cognitive and neuropsychological deficits, with the aggressors. However, studies on affective processing are scarce, so the present study investigates possible emotional alterations in men convicted of gender-based violence. The participants were 51 aggressors, who attended the CONTEXTO program with sentences of less than two years, and 47 men with no history of violence. The groups did not differ in age, socioeconomic level, education, or consumption of alcohol and other substances. Anger, alexithymia and facial recognition of other people's emotions were assessed through the State-Trait Anger Expression Inventory (STAXI-2), the Toronto Alexithymia Scale (TAS-20) and Reading the Mind in the Eyes (REM), respectively. Men convicted of gender-based violence showed higher scores on the anger trait and temperament dimensions, as well as on the anger expression index. They also scored higher on alexithymia and on the emotion identification and expression subscales. In addition, they showed greater difficulties in the facial recognition of emotions, obtaining a lower score on the REM. These results seem to show difficulties in different affective areas in men convicted of gender-based violence. The deficits are reflected in greater difficulty in identifying and expressing emotions, in processing anger and in recognizing the emotions of others, and all of these difficulties have been related to the use of violent behavior. Consequently, it is essential to include emotional regulation in intervention programs for men who have been convicted of gender-based violence.

Keywords: alexithymia, anger, emotional processing, emotional recognition, empathy, intimate partner violence

Procedia PDF Downloads 163
1714 Tick Induced Facial Nerve Paresis: A Narrative Review

Authors: Jemma Porrett

Abstract:

Background: We present a literature review examining the research surrounding tick paralysis resulting in facial nerve palsy. A case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy is also discussed. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we conducted a review of the literature, searching the MEDLINE and EMBASE databases for relevant literature published between 1915 and 2020. Utilising the following keywords: 'Ixodes', 'facial paralysis', 'tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: The eighteen articles included in the review comprised a total of 48 patients, whose ages ranged from one year to 84 years. Ten studies estimated the possible duration between a tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. A complete ipsilateral facial nerve palsy was present in 45 patients; notably, in 16 patients this occurred following tick removal. The House-Brackmann classification was utilised in 7 patients: four patients with grade 4, one patient with grade 3, and two patients with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average time of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours, but no further follow-up was reported. One patient was lost to follow-up, and one article failed to mention any resolution of facial nerve palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.

Keywords: facial nerve palsy, tick bite, intra-aural, Australia

Procedia PDF Downloads 73
1713 An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template

Authors: Zhu-Qing Jia, Tao Lin, Tong Zhou

Abstract:

Optical Character Recognition methods have been extensively utilized, yet they are rarely employed specifically for the recognition of electronic components. This paper proposes a highly effective algorithm for appearance identification of integrated circuit components based on existing character recognition methods and analyzes its pros and cons.
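
As a rough illustration of the template-based recognition idea the abstract builds on, the sketch below scores one grayscale template per candidate marking with OpenCV and keeps the best match; the multifont self-adaptation of the paper is not reproduced, and the file names in the usage comment are placeholders.

```python
# Template-matching sketch for component marking recognition (OpenCV).
import cv2

def best_template_match(image_gray, templates):
    """templates: dict mapping a label to a grayscale template image."""
    scores = {}
    for label, templ in templates.items():
        result = cv2.matchTemplate(image_gray, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)   # best correlation score
        scores[label] = max_val
    return max(scores, key=scores.get), scores

# usage (file names are placeholders):
# img = cv2.imread("ic_marking.png", cv2.IMREAD_GRAYSCALE)
# templates = {"A": cv2.imread("A.png", 0), "B": cv2.imread("B.png", 0)}
# label, scores = best_template_match(img, templates)
```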

Keywords: optical character recognition, fuzzy page identification, mutual correlation matrix, confidence self-adaptation

Procedia PDF Downloads 507
1712 Face Sketch Recognition in Forensic Application Using Scale Invariant Feature Transform and Multiscale Local Binary Patterns Fusion

Authors: Gargi Phadke, Mugdha Joshi, Shamal Salunkhe

Abstract:

Facial sketches are used as a crucial clue by criminal investigators for the identification of suspects when the description given by an eyewitness or victim is the only available evidence. A forensic artist develops a sketch according to the verbal description given by an eyewitness, depicting the facial appearance of the culprit. In this paper, the fusion of Scale Invariant Feature Transform (SIFT) and multiscale local binary pattern (MLBP) features is proposed to recognize forensic face sketches against a gallery of mugshot photos. This work focuses on a comparative analysis of the proposed scheme with existing algorithms under different challenges such as illumination change and rotation. Experimental results show that the proposed scheme leads to better performance on the defined problem.
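
A minimal sketch of extracting the two feature types named in the abstract is shown below: SIFT descriptors from OpenCV and a multiscale local binary pattern histogram from scikit-image. The pooling and concatenation here are only illustrative assumptions; the paper's actual fusion and matching strategy is not reproduced.

```python
# SIFT + multiscale LBP feature sketch for a grayscale face/sketch image.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def sketch_features(gray):
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)   # N x 128 SIFT descriptors
    sift_vec = descriptors.mean(axis=0) if descriptors is not None else np.zeros(128)

    # "multiscale" LBP approximated by pooling histograms over several radii
    lbp_hists = []
    for radius in (1, 2, 3):
        lbp = local_binary_pattern(gray, P=8 * radius, R=radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=8 * radius + 2, density=True)
        lbp_hists.append(hist)
    return np.concatenate([sift_vec] + lbp_hists)
```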

Keywords: SIFT feature, MLBP, PCA, face sketch

Procedia PDF Downloads 299
1711 Sports Fans and Non-Interested Public Recognition of the Problems of Sports in Egypt through Caricature

Authors: Alaaeldin Hamdy Ahmed Mohammed

Abstract:

Introduction: This study examines sports fans' and the non-interested public's perception and recognition, through caricatures, of the problems that have negative impacts upon Egyptian sports, particularly football. Eight caricature paintings were designed to express eight problems affecting Egyptian sports and its development, and these paintings were distributed to two groups: fans and the non-interested public. Methods: The study was limited to eight caricatures representing the following eight issues: the impact of stopping sports activity on athletes, the effect of disagreement between clubs, fanaticism between members of the ultras of different clubs, the negative impact of the mingling of politics with sports, the negative role of the clubs in the professionalism of promising players, the conflict between the national organizations responsible for sports, the breaking in of fans onto the playgrounds, and the impact of the lack of planning on the national team. Results: The results showed that both sports fans and those not interested in sports recognized the problems the caricatures refer to and the criticism conveyed by their exaggeration, although the rate was higher for the fans. The caricatures also contributed to their recognition of the danger of the negative impact of these problems on Egyptian sports, particularly football, which is the most popular sport among Egyptian fans. Discussion: This finding echoes the conclusion that caricatures, as systematically exaggerated facial stimuli, are distinctive for adults and readily recognized by them.

Keywords: caricature, fans, football, sports

Procedia PDF Downloads 285
1710 The Effects of Affective Dimension of Face on Facial Attractiveness

Authors: Kyung-Ja Cho, Sun Jin Park

Abstract:

This study examined which affective dimensions affect facial attractiveness. Two orthogonal dimensions, sharp-soft and babyish-mature, were used to rate the levels of facial attractiveness of women in their 20s. The research also investigated sex differences in the effect of the affective dimensions of faces on attractiveness. The test subjects comprised 15 males and 18 females. They looked at 330 photos of women in their 20s, rated the levels of the affective dimensions of the faces on sharp-soft and babyish-mature scales, and rated attractiveness on a charmless-charming scale. Responses were given on Likert scales scored from 1 to 9. As a result of multiple regression analysis, subjects rated softer and more babyish appearances as more attractive, and both male and female subjects showed the same pattern of evaluation. This result means that the two affective dimensions have an effect on the estimation of attractiveness.

Keywords: affective dimension of faces, facial attractiveness, sharp-soft, babyish-mature

Procedia PDF Downloads 301
1709 To Study the New Invocation of Biometric Authentication Technique

Authors: Aparna Gulhane

Abstract:

Biometrics is the science and technology of measuring and analyzing biological data, and it forms the basis of research in biological measuring techniques for the purpose of people identification and recognition. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements. Biometric systems are used to authenticate a person's identity; the idea is to use the special characteristics of a person to identify him or her. This paper presents biometric authentication techniques and their potential for actual deployment through an overall invocation of biometric recognition, together with independent testing of various biometric authentication products and technologies.

Keywords: types of biometrics, importance of biometric, review for biometrics and getting a new implementation, biometric authentication technique

Procedia PDF Downloads 285
1708 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique which can be applied to a wide variety of problems such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models to identify faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the base for the development of human identity recognition. Test images and training images are taken directly with the camera of the Android device, and the test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it among a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. The algorithm can further be extended to recognize a person's facial expressions, and recognition can be carried out under widely varying conditions such as frontal view and scaled frontal view, including subjects with spectacles. The algorithm also models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions using the Android application.
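
A minimal eigenfaces sketch is given below: PCA (via SVD) over flattened training faces, then nearest-neighbour matching in the eigenface subspace. The random arrays stand in for aligned, equal-sized grayscale face images; this is not the authors' Android implementation.

```python
# Eigenfaces sketch with placeholder data.
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((40, 64 * 64))          # 40 flattened 64x64 "training faces"
labels = np.repeat(np.arange(10), 4)       # 10 identities, 4 images each

mean_face = train.mean(axis=0)
centered = train - mean_face
# eigenfaces = principal components of the centered training set
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                       # keep the top 20 components

train_weights = centered @ eigenfaces.T    # each face projected into the subspace

def recognize(face):
    w = eigenfaces @ (face - mean_face)
    distances = np.linalg.norm(train_weights - w, axis=1)
    return labels[np.argmin(distances)]    # identity of the nearest training face

print(recognize(train[7]))                 # recovers identity 1 for this sample
```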

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 332
1707 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include necessary measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluation (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep learning based face recognition algorithm for accurate, comprehensive and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and furthermore makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell a client's limit based on facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained with people's facial expressions to differentiate challenge levels for clients, and it uses landmark detection for subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself doesn't require history data for an individual client, but a client's history data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plans, execution, progress tracking, and performance.
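
The kind of facial landmark detection such a system could build on is sketched below with dlib's 68-point predictor; the model file path is a placeholder (it must be downloaded separately), and this is not the authors' implementation.

```python
# Facial landmark detection sketch using dlib's frontal face detector and
# 68-point shape predictor.
import cv2
import dlib

# Placeholder path: the predictor model file must be downloaded separately.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

def facial_landmarks(frame_bgr, predictor_path=PREDICTOR_PATH):
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = []
    for rect in detector(gray):
        shape = predictor(gray, rect)
        faces.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return faces   # one 68-point landmark list per detected face
```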

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 98
1706 In vivo Mechanical Characterization of Facial Skin Combining Digital Image Correlation and Finite Element

Authors: Huixin Wei, Shibin Wang, Linan Li, Lei Zhou, Xinhao Tu

Abstract:

Facial skin is a biomedical material with complex mechanical properties of anisotropy, viscoelasticity, and hyperelasticity. The mechanical properties of facial skin are crucial for a number of applications including facial plastic surgery, animation, dermatology, the cosmetic industry, and impact biomechanics. Skin is a complex multi-layered material which can be broadly divided into three main layers: the epidermis, the dermis, and the hypodermis. Collagen fibers account for 75% of the dry weight of dermal tissue, and it is these fibers which are responsible for the mechanical properties of skin. Much research on the anisotropic mechanical properties has concentrated on in vitro testing, but the mechanical properties of skin differ greatly between in vivo and in vitro conditions. In this study, we present a method to measure the mechanical properties of facial skin in vivo. Digital image correlation (DIC) and indentation tests were used to obtain the experimental data, including the deformation of the facial surface and the indentation force-displacement curve. The experiment was then simulated using a finite element (FE) model. Computed Tomography (CT) and reconstruction techniques were applied to obtain the real tissue geometry, yielding a three-dimensional FE model of facial skin as a bi-layer system. As the epidermis is relatively thin, the epidermis and dermis were regarded as one layer, with the hypodermis below it. The upper layer was described by a Gasser-Ogden-Holzapfel (GOH) model to capture the hyperelastic and anisotropic behaviour of the dermis, and the lower layer was modeled as linear elastic. In conclusion, the material properties of the two layers were determined by minimizing the error between the FE data and the experimental data.
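
The final step, fitting layer parameters by minimizing the mismatch between simulated and measured responses, can be illustrated with the inverse-identification loop below. The quadratic force-displacement stand-in and the synthetic "measured" curve are assumptions replacing the actual GOH/linear-elastic FE simulation.

```python
# Inverse parameter identification sketch with scipy.optimize.
import numpy as np
from scipy.optimize import minimize

depths = np.linspace(0.0, 2.0, 50)                    # indentation depth (mm)

def simulate_force(params, depth):
    """Placeholder for the FE model: two stiffness-like parameters k1, k2."""
    k1, k2 = params
    return k1 * depth + k2 * depth ** 2

measured = simulate_force((0.8, 0.3), depths)         # synthetic "experiment"

def objective(params):
    return np.sum((simulate_force(params, depths) - measured) ** 2)

result = minimize(objective, x0=[0.1, 0.1], method="Nelder-Mead")
print(result.x)                                        # converges near [0.8, 0.3]
```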

Keywords: facial skin, indentation test, finite element, digital image correlation, computed tomography

Procedia PDF Downloads 87
1705 When and Why Unhappy People Avoid Enjoyable Experiences

Authors: Hao Shen, Aparna Labroo

Abstract:

Across four studies, we show people in a negative mood avoid anticipated enjoyable experiences because of the subjective difficulty in simulating those experiences, and they misattribute these feelings of difficulty to reduced pleasantness of the anticipated experience. We observe the avoidance of enjoyable experiences only for anticipated experiences that involve smile-like facial-muscular simulation. When the need for facial-muscular simulation is attenuated, or when the anticipated experience relies on facial-muscular simulation to a lesser extent, people in a negative mood no longer avoid enjoyable experiences, but rather seek such experiences because they fit better with their ongoing mood-repair goals.

Keywords: emotion regulation, mood repair, embodiment, anticipated experiences

Procedia PDF Downloads 390
1704 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expression is an undeniable part of human behaviour. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expressions that are readily observable to others, and a core set of facial actions is likely to occur or to increase in intensity when people are in pain. To describe such changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain; thus, the Prkachin and Solomon Pain Intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, being a behavioural source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, namely facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score using the PSPI metric for pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain. The necessity of this study therefore lies in the emerging field of painful face assessment in clinical settings.
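
The PSPI score mentioned above is a simple combination of FACS action unit (AU) intensities, PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, which the small helper below computes; the AU intensity values in the example call are illustrative only.

```python
# Prkachin-Solomon Pain Intensity (PSPI) from FACS action unit intensities.
def pspi(au):
    """au: dict of AU intensities (0-5 scale; AU43, eyes closed, is 0/1)."""
    return (au.get(4, 0)                       # brow lowering
            + max(au.get(6, 0), au.get(7, 0))  # orbital tightening
            + max(au.get(9, 0), au.get(10, 0)) # levator contraction
            + au.get(43, 0))                   # eye closure

print(pspi({4: 3, 6: 2, 7: 1, 9: 0, 10: 2, 43: 1}))   # -> 8
```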

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 302
1703 Forensic Comparison of Facial Images for Human Identification

Authors: D. P. Gangwar

Abstract:

Identification of humans through facial images has great importance in forensic science. Video recordings, CCTV footage, passports, driver licenses and other related documents are invariably sent to the laboratory for comparison of the questioned photographs or video recordings with suspect photographs/recordings to prove the identity of a person. More than 300 questioned and 300 control photographs from actual crime cases, received from various investigation agencies, have been compared by the author so far using familiar analysis and comparison techniques such as holistic comparison, morphological analysis, photo-anthropometry and superimposition. On the basis of the findings obtained during the examination of these photo exhibits, a realistic and comprehensive technique has been proposed which could be very useful for forensic practice.

Keywords: CCTV Images, facial features, photo-anthropometry, superimposition

Procedia PDF Downloads 499
1702 Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography

Authors: P. S. Jagadeesh Kumar, Yang Yung, Wenli Hu

Abstract:

Speech emotion classification is a dominant research field seeking a robust and efficient classifier appropriate for different real-life applications. This work focuses on classifying different emotions from speech signals using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual and temporal characteristics. Tensor deep stacking neural networks were used to examine the factors that influence the classification success rate. Facial electromyography signals were collected in several sessions in a controlled environment by means of audio-visual stimuli. The facial electromyography signals were pre-processed using a moving average filter, and a set of arithmetical features was extracted. The extracted features were mapped onto consistent emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing the system complexity or the computation time for classifying diverse emotional states.
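
The pre-processing and feature step described above can be sketched as follows: a moving average filter applied to a facial EMG channel, followed by a few simple arithmetical features. The signal here is synthetic, and the tensor deep stacking network and bilinear mapping stages are not reproduced.

```python
# Moving-average filtering and simple feature extraction for one EMG channel.
import numpy as np

def moving_average(x, window=25):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def emg_features(x):
    return {
        "mean_abs_value": np.mean(np.abs(x)),
        "rms": np.sqrt(np.mean(x ** 2)),
        "variance": np.var(x),
        "zero_crossings": int(np.sum(np.abs(np.diff(np.sign(x))) > 0)),
    }

rng = np.random.default_rng(1)
raw = rng.normal(0, 1, 2000)                 # placeholder EMG channel
print(emg_features(moving_average(raw)))
```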

Keywords: speech emotion classification, tensor deep stacking neural networks, facial electromyography, bilinear mapping, audio-visual stimuli

Procedia PDF Downloads 216
1701 Oro-Facial Manifestations of Acute Myeloid Leukaemia - A Case Report

Authors: Aamna Tufail, Kajal Kotecha, Iordanis Toursounidis, Ravinder Pabla

Abstract:

Introduction/Aims: Acute Myeloid Leukaemia (AML) is part of the leukaemic group of hematopoietic disorders, with a varying range of presentations including oro-facial manifestations. Early recognition and management are essential for favourable outcomes. Materials and Methods: We present our experience, the clinical presentation, and clinical photographs of a patient with previously undiagnosed AML who presented with oral symptoms to the emergency department of our hospital. An analysis of clinical characteristics, diagnostic investigations, and management modalities was performed. Results/Statistics: A 58-year-old man presented to A&E reporting an 11-day history of right-sided facial swelling, acute TMJ symptoms, and oral discomfort. A dentist had ruled out acute dental causes one day after the onset of symptoms. The initial assessment was anatomically inconsistent and did not reveal a routine oral or maxillofacial etiology. Detailed clinical examination demonstrated fever, generalised pallor, swelling and erythema of the right nasolabial region, bilateral masseteric tenderness, intraoral palatal ecchymosis, palatal ulceration, buccal and labial petechiae, cervical lymphadenopathy, and a haematoma on the dorsum of the right hand overlying the right 2nd metacarpal joint. Suspecting a systemic medical cause, we requested haematological investigations, which revealed neutropenia, thrombocytopenia, and anaemia. Flow cytometry confirmed CD34+ AML. Oral discomfort was managed symptomatically. The patient was referred to a tertiary care centre for acute haematologic care, where he was treated with IV antibiotics and continuing cycles of chemotherapy. Conclusions/Clinical Relevance: Oro-facial manifestations may be the first clinical sign of AML, and awareness of its features is vital for early diagnosis. In this context, dentists and oral medicine specialists can play an important role in detecting clinical signs of haematological disorders such as AML.

Keywords: acute myeloid leukaemia, oral symptoms, ulceration, diagnosis, management

Procedia PDF Downloads 34
1700 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. In most research work, emotional speech collected under controlled conditions has been used: the recordings were made by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach, namely: (1) the emotions are not natural, which means that machines learn to recognize fake emotions; (2) the emotional material is very limited in quantity and poor in speaking variety; (3) SER is language-dependent; (4) consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first steps in this sequence is speech detection. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
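
A minimal frame-level speech/non-speech classifier in the spirit of the described model is sketched below: MFCC features (librosa) fed to a small fully connected network (scikit-learn). The synthetic tone and noise signals are placeholders; this is not the authors' Kazakh/Russian model.

```python
# MFCC features + small fully connected classifier for speech detection.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def frame_mfcc(y, sr=16000, n_mfcc=13):
    """Return one row of MFCC coefficients per analysis frame."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Placeholder signals: a modulated tone stands in for speech, noise for silence.
sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
speech_like = 0.5 * np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
noise = 0.1 * np.random.default_rng(0).standard_normal(2 * sr)

speech_frames = frame_mfcc(speech_like, sr)
noise_frames = frame_mfcc(noise, sr)
X = np.vstack([speech_frames, noise_frames])
y = np.concatenate([np.ones(len(speech_frames)), np.zeros(len(noise_frames))])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))          # training accuracy of the frame classifier
```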

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 68
1699 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotic research is to improve the robots’ abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions—including facial expression, speech, gesture or text—and using this information for improved human computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. To leverage these emotional capabilities by embedding them in humanoid robots is the foundation of the concept Affective Robots, which has the objective of making robots capable of sensing the user’s current mood and personality traits and adapt their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches with both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments’ results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.

Keywords: affective computing, emotion recognition, humanoid robot, human-robot-interaction (HRI), social robots

Procedia PDF Downloads 203
1698 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In this way we leverage long-range dependencies, which addresses one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too quickly, which drives the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin; this acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels: correct feature vectors are shortened and false-prediction tensors are enlarged, meaning more weight is assigned to classes that are easily confused with each other (namely, "hard labels to learn"). By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems. It works by an alternative gradient updating procedure with an exponentially weighted moving average for faster convergence and exploits weight decay to drastically reduce the learning rate near the optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on CK+, providing top performance compared to other networks, which require much larger training datasets.
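
The Dynamic Soft-Margin SoftMax itself is not fully specified in the abstract; the sketch below only shows the general additive-margin idea it builds on, in which the target class logit is penalized by a margin before cross-entropy so that the model must produce more discriminative embeddings. The fixed margin value here is an assumption, not the paper's dynamic scheme.

```python
# Additive-margin cross-entropy sketch (PyTorch).
import torch
import torch.nn.functional as F

def margin_cross_entropy(logits, targets, margin=0.35):
    # Subtract the margin from the correct-class logit so it is harder to win.
    penalty = margin * F.one_hot(targets, num_classes=logits.size(1)).float()
    return F.cross_entropy(logits - penalty, targets)

logits = torch.randn(8, 7, requires_grad=True)   # batch of 8, 7 emotion classes
targets = torch.randint(0, 7, (8,))
loss = margin_cross_entropy(logits, targets)
loss.backward()
```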

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 49
1697 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions

Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan

Abstract:

Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expression. In the current study, in each trial participants were presented with sequences of two images containing facial expressions in different combinations sampled from the eight static basic expressions (64 in total; 8x8). In each trial, participants were required to judge, using a single word, the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole. These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions; rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embedding in social perception studies can be a powerful, accurate and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
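
The weighted-average idea can be sketched as follows: the meaning of a two-expression sequence is modelled as a weighted average of the word vectors of the two single-expression labels, and the nearest emotion word is looked up by cosine similarity. The GloVe file path, candidate vocabulary and weights in the usage comment are placeholders, not the study's fitted values.

```python
# Weighted averaging of GloVe word vectors and nearest-word lookup.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def sequence_meaning(glove, first, second, w_first=0.5):
    return w_first * glove[first] + (1 - w_first) * glove[second]

def nearest_word(glove, vec, candidates):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(candidates, key=lambda w: cos(glove[w], vec))

# usage (assumes a downloaded GloVe text file):
# glove = load_glove("glove.6B.300d.txt")
# blend = sequence_meaning(glove, "anger", "sadness", w_first=0.6)
# print(nearest_word(glove, blend, ["disappointment", "fear", "joy", "disgust"]))
```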

Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec

Procedia PDF Downloads 153
1696 Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control

Authors: Van Nhan Nguyen, Harald Holone

Abstract:

Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators with the aim of safety improvements, air traffic controller workload measurement, and conducting analysis on large quantities of controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested in bringing automatic speech recognition into ATC, this paper gives an overview of the possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed.

Keywords: automatic speech recognition, ASR, air traffic control, ATC

Procedia PDF Downloads 362
1695 Influence of Dental Midline Deviation with Respect to Facial Flow Line on Smile Esthetics – A Cross-sectional Study

Authors: Kanza Tahir, Mubassar Fida, Rashna Hoshang Sukhia

Abstract:

Background/Objective: A contemporary concept states that dental midline deviation towards the direction of the facial flow line (FFL) can mask compromised smile esthetics. This study aimed to identify the range of midline deviations towards or away from the FFL that can be perceived as influencing smile esthetics. Materials and methods: A cross-sectional study was conducted using a frontal smile photograph of an adult female. The photograph was altered in Adobe Photoshop into six different photographs by deviating the dental midline towards and away from the FFL, with a constant deviation of the chin towards the left side incorporated in all photographs. Forty-three laypersons (LP) and dental professionals (DPs) evaluated the photographs on a Visual Analog Scale (VAS). An independent t-test was used to compare the perception of dental midline deviation between LP and DPs, and simple linear regression was run to identify the factors associated with the VAS scores. Results: A statistically significant difference in the perception of midline deviation between LP and DPs was observed for picture two, with 4 mm of deviation towards the FFL. LP could not perceive midline deviations up to 4 mm, while DPs were able to perceive deviations above 2 mm. Age was positively associated with the VAS score, while female gender had a negative association. Limitations: Only one component of mini-esthetics was studied, the study did not include an ideal picture for comparison, and only one female subject of normal facial type was studied. Conclusions: 2-4 mm of midline deviation towards the facial flow line can be tolerated by laypersons and dental professionals.

Keywords: midline, facial flow line, smile esthetics, female

Procedia PDF Downloads 67
1694 A Contribution to Human Activities Recognition Using Expert System Techniques

Authors: Malika Yaici, Soraya Aloui, Sara Semchaoui

Abstract:

This paper deals with human activity recognition from sensor data. It is an active research area, and the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; the recognition is performed using the objects, object states, and gestures and taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions and the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision.
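
A minimal sketch of the rule-based idea described above follows: elementary actions are recognised from (object, object state, gesture, location) facts, and a complex activity is recognised once its constituent simple activities have all been observed. The rules, facts and activity names are invented examples, not the authors' knowledge base, and the duration context is omitted for brevity.

```python
# Tiny rule-based activity recogniser: simple activities from facts,
# complex activities from sets of simple activities.
SIMPLE_RULES = {
    ("kettle", "on", "grasp", "kitchen"): "boil_water",
    ("cup", "filled", "pour", "kitchen"): "pour_tea",
    ("cup", "in_hand", "raise", "kitchen"): "drink",
}

COMPLEX_RULES = {
    "prepare_tea": {"boil_water", "pour_tea"},
}

def recognise(observed_facts):
    simple = {SIMPLE_RULES[f] for f in observed_facts if f in SIMPLE_RULES}
    complex_acts = {name for name, parts in COMPLEX_RULES.items()
                    if parts <= simple}          # all constituents observed
    return simple, complex_acts

facts = [("kettle", "on", "grasp", "kitchen"),
         ("cup", "filled", "pour", "kitchen")]
print(recognise(facts))   # -> ({'boil_water', 'pour_tea'}, {'prepare_tea'})
```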

Keywords: human activity recognition, ubiquitous computing, context-awareness, expert system

Procedia PDF Downloads 61
1693 Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods

Authors: Ainagul Yermekova, Liudmila Goncharenko, Ali Baghirzade, Sergey Sybachin

Abstract:

In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. In connection with these events, the problem of recognizing the characters of the new alphabet arises. Well-known character recognition programs such as ABBYY FineReader, FormReader, and MyScript Stylus do not recognize the specific Kazakh letters that were used in Cyrillic. The authors give an assessment of the well-known character recognition methods that could be in demand as part of the country's transition to the Latin alphabet. Three methods of character recognition (template, structured, and feature-based) are considered through their operating algorithms. At the end of the article, a general conclusion is made about the applicability of each method to a particular recognition task: for example, in a population census, in recognition of typographic text in Latin, or in recognition of photos of car number plates, store signs, etc.

Keywords: text detection, template method, recognition algorithm, structured method, feature method

Procedia PDF Downloads 154
1692 Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement

Authors: Gheida J. Shahrour, Martin J. Russell

Abstract:

The 3D body movement signals captured during human-human conversation include clues not only to the content of people's communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects and arranged them into groups according to their culture. We arranged each group into pairs, and each pair communicated about different topics. A state-of-the-art recognition system was applied to the problems of person, culture, and topic recognition, borrowing modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy for person, culture, and topic recognition respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition respectively. Although direct comparison among these three recognition systems is difficult, our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., subjects' personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy for culture and topic recognition respectively.
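
The GMM backbone of such a system can be sketched as below: one Gaussian mixture is fitted per class over movement feature frames, and a test sequence is assigned to the class whose mixture gives the highest average log-likelihood. The random features are placeholders, and the SVM and NAP/ISVC compensation stages are not reproduced.

```python
# Per-class GMM classification sketch with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {  # class label -> (frames, feature_dim) movement features (placeholders)
    "culture_A": rng.normal(0.0, 1.0, (500, 12)),
    "culture_B": rng.normal(0.5, 1.2, (500, 12)),
}

models = {label: GaussianMixture(n_components=8, covariance_type="diag",
                                 random_state=0).fit(X)
          for label, X in train.items()}

def classify(sequence):
    scores = {label: gmm.score(sequence) for label, gmm in models.items()}
    return max(scores, key=scores.get)     # highest average log-likelihood

test = rng.normal(0.5, 1.2, (80, 12))      # placeholder test sequence
print(classify(test))                       # expected to favour "culture_B"
```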

Keywords: person recognition, topic recognition, culture recognition, 3D body movement signals, variability compensation

Procedia PDF Downloads 509
1691 Botulinum Toxin A in the Treatment of Late Facial Nerve Palsy Complications

Authors: Akulov M. A., Orlova O. R., Zaharov V. O., Tomskij A. A.

Abstract:

Introduction: One of the common postoperative complications of posterior cranial fossa (PCF) and cerebello-pontine angle tumor treatment is facial nerve palsy, which leads to multiple, treatment-resistant impairments of mimic muscle structure and function. Within 4-6 months of facial nerve palsy onset, patients who receive insufficient therapeutic intervention develop a postparalytic syndrome, which includes symptoms such as mimic muscle insufficiency, mimic muscle contractures, synkinesis and spontaneous muscular twitching. A novel method of treatment is the use of a recent local neuromuscular blocking agent, botulinum toxin A (BTA). Experience with BTA treatment suggests that it can be successfully used for late facial nerve palsy complications to significantly increase patients' quality of life. Study aim: To evaluate the efficacy of botulinum toxin A (BTA, Xeomin) treatment in patients with late facial nerve palsy complications. Patients and Methods: 31 patients aged 27-59 years were evaluated 6 months after the development of facial nerve palsy. All patients received conventional treatment, including massage, movement therapy, etc. Facial nerve palsy developed after acoustic nerve tumor resection in 23 (74.2%) patients and after petroclival meningioma resection in 8 (25.8%) patients. The first group included 17 (54.8%) patients receiving BT therapy; the second group comprised 14 (45.2%) patients continuing conventional treatment. BT injections were performed at synkinesis or contracture points, 1-2 U on the injured side and 2-4 U on the healthy side (for symmetry). Facial nerve function was evaluated at 2 and 4 months of therapy according to the House-Brackmann scale, and pain syndrome alleviation was assessed on a VAS. Results: At baseline, all patients in the first and second groups demonstrated a postparalytic syndrome. We observed a significant improvement in patients receiving BTA after only one month of treatment. The mean VAS score at baseline was 80.4±18.7 and 77.9±18.2 in the first and second group, respectively. In the first group, after one month of treatment we observed a significant decrease in pain: the mean VAS score was 44.7±10.2 (p<0.01), whereas in the second group the VAS score remained as high as 61.8±9.4 points (p>0.05). By the 3rd month of treatment, pain intensity continued to decrease in both groups, but the first group demonstrated significantly better results; the mean score was 8.2±3.1 and 31.8±4.6 in the first and second group, respectively (p<0.01). The total House-Brackmann score at baseline was 3.67±0.16 in the first group and 3.74±0.19 in the second group. Treatment resulted in a significant symptom improvement in the first group, with no improvement in the second group; after 4 months of treatment, the House-Brackmann score in the first group was 3.1-fold lower than in the second group (p<0.05). Conclusion: Botulinum toxin injections decrease postparalytic syndrome symptoms in patients with facial nerve palsy.

Keywords: botulinum toxin, facial nerve palsy, postparalytic syndrome, synkinesis

Procedia PDF Downloads 263
1690 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired with obstacle detection using ultrasonic transducers to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules; apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to a server, which then detects the objects using a deep convolutional neural network. In addition to determining what the particular image/object is, the distance to the object is assessed by the ultrasonic transducer. A sound generation application, modelled with the help of Natural Language Processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. Object detection is extended to facial recognition, which maps the faces of people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned on the cane stops the alarm; otherwise, an automatic intimation with the user's whereabouts is sent to friends and family using GPS. In addition to the safety and security offered by existing smart canes, the proposed concept is devised to be implemented as a prototype that helps the visually impaired visualize their surroundings through audio in a more amicable way.

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 317
1689 Benign Osteoblastoma of the Mandible Resection and Replacement of the Defects with Decellularized Cattle Bone Scaffold with Mesenchymal Bone Marrow Stem Cells

Authors: K. Mardaleishvili, G. Loladze, G. Shatirishivili, D. Chakhunashvili, A. Vishnevskaya, Z. Kakabadze

Abstract:

Benign osteoblastoma is a benign tumor of the bone, usually affecting the vertebrae and long tubular bones; it is rarely seen in the facial bones. The authors present the case of a 28-year-old male patient with a tumor in the mandibular body. The lesion was radically resected, and histological analysis of the specimen demonstrated features typical of a benign osteoblastoma. The defect of the jaw was reconstructed with titanium implants and a decellularized, lyophilized cattle bone matrix combined with transplantation of mesenchymal bone marrow stem cells. This presentation describes the procedures for rehabilitating a patient with a decellularized bone scaffold in the facial region, restoring the patient's facial contours and esthetics.

Keywords: facial bones, osteoblastoma, stem cells, transplantation

Procedia PDF Downloads 396