Search results for: facial melanosis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 278

248 Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences

Authors: T. Hari Prasath, P. Ithaya Rani

Abstract:

In the world of visual technology, recognizing emotions from face images is a challenging task. Several related methods have not utilized dynamic facial features effectively enough to achieve high performance. This paper proposes a high-performance method for emotion recognition using dynamic facial features. Initially, local features are captured by Gabor filters at different scales and orientations in each frame to find the position and scale of the face parts against different backgrounds. The Gabor features are sent to an ensemble classifier for detecting Gabor facial features. Regions of dynamic features are captured from the Gabor facial features in consecutive frames, representing the dynamic variations of facial appearance. Each region of dynamic features is normalized using the Z-score normalization method and further encoded into binary pattern features with the help of threshold values. The binary features are passed to a multi-class AdaBoost classifier, trained on a database containing happiness, sadness, surprise, fear, anger, disgust, and neutral expressions, to classify the discriminative dynamic features for emotion recognition. The developed method is evaluated on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and shows significant performance improvement over existing methods owing to its dynamic features.
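
A minimal sketch of the Gabor filtering, Z-score normalization, and binary encoding steps described in the abstract. The scales, orientations, and threshold are illustrative choices, not the authors' exact parameters; the resulting binary vectors would then feed a multi-class AdaBoost classifier.

```python
import cv2
import numpy as np

def gabor_bank(scales=(5, 9, 13), orientations=8):
    """Gabor kernels at several scales and orientations."""
    kernels = []
    for ksize in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                              theta=theta, lambd=ksize / 2.0,
                                              gamma=0.5, psi=0))
    return kernels

def dynamic_binary_features(prev_frame, curr_frame, threshold=0.0):
    feats = []
    for k in gabor_bank():
        # Response difference between consecutive frames captures dynamic appearance changes.
        diff = (cv2.filter2D(curr_frame, cv2.CV_32F, k)
                - cv2.filter2D(prev_frame, cv2.CV_32F, k))
        z = (diff - diff.mean()) / (diff.std() + 1e-8)   # Z-score normalization
        feats.append((z > threshold).astype(np.uint8))   # binary pattern encoding
    return np.concatenate([f.ravel() for f in feats])
```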

Keywords: detecting face, Gabor filter, multi-class AdaBoost classifier, Z-score normalization

Procedia PDF Downloads 278
247 In vivo Mechanical Characterization of Facial Skin Combining Digital Image Correlation and Finite Element

Authors: Huixin Wei, Shibin Wang, Linan Li, Lei Zhou, Xinhao Tu

Abstract:

Facial skin is a biomedical material with complex mechanical properties of anisotropy, viscoelasticity, and hyperelasticity. The mechanical properties of facial skin are crucial for a number of applications, including facial plastic surgery, animation, dermatology, the cosmetic industry, and impact biomechanics. Skin is a complex multi-layered material which can be broadly divided into three main layers: the epidermis, the dermis, and the hypodermis. Collagen fibers account for 75% of the dry weight of dermal tissue, and it is these fibers that are responsible for the mechanical properties of skin. Much research on the anisotropic mechanical properties has concentrated on in vitro testing, but the mechanical properties of skin differ greatly between in vivo and in vitro conditions. In this study, we present a method to measure the mechanical properties of facial skin in vivo. Digital image correlation (DIC) and indentation tests were used to obtain the experimental data, including the deformation of the facial surface and the indentation force-displacement curve. The experiment was then simulated using a finite element (FE) model. Computed tomography (CT) and reconstruction techniques were applied to obtain the real tissue geometry, yielding a three-dimensional FE model of facial skin consisting of a bi-layer system. As the epidermis is relatively thin, the epidermis and dermis were regarded as one layer, with the hypodermis below it. The upper layer was modeled with a Gasser-Ogden-Holzapfel (GOH) model to describe the hyperelastic and anisotropic behavior of the dermis, and the lower layer was modeled as linear elastic. Finally, the material properties of the two layers were determined by minimizing the error between the FE results and the experimental data.
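
An illustrative sketch of the final inverse-identification step: the two-layer material parameters are found by minimizing the mismatch between simulated and measured indentation responses. The `run_fe_simulation` function is a hypothetical placeholder for the actual finite element solve, and the parameter set, file names, and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

measured_disp = np.loadtxt("indentation_displacement.txt")   # assumed experimental files
measured_force = np.loadtxt("indentation_force.txt")

def run_fe_simulation(params, displacements):
    """Placeholder: return simulated reaction forces for GOH dermis parameters
    (mu, k1, k2, kappa) and a hypodermis modulus E at the given displacements."""
    raise NotImplementedError

def objective(params):
    simulated = run_fe_simulation(params, measured_disp)
    return np.sum((simulated - measured_force) ** 2)         # least-squares error

x0 = np.array([0.05, 1.0, 10.0, 0.2, 0.01])                  # initial parameter guess
result = minimize(objective, x0, method="Nelder-Mead")
```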

Keywords: facial skin, indentation test, finite element, digital image correlation, computed tomography

Procedia PDF Downloads 113
246 Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning

Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor

Abstract:

Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction ability. Individuals with ASD find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutral, and anger, from static and dynamic sources. This paper conducts a comprehensive comparison and proposes an optimal method for a continued research project: a system that can assist people who have Autism Spectrum Disorder (ASD) in recognizing facial expressions. The comparison covers three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I&II) datasets have been used to train and test the algorithms. The results were then evaluated based on variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy for all datasets and is considered the best algorithm to be implemented in our system.
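
A minimal sketch of the algorithm comparison described above, using the OpenCV contrib face module (opencv-contrib-python); dataset loading is assumed to yield equally sized grayscale face images with integer expression labels.

```python
import cv2
import numpy as np

def accuracy(recognizer, train_imgs, train_labels, test_imgs, test_labels):
    recognizer.train(train_imgs, np.array(train_labels))
    hits = sum(int(recognizer.predict(img)[0] == label)
               for img, label in zip(test_imgs, test_labels))
    return hits / len(test_imgs)

recognizers = {
    "EigenFace": cv2.face.EigenFaceRecognizer_create(),
    "FisherFace": cv2.face.FisherFaceRecognizer_create(),
    "LBPH": cv2.face.LBPHFaceRecognizer_create(),
}
# for name, rec in recognizers.items():
#     print(name, accuracy(rec, train_imgs, train_labels, test_imgs, test_labels))
```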

Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH

Procedia PDF Downloads 174
245 Data Collection Techniques for Robotics to Identify the Facial Expressions of Traumatic Brain Injured Patients

Authors: Chaudhary Muhammad Aqdus Ilyas, Matthias Rehm, Kamal Nasrollahi, Thomas B. Moeslund

Abstract:

This paper presents an investigation of data collection procedures for robots placed with traumatic brain injured (TBI) patients for rehabilitation purposes through facial expression and mood analysis. Rehabilitation after TBI is crucial due to the nature of the injury and variation in recovery time. It is advantageous to analyze these emotional signals in a contactless manner because of the non-supportive behavior of patients, limited muscle movements, and an increase in negative emotional expressions. This work aims at the development of a framework in which robots can recognize TBI emotions through facial expressions in order to perform rehabilitation tasks through physical, cognitive, or interactive activities. The results of these studies show that, with customized data collection strategies, the proposed framework identifies facial and emotional expressions more accurately, which can be utilized in enhancing recovery treatment and social interaction in a robotic context.
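
A minimal CNN-LSTM sketch in the spirit of the model named in the keywords (tf.keras); the input shape, layer sizes, and seven-class output are illustrative assumptions rather than the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(frames=16, height=64, width=64, channels=1, n_classes=7):
    # Per-frame spatial feature extractor.
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(height, width, channels)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])
    # The LSTM models the temporal evolution of expressions across the frame sequence.
    model = models.Sequential([
        layers.TimeDistributed(cnn, input_shape=(frames, height, width, channels)),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```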

Keywords: computer vision, convolution neural network- long short term memory network (CNN-LSTM), facial expression and mood recognition, multimodal (RGB-thermal) analysis, rehabilitation, robots, traumatic brain injured patients

Procedia PDF Downloads 155
244 Facial Expression Recognition Using Sparse Gaussian Conditional Random Field

Authors: Mohammadamin Abbasnejad

Abstract:

The analysis of expressions and the detection of facial Action Units (AUs) are very important tasks in the fields of computer vision and Human-Computer Interaction (HCI) due to the wide range of applications in human life. Many works have been carried out during the past few years, each with its own advantages and disadvantages. In this work, we present a new model based on a Gaussian Conditional Random Field. We solve the objective problem using ADMM and show how well the proposed model works. We train and test our model on two facial expression datasets, CK+ and RU-FACS. Experimental evaluation shows that the proposed approach outperforms state-of-the-art expression recognition methods.
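
The optimization is solved with ADMM. The sketch below illustrates the generic ADMM splitting pattern on a simpler sparse least-squares (lasso) problem; it is not the authors' sparse Gaussian CRF objective, only the same algorithmic template.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via the ADMM split x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse every iteration
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
        z = soft_threshold(x + u, lam / rho)                               # z-update (sparsity)
        u = u + x - z                                                      # dual update
    return z
```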

Keywords: Gaussian Conditional Random Field, ADMM, convergence, gradient descent

Procedia PDF Downloads 356
243 When and Why Unhappy People Avoid Enjoyable Experiences

Authors: Hao Shen, Aparna Labroo

Abstract:

Across four studies, we show people in a negative mood avoid anticipated enjoyable experiences because of the subjective difficulty in simulating those experiences, and they misattribute these feelings of difficulty to reduced pleasantness of the anticipated experience. We observe the avoidance of enjoyable experiences only for anticipated experiences that involve smile-like facial-muscular simulation. When the need for facial-muscular simulation is attenuated, or when the anticipated experience relies on facial-muscular simulation to a lesser extent, people in a negative mood no longer avoid enjoyable experiences, but rather seek such experiences because they fit better with their ongoing mood-repair goals.

Keywords: emotion regulation, mood repair, embodiment, anticipated experiences

Procedia PDF Downloads 429
242 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

A facial expression is an essential element of human behaviour. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expressions that are readily observable to others. A core set of actions is likely to occur or to increase in intensity when people are in pain. To describe these changes in facial appearance, a system known as the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain; on this basis, the Prkachin and Solomon pain intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study of pain assessment approaches that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological point of view and a pain intensity score using the PSPI metric in pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the features that distinguish real from fake pain. The necessity of the study therefore lies in the emerging field of painful face assessment in clinical settings.
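
A worked example of the PSPI metric referenced above: as defined by Prkachin and Solomon, pain intensity is computed from FACS action unit (AU) intensities as PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """AU4, AU6, AU7, AU9 and AU10 are coded on a 0-5 intensity scale; AU43
    (eye closure) is binary, so the score ranges from 0 to 16."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Example: brow lowering (AU4=2), cheek raise (AU6=3), nose wrinkle (AU9=1), eyes open.
print(pspi(au4=2, au6=3, au7=1, au9=1, au10=0, au43=0))   # -> 6
```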

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 346
241 Gender Recognition with Deep Belief Networks

Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang

Abstract:

A gender recognition system is able to tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, and image or video retrieval. In this paper, we present an effective method for the gender classification task in frontal facial images based on deep belief networks (DBNs), which pre-train the model and yield a modest improvement in accuracy. Our experiments show that the pre-training method with DBNs for the gender classification task is feasible and achieves a modest accuracy improvement on the FERET and CAS-PEAL-R1 facial datasets.
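
A simplified sketch of greedy layer-wise pre-training for the gender classification task, using scikit-learn's BernoulliRBM as a stand-in for the DBN layers described above; the layer sizes and hyperparameters are illustrative assumptions.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=512, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # supervised top layer after pre-training
])
# X_train: flattened face images scaled to [0, 1]; y_train: 0 = female, 1 = male
# model.fit(X_train, y_train)
# print(model.score(X_test, y_test))
```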

Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs

Procedia PDF Downloads 453
240 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition

Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade

Abstract:

The field of automatic facial expression analysis has been an active research area for the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in face images, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results show that our approach performs considerably well when compared with state-of-the-art algorithms and gives insight into occlusion detection as a key step in handling expressions in the wild.
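
A minimal sketch of the LBP-HOG feature combination described above, using scikit-image and scikit-learn; the parameter choices are illustrative and the occlusion-detection step is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def lbp_hog_features(gray_image, P=8, R=1):
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hog_vec = hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])    # concatenated LBP-HOG descriptor

# X = np.array([lbp_hog_features(img) for img in face_images])
# clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, au_labels)  # multi-class SVM
```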

Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection

Procedia PDF Downloads 169
239 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of a specific dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, algorithms generally reduce image detail by pooling; this operation overlooks details that forensic experts rely on heavily. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results support that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled in distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
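
An illustrative sketch of the likelihood-ratio computation and score-level fusion discussed above; the genuine/impostor score sets, the kernel density estimator, and the fusion weight are assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio(score, genuine_scores, impostor_scores):
    """LR = p(score | same person) / p(score | different persons)."""
    p_genuine = gaussian_kde(genuine_scores)(score)
    p_impostor = gaussian_kde(impostor_scores)(score)
    return float(p_genuine / np.maximum(p_impostor, 1e-12))

def fused_log_lr(machine_score, expert_score, genuine, impostor, weight=0.5):
    # Score-level fusion: combine the algorithm's LR with the examiner's rating LR.
    lr_machine = likelihood_ratio(machine_score, genuine["machine"], impostor["machine"])
    lr_expert = likelihood_ratio(expert_score, genuine["expert"], impostor["expert"])
    return weight * np.log(lr_machine) + (1 - weight) * np.log(lr_expert)
```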

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 130
238 The Effect of Experimentally Induced Stress on the Facial Recognition Ability of Security Personnel

Authors: Zunjarrao Kadam, Vikas Minchekar

Abstract:

Facial recognition is an important task in criminal investigation procedures. Security guards, who constantly watch people, can help to identify suspects, and forensic psychologists handle such cases in the criminal justice system. Security personnel may lose their ability to correctly identify persons due to the constant stress of performing their duty. The present study aimed to identify the effect of experimentally induced stress on the facial recognition ability of security personnel. For this study, 50 security guards from the Sangli, Miraj, and Jaysingpur cities of Maharashtra State, India, were recruited. A randomized two-group design was employed to carry out the research. In the initial condition, twenty identity-card-size photographs were shown to both groups. Afterwards, stress was artificially induced in the experimental group through a difficult puzzle-solving task within a limited period. In the second condition, both groups were presented with the earlier photographs along with thirty new photographs, and the subjects were asked to recognize the photographs shown earlier. The analyzed data revealed that the control group had a higher mean facial recognition score than the experimental group. The results are discussed in the present paper.

Keywords: experimentally induced stress, facial recognition, cognition, security personnel

Procedia PDF Downloads 261
237 Forensic Comparison of Facial Images for Human Identification

Authors: D. P. Gangwar

Abstract:

Identification of humans through facial images is of great importance in forensic science. Video recordings, CCTV footage, passports, driver licenses, and other related documents are invariably sent to the laboratory for comparison of the questioned photographs and video recordings with suspect photographs and recordings to prove the identity of a person. More than 300 questioned and 300 control photographs, received from various investigation agencies in actual crime cases, have been compared by the author so far using familiar analysis and comparison techniques such as holistic comparison, morphological analysis, photo-anthropometry, and superimposition. On the basis of the findings obtained during the examination of this large number of photo exhibits, a realistic and comprehensive technique is proposed that could be very useful for forensic practice.

Keywords: CCTV Images, facial features, photo-anthropometry, superimposition

Procedia PDF Downloads 529
236 Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography

Authors: P. S. Jagadeesh Kumar, Yang Yung, Wenli Hu

Abstract:

Speech emotion classification is a dominant research field in the search for a robust and efficient classifier appropriate for different real-life applications. This work focuses on classifying different emotions from speech signals using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual, and temporal characteristics. Tensor deep stacking neural networks were employed to examine the factors that influence the classification success rate. Facial electromyography signals were collected under several conditions in a controlled environment by means of audio-visual stimuli. The facial electromyography signals were pre-processed using a moving average filter, and a set of statistical features was extracted. The extracted features were mapped to the corresponding emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing system complexity or the computation time for classifying diverse emotional states.

Keywords: speech emotion classification, tensor deep stacking neural networks, facial electromyography, bilinear mapping, audio-visual stimuli

Procedia PDF Downloads 254
235 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and the recognition of variations in facial expressions. The facial model is built with the OpenCV library, which is based on certain physiological features. A Raspberry Pi 3 module is used to compile the OpenCV library, which extracts the detected faces through a camera and stores them in the datasets directory. The model is trained with 50 epochs over the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is back-propagation, coded in Python and run for 200 epochs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirms that physiological parameters are more effective measures for curbing crimes relating to identities.
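
A minimal sketch of the face-capture and LBPH training flow described above, assuming the OpenCV contrib module and a datasets/<person>/ directory layout; paths, face size, and the cascade choice are illustrative assumptions.

```python
import os
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

images, labels = [], []
for label, person in enumerate(sorted(os.listdir("datasets"))):
    for fname in os.listdir(os.path.join("datasets", person)):
        gray = cv2.imread(os.path.join("datasets", person, fname), cv2.IMREAD_GRAYSCALE)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            images.append(cv2.resize(gray[y:y + h, x:x + w], (200, 200)))
            labels.append(label)

recognizer.train(images, np.array(labels))
recognizer.write("trainer.yml")   # later, recognizer.predict(face) returns (label, confidence)
```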

Keywords: biometric characters, facial recognition, neural network, OpenCV

Procedia PDF Downloads 256
234 Facial Recognition on the Basis of Facial Fragments

Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza

Abstract:

There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild) face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.
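
An illustrative sketch of the fragment-based evaluation protocol described above: a fixed facial region is cropped from LFW images and the recognition rate is measured on that fragment alone. The classifier here is a simple stand-in, not the PCNC, and the fragment choice is an assumption.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

lfw = fetch_lfw_people(min_faces_per_person=50, resize=0.5)
n, h, w = lfw.images.shape

# Example fragment: upper half of the face (forehead and eye region).
fragment = lfw.images[:, : h // 2, :].reshape(n, -1)

X_tr, X_te, y_tr, y_te = train_test_split(fragment, lfw.target, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
print("recognition rate on fragment:", clf.score(X_te, y_te))
```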

Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features

Procedia PDF Downloads 360
233 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions

Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan

Abstract:

Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in richness of expression. In the current study, in each trial participants were presented with sequences of two images containing facial expressions in different combinations sampled from the eight static basic expressions (64 in total; 8x8). In each trial, using a single word, participants were required to judge the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole. These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions; rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embedding in social perception studies can be a powerful, accurate, and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
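
A minimal sketch of the word-embedding analysis described above: the perceived meaning of a two-expression sequence is modelled as a weighted average of the single-expression word vectors, and nearby words are retrieved. The GloVe model name and the equal weights are illustrative assumptions.

```python
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")   # pre-trained GloVe word vectors

def sequence_meaning(expr1, expr2, w1=0.5, w2=0.5, topn=5):
    combined = w1 * glove[expr1] + w2 * glove[expr2]   # weighted average of the pair
    return glove.similar_by_vector(combined, topn=topn)

# e.g. a surprise-then-happiness sequence
print(sequence_meaning("surprise", "happiness"))
```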

Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec

Procedia PDF Downloads 176
232 A Geometric Based Hybrid Approach for Facial Feature Localization

Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik

Abstract:

Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications in both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of personal identity. Although other biometric methods such as fingerprint and iris scans are available, FRT is valued as an efficient technology for its user-friendliness and contactless nature. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition, but certain factors make facial feature localization a challenging task. On the human face, expressions arise from subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and their usual shapes, and sometimes create occlusions in facial feature areas, making face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. A Graphical User Interface (GUI) based software tool is also designed that can automatically detect 16 landmark points around the eyes, nose, and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method achieves a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database, and 93.05% on the DeitY-TU database. We have also carried out a comparative study of our proposed method against techniques developed by other researchers. Based on the located features, future work will focus on emotion-oriented systems through AU detection.
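
An illustrative sketch of the kind of error measure behind detection rates such as those reported above: a landmark counts as correctly detected when its distance from the ground truth falls below a fraction of the inter-ocular distance. The 10% tolerance is a common convention and an assumption here, not necessarily the authors' criterion.

```python
import numpy as np

def detection_rate(pred, truth, left_eye_idx, right_eye_idx, tol=0.10):
    """pred, truth: (n_landmarks, 2) arrays of (x, y) points for one face."""
    iod = np.linalg.norm(truth[left_eye_idx] - truth[right_eye_idx])
    errors = np.linalg.norm(pred - truth, axis=1) / iod   # inter-ocular normalized error
    return float(np.mean(errors < tol))
```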

Keywords: biometrics, face recognition, facial landmarks, image processing

Procedia PDF Downloads 412
231 Influence of Dental Midline Deviation with Respect to Facial Flow Line on Smile Esthetics – A Cross-sectional Study

Authors: Kanza Tahir, Mubassar Fida, Rashna Hoshang Sukhia

Abstract:

Background/Objective: A contemporary concept states that dental midline deviation towards the direction of the facial flow line (FFL) can mask compromised smile esthetics. This study aimed to identify the range of midline deviations that can be perceived towards or away from the FFL and their influence on smile esthetics. Materials and methods: A cross-sectional study was conducted using a frontal smile photograph of an adult female. The photograph was altered in Adobe Photoshop to produce six different photographs by deviating the dental midline towards and away from the FFL. A constant deviation of the chin towards the left side was incorporated in all photographs. Forty-three laypersons (LP) and dental professionals (DPs) evaluated the photographs on a Visual Analog Scale (VAS). An independent t-test was used to compare the perception of dental midline deviation between LP and DPs, and simple linear regression was run to identify the factors associated with the VAS scores. Results: A statistically significant difference in the perception of midline deviation between LP and DPs was observed for picture two (4 mm towards the FFL). LP could not perceive midline deviations up to 4 mm, while DPs were able to perceive deviations above 2 mm. Age was positively associated with the VAS score, while female gender had a negative association. Limitations: Only one component of mini-esthetics was studied, the study did not include an ideal photograph for comparison, and only one female subject of normal facial type was studied. Conclusions: 2-4 mm of midline deviation towards the facial flow line can be tolerated by laypersons and dental professionals.

Keywords: midline, facial flow line, smile esthetics, female

Procedia PDF Downloads 91
230 Botulinum Toxin A in the Treatment of Late Facial Nerve Palsy Complications

Authors: Akulov M. A., Orlova O. R., Zaharov V. O., Tomskij A. A.

Abstract:

Introduction: One of the common postoperative complications of posterior cranial fossa (PCF) and cerebellopontine angle tumor treatment is facial nerve palsy, which leads to multiple, treatment-resistant impairments of mimic muscle structure and function. Within 4-6 months of facial nerve palsy onset, patients with insufficient therapeutic intervention develop a postparalytic syndrome, which includes symptoms such as mimic muscle insufficiency, mimic muscle contractures, synkinesis, and spontaneous muscular twitching. A novel method of treatment is the use of a local neuromuscular blocking agent, botulinum toxin A (BTA). Experience with BTA treatment suggests that it can be successfully used for late facial nerve palsy complications to significantly increase patients' quality of life. Study aim: To evaluate the efficacy of botulinum toxin A (BTA, Xeomin) treatment in patients with late facial nerve palsy complications. Patients and Methods: 31 patients aged 27-59 years were evaluated 6 months after the development of facial nerve palsy. All patients received conventional treatment, including massage, movement therapy, etc. Facial nerve palsy developed after acoustic nerve tumor resection in 23 (74.2%) patients and after petroclival meningioma resection in 8 (25.8%) patients. The first group included 17 (54.8%) patients receiving BT therapy; the second group comprised 14 (45.2%) patients continuing conventional treatment. BT injections of 1-2 U were performed at synkinesis or contracture points on the injured side, with 2-4 U on the healthy side (for symmetry). Facial nerve function was evaluated at 2 and 4 months of therapy according to the House-Brackmann scale, and alleviation of the pain syndrome was assessed on a VAS. Results: At baseline, all patients in the first and second groups demonstrated a postparalytic syndrome. We observed a significant improvement in patients receiving BTA after only one month of treatment. The mean VAS score at baseline was 80.4±18.7 and 77.9±18.2 in the first and second groups, respectively. In the first group, after one month of treatment we observed a significant decrease in the pain syndrome: the mean VAS score was 44.7±10.2 (p<0.01), whereas in the second group the VAS score remained as high as 61.8±9.4 points (p>0.05). By the 3rd month of treatment, pain intensity continued to decrease in both groups, but the first group demonstrated significantly better results; the mean score was 8.2±3.1 and 31.8±4.6 in the first and second groups, respectively (p<0.01). The total House-Brackmann score at baseline was 3.67±0.16 in the first group and 3.74±0.19 in the second group. Treatment resulted in a significant improvement of symptoms in the first group, with no improvement in the second group. After 4 months of treatment, the House-Brackmann score in the first group was 3.1-fold lower than in the second group (p<0.05). Conclusion: Botulinum toxin injections decrease postparalytic syndrome symptoms in patients with facial nerve palsy.

Keywords: botulinum toxin, facial nerve palsy, postparalytic syndrome, synkinesis

Procedia PDF Downloads 297
229 Benign Osteoblastoma of the Mandible Resection and Replacement of the Defects with Decellularized Cattle Bone Scaffold with Mesenchymal Bone Marrow Stem Cells

Authors: K. Mardaleishvili, G. Loladze, G. Shatirishivili, D. Chakhunashvili, A. Vishnevskaya, Z. Kakabadze

Abstract:

Osteoblastoma is a benign bone tumor, usually affecting the vertebrae and long tubular bones, and is rarely seen in the facial bones. The authors present the case of a 28-year-old male patient with a tumor in the mandibular body. The lesion was radically resected, and histological analysis of the specimen demonstrated features typical of a benign osteoblastoma. The defect of the jaw was reconstructed with titanium implants and a decellularized, lyophilized cattle bone matrix combined with transplantation of mesenchymal bone marrow stem cells. This report describes the procedures for rehabilitating a patient with a decellularized bone scaffold in the facial region, recovering the facial contours and esthetics of the patient.

Keywords: facial bones, osteoblastoma, stem cells, transplantation

Procedia PDF Downloads 422
228 Correlation between Cephalometric Measurements and Visual Perception of Facial Profile in Skeletal Type II Patients

Authors: Choki, Supatchai Boonpratham, Suwannee Luppanapornlarp

Abstract:

The objective of this study was to find correlations between cephalometric measurements and the visual perception of the facial profile in skeletal type II patients. In this study, 250 lateral cephalograms of female patients aged 20 to 22 years were analyzed. The profile outlines of all samples were hand traced and transformed into silhouettes by the principal investigator. Profile ratings were performed by 9 orthodontists on a Visual Analogue Scale from one to ten (increasing level of convexity). 37 hard tissue and soft tissue cephalometric measurements were analyzed by the principal investigator, and all measurements were repeated after a 2-week interval for error assessment. Finally, the rankings of visual perception were correlated with the cephalometric measurements using the Spearman correlation coefficient (P < 0.05). The results show that an increase in facial convexity was correlated with higher values of ANB (A point, nasion, and B point), AF-BF (distance from A point to B point in mm), L1-NB (distance from lower incisor to NB line in mm), anterior maxillary alveolar height, posterior maxillary alveolar height, overjet, hard tissue H angle, soft tissue H angle, and lower lip to E plane (absolute correlation values from 0.277 to 0.711). In contrast, an increase in facial convexity was correlated with lower values of Pg to N perpendicular and Pg to NB in mm (absolute correlation values -0.302 and -0.294, respectively). Among the soft tissue measurements, the H angles had a higher correlation with visual perception than the facial contour angle, nasolabial angle, and lower lip to E plane. In conclusion, the findings of this study indicate that the correlation of cephalometric measurements with visual perception was lower than expected; only 29% of the cephalometric measurements had a significant correlation with visual perception. Therefore, diagnosis based solely on cephalometric analysis can result in failure to meet the patient's esthetic expectations.
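
A minimal sketch of the correlation analysis described above: the Spearman rank correlation between each cephalometric measurement and the mean VAS convexity rating, keeping only the significant results. Variable names are illustrative.

```python
from scipy.stats import spearmanr

def significant_correlations(measurements, vas_scores, names, alpha=0.05):
    """measurements: (n_subjects, n_variables) array of cephalometric values;
    vas_scores: (n_subjects,) mean ratings from the orthodontist panel."""
    results = []
    for j, name in enumerate(names):
        rho, p = spearmanr(measurements[:, j], vas_scores)
        if p < alpha:
            results.append((name, rho, p))
    return results
```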

Keywords: cephalometric measurements, facial profile, skeletal type II, visual perception

Procedia PDF Downloads 138
227 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex relationships with automatically extracted features. Deep Neural Networks (DNNs) are now widely used in computer vision problems such as classification, object detection, segmentation, and image editing. In this work, the facial emotion recognition task is performed by the proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size, and network size) are investigated, and ablation study results for the pooling layer, dropout, and batch normalization are presented.
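
A minimal CNN sketch for FER2013 in tf.keras, in the spirit of the architecture study described above; the exact layer sizes and the hyperparameters under ablation (kernel size, initializer, batch size, and so on) are illustrative choices.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(n_classes=7):
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),     # FER2013 images are 48x48 grayscale
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_fer_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```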

Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

Procedia PDF Downloads 274
226 Facial Pose Classification Using Hilbert Space Filling Curve and Multidimensional Scaling

Authors: Mekamı Hayet, Bounoua Nacer, Benabderrahmane Sidahmed, Taleb Ahmed

Abstract:

Pose estimation is an important task in computer vision. Although the majority of existing solutions provide good accuracy, they are often overly complex and computationally expensive. In this perspective, we propose the use of dimensionality reduction techniques to address the problem of facial pose estimation. First, a face image is converted into a one-dimensional time series using the Hilbert space-filling curve; the approach then converts these time series data into a symbolic representation. Furthermore, a distance matrix is calculated between the symbolic series of an input learning dataset of images to generate classifiers of frontal vs. profile face pose. The proposed method is evaluated on three public datasets. Experimental results show that our approach is able to achieve a correct classification rate exceeding 97% with the K-NN algorithm.
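
An illustrative sketch of the first and last steps described above: a square face image is unrolled into a one-dimensional series along the Hilbert space-filling curve, and a K-NN classifier separates frontal from profile poses. The symbolic encoding step is omitted for brevity, and the image side length must be a power of two.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def xy2d(n, x, y):
    """Index of pixel (x, y) along the Hilbert curve on an n x n grid."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                          # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_series(image):
    n = image.shape[0]
    order = sorted((xy2d(n, x, y), float(image[y, x])) for y in range(n) for x in range(n))
    return np.array([value for _, value in order])

# X = np.array([hilbert_series(img) for img in face_images_64x64])
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, pose_labels)   # frontal vs. profile
```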

Keywords: machine learning, pattern recognition, facial pose classification, time series

Procedia PDF Downloads 350
225 Current Concepts of Male Aesthetics: Facial Areas to Be Focused and Prioritized with Botulinum Toxin and Hyaluronic Acid Dermal Fillers Combination Therapies, Recommendations on Asian Patients

Authors: Sadhana Deshmukh

Abstract:

Objective: Men represent only a fraction of medical aesthetic practice but are becoming increasingly cosmetically inclined. The primary objective is to harmonize facial proportions by prioritizing and focusing on the forehead, nose, cheek, and chin complex. Introduction: Despite the tremendous variability and diverse population of the Indian subcontinent, the male skull is distinct in its overall larger size and shape. Men tend to have a large forehead with prominent supraorbital ridges, a wide glabella, square orbits, and a prominent, protruding mandible. Men also have increased skeletal muscle mass, with less facial subcutaneous fat. Facial aesthetics is evolving rapidly, yet commonly published canons of facial proportion usually represent feminine standards and are not applicable to males; strict adherence to these norms is therefore not necessary to obtain satisfying results in male patients. Materials and Methods: Male patients aged 30-60 years were enrolled. Botulinum toxin and hyaluronic acid fillers were used to update consensus recommendations for facial rejuvenation using these two types of products alone and in combination. Results: Specific recommendations are given by facial area, focusing on relaxing musculature, restoring volume, and recontouring using toxin and dermal fillers alone and in combination. For the upper face, although botulinum toxin remains the cornerstone of treatment, temple and forehead fillers are recommended for optimal results. In the midface, fillers are placed more laterally to maintain a masculine look. Botulinum toxin and fillers in combination can improve outcomes in the lower face, where chin augmentation remains the central point. Conclusions: Males are more likely to have shorter doctor visits, are less likely to ask questions, and pay less attention to bodily changes. The physician must patiently gauge male patients' aging and cosmetic goals. Clinicians can also benefit from ongoing guidance on products, tailoring treatments, treating multiple facial areas, and using combinations of products. An appreciation that rejuvenation is a 3-dimensional process involving muscle control, volume restoration, and recontouring is helpful.

Keywords: male aesthetics, botulinum toxin, hyaluronic acid dermal fillers, Asian patients

Procedia PDF Downloads 157
224 A Quality Improvement Project to Assess the Impact of Orthognathic Surgery on the Quality of Life of Patients: Pre-Operatively versus Post-Operatively

Authors: Fiona Lourenco, William Allen

Abstract:

Dentofacial deformities are primarily treated surgically via orthognathic surgery. Health-related quality of life is concerned with the aspects of quality of life that relate specifically to an individual's health. Design and Setting: Retrospective analysis of patients who had orthognathic surgery from January 2018 to December 2022 at the trust, using the previously validated Orthognathic Quality of Life questionnaire (OQoL). Materials and Methods: 32 patient questionnaires (which included separate pre-operative and post-operative sections) were obtained via telephone survey. The data were analysed using the two-tailed paired t-test and the Wilcoxon signed-rank test. Results: The change in perception post-surgery was highly significant (both tests resulted in p<0.001 for the overall analysis as well as for each domain). Overall, a 74% improvement in QoL was seen following orthognathic surgery. Improvement in each domain was as follows: 71% in the social aspect of deformity domain, 76% in facial aesthetics, 60% in function, and 57% in awareness of facial deformity. Conclusion: The assessment of QoL is becoming increasingly important in clinical research. The above data show that orthognathic surgery significantly improves patients' QoL post-operatively. The results demonstrate improvement in all domains, with perceptions of facial aesthetics showing the greatest change post-operatively.
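
A minimal sketch of the statistical comparison described above: paired pre- versus post-operative OQoL totals analysed with a two-tailed paired t-test and the Wilcoxon signed-rank test (scipy); the data arrays themselves are assumed.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

def compare_pre_post(pre_scores, post_scores):
    """pre_scores, post_scores: paired arrays of total OQoL scores, one per patient."""
    t_stat, t_p = ttest_rel(pre_scores, post_scores)   # two-tailed paired t-test
    w_stat, w_p = wilcoxon(pre_scores, post_scores)    # Wilcoxon signed-rank test
    pct_change = 100 * (np.mean(post_scores) - np.mean(pre_scores)) / np.mean(pre_scores)
    return {"t_p": t_p, "wilcoxon_p": w_p, "percent_change": pct_change}
```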

Keywords: dentofacial, oral, facial asymmetry, orthognathic surgery, quality of life

Procedia PDF Downloads 80
223 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning

Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin

Abstract:

This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
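
An illustrative sketch of the hyperdimensional (HD) computing idea mentioned above: quantized contour descriptors are encoded as random bipolar hypervectors, bundled into a single half-face hypervector, and symmetry is scored by comparing the left and mirrored right encodings with cosine similarity. The dimensionality and encoding scheme are assumptions, not the authors' exact pipeline.

```python
import numpy as np

DIM = 10_000
rng = np.random.default_rng(0)
codebook = {}   # one random bipolar hypervector per quantized contour feature

def encode_feature(feature_id):
    if feature_id not in codebook:
        codebook[feature_id] = rng.choice([-1, 1], size=DIM)
    return codebook[feature_id]

def bundle(feature_ids):
    # Bundling (elementwise sum followed by sign) superimposes features into one hypervector.
    return np.sign(np.sum([encode_feature(f) for f in feature_ids], axis=0))

def symmetry_score(left_features, mirrored_right_features):
    a, b = bundle(left_features), bundle(mirrored_right_features)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine similarity
```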

Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing

Procedia PDF Downloads 27
222 Human Facial Emotion: A Comparative and Evolutionary Perspective Using a Canine Model

Authors: Catia Correia Caeiro, Kun Guo, Daniel Mills

Abstract:

Despite growing interest, emotions remain an understudied cognitive process, and their origins are currently the focus of much debate in the scientific community. The use of facial expressions as traditional hallmarks of discrete and holistic emotions created circular reasoning due to a priori assumptions of meaning and the associated appearance biases. Ekman and colleagues addressed this problem and laid the foundations for the quantitative and systematic study of facial expressions in humans by developing an anatomically based system (independent of meaning) to measure facial behaviour, the Facial Action Coding System (FACS). One way of investigating emotion cognition processes is to apply comparative psychology methodologies and look at either closely related species (e.g. chimpanzees) or phylogenetically distant species sharing similar present adaptation problems (analogy). In this study, the domestic dog was used as a comparative animal model to examine facial expressions in social interactions in parallel with human facial expressions. The orofacial musculature seems to be relatively well conserved across mammal species, and the same holds true for the domestic dog. Furthermore, the dog is unique in having shared the same social environment as humans for more than 10,000 years, facing similar challenges and acquiring a unique set of socio-cognitive skills in the process. In this study, the spontaneous facial movements of humans and dogs were compared when interacting with hetero- and conspecifics as well as in solitary contexts. In total, 200 participants were examined with the FACS and DogFACS (the Dog Facial Action Coding System) coding tools across four different emotionally driven contexts: a) happiness (play and reunion), b) anticipation (of positive reward), c) fear (object- or situation-triggered), and d) frustration (negation of a resource). A neutral control was added for both species. All four contexts are commonly encountered by humans and dogs, are comparable between species, and seem to give rise to emotions from homologous brain systems. The videos used in the study were extracted from public databases (e.g. YouTube) or published scientific databases (e.g. AM-FED). The results obtained allowed us to delineate clear similarities and differences in the flexibility of the facial musculature in the two species. More importantly, they shed light on which common facial movements are a product of the emotion-linked contexts (those appearing in both species) and which are characteristic of the species, revealing an important clue for the debate on the origin of emotions. Additionally, we were able to examine movements that might have emerged for interspecific communication. Finally, our results are discussed from an evolutionary perspective, adding to the recent line of work that supports an ancient shared origin of emotions in a mammal ancestor and defining emotions as mechanisms with a clear adaptive purpose, essential in numerous situations ranging from the maintenance of social bonds to the modulation of fitness and survival.

Keywords: comparative and evolutionary psychology, emotion, facial expressions, FACS

Procedia PDF Downloads 434
221 Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing

Authors: McClain Thiel

Abstract:

Object detection and localization have historically required two or more sensors due to the loss of information in going from 3D to 2D space; however, most surveillance systems currently deployed in the real world have only one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or, of more recent relevance, contact tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object-detection convolutional nets, facial landmark detection, and projective geometry. The approach classifies the target into a demographic category, makes assumptions about the relative locations of facial landmarks from the demographic information, and from there uses simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although limited, suggests reasonable success in 3D tracking under ideal conditions.
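
An illustrative sketch of the projective-geometry step described above: a demographic prior for the true inter-pupillary distance together with a known focal length gives depth via the pinhole camera model, and back-projection gives the 3D position. The focal length and the distance priors are assumptions.

```python
import numpy as np

FOCAL_PX = 1000.0                            # focal length in pixels (from calibration)
IPD_MM = {"adult": 63.0, "child": 52.0}      # assumed demographic priors

def locate_3d(left_eye_px, right_eye_px, cx, cy, demographic="adult"):
    ipd_pixels = np.linalg.norm(np.subtract(left_eye_px, right_eye_px))
    z = FOCAL_PX * IPD_MM[demographic] / ipd_pixels      # depth from similar triangles
    u, v = np.mean([left_eye_px, right_eye_px], axis=0)  # face centre in the image
    x = (u - cx) * z / FOCAL_PX                          # back-project into the camera frame
    y = (v - cy) * z / FOCAL_PX
    return np.array([x, y, z])                           # millimetres in camera coordinates

# e.g. locate_3d((610, 420), (680, 422), cx=960, cy=540)
```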

Keywords: monocular distancing, computer vision, facial analysis, 3D localization

Procedia PDF Downloads 139
220 Facial Recognition Technology in Institutions of Higher Learning: Exploring the Use in Kenya

Authors: Samuel Mwangi, Josephine K. Mule

Abstract:

Access control as a security technique regulates who or what can access resources. It is a fundamental concept in security that minimizes risk to the institutions that use it. Regulating access to institutions of higher learning is key to ensuring that only authorized personnel and students are allowed into the institutions. The use of biometrics has been criticized due to setup and maintenance costs, hygiene concerns, and trepidation regarding data privacy, among other apprehensions. Facial recognition is arguably a fast and accurate way of validating identity in order to guard protected areas. It ensures that only authorized individuals gain access to secure locations while requiring far less personal information and providing an additional layer of security beyond keys, fobs, or identity cards. This exploratory study sought to investigate the use of facial recognition in controlling access in institutions of higher learning in Kenya. The sample population was drawn from both private and public higher learning institutions, and the data are based on responses from staff and students. Questionnaires were used for data collection, and follow-up interviews were conducted to understand the questionnaire responses. 80% of the sampled population indicated that there were many security breaches by unauthorized people, some resulting in terror attacks. These security breaches were attributed to cases of stolen identity, where staff or student identity cards were stolen and used by criminals to access the institutions. These unauthorized accesses have resulted in losses to the institutions, including reputational damage. The findings indicate that security breaches are a major problem in institutions of higher learning in Kenya; consequently, access control would be beneficial if employed to curb them. We suggest the use of facial recognition technology, given its uniqueness in identifying users and its non-repudiation capabilities.

Keywords: facial recognition, access control, technology, learning

Procedia PDF Downloads 125
219 Prostheticly Oriented Approach for Determination of Fixture Position for Facial Prostheses Retention in Cases with Atypical and Combined Facial Defects

Authors: K. A. Veselova, N. V. Gromova, I. N. Antonova, I. N. Kalakutskii

Abstract:

There are many diseases and incidents that may result in facial defects and deformities: cancer, trauma, burns, congenital anomalies, and autoimmune diseases. In some cases, a patient may acquire an atypically extensive facial defect involving more than one anatomical region or, by contrast, an atypically small defect (e.g. a partial auricular defect). Anaplastology gives us the opportunity to help patients with facial disfigurement in cases where plastic surgery is contraindicated. The use of implant retention for facial prostheses is strongly recommended because it improves both aesthetic and functional results and makes wearing the prosthesis more comfortable. A prosthetically oriented fixture position is extremely important for long-term aesthetic and functional results; however, the optimal site for fixture placement is not clear in cases with an atypical configuration of the facial defect. The objective of this report is to demonstrate the challenges we have faced in determining fixture position and to offer a solution. In this report, four cases of implant-supported facial prostheses are described. Extra-oral implants four millimeters in length were used in all cases. The decision regarding the number of surgical stages was based on the disease history. The facial prostheses were manufactured according to conventional technique. Clinical and technological difficulties and mistakes are described, and a prosthetically oriented approach to determining fixture position is demonstrated. In a case with an atypically large combined orbital and nasal defect resulting from an arteriovenous malformation, correct positioning of the artificial eye was impossible due to the wrong position of a fixture (with suprastructure) located in the medial aspect of the supraorbital rim. The suprastructure was removed and this fixture was not used for retention, in order to achieve appropriate artificial eye placement and a better aesthetic result. In another case, with a small partial auricular defect (only the helix and antihelix were absent) caused by squamous cell carcinoma (T1N0M0), a surgical template was used to avoid such difficulties. To achieve a prosthetically oriented fixture position in this case of an extremely small defect, the template was made on a preliminary cast using the vacuum thermoforming method. Two radiopaque markers were incorporated into the template at the positions preferred for fixture placement, taking into account the future prosthesis configuration. The template was placed on the remaining ear, and cone-beam CT was performed to ensure that the amount of bone was sufficient for implant insertion in the preferred positions. Before surgery, the radiopaque markers were removed and the template was perforated to guide the drill. Fabrication of implant-retained facial prostheses gives us the opportunity to improve aesthetics, retention, and patients' quality of life, but every inaccuracy in planning leads to challenges at the surgical and prosthetic stages. Moreover, in cases with atypically small or extensive facial defects, a prosthetically oriented approach to determining fixture position is strongly required. The approach, including surgical template fabrication, is an effective, easy, and cheap way to avoid mistakes and unpredictable results.

Keywords: anaplastology, facial prosthesis, implant-retained facial prosthesis, maxillofacial prosthesis

Procedia PDF Downloads 114