Search results for: facial animation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 344

224 Cephalometric Changes of Patient with Class II Division 1 Malocclusion Post Orthodontic Treatment with Growth Stimulation: A Case Report

Authors: Pricillia Priska Sianita

Abstract:

An aesthetic facial profile is one of the goals of orthodontic treatment. However, it is not easily achieved, especially in patients with Class II Division 1 malocclusion, who have the clinical characteristics of a convex profile and a significant skeletal discrepancy due to mandibular growth deficiency. Malocclusion with skeletal problems requires proper treatment timing for growth stimulation; treatment must be started at an early age and needs good cooperation from the patient. If this is not done and the patient has passed the growth period, the ideal treatment is orthognathic surgery, which is more complicated and more painful. Growth stimulation in skeletal malocclusion requires careful cephalometric evaluation, from diagnosis, to determine the parts that require stimulation, to post-treatment evaluation, to assess the success achieved through changes in the skeletal parameters measured in the cephalometric analysis. This case report aims to describe, cephalometrically, the skeletal changes achieved through orthodontic treatment during the growth period. Material and method: Pre-treatment and post-treatment lateral cephalograms of a case of Class II Division 1 malocclusion were selected from a collection of cephalometric radiographs in a private clinic. The cephalograms were then traced, and the skeletal parameters were measured. The results were recorded as pre-treatment and post-treatment skeletal data. Superimposition was then performed to assess the changes achieved. The results show that growth stimulation through orthodontic treatment can solve the skeletal problem of Class II Division 1 malocclusion, and the skeletal changes that occur can be verified through cephalometric analysis. These skeletal changes improve the patient's facial profile. In summary, treatment timing in skeletal malocclusion is very important for obtaining satisfactory results and an improved aesthetic facial profile, and the skeletal changes can be verified through pre- and post-treatment cephalometric evaluation.

Keywords: cephalometric evaluation, class II division 1 malocclusion, growth stimulation, skeletal changes, skeletal problems

Procedia PDF Downloads 221
223 Quality of Life after Damage Control Laparotomy for Trauma

Authors: Noman Shahzad, Amyn Pardhan, Hasnain Zafar

Abstract:

Introduction: Though the short-term survival advantage of damage control laparotomy in the management of critically ill trauma patients is established, little is known about the long-term quality of life of these patients. Fascial closure rates after damage control laparotomy are reported to be 20-70 percent. Abdominal wall reconstruction in those who fail to achieve fascial closure is challenging and can potentially affect the quality of life of these patients. Methodology: We conducted a retrospective matched cohort study. Adult patients who underwent damage control laparotomy from January 2007 to June 2013 were identified through medical records. Patients who had concomitant disabling brain injury or limb injuries requiring amputation were excluded. An age-, gender- and presentation-time-matched non-exposure group of patients who underwent laparotomy for trauma without damage control was identified for each damage control laparotomy patient. Quality of life assessment was done via telephone interview at least one year after the operation, using the Urdu version of the EuroQol Group quality of life (QOL) questionnaire EQ-5D, after permission. The Wilcoxon signed-rank test was used to compare QOL scores, and the McNemar test was used to compare individual parameters of the QOL questionnaire. The study was approved by the institutional ethical review committee. Results: Out of 32 patients who underwent damage control laparotomy during the study period, 20 fulfilled the selection criteria, for whom 20 matched controls were selected. The median age of patients (IQ range) was 33 (26-40) years. The fascial closure rate in the damage control laparotomy group was 40% (8/20). One third of those who did not achieve fascial closure (4/12) underwent abdominal wall reconstruction. The self-reported QOL score of damage control laparotomy patients was significantly worse than that of the non-damage control group (p = 0.032). There was no statistically significant difference between the two groups regarding individual QOL measures. Significantly more patients in the damage control group required the use of an abdominal binder, and more patients in the damage control group had to either change their job or had limitations in continuing their previous job. Our study was not adequately powered to detect the factors responsible for worse QOL in the damage control group. Conclusion: The quality of life of damage control patients is worse than that of age- and gender-matched patients who underwent trauma laparotomy without damage control. Adequately powered studies need to be conducted to explore the factors responsible for this finding and enable potential improvement.
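
The two statistical tests named in the abstract can be reproduced with standard Python libraries. The sketch below is illustrative only: it assumes paired EQ-5D index scores for matched damage-control and control patients, and a hypothetical binary QOL parameter (e.g., use of an abdominal binder) tabulated per matched pair; all values are placeholders, not the study data.

```python
# Minimal sketch of the paired comparisons described above (hypothetical data).
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Paired EQ-5D index scores: one value per matched pair (illustrative values only).
qol_damage_control = np.array([0.62, 0.55, 0.71, 0.48, 0.60, 0.52, 0.66, 0.45])
qol_control        = np.array([0.78, 0.70, 0.74, 0.69, 0.81, 0.66, 0.72, 0.68])

# Wilcoxon signed-rank test for the paired QOL scores.
stat, p_value = wilcoxon(qol_damage_control, qol_control)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")

# McNemar test for one binary QOL parameter (e.g., needs abdominal binder: yes/no),
# arranged as a 2x2 table of concordant/discordant matched pairs (placeholder counts).
table = np.array([[3, 1],
                  [6, 2]])
result = mcnemar(table, exact=True)
print(f"McNemar: statistic = {result.statistic}, p = {result.pvalue:.3f}")
```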

Keywords: damage control laparotomy, laparostomy, quality of life

Procedia PDF Downloads 248
222 3D Dentofacial Surgery Full Planning Procedures

Authors: Oliveira M., Gonçalves L., Francisco I., Caramelo F., Vale F., Sanz D., Domingues M., Lopes M., Moreia D., Lopes T., Santos T., Cardoso H.

Abstract:

The ARTHUR project consists of a platform that allows maxillofacial surgery to be performed virtually, offering, in a photorealistic rendering, the possibility for patients to get an idea of the surgical changes before they are performed on their face. For this, the system brings together several image formats, DICOM and OBJ files, which, after loading, are used to generate the bone volume, soft tissues and hard tissues. The system also incorporates the patient's stereophotogrammetry, in addition to their data and clinical history. After loading and inserting the data, the clinician can virtually perform the surgical operation and present the final result to the patient, generating a new facial surface that reflects the changes made to the bone and tissues of the maxillary area. This tool is applicable to different situations that require facial reconstruction; however, this project focuses specifically on two types of use cases: congenital bone disfigurement and acquired disfigurement, such as oral cancer with bone involvement. Developed as a cloud-based solution with mobile support, the tool aims to reduce the patient's decision time window. Because current simulations are either not realistic or, if realistic, require time to build plaster models, patients' decisions rely on a long time window (1-2 months), as they do not identify themselves with the presented surgical outcome. In addition, this planning has traditionally been based on average estimated values of the position of the maxilla and mandible, drawn from population averages of facial measurements without accounting for racial variability, so existing solutions are not adjusted to the real, individual physiognomic needs of each patient.
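
As a rough illustration of the data-loading step described above (not the ARTHUR implementation itself), a DICOM series and an OBJ facial surface can be read with common open-source Python libraries. The file paths and the choice of libraries are assumptions for the sketch.

```python
# Illustrative sketch: load a CT DICOM series into a volume and an OBJ facial mesh.
# Paths and libraries are assumptions, not part of the ARTHUR platform.
import glob
import numpy as np
import pydicom
import trimesh

# Read all slices of a DICOM series and sort them by position along the scan axis.
slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack pixel data into a 3D volume from which bone and soft-tissue surfaces can be derived.
volume = np.stack([s.pixel_array for s in slices], axis=0)
print("CT volume shape:", volume.shape)

# Load a stereophotogrammetry-derived facial surface exported as OBJ (assumed single mesh).
face_mesh = trimesh.load("face_scan.obj")
print("Facial mesh:", len(face_mesh.vertices), "vertices,", len(face_mesh.faces), "faces")
```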

Keywords: 3D computing, image processing, image registry, image reconstruction

Procedia PDF Downloads 170
221 Holistic Approach for Natural Results in Facial Aesthetics

Authors: R. Denkova

Abstract:

Nowadays, aesthetic and psychological research in some countries shows that the aesthetic ideal for women is built on the same pattern of big volumes – lips, cheeks, facial disproportions. They all look as if made from a matrix, and they lose the unique and emotional aspects of their beauty. How can we escape this matrix and find the balance? The secret to being a unique injector is good assessment, creating a treatment plan and a flawless injection strategy. The newest concepts of this new injection era, which meet the requirements of a modern society and deliver balanced and natural-looking results, are based on the idea of injecting not the consequence, but the cause. Three case studies are presented with full-face assessment, treatment plan and before/after pictures, using different approaches and techniques of the MD Codes concept and the lights-and-shadows concept in order to preserve the emotional beauty and identity of the women. In conclusion, the cases demonstrate that beauty exists even beyond the matrix, and it is the injector's mission and responsibility to preserve and highlight the natural beauty and unique identity of every patient.

Keywords: beyond the matrix, emotional beauty, face assessment, injector, treatment plan

Procedia PDF Downloads 98
220 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding facial features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than a grid search, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation: witness reports describing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of the criminals in order to calculate their similarity. Two measures are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, supporting the claim that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
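
The two evaluation measures mentioned in the abstract, SSIM and PSNR, can be computed directly with scikit-image. The sketch below compares a generated facial image against a ground-truth photograph; the file names are placeholders.

```python
# Sketch of the SSIM / PSNR evaluation step; image file names are placeholders.
from skimage import io, img_as_float
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from skimage.transform import resize

ground_truth = img_as_float(io.imread("suspect_ground_truth.png"))
generated = img_as_float(io.imread("gan_generated_face.png"))

# Resize the generated sample to the ground-truth resolution before comparison.
generated = resize(generated, ground_truth.shape, anti_aliasing=True)

ssim = structural_similarity(ground_truth, generated, channel_axis=-1, data_range=1.0)
psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```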

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 136
219 New Possibilities for Testing UX and UI Design on Mobile Devices

Authors: Jakub Berčík, Anna Mravcová, Jana Gálová, Katarína Neomániová

Abstract:

In an era when everything is increasingly digital, consumers are always looking for new options and solutions to their everyday needs. In this context, mobile apps are developing at an exponential pace. One of the fastest growing segments of mobile technologies is e-commerce. It can be predicted that mobile commerce will record nearly three times the global growth of e-commerce across all platforms, which indicates its importance in the given segment. The current coronavirus pandemic is also changing many of the existing paradigms socially, economically, and technologically, which has a major impact on changing consumer behaviour and places the emphasis on simplification and clarity of mobile solutions. This is the area that user experience (UX) and user interface (UI) designers deal with. Their task is to design a sufficiently attractive and interesting solution that will be available on all mobile devices and, at the same time, will be easy enough for the customer/visitor to reach their destination or obtain the necessary information in a few clicks. The basis for changes in UX design can now be obtained not only through online analytical tools but also through neuromarketing, especially in the case of mobile devices. The paper highlights new possibilities for testing the UX design of applications on mobile devices using a special platform that combines a stationary eye camera (eye tracking) and facial analysis (facial coding).

Keywords: emotions, mobile design, user experience, visual attention

Procedia PDF Downloads 98
218 Strabismus Detection Using Eye Alignment Stability

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. Currently, many children with strabismus remain undiagnosed until school entry because current automated screening methods have limited success in the preschool age range. A method for strabismus detection using eye alignment stability (EAS) is proposed. This method starts with face detection, followed by facial landmark detection, eye region segmentation, eye gaze extraction, and eye alignment stability estimation. Binarization and morphological operations are performed for segmenting the pupil region from the eye. After finding the EAS, its absolute value is used to differentiate the strabismic eye from the non-strabismic eye. If the value of the eye alignment stability is greater than a particular threshold, then the eyes are misaligned, and if its value is less than the threshold, the eyes are aligned. The method was tested on 175 strabismic and non-strabismic images obtained from Kaggle and Google Photos. The strabismic eye is taken as a positive class, and the non-strabismic eye is taken as a negative class. The test produced a true positive rate of 100% and a false positive rate of 7.69%.
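
A minimal sketch of the pupil-segmentation step described above is given below, assuming a cropped grayscale eye-region image and using OpenCV binarization and morphological operations; the threshold and kernel values are illustrative, not the authors' parameters.

```python
# Sketch of pupil segmentation from a cropped eye-region image (threshold values are illustrative).
import cv2

eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)

# The pupil is the darkest region: inverse-threshold, then clean up with morphology.
_, binary = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove speckle noise
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill small holes

# Take the largest connected component as the pupil and compute its centre.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pupil = max(contours, key=cv2.contourArea)
moments = cv2.moments(pupil)
cx, cy = moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]
print(f"Estimated pupil centre: ({cx:.1f}, {cy:.1f})")
```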

Keywords: strabismus, face detection, facial landmarks, eye segmentation, eye gaze, binarization

Procedia PDF Downloads 47
217 Proposed Solutions Based on Affective Computing

Authors: Diego Adrian Cardenas Jorge, Gerardo Mirando Guisado, Alfredo Barrientos Padilla

Abstract:

A system based on affective computing can detect and interpret human signals such as voice, facial expressions and body movement to detect emotions and execute a corresponding response. This information is important because a person can communicate more effectively with emotions than is possible with words alone. It can be processed through technological components like facial recognition, gait recognition or gesture recognition. So far, solutions proposed using this technology only consider one component at a given moment. This research proposes two solutions based on affective computing that take into account more than one component for emotion detection. The proposals reflect the levels of dependency between hardware devices and software, as well as the interaction process between the system and the user, which implies the development of scenarios where both proposals will be put to the test in a live environment. Both solutions are to be implemented by software engineers to prove their feasibility. To validate the impact on society and business interest, interviews with stakeholders are conducted with an investment mindset, where each solution is rated on a scale of 1 to 5, 1 being the minimum possible investment and 5 the maximum.

Keywords: affective computing, emotions, emotion detection, face recognition, gait recognition

Procedia PDF Downloads 336
216 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Emotional intelligence (EI) consists of skills for monitoring one's own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail) but also face-to-face, where facial expressions have a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that, during social interaction, high EI enhances the ability to detect others' emotional states and to control one's own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also more efficient in choosing appropriate expressions for building constructive dialogue. We also suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activation) with electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders. In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during, e.g., smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. The leaders' trait EI was measured with a 360-degree questionnaire, filled in by each leader's followers, peers, managers and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders. It is thus suggested that the high-EI leaders experienced less physiological stress during the discussions. Also, high scores on the factor "Using of emotions" were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, almost statistically significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that identifying, forming, and intelligently using facial expressions are skills that could be trained during leadership development courses.

Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction

Procedia PDF Downloads 225
215 Retrieving Iconometric Proportions of South Indian Sculptures Based on Statistical Analysis

Authors: M. Bagavandas

Abstract:

Introduction: South Indian stone sculptures are known for their elegance and history. They are available in large numbers in monuments situated in different parts of South India. These art pieces have been studied using iconographic details, but this pioneering study introduces a method known as iconometry, a quantitative approach that deals with measurements of different parts of icons in order to answer important open questions. The main aim of this paper is to compare the iconometric measurements of the sculptures with canonical proportions to determine whether the sculptors of the past followed any of the proportions prescribed in the ancient texts. If not, this study recovers the proportions used for carving the sculptures, which are otherwise not available to us now. It is also interesting to see how the sculptural proportions of different monuments belonging to different dynasties differ from one another. Methods and materials: As Indian sculptures are depicted in different postures, one way of making measurements independent of size is to decide on a suitable reference measurement and convert the other measurements into proportions with respect to it. Since in all canonical texts of Indian art the different measurements are given in terms of face length, face length was chosen as the reference for standardizing the measurements. In order to compare these facial measurements with the measurements prescribed in the Indian canons of iconography, the ten facial measurements – face length, morphological face length, nose length, nose-to-chin length, eye length, lip length, face breadth, nose breadth, eye breadth and lip breadth – were standardized using the face length, reducing the number of measurements to nine. Each measurement was divided by the corresponding face length, multiplied by twelve, and expressed in the angula unit used in the canonical texts. The reason for multiplying by twelve is that the face length is given as twelve angulas in the canonical texts for all figures. Clustering techniques were used to determine whether the sculptors of the past followed any of the proportions prescribed in the canonical texts and also to compare the proportions of sculptures from different monuments. In total, 127 stone sculptures from four monuments belonging to the Pallava, Chola, Pandya and Vijayanagar dynasties were taken up for this study. These art pieces span a period from the eighth to the sixteenth century A.D., and all of them adorn monuments situated in different parts of Tamil Nadu State, South India. Anthropometric instruments were used for taking the measurements, and the author himself measured all the sample pieces in this study. Result: Statistical analysis of sculptures from different centres of art and different dynasties shows considerable differences in facial proportions, and many of these proportions differ widely from the canonical proportions. The recovered facial proportions indicate that the definition of beauty changed from period to period and region to region.
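
The standardization described above (dividing each facial measurement by the face length and scaling to twelve angulas) reduces to a one-line computation per measurement. The sketch below uses made-up measurements for a single sculpture to show the conversion.

```python
# Sketch of converting raw facial measurements (in cm) to canonical angula proportions.
# Measurement values are made up for illustration; they are not data from the study.
measurements_cm = {
    "morphological face length": 10.4,
    "nose length": 3.9,
    "nose-to-chin length": 3.6,
    "eye length": 2.3,
    "lip length": 3.1,
    "face breadth": 9.2,
    "nose breadth": 2.8,
    "eye breadth": 1.1,
    "lip breadth": 1.4,
}
face_length_cm = 11.7  # reference measurement, fixed at 12 angulas in the canonical texts

# Divide by the face length and multiply by twelve to express each value in angulas.
angula_proportions = {
    name: value / face_length_cm * 12 for name, value in measurements_cm.items()
}
for name, angulas in angula_proportions.items():
    print(f"{name}: {angulas:.2f} angulas")
```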

Keywords: iconometry, proportions, sculptures, statistics

Procedia PDF Downloads 136
214 Analysis of Facial Expressions with Amazon Rekognition

Authors: Kashika P. H.

Abstract:

The development of computer vision systems has been greatly aided by the efficient and precise detection of objects in images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, employing technology to tackle this problem is exceedingly challenging. In the past few years, the use of deep learning algorithms for object detection has expanded dramatically. One of the key issues in the realm of image recognition is the recognition and detection of certain notable people in randomly acquired photographs. Face recognition provides a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep learning-based API, this article aims to recognize the faces of various people and extract their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific person in a vast image dataset. We chose the Amazon Rekognition system, which allows for accurate face analysis, face comparison, and face search, to tackle this problem.
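
A minimal sketch of calling the Amazon Rekognition API with boto3 is given below. The bucket-free local file names, the collection name and the region are placeholders, and an AWS account with Rekognition access is assumed.

```python
# Sketch of face detection and face search with Amazon Rekognition via boto3.
# File names, collection id and region are placeholders; AWS credentials are assumed.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Detect faces and their attributes (emotions, landmarks, pose, etc.) in one image.
with open("group_photo.jpg", "rb") as f:
    detection = client.detect_faces(Image={"Bytes": f.read()}, Attributes=["ALL"])
for face in detection["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print("Face confidence:", face["Confidence"], "dominant emotion:", top_emotion["Type"])

# Search an existing face collection for a specific person.
with open("query_face.jpg", "rb") as f:
    matches = client.search_faces_by_image(
        CollectionId="my-face-collection",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,
        MaxFaces=5,
    )
for match in matches["FaceMatches"]:
    print("Matched face id:", match["Face"]["FaceId"], "similarity:", match["Similarity"])
```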

Keywords: Amazon rekognition, API, deep learning, computer vision, face detection, text detection

Procedia PDF Downloads 77
213 “Lightyear” – The Battle for LGBTQIA+ Representation Behind Disney/Pixar’s Failed Blockbuster

Authors: Ema Vitória Fonseca Lavrador

Abstract:

In this work, we intend to explore the impact that the film "Lightyear" (2022) had on the social context of its production, distribution, and reception. This film, produced by Walt Disney Animation Studios and Pixar Animation Studios, depicts the story of Buzz Lightyear, the Space Ranger on whom the character of the same name in the "Toy Story" film franchise is based. This prequel was predicted to be the blockbuster of the year, but it was a financial fiasco and the subject of numerous controversies, which also caused it to be drowned out by the film "Minions: The Rise of Gru" (2022). The reason for its failure lies not in the film's narrative or quality but in its controversial context: a commitment to LGBTQIA+ representation in an unexpected way, by featuring a same-sex couple and showing a kiss shared by them. This representation cost Disney distribution in countries opposed to LGBTQIA+ representation in the media and involved Disney in major disagreements with fans and politicians, especially because it stood in direct opposition to Florida House Bill 1557, also called the "Don't Say Gay" bill. Many major companies have taken a stand against this law because it jeopardizes the safety of the LGBTQIA+ community, and, although Disney initially cut the kiss from the film, pressure from staff and audiences resulted in unprecedented progress. For featuring a brief same-sex kiss, the film's exhibition was banned in several countries and discouraged by the same public that was previously the focus of Disney's attention, as this is a conservative, "family-friendly" branded company. We believe it is relevant to study the case of "Lightyear" because it is a work that raises awareness and promotes the representation of affected communities in dark times, when less legislation is being approved to protect the rights and safety of queer people.

Keywords: “Don’t Say Gay” bill, gender stereotypes, LGBTQIA+ representation, Lightyear, Disney/Pixar

Procedia PDF Downloads 55
212 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur

Abstract:

Sign language (SL) is used by deaf people and by others who cannot speak but can hear, or who have a problem with spoken languages due to some disability. It is a visual gesture language that makes use of one hand or both hands, the arms, the face and the body to convey meanings and thoughts. An SL automation system is an effective way to provide an interface for communicating with hearing people using a computer. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This research work also presents a literature review of the SL corpora available for various SLs over the years. An ISL generation system requires a written form of SL, and there are several techniques available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Markup Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an interface for an admin panel to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notation can be created and stored in the database, and these notations can be converted into their corresponding SiGML files manually. The system takes a natural language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This research work will help researchers to understand the various techniques used for writing SL and for generating sign language systems.
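
As a rough sketch of the dictionary-lookup step described above, the example below maps an English or Hindi word to a stored entry holding its HamNoSys notation and the name of its manually produced SiGML file. The entry contents and notation strings are simplified placeholders, not a real HamNoSys/SiGML encoding or the authors' schema.

```python
# Simplified sketch of the multilingual dictionary lookup (entry contents are placeholders).
# A real entry would hold a full HamNoSys string and its manually converted SiGML file.
dictionary = {
    "hello":  {"hindi": "नमस्ते", "hamnosys": "<hamnosys placeholder>", "sigml_file": "hello.sigml"},
    "thanks": {"hindi": "धन्यवाद", "hamnosys": "<hamnosys placeholder>", "sigml_file": "thanks.sigml"},
}

def lookup(word):
    """Return the stored sign entry for an English or Hindi word, if present."""
    word = word.strip().lower()
    for english, entry in dictionary.items():
        if word == english or word == entry["hindi"]:
            return {"gloss": english, **entry}
    return None

entry = lookup("hello")
if entry:
    # The SiGML file would be handed to the WebGL avatar player for 3D animation.
    print(f"Play {entry['sigml_file']} for gloss '{entry['gloss']}'")
else:
    print("Sign not found; fall back to fingerspelling.")
```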

Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian sign language (ISL), sign language

Procedia PDF Downloads 197
211 A Study of Effectiveness of Topical Grapeseed Oil for Reducing Wrinkles on Periorbital Areas in Asian People in Thailand

Authors: Cherish Romina Prajitno, Sunisa Thaichinda

Abstract:

One indicator of facial aging is wrinkles, and wrinkles are a key indicator in the field of facial aesthetics. Wrinkles occur where fault lines develop in aging skin. Nowadays, people are more motivated to maintain an appealing and youthful appearance. Many individuals seek a fast recovery time for their aesthetic procedures and are interested in non-invasive techniques with a proven track record of successful outcomes. The purpose of this study is to assess the efficacy of 100% (pure) grapeseed oil for reducing periorbital wrinkles. The study used a split-face, double-blind design; the treatment was administered for three months to fifteen randomly assigned patients, with grapeseed oil applied to one side of the face and placebo to the other. The main outcome measure was determined by a comparative analysis of the participants' wrinkles at each visit using the Visioscan® VC98. Additionally, we evaluated the skin's elasticity and barrier function using the Cutometer® MP 530 and the Tewameter® TM300. Furthermore, we administered a satisfaction score questionnaire to the patients in the 12th week. The findings indicate that grapeseed oil had a noteworthy effect in diminishing the appearance of wrinkles in the periorbital region, enhancing the viscoelastic properties of the periorbital skin, and improving the skin barrier function in the periorbital area.

Keywords: periorbital wrinkles, pure grapeseed oil, split-face method

Procedia PDF Downloads 39
210 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is beneficial for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper we present a deep learning-based face recognition algorithm for accurate, comprehensive and trackable assessment. Based on the results from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments based on tracking muscle groups over time using a designed landmark detection method. The system also includes a function for grading and correcting the client's form during exercise. Experienced coaches and personal trainers can tell a client's limit based on their facial expression and muscle group movements, even during the first few sessions. Similarly, using a convolutional neural network, the system is trained with people's facial expressions to differentiate challenge levels for clients. It uses landmark detection to capture subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, the distal mobility of the glenohumeral joint, as well as distal mobility and its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch, Fitbit, etc., for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
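
A small sketch of the facial landmark detection building block described above is shown below, using the MediaPipe Face Mesh solution on a single frame. The frame path is a placeholder and the choice of library is an assumption for illustration, not the authors' implementation.

```python
# Sketch of extracting facial landmarks from one video frame with MediaPipe Face Mesh.
# The frame path is a placeholder; the library choice is an assumption.
import cv2
import mediapipe as mp

frame = cv2.imread("client_frame.jpg")
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # Normalized (x, y, z) coordinates; subtle changes can be tracked per landmark over time.
    print("Detected", len(landmarks), "facial landmarks")
    print("First landmark:", landmarks[0].x, landmarks[0].y, landmarks[0].z)
```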

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 101
209 Characterising the Processes Underlying Emotion Recognition Deficits in Adolescents with Conduct Disorder

Authors: Nayra Martin-Key, Erich Graf, Wendy Adams, Graeme Fairchild

Abstract:

Children and adolescents with Conduct Disorder (CD) have been shown to demonstrate impairments in emotion recognition, but it is currently unclear whether this deficit is related to specific emotions or whether it represents a global deficit in emotion recognition. An emotion recognition task with concurrent eye-tracking was employed to further explore this relationship in a sample of male and female adolescents with CD. Participants made emotion categorization judgements for presented dynamic and morphed static facial expressions. The results demonstrated that males with CD, and to a lesser extent, females with CD, displayed impaired facial expression recognition in general, whereas callous-unemotional (CU) traits were linked to specific problems in sadness recognition in females with CD. A region-of-interest analysis of the eye-tracking data indicated that males with CD exhibited reduced fixation times for the eye-region of the face compared to typically-developing (TD) females, but not TD males. Females with CD did not show reduced fixation to the eye-region of the face relative to TD females. In addition, CU traits did not influence CD subjects’ attention to the eye-region of the face. These findings suggest that the emotion recognition deficits found in CD males, the worst performing group in the behavioural tasks, are partly driven by reduced attention to the eyes.

Keywords: attention, callous-unemotional traits, conduct disorder, emotion recognition, eye-region, eye-tracking, sex differences

Procedia PDF Downloads 278
208 A Comparative Study of Efficacy and Safety of Salicylic Acid, Trichloroacetic Acid and Glycolic Acid in Various Facial Melanosis

Authors: Shivani Dhande, Sanjiv Choudhary, Adarshlata Singh

Abstract:

Introduction: Chemical peeling is a popular, relatively inexpensive day procedure and a generally safe method for the treatment of pigmentary skin disorders and for skin rejuvenation. Chemical peels are classified by depth of action into superficial, medium, and deep peels. Various facial pigmentary conditions have a significant impact on quality of life, causing psychological stress and necessitating safe and effective treatment. Aim and objectives: To compare the efficacy of salicylic acid (SA), trichloroacetic acid (TCA) and glycolic acid (GA) in facial melanosis (melasma, photomelanosis and post-acne pigmentation), and to study the side effects of the above-mentioned peeling agents. Method and materials: This was a randomized, parallel-control, single-blind study consisting of a total of 36 cases, 12 cases each of melasma, photomelanosis and post-acne pigmentation, in the age group 20-50 years with Fitzpatrick skin type 4. Wood's lamp examination was done to confirm the type of melasma. Patients with keloidal tendency, active herpes infection or a past history of hypersensitivity to salicylic, trichloroacetic or glycolic acid, as well as patients on systemic isotretinoin, were excluded. Clinical photographs at the beginning of therapy, and then serially, were taken to assess the clinical response. Prior to application, written informed consent was obtained. A post-auricular test peel was performed. Patients were divided into 3 groups of 12 patients each: melasma, photomelanosis and post-acne pigmentation. All three peels, SA 20% (once in 2 weeks), GA 50% (once in 3 weeks) and TCA 15% (once in 3 weeks), were used, with a total of six sittings for each patient. Before application of the peel, patients were asked to wash the face with soap and water. The face was then dried and cleaned with spirit and acetone to remove all cutaneous oils. GA, TCA and SA were applied with cotton buds/gauze in mild strokes. After a contact period of 5-10 minutes, neutralization was done with cold water. Post-peel topical sunscreen application was mandatory. MASI was used pre- and post-treatment to assess melasma. The investigator's global improvement scale for overall hyperpigmentation (4 - significant, 3 - moderate, 2 - mild, 1 - minimal, 0 - no change) and the patient's satisfaction grading scale (>70% - excellent response, 50-70% - good response, <50% - average response) were used to assess improvement in all three types of facial melanosis. Results: Of the 12 patients with melasma, 4 (33.33%) showed excellent results: 3 (25%) with GA and 1 (8.33%) with TCA. A good response was seen in 4 (33.33%) patients: 1 (8.33%) each for GA and SA and 2 (16.66%) for TCA. A poor response was seen in 4 (33.33%) patients: 1 (8.33%) for TCA and 3 (25%) for SA. Of the 12 patients with photomelanosis, an excellent result was seen in 3 (25%) patients, all with TCA. A good response was seen in 4 (33.33%) patients: 1 (8.33%) each for TCA and SA and 2 (16.66%) for GA. A poor response was seen in 5 (41.66%) patients: 3 (25%) for SA and 2 (16.66%) for GA. Of the 12 patients with post-acne pigmentation, an excellent response was seen in 3 (25%) patients: 2 (16.66%) with SA and 1 (8.33%) with TCA. A good response was seen in 5 (41.66%) patients: 2 (16.66%) each for SA and GA and 1 (8.33%) for TCA. A poor response was seen in 4 (33.33%) patients: 2 (16.66%) each for SA and TCA. No major side effects in the form of scarring or persistent pigmentation were seen. Transient blackening of the skin with a burning sensation was seen in cases treated with TCA and SA. Post-procedural itching and redness were noted with the GA peel. Conclusion: In our study, GA (50%), TCA (15%) and SA (20%) peels showed an excellent response in melasma, photomelanosis and post-acne pigmentation, respectively. All three peeling agents were well tolerated, without any significant side effects, at the above specified concentrations.

Keywords: facial melanosis, gycolic acid, salicylic acid, trichloroacetic acid

Procedia PDF Downloads 222
207 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
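
A condensed sketch of the stage-one classifier described above, a VGG16 backbone fine-tuned to detect the presence of strabismus from eye-region crops, is shown below using tf.keras. The input size, the frozen-backbone choice, and the commented-out training call are placeholders, not the authors' exact configuration.

```python
# Sketch of a VGG16-based eye-region classifier (placeholder data pipeline and hyperparameters).
import tensorflow as tf

# Pre-trained VGG16 backbone without its ImageNet classification head.
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
backbone.trainable = False  # freeze convolutional features for initial fine-tuning

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # stage 1: strabismus present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# eye_images: float tensor of shape (N, 224, 224, 3); labels: 0/1 per image (placeholders).
# model.fit(eye_images, labels, epochs=20, validation_split=0.2)
model.summary()
```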

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 55
206 Interventions for Children with Autism Using Interactive Technologies

Authors: Maria Hopkins, Sarah Koch, Fred Biasini

Abstract:

Autism is a lifelong disorder that affects one out of every 110 Americans. The deficits that accompany autism spectrum disorders (ASD), such as abnormal behaviors and social incompetence, often make it extremely difficult for these individuals to gain functional independence from caregivers. These long-term implications necessitate an immediate effort to improve social skills among children with an ASD. Any technology that could teach individuals with ASD the necessary social skills would not only be invaluable for the individuals affected but could also effect a massive saving to society in treatment programs. The overall purpose of the first study was to develop, implement, and evaluate an avatar tutor for social skills training in children with ASD. FaceSay was developed as a colorful computer program that contains several different activities designed to teach children specific social skills, such as eye gaze, joint attention, and facial recognition. The children with ASD were asked to attend to FaceSay or a control painting computer game for six weeks. Children with ASD who received the training showed an increase in emotion recognition, F(1, 48) = 23.04, p < 0.001 (adjusted Ms 8.70 and 6.79, respectively), compared to the control group. In addition, children who received the FaceSay training had higher post-test scores in facial recognition, F(1, 48) = 5.09, p < 0.05 (adjusted Ms: 38.11 and 33.37, respectively), compared to controls. The findings provide information about the benefits of computer-based training for children with ASD. Recent research also suggests the value of using socially assistive robots with children who have an ASD. Researchers investigating robots as tools for therapy in ASD have reported increased engagement, increased levels of attention, and novel social behaviors when robots are part of the social interaction. The overall goal of the second study was to develop a social robot designed to teach children specific social skills such as emotion recognition. The robot is approachable, with both an animal-like appearance and features of a human face (i.e., eyes, eyebrows, mouth). The feasibility of the robot is being investigated in children aged 7-12 to explore whether the social robot is capable of forming different facial expressions that accurately display emotions similar to those observed in the human face. The findings of this study will be used to create a potentially effective and cost-efficient therapy for improving the cognitive-emotional skills of children with autism. Implications and study findings using the robot as an intervention tool will be discussed.

Keywords: autism, intervention, technology, emotions

Procedia PDF Downloads 350
205 Face Recognition Using Body-Worn Camera: Dataset and Baseline Algorithms

Authors: Ali Almadan, Anoop Krishnan, Ajita Rattani

Abstract:

Facial recognition is a widely adopted technology in surveillance, border control, healthcare, banking services, and, lately, mobile user authentication, with Apple introducing the "Face ID" moniker with the iPhone X. A lot of research has been conducted on face recognition with datasets captured by surveillance cameras, DSLRs, and mobile devices. Recently, face recognition technology has also been deployed on body-worn cameras to keep officers safe, enable situational awareness, and provide evidence for trial. However, limited academic research has been conducted on this topic so far, and no publicly available dataset with a sufficient sample size exists. This paper aims to advance research in the area of face recognition using body-worn cameras. To this aim, the contribution of this work is two-fold: (1) collection of a dataset consisting of a total of 136,939 facial images of 102 subjects captured using body-worn cameras in indoor and daylight conditions, and (2) evaluation of various deep-learning architectures for face identification on the collected dataset. Experimental results suggest a maximum True Positive Rate (TPR) of 99.86% at a False Positive Rate (FPR) of 0.000, obtained by a SphereFace-based deep learning architecture in the daylight condition. The collected dataset and the baseline algorithms will promote further research and development. A download link for the dataset and the algorithms is available by contacting the authors.
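
The reported operating point (TPR at a fixed FPR) can be read off a ROC curve computed from match scores. The sketch below assumes arrays of genuine and impostor similarity scores from any face matcher; the score values are illustrative.

```python
# Sketch: compute TPR at a chosen FPR from similarity scores (scores here are illustrative).
import numpy as np
from sklearn.metrics import roc_curve

# 1 = genuine (same identity), 0 = impostor (different identity); scores from a face matcher.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
scores = np.array([0.97, 0.92, 0.88, 0.95, 0.30, 0.41, 0.22, 0.15, 0.55, 0.05])

fpr, tpr, thresholds = roc_curve(labels, scores)

target_fpr = 0.0
# Highest TPR achievable while keeping FPR at or below the target.
best_tpr = tpr[fpr <= target_fpr].max()
print(f"TPR at FPR <= {target_fpr}: {best_tpr:.4f}")
```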

Keywords: face recognition, body-worn cameras, deep learning, person identification

Procedia PDF Downloads 137
204 Multimodal Sentiment Analysis With Web Based Application

Authors: Shreyansh Singh, Afroz Ahmed

Abstract:

Sentiment analysis aims to automatically reveal the underlying attitude that we hold towards an entity. The aggregate of this sentiment over a population amounts to opinion polling and has numerous applications. Current text-based sentiment analysis relies on the construction of word embeddings and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is now widely used for customer satisfaction assessment and brand perception analysis. With the growth of social media, multimodal sentiment analysis is set to bring new opportunities, as complementary data streams allow us to improve on and go beyond text-based sentiment analysis. Since sentiment can be detected through the affective traces it leaves, such as facial and vocal expressions, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches use Recurrent Neural Networks (RNNs) with LSTM units to increase their performance. In this study, we define sentiment and the problem of multimodal sentiment analysis and review recent advances in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, and human-machine and human-human interactions. Challenges and opportunities of this emerging field are also discussed, supporting our thesis that multimodal sentiment analysis holds significant untapped potential.
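
A minimal sketch of the text branch of the RNN/LSTM approach mentioned above is shown below with tf.keras. The vocabulary size, sequence length and training data are placeholders, and the audio and visual branches of a full multimodal model are omitted.

```python
# Sketch of the text branch of a sentiment model: word embeddings fed to an LSTM.
# Vocabulary size, sequence length and data are placeholders; audio/visual branches omitted.
import tensorflow as tf

vocab_size, max_len = 20000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128, input_length=max_len),  # learn word embeddings
    tf.keras.layers.LSTM(64),                                          # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),                    # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# padded_sequences: int tensor of shape (N, max_len); sentiments: 0/1 labels (placeholders).
# model.fit(padded_sequences, sentiments, epochs=5, validation_split=0.1)
model.summary()
```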

Keywords: sentiment analysis, RNN, LSTM, word embeddings

Procedia PDF Downloads 88
203 Antioxidant Face Mask from Purple Sweet Potato (Ipomoea batatas) with Oleum Citrus

Authors: Lilis Kistriyani, Dine Olisvia, Lutfa Rahmawati

Abstract:

A facial mask is an important part of every beauty treatment because it gives a smooth and gentle effect to the face. This research was carried out to make an edible film to be applied as a face mask. The main ingredient in making this edible film is purple sweet potato powder, with the addition of glycerol as a plasticizer. One of the components of purple sweet potato is flavonoid compounds. The purpose of this study was to determine the effect of increasing the amount of glycerol on flavonoid release and on the physical and biological properties of the edible film produced. The stages of this research were the production of the edible film, followed by several analyses: UV-Vis spectrophotometry to determine how much flavonoid can be released into facial skin, tensile strength and elongation at break analysis, biodegradability analysis, and microbiological analysis. The variation between edible films was the volume of glycerol: 1 ml, 2 ml, and 3 ml. The results of the UV-Vis spectrophotometric analysis showed that the highest released flavonoid concentration was 20.33 ppm, in the 2 ml glycerol variation. The best tensile strength value was 8.502 N, and the greatest elongation at break was 14%, both in the 1 ml glycerol variation. In the biodegradability test, the more glycerol was added, the faster the edible film degraded. The results of the microbiological analysis showed that purple sweet potato extract has the ability to inhibit the growth of Propionibacterium acnes, as seen from the inhibition zone of 18.9 mm.

Keywords: face mask, edible film, plasticizer, flavonoid

Procedia PDF Downloads 152
202 Effect of Lullabies on Babies Stress and Relaxation Symptoms in the Neonatal Intensive Care Units

Authors: Meltem Kürtüncü, Işın Alkan

Abstract:

Objective: This study used an experimental design to determine whether lullabies sung in the mother's voice or in a stranger's voice to babies born at term and hospitalized in a neonatal intensive care unit had an effect on the infants' stress and relaxation symptoms. Method: Data were obtained from 90 newborn babies who were hospitalized in the Neonatal Intensive Care Unit of Zonguldak Maternity and Children Hospital between September 2015 and January 2016 and who met the eligibility criteria. The lullaby concert was performed during one of the suitable care hours. Stress and relaxation symptoms were recorded by the researcher on the "Newborn response follow-up form" pre-care and post-care. Results: When the stress symptoms of infants in the experimental and control groups were compared before care, no statistically significant difference was detected in the rates of crying, contraction, facial grimacing, flushing, cyanosis or temperature increase. After care, crying, contraction, facial grimacing, flushing and restlessness showed statistically significant differences between the groups, whereas cyanosis and temperature increase did not. In the control group, the rates of crying, contraction, facial grimacing, flushing and restlessness were significantly higher than in the experimental groups. When relaxation symptoms were compared before care, the eye contact rates of infants who listened to a lullaby in the mother's voice were significantly higher than those of infants who listened to a lullaby in a stranger's voice and of infants in the control group. After care, the relaxation symptoms of eye contact, smiling, sucking/searching, yawning, non-crying and sleep behaviors showed statistically significant differences; in the control group, these behaviors occurred to a significantly lower degree than in the experimental groups. Conclusion: Lullaby concerts should be preferred in neonatal intensive care units, as they mask ambient noise, reduce stress symptoms, increase relaxation symptoms, and have soothing and stimulating effects that ease the transition to sleep.

Keywords: lullaby, mother voice, relaxation, stress

Procedia PDF Downloads 206
201 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls

Authors: Bruno RamaëL, GwenaëL Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac

Abstract:

No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using Solidworks and Ansys Workbench (ANSYS Inc.) by solving an optimization problem. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network in a cohort of 30 healthy volunteers: the time-evolution of the blood vessel contours, and thus of the cross-sectional area, was measured by 3D angiography sequences of phase-contrast MRI, and the blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then transformed with Solidworks into volumes that are compatible with the Ansys software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the fluid domain to improve the accuracy of the fluid flow calculations. Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified for the common carotid using the Ansys design model. Under large deformations, a stiffness of 800 kPa was measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity, as well as on the vessel anatomy and deformability, will be key to improving surgical planning in the case of such vascular pathologies.
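
As a heavily simplified illustration of relating the measured quantities (brachial pressure and lumen cross-section over the cardiac cycle) to an effective wall stiffness, the sketch below computes Peterson's pressure-strain elastic modulus from systolic and diastolic values. It is a first-order estimate, not the fluid-structure inverse analysis performed with Ansys in the study, and all input numbers are placeholders.

```python
# Simplified sketch: effective stiffness from pressure and cross-section extremes.
# This is Peterson's pressure-strain modulus, not the full Ansys FSI inverse analysis;
# the input values are placeholders.
import math

p_sys, p_dia = 120.0, 80.0        # brachial pressures in mmHg (placeholders)
area_sys, area_dia = 46.0, 40.0   # carotid lumen areas in mm^2 from PC-MRI (placeholders)

MMHG_TO_KPA = 0.133322

# Equivalent diameters assuming a circular lumen cross-section.
d_sys = 2.0 * math.sqrt(area_sys / math.pi)
d_dia = 2.0 * math.sqrt(area_dia / math.pi)

# Peterson's elastic modulus Ep = (Ps - Pd) * Dd / (Ds - Dd), expressed in kPa.
ep_kpa = (p_sys - p_dia) * MMHG_TO_KPA * d_dia / (d_sys - d_dia)
print(f"Effective pressure-strain modulus: {ep_kpa:.0f} kPa")
```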

Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations

Procedia PDF Downloads 289
200 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection

Authors: Petra Hypšová, Martin Seitl

Abstract:

One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expressions, and heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced for different interviewers affect physiological and behavioral changes. The main objective of this study is to apply specific non-contact technologies for deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking and behavioral analysis with FaceReader 9.0 software was used to achieve these goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. Each interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy) in order to follow standardized procedures. The subject provided all information to the simulated interviewer. In the follow-up analyses, facial temperature (main ROIs: forehead, cheeks, the tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the variables studied varied with respect to the production of prepared versus spontaneous truths and lies, as well as across the different simulated interviewers. The results also supported the assumption of variability in physiological and behavioral values between the subject’s resting state (the so-called baseline) and the production of prepared and spontaneous truths and lies. The series of psychophysiological experiments provided evidence of variability in the areas of interest during the production of truths and lies to different interviewers. The combination of technologies used also allowed a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.
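As a small illustration of the thermal-imaging side of the protocol, the sketch below averages per-pixel temperatures over rectangular facial ROIs in a radiometric frame. The frame, ROI coordinates, and temperatures are placeholders; the study itself relies on dedicated infrared hardware and FaceReader 9.0 for expression analysis.

```python
# Illustrative sketch only: mean facial-ROI temperatures from a thermal frame.
import numpy as np

def mean_roi_temperature(frame_celsius, roi):
    """frame_celsius: 2D array of per-pixel temperatures; roi: (row, col, height, width)."""
    r, c, h, w = roi
    return float(frame_celsius[r:r + h, c:c + w].mean())

# hypothetical 240x320 radiometric frame and ROIs (forehead, nose tip, chin)
frame = np.random.normal(34.0, 0.4, size=(240, 320))
rois = {"forehead": (30, 130, 30, 60),
        "nose_tip": (130, 150, 20, 20),
        "chin": (190, 140, 25, 40)}

baseline = {name: mean_roi_temperature(frame, roi) for name, roi in rois.items()}
# the same extraction would be repeated per frame during truth/lie production and
# compared against the baseline to quantify the temperature change in each ROI
```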

Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment

Procedia PDF Downloads 75
199 Animations for Teaching Food Chemistry: A Design Approach for Linking Chemistry Theory to Everyday Food

Authors: Paulomi (Polly) Burey, Zoe Lynch

Abstract:

In STEM education, students often have difficulty linking the static images and words of textbooks or online resources to the underlying mechanisms of the topic of study. This can dissuade some students from pursuing study in the physical and chemical sciences. A growing movement among present-day students demonstrates that the YouTube generation feel they learn best from video or dynamic, interactive learning tools, and will seek these out as alternatives to their textbooks and the classroom learning environment. Chemistry, and in particular the visualization of molecular structures in everyday materials, can prove difficult to comprehend without significant interaction with the teacher about the content and concepts, beyond the timeframe of a typical class. This can create a learning hurdle for distance education students, so it is necessary to provide strong electronic tools and resources to aid their learning. As one of these electronic resources, an animation design approach that links everyday materials to their underlying chemistry would benefit student learning, with the focus here being on food. The animations were designed and storyboarded with a scaling approach and commence with a focus on the food material itself and its component parts. This is followed by animated transitions to its underlying microstructure and identifying features, and finally to the molecules responsible for those microstructural features. The animation ends with a reverse transition back through the molecular structure and microstructure, all the way back to the original food material, and also animates some reactions that may occur during food processing to demonstrate the purpose of the underlying chemistry and how it affects the food we eat. This cyclical approach, which links students’ existing knowledge of food to guide them toward more complex knowledge and then reinforces their learning by linking back to that prior knowledge, enhances student understanding. Food is also an ideal material system for students to interact with in a hands-on manner to further reinforce their learning. The animations were launched this year in a second-year university food chemistry course, with improved learning outcomes for the cohort.
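To make the cyclical scaling approach concrete, the sketch below encodes a hypothetical storyboard as a simple data structure; the scenes, examples, and durations are invented for illustration and are not the authors’ production assets.

```python
# Illustrative sketch only: a hypothetical storyboard for the food -> microstructure
# -> molecules -> back scaling cycle described above.
storyboard = [
    {"scene": "food",           "example": "bread slice",        "duration_s": 5},
    {"scene": "microstructure", "example": "gas cells in crumb", "duration_s": 8},
    {"scene": "molecules",      "example": "gluten network",     "duration_s": 10},
    {"scene": "reaction",       "example": "Maillard browning",  "duration_s": 8},
    {"scene": "microstructure", "example": "crust formation",    "duration_s": 6},
    {"scene": "food",           "example": "baked loaf",         "duration_s": 5},
]

for shot in storyboard:
    print(f"{shot['scene']:>15}: {shot['example']} ({shot['duration_s']} s)")
```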

Keywords: chemistry, food science, future pedagogy, STEM Education

Procedia PDF Downloads 126
198 Gross and Clinical Anatomy of the Skull of Adult Chinkara, Gazella bennettii

Authors: Salahud Din, Saima Masood, Hafsa Zaneb, Habib Ur Rehman, Saima Ashraf, Imad Khan, Muqader Shah

Abstract:

The objectives of this study were (1) to examine the gross morphological, osteometric, and clinically important landmarks in the skull of adult Chinkara to obtain baseline data, and (2) to study sexual dimorphism between male and female adult Chinkara through osteometry. For this purpose, after postmortem examination, the carcasses of adult Chinkara of known sex and age were buried at the Manglot Wildlife Park and Ungulate Breeding Centre, Nizampur, Pakistan; after a specific period of time, the bones were unearthed. Gross morphological features and various osteometric parameters of the skull were studied at the University of Veterinary and Animal Sciences, Lahore, Pakistan. The Chinkara skull was elongated in shape and consisted of thirty-two bones, comprising a cranial and a facial part. The facial region of the skull was formed by the maxilla, incisive, palatine, vomer, pterygoid, frontal, parietal, nasal, and turbinate bones, the mandible, and the hyoid apparatus. The cranium comprised the occipital, ethmoid, sphenoid, interparietal, parietal, temporal, and frontal bones. The foramina identified in the facial region were the infraorbital, supraorbital, lacrimal, sphenopalatine, maxillary, and caudal palatine foramina. The foramina of the cranium were the internal acoustic meatus, external acoustic meatus, hypoglossal canal, transverse canal, sphenorbital fissure, carotid canal, foramen magnum, stylomastoid foramen, foramen rotundum, foramen ovale, jugular foramen, and the rostral and caudal foramina that form the pterygoid canal. The measured craniometric parameters did not show statistically significant differences (p > 0.05) between male and female adult Chinkara, except for the palatine bone, OI, DO, IOCDE, OCT, ICW, IPCW, and PCPL, which were significantly higher (p < 0.05) in males than in females; the mean values of the mandibular parameters, except b and h, were also significantly higher (p < 0.05) in male than in female Chinkara. Sexual dimorphism exists in some of the orbital and foramen magnum parameters, while a high degree of sexual dimorphism was identified in the mandible. In conclusion, the morphocraniometric study of the Chinkara skull makes it possible to identify the species-specific skull and to use these clinical measurements in practical applications.
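The sex comparison described above can be illustrated with an independent-samples t-test on one craniometric parameter; the parameter values below are invented for illustration, and the abstract does not state which test the authors used.

```python
# Illustrative sketch only: testing one craniometric parameter for sexual dimorphism.
from scipy.stats import ttest_ind

male_icw   = [52.1, 53.4, 51.8, 52.9, 53.0]   # hypothetical ICW values (mm), males
female_icw = [49.8, 50.2, 49.5, 50.9, 50.1]   # hypothetical ICW values (mm), females

t, p = ttest_ind(male_icw, female_icw, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> parameter differs between sexes
```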

Keywords: Chinkara, skull, morphology, morphometrics, sexual dimorphism

Procedia PDF Downloads 260
197 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed to detect facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, whose absence is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too quickly, which drives the model toward over-fitting, because it is unable to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than a static input-tensor shape in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, which means assigning more weight to classes that lie close to each other (namely, “hard labels to learn”). In this way, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems. It works with an alternative gradient-update procedure using an exponentially weighted moving average for faster convergence, and it exploits weight decay to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets, with 93.30% on FER-2013, a 16% improvement compared to the first rank after 10 years, reaching 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, and the network is shown to provide top performance compared to other networks, which require much larger training datasets.
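The abstract does not give the exact formulation of the Dynamic Soft-Margin SoftMax, so the sketch below shows only the generic idea behind margin-based softmax losses: subtract an additive margin from the target-class logit before the cross-entropy, and let the margin follow a schedule during training. The margin values and the linear schedule are assumptions.

```python
# Illustrative sketch only: a generic additive soft-margin softmax loss with a
# hypothetical dynamic margin schedule (not the authors' exact formulation).
import torch
import torch.nn.functional as F

def soft_margin_softmax_loss(logits, targets, margin=0.3):
    """Cross-entropy with an additive margin subtracted from the target-class logit.

    Lowering the target-class logit before the softmax forces the network to
    produce more discriminant (better separated) feature vectors, which is the
    general idea behind margin-based softmax variants.
    """
    margined = logits.clone()
    margined[torch.arange(logits.size(0)), targets] -= margin
    return F.cross_entropy(margined, targets)

def dynamic_margin(epoch, max_epochs, m_min=0.1, m_max=0.5):
    """Hypothetical linear schedule: tighten the margin as training proceeds."""
    return m_min + (m_max - m_min) * epoch / max_epochs

# usage sketch: batch of 8 samples, 7 emotion classes
logits = torch.randn(8, 7)
targets = torch.randint(0, 7, (8,))
loss = soft_margin_softmax_loss(logits, targets, margin=dynamic_margin(5, 50))
```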

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 51
196 Numerical Simulation of Two-Phase Flows Using a Pressure-Based Solver

Authors: Lei Zhang, Jean-Michel Ghidaglia, Anela Kumbaro

Abstract:

This work focuses on the numerical simulation of two-phase flows based on the bi-fluid six-equation model widely used in many industrial areas, such as nuclear power plant safety analysis. A pressure-based numerical method is adopted in our studies because two-phase flows commonly span a large range of Mach numbers, owing to the mixture of liquid and gas, and density-based solvers experience stiffness problems as well as a loss of accuracy when approaching the low-Mach-number limit. This work extends the semi-implicit pressure solver in the nuclear component CUPID code, where the governing equations are solved on unstructured grids with co-located variables to accommodate complicated geometries. A conservative version of the solver is developed in order to capture shocks exactly in single-phase flows and is extended to two-phase situations. An interfacial pressure term is added to the bi-fluid model to make the system hyperbolic and to establish a well-posed mathematical problem that allows convergent solutions to be obtained with refined meshes. The ability of the numerical method to treat phase appearance and disappearance, as well as the behavior of the scheme at low Mach numbers, is demonstrated through several numerical results. Finally, interfacial mass and heat transfer models are included to deal with situations in which mass and energy transfer between phases is important, and associated industrial numerical benchmarks with tabulated equations of state (EOS) for the fluids are performed.
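The abstract does not specify which interfacial pressure closure is used; the sketch below computes one common CATHARE-type correction often added to the six-equation bi-fluid model to restore hyperbolicity, purely to illustrate the structure of such a term. The coefficient value and the flow state are assumptions.

```python
# Illustrative sketch only: a CATHARE-type interfacial-pressure correction for the
# six-equation bi-fluid model (not necessarily the closure used in this work).
def interfacial_pressure_correction(alpha_g, rho_g, rho_l, u_g, u_l, delta=1.2):
    """Return p - p_i, the bulk-minus-interfacial pressure difference.

    delta is a model coefficient; values slightly above 1 are typically enough
    to render the one-dimensional bi-fluid system hyperbolic.
    """
    alpha_l = 1.0 - alpha_g
    return delta * (alpha_g * alpha_l * rho_g * rho_l
                    / (alpha_g * rho_l + alpha_l * rho_g)) * (u_g - u_l) ** 2

# usage sketch: air-water mixture with moderate interphase slip
dp = interfacial_pressure_correction(alpha_g=0.3, rho_g=1.2, rho_l=1000.0,
                                     u_g=2.0, u_l=1.0)
print(f"p - p_i = {dp:.2f} Pa")
```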

Keywords: two-phase flows, numerical simulation, bi-fluid model, unstructured grids, phase appearance and disappearance

Procedia PDF Downloads 369
195 3D Multimedia Model for Educational Design Engineering

Authors: Mohanaad Talal Shakir

Abstract:

This paper proposes an educational design using multimedia technology for Computer Technology Engineering at Alma'ref University College in Iraq. The paper evaluates students' acceptance, cognition, and interactivity with the proposed model, using statistical relationships to determine the stage of the model. The objectives of the proposed educational design are to develop user-friendly software for educational purposes using multimedia technology and to develop a 3D model animation that simulates the assembling and disassembling process for high-speed flow.

Keywords: CAL, multimedia, shock tunnel, interactivity, engineering education

Procedia PDF Downloads 594