Search results for: 3D face recognition
4127 Haunted Pilgrims: The Absence of Touch and the Sounds of Silence in Online Communication
Authors: Karen Armstrong
Abstract:
This paper explores the impact of two aspects of online communication: the absence of touch and the sound of silence. In order to place the discussion in context, the paper begins with a brief description of communication itself and the many ways in which we communicate with each other, both verbally and non-verbally. Next, the discussion moves to the general characteristics of online communication and the ways in which it is similar to, as well as very different from, face-to-face communication. This examination considers the ways we communicate primarily in email, but also through texting, Instagram stickers, and Twitter, the primary modes of online communication aside from face-to-face video, which is less common. With few exceptions, of course, most such interactions take place without sound or physical contact. First to be examined is the absence of touch, followed by the presence of silence. The paper explores these issues, concluding with the ways in which both the absence of touch and the prevalence of silence are important determinants shaping communication in our online universe.
Keywords: absence of touch, communication, face-to-face, haptics, online, silence
Procedia PDF Downloads 368
4126 Investigation of New Gait Representations for Improving Gait Recognition
Authors: Chirawat Wattanapanich, Hong Wei
Abstract:
This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. One is to append lower knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations are created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on the CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both the entire images as features and features with dimensionality reduced by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower knee regions to the original GEI and the convolutional GEI, contribute significantly to performance improvement in gait recognition. The experimental results have shown that the average recognition rate can be improved from 75.65% to 87.50%.
Keywords: convolutional image, lower knee, gait
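As an illustration of the pipeline this abstract describes (convolutional GEI features, optional PCA, multi-class SVM), the following is a minimal sketch; the random kernel bank, kernel count and PCA dimensionality are assumptions, not the authors' settings, and CASIA-B loading is omitted.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def convolutional_gei(gei, kernel_size=3, n_kernels=4, seed=0):
    """Convolve a Gait Energy Image with a small kernel bank and flatten the responses."""
    rng = np.random.default_rng(seed)
    kernels = rng.normal(size=(n_kernels, kernel_size, kernel_size))
    return np.concatenate([convolve(gei, k, mode="nearest").ravel() for k in kernels])

# geis_train, geis_test: lists of 2-D GEI arrays; y_train, y_test: subject labels
# X_train = np.array([convolutional_gei(g) for g in geis_train])
# X_test = np.array([convolutional_gei(g) for g in geis_test])
# model = make_pipeline(PCA(n_components=100), SVC(kernel="linear"))  # PCA fitted on training data only
# model.fit(X_train, y_train); print("accuracy:", model.score(X_test, y_test))
```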
Procedia PDF Downloads 201
4125 Offline Signature Verification in Punjabi Based On SURF Features and Critical Point Matching Using HMM
Authors: Rajpal Kaur, Pooja Choudhary
Abstract:
Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an imposter. Signature recognition systems can be categorized as offline (static) and online (dynamic). This paper presents a SURF-feature-based recognition system for offline signatures that is trained with low-resolution scanned signature images. The signature of a person is an important biometric attribute of a human being which can be used to authenticate human identity. The signature can be handled as an image and recognized using computer vision and HMM techniques. With modern computers, there is a need for fast signature recognition algorithms, and the area still offers considerable scope for research. In this paper, off-line (static) signature recognition and verification using SURF features with an HMM is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on parameters extracted from the signature using various image processing techniques. The off-line signature verification and recognition is implemented on the Matlab platform. The approach has been analyzed and tested and found suitable for its purpose. The proposed method performs better than other recently proposed methods.
Keywords: offline signature verification, offline signature recognition, signatures, SURF features, HMM
Procedia PDF Downloads 383
4124 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition
Authors: Mohamed Lotfy, Ghada Soliman
Abstract:
Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot-matrix digit image as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on 3287 synthetic images and tested on 658 synthetic images, representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition with the Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to text recognition tasks, such as text classification and sentiment analysis.
Keywords: computer vision, pattern recognition, optical character recognition, deep learning
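The binary-embedding idea can be sketched in PyTorch as follows; the layer widths, the 32x32 crop size and the hypothetical 4-bit code per digit are illustrative assumptions, not the paper's architecture, and training against the reference codes is omitted.

```python
import torch
import torch.nn as nn

class BinaryEmbeddingCNN(nn.Module):
    def __init__(self, code_bits=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(code_bits), nn.Sigmoid())

    def forward(self, x):                    # x: (N, 1, H, W) grayscale digit crops
        return self.head(self.features(x))   # values in (0, 1)

model = BinaryEmbeddingCNN()
crop = torch.rand(1, 1, 32, 32)              # dummy 32x32 dot-matrix digit crop
binary_code = (model(crop) > 0.5).int()      # rounded binary embedding, e.g. tensor([[0, 1, 1, 0]])
# Recognition: choose the digit whose reference binary code is closest (Hamming distance)
# to `binary_code`; training with a BCE loss against those codes is not shown here.
```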
Procedia PDF Downloads 92
4123 Discovering the Real Psyche of Human Beings
Authors: Sheetla Prasad
Abstract:
The objective of this study is to discover the real psyche of human beings in order to predict the mode, direction and strength of the potential of an individual's actions. The human face was taken as the central source in the search for the route of the real psyche. Analysis of the face architecture (shape and salient features of the face) was carried out on three directional photographs (60° left, 60° right and camera-facing) of human beings. The shapes and features of the human face were scaled in 177 units on the basis of face-feature locations (FFL). Mathematical analysis of the FFLs was carried out with a self-developed and standardized formula. At this phase, 800 samples were taken from a population of students, teachers, advocates, administrative officers, and common persons. The findings show that the real psyche has two external rings (ER). These ER are themselves generators of two independent psyches (manifested and manipulated). Prima facie, it was shown that micro differences in FFLs have the potential to predict the state of the human psyche. The potential of the psyches was determined by the saving and distribution of mental energy, and this was also demonstrated mathematically.
Keywords: face architecture, psyche, potential, face functional ratio, external rings
Procedia PDF Downloads 503
4122 Recognition of Grocery Products in Images Captured by Cellular Phones
Authors: Farshideh Einsele, Hassan Foroosh
Abstract:
In this paper, we present a robust algorithm to recognize extracted text from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale and rotation invariant. Since text degradations cannot be appropriately defined using well-known geometric transformations such as translation, rotation, affine transformation and shearing, we use the whole character's black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
Keywords: camera-based OCR, feature extraction, document, image processing, grocery products
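A small sketch of the classifier described here, using the binarized character image itself as the feature vector and class-mean templates; the normalized image size and the use of mean templates are assumptions, not the paper's exact setup.

```python
import numpy as np

def to_feature(char_img_binary):
    """Flatten a normalized (e.g. 32x32) binary character image into a feature vector."""
    return char_img_binary.astype(np.float32).ravel()

def fit_templates(images, labels):
    """One template per character class: the mean feature vector of its training examples."""
    feats = np.array([to_feature(im) for im in images])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in set(labels.tolist())}

def classify(char_img_binary, templates):
    """Minimum-distance decision: return the class whose template is nearest."""
    x = to_feature(char_img_binary)
    return min(templates, key=lambda c: np.linalg.norm(x - templates[c]))
```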
Procedia PDF Downloads 405
4121 A Service Evaluation Exploring the Effectiveness of a Tier 3 Weight Management Programme Offering Face-To-Face and Remote Dietetic Support
Authors: Rosemary E. Huntriss, Lucy Jones
Abstract:
Obesity and excess weight continue to be significant health problems in England. Traditional weight management programmes offer face-to-face support or group education. Remote care is recognised as a viable means of support; however, its effectiveness has not previously been evaluated in a tier 3 weight management setting. This service evaluation explored the effectiveness of online coaching, telephone support, and face-to-face support as optional management strategies within a tier 3 weight management programme. Outcome data were collected for adults with a BMI ≥ 45, or ≥ 40 with complex comorbidity, who were referred to a tier 3 weight management programme from January 2018 and had been discharged before October 2018. Following an initial 45-minute consultation with a specialist weight management dietitian, patients were offered a choice of follow-up support in the form of online coaching supported by an app (8 x 15-minute coaching sessions) or face-to-face or telephone appointments (4 x 30 minutes). All patients were invited to a final 30-minute face-to-face assessment. The planned intervention time was between 12 and 24 weeks. Patients were offered access to adjunct face-to-face or telephone psychological support. One hundred and thirty-nine patients were referred into the programme from January 2018 and discharged before October 2018. One hundred and twenty-four patients (89%) attended their initial assessment. Of those who attended their initial assessment, 110 patients (88.0%) completed more than half of the programme and 77 patients (61.6%) completed all sessions. The average length of the completed programme (all sessions) was 17.2 (SD 4.2) weeks. Eighty-five (68.5%) patients were coached online, 28 (22.6%) patients were supported face-to-face, and 11 (8.9%) chose telephone support. Two patients changed from online coaching to face-to-face support due to personal preference and were included in the face-to-face group for analysis. For those with data available (n=106), average weight loss across the programme was 4.85 (SD 3.49)%; average weight loss was 4.70 (SD 3.19)% for online coaching, 4.83 (SD 4.13)% for face-to-face support, and 6.28 (SD 4.15)% for telephone support. There was no significant difference in weight loss between face-to-face support and online coaching (4.83 (SD 4.13)% vs. 4.70 (SD 3.19)%; p=0.87) or between face-to-face and remote support (online coaching and telephone support combined) (4.83 (SD 4.13)% vs. 4.85 (SD 3.30)%; p=0.98). Remote support has been shown to be as effective as face-to-face support provided by a dietitian in the short term within a tier 3 weight management setting. The completion rates were high compared with other tier 3 weight management services, suggesting that offering remote support as an option may improve completion rates within a weight management service.
Keywords: dietitian, digital health, obesity, weight management
Procedia PDF Downloads 140
4120 Vision-Based Hand Segmentation Techniques for Human-Computer Interaction
Abstract:
This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm to serve as input to another system that attempts to bring HCI performance close to human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of a dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an occlusion-handling approach is proposed that gives superior results for detection of the hand in an image.
Keywords: HCI, sign language recognition, object tracking, hand segmentation
Procedia PDF Downloads 409
4119 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform
Abstract:
Image recognition, as one of the most critical technologies in computer vision, helps machines and robots understand a scene and, if deployed appropriately, will drive advances in remote sensing and industrial automation. With the development of AI technologies, many mature and sophisticated neural networks have been developed for image recognition. However, the computer vision platforms (the hardware that supports neural networks for image recognition) are just as crucial as the neural network technologies and deserve equal attention as research subjects, since different platforms determine how well different neural networks perform in recognition. In this paper, three different computer vision platforms – a Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU) – are explored, and four prominent neural network architectures (AlexNet, VGG(16/19), GoogleNet, and ResNet(18/34/50)) are investigated. The performance of each pairing of computer vision platform and neural network is evaluated in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
Keywords: AlexNet, VGG, GoogleNet, ResNet, Jetson Nano, CUDA, COCO-NET, CIFAR-10, ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google Colab
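The pairwise benchmarking described above can be approximated with torchvision's pretrained models, timing inference on whatever device the platform exposes; the model list, batch size, and the omission of the ImageNet accuracy evaluation are simplifications of the paper's protocol.

```python
import time
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"   # CUDA on Jetson/laptop, GPU runtime on Colab
candidates = {
    "alexnet": models.alexnet(weights="DEFAULT"),
    "vgg16": models.vgg16(weights="DEFAULT"),
    "googlenet": models.googlenet(weights="DEFAULT"),
    "resnet50": models.resnet50(weights="DEFAULT"),
}
batch = torch.rand(8, 3, 224, 224).to(device)              # dummy ImageNet-sized batch
for name, net in candidates.items():
    net = net.to(device).eval()
    with torch.no_grad():
        net(batch)                                          # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        net(batch)
        if device == "cuda":
            torch.cuda.synchronize()
        print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms per batch")
```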
Procedia PDF Downloads 89
4118 Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation
Authors: Ksenia Meshkova
Abstract:
With the increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining metadata. This process means not just recognition of an object or its movement, but recognition of the entire scene rather than separate frames, with timeline segmentation as the final result. Labeling datasets is time consuming and, moreover, attributing characteristics to particular scenes is genuinely difficult due to their nature. In this article, we consider the application of autoencoders to unsupervised scene recognition and clustering based on interpretable features, focusing on the particular types of autoencoders relevant to our study. We take a look at the specificity of deep learning related to information theory and rate-distortion theory, and describe solutions that address the poor interpretability of deep learning in media content processing. In conclusion, we present the results of a custom framework, based on autoencoders, that performs the scene recognition studied above and generates highlights from this recognition. We do not describe the mathematics of neural networks in detail, but clarify the necessary concepts and pay attention to important nuances.
Keywords: neural networks, computer vision, representation learning, autoencoders
Procedia PDF Downloads 125
4117 A Supervised Face Parts Labeling Framework
Authors: Khalil Khan, Ikram Syed, Muhammad Ehsan Mazhar, Iran Uddin, Nasir Ahmad
Abstract:
Face parts labeling is the process of assigning class labels to each face part. A face parts labeling (FPL) method which divides a given image into its constituent parts is proposed in this paper. A database, FaceD, consisting of 564 images is labeled by hand and made publicly available. A supervised learning model is built through extraction of features from the training data. The testing phase is performed with two semantic segmentation methods, i.e., pixel-based and super-pixel-based segmentation. In pixel-based segmentation, a class label is assigned to each pixel individually. In the super-pixel-based method, a class label is assigned to each super-pixel only; as a result, the same class label is given to all pixels inside a super-pixel. The pixel labeling accuracy reported with the pixel-based and super-pixel-based methods is 97.68% and 93.45%, respectively.
Keywords: face labeling, semantic segmentation, classification, face segmentation
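A hedged sketch of the super-pixel route described above, assuming SLIC over-segmentation, simple mean-colour plus position features, and any scikit-learn classifier standing in for the paper's unspecified model; the predicted class is written back to every pixel of the super-pixel.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image, segments):
    """One feature vector per super-pixel: mean RGB plus normalized centroid position."""
    feats = []
    for sp in np.unique(segments):
        mask = segments == sp
        ys, xs = np.nonzero(mask)
        color = image[mask].mean(axis=0)
        pos = [ys.mean() / image.shape[0], xs.mean() / image.shape[1]]
        feats.append(np.concatenate([color, pos]))
    return np.array(feats)

def label_image(image, classifier, n_segments=400):
    segments = slic(image, n_segments=n_segments, start_label=0)
    preds = classifier.predict(superpixel_features(image, segments))
    label_map = np.zeros(segments.shape, dtype=int)
    for sp, cls in zip(np.unique(segments), preds):
        label_map[segments == sp] = cls      # same class label for all pixels in the super-pixel
    return label_map
```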
Procedia PDF Downloads 253
4116 Restoring Sagging Neck with Minimal Scar Face Lifting
Authors: Alessandro Marano
Abstract:
The author describes the use of deep plane face lifting and platysmaplasty to treat the sagging neck with minimal scars, in a series of case studies. The author uses a selective deep plane face lift with a minimal access scar that does not extend behind the ear lobe, together with neck liposuction and platysmaplasty, to restore the sagging neck; the scars are minimal and no post-operative drainage is required. The deep plane face lift can achieve a good result by restoring vertical vectors in the aging and sagging face, and the neck can be treated without cutting the skin behind the ear lobe by combining vertical SMAS suspension with platysmaplasty; surgery can be performed under local anesthesia with sedation as day surgery, with fast recovery. Restoring the sagging neck without extending scars behind the ear lobe is possible in selected patients; the procedure is fast and safe, no drainage is required, patients are satisfied, and healing is fast and comfortable.
Keywords: face lifting, aesthetic, face, neck, platysmaplasty, deep plane
Procedia PDF Downloads 99
4115 A Weighted Approach to Unconstrained Iris Recognition
Authors: Yao-Hong Tsai
Abstract:
This paper presents a weighted approach to unconstrained iris recognition. Commercial systems are usually characterized by strong acquisition constraints based on the subject's cooperation; however, such constraints are not always achievable in real scenarios in our daily life. Researchers have therefore focused on reducing these constraints while maintaining system performance through new techniques. To handle large variation in the environment, two main improvements are developed in the proposed iris recognition system. First, to deal with extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features. The detection of the iris image is based on the Adaboost algorithm. Secondly, a weighting scheme is designed using Gaussian functions of the distance to the center of the iris. Furthermore, a local binary pattern (LBP) histogram is then applied to texture classification with the weights. Experiments showed that the proposed system provides users a more flexible and feasible way to interact with the verification system through iris recognition.
Keywords: authentication, iris recognition, Adaboost, local binary pattern
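A sketch of the weighting idea described above: block-wise LBP histograms scaled by a Gaussian of the block's distance from the iris centre. The grid size, sigma and LBP parameters are assumptions rather than the paper's values.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_lbp_feature(iris_img, center, grid=(4, 8), sigma=40.0, P=8, R=1):
    """iris_img: 2-D grayscale image; center: (row, col) of the iris centre."""
    lbp = local_binary_pattern(iris_img, P, R, method="uniform")
    h, w = iris_img.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
            cy, cx = (i + 0.5) * bh, (j + 0.5) * bw
            d = np.hypot(cy - center[0], cx - center[1])
            weight = np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian weight by distance to centre
            feats.append(weight * hist)
    return np.concatenate(feats)
```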
Procedia PDF Downloads 223
4114 Reviewing Image Recognition and Anomaly Detection Methods Utilizing GANs
Authors: Agastya Pratap Singh
Abstract:
This review paper examines the emerging applications of generative adversarial networks (GANs) in the fields of image recognition and anomaly detection. With the rapid growth of digital image data, the need for efficient and accurate methodologies to identify and classify images has become increasingly critical. GANs, known for their ability to generate realistic data, have gained significant attention for their potential to enhance traditional image recognition systems and improve anomaly detection performance. The paper systematically analyzes various GAN architectures and their modifications tailored for image recognition tasks, highlighting their strengths and limitations. Additionally, it delves into the effectiveness of GANs in detecting anomalies in diverse datasets, including medical imaging, industrial inspection, and surveillance. The review also discusses the challenges faced in training GANs, such as mode collapse and stability issues, and presents recent advancements aimed at overcoming these obstacles.Keywords: generative adversarial networks, image recognition, anomaly detection, synthetic data generation, deep learning, computer vision, unsupervised learning, pattern recognition, model evaluation, machine learning applications
Procedia PDF Downloads 23
4113 Efficient Feature Fusion for Noise Iris in Unconstrained Environment
Authors: Yao-Hong Tsai
Abstract:
This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in our daily life without the subject's cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on the Adaboost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with the weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of the verification system through iris recognition.
Keywords: image fusion, iris recognition, local binary pattern, wavelet
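A sketch of wavelet-based multi-resolution fusion of two registered images of the same iris, assuming a maximum-absolute-coefficient fusion rule (the abstract does not state the exact rule used); the fused image would then feed the LBP feature extraction described above.

```python
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", level=3):
    """Fuse two same-size grayscale images by keeping the larger wavelet coefficient."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]          # approximation band
    for da, db in zip(ca[1:], cb[1:]):                                        # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```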
Procedia PDF Downloads 366
4112 Online Handwritten Character Recognition for South Indian Scripts Using Support Vector Machines
Authors: Steffy Maria Joseph, Abdu Rahiman V, Abdul Hameed K. M.
Abstract:
Online handwritten character recognition is a challenging field in artificial intelligence. The classification success rate of current techniques decreases when the dataset involves similarity and complexity in stroke styles, number of strokes and variations in stroke characteristics. Malayalam is a complex South Indian language spoken by about 35 million people, especially in Kerala and the Lakshadweep islands. In this paper, we consider significant feature extraction for the similar stroke styles of Malayalam. The extracted feature set is also suitable for the recognition of other handwritten South Indian languages such as Tamil, Telugu and Kannada. A classification scheme based on support vector machines (SVM) is proposed to improve the accuracy of classification and recognition of online Malayalam handwritten characters, SVM classifiers being well suited to real-world applications. The contribution of various features towards recognition accuracy is analysed, and performance for different SVM kernels is also studied. A graphical user interface has been developed for reading and displaying the characters. Different writing styles are collected for each of the 44 alphabets. Various features are extracted and used for classification after preprocessing of the input data samples. The highest recognition accuracy of 97% is obtained experimentally with the best feature combination and a polynomial kernel in SVM.
Keywords: SVM, Matlab, Malayalam, South Indian scripts, online handwritten character recognition
Procedia PDF Downloads 574
4111 Gender Recognition with Deep Belief Networks
Authors: Xiaoqi Jia, Qing Zhu, Hao Zhang, Su Yang
Abstract:
A gender recognition system is able to tell the gender of a given person from a few frontal facial images. An effective gender recognition approach can improve the performance of many other applications, including security monitoring, human-computer interaction, and image or video retrieval. In this paper, we present an effective method for the gender classification task in frontal facial images based on deep belief networks (DBNs), which pre-train the model and slightly improve accuracy. Our experiments have shown that the pre-training method with DBNs for the gender classification task is feasible and achieves a small improvement in accuracy on the FERET and CAS-PEAL-R1 facial datasets.
Keywords: gender recognition, deep belief networks, semi-supervised learning, greedy layer-wise RBMs
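As a rough, hedged stand-in for the DBN pre-training described here (not the authors' setup), the sketch below stacks scikit-learn Bernoulli RBMs, the building blocks of a DBN, with a logistic classifier on top; unlike a full DBN there is no joint fine-tuning of all layers, and the layer sizes and input format are assumptions.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=512, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# X_train: (n_samples, n_pixels) flattened frontal face crops scaled to [0, 1]
# y_train: binary gender labels
# dbn_like.fit(X_train, y_train)            # RBMs are fitted greedily, layer by layer
# print("accuracy:", dbn_like.score(X_test, y_test))
```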
Procedia PDF Downloads 451
4110 Emotion Recognition Using Artificial Intelligence
Authors: Rahul Mohite, Lahcen Ouarbya
Abstract:
This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. A limitation of these systems is that they require large training data sets. This paper proposes a system for recognizing human emotions that combines both speech and facial emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones that are based solely on either facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. The increasing significance of and demand for facial recognition technology in emotion recognition are also discussed.
Keywords: facial recognition, expression recognition, deep learning, image recognition, facial technology, signal processing, image classification
Procedia PDF Downloads 117
4109 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning
Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond
Abstract:
Human activity recognition (HAR) systems have shown positive performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a reasonably new area for activity recognition. However, water-based activity recognition has largely focused on supporting the elite and competitive swimming population, which already has amazing coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support the individual motions to help beginners. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time window input can cause performance issues in the machine learning model. The window’s size can be too small or large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation, then separates the data based on the change in the sensor value. Our system uses a multi-phase segmentation method that pulls all peaks and valleys for each axis of an accelerometer placed on the swimmer’s lower back. This results in high recognition performance using leave-one-subject-out validation on our study with 20 beginner swimmers, with our model optimized from our final dataset resulting in an F-Score of 0.95.Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition
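A hedged sketch of the two-step segmentation described above: a coarse time window is taken first, the signal is resampled and smoothed, and peaks and valleys per accelerometer axis then delimit individual sub-segments. The filter and peak-distance parameters are assumptions, not the study's tuned values.

```python
import numpy as np
from scipy.signal import find_peaks, resample, savgol_filter

def peak_valley_segments(window, target_len=256, min_distance=10):
    """window: (n_samples, 3) accelerometer window from the swimmer's lower back."""
    segments = []
    for axis in range(window.shape[1]):
        sig = resample(window[:, axis], target_len)               # fixed-length resampling
        sig = savgol_filter(sig, window_length=11, polyorder=3)   # smoothing
        peaks, _ = find_peaks(sig, distance=min_distance)
        valleys, _ = find_peaks(-sig, distance=min_distance)
        cuts = np.sort(np.concatenate([peaks, valleys]))
        segments.append([sig[a:b] for a, b in zip(cuts[:-1], cuts[1:])])  # per-axis sub-segments
    return segments
```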
Procedia PDF Downloads 121
4108 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have become a new form of knowledge representation. However, there is no consensus on a plausible definition of entities and relationships in domain-specific knowledge graphs, and, owing to several limitations and deficiencies, existing approaches to domain-specific entity and relationship recognition are far from perfect. In particular, named entity recognition in the Chinese domain is a critical task for natural language processing applications, and a bottleneck problem for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, a distant supervised corpus is generated based on an entity linking model built on a graph attention neural network; secondly, the generated corpus is used to train the distant supervised named entity recognition model, which then extracts named entities. The linking model is verified on the CCKS2019 entity linking corpus, and its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is more suitable for distant supervised named entity recognition tasks. Finally, the framework is applied in the computer domain, and the results show that it can obtain domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
Procedia PDF Downloads 90
4107 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate
Authors: E. Calil, L. A. Pereira
Abstract:
The acquisition of orthography is a complex process, involving both lexical and grammatical questions. This learning occurs simultaneously with the domain of multiple textual aspects (e.g.: graphs, punctuation, etc.). However, most of the research on orthographic acquisition focus on this acquisition from an autonomous point of view, separated from the process of textual production. This means that their object of analysis is the production of words selected by the researcher or the requested sentences in an experimental and controlled setting. In addition, the analysis of the Spelling Problems (SP) are identified by the researcher on the sheet of paper. Considering the perspective of Textual Genetics, from an enunciative approach, this study will discuss the SPs recognized by dyads of newly literate students, while they are writing a text collaboratively. Six proposals of textual production were registered, requested by a 2nd year teacher of a Portuguese Primary School between January and March 2015. In our case study we discuss the SPs recognized by the dyad B and L (7 years old). We adopted as a methodological tool the Ramos System audiovisual record. This system allows real-time capture of the text in process and of the face-to-face dialogue between both students and their teacher, and also captures the body movements and facial expressions of the participants during textual production proposals in the classroom. In these ecological conditions of multimodal registration of collaborative writing, we could identify the emergence of SP in two dimensions: i. In the product (finished text): SP identification without recursive graphic marks (without erasures) and the identification of SPs with erasures, indicating the recognition of SP by the student; ii. In the process (text in progress): identification of comments made by students about recognized SPs. Given this, we’ve analyzed the comments on identified SPs during the text in progress. These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE has two enunciative forms: Simple Comment (SC) such as ' 'X' is written with 'Y' '; or Unfolded Comment (UC), such as ' 'X' is written with 'Y' because...'. The spelling COE may also occur before or during the SP (Early Spelling Recognition - ESR) or after the SP has been entered (Later Spelling Recognition - LSR). There were 631 words entered in the 6 stories written by the B-L dyad, 145 of them containing some type of SP. During the text in progress, the students recognized orally 174 SP, 46 of which were identified in advance (ESRs) and 128 were identified later (LSPs). If we consider that the 88 erasure SPs in the product indicate some form of SP recognition, we can observe that there were twice as many SPs recognized orally. The ESR was characterized by SC when students asked their colleague or teacher how to spell a given word. The LSR presented predominantly UC, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyad is an important didactic strategy for the promotion of metalinguistic reflection, favoring the learning of spelling.Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production
Procedia PDF Downloads 161
4106 The Effectiveness of Exchange of Tacit and Explicit Knowledge Using Digital and Face to Face Sharing
Authors: Delio I. Castaneda, Paul Toulson
Abstract:
The purpose of this study was to investigate the effectiveness of sharing two types of knowledge, tacit and explicit, through two channels: face to face or digital. Participants were 217 knowledge workers in New Zealand and researchers who attended a knowledge management conference in the United Kingdom. The study found that digital tools are effective for sharing explicit knowledge, and that digital tools which facilitate dialogue are also effective for sharing tacit knowledge. Face to face communication was likewise found to be an effective way to share both tacit and explicit knowledge. The results help to clarify in which cases digital tools are effective for sharing tacit knowledge. Additionally, even though explicit knowledge can be easily shared using digital tools, this type of knowledge can also be shared through dialogue. The results may support practitioners in redesigning knowledge-sharing programs and activities to make strategies more effective.
Keywords: digital knowledge, explicit knowledge, knowledge sharing, tacit knowledge
Procedia PDF Downloads 253
4105 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, closely tied to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators that allow us to relax the additivity constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, in addition to giving better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov model
Procedia PDF Downloads 460
4104 Communication and Devices: Face to Face Communication versus Communication with Mobile Technologies
Authors: Nuran Öze
Abstract:
With the rapid changes of the last twenty-five years, mobile phone technology has influenced every aspect of life. Technological developments in the Internet and mobile phone areas have not only changed communication practices; they have also changed the everyday life practices of individuals. This article focuses on understanding how people's communication practices and everyday life practices have changed with smartphone usage. The study was conducted using the in-depth interview method with twenty Turkish Cypriots who live in Northern Cyprus. According to the research results, communicating via the Internet has rapidly replaced face to face communication in recent years; however, the results vary across generations. Younger generations adapt easily to technological changes because they are still acquiring their everyday life practices, whereas the practices of older generations are already established.
Keywords: face to face communication, internet, mobile technologies, north Cyprus
Procedia PDF Downloads 394
4103 Fine Grained Action Recognition of Skateboarding Tricks
Authors: Frederik Calsius, Mirela Popa, Alexia Briassouli
Abstract:
In the field of machine learning, it is common practice to use benchmark datasets to prove the working of a method. The domain of action recognition in videos often uses datasets like Kinetics, Something-Something, UCF-101 and HMDB-51 to report results. Considering the properties of these datasets, there are none that focus solely on very short clips (2 to 3 seconds) and on highly similar, fine-grained actions within one specific domain. This paper researches how current state-of-the-art action recognition methods perform on a dataset that consists of highly similar, fine-grained actions. To do so, a dataset of skateboarding tricks was created. The performed analysis highlights both benefits and limitations of state-of-the-art methods, while proposing future research directions in the activity recognition domain. The conducted research shows that the best results are obtained by fusing RGB data with OpenPose data for the Temporal Shift Module.
Keywords: activity recognition, fused deep representations, fine-grained dataset, temporal modeling
Procedia PDF Downloads 229
4102 Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns
Authors: Sayor Ajfar Aaron, Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz
Abstract:
This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios.
Keywords: artificial intelligence, convolutional neural network, emotion recognition, vocal patterns
Procedia PDF Downloads 49
4101 Face Shield Design with Additive Manufacturing Practice Combating COVID-19 Pandemic
Authors: May M. Youssef
Abstract:
This article introduces a face shield design, intended for additive manufacturing, as personal protective equipment against respiratory viruses such as SARS-CoV-2. Face shields help to reduce ocular exposure and play a vital role in diverting respiratory COVID-19 droplets away from the user's face. The proposed face shield comprises three assembled polymer parts. The frame, with a transparent overhead projector sheet as the visor, is suitable for frontline health care workers and ordinary citizens. The frame design allows the shield to be tightened around the user's head and permits rubber elastic straps to be used if required. It is ergonomically designed with a unique face mask support for cases where an extra protective mask is worn, and was created using a computer aided design (CAD) software package. Finite element analysis (FEA) structural verification of the proposed design is performed with an advanced simulation technique. Subsequently, the prototype model was fabricated by 3D printing using Fused Deposition Modeling (FDM) as a globally producible face shield product. This study provides a face shield design for global production which has been shown to be suitable and effective in the face of supply chain shortages and the frequent need for personal protective goods during outbreaks of coronavirus disease and similar viruses.
Keywords: additive manufacturing, Coronavirus-19, face shield, personal protective equipment, 3D printing
Procedia PDF Downloads 200
4100 Facial Biometric Privacy Using Visual Cryptography: A Fundamental Approach to Enhance the Security of Facial Biometric Data
Authors: Devika Tanna
Abstract:
'Biometrics' means 'life measurement', but the term is usually associated with the use of unique physiological characteristics to identify an individual. It is important to secure the privacy of digital face images stored in a central database. To impart privacy to such biometric face images, the digital face image is first split into two host face images such that neither of them gives any idea of the existence of the original face image, and each cover image is then stored in two different databases that are geographically apart. Only when both cover images are simultaneously available can the original image be accessed. This is achieved using the XM2VTS and IMM face databases and an adaptive algorithm for spatial greyscale. The algorithm helps to select the appropriate host images, which are most likely to be compatible with the secret image stored in the central database, based on its geometry and appearance. The encryption is done using GEVCS, which results in a reconstructed image identical to the original private image.
Keywords: adaptive algorithm, database, host images, privacy, visual cryptography
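The abstract uses grey-level extended visual cryptography (GEVCS) with meaningful host images; as a much simpler illustration of the underlying idea only, that neither share alone reveals the face and both are needed to reconstruct it, here is a basic two-share XOR scheme on a binarized face image. This is a stand-in, not the GEVCS construction itself.

```python
import numpy as np

def make_shares(face_img, threshold=128, seed=0):
    """Split a grayscale face image into two noise-like shares stored in separate databases."""
    secret = (face_img > threshold).astype(np.uint8)                  # binarized secret face image
    rng = np.random.default_rng(seed)
    share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)    # pure noise, stored in database 1
    share2 = np.bitwise_xor(secret, share1)                           # noise-like, stored in database 2
    return share1, share2

def reconstruct(share1, share2):
    """Only both shares together recover the binarized face."""
    return np.bitwise_xor(share1, share2)
```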
Procedia PDF Downloads 129
4099 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used in recognition. There are basically three steps in character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. In the first, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. In the second, the input character is divided into 16 blocks; for each block, 8 feature values are obtained through the eight-direction chain code frequency extraction method, and the sum of these 8 values is defined as a single feature for that block, so that 16 features are extracted from the 16 blocks. The number-of-holes feature is used to cluster similar characters. Most common Myanmar characters in various font sizes can be recognized using these features. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all the features of the input character with the already trained features of the characters.
Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation
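A sketch of the eight-direction chain-code frequency feature described above, assuming OpenCV contour tracing of a binarized character: the direction between successive boundary pixels is coded 0-7 and the normalized frequency of each code gives the 8-dimensional feature (the 16-block variant applies the same function to each block separately).

```python
import cv2
import numpy as np

# Steps (dx, dy) for chain codes 0..7 (E, NE, N, NW, W, SW, S, SE)
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_frequency(binary_char):
    """binary_char: uint8 image with character pixels = 255, background = 0."""
    contours, _ = cv2.findContours(binary_char, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    hist = np.zeros(8)
    for contour in contours:
        pts = contour[:, 0, :]                                   # (N, 2) boundary points in (x, y) order
        for (x0, y0), (x1, y1) in zip(pts, np.roll(pts, -1, axis=0)):
            step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
            if step in DIRS:                                     # skip zero steps, count the 8 directions
                hist[DIRS[step]] += 1
    return hist / max(hist.sum(), 1)                             # 8 normalized frequency features
```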
Procedia PDF Downloads 318
4098 Facial Pose Classification Using Hilbert Space Filling Curve and Multidimensional Scaling
Authors: Mekamı Hayet, Bounoua Nacer, Benabderrahmane Sidahmed, Taleb Ahmed
Abstract:
Pose estimation is an important task in computer vision. Though the majority of the existing solutions provide good accuracy results, they are often overly complex and computationally expensive. In this perspective, we propose the use of dimensionality reduction techniques to address the problem of facial pose estimation. Firstly, a face image is converted into one-dimensional time series using Hilbert space filling curve, then the approach converts these time series data to a symbolic representation. Furthermore, a distance matrix is calculated between symbolic series of an input learning dataset of images, to generate classifiers of frontal vs. profile face pose. The proposed method is evaluated with three public datasets. Experimental results have shown that our approach is able to achieve a correct classification rate exceeding 97% with K-NN algorithm.
Keywords: machine learning, pattern recognition, facial pose classification, time series
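A sketch of the first step described here, flattening a face image into a one-dimensional series by reading pixels along a Hilbert space-filling curve, using the standard iterative distance-to-coordinate conversion; the image is assumed to be resized to a power-of-two square (e.g. 64x64), and the later symbolic encoding and distance matrix are not shown.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Convert distance d along a Hilbert curve covering a 2^order x 2^order grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def image_to_hilbert_series(img):
    """img: square grayscale array with side length 2^order; returns the 1-D pixel series."""
    order = int(np.log2(img.shape[0]))
    coords = [hilbert_d2xy(order, d) for d in range(img.shape[0] * img.shape[1])]
    return np.array([img[y, x] for x, y in coords])
```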
Procedia PDF Downloads 349