Search results for: tactile gesture
47 Implementing a Visual Servoing System for Robot Controlling
Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari
Abstract:
Nowadays, with the emergence of new applications such as robot control through image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling the robot. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing aimed at robot control. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it achieved a correct recognition rate of 97.92 percent.
Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing.
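The classification stage described above can be prototyped with off-the-shelf tools. The sketch below assumes feature vectors have already been extracted from the spatio-temporal volumes (the file names and the value of k are illustrative, not taken from the paper) and shows a k-nearest-neighbor baseline of the kind the authors compare against LVQ and back-propagation networks.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data: one feature vector per gesture volume, one integer label per gesture.
X = np.load("gesture_features.npy")   # shape (n_samples, n_features), hypothetical file
y = np.load("gesture_labels.npy")     # shape (n_samples,), hypothetical file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Plain k-NN over the extracted feature vectors; k is a guess, not the paper's setting.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("recognition rate: %.2f%%" % (100.0 * knn.score(X_test, y_test)))
```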
46 Interactive Shadow Play Animation System
Authors: Bo Wan, Xiu Wen, Lingling An, Xiaoling Ding
Abstract:
The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to perform a shadow play with their body movements and, if they wish, obtain a video of the performance by issuing a record command to the system. In our system, Kinect is responsible for capturing human movement and voice command data. A gesture recognition module is used to control scene changes in the shadow play. After packaging the data from Kinect and the result from the gesture recognition module, VRPN transmits them to the server side. Finally, the server side uses this information to control the motion of the shadow characters and the video recording. The system not only achieves human-computer interaction but also enables interaction between people. It offers an entertaining experience and is easy to operate for users of all ages. More importantly, building the application around Chinese shadow play contributes to the preservation of shadow play as an art form.
Keywords: Gesture recognition, Kinect, shadow play animation, VRPN.
45 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), and AI concepts are applicable in Human-Computer Interaction (HCI), expert systems (ES), and other fields. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders, who face communication barriers when interacting with others. This research aims to build a hand gesture recognition system for Lesotho's Sesotho and English language interpretation; the system will help to bridge the communication problems encountered by these communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar-cascade detection with Canny pruning, which applies the Canny edge detector, an optimal edge detection algorithm, to find the edges of an object. The system also employs a skin detection algorithm that performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is the process of gesture classification, and template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and lighting are factors that affect the rate of detection and ultimately recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered, and the higher the light intensity, the faster the detection rate. Based on these results, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
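As a rough illustration of the detection pipeline above (Haar cascade with Canny pruning plus a skin mask with convex hull and centroid), the OpenCV sketch below follows the same steps; the cascade file name and the YCrCb skin thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical cascade file; any trained hand/palm Haar cascade XML would do.
cascade = cv2.CascadeClassifier("hand_cascade.xml")

def detect_hand(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions with too few edges,
    # which is the "Canny pruning" speed-up described in the abstract.
    return cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=4,
        flags=cv2.CASCADE_DO_CANNY_PRUNING, minSize=(40, 40))

def skin_centroid(frame_bgr):
    # A simple YCrCb threshold stands in for the paper's skin detector.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return hull, centroid
```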
44 A Second Look at Gesture-Based Passwords: Usability and Vulnerability to Shoulder-Surfing Attacks
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
For security purposes, it is important to detect passwords entered by unauthorized users. With traditional alphanumeric passwords, if the content of a password is acquired and correctly entered by an intruder, it is impossible to differentiate the password entered by the intruder from those entered by the authorized user because the password entries contain precisely the same character set. However, no two entries for the gesture-based passwords, even those entered by the person who created the password, will be identical. There are always variations between entries, such as the shape and length of each stroke, the location of each stroke, and the speed of drawing. It is possible that passwords entered by the unauthorized user contain higher levels of variations when compared with those entered by the authorized user (the creator). The difference in the levels of variations may provide cues to detect unauthorized entries. To test this hypothesis, we designed an empirical study, collected and analyzed the data with the help of machine-learning algorithms. The results of the study are significant.
Keywords: Authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability.
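One way to make the hypothesis concrete is to summarize each password entry by simple per-stroke statistics (length, duration, speed) and train a classifier to separate the creator's entries from intruders' entries. The sketch below is only an illustration of that idea with invented feature names; the paper's actual feature set and algorithms are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def entry_features(strokes):
    """strokes: list of (T_i, 3) arrays with columns (x, y, t) for one password entry."""
    lengths, durations, speeds = [], [], []
    for s in strokes:
        d = np.linalg.norm(np.diff(s[:, :2], axis=0), axis=1)
        lengths.append(d.sum())
        durations.append(s[-1, 2] - s[0, 2])
        speeds.append(d.sum() / max(s[-1, 2] - s[0, 2], 1e-6))
    return np.array([np.mean(lengths), np.std(lengths),
                     np.mean(durations), np.mean(speeds), np.std(speeds)])

def fit_detector(entries, labels):
    """entries: hypothetical list of stroke lists; labels: 1 = creator, 0 = other user."""
    X = np.vstack([entry_features(e) for e in entries])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```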
43 Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network
Authors: M. Jazouli, S. Elhoufi, A. Majda, A. Zarghili, R. Aalouane
Abstract:
Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior; our study provides a clinical tool that facilitates the diagnosis of ASD for doctors. We focus on the automatic identification of five repetitive gestures among autistic children in real time: body rocking, hand flapping, fingers flapping, hand on the face, and hands behind the back. In this paper, we present a gesture recognition system for children with autism, which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last proposes a neural connectionist model to perform the supervised classification of the data. The experimental results show that our system can achieve a recognition rate above 93.3%.
Keywords: ASD, stereotypical motor movements, repetitive gesture, Kinect, artificial neural network.
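A minimal sketch of the third module, assuming Kinect skeleton joints have already been tracked and flattened into fixed-length vectors (the centering scheme and network size below are arbitrary choices, not the authors'), could use a standard multilayer-perceptron classifier:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ["body_rocking", "hand_flapping", "fingers_flapping",
            "hand_on_face", "hands_behind_back"]

def window_to_vector(joints_window):
    """joints_window: (n_frames, n_joints, 3) array of Kinect 3D joint positions.
    Subtract the first joint of the first frame (e.g. the spine) and flatten."""
    centered = joints_window - joints_window[0, 0]
    return centered.reshape(-1)

def train_ann(X_train, y_train):
    """X_train/y_train are hypothetical arrays built from labelled skeleton windows."""
    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    ann.fit(X_train, y_train)
    return ann
```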
42 Real-time 3D Feature Extraction without Explicit 3D Object Reconstruction
Authors: Kwangjin Hong, Chulhan Lee, Keechul Jung, Kyoungsu Oh
Abstract:
For communication between humans and computers in interactive computing environments, gesture recognition is studied vigorously, and many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods are limited in that the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time needed to generate 3D objects remain unsolved. We therefore propose a method that extracts the 3D features in combination with the 3D object reconstruction. The method uses a modified GPU-based visual hull generation algorithm that disables unnecessary processes, such as texture calculation, to generate three kinds of 3D projection maps as the 3D feature: the nearest boundary, the farthest boundary, and the thickness of the object projected onto the base plane. In the experimental results, we present results of the proposed method on eight human postures (T shape, both hands up, right hand up, left hand up, hands front, stand, sit, and bend) and compare the computational time of the proposed method with that of previous methods.
Keywords: Fast 3D Feature Extraction, Gesture Recognition, Computer Vision.
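The three projection maps can be pictured as reductions of an occupancy volume along the viewing axis. The sketch below, which assumes the visual hull is available as a boolean voxel grid rather than being computed on the GPU as in the paper, derives the nearest-boundary, farthest-boundary and thickness maps with NumPy:

```python
import numpy as np

def projection_maps(voxels):
    """voxels: boolean array of shape (X, Y, Z); axis Z points away from the base plane.
    Returns nearest boundary, farthest boundary and thickness, each of shape (X, Y)."""
    occupied = voxels.any(axis=2)
    # First occupied z index per (x, y) column; argmax finds the first True.
    nearest = np.where(occupied, voxels.argmax(axis=2), -1)
    # Last occupied z index: argmax over the reversed axis, mapped back.
    farthest = np.where(occupied,
                        voxels.shape[2] - 1 - voxels[:, :, ::-1].argmax(axis=2), -1)
    # Thickness: number of occupied voxels along the projection direction.
    thickness = voxels.sum(axis=2)
    return nearest, farthest, thickness
```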
41 Parametric Primitives for Hand Gesture Recognition
Authors: Sanmohan Krüger, Volker Krüger
Abstract:
Imitation learning is considered an effective way of teaching humanoid robots, and action recognition is the key step in imitation learning. In this paper an online algorithm to recognize parametric actions with object context is presented. Objects are key instruments in understanding an action when there is uncertainty, and ambiguities arising in similar actions can be resolved with object context. We classify actions according to the changes they make to the object space: actions that produce the same state change in the object movement space are classified as belonging to the same class. This allows us to define several classes of actions in which the members of each class share a semantic interpretation.
Keywords: Parametric actions, Action primitives, Hand gesture recognition, Imitation learning.
40 Hands-off Parking: Deep Learning Gesture-Based System for Individuals with Mobility Needs
Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Javier Araluce, Joshué Pérez
Abstract:
Nowadays, individuals with mobility needs face a significant challenge when parking vehicles. In many cases, after parking, they find insufficient space to exit the vehicle, leading to two undesired outcomes: either avoiding that parking spot or settling for an improperly placed vehicle. To address this issue, this paper presents a parking control system based on gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered on the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracking body markers, with a special emphasis on hand gestures; hands are detected by generating 21 reference points for each hand. After data capture, the project employs a multilayer perceptron (MLP) for in-depth gesture classification. This tandem of MediaPipe's extraction capability and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on a dataset built in-house. To demonstrate domain adaptation, a framework based on the Robot Operating System 2 (ROS 2) as a communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system was transitioned to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
Keywords: Gesture detection, MediaPipe, multilayer perceptron, Robot Operating System.
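A rough sketch of the first two phases is shown below, assuming MediaPipe's hand-landmark solution and an already-trained scikit-learn MLP stand in for the paper's network; the normalization, confidence thresholds and command labels are illustrative assumptions.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def landmarks_from_frame(frame_bgr, hands):
    """Return a flat (63,) vector of the 21 hand reference points, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    pts = np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32)
    pts -= pts[0]            # wrist at the origin: translation-invariant features
    return pts.reshape(-1)

def classify_stream(capture, classifier):
    """classifier: hypothetical trained MLPClassifier whose classes map to
    parking commands (e.g. 'forward', 'reverse', 'stop')."""
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            vec = landmarks_from_frame(frame, hands)
            if vec is not None:
                yield classifier.predict(vec.reshape(1, -1))[0]
```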
39 Virtual Gesture Screen System Based on 3D Visual Information and Multi-Layer Perceptron
Authors: Yang-Keun Ahn, Min-Wook Kim, Young-Choong Park, Kwang-Soon Choi, Woo-Chool Park, Hae-Moon Seo, Kwang-Mo Jung
Abstract:
Active research is underway on virtual touch screens that complement the physical limitations of conventional touch screens. This paper discusses a virtual touch screen that uses a multi-layer perceptron to recognize and control three-dimensional (3D) depth information from a time-of-flight (TOF) camera. The system extracts an object's area from the input image and compares it with the trajectory of the object, which is learned in advance, to recognize gestures. The system enables the manipulation of content in virtual space by utilizing human actions.
Keywords: Gesture Recognition, Depth Sensor, Virtual Touch Screen.
38 Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language
Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Abstract:
Facial expressions are an important part of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecasts of the public TV station PHOENIX. Unlike the majority of existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images the individuals are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also include a "None" class for images whose facial expression cannot be described by any of these emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider applications in gesture recognition and Human-Computer Interaction (HCI) systems.
Keywords: Annotated Facial Expression Dataset, Sign Language Recognition, Gesture Recognition, Sequenced Facial Expression Dataset.
37 Inferring the Dynamics of "Hidden" Neurons from Electrophysiological Recordings
Authors: Valeri A. Makarov, Nazareth P. Castellanos
Abstract:
Statistical analysis of electrophysiological recordings obtained under, e.g., tactile stimulation frequently suggests the participation in the network dynamics of experimentally unobserved "hidden" neurons. Such interneurons, making synapses onto experimentally recorded neurons, may strongly alter their dynamical responses to the stimuli. We propose a mathematical method that formalizes this possibility and provides an algorithm for inferring the presence and dynamics of hidden neurons based on fitting the experimental data to spike trains generated by the network model. The model uses integrate-and-fire neurons "chemically" coupled through exponentially decaying synaptic currents. We test the method on simulated data and also provide an example of its application to experimental recordings from Dorsal Column Nuclei neurons of the rat under tactile stimulation of a hind limb.
Keywords: Integrate and fire neuron, neural network models, spike trains.
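To make the model class concrete, the fragment below simulates a pair of integrate-and-fire neurons coupled by an exponentially decaying synaptic current; the time constants, weight and threshold are illustrative values, not the parameters fitted in the paper.

```python
import numpy as np

dt, T = 0.1, 500.0                      # time step and duration (ms)
tau_m, tau_s = 20.0, 5.0                # membrane and synaptic time constants (ms)
v_th, v_reset = 1.0, 0.0                # firing threshold and reset (arbitrary units)
w = 0.6                                 # synaptic weight from neuron 0 onto neuron 1

v = np.zeros(2)                         # membrane potentials
s = 0.0                                 # synaptic current onto neuron 1
drive = np.array([1.2, 0.0])            # constant external drive; only neuron 0 fires on its own
spikes = [[], []]

for k in range(int(T / dt)):
    t = k * dt
    # Leaky integration; neuron 1 additionally receives the decaying synaptic current s.
    v[0] += dt / tau_m * (-v[0] + drive[0])
    v[1] += dt / tau_m * (-v[1] + drive[1] + s)
    s += dt * (-s / tau_s)              # exponential decay of the synaptic current
    for i in range(2):
        if v[i] >= v_th:                # threshold crossing: record spike and reset
            spikes[i].append(t)
            v[i] = v_reset
            if i == 0:
                s += w                  # presynaptic spike increments the synaptic current

print("spike counts:", [len(sp) for sp in spikes])
```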
36 Hand Controlled Mobile Robot Applied in Virtual Environment
Authors: Jozsef Katona, Attila Kovari, Tibor Ujbanyi, Gergely Sziladi
Abstract:
With the development of IT systems, human-computer interaction is also developing rapidly, and new communication methods are becoming available for human-machine interaction. In this article, the application of a hand gesture controlled human-computer interface is introduced through the example of a mobile robot. The control of the mobile robot is implemented in a realistic virtual environment, which is advantageous for running different tests and parallel examinations without purchasing expensive equipment. The usability of the implemented hand gesture control has been evaluated by test subjects. According to the test subjects, the system is usable, and they would recommend its application in other fields as well.
Keywords: Human-machine interface, hand control, mobile robot, virtual environment.
35 Pakistan Sign Language Recognition Using Statistical Template Matching
Authors: Aleem Khalid Alvi, M. Yousuf Bin Azhar, Mehmood Usman, Suleman Mumtaz, Sameer Rafiq, RaziUr Rehman, Israr Ahmed
Abstract:
Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques; however, none of them have ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using statistical template matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition, which, being sensitive to environmental changes, is not always a good choice. This paper explains the use of statistical template matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
Keywords: Gesture Recognition, Pakistan Sign Language, DataGlove, Human Computer Interaction, Template Matching, Boltay Haath.
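The statistical template matching step can be pictured as a nearest-template rule over glove sensor readings. The sketch below is an illustration only: the per-sign mean/standard-deviation templates and the z-score distance are assumptions, not the Boltay Haath implementation.

```python
import numpy as np

class TemplateMatcher:
    """Store a mean and standard deviation of glove readings per sign,
    then classify a new reading by the smallest normalized distance."""

    def __init__(self):
        self.templates = {}   # sign -> (mean, std)

    def fit(self, samples_by_sign):
        for sign, samples in samples_by_sign.items():      # samples: (n, n_sensors)
            arr = np.asarray(samples, dtype=float)
            self.templates[sign] = (arr.mean(axis=0), arr.std(axis=0) + 1e-6)

    def predict(self, reading):
        reading = np.asarray(reading, dtype=float)
        def dist(template):
            mean, std = template
            return np.linalg.norm((reading - mean) / std)
        return min(self.templates, key=lambda s: dist(self.templates[s]))
```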
34 Development of a Computer Vision System for the Blind and Visually Impaired Person
Authors: Roselyn A. Maaño
Abstract:
Eyes are an essential and conspicuous organ of the human body. They are outward and inward portals of the body that allow us to see the outside world and provide glimpses into one's inner thoughts and feelings. Blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. This study emphasizes innovative tools that serve as an aid to blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and an Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection, and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis and gesture tracking to transform Kinect data into the desired output. Such a computer vision device provides greater accessibility for people with vision impairments.
Keywords: Algorithms, Blind, Computer Vision, Embedded Systems, Image Analysis.
33 A Robust Method for Hand Tracking Using Mean-shift Algorithm and Kalman Filter in Stereo Color Image Sequences
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Robert Niese, Bernd Michaelis
Abstract:
Real-time hand tracking is a challenging task in many computer vision applications such as gesture recognition. This paper proposes a robust method for hand tracking in a complex environment using mean-shift analysis and a Kalman filter in conjunction with a 3D depth map. The depth information, obtained by passive stereo measurement based on cross-correlation and the known calibration data of the cameras, solves the overlapping problem between hands and face. Mean-shift analysis uses the gradient of the Bhattacharyya coefficient as a similarity function to derive the candidate region that is most similar to a given hand target model. A Kalman filter is then used to estimate the position of the hand target. The results of hand tracking, tested on various video sequences, are robust to changes in shape as well as partial occlusion.
Keywords: Computer Vision and Image Analysis, Object Tracking, Gesture Recognition.
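A condensed OpenCV version of such a tracking loop (hue-histogram back-projection for the mean-shift stage and a constant-velocity Kalman filter for the position estimate) is sketched below; it omits the stereo depth map and the face/hand disambiguation that the paper relies on, so it is a generic baseline rather than the authors' method.

```python
import cv2
import numpy as np

def make_kalman():
    kf = cv2.KalmanFilter(4, 2)                       # state (x, y, vx, vy), measurement (x, y)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track(capture, init_window):
    ok, frame = capture.read()
    x, y, w, h = init_window
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [32], [0, 180])    # hue histogram of the hand model
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    kf, window = make_kalman(), init_window
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        kf.predict()
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, crit)     # mean-shift candidate window
        cx, cy = window[0] + window[2] / 2, window[1] + window[3] / 2
        est = kf.correct(np.array([[cx], [cy]], np.float32))  # fuse measurement into the estimate
        yield (float(est[0, 0]), float(est[1, 0]))
```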
32 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic
Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman
Abstract:
An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS system which uses an image-based template matching algorithm to detect whether a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity of 84.3% and a specificity of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point-of-regard determinations.
Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.
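The quoted sensitivity and specificity are the standard ROC quantities; computed from per-cell observation decisions they reduce to the ratios below (the counts used here are made up to reproduce the reported percentages, not the paper's raw data).

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Example with illustrative counts of "driver observed this cell" decisions:
sens, spec = sensitivity_specificity(tp=843, fn=157, tn=850, fp=150)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")   # 84.3%, 85.0%
```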
31 Development of UiTM Robotic Prosthetic Hand
Authors: M. Amlie A. Kasim, Ahsana Aqilah, Ahmed Jaffar, Cheng Yee Low, Roseleena Jaafar, M. Saiful Bahari, Armansyah
Abstract:
The study of human hand morphology reveals that developing an artificial hand with the capabilities of the human hand is an extremely challenging task. This paper presents the development of a robotic prosthetic hand focusing on the improvement of a tendon-driven mechanism towards a biomimetic prosthetic hand. The design of this prosthetic hand is geared towards achieving a high level of dexterity and anthropomorphism by means of a new hybrid mechanism that integrates a miniature motor-driven actuation mechanism, a Shape Memory Alloy actuated mechanism and a passive mechanical linkage. The synergy of these actuators enables flexion-extension movement at each of the finger joints within limited size, shape and weight constraints. Tactile sensors are integrated on the fingertips and the finger phalanges. The prosthetic hand is developed with an exact size ratio that mimics a biological hand. Its behavior resembles the human counterpart in terms of working envelope, speed and torque, and thus resembles both the key physical features and the grasping functionality of an adult hand.
Keywords: Prosthetic hand, Biomimetic actuation, Shape Memory Alloy, Tactile sensing.
30 Vision Based Hand Gesture Recognition Using Generative and Discriminative Stochastic Models
Authors: Mahmoud Elmezain, Samar El-shinawy
Abstract:
Many approaches to pattern recognition are founded on probability theory and can be broadly characterized as either generative or discriminative, according to whether or not they model the distribution of the image features. Generative and discriminative models have very different characteristics, as well as complementary strengths and weaknesses. In this paper, we study these models to recognize the patterns of alphabet characters (A-Z) and numbers (0-9). To handle isolated patterns, a generative model, the Hidden Markov Model (HMM), and discriminative models, namely the Conditional Random Field (CRF), Hidden Conditional Random Field (HCRF) and Latent-Dynamic Conditional Random Field (LDCRF), are applied to the extracted pattern features with different window sizes. The gesture recognition rate improves initially as the window size increases, but degrades as it increases further. Experimental results show that the LDCRF performs best among the CRF, HCRF and HMM at a window size of 4. The overall recognition rates are 91.52%, 95.28%, 96.94% and 98.05% for CRF, HCRF, HMM and LDCRF, respectively.
Keywords: Statistical Pattern Recognition, Generative Model, Discriminative Model, Human Computer Interaction.
29 Eye-Gesture Analysis for Driver Hazard Awareness
Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman
Abstract:
Because road traffic accidents are a major source of death worldwide, attempts have been made to create Advanced Driver Assistance Systems (ADAS) able to detect vehicle, driver and environmental conditions that are cues for possible potential accidents. This paper presents continued work on a novel Non-intrusive Intelligent Driver Assistance and Safety System (Ni-DASS) for assessing driver attention and hazard awareness. It uses two on-board CCD cameras: one observing the road and the other observing the driver's face. The windscreen is divided into cells, and analysis of the driver's eye-gaze patterns allows Ni-DASS to determine the windscreen cell the driver is focusing on using eye-gesture templates. Intersecting the driver's field of view through the observed windscreen cell with subsections of the road camera's field of view containing a potential hazard allows Ni-DASS to estimate the probability that the driver has actually observed the hazard. Results have shown that the proposed technique is an accurate enough measure of driver observation to be useful in ADAS systems.
Keywords: Advanced Driver Assistance Systems (ADAS), Driver Hazard Awareness, Driver Vigilance, Eye Tracking.
28 Eye Gesture Analysis with Head Movement for Advanced Driver Assistance Systems
Authors: Siti Nor Hafizah bt Mohd Zaid, Mohamed Abdel Maguid, Abdel Hamid Soliman
Abstract:
Road traffic accidents are a major cause of death worldwide. In an attempt to reduce accidents, some research efforts have focused on creating Advanced Driver Assistance Systems (ADAS) able to detect vehicle, driver and environmental conditions and to use this information to identify cues for potential accidents. This paper presents continued work on a novel Non-intrusive Intelligent Driver Assistance and Safety System (Ni-DASS) for assessing driver point of regard within vehicles. It uses an on-board CCD camera to observe the driver's face. A template matching approach is used to compare the driver's eye-gaze pattern with a set of eye-gesture templates of the driver looking at different focal points within the vehicle. The windscreen is divided into cells, and comparison of the driver's eye-gaze pattern with templates of the driver's eyes looking at each cell is used to determine the driver's point of regard on the windscreen. Results indicate that the proposed technique could be useful in situations where low-resolution estimates of driver point of regard are adequate, for instance to allow ADAS systems to alert the driver if he or she has failed to observe a hazard.
Keywords: Head rotation, Eye-gestures, Windscreen, Template matching.
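As a sketch of the matching step, the fragment below compares the current eye image against one stored template per windscreen cell and returns the best-scoring cell; the 4 x 6 grid follows the companion paper above, while the normalized-correlation score is simply OpenCV's standard choice here, not necessarily the authors'.

```python
import cv2

def best_cell(eye_gray, templates):
    """templates: dict mapping (row, col) -> grayscale template of the driver's eye
    looking at the centre of that windscreen cell (4 rows x 6 columns assumed).
    Templates must be no larger than the input eye image."""
    scores = {}
    for cell, tmpl in templates.items():
        res = cv2.matchTemplate(eye_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        scores[cell] = float(res.max())          # best correlation anywhere in the eye image
    return max(scores, key=scores.get)
```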
27 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards is moving towards slimness, and this change in mobile input devices directly influences users' behavior. Although multi-touch applications allow text entry through a virtual keyboard, the performance, feedback, and comfort of that technology are inferior to a traditional keyboard, and although manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with objective measures (accuracy and speed) and subjective measures (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (based on signal-to-noise ratios) were used to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words per minute). Considering the number of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for the input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, the combination of design factors yielding the highest performance and satisfaction was identified as L-shaped, 3 mm, and 60±10 g. The learning curve was analyzed and compared with that of a traditional standard keyboard to investigate the influence of user experience on keyboard operation; the results indicated that even the optimal combination provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied to touch devices and input interfaces that people interact with.
Keywords: Input performance, mobile device, slim keyboard, tactile feedback.
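For reference, the Taguchi signal-to-noise ratios used to rank factor levels take the standard textbook forms below (this is the generic formula, not code from the study); the larger-the-better form suits accuracy and speed, the smaller-the-better form suits error counts.

```python
import numpy as np

def sn_larger_the_better(y):
    """S/N = -10 * log10(mean(1 / y_i^2)) for responses where larger is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """S/N = -10 * log10(mean(y_i^2)) for responses where smaller is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Example: hypothetical typing accuracies (%) observed for one factor combination.
print(sn_larger_the_better([92.0, 95.5, 90.8]))
```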
26 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System
Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur
Abstract:
Sign Language (SL) is used by deaf people and others who cannot speak but can hear, or who have problems with spoken languages due to some disability. It is a visual gesture language that makes use of one hand or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way to provide a computer interface for communicating with hearing people. In this paper, an avatar-based dictionary is proposed for a text to Indian Sign Language (ISL) generation system. This work also presents a literature review of the SL corpora available for various SLs over the years. For an ISL generation system, a written form of SL is required, and certain techniques are available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using the Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both the English and Hindi languages. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin-panel interface to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be created and stored in the database, and these notations can be converted into their corresponding SiGML files manually. The system takes a natural language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This work will help researchers understand the various techniques used for writing SL and for generating sign language systems.
Keywords: Avatar, dictionary, HamNoSys, hearing-impaired, Indian Sign Language, sign language.
25 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action and gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
Keywords: Human Motion Recognition, Motion representation, Laban Movement Analysis, Discrete Hidden Markov Model.
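The forward/backward idea can be prototyped with a discrete-HMM library: train one model on each class's observation sequences and a second on the reversed sequences, then score a test sequence with both and sum the log-likelihoods. The sketch below uses hmmlearn's CategoricalHMM (called MultinomialHMM in older releases); the number of hidden states and the quantized-descriptor alphabet are placeholders, not the paper's settings.

```python
import numpy as np
from hmmlearn import hmm

def fit_class_models(sequences, n_states=6):
    """sequences: list of 1-D integer arrays (quantized LMA descriptors) for one class.
    Returns a (forward, backward) pair of discrete HMMs."""
    def fit(seqs):
        X = np.concatenate(seqs).reshape(-1, 1)
        lengths = [len(s) for s in seqs]
        m = hmm.CategoricalHMM(n_components=n_states, n_iter=50, random_state=0)
        m.fit(X, lengths)
        return m
    return fit(sequences), fit([s[::-1] for s in sequences])

def classify(seq, models_by_class):
    """models_by_class: dict class_name -> (forward_hmm, backward_hmm)."""
    def score(pair):
        fwd, bwd = pair
        return fwd.score(seq.reshape(-1, 1)) + bwd.score(seq[::-1].reshape(-1, 1))
    return max(models_by_class, key=lambda c: score(models_by_class[c]))
```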
24 Magnetic Field Based Near Surface Haptic and Pointing Interface
Authors: Kasun Karunanayaka, Sanath Siriwardana, Chamari Edirisinghe, Ryohei Nakatsu, Ponnampalam Gopalakrishnakone
Abstract:
In this paper, we present a new type of pointing interface for computers which provides mouse functionality with near-surface haptic feedback. It can also be configured as a haptic display on which users feel the basic geometric shapes of the GUI by moving a finger over the device surface. These functionalities are achieved by tracking the three-dimensional position of a neodymium magnet using a grid of Hall effect sensors and generating like-polarity haptic feedback using an electromagnet array. This interface brings haptic sensations into the 3D space above the device, whereas previous haptic mouse implementations provide them only on top of the buttons.
Keywords: Pointing interface, near surface haptic feedback, tactile display, tangible user interface.
23 Usability Evaluation Framework for Computer Vision Based Interfaces
Authors: Muhammad Raza Ali, Tim Morris
Abstract:
Human-computer interaction has progressed considerably beyond traditional modes of interaction. Vision-based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with few exceptions these techniques are not evaluated using standard HCI methods. In this paper we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.
Keywords: Usability evaluation, cognitive walkthrough, think aloud, gesture recognition.
22 Combining Skin Color and Optical Flow for Computer Vision Systems
Authors: Muhammad Raza Ali, Tim Morris
Abstract:
Skin color is an important visual cue for computer vision systems involving human users. In this paper we combine skin color and optical flow for the detection and tracking of skin regions, and we apply these techniques to gesture recognition with encouraging results. We propose a novel skin similarity measure and a novel mechanism for grouping detected skin regions. The proposed techniques work with any number of skin regions, making them suitable for multi-user scenarios.
Keywords: Bayesian tracking, chromaticity space, optical flow, gesture recognition.
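A minimal way to combine the two cues is sketched below with OpenCV: compute dense optical flow and keep only the motion inside the skin mask. The YCrCb thresholds and Farneback parameters are generic defaults, not the similarity measure or grouping mechanism proposed in the paper.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Rough skin segmentation in YCrCb chromaticity; thresholds are illustrative."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def skin_motion(prev_bgr, curr_bgr):
    """Return the mean optical-flow vector of the skin-colored pixels."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mask = skin_mask(curr_bgr) > 0
    if not mask.any():
        return np.zeros(2)
    return flow[mask].mean(axis=0)     # average (dx, dy) over detected skin regions
```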
20 Haptics Enabled Offline AFM Image Analysis
Authors: Bhatti A., Nahavandi S., Hossny M.
Abstract:
Current advancements in nanotechnology depend on capabilities that enable nano-scientists to extend their eyes and hands into the nano-world. For this purpose, a haptics-based (devices capable of recreating tactile or force sensations) system for the AFM (Atomic Force Microscope) is proposed. The system enables nano-scientists to touch and feel the sample surfaces viewed through the AFM, in order to provide them with a better understanding of the physical properties of the surface, such as roughness, stiffness and the shape of the molecular architecture. At this stage, the proposed work uses offline images produced by the AFM and performs image analysis to create virtual surfaces suitable for haptic force analysis. The work is being extended from an offline to an online process, where interaction will be done directly on the material surface for realistic analysis.
Keywords: Haptics, AFM, force feedback, image analysis.
19 Impairments Correction of Six-Port Based Millimeter-Wave Radar
Authors: Dan Ohev Zion, Alon Cohen
Abstract:
In recent years, the presence of short-range millimeter-wave radar in civil applications has increased significantly; autonomous driving, security, 3D imaging and high-data-rate communication systems are a few examples. The next challenge is integration inside small form-factor devices such as smartphones (e.g., for gesture recognition), which calls for a truly low-power, low-complexity, high-resolution radar. The most popular approach is the Frequency Modulated Continuous Wave (FMCW) radar with an analog multiplication front-end. In this paper, we present an approach for adaptive estimation and correction of the impairments of such a front-end, specifically implemented using the Six-Port Device (SPD) as the multiplier element. The proposed algorithm was simulated and implemented on a 60 GHz radar lab prototype.
Keywords: Radar, millimeter-wave, six-port, FMCW radar, IQ mismatch.
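One common baseline for correcting gain and phase (IQ) mismatch in an analog front-end is Gram-Schmidt orthogonalization of the two baseband channels; the sketch below shows that generic correction only and is not the adaptive six-port algorithm of the paper.

```python
import numpy as np

def correct_iq(i, q):
    """Remove gain and phase imbalance between the I and Q channels.
    i, q: real-valued baseband samples (NumPy arrays of equal length)."""
    i = i - i.mean()
    q = q - q.mean()
    # Gram-Schmidt: subtract the component of Q that is correlated with I,
    # then rescale both channels to unit power.
    alpha = np.dot(q, i) / np.dot(i, i)
    q_orth = q - alpha * i
    i_c = i / np.sqrt(np.mean(i**2))
    q_c = q_orth / np.sqrt(np.mean(q_orth**2))
    return i_c + 1j * q_c              # corrected complex baseband signal
```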
18 Augmented Reality Interaction System in 3D Environment
Authors: Sunhyoung Lee, Askar Akshabayev, Beisenbek Baisakov, Youngjoon Han, Hernsoo Hahn
Abstract:
In an augmented reality (AR) system, it is important to provide input without an additional device, and using the hand is one solution. Many researchers have proposed hand-based interfaces for augmented reality, for example histogram analysis and connectivity-based methods. Multi-directional searching is a robust way to recognize the hand, but it takes too much computation time, and the background must be distinguishable from skin color. This paper proposes a hand tracking method for controlling 3D objects in augmented reality using a depth device and skin color. This work also discusses the relationship between several markers, which is based on the relationship between the camera and each marker: one marker is used for displaying the virtual object, and three markers are used for detecting hand gestures and manipulating the virtual object.
Keywords: Augmented Reality, depth map, hand recognition, Kinect, marker, YCbCr color model.