Search results for: Hand Gesture
967 Users’ Preferences for Map Navigation Gestures
Authors: Y. Y. Pang, N. A. Ismail
Abstract:
A map is a powerful and convenient tool for navigating to different places, but the use of indirect input devices often makes it cumbersome. This study proposes a new map navigation dialogue that uses hand gestures. A set of dialogues was developed from the users' perspective to give users complete freedom in panning, zooming, rotating, tilting and finding directions. A participatory design experiment was conducted in which one-hand and two-hand gesture dialogues were analysed to develop a set of usable dialogues. The major finding was that users prefer one-hand gestures to two-hand gestures in map navigation.
Keywords: Hand gesture, map navigation, participatory design, intuitive interaction.
966 FSM-based Recognition of Dynamic Hand Gestures via Gesture Summarization Using Key Video Object Planes
Authors: M. K. Bhuyan
Abstract:
The use of the human hand as a natural interface for human-computer interaction (HCI) serves as the motivation for research in hand gesture recognition. Vision-based hand gesture recognition involves visual analysis of hand shape, position and/or movement. In this paper, we use the concept of object-based video abstraction to segment frames into video object planes (VOPs), as used in MPEG-4, with each VOP corresponding to one semantically meaningful hand position. Next, key VOPs are selected on the basis of the amount of change in hand shape: for a given key frame in the sequence, the next key frame is the one in which the hand changes its shape significantly. Thus, an entire video clip is transformed into a small number of representative frames that are sufficient to represent a gesture sequence. Subsequently, we model a particular gesture as a sequence of key frames, each bearing information about its duration; these constitute a finite state machine (FSM). For recognition, the states of the incoming gesture sequence are matched against the states of all the FSMs contained in the gesture vocabulary database. The core idea of our proposed representation is that redundant frames of the gesture video sequence bear only the temporal information of a gesture and hence are discarded for computational efficiency. Experimental results demonstrate the effectiveness of the proposed scheme for key frame extraction, gesture summarization and, finally, gesture recognition.
Keywords: Hand gesture, MPEG-4, Hausdorff distance, finite state machine.
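As a minimal illustration of the FSM matching described above, consider the following Python sketch: each gesture is stored as an ordered list of template hand shapes (2-D point sets), incoming key VOPs are compared to the current state's template with the Hausdorff distance, and a gesture is accepted once all of its states have been visited in order. The shape tolerance is an invented value, and state durations are omitted for brevity.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (N x 2 arrays)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def fsm_accepts(key_vops, states, shape_tol=10.0):
    """Advance through the gesture FSM whenever a key VOP matches the next state."""
    s = 0
    for shape in key_vops:
        if s < len(states) and hausdorff(shape, states[s]) <= shape_tol:
            s += 1                     # shape close enough: take the transition
    return s == len(states)           # accepted only if every state was visited

def recognize(key_vops, vocabulary):
    """Match the incoming key-VOP sequence against every FSM in the vocabulary."""
    for gesture, states in vocabulary.items():
        if fsm_accepts(key_vops, states):
            return gesture
    return None
```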
965 A Hidden Markov Model-Based Isolated and Meaningful Hand Gesture Recognition
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Jörg Appenrodt, Bernd Michaelis
Abstract:
Gesture recognition is a challenging task that involves extracting meaningful gestures from continuous hand motion. In this paper, we propose an automatic system that recognizes both isolated gestures and meaningful gestures within continuous hand motion for Arabic numbers from 0 to 9 in real time, based on Hidden Markov Models (HMMs). To handle isolated gestures, HMMs with Ergodic, Left-Right (LR) and Left-Right Banded (LRB) topologies are applied to the discrete feature vectors extracted from stereo color image sequences. These topologies are evaluated with different numbers of states, ranging from 3 to 10. A new system is developed to recognize meaningful gestures based on zero-codeword detection with static velocity motion for continuous gestures. The LRB topology, in conjunction with the Baum-Welch (BW) algorithm for training and the forward algorithm with the Viterbi path for testing, gives the best performance. Experimental results show that the proposed system successfully recognizes isolated and meaningful gestures, achieving average recognition rates of 98.6% and 94.29%, respectively.
Keywords: Computer Vision & Image Processing, Gesture Recognition, Pattern Recognition, Application.
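A rough sketch of the Left-Right Banded topology and the Viterbi scoring mentioned above, assuming discrete codeword observations; Baum-Welch training and the zero-codeword spotting stage are omitted, and the model parameters here are placeholders to be learned from data.

```python
import numpy as np

def lrb_transitions(n_states):
    """Left-Right Banded topology: each state may only self-loop or step forward."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = A[i, i + 1] = 0.5
    A[-1, -1] = 1.0
    return A

def viterbi_score(obs, A, B, pi):
    """Log-likelihood of the best state path for a discrete observation sequence.

    A: (n, n) transitions, B: (n, n_symbols) emissions, pi: (n,) initial probs.
    """
    with np.errstate(divide="ignore"):
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    for o in obs[1:]:
        delta = (delta[:, None] + logA).max(axis=0) + logB[:, o]
    return delta.max()

# Recognition: score a codeword sequence against each digit's trained HMM (0-9)
# and pick the model with the highest Viterbi log-likelihood.
```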
964 Trajectory Guided Recognition of Hand Gestures having only Global Motions
Authors: M. K. Bhuyan, P. K. Bora, D. Ghosh
Abstract:
One very interesting field of research in Pattern Recognition that has gained much attention in recent times is Gesture Recognition. In this paper, we consider a form of dynamic hand gestures that are characterized by the total movement of the hand (arm) in space; for these gestures, the shape of the hand (palm) during gesturing does not bear any significance. In our work, we propose a model-based method for tracking hand motion in space, thereby estimating the hand motion trajectory. We employ the dynamic time warping (DTW) algorithm for time alignment and normalization of the spatio-temporal variations that exist among samples belonging to the same gesture class. During training, one template trajectory and one prototype feature vector are generated for every gesture class. The features used in our work include both static and dynamic motion trajectory features. Recognition is accomplished in two stages: in the first stage, all unlikely gesture classes are eliminated by comparing the input gesture trajectory to all the template trajectories; in the next stage, the feature vector extracted from the input gesture is compared to all the class prototype feature vectors using a distance classifier. Experimental results demonstrate that our proposed trajectory estimator and classifier are suitable for a Human Computer Interaction (HCI) platform.
Keywords: Hand gesture, human computer interaction, key video object plane, dynamic time warping.
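The two-stage scheme lends itself to a compact sketch: a textbook dynamic time warping distance for the template-pruning stage, with a hypothetical `keep` parameter standing in for the paper's elimination criterion; the second-stage prototype-feature classifier is not shown.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Classic O(len_a * len_b) DTW between two 2-D motion trajectories."""
    n, m = len(traj_a), len(traj_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def prune_classes(input_traj, templates, keep=3):
    """Stage one: keep only the gesture classes whose template is DTW-closest."""
    scores = {name: dtw_distance(input_traj, t) for name, t in templates.items()}
    return sorted(scores, key=scores.get)[:keep]
```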
963 Real-Time Hand Tracking and Gesture Recognition System Using Neural Networks
Authors: Tin Hninn Hninn Maung
Abstract:
This paper introduces a hand gesture recognition system that recognizes gestures in real time in unconstrained environments. Efforts should be made to adapt computers to our natural means of communication: speech and body language. A simple and fast algorithm using orientation histograms is developed to recognize a subset of Myanmar Alphabet Language (MAL) static hand gestures. The pattern recognition system uses a transform that converts an image into a feature vector, which is then compared with the feature vectors of a training set of gestures. The final system is a perceptron implementation in MATLAB. This paper includes experiments on 33 hand postures and discusses the results. The experiments show that the system achieves an average recognition rate of 90% and is suitable for real-time applications.
Keywords: Hand gesture recognition, Orientation Histogram, Myanmar Alphabet Language, Perceptron network, MATLAB.
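A small sketch of the orientation-histogram transform described above, using gradient orientations weighted by gradient magnitude; the bin count is illustrative, and the nearest-neighbour comparison below is a stand-in for the paper's perceptron classifier.

```python
import numpy as np

def orientation_histogram(gray, bins=36):
    """Histogram of local edge orientations: a lighting-robust shape feature."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)       # normalise away overall contrast

def classify(gray, training_set):
    """Nearest stored gesture by Euclidean distance between histograms."""
    h = orientation_histogram(gray)
    return min(training_set, key=lambda name: np.linalg.norm(h - training_set[name]))
```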
962 Hand Gestures Based Emotion Identification Using Flex Sensors
Authors: S. Ali, R. Yunus, A. Arif, Y. Ayaz, M. Baber Sial, R. Asif, N. Naseer, M. Jawad Khan
Abstract:
In this study, we propose a gesture-to-emotion recognition method using flex sensors mounted on the metacarpophalangeal joints. The flex sensors are fixed in a wearable glove, and the data from the glove are sent to a PC over Wi-Fi. Four gestures: finger pointing, thumbs up, fist open and fist close, were performed by five subjects. Each gesture is categorized into a sad, happy or excited class based on the velocity and acceleration of the hand gesture. Seventeen inspectors observed the emotions and hand gestures of the five subjects, and the emotional states based on the inspectors' assessment were compared with the acquired movement speed data. Overall, we achieved 77% accurate results. The proposed design can therefore be used for emotional state detection applications.
Keywords: Emotion identification, emotion models, gesture recognition, user perception.
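The velocity- and acceleration-based categorization might look like the following sketch; the sampling interval and the thresholds separating sad, happy and excited are invented for illustration, not taken from the study.

```python
import numpy as np

def label_emotion(angles, dt=0.05, slow=1.0, fast=3.0):
    """Categorize a flex-sensor gesture by how vigorously it was performed."""
    a = np.asarray(angles, dtype=float)   # joint-angle samples from the glove
    vel = np.gradient(a, dt)              # angular velocity
    acc = np.gradient(vel, dt)            # angular acceleration
    vigor = np.abs(vel).mean() + 0.1 * np.abs(acc).mean()
    if vigor < slow:
        return "sad"
    return "happy" if vigor < fast else "excited"
```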
961 Hand Motion and Gesture Control of Laboratory Test Equipment Using the Leap Motion Controller
Authors: Ian A. Grout
Abstract:
In this paper, the design and development of a system to provide hand motion and gesture control of laboratory test equipment is considered and discussed. The Leap Motion controller is used to provide an input to control a laboratory power supply as part of an electronic circuit experiment. Through suitable hand motions and gestures, the power supply is controlled remotely, without the need to physically touch the equipment. As such, the system provides an alternative manner in which to control electronic equipment via a PC and is considered here within the field of human-computer interaction (HCI).
Keywords: Control, hand gesture, human computer interaction, test equipment.
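As a hedged sketch of how recognized gestures could drive a programmable power supply from the PC: the gesture names, SCPI strings and VISA resource address below are placeholders that depend on the actual instrument, and the Leap Motion recognition itself is assumed to happen upstream.

```python
import pyvisa

GESTURE_TO_SCPI = {            # hypothetical gesture-to-command mapping
    "swipe_up":   "VOLT UP",
    "swipe_down": "VOLT DOWN",
    "open_palm":  "OUTP ON",
    "fist":       "OUTP OFF",
}

def send_gesture(instrument, gesture):
    """Translate a recognized gesture into an SCPI command, if one is mapped."""
    cmd = GESTURE_TO_SCPI.get(gesture)
    if cmd:
        instrument.write(cmd)

rm = pyvisa.ResourceManager()
# psu = rm.open_resource("USB0::0x1234::0x5678::INSTR")  # placeholder address
# send_gesture(psu, "open_palm")
```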
960 Vision Based Hand Gesture Recognition
Authors: Pragati Garg, Naveen Aggarwal, Sanjeev Sofat
Abstract:
With the development of ubiquitous computing, current user interaction approaches with keyboard, mouse and pen are not sufficient. Due to the limitations of these devices, the usable command set is also limited. Direct use of the hands as an input device is an attractive method for providing natural human-computer interaction, which has evolved from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to fully fledged multi-participant Virtual Environment (VE) systems. Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. In this paper, a review of vision-based hand gesture recognition is presented. The existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying the open issues.
Keywords: Computer Vision, Hand Gesture, Hand Posture, Human Computer Interface.
959 Hand Gesture Recognition using Blob Detection for Immersive Projection Display System
Authors: Hasup Lee, Yoshisuke Tateyama, Tetsuro Ogi
Abstract:
We developed a vision interface framework for an immersive projection system, CAVE, in the virtual reality research field using hand gesture recognition with computer vision techniques. The background image was subtracted from the current image frame of a webcam, and the color space of the image was converted into HSV space. Then we masked skin regions using a skin color range threshold and applied a noise reduction operation. We made blobs from the image, and gestures were recognized using these blobs. Using our hand gesture recognition, we could implement an effective interface for the CAVE without cumbersome devices.
Keywords: CAVE, Computer Vision, Gesture Recognition, Virtual Reality.
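The reconstructed pipeline above (background subtraction, HSV skin masking, noise reduction, blob extraction) maps naturally onto OpenCV; in this sketch the skin-color bounds and the difference threshold are illustrative values, not those tuned for the CAVE system.

```python
import cv2
import numpy as np

def find_hand_blob(frame_bgr, background_bgr, lo=(0, 40, 60), hi=(25, 255, 255)):
    """Background subtraction -> HSV skin mask -> noise reduction -> largest blob."""
    diff = cv2.absdiff(frame_bgr, background_bgr)
    moving = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) > 20          # foreground only
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array(lo), np.array(hi)) > 0       # skin-color range
    mask = (moving & skin).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                                     # only background
        return None
    biggest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()            # skip label 0
    return stats[biggest]             # x, y, w, h, area of the hand blob
```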
958 Infrared Camera-Based Hand Gesture Space Touch System Implementation of Smart Device Environment
Authors: Yang-Keun Ahn, Kwang-Soon Choi, Young-Choong Park, Kwang-Mo Jung
Abstract:
This paper proposes a method to recognize the tip of a finger and a space-touch hand gesture using an infrared camera in a smart device environment. The proposed method estimates the tip of a finger with a curvature-based ellipse fitting algorithm and verifies that the estimated object is indeed a finger using the rectangular area of the fitted ellipse. The feature extracted from the verified fingertip is used to implement mouse movement and a clicking gesture. The proposed algorithm was implemented on an actual smart device to test the proposed method. Empirical parameters were obtained from the keypad software and an image analysis tool for performance optimization, and a comparative analysis with conventional research showed improved performance with the proposed method.
Keywords: Infrared camera, Hand gesture, Smart device, Space touch.
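A simplified take on the curvature-based ellipse-fitting idea, approximating high-curvature contour segments by the points around convex-hull vertices; the neighbourhood radius and the fingertip size bound are invented for illustration.

```python
import cv2
import numpy as np

def fingertip_candidates(mask, radius=15, max_axis=40):
    """Fit ellipses near convex-hull vertices; small ellipses suggest fingertips."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    tips = []
    for c in contours:
        if len(c) < 5:                     # fitEllipse needs at least 5 points
            continue
        hull = cv2.convexHull(c)
        for p in hull[:, 0, :]:
            seg = c[np.linalg.norm(c[:, 0, :] - p, axis=1) < radius]
            if len(seg) >= 5:
                (cx, cy), (ma, mb), _ = cv2.fitEllipse(seg)
                if max(ma, mb) < max_axis:            # fingertip-sized ellipse
                    tips.append((int(cx), int(cy)))
    return tips
```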
957 Hand Gesture Recognition: Sign to Voice System (S2V)
Authors: Oi Mean Foong, Tan Jung Low, Satrio Wibowo
Abstract:
Hand gestures are one of the typical methods used in sign language for non-verbal communication. They are most commonly used by people who have hearing or speech problems to communicate among themselves or with normal people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for the end users. This paper presents a system prototype that is able to automatically recognize sign language to help normal people communicate more effectively with hearing or speech impaired people. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification purposes. The experimental results show that the neural network achieves satisfactory results for sign-to-voice translation.
Keywords: Hand gesture detection, neural network, sign language, sequence detection.
956 Hand Gesture Recognition Based on Combined Features Extraction
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Bernd Michaelis
Abstract:
Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and Human Computer Interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the segmentation and preprocessing stage, color and a 3D depth map are used to detect the hands, and the hand trajectory is then obtained using the Mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation and velocity with respect to Cartesian coordinates are used, and k-means clustering is employed to generate the HMM codewords. In the final stage, classification, the Baum-Welch algorithm is used to fully train the HMM parameters, and the gestures of alphabets and numbers are recognized using a Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system successfully recognizes hand gestures with a 98.33% recognition rate.
Keywords: Gesture Recognition, Computer Vision & Image Processing, Pattern Recognition.
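The feature-extraction and codeword steps can be sketched as follows, assuming the hand-centroid trajectory has already been obtained by the tracking stage; the codebook size is illustrative rather than the paper's value.

```python
import numpy as np
from sklearn.cluster import KMeans

def combined_features(traj, dt=1.0):
    """Per-frame feature: location, orientation and velocity of the hand centroid."""
    traj = np.asarray(traj, dtype=float)          # (T, 2) hand positions
    vel = np.gradient(traj, dt, axis=0)
    orient = np.arctan2(vel[:, 1], vel[:, 0])[:, None]
    return np.hstack([traj, orient, vel])         # (T, 5) feature matrix

def quantize(features, n_codewords=32):
    """k-means vector quantisation turns continuous features into HMM codewords."""
    km = KMeans(n_clusters=n_codewords, n_init=10).fit(features)
    return km, km.labels_       # labels_ is the discrete observation sequence
```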
955 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), and AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), and other fields. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and by those with speech disorders, and communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for Lesotho's Sesotho and English language interpretation, to help bridge the communication problems encountered by the communities mentioned. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses the Canny pruning Haar and Haar cascade detection algorithms; Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm that performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera, and under the different lighting conditions considered, the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
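In OpenCV's Python bindings, the Canny-pruning optimisation named in the title is exposed as a detectMultiScale flag, as in the sketch below; the cascade file name and the detection parameters are placeholders.

```python
import cv2

cascade = cv2.CascadeClassifier("hand.xml")   # placeholder: a trained hand cascade

def detect_hands(frame_bgr):
    """Haar detection with Canny pruning: edge-sparse regions are rejected early."""
    gray = cv2.equalizeHist(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    return cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=4,
        flags=cv2.CASCADE_DO_CANNY_PRUNING,   # the pruning step named above
        minSize=(40, 40),
    )
```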
954 A Robust Method for Hand Tracking Using Mean-shift Algorithm and Kalman Filter in Stereo Color Image Sequences
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Robert Niese, Bernd Michaelis
Abstract:
Real-time hand tracking is a challenging task in many computer vision applications such as gesture recognition. This paper proposes a robust method for hand tracking in a complex environment using Mean-shift analysis and a Kalman filter in conjunction with a 3D depth map. The depth information, obtained by passive stereo measurement based on cross-correlation and the known calibration data of the cameras, solves the overlapping problem between the hands and the face. Mean-shift analysis uses the gradient of the Bhattacharyya coefficient as a similarity function to derive the hand candidate that is most similar to a given hand target model, and a Kalman filter is then used to estimate the position of the hand target. The results of hand tracking, tested on various video sequences, are robust to changes in shape as well as partial occlusion.
Keywords: Computer Vision and Image Analysis, Object Tracking, Gesture Recognition.
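A bare-bones version of the Mean-shift-plus-Kalman loop using OpenCV's built-ins; here the similarity surface is a colour-histogram back-projection rather than the paper's depth-augmented Bhattacharyya formulation, and the noise covariance is an illustrative value.

```python
import cv2
import numpy as np

def make_kalman():
    """Constant-velocity Kalman filter over the state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array(
        [[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    return kf

def track_step(back_proj, window, kf):
    """One frame: mean-shift refines the window, Kalman smooths the centre."""
    kf.predict()
    _, window = cv2.meanShift(
        back_proj, window, (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1))
    x, y, w, h = window
    kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    return window, kf.statePost[:2].ravel()   # window and smoothed hand centre
```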
953 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen, as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify the 11-element feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabet signs or a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant and portable.
Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.
952 Hand Controlled Mobile Robot Applied in Virtual Environment
Authors: Jozsef Katona, Attila Kovari, Tibor Ujbanyi, Gergely Sziladi
Abstract:
With the development of IT systems, human-computer interaction is evolving ever faster, and newer communication methods are becoming available for human-machine interaction. In this article, the application of a hand gesture controlled human-computer interface is introduced through the example of a mobile robot. The control of the mobile robot is implemented in a realistic virtual environment, which is advantageous for running different tests and parallel examinations without purchasing expensive equipment. The usability of the implemented hand gesture control has been evaluated by test subjects. According to the test subjects' opinions, the system is easy to use, and they would recommend its application in other fields too.
Keywords: Human-machine interface, hand control, mobile robot, virtual environment.
951 Gesture Recognition by Data Fusion of Time-of-Flight and Color Cameras
Authors: Piercarlo Dondi, Luca Lombardi, Marco Porta
Abstract:
In recent years, numerous Human-Computer Interaction applications have exploited the capabilities of Time-of-Flight cameras to achieve ever more comfortable and precise interactions; gesture recognition, in particular, is one of the most active fields. This work presents a new method for interacting with a virtual object in 3D space. Our approach is based on the fusion of depth data, supplied by a ToF camera, with color information, supplied by an HD webcam. The hand detection procedure does not require any learning phase and is able to concurrently manage gestures of two hands. The system is robust to the presence of other objects or people in the scene, thanks to the use of a Kalman filter for maintaining the tracking of the hands.
Keywords: Gesture recognition, human-computer interaction, Time-of-Flight camera.
950 Parametric Primitives for Hand Gesture Recognition
Authors: Sanmohan Krüger, Volker Krüger
Abstract:
Imitation learning is considered to be an effective way of teaching humanoid robots, and action recognition is the key step in imitation learning. In this paper, an online algorithm to recognize parametric actions with object context is presented. Objects are key instruments in understanding an action when there is uncertainty, and ambiguities arising in similar actions can be resolved with object context. We classify actions according to the changes they make to the object space: actions that produce the same state change in the object movement space are classified as belonging to the same class. This allows us to define several classes of actions where the members of each class are connected with a semantic interpretation.
Keywords: Parametric actions, action primitives, hand gesture recognition, imitation learning.
949 Implementing a Visual Servoing System for Robot Controlling
Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari
Abstract:
Nowadays, with the emergence of new applications such as robot control through image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing that aims to control robots. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification, we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm with the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent; on noisy images, the algorithm showed a correct recognition rate of 97.92 percent.
Keywords: Back propagation neural network, feature vector, hand gesture recognition, k-nearest neighbor, learning vector quantization neural network, robot control, spatio-temporal volume, visual servoing.
948 ConductHome: Gesture Interface Control of Home Automation Boxes
Authors: J. Branstett, V. Gagneux, A. Leleu, B. Levadoux, J. Pascale
Abstract:
This paper presents ConductHome, an interface that controls home automation systems with a Leap Motion using "invariant gesture protocols". The interface is meant to simplify the interaction of users with their environment. A hardware part allows the Leap Motion to be carried around the house, while a software part interacts with the home automation box and displays useful information for the user. An objective of this work is the development of a natural, invariant and simple gesture control interface to help elderly people and people with disabilities.
Keywords: Automation, ergonomics, gesture recognition, interoperability, Leap Motion, invariant.
947 Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network
Authors: M. Jazouli, S. Elhoufi, A. Majda, A. Zarghili, R. Aalouane
Abstract:
Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors; persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior, providing a clinical tool that facilitates the diagnosis of ASD for doctors. We focus on the automatic identification of five repetitive gestures among autistic children in real time: body rocking, hand flapping, fingers flapping, hand on the face, and hands behind the back. In this paper, we present a gesture recognition system for children with autism that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first module uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last proposes a neural connectionist model to perform the supervised classification of the data. The experimental results show that our system achieves a recognition rate above 93.3%.
Keywords: ASD, stereotypical motor movements, repetitive gesture, Kinect, artificial neural network.
946 Real-time 3D Feature Extraction without Explicit 3D Object Reconstruction
Authors: Kwangjin Hong, Chulhan Lee, Keechul Jung, Kyoungsu Oh
Abstract:
For communication between humans and computers in an interactive computing environment, gesture recognition is studied vigorously, and many studies have proposed efficient recognition algorithms using images captured by 2D cameras. However, these methods are limited in that the extracted features cannot fully represent the object in the real world. Although many studies have used 3D features instead of 2D features for more accurate gesture recognition, problems such as the processing time required to generate 3D objects remain unsolved in related research. We therefore propose a method to extract 3D features combined with 3D object reconstruction. This method uses a modified GPU-based visual hull generation algorithm that disables unnecessary processes, such as texture calculation, to generate three kinds of 3D projection maps as the 3D feature: the nearest boundary, the farthest boundary, and the thickness of the object projected on the base plane. In the experimental results section, we present the results of the proposed method on eight human postures: T shape, both hands up, right hand up, left hand up, hands front, stand, sit and bend, and compare the computational time of the proposed method with that of previous methods.
Keywords: Fast 3D Feature Extraction, Gesture Recognition, Computer Vision.
945 2-Dimensional Finger Gesture Based Mobile Robot Control Using Touch Screen
Authors: O. Ejale, N.B. Siddique, R. Seals
Abstract:
The purpose of this study was to present a reliable means of human-computer interfacing based on finger gestures made in two dimensions, which could be interpreted and used to control a remote robot's movement. The gestures were captured and interpreted using an algorithm based on trigonometric functions that calculates the angular displacement from one point of touch to another as the user's finger moves within a time interval, thereby allowing pattern spotting of the captured gesture. This paper presents the design and implementation of such a gesture-based user interface utilizing the aforementioned algorithm, and these techniques were then used to control a remote mobile robot's movement. A resistive touch screen was selected as the gesture sensor, and a programmed microcontroller was used to interpret the gestures.
Keywords: 2-Dimensional interface, finger gesture, mobile robot control, touch screen.
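The trigonometric interpretation of a two-point finger movement reduces to an atan2 call, as in this sketch; the dead-zone radius and the command names sent to the robot are hypothetical.

```python
import math

def angular_displacement(p0, p1):
    """Angle (degrees, 0-360) of the finger's movement from touch p0 to touch p1."""
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0])) % 360

def to_command(p0, p1, dead_zone=5.0):
    """Map a swipe direction onto a coarse robot-movement command."""
    if math.hypot(p1[0] - p0[0], p1[1] - p0[1]) < dead_zone:
        return "stop"
    a = angular_displacement(p0, p1)
    if a < 45 or a >= 315:
        return "right"
    if a < 135:
        return "down"      # screen y grows downwards
    if a < 225:
        return "left"
    return "up"
```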
944 Augmented Reality Interaction System in 3D Environment
Authors: Sunhyoung Lee, Askar Akshabayev, Beisenbek Baisakov, Youngjoon Han, Hernsoo Hahn
Abstract:
It is important to be able to provide input information without additional devices in an AR system, and one solution is to use the hand for augmented reality applications. Many researchers have proposed different solutions for hand interfaces in augmented reality; histogram analysis and connected-component analysis are examples. Searching in various directions is a robust way to recognize the hand, but it takes too much computation time, and the background must be distinguishable from skin color. This paper proposes a hand tracking method to control 3D objects in augmented reality using a depth device and skin color. This work also discusses the relationship between several markers, which is based on the relationship between the camera and each marker: one marker is used for displaying the virtual object, and three markers are used for detecting hand gestures and manipulating the virtual object.
Keywords: Augmented Reality, depth map, hand recognition, Kinect, marker, YCbCr color model.
943 Hands-off Parking: Deep Learning Gesture-Based System for Individuals with Mobility Needs
Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Javier Araluce, Joshué Pérez
Abstract:
Nowadays, individuals with mobility needs face a significant challenge when docking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for an improperly placed vehicle. To address this issue, this paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracing body markers, with a special emphasis on hand gestures; hand detection is done by generating 21 reference points for each hand. After data capture, the project employs a Multilayer Perceptron (MLP) for in-depth gesture classification. This tandem of MediaPipe's extraction prowess and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on a built-in dataset. To demonstrate domain adaptation, a framework based on the Robot Operating System 2 (ROS2) as a communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system was transitioned to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
Keywords: Gesture detection, MediaPipe, Multilayer Perceptron, Robot Operating System.
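The landmark-capture and classification phases can be sketched with the MediaPipe Hands API and a scikit-learn MLP standing in for the paper's network; the layer sizes and confidence threshold are illustrative, and labelled training data must be supplied.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

def landmarks_from_frame(frame_bgr):
    """Return the 21 (x, y, z) hand landmarks as a flat 63-element vector, or None."""
    res = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_hand_landmarks:
        return None
    lm = res.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
# clf.fit(X_train, y_train)            # X_train: (n, 63) landmark vectors
# gesture = clf.predict(landmarks_from_frame(frame)[None, :])
```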
942 Authoring Tactile Gestures: Case Study for Emotion Stimulation
Authors: Rodrigo Lentini, Beatrice Ionascu, Friederike A. Eyssel, Scandar Copti, Mohamad Eid
Abstract:
The haptic modality has brought a new dimension to human-computer interaction by engaging the human sense of touch. However, designing appropriate haptic stimuli, and in particular tactile stimuli, for various applications is still challenging. To tackle this issue, we present an intuitive system that facilitates the authoring of tactile gestures for various applications. The system transforms a hand gesture into a tactile gesture that can be rendered using a home-made haptic jacket. A case study is presented to demonstrate the ability of the system to develop tactile gestures that are recognizable by human subjects. Four tactile gestures are identified and tested to intensify the following four emotional responses: high valence – high arousal, high valence – low arousal, low valence – high arousal, and low valence – low arousal. A usability study with 20 participants demonstrated a high correlation between the selected tactile gestures and the intended emotional reactions. The results of this study can be used in a wide spectrum of applications, ranging from gaming to interpersonal communication and multimodal simulations.
Keywords: Tactile stimulation, tactile gesture, emotion reactions, arousal, valence.
941 A Holographic Infotainment System for Connected and Driverless Cars: An Exploratory Study of Gesture Based Interaction
Authors: Nicholas Lambert, Seungyeon Ryu, Mehmet Mulla, Albert Kim
Abstract:
In this paper, an interactive in-car interface called HoloDash is presented. It is intended to provide information and infotainment in both autonomous vehicles and "connected cars", vehicles equipped with Internet access via cellular services. The research focuses on the development of interactive avatars for this system and its gesture-based control system. This is a case study for the development of a possible human-centred means of presenting a connected or autonomous vehicle's On-Board Diagnostics through a projected "holographic" infotainment system, termed a Holographic Human Vehicle Interface (HHIV), as it utilises a dashboard projection unit and gesture detection. The research also examines the suitability of gestures in an automotive environment, given that the system might be used in both driver-controlled and driverless vehicles. Using Human Centred Design methods, questions were posed to test subjects, and preferences were discovered regarding the gesture interface and the user experience for passengers within the vehicle. These affirm the benefits of this mode of visual communication for both connected and driverless cars.
Keywords: Holographic interface, human-computer interaction, user-centered design, gesture.
940 Sound Selection for Gesture Sonification and Manipulation of Virtual Objects
Authors: Benjamin Bressolette, S´ebastien Denjean, Vincent Roussarie, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet
Abstract:
New sensors and technologies, such as microphones, touchscreens or infrared sensors, are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces (HMIs). The interactions with such tools might be cognitively expensive and thus unsuitable for driving tasks: it could, for instance, be dangerous to use touchscreens with visual feedback while driving, as this distracts the driver's visual attention away from the road. Furthermore, new technologies in car cockpits modify the interactions of users with the central system. In particular, touchscreens are preferred to arrays of buttons for space and design purposes; however, the buttons' tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with auditory feedback might therefore constitute an interesting alternative for interacting with the HMI. Indeed, gestures can be performed without vision, which means that the driver's visual attention can be totally dedicated to the driving task. The auditory feedback can inform the driver both about the task performed on the interface and about the performed gesture, which might constitute a possible solution to the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing the cognitive load thanks to the proposed multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies to transmit the dynamic information of VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. We chose two VO parameters to sonify: the hand-VO distance and the VO velocity. Two kinds of sound parameters can be used to sonify the VO behavior: spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performances showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sound in accomplishing the task.
Keywords: Auditory feedback, gesture, sonification, sound perception, virtual object.
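As a sketch of the temporal strategy, an amplitude modulation whose rate follows the virtual object's velocity might be synthesized as follows; the carrier frequency and the speed-to-rate gain are invented for illustration.

```python
import numpy as np

def am_feedback(vo_speed, duration=0.2, sr=44100, carrier=440.0, gain=8.0):
    """Amplitude-modulate a carrier: faster VO motion gives a faster tremolo."""
    t = np.arange(int(duration * sr)) / sr
    mod = 0.5 * (1.0 + np.sin(2 * np.pi * gain * vo_speed * t))  # 100% depth
    return mod * np.sin(2 * np.pi * carrier * t)                 # samples in [-1, 1]

# The spectral (pitch) strategy could instead scale `carrier` with the hand-VO
# distance; either buffer can be auditioned with, e.g., the sounddevice package:
# import sounddevice as sd; sd.play(am_feedback(1.5), 44100)
```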
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 966939 Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect
Authors: Maha Jazouli
Abstract:
Suicide is one of the leading causes of death among prisoners, both in Canada and internationally. In recent years, rates of suicide attempts and self-harm have increased, with hanging being the most frequently used method. The objective of this article is to propose a method to automatically detect suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Tests show that the proposed system gives satisfactory results. This smart video surveillance system can assist the staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.
Keywords: Suicide detection, Kinect Azure, RGB-D camera, SVM, gesture recognition.
938 Vision Based Hand Gesture Recognition Using Generative and Discriminative Stochastic Models
Authors: Mahmoud Elmezain, Samar El-shinawy
Abstract:
Many approaches to pattern recognition are founded on probability theory and can be broadly characterized as either generative or discriminative, according to whether or not they model the distribution of the image features. Generative and discriminative models have very different characteristics, as well as complementary strengths and weaknesses. In this paper, we study these models to recognize the patterns of alphabet characters (A-Z) and numbers (0-9). To handle isolated patterns, a generative model, the Hidden Markov Model (HMM), and discriminative models such as the Conditional Random Field (CRF), Hidden Conditional Random Field (HCRF) and Latent-Dynamic Conditional Random Field (LDCRF), with different window sizes, are applied to the extracted pattern features. The gesture recognition rate improves initially as the window size increases, but degrades as the window size increases further. Experimental results show that the LDCRF gives the best results, outperforming the CRF, HCRF and HMM at a window size of 4. Additionally, our results show overall recognition rates of 91.52%, 95.28%, 96.94% and 98.05% for CRF, HCRF, HMM and LDCRF, respectively.
Keywords: Statistical Pattern Recognition, Generative Model, Discriminative Model, Human Computer Interaction.