Search results for: sign language
934 Using Different Aspects of the Signings for Appearance-based Sign Language Recognition
Authors: Morteza Zahedi, Philippe Dreuw, Thomas Deselaers, Hermann Ney
Abstract:
Sign language is used by deaf and hard of hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only means of communication for deaf people. Sign language comprises different components of visual actions made by the signer, using the hands, the face, and the torso, to convey meaning. To exploit these different aspects of the signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels, employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to select the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
Keywords: American sign language, appearance-based features, Feature combination, Sign language recognition
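As a minimal illustration of the late, model-level fusion described in the abstract, the sketch below (hypothetical function name and toy scores, not the authors' implementation) combines the class log-probabilities of two independently trained models with a weighted log-linear sum and picks the highest-scoring sign.

```python
import numpy as np

def log_linear_fusion(log_probs_a, log_probs_b, alpha=0.5, beta=0.5):
    """Late fusion: weighted log-linear combination of two models' class scores."""
    combined = alpha * log_probs_a + beta * log_probs_b
    return int(np.argmax(combined))

# Toy scores from an appearance-based model and a motion-based model (3 signs).
scores_appearance = np.log(np.array([0.2, 0.7, 0.1]))
scores_motion = np.log(np.array([0.3, 0.5, 0.2]))
print(log_linear_fusion(scores_appearance, scores_motion))  # -> 1 (second sign wins)
```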
933 Pakistan Sign Language Recognition Using Statistical Template Matching
Authors: Aleem Khalid Alvi, M. Yousuf Bin Azhar, Mehmood Usman, Suleman Mumtaz, Sameer Rafiq, RaziUr Rehman, Israr Ahmed
Abstract:
Sign language recognition has been a topic of research since the first data glove was developed. Many researchers have attempted to recognize sign language through various techniques. However, none of them have ventured into the area of Pakistan Sign Language (PSL). The Boltay Haath project aims at recognizing PSL gestures using Statistical Template Matching. The primary input device is the DataGlove5 developed by 5DT. Alternative approaches use camera-based recognition which, being sensitive to environmental changes, are not always a good choice. This paper explains the use of Statistical Template Matching for gesture recognition in Boltay Haath. The system recognizes one-handed alphabet signs from PSL.
Keywords: Gesture Recognition, Pakistan Sign Language, DataGlove, Human Computer Interaction, Template Matching, Boltay Haath.
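A minimal sketch of the statistical template matching idea, assuming a per-sign mean template built from glove sensor vectors and nearest-template classification by Euclidean distance (the Boltay Haath system's actual statistics and distance measure may differ):

```python
import numpy as np

def train_templates(samples_by_sign):
    """Build one statistical template (mean sensor vector) per sign."""
    return {sign: np.mean(np.asarray(v), axis=0) for sign, v in samples_by_sign.items()}

def classify(reading, templates):
    """Assign a glove reading to the sign whose template is closest (Euclidean)."""
    return min(templates, key=lambda s: np.linalg.norm(reading - templates[s]))

# Toy data: 5 finger-bend values per reading (values are made up).
training = {"A": [[0.9, 0.8, 0.85, 0.9, 0.1]], "B": [[0.1, 0.1, 0.15, 0.1, 0.1]]}
templates = train_templates(training)
print(classify(np.array([0.88, 0.82, 0.8, 0.92, 0.12]), templates))  # -> "A"
```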
932 A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network
Authors: Marcio Leal, Marta Villamil
Abstract:
Computational recognition of sign languages aims to allow greater social and digital inclusion of deaf people through interpretation of their language by computer. This article presents a model for recognizing two of the global parameters of sign languages: hand configuration and hand movement. Hand motion is captured with infrared technology and the hand joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the recognition method itself, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this technology can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as a case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%.
Keywords: Sign language recognition, computer vision, infrared, artificial neural network, dynamic time warping.
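The hand-motion matching step relies on Dynamic Time Warping; a minimal textbook sketch of the DTW distance between two joint trajectories (illustrative only, not the authors' implementation) is shown below.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two motion trajectories
    (each a sequence of joint-position vectors)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy 1D trajectories: a captured motion vs. a dictionary entry.
print(dtw_distance([[0], [1], [2], [3]], [[0], [0], [1], [2], [3]]))  # -> 0.0 (sequences align)
```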
931 Hand Gesture Recognition: Sign to Voice System (S2V)
Authors: Oi Mean Foong, Tan Jung Low, Satrio Wibowo
Abstract:
Hand gestures are one of the typical methods used in sign language for non-verbal communication. They are most commonly used by people who have hearing or speech problems to communicate among themselves or with hearing people. Various sign language systems have been developed by manufacturers around the globe, but they are neither flexible nor cost-effective for the end users. This paper presents a system prototype that is able to automatically recognize sign language to help hearing people communicate more effectively with the hearing or speech impaired. The Sign to Voice system prototype, S2V, was developed using a feed-forward neural network for two-sequence sign detection. Different sets of universal hand gestures were captured from a video camera and used to train the neural network for classification. The experimental results show that the neural network achieved satisfactory results for sign-to-voice translation.
Keywords: Hand gesture detection, neural network, sign language, sequence detection.
930 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System
Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur
Abstract:
Sign Language (SL) is used by deaf people and by others who cannot speak but can hear, or who have problems with spoken language due to some disability. It is a visual gesture language that makes use of one or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way of providing an interface for communicating with hearing people through a computer. In this paper, an avatar-based dictionary is proposed for a text to Indian Sign Language (ISL) generation system. This work also presents a literature review of the SL corpora available for various SLs over the years. An ISL generation system requires a written form of SL, and several techniques are available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin interface to manage the dictionary, i.e., to modify, add, or delete words. Through this interface, HamNoSys notations can be created and stored in a database, and each notation can then be converted manually into its corresponding SiGML file. The system takes a natural language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains such as the healthcare sector, media, educational institutes, commercial sectors, transportation services, etc. This work will help researchers understand the various techniques used for writing SL and for building sign language generation systems.
Keywords: Avatar, dictionary, HamNoSys, hearing-impaired, Indian Sign Language, sign language.
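A minimal sketch of the dictionary data structure implied by the abstract, assuming hypothetical entries that map a spoken-language word to a HamNoSys notation and a SiGML animation file (the notation strings and file names are placeholders, not real dictionary records):

```python
# Each record associates a spoken-language word (English or Hindi) with its
# HamNoSys notation and the SiGML file that drives the 3D avatar animation.
isl_dictionary = {
    "hello":   {"hindi": "नमस्ते", "hamnosys": "<hamnosys string>", "sigml": "hello.sigml"},
    "teacher": {"hindi": "शिक्षक", "hamnosys": "<hamnosys string>", "sigml": "teacher.sigml"},
}

def lookup(word):
    """Return the SiGML animation file for a word, if a sign entry exists."""
    entry = isl_dictionary.get(word.lower())
    return entry["sigml"] if entry else None

print(lookup("hello"))  # -> "hello.sigml"
```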
929 Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language
Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Abstract:
Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, the identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also consider a "None" class for images whose facial expression cannot be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider application in gesture recognition and Human Computer Interaction (HCI) systems.
Keywords: Annotated facial expression dataset, sign language recognition, gesture recognition, sequenced facial expression dataset.
928 The Effect of Ambient Occlusion Shading on Perception of Sign Language Animations
Authors: Nicoletta Adamo-Villani, Joe Kasenga, Tiffany Jen, Bryan Colbourn
Abstract:
The goal of the study reported in this paper was to determine whether Ambient Occlusion Shading (AOS) has a significant effect on users' perception of American Sign Language (ASL) finger spelling animations. Seventy-one (71) subjects participated in the study; all subjects were fluent in ASL. The participants were asked to watch forty (40) sign language animation clips representing twenty (20) finger-spelled words. Twenty (20) clips did not show ambient occlusion, whereas the other twenty (20) were rendered using ambient occlusion shading. After viewing each animation, subjects were asked to type the word being finger-spelled and rate its legibility. Findings show that the presence of AOS had a significant effect on the subjects' perception of the signed words. Subjects were able to recognize the animated words rendered with AOS with a higher level of accuracy, and the legibility ratings of the animations showing AOS were consistently higher across subjects.
Keywords: Sign language, animation, ambient occlusion shading, deaf education.
927 Video Quality Assessment using Visual Attention Approach for Sign Language
Authors: Julia Kucerova, Jaroslav Polec, Darina Tarcsiova
Abstract:
Visual information is very important in human perception of the surrounding world. Video is one of the most common ways to capture visual information. Video has many benefits and can be used in various applications. For the most part, video is used for entertainment and relaxation; moreover, it can improve the quality of life of deaf people. Visual information is crucial for hearing impaired people, as it allows them to communicate in person using sign language; some parts of the person being watched are more important than others (e.g., the hands and face). Therefore, information about the visually relevant parts of the image allows us to design an objective metric for this specific case. In this paper, we present an example of an objective metric based on human visual attention and detection of the salient object in the observed scene.
Keywords: Sign language, objective video quality, visual attention, saliency.
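A minimal sketch of a saliency-weighted objective metric in the spirit of the abstract, assuming a precomputed saliency map that emphasizes the hands and face (illustrative weighting, not the authors' metric):

```python
import numpy as np

def saliency_weighted_mse(reference, distorted, saliency):
    """Objective quality score that weights pixel errors by a saliency map,
    so errors on visually relevant regions count more than elsewhere."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    w = saliency / (saliency.sum() + 1e-12)
    return float((err * w).sum())

# Toy 4x4 frames; the saliency map emphasizes the lower-right region.
ref = np.zeros((4, 4)); dist = ref.copy(); dist[3, 3] = 10.0
sal = np.ones((4, 4)); sal[2:, 2:] = 5.0
print(saliency_weighted_mse(ref, dist, sal))
```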
926 Tree Sign Patterns of Small Order that Allow an Eventually Positive Matrix
Authors: Ber-Lin Yu, Jie Cui, Hong Cheng, Zhengfeng Yu
Abstract:
A sign pattern is a matrix whose entries belong to the set {+, −, 0}. An n-by-n sign pattern A is said to allow an eventually positive matrix if there exists a real matrix B with the same sign pattern as A and a positive integer k0 such that B^k > 0 for all k ≥ k0. It is well known that identifying and classifying the n-by-n sign patterns that allow an eventually positive matrix are posed as two open problems. In this article, the tree sign patterns of small order that allow an eventually positive matrix are classified completely.
Keywords: Eventually positive matrix, sign pattern, tree.
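For intuition, a small numerical sketch (a heuristic check, not a substitute for the combinatorial classification in the paper) raises a matrix realizing a sign pattern to successive powers and reports the first exponent from which all computed powers are entrywise positive:

```python
import numpy as np

def appears_eventually_positive(A, max_power=50):
    """Heuristic check: return the first k0 from which every computed power
    A^k (k0 <= k <= max_power) is entrywise positive, or None if no such k0.
    This is numerical evidence, not a proof, of eventual positivity."""
    A = np.asarray(A, dtype=float)
    P = np.eye(A.shape[0])
    first = None
    for k in range(1, max_power + 1):
        P = P @ A
        if np.all(P > 0):
            first = first if first is not None else k
        else:
            first = None
    return first

# A matrix realizing the sign pattern [[+, +], [+, -]] that is eventually positive.
print(appears_eventually_positive(np.array([[2.0, 1.0], [1.0, -0.5]])))  # -> 2
```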
925 A Note on Potentially Power-Positive Sign Patterns
Authors: Ber-Lin Yu, Ting-Zhu Huang
Abstract:
In this note, some properties of potentially power-positive sign patterns are established, and all the potentially power-positive sign patterns of order ≤ 3 are classified completely.
Keywords: Sign pattern, potentially eventually positive sign pattern, potentially power-positive sign pattern.
924 Continuous Text Translation Using Text Modeling in the Thetos System
Authors: Nina Suszczanska, Przemyslaw Szmal, Slawomir Kulikow
Abstract:
In this paper a method of modeling text for Polish is discussed. The method is aimed at transforming continuous input text into a text consisting of sentences in so-called canonical form, whose characteristics include, among others, a complete structure as well as the absence of anaphora and ellipses. The transformation is lossless with respect to the content of the text being transformed. The modeling method has been worked out for the needs of the Thetos system, which translates Polish written texts into Polish sign language. We believe that the method can also be used in various applications that deal with natural language, e.g. in a text summary generator for Polish.
Keywords: Anaphora, machine translation, NLP, sign language, text syntax.
923 Key Frames Extraction for Sign Language Video Analysis and Recognition
Authors: Jaroslav Polec, Petra Heribanová, Tomáš Hirner
Abstract:
In this paper we propose a method for finding the video frames representing one sign in the finger alphabet. The method is based on determining hand location, segmentation, and the use of standard video quality evaluation metrics. Metric calculation is performed only in regions of interest. A sliding mechanism for finding local extrema and an adaptive threshold based on local averaging are used for key frame selection. The success rate is evaluated by recall, precision, and the F1 measure. The effectiveness of the method is compared with the metrics applied to all frames. The proposed method is fast, effective, and relatively easy to implement through simple preprocessing of the input video and subsequent use of tools designed for video quality measurement.
Keywords: Key frame, video, quality, metric, MSE, MSAD, SSIM, VQM, sign language, finger alphabet.
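A minimal sketch of the key-frame selection idea, assuming the quality metric is a per-frame MSE computed on the segmented hand region and that key frames are local extrema falling below a locally averaged adaptive threshold (simplified relative to the paper's full pipeline):

```python
import numpy as np

def key_frame_indices(roi_frames, window=5):
    """Select key frames from a list of grayscale hand-region arrays: compute
    MSE between consecutive regions, then keep frames that are local minima of
    the metric (little hand motion) and lie below a local moving average."""
    mse = np.array([np.mean((roi_frames[i].astype(float) - roi_frames[i - 1].astype(float)) ** 2)
                    for i in range(1, len(roi_frames))])
    keys = []
    for i in range(1, len(mse) - 1):
        lo, hi = max(0, i - window), min(len(mse), i + window + 1)
        if mse[i] <= mse[i - 1] and mse[i] <= mse[i + 1] and mse[i] < mse[lo:hi].mean():
            keys.append(i + 1)  # +1 because mse[i] compares frame i+1 with frame i
    return keys
```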
922 A Preliminary Study on the Eventual Positivity of Irreducible Tridiagonal Sign Patterns
Authors: Ber-Lin Yu
Abstract:
Motivated by Berman et al. [Sign patterns that allow eventual positivity, ELA, 19(2010): 108-120], we concentrate on the potential eventual positivity of irreducible tridiagonal sign patterns. The minimal potential eventual positivity of irreducible tridiagonal sign patterns of order less than six is established, and all the minimal potentially eventually positive tridiagonal sign patterns of order ≤ 5 are identified. Our results indicate that if an irreducible tridiagonal sign pattern A of order less than six is minimal potentially eventually positive, then A requires eventual positivity.
Keywords: Eventual positivity, potentially positive sign pattern, tridiagonal sign pattern, minimal potentially positive sign pattern.
921 Some Results of Sign Patterns Allowing Simultaneous Unitary Diagonalizability
Authors: Xin-Lei Feng, Ting-Zhu Huang
Abstract:
Allowing diagonalizability of sign patterns is still an open problem. In this paper, we carefully discuss the allowing of unitary diagonalizability for two sign patterns. Some necessary and sufficient conditions for allowing unitary diagonalizability are also obtained.
Keywords: Sign pattern, unitary diagonalizability, eigenvalue, allowing diagonalizability.
920 Construction Methods for Sign Patterns Allowing Nilpotence of Index k
Authors: Jun Luo
Abstract:
For a nilpotent matrix B, the smallest integer k such that B^k = 0 is called the index (of nilpotence) of B. In this paper, we study sign patterns allowing nilpotence of index k and obtain four methods to construct sign patterns allowing nilpotence of index at most k, which generalizes some recent results.
Keywords: Sign pattern, Nilpotence, Jordan block.
919 A Fast Sign Localization System Using Discriminative Color Invariant Segmentation
Authors: G.P. Nguyen, H.J. Andersen
Abstract:
Building intelligent traffic guide systems has been an interesting subject recently. A good system should be able to observe all important visual information in order to analyze the context of the scene. To do so, signs in general, and traffic signs in particular, are usually taken into account, as they contain rich information for these systems. Therefore, many researchers have put effort into the sign recognition field. Sign localization, or sign detection, is the most important step in the sign recognition process. This step filters out non-informative areas in the scene and locates candidates for the later steps. In this paper, we apply a new approach to detecting sign locations using a new color invariant model. Experiments are carried out with different datasets introduced in other works, whose authors noted the difficulty of detecting signs under unfavorable imaging conditions. Our method is simple, fast, and most importantly it gives a high detection rate in locating signs.
Keywords: Sign localization, color-based segmentation.
918 Effect of Visual Speech in Sign Speech Synthesis
Authors: Zdenek Krnoul
Abstract:
This article investigates the contribution of synthesized visual speech. Synthesis of visual speech by a computer consists of animation, in particular of lip movements. Visual speech is also a necessary part of the non-manual component of a sign language. An appropriate methodology is proposed to determine the quality and accuracy of synthesized visual speech, and it is tested on Czech speech. This article therefore presents a procedure for recording speech data in order to set up the synthesis system as well as to evaluate the synthesized speech. Furthermore, one option of the evaluation process is elaborated in the form of a perceptual test. This test procedure is verified on the measured data with two settings of the synthesis system. The results of the perceptual test show a statistically significant increase in intelligibility evoked by both real and synthesized visual speech. The aim is to present one part of the evaluation process, which leads to a more comprehensive evaluation of the sign speech synthesis system.
Keywords: Perception test, Sign speech synthesis, Talking head, Visual speech.
917 Extracting Road Signs using the Color Information
Authors: Wen-Yen Wu, Tsung-Cheng Hsieh, Ching-Sung Lai
Abstract:
In this paper, we propose a method to extract road signs. Firstly, the captured image is converted into the HSV color space to detect the road signs. Secondly, morphological operations are used to reduce noise. Finally, the road sign is extracted using its geometric properties. Feature extraction of the road sign is done using the color information. The proposed method has been tested in real situations. The experimental results show that the proposed method can extract road sign features effectively.
Keywords: Color information, image processing, road sign.
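A minimal OpenCV sketch of the pipeline described above (HSV thresholding, morphological noise removal, geometric filtering); the HSV bounds and geometric limits are illustrative assumptions, not the authors' tuned values:

```python
import cv2
import numpy as np

# Hypothetical HSV range for red road signs; real thresholds would need tuning.
LOWER_RED = np.array([0, 100, 100])
UPPER_RED = np.array([10, 255, 255])

def extract_sign_regions(bgr_image):
    """Threshold in HSV, clean the mask with morphological opening/closing,
    and keep contours whose geometry (area, aspect ratio) resembles a sign."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    boxes = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 400 and 0.5 < w / float(h) < 2.0:  # simple geometric filter
            boxes.append((x, y, w, h))
    return boxes
```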
916 Sign Pattern Matrices that Admit P0 Matrices
Authors: Ling Zhang, Ting-Zhu Huang
Abstract:
A P0-matrix is a real square matrix all of whose principal minors are nonnegative. In this paper, we consider the class of P0-matrices. Our main aim is to determine which sign pattern matrices are admissible for this class of real matrices.
Keywords: Sign pattern matrices, P0 matrices, graph, digraph.
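The defining property can be checked numerically for a given real matrix; the sketch below tests whether every principal minor is nonnegative (a brute-force check for small matrices, separate from the sign-pattern question studied in the paper):

```python
import numpy as np
from itertools import combinations

def is_p0_matrix(A, tol=1e-9):
    """Check the defining property of a P0-matrix: every principal minor
    (determinant of a submatrix on the same row/column index set) is >= 0."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) < -tol:
                return False
    return True

print(is_p0_matrix([[1, 2], [0, 3]]))   # True: principal minors are 1, 3, and 3
print(is_p0_matrix([[-1, 0], [0, 1]]))  # False: the 1x1 principal minor -1 is negative
```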
915 3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques
Authors: Nicoletta Adamo-Villani
Abstract:
In this paper we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, in the study we compare two commonly used 3D computer animation methods (keyframe animation and motion capture) in order to ascertain which technique produces the most 'accurate', 'readable', and 'close to actual signing' (i.e. realistic) rendering of ASL finger-spelling. To accomplish this goal we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. 71 subjects aged 19-45 participated in the study. Results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences than in motion-captured ones). Further, findings showed that the animation method had an effect on the reported scores for readability and closeness to actual signing; the estimated marginal mean readability and closeness were greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study aimed at measuring and comparing the accuracy, readability, and realism of ASL animations produced with different techniques.
Keywords: 3D animation, American Sign Language, deaf education, motion capture.
914 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify feature vectors of 11 elements, obtained from the sensors on the glove, into one of the 27 ASL alphabet signs and a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems, and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant, and portable.
Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.
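A minimal sketch of the classification stage, assuming an 11-element sensor feature vector and using scikit-learn's Gaussian naive Bayes as a stand-in for the paper's linear Bayes classifier (the training data below is random and purely illustrative):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy training data: 11-element feature vectors (bend, orientation, and contact
# values) with letter labels; shapes and values are illustrative only.
X_train = np.random.rand(100, 11)
y_train = np.random.choice(list("ABC"), size=100)

clf = GaussianNB().fit(X_train, y_train)   # stand-in for the paper's Bayes classifier
print(clf.predict(np.random.rand(1, 11)))  # predicted letter for a new glove reading
```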
913 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English, which will help to bridge the communication problems encountered by the societies mentioned above. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning and Haar cascade detection algorithms. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist the detection process. Recognition is a process of gesture classification; template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
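A minimal OpenCV sketch of the skin-detection stage described above (color thresholding, largest contour, convex hull, and centroid); the YCrCb skin bounds are illustrative assumptions, and the original system uses EmguCV with Haar cascades and Canny pruning rather than this Python code:

```python
import cv2
import numpy as np

# Hypothetical YCrCb skin range; a real system would calibrate these bounds.
SKIN_LOW, SKIN_HIGH = np.array([0, 135, 85]), np.array([255, 180, 135])

def detect_hand(frame_bgr):
    """Skin-color segmentation followed by convex hull and centroid of the
    largest skin region, mirroring the detection stage described above."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    m = cv2.moments(hand)
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])) if m["m00"] else None
    return hull, centroid
```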
912 Solving Definition and Relation Problems in English Navigation Terminology
Authors: Ayşe Yurdakul, Eckehard Schnieder
Abstract:
Because of increasing multidisciplinarity and multilinguality, communication problems in different technical fields are growing. Each technical field therefore has its own specific language and terminology, which is characterized by differing definitions of terms. In addition to definition problems, there are also relation problems between terms, including synonymy, antonymy, hypernymy/hyponymy, ambiguity, risk of confusion, translation problems, etc.
Thus, the terminology management system iglos of the Institute for Traffic Safety and Automation Engineering of the Technische Universität Braunschweig aims to solve these problems through a methodological standardisation of term definitions with the aid of the iglos sign model and iglos relation types. The focus of this paper is on solving definition and relation problems between terms in English navigation terminology.
Keywords: Iglos, iglos sign model, methodological resolutions, navigation terminology, common language, technical language, positioning, definition problems, relation problems.
911 Segmentation of Korean Words on Korean Road Signs
Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon
Abstract:
This paper introduces an effective method of segmenting Korean text (place names in Korean) from a Korean road sign image. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of the visual information and extraction of Korean place names from road sign images make it possible to avoid a lot of manual input to a database system for managing road signs nationwide. We propose a series of problem-specific heuristics that correctly segment Korean place names, the most crucial information, from the other information by effectively leaving out non-text information. The experimental results on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.
Keywords: Segmentation, road signs, characters, classification.
910 Working Memory Capacity in Australian Sign Language (Auslan)/English Interpreters and Deaf Signers
Authors: Jihong Wang
Abstract:
Little research has examined working memory capacity (WMC) in signed language interpreters and deaf signers. This paper presents the findings of a study that investigated WMC in professional Australian Sign Language (Auslan)/English interpreters and deaf signers. Thirty-one professional Auslan/English interpreters (14 hearing native signers and 17 hearing non-native signers) completed an English listening span task and then an Auslan working memory span task, which tested their English WMC and their Auslan WMC, respectively. Moreover, 26 deaf signers (6 deaf native signers and 20 deaf non-native signers) completed the Auslan working memory span task. The results revealed a non-significant difference between the hearing native signers and the hearing non-native signers in their English WMC, and a non-significant difference between the hearing native signers and the hearing non-native signers in their Auslan WMC. Moreover, the results yielded a non-significant difference between the hearing native signers' English WMC and their Auslan WMC, and a non-significant difference between the hearing non-native signers' English WMC and their Auslan WMC. Furthermore, a non-significant difference was found between the deaf native signers and the deaf non-native signers in their Auslan WMC.
Keywords: Deaf signers, signed language interpreters, working memory capacity.
909 The Sign in the Communication Process
Authors: S. Pesina, T. Solonchak
Abstract:
In the process of information transmission (concept verbalization) we deal mostly with the substance (the contents), and only then pay attention to the form. When recalling events from the remote past, we often cannot exactly reproduce the specific words that were heard or pronounced, nor the syntactic structures. We remember events, feelings, and images; we recall the general contents of the discourse. A thought acquires a specific language form only during the concept verbalization phase. With minimal time for pondering, and depending on the level of language competence, grammatical and syntactic shaping often occurs automatically through the use of familiar models and stereotypes. This means that the language form adapts itself to consciousness, and not vice versa.
Keywords: Lexical eidos, phenomenology, noema, polysemantic word, semantic core.
908 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients
Authors: Mbainaibeye Jérôme, Noureddine Ellouze
Abstract:
The wavelet transform is a very powerful tool for image compression. One of its advantages is that it provides both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding their sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information indicating whether a wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara, and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and is shown to be very successful in terms of PSNR.
Keywords: Image compression, wavelet transform, sign coding, magnitude coding.
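A minimal PyWavelets sketch of the separation step, assuming the bior4.4 biorthogonal filter as a stand-in for the 9/7 filter bank and five decomposition levels; it only builds the sign and magnitude maps, not the authors' entropy coder:

```python
import numpy as np
import pywt

def sign_and_magnitude_maps(image):
    """Split wavelet coefficients into a sign map and a magnitude map so the
    two can be modelled and entropy coded separately, as described above."""
    coeffs = pywt.wavedec2(image, "bior4.4", level=5)   # 9/7-like biorthogonal filter, 5 scales
    arr, slices = pywt.coeffs_to_array(coeffs)
    sign_map = np.sign(arr).astype(np.int8)             # -1, 0, +1 per coefficient
    magnitude_map = np.abs(arr)
    return sign_map, magnitude_map, slices

sign_map, magnitude_map, _ = sign_and_magnitude_maps(np.random.rand(256, 256))
print(sign_map.shape, float(magnitude_map.max()))
```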
907 On the Relationship between Language Output and Second Language Acquisition
Authors: Haiyan Wang
Abstract:
Many researchers have discussed the importance of language input in second language acquisition. The author holds that the bigger problem lies in how to activate language learners' language knowledge and raise their language output consciousness and competence. Analyzing the importance of language output on the basis of theory and practice, this paper mainly explores the essence of language output and its implications for second language acquisition, in order to help second language learners genuinely raise their communicative competence.
Keywords: Language output, second language acquisition, communicative competence.
906 An Effective Noise Resistant FM Continuous-Wave Radar Vital Sign Signal Detection Method
Authors: Lu Yang, Meiyang Song, Xiang Yu, Wenhao Zhou, Chuntao Feng
Abstract:
To address the problem that human vital sign signals extracted by frequency-modulated continuous-wave (FMCW) radar are susceptible to noise interference and suffer from low reconstruction accuracy, a detection scheme for these signals is proposed. Firstly, an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) algorithm is applied to decompose the radar-extracted thoracic signals into several intrinsic mode functions (IMFs) at different scales, and then the IMF components are optimized by a backpropagation (BP) neural network improved by an immune genetic algorithm (IGA). The simulation results show that this scheme can effectively separate the noise, accurately extract the respiratory and heartbeat signals, and improve the reconstruction accuracy and signal-to-noise ratio of the vital sign signals.
Keywords: Frequency-modulated continuous-wave radar, ICEEMDAN, BP neural network, vital sign signals.
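A minimal sketch of the decomposition-and-selection idea on a synthetic chest-displacement signal, assuming the PyEMD package's CEEMDAN as a stand-in for ICEEMDAN and a simple dominant-frequency rule in place of the IGA-optimized BP network:

```python
import numpy as np
from PyEMD import CEEMDAN   # stand-in; ICEEMDAN itself may need a dedicated implementation

fs = 20.0                                     # assumed sampling rate of the phase signal (Hz)
t = np.arange(0, 30, 1 / fs)
chest = (0.5 * np.sin(2 * np.pi * 0.3 * t)    # synthetic respiration (~0.3 Hz)
         + 0.05 * np.sin(2 * np.pi * 1.2 * t) # synthetic heartbeat (~1.2 Hz)
         + 0.02 * np.random.randn(t.size))    # additive noise

imfs = CEEMDAN()(chest)                       # decompose into intrinsic mode functions

def dominant_freq(x):
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]

# Keep IMFs whose dominant frequency falls in typical respiration/heartbeat bands;
# the paper instead optimizes the IMF selection with an IGA-tuned BP network.
resp = sum(m for m in imfs if 0.1 <= dominant_freq(m) <= 0.5)
heart = sum(m for m in imfs if 0.8 <= dominant_freq(m) <= 2.0)
```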
905 Evaluation of Wind Fragility for Set Anchor Used in Sign Structure in Korea
Authors: WooYoung Jung, Buntheng Chhorn, Min-Gi Kim
Abstract:
Recently, damage to domestic facilities caused by strong winds and typhoons has been growing. Therefore, this study focused on sign structures among the various vulnerable facilities. The evaluation of wind fragility was carried out considering the failure of the anchor, which is one of the various failure modes of a sign structure. A performance evaluation of the anchor was carried out to derive the wind fragility. Two parameters were set and four anchor types were selected to perform the pull-out and shear tests. The resistance capacity was estimated based on the experimental results. Wind loads were estimated using the Monte Carlo simulation method. Based on these results, we derived the wind fragility according to anchor type and wind exposure category. Finally, the evaluation of the wind fragility was performed according to the experimental parameters, such as anchor length and anchor diameter. This study shows that the anchor embedment depth was more significant for the safety of the structure than the anchor diameter.
Keywords: Sign structure, wind fragility, set anchor, pull-out test, shear test, Monte Carlo simulation.
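A minimal sketch of the Monte Carlo fragility idea, assuming a lognormal anchor pull-out capacity fitted to test data and a drag-type wind load on the sign panel; all parameter values are illustrative, not those derived in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: lognormal anchor pull-out capacity (kN) around a
# median of 8 kN, and a drag-type wind load on a 1.5 m^2 sign panel.
capacity = rng.lognormal(mean=np.log(8.0), sigma=0.2, size=100_000)   # kN
area, drag_coeff, air_density = 1.5, 1.2, 1.225                       # m^2, -, kg/m^3

def fragility(wind_speed):
    """P(failure) at a wind speed: fraction of samples where load > capacity."""
    load_kn = 0.5 * air_density * wind_speed**2 * drag_coeff * area / 1000.0
    load = load_kn * rng.lognormal(mean=0.0, sigma=0.1, size=capacity.size)  # load uncertainty
    return float(np.mean(load > capacity))

for v in (40, 60, 80, 100):                  # wind speeds in m/s
    print(f"{v} m/s -> P(failure) = {fragility(v):.3f}")
```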