Search results for: lexical matching
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 721

601 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method for indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching or other geometric algorithms for sensor pose estimation inevitably produces cumulative error, which poses a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, improving the accuracy of camera pose estimation. Image data frames that conform to the Manhattan world assumption are given special processing; when similar data frames appear later, they can be used to eliminate drift in the sensor pose estimate, thereby reducing cumulative estimation error and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of those environments. This provides effective technical support for applications such as indoor navigation and robot control.
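
The abstract does not spell out how a Manhattan-conforming frame corrects drift. As a minimal sketch of the underlying idea (Python/numpy; the representation of rotations as 3x3 matrices and a known Manhattan frame M are assumptions for illustration, not the paper's algorithm), a drifting rotation estimate can be snapped to the nearest Manhattan-consistent rotation:

```python
import numpy as np

def nearest_rotation(A):
    """Project a near-rotation matrix back onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:                     # guard against reflections
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def snap_to_manhattan(R_est, M):
    """Given a drifting rotation estimate R_est and the Manhattan frame M
    (columns = the three dominant orthogonal scene directions), return the
    nearest Manhattan-consistent rotation."""
    rel = M.T @ R_est                # rotation relative to the Manhattan frame
    snapped = np.rint(rel)           # nearest signed permutation matrix
    return M @ nearest_rotation(snapped)
```

Rounding the relative rotation to a signed permutation matrix encodes the constraint that, under the Manhattan assumption, the camera axes must align with one of the scene's three dominant directions; replacing the estimate on conforming frames is what removes accumulated rotational drift.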

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loop closure detection

Procedia PDF Downloads 34
600 Reduplication in Dhiyan: An Indo-Aryan Language of Assam

Authors: S. Sulochana Singha

Abstract:

Dhiyan or Dehan is the name of the community and language spoken by the Koch-Rajbangshi people of the Barak Valley of Assam. Ethnically, they are Mongoloids, and their language belongs to the Indo-Aryan language family. However, Dhiyan is absent from existing classifications of Indo-Aryan languages, so its placement in the Indo-Aryan family rests entirely on the typological features it shares with other Indo-Aryan languages. Typologically, Dhiyan is an agglutinating language, and it shares many Indo-Aryan features, such as the presence of aspirated voiced stops, the absence of tone, verb-person agreement, adjectives as a distinct word class, prominent tense, and subject-object-verb word order. Reduplication is a productive word-formation process in Dhiyan; it also expresses plurality, intensification, and distribution. Generally, reduplication in Dhiyan operates at the morphological or the lexical level. Morphological reduplication in Dhiyan involves expressives, which include onomatopoeia, sound symbolism, ideophones, and imitatives. Lexical reduplication in the language is formed by echo formation and word reduplication. Echo formation in Dhiyan involves partial repetition of the base word, with either consonant alternation or vowel alternation; consonant alternation is basically found in onset position, while vowel alternation is basically found in open syllables, particularly the final syllable. Word reduplication involves the reduplication of nouns, interrogatives, adjectives, and numerals, and can be class-changing or class-maintaining. The process of reduplication can be partial or complete, whether lexical or morphological. The present paper is an attempt to describe some aspects of the formation, function, and usage of reduplication in Dhiyan, which is mainly spoken in ten villages on the eastern side of the Barak River in the Cachar District of Assam.

Keywords: Barak-Valley, Dhiyan, Indo-Aryan, reduplication

Procedia PDF Downloads 188
599 Effects of Reversible Watermarking on Iris Recognition Performance

Authors: Andrew Lock, Alastair Allen

Abstract:

Fragile watermarking has been proposed as a means of adding security or functionality to biometric systems, particularly for authentication and tamper detection. In this paper, we describe an experimental study on the effect of watermarking iris images with a particular class of fragile algorithms, reversible algorithms, on the ability to correctly perform iris recognition. We investigate two scenarios: matching watermarked images to unmodified images, and matching watermarked images to watermarked images. We show that different watermarking schemes give very different results for a given capacity, highlighting the importance of such investigation. At high embedding rates, most algorithms cause a significant reduction in recognition performance. However, in many cases, at low embedding rates, recognition accuracy is improved by the watermarking process.

Keywords: biometrics, iris recognition, reversible watermarking, vision engineering

Procedia PDF Downloads 421
598 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models; through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models whose well oil production rates (WOPR) are the most similar or dissimilar to the true values (10% each); the remaining 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve the correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results: it fails to identify the correct geological features of the true model. History matching with the regenerated ensemble, however, offers reliable characterization results by capturing the proper channel trend, and it gives dependable predictions of future performance with reduced uncertainties. The proposed classification scheme, integrating PCA, MDS, and SVM, can easily sort out reliable models which share the channel trend of the reference in a lower-dimensional space.
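
The pipeline enumerated above maps naturally onto standard tooling. The following sketch (Python with scikit-learn; the array shapes, component count, and labeling rule are illustrative assumptions, not the paper's exact settings) shows the PCA-MDS-SVM selection step:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

def select_models(perm, wopr_err, n_pc=10, n_label=20):
    """perm: (n_models, n_cells) permeability fields of the initial ensemble;
    wopr_err: (n_models,) WOPR mismatch against the observed data."""
    z = PCA(n_components=n_pc).fit_transform(perm)    # main geological trends
    xy = MDS(n_components=2).fit_transform(z)         # 2-D Euclidean projection
    order = np.argsort(wopr_err)
    labeled = np.r_[order[:n_label // 2], order[-n_label // 2:]]  # most similar/dissimilar
    y = (wopr_err[labeled] <= np.median(wopr_err)).astype(int)    # 1 = low error
    clf = SVC(kernel="rbf").fit(xy[labeled], y)
    rest = np.setdiff1d(np.arange(len(perm)), labeled)
    return rest[clf.predict(xy[rest]) == 1]           # models on the low-error side
```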

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 134
597 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out on the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to; they therefore appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own, without context, have semantic content associated with negative emotion, but which in particular cases function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
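
As a minimal sketch of the frequency step described above (Python; the corpus layout is an assumption, and the lexicon entries beyond borzasztóan are illustrative placeholders rather than the study's actual lexicon):

```python
from collections import Counter

# Lexicon of negative emotive intensifiers compiled in an earlier phase of
# the research; entries beyond 'borzasztóan' are illustrative placeholders.
LEXICON = {"borzasztóan", "rettenetesen", "szörnyen"}

def intensifier_frequencies(corpus_by_period):
    """corpus_by_period: {period: [lemma, lemma, ...]} from the lemmatized
    corpus; returns per-million frequencies of each intensifier per period."""
    freqs = {}
    for period, lemmas in corpus_by_period.items():
        counts = Counter(l for l in lemmas if l in LEXICON)
        freqs[period] = {w: 1e6 * c / len(lemmas) for w, c in counts.items()}
    return freqs
```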

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 196
596 Size-Reduction Strategies for Iris Codes

Authors: Jutta Hämmerle-Uhl, Georg Penn, Gerhard Pötzelsberger, Andreas Uhl

Abstract:

Iris codes contain bits with different entropy. This work investigates different strategies to reduce the size of iris code templates, with the aim of reducing storage requirements and computational demand in the matching process. Besides simple sub-sampling schemes, a binary multi-resolution representation as used in the JBIG hierarchical coding mode is also assessed. We find that iris code template size can be reduced significantly while maintaining recognition accuracy. In addition, we propose a two-stage identification approach, using small-sized iris code templates in a pre-selection stage and full-resolution templates for final identification, which shows promising recognition behaviour.
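
A minimal sketch of the two-stage idea (Python/numpy; representing codes as 1-D binary arrays, with the sub-sampling factor and shortlist size as illustrative assumptions):

```python
import numpy as np

def hamming(a, b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.count_nonzero(a != b) / a.size

def identify(probe, gallery, step=4, shortlist=10):
    """Stage 1: pre-select with sub-sampled codes; stage 2: match the
    shortlist at full resolution. Codes are 1-D binary numpy arrays."""
    d_small = [hamming(probe[::step], g[::step]) for g in gallery]
    candidates = np.argsort(d_small)[:shortlist]      # cheap pre-selection
    d_full = {i: hamming(probe, gallery[i]) for i in candidates}
    return min(d_full, key=d_full.get)                # final identification
```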

Keywords: iris recognition, compact iris code, fast matching, best bits, pre-selection identification, two-stage identification

Procedia PDF Downloads 414
595 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

The default accounts hold that there exists a kind of scalar implicature which can be processed without context and enjoys a psychological privilege over scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, Katsos' experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation in utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, two questions arise: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? Our Experiment 1 answers question (1) by replicating Katsos' Experiment 1: test materials will be shown in PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2): the pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab, with the reading time of the target parts, i.e., the words containing scalar implicatures, recorded. We presume that in the group with lexical scales, a standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative; thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. In the group with ad hoc scales, however, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale; thus, whether the new input is informative or not does not matter, and the reading time of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and context factors in scalar implicature processing. Based on our experiments, we may be able to assume that no single dominant processing paradigm is plausible; rather, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may interact dynamically in the mind. For the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let the possible default or standardized paradigm override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and the lexical-semantic associations of the objects may prevent their pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading take precedence and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 290
594 The Perception and Integration of Lexical Tone and Vowel in Mandarin-speaking Children with Autism: An Event-Related Potential Study

Authors: Rui Wang, Luodi Yu, Dan Huang, Hsuan-Chih Chen, Yang Zhang, Suiping Wang

Abstract:

Enhanced discrimination of pure tones but diminished discrimination of speech pitch (i.e., lexical tone) has been found in children with autism who speak a tonal language (Mandarin), suggesting a speech-specific impairment of pitch perception in these children. However, in tonal languages both lexical tone and vowel are phonemic cues and are integrally dependent on each other. It is therefore unclear whether the presence of the phonemic vowel dimension contributes to the observed lexical tone deficits in Mandarin-speaking children with autism. The current study employed a multi-feature oddball paradigm to examine how the vowel and tone dimensions contribute to the neural responses for syllable change detection and involuntary attentional orienting in school-age Mandarin-speaking children with autism. In the oddball sequence, the syllable /da1/ served as the standard stimulus. There were three deviant conditions, representing a tone-only change (TO, /da4/), a vowel-only change (VO, /du1/), and a simultaneous change of tone and vowel (TV, /du4/). EEG data were collected from 25 children with autism and 20 age-matched normal controls during passive listening to the stimuli. For each deviant condition, a difference waveform measuring the mismatch negativity (MMN) was derived by subtracting the ERP waveform to the standard sound from that to the deviant sound for each participant. Additionally, the linear summation of the TO and VO difference waveforms was compared to the TV difference waveform, to examine whether neural sensitivity for TV change detection reflects simple summation or nonlinear integration of the two individual dimensions. The MMN results showed that the autism group had smaller amplitudes than the control group in the TO and VO conditions, suggesting impaired discriminative sensitivity for both dimensions. In the control group, the amplitude of the TV difference waveform approximated the linear summation of the TO and VO waveforms only in the early time window, not in the late window, suggesting a time course from dimensional summation to nonlinear integration. In the autism group, however, nonlinear TV integration was already present in the early window. These findings suggest that the atypicality of speech perception in children with autism lies not only in the processing of single phonemic dimensions, but also in the dimensional integration process.
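
The MMN derivation and the summation test are simple array operations over averaged epochs; a minimal sketch (Python/numpy; the epoch layout is an assumption):

```python
import numpy as np

# erp[cond]: (n_trials, n_samples) epoched EEG for one participant;
# condition names follow the paradigm described above.
def mmn(erp, deviant):
    """Difference waveform: deviant ERP minus standard ERP."""
    return erp[deviant].mean(axis=0) - erp["standard"].mean(axis=0)

def summation_residual(erp):
    """TV minus (TO + VO): near zero wherever the TV response is the
    linear sum of the two single-dimension responses."""
    return mmn(erp, "TV") - (mmn(erp, "TO") + mmn(erp, "VO"))
```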

Keywords: autism, event-related potentials, mismatch negativity, speech perception

Procedia PDF Downloads 172
593 Dimension Free Rigid Point Set Registration in Linear Time

Authors: Jianqin Qu

Abstract:

This paper proposes a rigid point set matching algorithm for arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions computes a correspondence pair from the given point set, and the computed correspondences are used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension-free, without any optimization process: it either computes the desired transform for noiseless data in linear time or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
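
The paper's covariant functions supply the correspondence pairs; recovering the rigid transform from such pairs is the standard least-squares step, sketched here for any dimension (Python/numpy; this is the textbook SVD solution, not the paper's specific construction):

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R @ P + t,
    for d-dimensional correspondence pairs P, Q of shape (d, n)."""
    cp, cq = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
    H = (Q - cq) @ (P - cp).T               # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(len(H))
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # keep a proper rotation
    R = U @ D @ Vt
    return R, (cq - R @ cp).ravel()
```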

Keywords: covariant point, point matching, dimension free, rigid registration

Procedia PDF Downloads 143
592 Reduplication in Urdu-Hindi Nonsensical Words: An OT Analysis

Authors: Riaz Ahmed Mangrio

Abstract:

Reduplication in Urdu-Hindi affects all major word categories, particles, and even nonsensical words, and it conveys a variety of meanings, including distribution, emphasis, iteration, and adjectival and adverbial senses. This study primarily discusses the reduplicative structures of nonsensical words in Urdu-Hindi and then briefly looks at examples from other Indo-Aryan languages to introduce the debate regarding the same structures in them. The goal of this study is to present counter-evidence against Keane (2005: 241), who claims that "the base in the cases of lexical and phrasal echo reduplication is always independently meaningful". Urdu-Hindi reduplication, however, derives meaningful compounds from nonsensical words, e.g. gũ mgũ (A) 'silent and confused' and d̪əb d̪əb-a (N) 'one's fear over others'. This needs comprehensive examination to see whether and how the various structures form patterns of a base-reduplicant relationship, or whether they are merely sublexical items joining together to form a word pattern of some grammatical category of content words. Another interesting theoretical question arises within the Optimality framework: in an OT analysis, is it necessary to identify one of the two constituents as the base and the other as the reduplicant? Or is it better to consider this a pattern; but then how does such a pattern fit into an OT analysis? This may be an even more interesting theoretical question, and seeking answers to such questions can make an important contribution. In the case at hand, each of the two constituents is an independent nonsensical word, but their echo reduplication is nonetheless meaningful. This casts significant doubt upon Keane's (2005: 241) observation, drawn from examples of Hindi and Tamil reduplication, that "the base in cases of lexical and phrasal echo reduplication is always independently meaningful". The debate becomes even more interesting when the triplication of nonsensical words in Urdu-Hindi, e.g. aẽ baẽ ʃaẽ (N) 'useless talk', is considered, which is equally important to discuss. This example challenges Harrison's (1973) claim that only monosyllabic verbs in their progressive forms reduplicate twice to result in triplication, which is not the case in the example presented. The study consists of a thorough descriptive analysis of the data for the purpose of documentation, followed by an OT analysis.

Keywords: reduplication, Urdu-Hindi, nonsensical words, Optimality Theory

Procedia PDF Downloads 40
591 Investigating Translations of Websites of Pakistani Public Offices

Authors: Sufia Maroof

Abstract:

This empirical study investigated the web translations of five Pakistani public offices (FPSC, FIA, HEC, USB, and the Ministry of Finance) offering an Urdu tab as an option to access information on their official websites. A triangulation of quantitative and qualitative research designs informed the researcher of the semantic, lexical, and syntactic caveats in these translations. The study hypothesized that the majority of the Pakistani population is oblivious to the Supreme Court's amendments to the language policy concerning the national and official language; hence, the Urdu web translations of the public departments have not been accessed effectively. Firstly, the researcher conducted an online survey comprising two sections: close-ended questions and short-answer questions. Secondly, the researcher compiled a corpus of the five selected websites in tabular form to compare the data. Thirdly, the administrators of the departments were contacted regarding the methods of translation and the expertise of the personnel involved. The corpus was assessed for translation quality assessment (TQA) by examining lexical, semantic, syntactic, and technical alignment inaccuracies and imperfections. The study suggests that the public offices invest in their Urdu web pages, either by hiring expert translators or by engaging the expertise of a translation agency, in order to offer quality translations to the public.

Keywords: machine translations, public offices, Urdu translations, websites

Procedia PDF Downloads 98
590 Contentious Issues Concerning the Methodology of Using the Lexical Approach in Teaching ESP

Authors: Elena Krutskikh, Elena Khvatova

Abstract:

In tertiary settings, expanding students' vocabulary and teaching discursive competence are seen as chief goals of a professional development course. However, such a focus is often detrimental to students' cognitive competences, such as analysis, synthesis, and creative processing of information, and deprives students of motivation for self-improvement and self-development of language skills. The paper argues that in an ESP course special attention should be paid to reading and listening, which can promote understanding and use of the language as a tool for solving significant real-world problems, including professional ones. It is claimed that the learning process must maintain a balance between the content and the linguistic aspects of the educational process, as language acquisition is inextricably linked with mental activity, and the need to express oneself is a primary stimulus for using a language. A study conducted among undergraduates indicates that they place a premium on quality materials that motivate them and stimulate their further linguistic and professional development. Thus, more demands are placed on study materials, which should contain new information for students and serve not only as a source of new vocabulary but also as preparation for real tasks related to professional activities.

Keywords: critical reading, english for professional development, english for specific purposes, high order thinking skills, lexical approach, vocabulary acquisition

Procedia PDF Downloads 139
589 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks: for example, the translucent structure and irregular surfaces of teeth lead to measurement defects, and results acquired by devices with different measurement principles may be inconsistent. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided device, has developed rapidly, and advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices; digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was recommended for comparing the color of shade tabs captured by a digital camera using color features; it showed that visual and computer-aided shade matching systems should be used in combination. More recent feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometric changes, and variations in image quality. Accordingly, in the proposed method, color-based local feature extraction is used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (kNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching; in this study, SVMs are used as classifiers for color determination and shade matching. Experimental results of this method are compared with other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination.
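
A minimal sketch of the descriptor construction described above (Python with OpenCV and scikit-learn; the keypoint count, patch size, histogram bins, and the mean pooling are illustrative assumptions — the paper itself quantizes the feature vectors before classification):

```python
import cv2
import numpy as np
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def color_sift_features(bgr, n_kp=50, bins=8):
    """Concatenate SIFT shape descriptors with local color histograms;
    mean pooling here stands in for the paper's quantization step."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(128 + bins ** 3)
    feats = []
    for kp, d in zip(kps[:n_kp], desc[:n_kp]):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        r = max(int(kp.size), 2)
        patch = np.ascontiguousarray(bgr[max(y - r, 0):y + r, max(x - r, 0):x + r])
        hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                            [0, 256] * 3).ravel()
        feats.append(np.r_[d, hist / (hist.sum() + 1e-9)])
    return np.mean(feats, axis=0)

# X = [color_sift_features(img) for img in shade_tab_images]; y = shade labels
# clf = SVC(kernel="rbf").fit(X, y); clf.predict(...) for shade matching
```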

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 403
588 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (natural language processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric, and referential ambiguity. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing: the lemma adds generality, and the POS adds word-class properties to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of the target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the higher value becomes the sense of the target word. Contextual tokens thus play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explication, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
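
A much-simplified sketch of the two core steps (Python; the co-occurrence window and the representation of senses by cluster anchor tokens are simplifying assumptions — the paper builds its affinity matrix and sense clusters from SemCor):

```python
from collections import defaultdict

def build_affinity(sentences, window=3):
    """Co-occurrence affinity between lemma_POS tokens, a simplified
    reading of the affinity matrix described above."""
    aff = defaultdict(float)
    for toks in sentences:                     # toks: ["bank_NOUN", ...]
        for i, t in enumerate(toks):
            for u in toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]:
                aff[(t, u)] += 1.0
    return aff

def predict_sense(sense_anchors, context, aff):
    """Each contextual token votes for a sense with its affinity value;
    senses are represented here by one anchor token per cluster."""
    scores = {s: sum(aff.get((s, c), 0.0) for c in context) for s in sense_anchors}
    return max(scores, key=scores.get)
```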

Keywords: Word Sense Disambiguation (WSD), Contextual SenSe Model (CSM), Most Frequent Sense (MFS), Part of Speech (POS), Natural Language Processing (NLP), OOV (out of vocabulary), lemma_POS (a token where the lemma and POS of a word are joined by an underscore), Information Retrieval (IR), Machine Translation (MT)

Procedia PDF Downloads 66
587 Matching on Bipartite Graphs with Applications to School Course Registration Systems

Authors: Zhihan Li

Abstract:

Nowadays, most universities use course enrollment systems that consider students' registration order. However, students' degree of preference for certain courses is also an important factor to consider. In this research, the possibility of applying a preference-first system is discussed and analyzed in comparison with the order-first system. A bipartite graph is applied to represent the relationship between students and the courses they intend to register for. With the graph set up, we apply the Ford-Fulkerson (F.F.) algorithm to maximize pairings between the two sets of nodes, in our case students and courses. Two models are proposed in this paper: one that considers students' order first, and one that considers students' preference first. By comparing and contrasting the two models, we highlight their usability, which potentially leads to better designs for school course registration systems.
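
On unit-capacity bipartite graphs, Ford-Fulkerson reduces to repeatedly searching for augmenting paths. A minimal sketch of a preference-first variant (Python; the course capacities and the student processing order are assumptions for illustration, not the paper's exact models):

```python
def max_matching(prefs, n_courses, capacity):
    """Augmenting-path matching (the unit-capacity core of Ford-Fulkerson).
    prefs[s] lists course indices in student s's preference order."""
    seats = [[] for _ in range(n_courses)]      # course -> enrolled students
    match = {}                                  # student -> course

    def try_place(s, seen):
        for c in prefs[s]:
            if c in seen:
                continue
            seen.add(c)
            if len(seats[c]) < capacity[c]:     # free seat: take it
                seats[c].append(s); match[s] = c
                return True
            for rival in seats[c]:              # full: try to re-seat a rival
                if try_place(rival, seen):      # (this is the augmenting path)
                    seats[c].remove(rival)
                    seats[c].append(s); match[s] = c
                    return True
        return False

    for s in range(len(prefs)):                 # process students by priority
        try_place(s, set())
    return match

# e.g. max_matching([[0, 1], [0], [1, 0]], n_courses=2, capacity=[1, 2])
```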

Keywords: bipartite graph, Ford-Fulkerson (F.F.) algorithm, graph theory, maximum matching

Procedia PDF Downloads 87
586 Discovering Word-Class Deficits in Persons with Aphasia

Authors: Yashaswini Channabasavegowda, Hema Nagaraj

Abstract:

Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited for the study. The research includes both behavioural and objective measures to assess word-class deficits in PWA. Objective: The main objective of the research is to identify the word-class deficits seen in persons with aphasia using various speech-eliciting tasks. Method: The study was conducted in the participants' L1, Kannada. The action naming test and the Boston naming test, adapted to Kannada, were administered to the participants, and a picture description task was carried out. The picture-word matching task was carried out using E-Prime software (version 2) to measure accuracy and reaction time in the identification of verbs and nouns; the stimuli were presented in auditory and visual modes. Data were analysed to identify errors in the naming of nouns versus verbs on the Boston naming test and the action naming test, as well as the usage of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed significantly different sentence structure compared to age-matched NTI. PWA also showed impairment in syntactic measures in the picture description task, with fewer correct grammatical sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns than verbs. PWA had poorer accuracy and lower reaction time in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were noticed irrespective of the cause leading to aphasia.

Keywords: nouns, verbs, aphasia, naming, description

Procedia PDF Downloads 76
585 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

With the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper presents a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed in the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the recognition rate of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
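
The NCC criterion itself is a one-call operation in common imaging libraries; a minimal sketch (Python with OpenCV; the zero-mean NCC variant and the detection threshold are assumptions, and the paper's OBIA partitioning is not reproduced here):

```python
import cv2
import numpy as np

def ncc_detect(scene_gray, template_gray, threshold=0.8):
    """Template matching with zero-mean normalized cross-correlation;
    returns (x, y, score) for top-left corners above the threshold."""
    score = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(score >= threshold)
    return [(int(x), int(y), float(score[y, x])) for x, y in zip(xs, ys)]
```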

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 223
584 The Presence of Anglicisms in Italian Fashion Magazines and Fashion Blogs

Authors: Vivian Orsi

Abstract:

The present research investigates the lexicon of a fashion magazine, a universe very receptive to lexical loans, especially those from English, called Anglicisms. Specifically, we discuss the presence of English items and expressions in the Vogue Italia fashion magazine. In addition, we study the Anglicisms used in an Italian fashion blog called The Blonde Salad. Within the discussion of fashion blogs and their contributions to scientific studies, we adopt the theories of lexicology and lexicography to define Anglicism (BIDERMAN, 2001) and to observe its prestige in the Italian language (ROGATO, 2008; BISETTO, 2003). On this theoretical basis, we present a brief analysis of the Anglicisms collected from posts from the blog's first year of existence, also emphasizing the keywords that encapsulate the content of the text and allow the reader to retrieve information from a blog post. Regarding the use of English in Italian magazines and blogs, it seems to represent sophistication, assuming the value of a prerequisite for participating in the fashion centers of the world. Moreover, we believe, as Barthes says (1990, p. 215), that "Fashion does not evolve, it changes: its lexicon is new each year, like that of a language which always keeps the same system but suddenly and regularly 'changes' the currency of its words". Fashion is a mode of communication: it is present in man's interaction with the world, which means that this lexical universe is represented according to the particularities of each culture.

Keywords: anglicism, lexicology, magazines, blogs, fashion

Procedia PDF Downloads 301
583 Aspects of Semantics of Standard British English and Nigerian English: A Contrastive Study

Authors: Chris Adetuyi, Adeola Adeniran

Abstract:

The concept of meaning becomes complex in language study when cultural features are added, and this is unavoidable because language cannot be completely separated from culture; rather, language and culture complement each other. When two varieties of a language function side by side in a speech community, there is a tendency to weigh one variety against the other, and there is therefore a need for a comparative linguistic study of such varieties. In this paper, a semantic contrastive study is made between Standard British English (SBE) and Nigerian English (NE). The semantic study is limited to certain aspects of semantics: semantic extension (kinship terms, metaphors), semantic shift (the lexical items considered are 'drop', 'befriend', 'dowry', and 'escort'), acronyms (NEPA, JAMB, NTA), linguistic borrowing or loan words (Seriki, Agbada, Eba, Dodo, Iroko), and coinages (long leg, bush meat, bottom power, and juju). In the study of these aspects of the semantics of SBE and NE lexical terms, contrastive statements are made, and problem areas and a hierarchy of difficulties are highlighted with a view to bringing out the areas of difference. The study will also serve as a guide for further contrastive studies in other areas of language.

Keywords: aspect, British, English, Nigeria, semantics

Procedia PDF Downloads 319
582 Parameters of Minimalistic Mosque in India within Minimalism

Authors: Hafila Banu

Abstract:

Minimalism is a postmodern style movement which emerged in the 1950s and grew rapidly during the 1960s and 1970s. Minimalism is defined as the concept of minimizing distractions from what is truly valuable or essential. On the same grounds, works of minimalism offer a direct view of the subject and raise questions about its true nature, inviting the viewer to consider it for its real shape, a thought, a movement, reminding us to focus on what is really important. The architecture of minimalism is characterized by an economy of materials, focusing on building quality with consideration for 'essences' such as light, form, detail of material, texture, space and scale, place, and human conditions. This paper is mainly concerned with the basis for designing a minimalistic mosque in India, analysing the parameters for the design drawn from the matching characteristics of Islamic architecture, specifically of the mosque, and of minimalism. The paper is therefore about mosque architecture and minimalism, their underlying principles, matching characteristics, and design goals.

Keywords: Islamic architecture, minimalism, minimalistic mosque, mosque in India

Procedia PDF Downloads 173
581 Revisiting the Swadesh Wordlist: How Long Should It Be?

Authors: Feda Negesse

Abstract:

One of the most important indicators of research quality is a good data-collection instrument that can yield reliable and valid data. The Swadesh wordlist has been used for more than half a century for collecting data in comparative and historical linguistics, though arbitrariness is observed in its application and size. This research compares the classification results of the 100-item Swadesh wordlist with those of its subsets to determine whether reducing the size of the wordlist impacts its effectiveness. In the comparison, the 100-, 50-, and 40-item wordlists were used to compute the lexical distances of 29 Cushitic and Semitic languages spoken in Ethiopia and neighbouring countries. Gabmap, a web-based application, was employed to compute the lexical distances and to divide the languages into related clusters. The study shows that the subsets are not as effective as the 100-item wordlist in clustering languages into smaller subgroups, but they are equally effective in dividing languages into bigger groups such as subfamilies. It is noted that the subsets may lead to an erroneous classification whereby unrelated languages form a cluster by chance which is not attested by a comparative study; the chance of a wrong result is higher when the subsets are used to classify languages which are not closely related. Though further study is still needed to settle the issues around the size of the Swadesh wordlist, this study indicates that the 50- and 40-item wordlists cannot be recommended as reliable substitutes for the 100-item wordlist under all circumstances. The choice seems to be determined by the objective of the researcher and the degree of affiliation among the languages to be classified.
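
A minimal sketch of the distance-and-clustering step (Python with numpy/scipy; Gabmap's own pipeline differs in detail, and the cognacy coding and average linkage here are assumptions for illustration). Comparing the 100-item list with a 40-item subset then amounts to slicing the item columns before computing distances:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def lexical_distances(cognates):
    """cognates: (n_languages, n_items) integer cognate-class codes, one
    column per Swadesh item. Distance = share of items that disagree."""
    n = len(cognates)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.mean(cognates[i] != cognates[j])
    return D

def classify(cognates, n_groups):
    Z = linkage(squareform(lexical_distances(cognates)), method="average")
    return fcluster(Z, n_groups, criterion="maxclust")

# comparing wordlist sizes = slicing columns, e.g. classify(cognates[:, :40], 5)
```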

Keywords: classification, Cushitic, Swadesh, wordlist

Procedia PDF Downloads 271
580 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification

Authors: Zin Mar Lwin

Abstract:

Brain Computer Interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment; BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses a type of signal called the electroencephalogram (EEG) signal, which is recorded from the human brain by means of electrodes. The EEG signal is an important information source for understanding brain processes in non-invasive BCI. To translate human thought, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system, evaluated on a dataset from Purdue University. The Independent Component Analysis (ICA) method, via EEGLAB tools, is used for removing artifacts caused by eye blinks. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the Matching Pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class Support Vector Machine (SVM). For the SVMs, comparisons have been carried out for both 1-against-1 and 1-against-all methods.
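
At its core, matching pursuit greedily decomposes a signal over a redundant dictionary (for EEG, typically Gabor time-frequency atoms). A minimal sketch (Python/numpy; unit-norm dictionary columns are assumed), whose (atom index, coefficient) features would then be fed to the SVM:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each step pick the dictionary atom
    (columns, unit norm) best correlated with the current residual."""
    residual = signal.astype(float)
    features = []                       # (atom index, coefficient) pairs
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        features.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[:, k]
    return features, residual
```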

Keywords: BCI, EEG, ICA, SVM

Procedia PDF Downloads 248
579 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crime committed in the digital environment. Therefore, digital forensics, which deals with the various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting it to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner; moreover, because the outcome depends on the examiner's experience, overall results may vary between cases, and elements may be overlooked. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human error.
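
A minimal sketch of the block-hashing idea (Python; the block size and the choice of SHA-256 are assumptions for illustration — the study's actual hash-list format is not specified here):

```python
import hashlib

BLOCK = 4096  # bytes per block; an illustrative choice

def block_hashes(path):
    """SHA-256 hash of every fixed-size block of an evidence image."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def match_against(evidence_path, known_hashes):
    """Flag blocks whose hashes occur in a list of known crime-related blocks."""
    known = set(known_hashes)
    return [(i, h) for i, h in enumerate(block_hashes(evidence_path)) if h in known]
```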

Keywords: block matching, digital evidence, hash list, evaluation of digital evidence

Procedia PDF Downloads 228
578 A POX Controller Module to Prepare a List of Flow Header Information Extracted from SDN Traffic

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a paradigm designed to facilitate controlling the network dynamically and with more agility. Network traffic is a set of flows, each of which contains a set of packets. In SDN, a matching process is performed in the SDN switch on every packet coming into the network; only the headers of new packets are forwarded to the SDN controller. In terminology, the flow header fields are called tuples. Basically, these form a 5-tuple: the source and destination IP addresses, the source and destination ports, and the protocol number. This flow information is used to provide an overview of the network traffic. Our module extracts this 5-tuple, together with the packet and flow counts, and presents them as a list. This list can therefore be used as a first step towards detecting a DDoS attack, and the module can be considered the initial stage of any flow-based DDoS detection method.
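
A minimal sketch of such a module (Python for the POX controller; the handler registration and packet accessors follow POX's commonly documented API, but treat the exact names as assumptions rather than the paper's code):

```python
from pox.core import core

log = core.getLogger()
flows = {}   # 5-tuple -> packet count

def _handle_PacketIn(event):
    packet = event.parsed
    ip = packet.find('ipv4')
    if ip is None:
        return                                   # non-IP traffic is ignored
    l4 = packet.find('tcp') or packet.find('udp')
    sport, dport = (l4.srcport, l4.dstport) if l4 else (0, 0)
    key = (str(ip.srcip), str(ip.dstip), sport, dport, ip.protocol)
    flows[key] = flows.get(key, 0) + 1           # per-flow packet count

def list_flows():
    """The flow header list: one row per 5-tuple, plus packet and flow counts."""
    for tup, pkts in sorted(flows.items()):
        log.info("%s packets=%d", tup, pkts)
    log.info("total flows=%d", len(flows))

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```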

Keywords: matching, OpenFlow tables, POX controller, SDN, table-miss

Procedia PDF Downloads 169
577 Learning Physics Concepts through Language Syntagmatic Paradigmatic Relations

Authors: C. E. Laburu, M. A. Barros, A. F. Zompero, O. H. M. Silva

Abstract:

This work presents a teaching strategy that employs syntagmatic and paradigmatic linguistic relations in order to monitor physics students' understanding of concepts. Syntagmatic and paradigmatic relations are theoretical elements of semiotic studies, and our research situates and justifies them within the research program of multi-modal representations. Among the multi-modal representations used for learning scientific knowledge, the scope of action of syntagmatic and paradigmatic relations belongs to the written discursive form. The use of such relations aims to innovate didactic work with discourse representation in written form before translating it into a different representational form. The research was conducted with a sample of first-year high school students, who were asked to produce syntagmatic and paradigmatic relations of the statement of Newton's first law. The statement was delivered on paper to each student, who individually wrote the relations, and the students' records were collected for analysis. In one student, used here as an example, it was observed that the moneme substitutions and rearrangements produced by, respectively, paradigmatic and syntagmatic relations kept the original meaning of the law. In the paradigmatic production, the student specified the relevant significant units of the linguistic signs, the monemes, which constitute the first articulation, and each substituted word kept equivalence to the original meaning of the original moneme. It was also noted that many diverse monemes were chosen, with a balanced combination of grammatical monemes (a grammatical moneme is one that changes the meaning of a word in certain positions of the syntagma, along with a relatively small number of other monemes; it is the smallest linguistic unit that has grammatical meaning) and lexical monemes (a lexical moneme belongs to unlimited inventories and is endowed with lexical meaning). In the syntagmatic production, the ordering of monemes was syntactically coherent, with semantic conservation and preserved number. In general, the results showed that the written representation mode based on paradigmatic and syntagmatic linguistic relations qualifies for classroom use as a potential identifier and tracker of the meanings students acquire in the process of scientific inquiry.

Keywords: semiotics, language, high school, physics teaching

Procedia PDF Downloads 106
576 An Electrically Small Silver Ink Printed FR4 Antenna for RF Transceiver Chip CC1101

Authors: F. Majeed, D. V. Thiel, M. Shahpari

Abstract:

An electrically small meander line antenna is designed for impedance matching with the RF transceiver chip CC1101. The design provides the flexibility of tuning the reactance of the antenna over a wide range of values, from highly capacitive to highly inductive. The antenna was printed with silver ink on an FR4 substrate using a screen printing process. The antenna impedance was perfectly matched to the CC1101 at 433 MHz. The measured radiation efficiency of the antenna was 81.3% at resonance, and the 3 dB and 10 dB fractional bandwidths of the antenna were 14.5% and 4.78%, respectively. The read range of the antenna was compared with that of a copper wire monopole antenna over a distance of five meters. With a perfect impedance match to the RF transceiver chip CC1101, the antenna shows an improved read range compared to the monopole antenna over the specified distance.
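
Match quality to a complex chip impedance is commonly judged by the power-wave reflection coefficient. A minimal sketch (Python; the numeric impedances below are placeholders for illustration, not the paper's measured values for the CC1101):

```python
import numpy as np

def return_loss_db(z_ant, z_chip):
    """Power-wave reflection coefficient magnitude in dB; a perfect
    conjugate match (z_ant = conj(z_chip)) drives this toward -inf."""
    gamma = (z_ant - np.conj(z_chip)) / (z_ant + z_chip)
    return 20 * np.log10(abs(gamma))

# placeholder impedances for illustration only
print(return_loss_db(80 - 40j, 86.5 + 43j))   # near-conjugate match: about -27 dB
```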

Keywords: meander line antenna, RFID, silver ink printing, impedance matching

Procedia PDF Downloads 237
575 OrganisationMatcher: An Organisation Ranking System for Student Placement Using Preference Weights

Authors: Nor Sahida Ibrahim, Ruhaila Maskat, Aishah Ahmad

Abstract:

Almost all tertiary-level students undergo some form of training in organisations prior to their graduation. This practice provides the exposure and experience necessary for students to cope with an actual working environment and culture in the future. Nevertheless, the degree of 'matching' between what is expected and what can be offered by students and organisations underpins how effective and enriching the experience is, and this matching of students and organisations is challenging when the preferences of both parties must be satisfied. This work developed a web-based system, OrganisationMatcher, which leverages preference weights to score each organisation and rank them by 'suitability'. OrganisationMatcher is built on a relational database, designed using object-oriented methods, and developed in the PHP programming language for browser front-end access. We outline the challenges and limitations of our system and discuss future improvements, specifically the utilisation of intelligent methods.
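
The scoring idea is a weighted sum over rated criteria. A minimal sketch (Python rather than the system's PHP, for brevity; the criterion names, ratings, and weights are hypothetical):

```python
def rank_organisations(orgs, weights):
    """Score each organisation by a weighted sum of its criterion ratings
    and rank by 'suitability' (highest score first)."""
    def score(org):
        return sum(weights[c] * org["ratings"][c] for c in weights)
    return sorted(orgs, key=score, reverse=True)

orgs = [
    {"name": "Alpha Sdn Bhd", "ratings": {"location": 4, "allowance": 2, "field": 5}},
    {"name": "Beta Corp",     "ratings": {"location": 2, "allowance": 5, "field": 4}},
]
weights = {"location": 0.2, "allowance": 0.3, "field": 0.5}  # student preference weights
for org in rank_organisations(orgs, weights):
    print(org["name"])
```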

Keywords: student industrial placement, information system, web-based, ranking

Procedia PDF Downloads 252
574 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, with a total of 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a sequential unit not bound by grammar, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may tax cognitive resources differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and the coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure of the input in SI makes interpreters keep only a small chunk of information in the focus of attention; thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand: they cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
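
Two of the indices above are easy to make concrete. A minimal sketch (Python; the toy sentence and head indices are illustrative, not from the corpus):

```python
def type_token_ratio(tokens):
    """Lexical variety: distinct word forms over total word forms."""
    return len(set(tokens)) / len(tokens)

def mean_dependency_distance(heads):
    """heads[i] = index of token i's governor (0-based; -1 for the root).
    Dependency distance = |position of dependent - position of head|."""
    pairs = [(i, h) for i, h in enumerate(heads) if h >= 0]
    return sum(abs(i - h) for i, h in pairs) / len(pairs)

print(type_token_ratio("the cat saw the dog".split()))   # 0.8
print(mean_dependency_distance([1, 2, -1, 4, 2]))        # 1.25 for this toy parse
```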

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 141
573 Topic Prominence and Temporal Encoding in Mandarin Chinese

Authors: Tzu-I Chiang

Abstract:

A central question for the finite-nonfinite distinction in Mandarin Chinese is how Mandarin encodes temporal information without a grammatical contrast between past and present tense. Moreover, how do L2 learners of Mandarin, whose native language is English and whose L1 system has tense morphology, acquire the temporal encoding system in L2 Mandarin? The current study reports preliminary findings on the relationship between topic prominence and temporal encoding in L1 and L2 Chinese. Oral narrative data from 30 native speakers and learners of Mandarin Chinese were collected via a film-retell task. In terms of coding, predicates collected from the narratives were transcribed and then coded into four major verb types: n-degree statives (quality-STA), point-scale statives (status-STA), n-atom events (ACT), and point events (resultative-ACT). How native and non-native speakers started retelling the story was calculated. The results show that native speakers of Chinese tend to express Topic Time (TT) syntactically at the topic position, whereas L2 learners of Chinese across levels rely mainly on the default time encoded in the event types. Moreover, as the proficiency level of the learners increases, their appropriate use of the event predicates increases, which supports the argument that the L2 development of temporal encoding is affected by lexical aspect.

Keywords: topic prominence, temporal encoding, lexical aspect, L2 acquisition

Procedia PDF Downloads 171
572 12x12 MIMO Terminal Antennas Covering the Whole LTE and WiFi Spectrum

Authors: Mohamed Sanad, Noha Hassan

Abstract:

A broadband resonant terminal antenna has been developed. It can be used in different MIMO arrangements, such as 2x2, 4x4, 8x8, or even 12x12 configurations. The antenna covers the whole LTE and WiFi bands besides the existing 2G/3G bands (700-5800 MHz), without using any matching or tuning circuits. Matching circuits significantly reduce the efficiency of any antenna and shorten battery life; they also reduce the bandwidth because they are frequency dependent. The antenna can be implemented in smartphone handsets, tablets, laptops, notebooks, or any other terminal, and it is also suitable for different IoT and vehicle applications. The antenna is manufactured from a flexible material and can be bent or folded and shaped into any form to fit the available space in any terminal. It is self-contained and does not need the ground plane, the chassis, or any other component of the terminal; hence, it can be mounted on any terminal at different positions and configurations. Its performance is not affected by the terminal, regardless of its type, shape, or size, nor by the body of the terminal's user. Because of all these unique features, multiple such antennas can be used simultaneously for MIMO diversity coverage in any terminal device, with high isolation and a low correlation factor between them.
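
Isolation and correlation between MIMO elements are commonly summarized by the envelope correlation coefficient computed from S-parameters under the standard lossless-antenna approximation. A minimal sketch (Python; this is the textbook formula, not a result from the paper):

```python
import numpy as np

def envelope_correlation(S11, S21, S12, S22):
    """Envelope correlation coefficient of a two-port MIMO antenna pair
    from complex S-parameters (lossless-antenna approximation)."""
    num = abs(np.conj(S11) * S12 + np.conj(S21) * S22) ** 2
    den = ((1 - abs(S11) ** 2 - abs(S21) ** 2)
           * (1 - abs(S22) ** 2 - abs(S12) ** 2))
    return num / den
```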

Keywords: IoT, LTE, MIMO, terminal antenna, WiFi

Procedia PDF Downloads 161