Search results for: morpho-semantic and syntactic analysis
27850 Modeling False Statements in Texts
Authors: Francielle A. Vargas, Thiago A. S. Pardo
Abstract:
According to the standard philosophical definition, lying is saying something that you believe to be false with the intent to deceive. For deception detection, the FBI trains its agents in a technique named statement analysis, which attempts to detect deception based on parts of speech (i.e., linguistic style). This method is employed in interrogations, where the suspects are first asked to make a written statement. In this poster, we model false statements using linguistic style. To achieve this, we methodically analyze linguistic features in a corpus of fake news in the Portuguese language. The results show that false statements present substantial lexical, syntactic and semantic variations, as well as distinctive punctuation and emotion patterns.
Keywords: deception detection, linguistics style, computational linguistics, natural language processing
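As a rough illustration of statement analysis over surface linguistic features, the following Python sketch counts a few style cues in a short Portuguese statement. The cue lists are hypothetical stand-ins, not the feature set used in the study:

```python
def surface_features(text):
    """Count simple surface cues of the kind examined in statement
    analysis; this feature set (first-person pronouns, exclamations,
    emotion words) is illustrative, not the one used in the study."""
    words = [t.strip(".,!?;:") for t in text.lower().split()]
    emotion_lexicon = {"medo", "raiva", "alegria"}   # hypothetical Portuguese emotion words
    first_person = {"eu", "meu", "minha", "me"}
    return {
        "n_words": len(words),
        "first_person": sum(w in first_person for w in words),
        "exclamations": text.count("!"),
        "emotion": sum(w in emotion_lexicon for w in words),
    }

feats = surface_features("Eu nunca vi esse documento! Meu medo era real!")
```

In a real study such counts would be normalized by text length and compared across true and false statements.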
Procedia PDF Downloads 218
27849 Design and Landscape Architecture in the Vernacular Housing of Algiers
Authors: Leila Chebaiki-Adli, Naima Chabbi-Chemrouk
Abstract:
In the Algiers context, the historical city (the old medina) was surrounded in the Middle Ages by several residences and gardens, built as retreats for the hot days of the year. Among them, the AbdelTif residence and the gardens of the Dey (both still extant) exhibit important features that increase interior comfort. The know-how they embody is again in vogue and offers several lessons for architectural design and landscape architecture. Their particularity lies in the interaction between building and garden and in their design solutions, which let the user experience vegetation, sky and water from most places in the constructions. On the basis of an aesthetic-tectonic approach, which highlights the architectural criteria of the two case studies quoted (the AbdelTif residence and the gardens of the Dey), the proposed paper explains some important characteristics and design solutions that contribute strongly to the concretisation and materialisation of a landscape architecture, and which can be used throughout the Mediterranean area. The proposed aesthetic-tectonic approach is based on the fusion between interior and exterior, with the aim of distinguishing syntactic criteria. These criteria correspond to: the composition and articulation of interior and exterior spaces, the materials employed in those spaces, and the processes of their manifestation. The major finding of this study is the identification of paradigmatic processes related to architectural design. These processes reveal a more figurative (direct) than expressive (indirect) way of design and creativity: while the figurative way benefits from a high level of manifestation, the expressive one relies on more composed and articulated materials.
Keywords: aesthetic/tectonic approach, Algiers context, design, landscape architecture
Procedia PDF Downloads 404
27848 A Syntactic Approach to Applied and Socio-Linguistics in Arabic Language in Modern Communications
Authors: Adeyemo Abduljeeel Taiwo
Abstract:
This research attempts to provide a phonological and morphological compendium of Modern Standard Arabic (MSA) for present-day communication. It is carried out with the chief aim of grammatical analysis across two broad fields of Arabic linguistics, namely applied linguistics and socio-linguistics, and draws a pictorial record of both in Arabic phonology and morphology. Thematically, it postulates and examines, to a large degree, the theory of concord in contemporary Modern Arabic language acquisition. It utilizes an analytical method while portraying Arabic as a Semitic language that promotes the study of linguistics and syntax among scholars of these fields.
Keywords: Arabic language, applied linguistics, socio-linguistics, modern communications
Procedia PDF Downloads 331
27847 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ of cognitive tasks, while consecutive interpreting (CI) does not require interpreters to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-compiled inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total of 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, type-token ratio, and core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. Frequency motif, a non-grammatically-bound sequential unit, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and even more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways so as to minimize the cognitive load. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of Working Memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation and are more self-paced; they may thus tend to retain and generate the information in ways that lessen the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more high-frequency content words with fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
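Two of the indices named above can be sketched in a few lines of Python; the sentence and dependency heads below are toy data, not drawn from the study's corpus:

```python
def type_token_ratio(tokens):
    """Type-token ratio: vocabulary size divided by text length."""
    return len(set(tokens)) / len(tokens)

def mean_dependency_distance(heads):
    """Mean absolute distance between each word (1-indexed position)
    and its syntactic head; heads[i] == 0 marks the root, which is skipped."""
    dists = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# Toy sentence and hand-written dependency heads (not parser output).
tokens = "the interpreter kept the structure of the source".split()
ttr = type_token_ratio(tokens)                        # 6 types over 8 tokens
mdd = mean_dependency_distance([2, 3, 0, 5, 3, 5, 8, 6])
```

Higher type-token ratios indicate more diversified vocabulary; longer mean dependency distances are taken as a sign of heavier syntactic processing demand.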
Procedia PDF Downloads 177
27846 Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching
Authors: Weitao Lin
Abstract:
To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs a regularized abstraction of them. It proposes a Chinese event detection technique based on dependency parsing and rule matching. The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of the dependency syntactic analysis, filters out utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method.
Keywords: natural language processing, Chinese event detection, rule matching, dependency parsing
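The pipeline described (dependency parsing followed by word/phrase-level pattern matching) might be sketched as follows. The parse triples and the trigger lexicon are hand-written stand-ins for real parser output and real rules, not the paper's actual rule set:

```python
# Each parsed token: (index, word, head_index, relation).  In practice the
# tuples would come from a dependency parser; here they are hand-written.
parsed = [
    (1, "公司", 2, "nsubj"),
    (2, "收购", 0, "root"),
    (3, "了", 2, "aux"),
    (4, "对手", 2, "dobj"),
]

# Hypothetical rule: an event trigger is a root verb from a trigger lexicon
# that governs both a subject and an object.
TRIGGER_LEXICON = {"收购", "发布", "起诉"}

def detect_event(tokens):
    roots = [t for t in tokens if t[3] == "root" and t[1] in TRIGGER_LEXICON]
    for idx, word, _, _ in roots:
        rels = {t[3] for t in tokens if t[2] == idx}
        if {"nsubj", "dobj"} <= rels:
            return word  # the detected event trigger
    return None          # utterance filtered out as non-event

trigger = detect_event(parsed)
```

Utterances whose parse matches no rule are discarded, which is how the method filters out text with "prominent non-event characteristics."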
Procedia PDF Downloads 141
27845 Hierarchical Tree Long Short-Term Memory for Sentence Representations
Authors: Xiuying Wang, Changliang Li, Bo Xu
Abstract:
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents them from supporting a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split a sentence into three hierarchies: short-phrase, long-phrase and full-sentence level. The HTLSTM model gives our algorithm the potential to fully exploit the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task, and the results show that our model significantly outperforms several existing state-of-the-art approaches.
Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis
Procedia PDF Downloads 349
27844 An Investigation into Slow ESL Reading Speed in Pakistani Students
Authors: Hina Javed
Abstract:
This study investigated the different strategies used by Pakistani students learning English as a second language at secondary school level. The basic premise of the study is that ESL students face tremendous difficulty while reading a text in English. It also purports to dig into the different causes of their slow reading, which might range from word-reading accuracy, mental translation, lexical density, cultural gaps and complex syntactic constructions to back skipping. Sixty Grade 7 students from two mainstream secondary schools in Lahore were selected for the study, thirty boys and thirty girls. They were administered reading-related and reading-speed pre- and post-tests. The purpose of the tests was to gauge their performance on different reading tasks so as to see what strategies, if any, they used, and to ascertain the causes hampering their performance. In the pre-tests, they were given simple texts with considerable lexical density and a moderately complex sentential layout. In the post-tests, the reading tasks contained comic strips, texts with visuals, texts with controlled vocabulary, and an evenly distributed range of simple, compound, and complex sentences. Both tests were timed. The data gathered corroborated the researchers’ basic hunch: students performed significantly better on the post-tests than on the pre-tests. The findings suggest that the morphological structure of words and lexical density are the main sources of reading comprehension difficulty for poor ESL readers. It is also confirmed that texts accompanied by pictorial visuals greatly facilitate students’ reading speed and comprehension. There is no substantial evidence that ESL readers adopt any specific strategy while reading in English.
Keywords: slow ESL reading speed, mental translation, complex syntactic constructions, back skipping
Procedia PDF Downloads 71
27843 Research on Road Openness in the Old Urban Residential District Based on Space Syntax: A Case Study on Kunming within the First Loop Road
Authors: Haoyang Liang, Dandong Ge
Abstract:
With the rapid development of Chinese cities, traffic congestion has become more and more serious. At the same time, many gated old residential areas in Chinese cities seriously affect the connectivity of urban roads and reduce the density of urban road networks. When such restricted old residential areas are reopened, their internal roads are transformed into urban roads, which helps greatly to alleviate traffic congestion. This paper uses space syntax theory to analyze the urban road network, comparing roads by integration and connectivity degree in order to evaluate whether opening up the roads in residential areas can improve urban traffic. Based on the road network system within the first loop road in Kunming, a Space Syntax evaluation model is established for analysis of the status quo. A comparative analysis is then used to examine how the model changes before and after opening up the roads of the old urban residential districts within Kunming's first loop road, and the areas showing significant differences are picked out for small-scale model analysis. According to the analysis results and the traffic situation, an evaluation of road openness in the old urban residential districts is proposed in order to improve them.
Keywords: Space Syntax, Kunming, urban renovation, traffic jam
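Integration, the space-syntax measure the abstract relies on, is commonly derived from mean topological depth via relative asymmetry (RA); integration is then usually reported as 1/RA. The sketch below uses a toy five-line axial map, not data from Kunming:

```python
from collections import deque

def mean_depth(graph, start):
    """Mean shortest-path (topological) depth from one axial line to all others."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in depth:
                depth[nb] = depth[node] + 1
                queue.append(nb)
    others = [d for n, d in depth.items() if n != start]
    return sum(others) / len(others)

def relative_asymmetry(graph, node):
    """RA = 2*(MD - 1)/(k - 2); lower RA means better integrated."""
    k = len(graph)
    return 2 * (mean_depth(graph, node) - 1) / (k - 2)

# Toy axial map: line A is a well-connected through-road, D is a cul-de-sac.
axial = {
    "A": ["B", "C", "E"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
    "E": ["A"],
}
ra_a = relative_asymmetry(axial, "A")
ra_d = relative_asymmetry(axial, "D")
```

Opening a gated district adds its internal roads as new nodes and edges to this graph, which is what lets before/after integration values be compared.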
Procedia PDF Downloads 162
27842 Distinguishing Borrowings from Code Mixes: An Analysis of English Lexical Items Used in the Print Media in Sri Lanka
Authors: Chamindi Dilkushi Senaratne
Abstract:
Borrowing is the morphological, syntactic and (usually) phonological integration of lexical items from one language into the structure of another language. Borrowings show complete linguistic integration and, through frequency of use, become fossilized in the recipient language, which differentiates them from switches and mixes. Code mixes are different from borrowings: code mixing takes place when speakers use lexical items in casual conversation to serve a variety of functions. This study presents an analysis of lexical items used in English newspapers in Sri Lanka in 2017 which reveal characteristics of borrowings or code mixes. Both phenomena arise from language contact. The study also uses data from social media websites that comment on newspaper articles available on the web. The study reiterates that borrowings are distinguishable from code mixes and that the two are different phenomena occurring in language contact situations. It also shows how existing morphological processes are used to create new vocabulary, and how the bilingual uses them to be creative and innovative and to convey a bilingual identity.
Keywords: borrowing, code mixing, morphological processes
Procedia PDF Downloads 219
27841 5iD Viewer: Observation of Fish School Behaviour in Labyrinths and Use of Semantic and Syntactic Entropy for School Structure Definition
Authors: Dalibor Štys, Kryštof M. Stys, Maryia Chkalova, Petr Kouba, Aliaxandr Pautsina, Dalibor Štys Jr., Jana Pečenková, Denis Durniev, Tomáš Náhlík, Petr Císař
Abstract:
In this article, the construction and some properties of the 5iD viewer, a system that simultaneously records five views of a given experimental object, are reported. The properties of the system are demonstrated through the analysis of fish schooling behavior. We demonstrate a method of instrument calibration that accounts for image distortion, and we propose, and partly test, a method of distance assessment for the case in which only two opposite cameras are available. Finally, we demonstrate how the state trajectory of the fish school's behavior may be constructed from the entropy of the system.
Keywords: 3D positioning, school behavior, distance calibration, space vision, space distortion
Procedia PDF Downloads 389
27840 Articles, Delimitation of Speech and Perception
Authors: Nataliya L. Ogurechnikova
Abstract:
The paper aims to clarify the function of articles in English speech and to specify their place and role in the English language, taking into account the use of articles for the delimitation of speech. A focus of the paper is the use of the definite and the indefinite article with different types of noun phrases, which comprise either one noun with or without attributes, such as the King, the Queen, the Lion, the Unicorn, a dimple, a smile, a new language, an unknown dialect, or several nouns with or without attributes, such as the King and Queen of Hearts, the Lion and Unicorn, a dimple or smile, a completely isolated language or dialect. It is argued that the function of delimitation is related to perception: the number of speech units in a text correlates with the way the speaker perceives and segments the denotation. The two combinations of words the house and garden and the house and the garden contain different numbers of speech units, one and two respectively, and reveal two different modes of perception, which correspond to the use of the definite article in the examples given. Thus, the function of delimitation is twofold: it is related to perception and cognition, on the one hand, and to grammar, on the other, if the subject of grammar is the structure of speech. The analysis of speech units in the paper is not limited to noun phrases; it is amplified by a discussion of peripheral phenomena which are nevertheless important because they allow articles to be qualified as a syntactic phenomenon, whereas they are not infrequently described in terms of noun morphology. In this regard, attention is given to the history of linguistic studies, specifically to the description of English articles by Niels Haislund, a disciple of Otto Jespersen.
A discrepancy is noted between the initial plan of Jespersen, who intended to describe articles as a syntactic phenomenon in ‘A Modern English Grammar on Historical Principles’, and the interpretation of articles in terms of noun morphology finally given by Haislund. Another issue of the paper is the correlation between description and denotation, a traditional aspect of linguistic studies focused on articles. The overview of relevant studies given in the paper goes back to the works of G. Frege, which gave rise to a series of scientific works in which the meaning of articles was described within the scope of logical semantics. The correlation between denotation and description is treated in the paper as the meaning of the article, i.e. a component of its semantic structure, which differs from the function of delimitation and is similar to the meaning of other quantifiers. The paper further explains why the relation between description and denotation, i.e. the meaning of the English article, is irrelevant to noun morphology and has nothing to do with the nominal categories of the English language.
Keywords: delimitation of speech, denotation, description, perception, speech units, syntax
Procedia PDF Downloads 240
27839 Towards a Large Scale Deep Semantically Analyzed Corpus for Arabic: Annotation and Evaluation
Authors: S. Alansary, M. Nagi
Abstract:
This paper presents an approach to the semantic annotation of an Arabic corpus using the Universal Networking Language (UNL) framework. UNL is intended to be a promising strategy for providing a large collection of semantically annotated texts with formal, deep semantics rather than shallow ones. The result constitutes a semantic resource (semantic graphs) that is editable and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles and rhetorical relations, into a single semantic formalism for knowledge representation. The paper also presents the Interactive Analysis tool for automatic semantic annotation (IAN). In addition, the cornerstone of the proposed methodology, the disambiguation and transformation rules, is presented. Semantic annotation using UNL has been applied to a corpus of 20,000 Arabic sentences representing the most frequent structures in the Arabic Wikipedia. The representation at different linguistic levels is illustrated, starting from the morphological level and passing through the syntactic level until the semantic representation is reached. The output has been evaluated using the F-measure and is 90% accurate. This demonstrates how powerful the formal environment is, as it enables intelligent text processing and search.
Keywords: semantic analysis, semantic annotation, Arabic, universal networking language
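The F-measure used for evaluation combines precision and recall; a minimal sketch, with illustrative counts rather than the study's actual figures:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-score from true positives, false positives and false negatives.
    beta weights recall relative to precision (beta=1 gives the usual F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative counts only; the paper reports 90% accuracy on 20,000 sentences.
f1 = f_measure(tp=900, fp=100, fn=100)
```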
Procedia PDF Downloads 582
27838 Cognitive Stylistics and Horror Fiction: A Case Study of Stephen King’s Misery
Authors: Kriangkrai Vathanalaoha
Abstract:
Misery generates fear and anxiety in readers through an intense plot built around the unpredictable emotional states of the nurse, Annie Wilkes, as she mentally and physically abuses her victim, the novelist Paul Sheldon. The suspense lies not only at the story level, where violent expressions are used, but also at the discourse level, where the linguistic structures may intentionally cause the reader to perceive language as a disturbing performative. This performativity is reflected through linguistic choices by which the writer triggers a new imaginative world through the experiential metafunction and schema disruption. This study explores striking excerpts from the novel through mind style and transitivity analysis to demonstrate how the horrific experience contrasts when the protagonist and the antagonist converse at length. The results reveal that stylistic deviation can be found at the syntactic level, where the intensity of emotion becomes apparent when the protagonist is verbally abused. In addition, transitivity can flesh out how the protagonist is expressed chiefly through internalized processes, whereas the antagonist is prominent in externalized processes. The findings suggest that the application of cognitive stylistics, such as mind style and transitivity analysis, can contribute to the mental representation of horrific reality.
Keywords: horror, mind style, misery, stylistics, transitivity
Procedia PDF Downloads 140
27837 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting the specific relationships between characters in Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, because they consider only the weight of a relationship and not its direction, it is difficult for them to extract directed relationships, such as one-sided love, from text. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. We expect that the proposed method could also be applied to the analysis of relationships between characters in other content, such as movies or TV dramas.
Keywords: data mining, Korean linguistic feature, literary fiction, relationship extraction
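A minimal sketch of how directed, weighted character relations might be accumulated from speech-verb cues. The polarity lexicon and the utterance tuples are hypothetical, not the paper's actual resources:

```python
from collections import defaultdict

# Hypothetical polarity lexicon for Korean speech verbs: praise (+1) / blame (-1).
SPEECH_VERB_POLARITY = {"칭찬하다": 1, "비난하다": -1}

def build_relation_graph(utterances):
    """Accumulate directed edge weights speaker -> target from speech verbs,
    so one-sided relations (e.g. unrequited affection) keep their direction."""
    graph = defaultdict(int)
    for speaker, target, verb in utterances:
        graph[(speaker, target)] += SPEECH_VERB_POLARITY.get(verb, 0)
    return dict(graph)

graph = build_relation_graph([
    ("철수", "영희", "칭찬하다"),
    ("철수", "영희", "칭찬하다"),
    ("영희", "철수", "비난하다"),
])
```

Because edges are directed, the asymmetry of a relation survives: here 철수's attitude toward 영희 is positive while the reverse edge is negative.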
Procedia PDF Downloads 380
27836 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and in sentence comprehension by native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating task and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch) and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that, overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups differed in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers.
The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016), and they highlight similarities and differences between L1 and L2 sentence processing and comprehension.
Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
Procedia PDF Downloads 152
27835 Composite Kernels for Public Emotion Recognition from Twitter
Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang
Abstract:
The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives on certain topics online. Our research aims to use natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with a linear kernel to simultaneously exploit both the tree representation and the distributed emotion-keyword representation, analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
Keywords: emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining
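The kernel combination might be sketched as follows. The "tree kernel" here is a crude production-rule-counting stand-in for a real convolution tree kernel, and the mixing weight and the emotion-keyword vectors are hypothetical, not the paper's configuration:

```python
def linear_kernel(u, v):
    """Dot product over distributed emotion-keyword vectors."""
    return sum(a * b for a, b in zip(u, v))

def tree_kernel(t1, t2):
    """Crude stand-in for a convolution tree kernel: count shared
    production rules between trees given as (label, children) tuples."""
    def productions(t):
        label, children = t
        if not children:
            return []
        rule = (label, tuple(c[0] for c in children))
        return [rule] + [r for c in children for r in productions(c)]
    p2 = productions(t2)
    return sum(p2.count(r) for r in productions(t1))

def composite_kernel(t1, v1, t2, v2, alpha=0.5):
    """Weighted combination; alpha is a tunable mixing weight."""
    return alpha * tree_kernel(t1, t2) + (1 - alpha) * linear_kernel(v1, v2)

leaf = lambda lab: (lab, [])
t_a = ("S", [("NP", [leaf("PRP")]), ("VP", [leaf("VBP"), ("NP", [leaf("NN")])])])
t_b = ("S", [("NP", [leaf("PRP")]), ("VP", [leaf("VBP"), ("NP", [leaf("NN")])])])
score = composite_kernel(t_a, [1.0, 0.0, 2.0], t_b, [0.5, 1.0, 1.0])
```

A kernel machine such as an SVM can consume the composite score directly, which is what lets syntactic and content evidence be exploited simultaneously.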
Procedia PDF Downloads 218
27834 Communicative Strategies in Colombian Political Speech: On the Example of the Speeches of Francia Marquez
Authors: Danila Arbuzov
Abstract:
In this article, the author examines the communicative strategies used in Colombian political discourse, taking as an example the speeches of the Vice President of Colombia, Francia Marquez, who took office in 2022 and marked a new vector of development for the Colombian nation. The lexical and syntactic means used to achieve communicative objectives are analyzed. The material presented may be useful for those interested in investigating various aspects of discourse linguistics, particularly political discourse, as well as the implementation of communicative strategies in certain types of discourse.
Keywords: political discourse, communication strategies, Colombian political discourse, Colombia, manipulation
Procedia PDF Downloads 113
27833 Contrastive Focus Marking in Brazilian Children under Typical and Atypical Phonological Development
Authors: Geovana Soncin, Larissa Berti
Abstract:
Some aspects of prosody acquisition remain unclear, especially regarding atypical speech development. This work deals with prosody acquisition and its implications for clinical purposes. We therefore analyze speech samples produced by adult speakers, by children in typical language development, and by children with phonological disorders. A phonological disorder comprises deviant manifestations characterized by inconsistencies in the phonological representation of a linguistic system under acquisition. Clinical assessment is mostly performed on the basis of contrasts whose manifestations occur at the segmental level of the phonological system; the prosodic organization of spoken utterances is not included in the standard assessment. However, assuming that prosody is part of the phonological system, it was hypothesized that children with phonological disorders could present inconsistencies that also occur at the prosodic level. Based on this hypothesis, the paper analyzes contrastive focus marking in the speech of children with phonological disorders in comparison with the speech of children in typical language development and of adults. The participants in all groups were native speakers of Brazilian Portuguese. The investigation was designed to identify differences and similarities among the groups that could be interpreted as clues to normal or deviant processes of prosody acquisition. Contrastive focus in Brazilian Portuguese is marked by increased duration, f0 and intensity on the focused element, as well as by a particular type of pitch accent (L*+H). Thirty-nine subjects participated, thirteen from each group. Acoustic analysis was performed considering duration, intensity and intonation as parameters.
Children with phonological disorders were recruited from a speech-language pathology service; children in typical development, matched in age and sex with the first group, were recruited in a regular school; and adults aged 20-24 were recruited from a university class. In a game prepared to elicit focused sentences, all of them produced the sentence “Girls love red dress,” marking focus in different syntactic positions: subject, verb, and object. The results showed that adults, children in typical language development, and children with phonological disorders marked contrastive focus differently. Typical children used all the parameters as adults do; however, compared with adults, they exaggerated duration and, in the opposite direction, did not increase f0 to a sufficient magnitude. Children with phonological disorders presented inconsistencies in duration, not increasing it in some syntactic positions, and also in intonation, not producing the pitch accent representative of contrastive focus. The results suggest that prosody is also affected by phonological disorders and give clues about the developmental processes of prosody acquisition.
Keywords: Brazilian Portuguese, contrastive focus, phonological disorder, prosody acquisition
Procedia PDF Downloads 86
27832 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradicting findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and their evenness of distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. 
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by, and peculiar to, each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
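The entropy computation described above can be sketched in a few lines. This is a minimal illustration of Shannon entropy over token frequencies, not the study's actual pipeline; the toy samples below are invented for demonstration only.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the frequency distribution of tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Invented toy samples: the more repetitive (simplified) sample, with fewer
# types distributed less evenly, yields lower entropy.
native = ["the", "cat", "sat", "on", "a", "mat", "while", "dogs", "barked"]
constrained = ["the", "cat", "sat", "on", "the", "mat", "the", "cat", "sat"]
```

The same function applies unchanged to POS tags instead of word forms, which is what makes the measure usable for both lexical and syntactic complexity.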
Procedia PDF Downloads 93
27831 Age and Second Language Acquisition: A Case Study from Maldives
Authors: Aaidha Hammad
Abstract:
The age at which a child should be exposed to a second language is a controversial issue in communities such as the Maldives, where English is taught as a second language. It has been observed that different stakeholders hold different viewpoints on the issue. Some believe that the earlier children are exposed to a second language, the better they learn, while others disagree with this notion. Hence, this case study investigates whether children learn a second language better when they are exposed to it at an earlier age. The spoken and written data collected confirm that earlier exposure helps in mastering the sound pattern and speaking fluency, with a more native-like accent, while a later age is better for learning more abstract aspects such as grammar and syntactic rules.
Keywords: age, fluency, second language acquisition, development of language skills
Procedia PDF Downloads 425
27830 Analysis of Spatiotemporal Efficiency and Fairness of Railway Passenger Transport Network Based on Space Syntax: Taking Yangtze River Delta as an Example
Abstract:
Based on the railway network and the principles of space syntax, this study attempts to reconstruct the spatial relationships of the passenger network connections from a space-time perspective. Using travel time data for the main stations in the Yangtze River Delta urban agglomeration obtained from the Internet, topological drawings of the railway network under different time sections are constructed. With a comprehensive index composed of connection and integration, the accessibility and network operation efficiency of the railway network in different time periods are calculated, while the fairness of the network is analyzed using fairness indicators constructed from integration and location entropy, from the perspectives of horizontal and vertical fairness respectively. From the analysis of the efficiency and fairness of the railway passenger transport network, the study finds: (1) there is strong regularity in regional system accessibility change; (2) the problems of efficiency and fairness differ across time periods; (3) the improvement of efficiency leads to a decline in horizontal fairness to a certain extent, while from the perspective of vertical fairness, the supply-demand situation has changed smoothly with time; (4) the network connection efficiency of the Shanghai, Jiangsu and Zhejiang regions is higher than that of western regions such as Anqing and Chizhou; (5) the marginalization of Nantong, Yancheng, Yangzhou and Taizhou is obvious. The study explores the application of space syntax theory in regional traffic analysis, in order to provide a reference for the development of urban agglomeration transportation networks.
Keywords: spatial syntax, the Yangtze River Delta, railway passenger time, efficiency and fairness
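The integration measure at the core of such an analysis can be illustrated with a small sketch. The four-station network, the station names, and the relative asymmetry (RA) formulation below are a simplified stand-in for the paper's comprehensive index; space syntax commonly derives integration from mean topological depth, with lower RA indicating higher integration.

```python
from collections import deque

def mean_depth(adj, origin):
    """Mean topological depth from origin to every other node, via BFS."""
    depth = {origin: 0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in depth:
                depth[neighbour] = depth[node] + 1
                queue.append(neighbour)
    others = [d for node, d in depth.items() if node != origin]
    return sum(others) / len(others)

def relative_asymmetry(adj, origin):
    """Space-syntax RA = 2(MD - 1) / (n - 2); lower means better integrated."""
    return 2 * (mean_depth(adj, origin) - 1) / (len(adj) - 2)

# Hypothetical four-station topology for illustration only: the hub node is
# one step from everything, so its RA is 0 (maximally integrated).
net = {
    "Shanghai": ["Nanjing", "Hangzhou", "Hefei"],
    "Nanjing": ["Shanghai", "Hefei"],
    "Hangzhou": ["Shanghai"],
    "Hefei": ["Shanghai", "Nanjing"],
}
```

Recomputing RA over topologies built from different time sections would mirror the paper's spatiotemporal comparison.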
Procedia PDF Downloads 136
27829 The Grammatical Dictionary Compiler: A System for Kartvelian Languages
Authors: Liana Lortkipanidze, Nino Amirezashvili, Nino Javashvili
Abstract:
The purpose of a grammatical dictionary is to provide information on the morphological and syntactic characteristics of the basic word in the dictionary entry. Electronic grammatical dictionaries are used as a tool of automated morphological analysis for text processing. The Georgian Grammatical Dictionary should contain grammatical information for each word: part of speech, type of declension/conjugation, grammatical forms of the word (paradigm), and alternative variants of the basic word/lemma. In this paper, we present a system for compiling the Georgian Grammatical Dictionary automatically. We propose dictionary-based methods for extending grammatical lexicons. The input lexicon contains only a small number of words with identical grammatical features. The extension is based on similarity measures between features of words; more precisely, we add to the extended lexicons words that are similar to those already in the grammatical dictionary. Our dictionaries are corpus-based, and for compiling them, we introduce a method for the lemmatization of unknown words, i.e., words of which neither the full form nor the lemma is in the grammatical dictionary.
Keywords: acquisition of lexicon, Georgian grammatical dictionary, lemmatization rules, morphological processor
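The similarity-based lexicon extension can be illustrated with a toy sketch. The shared-suffix heuristic, the seed word forms, and the paradigm labels below are hypothetical placeholders for the paper's actual feature-based similarity measure.

```python
def extend_lexicon(lexicon, unknown_words, suffix_len=3):
    """Give an unknown word the paradigm of a known word sharing its ending.

    The shared-suffix test is a crude stand-in for a proper similarity
    measure over grammatical features.
    """
    extended = dict(lexicon)
    for word in unknown_words:
        for known, paradigm in lexicon.items():
            if word.endswith(known[-suffix_len:]):
                extended[word] = paradigm
                break  # first matching known word decides the paradigm
    return extended

# Hypothetical seed entries: word forms and class labels are illustrative.
seed = {"megoba": "noun-class-1", "tsigni": "noun-class-2"}
grown = extend_lexicon(seed, ["gatoba"])
```

An unknown word whose ending matches no seed entry is simply left out of the extension, which mirrors the need for a separate lemmatization step for such words.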
Procedia PDF Downloads 145
27828 Translating the Gendered Discourse: A Corpus-Based Study of the Chinese Science Fiction The Three Body Problem
Authors: Yi Gu
Abstract:
The Three-Body Problem by Cixin Liu has been a bestselling Chinese sci-fi novel since 2008. The book was translated into English by Ken Liu in 2014 and won the prestigious 2015 Hugo Award for science fiction and fantasy writing, drawing greater attention from wider international communities. The story exposes the horrors of the Chinese Cultural Revolution of the 1960s in an intriguing narrative for readers at home and abroad. However, without access to the source text, Western readers may not be aware that the original Chinese version of the book is rich in gender bias. Some Chinese scholars have applied feminist translation theories in their analyses of this book before, but based on isolated, cherry-picked examples. This paper therefore aims to obtain a more thorough picture of how translators can cope with gender discrimination and reshape the gendered discourse of the source text, by systematically investigating the lexical and syntactic patterns in the translation of Liu's entire book of 400 pages. The source text and the translation were downloaded into digital files, automatically aligned at paragraph level and then manually post-edited. They were then compiled into a parallel corpus of 114,629 English words and 204,145 Chinese characters using Sketch Engine. Gender-discrimination markers, such as the overuse of 'girl' to describe an adult woman, were searched for in the source text, and the alignment made it possible to identify the strategies adopted by the translator to mitigate gender discrimination. The results provide a framework for translators to address gender bias. The study also shows how corpus methods can be used to further research in feminist translation and critical discourse analysis.
Keywords: corpus, discourse analysis, feminist translation, science fiction translation
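A search over the aligned corpus of the kind described, locating segments where a source-language gender marker is not rendered by the expected English equivalent, might be sketched as follows. The function name and the toy sentence pairs are illustrative inventions, not drawn from the actual corpus.

```python
import re

def find_marker_shifts(pairs, source_marker, target_equivalents):
    """Collect aligned segments where the source gender marker appears but
    none of the expected target-language equivalents does, i.e. candidate
    spots where the translator reshaped the gendered wording."""
    shifts = []
    for source, target in pairs:
        if source_marker in source and not any(
            re.search(rf"\b{re.escape(m)}\b", target, re.IGNORECASE)
            for m in target_equivalents
        ):
            shifts.append((source, target))
    return shifts

# Invented sentence pairs; the marker follows the 'girl'-for-woman example.
pairs = [
    ("那姑娘是物理学家", "That woman was a physicist"),
    ("小姑娘笑了", "The girl smiled"),
]
shifts = find_marker_shifts(pairs, "姑娘", ["girl"])
```

Each flagged pair is then a candidate for manual inspection, which is where the feminist-translation analysis proper would take over.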
Procedia PDF Downloads 256
27827 Subtitled Based-Approach for Learning Foreign Arabic Language
Authors: Elleuch Imen
Abstract:
This paper proposes a new approach for learning Arabic as a foreign language via audio-visual translation, particularly subtitling. The approach consists of developing video sequences appropriate to different levels of learning (from A1 to C2), containing conversations, quizzes, games and other activities. Each video aims to achieve a specific objective, such as the correct pronunciation of Arabic words, the correct syntactic structuring of Arabic sentences, the recognition of the morphological characteristics of terms, and the semantic understanding of statements. The subtitled videos obtained can be incorporated into different tools for learning Arabic as a second language, such as MOOCs, websites, platforms, etc.
Keywords: arabic foreign language, learning, audio-visual translation, subtitled videos
Procedia PDF Downloads 60
27826 Processing Mild versus Strong Violations in Music: A Pilot Study Using Event-Related Potentials
Authors: Marie-Eve Joret, Marijn Van Vliet, Flavio Camarrone, Marc M. Van Hulle
Abstract:
Event-related potentials (ERPs) provide evidence that the human brain can process and understand music at a pre-attentive level. Music-specific ERPs include the early right anterior negativity (ERAN) and a late negativity (N5). This study aims to investigate this issue further using two types of syntactic manipulations in music: mild violations, containing no out-of-key tones, and strong violations, containing out-of-key tones. We will examine whether both manipulations elicit the same ERPs.
Keywords: ERAN, ERPs, music, N5 component, P3 component
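The condition contrast in such a study is typically assessed by averaging epochs per condition and inspecting the difference wave. The following sketch uses simulated data with an injected negativity; the window boundaries, amplitudes, and trial counts are arbitrary illustration values, not parameters of the pilot study.

```python
import random

random.seed(0)
TRIALS, SAMPLES = 40, 200

def simulate_epoch(negative_window=None):
    """One simulated EEG epoch: Gaussian noise, optionally shifted negative
    inside a window to mimic the response to a syntactic violation."""
    data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]
    if negative_window:
        start, end = negative_window
        for t in range(start, end):
            data[t] -= 2.0
    return data

regular = [simulate_epoch() for _ in range(TRIALS)]
violation = [simulate_epoch((80, 110)) for _ in range(TRIALS)]

def grand_average(epochs):
    """Average the epochs sample by sample, yielding the ERP waveform."""
    return [sum(e[t] for e in epochs) / len(epochs) for t in range(SAMPLES)]

# Difference wave: violation minus regular; negative values in the early
# window correspond to an ERAN-like effect.
diff_wave = [v - r for v, r in zip(grand_average(violation), grand_average(regular))]
eran_mean = sum(diff_wave[80:110]) / len(diff_wave[80:110])
```

Comparing the windowed mean of the difference wave between mild and strong violation conditions is one simple way to test whether both manipulations elicit the same components.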
Procedia PDF Downloads 275
27825 Language Transfer in Graduate Candidates’ Essays
Authors: Erika Martínez Lugo
Abstract:
Candidates for some graduate programs are asked to write essays in English to prove their competence both in essay writing and in English. In the present study, language transfer (LT) in 15 written essays is identified, documented, analyzed, and classified. The essays were written in 2019, and the graduate program is a Master's in Modern Languages in a northwestern Mexican city bordering the USA. This study is of interest since it is important to determine whether some errors have fossilized and become mistakes, or whether they are part of the candidates' interlanguage. The results show that most language transfer is negative and syntactic, where the influence of the candidates' L1 (Spanish) is evident in their use of the L2 (English).
Keywords: language transfer, cross-linguistic influence, interlanguage, error vs mistake
Procedia PDF Downloads 177
27824 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches
Authors: Mariam Matiashvili
Abstract:
Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires particular syntactic-pragmatic structures: arguments that add credibility to a statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing which elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of an argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help to identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze the argumentative structures. Lexical classification and analysis: identifying lexical items that are relevant in the process of creating argumentative texts, and building a lexicon of argumentation (groups of words gathered from a semantic point of view). Grammatical analysis and classification: grammatical analysis of the words and phrases identified on the basis of the arguing lexicon. Argumentation schemes: describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is "Argument from Analogy", the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon
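A lexicon-driven labeling step of the kind described might be sketched as follows. The cue words are English glosses standing in for the Georgian arguing lexicon, and the role inventory (claim/support/attack) follows the abstract; everything else is an illustrative assumption.

```python
# Toy arguing lexicon keyed by argumentative function; in the actual study
# the entries would be Georgian words grouped semantically.
ARG_LEXICON = {
    "claim": ["therefore", "clearly", "must"],
    "support": ["because", "since", "for example"],
    "attack": ["however", "but", "on the contrary"],
}

def label_segments(sentences):
    """Label each sentence with the argumentative roles its cue words suggest.

    Substring matching is a deliberate simplification; real cue detection
    would need tokenization and the grammatical analysis described above.
    """
    labels = []
    for sentence in sentences:
        roles = {role for role, cues in ARG_LEXICON.items()
                 if any(cue in sentence.lower() for cue in cues)}
        labels.append((sentence, roles or {"none"}))
    return labels

demo = label_segments([
    "Taxes must be lowered, clearly.",
    "Because investment rises when rates fall.",
    "However, the deficit would grow.",
])
```

Pairing such role labels with detected argumentation schemes is the kind of relation analysis the final step of the study describes.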
Procedia PDF Downloads 70
27823 Morphological and Syntactic Meaning: An Interactive Crossword Puzzle Approach
Authors: Ibrahim Garba
Abstract:
This research involved the use of word distributions and morphological knowledge by speakers of Arabic learning English, who connected different allomorphs in order to realize how the morphology and syntax of English convey meaning, using interactive crossword puzzles (ICP). Fifteen chapters were covered with a class of nine learners over an academic year of an intensive English program and were reviewed using the ICP. Learners were questioned about how the use of this gaming element enhanced and motivated their learning of English. The findings were positive, indicating a successful implementation of ICP at both the creational and user levels. This indicates the positive role technology can play in learning and teaching English through the adoption of an interactive gaming element.
Keywords: distribution, gaming, interactive-crossword-puzzle, morphology
Procedia PDF Downloads 331
27822 Semantic Data Schema Recognition
Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia
Abstract:
The subject covered in this paper aims at assisting users in their data quality approach. The goal is to better extract, mix, interpret and reuse data. The paper deals with the semantic schema recognition of a data source, which enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning each element to a category and possibly a sub-category; secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. The links detected between columns offer a better understanding of the source and alternatives for correcting data. This approach allows the automatic detection of a large number of syntactic and semantic anomalies.
Keywords: schema recognition, semantic data profiling, meta-categorisation, semantic dependencies inter columns
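The column-categorization step can be illustrated with a simple pattern-based sketch. The categories, regular expressions, and the 80% threshold below are illustrative assumptions, not the paper's actual profiling rules.

```python
import re

# Illustrative categoriser: assigns a column a semantic category from the
# syntactic shape of its values.
CATEGORY_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "phone": re.compile(r"^\+?\d[\d\s-]{6,}$"),
}

def categorise_column(values, threshold=0.8):
    """Return the first category matched by at least `threshold` of the
    values, or "unknown"; the slack below the threshold tolerates a few
    syntactically anomalous cells."""
    for category, pattern in CATEGORY_PATTERNS.items():
        hits = sum(bool(pattern.match(v)) for v in values)
        if hits / len(values) >= threshold:
            return category
    return "unknown"
```

The cells that fail their column's winning pattern are exactly the syntactic anomalies such an approach can flag for correction.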
Procedia PDF Downloads 418
27821 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study
Authors: Insiya Bhalloo
Abstract:
It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and its sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in the Quran or prevalent there at a significantly lower frequency. During the test phase, participants will then be presented with both familiar (n = 20; i.e., words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words, and the other half will be common MSA words that are not Quranic words.
Moreover, half of the Quranic Arabic and MSA words presented will be nouns and half will be verbs, thereby eliminating word-processing issues related to lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.
Keywords: modern standard arabic, phonological facilitation, phonological memory, Quranic arabic, word recognition
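Recognition performance in old/new designs like this (hits on familiar items versus false alarms on novel ones) is commonly summarized with the signal-detection sensitivity index d'. A small sketch, using hypothetical counts rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index for an old/new recognition test, with a log-linear
    correction so rates of 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for 20 familiar and 20 novel test items.
sensitivity = d_prime(hits=16, misses=4, false_alarms=5, correct_rejections=15)
```

Computing d' separately for Quranic and MSA items, in each participant group, would quantify both the predicted recall advantage and the predicted inflation of false alarms for unfamiliar Quranic words.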
Procedia PDF Downloads 357