Search results for: linguistic similarity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1530

1080 A Hybrid Watermarking Scheme Using Discrete and Discrete Stationary Wavelet Transformation For Color Images

Authors: Bülent Kantar, Numan Ünaldı

Abstract:

This paper presents a new method for robust and invisible digital watermarking of color images, in which color images are also used as watermarks. Watermarking is performed in the frequency domain, using the discrete wavelet transform (DWT) and the discrete stationary wavelet transform (DSWT). A two-level DWT applied to the original image yields low-, medium- and high-frequency coefficients. A one-level DSWT is then applied separately to each frequency subband of the two-level DWT, and a watermark is added to each of the resulting low-frequency coefficient sets; watermarks are thus added to all frequency subbands of the two-level DWT, for a total of four watermarks embedded in the original image. To recover the watermark, both the original and the watermarked image are subjected to the two-level DWT followed by the one-level DSWT, and each watermark is extracted from the difference between the DSWT low-frequency coefficients of the two images. Four candidate watermarks are thereby obtained, one from each subband of the two-level DWT. Each candidate is compared with the true watermark to yield a similarity score, and the candidate with the highest similarity is selected. The proposed method was tested against geometric and image-processing attacks, and the results show that it is both robust and invisible. Combining the features of all subbands of the two-level DWT improves recovery of the watermark from the watermarked image, and the watermarks are embedded after conversion to binary images.
These operations give better watermark recovery from watermarked images subjected to geometric and image-processing attacks.
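The embed/extract cycle described above can be sketched in a few lines of numpy. This is a minimal illustration using a single-level Haar transform on a grayscale array in place of the paper's two-level DWT plus DSWT on color images; the function names, the 8×8 image, and the embedding strength `alpha` are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2D Haar transform: returns LL, LH, HL, HH subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, (8, 8))
watermark = rng.integers(0, 2, (4, 4)).astype(float)  # binary watermark
alpha = 5.0  # embedding strength (assumed value)

# Embed: add the scaled watermark to the low-frequency subband.
ll, lh, hl, hh = haar_dwt2(cover)
marked = haar_idwt2(ll + alpha * watermark, lh, hl, hh)

# Extract: difference of LL subbands of the watermarked vs. original image.
ll_m, *_ = haar_dwt2(marked)
recovered = (ll_m - ll) / alpha
```

Because the transform pair is linear and exact, differencing the low-frequency subbands recovers the watermark exactly here; under attacks the recovered watermark is only approximate, which is why the method compares similarity scores across subbands.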

Keywords: watermarking, DWT, DSWT, copyright protection, RGB

Procedia PDF Downloads 509
1079 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation

Authors: Austin Vander Wel, Katherin Vargas Henao

Abstract:

Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also those living within city limits. In response to this linguistic diversity and DC’s adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish’s representation on DC public transportation–although perhaps doing so in an unexpected way. In order to investigate Spanish’s de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Specifically concerning de facto representation, it focuses on the discrepancies between digital spaces and the actual physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by separately addressing aural and visual modalities. In digital spaces, data was collected from WMATA’s website (visual) and their bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were considered to successfully fulfill WMATA’s commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system, in addition to the aural modality overall. 
These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge from residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly portrays Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation in DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings.

Keywords: urban landscape, language access, critical language policy, Spanish, public transportation

Procedia PDF Downloads 50
1078 Different Views and Evaluations of IT Artifacts

Authors: Sameh Al-Natour, Izak Benbasat

Abstract:

The introduction of a multitude of new and interactive e-commerce information technology (IT) artifacts has impacted adoption research. Rather than functioning solely as productivity tools, new IT artifacts assume the roles of interaction mediators and social actors. This paper describes the varying roles assumed by IT artifacts and distinguishes four distinct foci by which the artifacts are evaluated. It further proposes a theoretical model that maps the different views of IT artifacts to four distinct types of evaluations.

Keywords: IT adoption, IT artifacts, similarity, social actor

Procedia PDF Downloads 366
1077 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly from the animal; they are collected without manipulating it (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype-matching and population-estimation algorithms stand out as important analysis tools that have been adapted to deal with such errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods across datasets differing in size and structure would be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity was observed between the genotypes matched by Colony and Cervus, which is not surprising given the similarity between those methods' pairwise-likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity to the genotypes matched by the other methods.
ETLM's distinct clustering algorithm and error model seem to lead to a more rigorous selection, although its processing time and user interface were the worst among the compared methods. The population estimators performed differently across the datasets. There was consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire, depending on the dataset. Unlike Capwire, BayesN considers only recapture events rather than the total number of recaptures, which makes the estimator sensitive to data heterogeneity, that is, to different capture rates between individuals. In these examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony is the most appropriate for general use, considering the balance between processing time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better across a wide range of situations.
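The core idea behind error-tolerant genotype matching can be illustrated with a toy sketch: treat two samples as the same individual if their multilocus genotypes disagree at no more than a small number of loci. This greedy, allele-set comparison is a deliberate simplification, not the likelihood machinery used by Cervus, Colony, or ETLM; the genotypes below are hypothetical.

```python
def mismatches(g1, g2):
    """Count loci at which two multilocus genotypes disagree.

    Each genotype is a list of (allele_a, allele_b) tuples, one per locus;
    allele order within a locus is ignored.
    """
    return sum(frozenset(a) != frozenset(b) for a, b in zip(g1, g2))

def match_samples(samples, max_mismatch=1):
    """Greedy matching: assign each sample to the first inferred individual
    whose genotype differs at no more than `max_mismatch` loci."""
    individuals = []  # one representative genotype per inferred individual
    assignment = []
    for g in samples:
        for i, rep in enumerate(individuals):
            if mismatches(g, rep) <= max_mismatch:
                assignment.append(i)
                break
        else:
            individuals.append(g)
            assignment.append(len(individuals) - 1)
    return assignment, len(individuals)

# Three scat samples: the second differs from the first at one locus
# (a plausible genotyping error), the third is clearly distinct.
s1 = [(120, 124), (88, 90), (201, 201)]
s2 = [(120, 124), (88, 92), (201, 201)]
s3 = [(110, 118), (94, 96), (205, 209)]
assignment, n = match_samples([s1, s2, s3], max_mismatch=1)
```

Raising `max_mismatch` absorbs more genotyping error but risks collapsing genuinely distinct individuals, which is exactly the trade-off the compared matching algorithms model probabilistically.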

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 119
1076 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa

Authors: A. Van Staden, A. Tolmie, E. Vorster

Abstract:

Research confirms the multifaceted nature of spelling development and underscores the importance of the cognitive and linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example, Sesotho and many other African languages) to help them spell the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via the implementation of an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed the results of previous research arguing for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results have shown that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers (n = 9) in the control group, whose intervention involved phonological awareness (and coding), including the teaching of spelling rules.
Consequently, L2 learners in the experimental group significantly improved in all the post-test measures included in this investigation, namely the four sub-tests of short-term memory; as well as two spelling measures (i.e. diagnostic and standardized measures). Against this background, the findings of this study look promising and have shown that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes such as visual imagery to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, results from the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.

Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy

Procedia PDF Downloads 327
1075 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts

Authors: Antonieta Heyden Megale

Abstract:

With the advent of globalization, the social, cultural and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is promoted by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is therefore mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the more prestigious language. Therefore, firstly, Brazilian teachers who teach Portuguese in such contexts find themselves teaching in a low-status language. Secondly, because such teachers' actions are guided by a cultural matrix that differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, analyzes how they discursively represent their own professional identities and practices.
More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively reconstruct their professional careers and explain the factors that led them to an international or an immersion bilingual school; (ii) position themselves with respect to their linguistic repertoire; (iii) interpret the intercultural practices they are involved with in school; and (iv) position themselves by foregrounding categories that determine their membership in the group of Portuguese teachers. We worked with these teachers' autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following prompt: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". The narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concepts of positioning and indexical cues were taken into consideration in the data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.

Keywords: bilingual schools, identity, interculturality, narrative

Procedia PDF Downloads 319
1074 Screening Diversity: Artificial Intelligence and Virtual Reality Strategies for Elevating Endangered African Languages in the Film and Television Industry

Authors: Samuel Ntsanwisi

Abstract:

This study investigates the transformative role of Artificial Intelligence (AI) and Virtual Reality (VR) in the preservation of endangered African languages. The study is contextualized within the film and television industry, highlighting disparities in screen representation for certain languages in South Africa and underscoring the need for increased visibility and preservation efforts. With globalization and cultural shifts posing significant threats to linguistic diversity, this research explores approaches to language preservation. By leveraging AI technologies, such as speech recognition, translation, and adaptive learning applications, and integrating VR for immersive and interactive experiences, the study aims to create a framework for teaching and passing on endangered African languages. Through digital documentation, interactive language-learning applications, storytelling, and community engagement, the research demonstrates how these technologies can empower communities to revitalize their linguistic heritage. The study employs a dual-method approach, combining a rigorous literature review analysing existing research on the convergence of AI, VR, and language preservation with primary data collected through interviews and surveys with ten filmmakers. The literature review establishes a solid foundation for understanding the current landscape, while the interviews with filmmakers provide crucial real-world insights, enriching the study's depth. This balanced methodology ensures a comprehensive exploration of the intersection between AI, VR, and language preservation, offering both theoretical insights and practical perspectives from industry professionals.

Keywords: language preservation, endangered languages, artificial intelligence, virtual reality, interactive learning

Procedia PDF Downloads 32
1073 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. Here, visual consistency refers to the amount of similarity between adjacent components of the townscape. The paper investigates parameters that relate to visual consistency in townscape, explores the relationships between them and highlights their significance. It uses arithmetic methods from outside the domain of urban design to establish an objective approach to assessment that considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape, focusing on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the locations of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by computing their variation, using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development.
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework that supports creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
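As a rough illustration of the arithmetic involved, the sketch below computes a colour distance between two facades and a spread-based consistency score over vertex counts. The specific formulas (Euclidean distance in plain RGB space, an inverse-of-standard-deviation score) and all numbers are illustrative assumptions; the paper's actual measurements also weight such quantities by users' preferences.

```python
import math
from statistics import pstdev

def color_distance(c1, c2):
    """Euclidean distance between two RGB colours (a simplification of
    perceptual colour-difference formulas such as CIE76)."""
    return math.dist(c1, c2)

def consistency_score(values):
    """Lower spread -> higher visual consistency; mapped into (0, 1]."""
    return 1.0 / (1.0 + pstdev(values))

# Vertex counts of four adjacent facades (illustrative numbers):
uniform_row = [24, 24, 26, 24]   # visually consistent street frontage
mixed_row = [24, 80, 12, 150]    # visually disordered street frontage

# Colour difference between two adjacent facades:
facade_colors = [(200, 180, 150), (210, 185, 155)]
d = color_distance(*facade_colors)
```

A street frontage whose components have similar vertex counts and small pairwise colour distances would score as more visually consistent under measurements of this kind.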

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 290
1072 Identification of Analogues to EGCG for the Inhibition of HPV E7: A Fundamental Insights through Structural Dynamics Study

Authors: Murali Aarthy, Sanjeev Kumar Singh

Abstract:

High-risk human papillomaviruses are strongly associated with carcinoma of the cervix and other genital tumors. Cervical cancer develops through a multistep process in which increasingly severe premalignant dysplastic lesions, called cervical intraepithelial neoplasia, progress to invasive cancer. The oncoprotein E7 of human papillomavirus, expressed in the lower epithelial layers, drives the cells into S-phase, creating an environment conducive to viral genome replication and cell proliferation. Replication of the virus occurs in the terminally differentiating epithelium and requires the activation of cellular DNA replication proteins. To date, no suitable drug molecule is available to treat HPV infection, so the identification of potential drug targets and the development of novel anti-HPV chemotherapies with unique modes of action are needed. Hence, our present study aimed to identify potential inhibitors analogous to EGCG, a green tea molecule considered safe for mammalian systems. A 3D similarity search of a natural small-molecule library from a natural product database, using EGCG as the query, identified 11 potential hits based on their similarity scores. Structure-based docking strategies were applied to the potential hits, and the key residues through which the protein interacts with the compounds were identified through simulation studies and binding free energy calculations. The conformational changes between the apoprotein and the complex were analyzed by simulation, and the results demonstrated that the dynamical and structural effects observed in the protein were induced by the compounds, indicating their dominant effect on the oncoprotein. Overall, our study provides structural insights into the identified potential hits and EGCG; hence, the analogous compounds identified may be potent inhibitors of the HPV 16 E7 oncoprotein.
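The similarity-ranking step of such a search can be illustrated with the Tanimoto coefficient on fingerprint bit sets. Note that this is a 2D bit-vector sketch, not the 3D similarity method used in the study; the fingerprints and compound names below are hypothetical.

```python
def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two fingerprints given as sets of
    'on' bit indices: |A & B| / |A | B|."""
    inter = len(fp1 & fp2)
    union = len(fp1 | fp2)
    return inter / union if union else 0.0

# Hypothetical fingerprints: a query (EGCG) and two library compounds.
query = {1, 4, 7, 9, 12, 15}
lib = {
    "compound_A": {1, 4, 7, 9, 12, 20},  # close analogue
    "compound_B": {2, 5, 8, 22, 30},     # dissimilar scaffold
}
ranked = sorted(lib, key=lambda name: tanimoto(query, lib[name]), reverse=True)
```

Library compounds are ranked by their score against the query, and the top-scoring analogues are then passed on to docking and simulation.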

Keywords: EGCG, oncoprotein, molecular dynamics simulation, analogues

Procedia PDF Downloads 103
1071 The Ideology of the Jordanian Media Women’s Discourse: Lana Mamkgh as an Example

Authors: Amani Hassan Abu Atieh

Abstract:

This study aims to examine the patterns of ideology reflected in the written discourse of women writers in the Jordanian media, taking Lana Mamkgh as an example. It critically analyzes the discursive, linguistic, and cognitive representations that she employs as an agent in the institutionalized discourse of the media. Grounded in van Dijk's critical discourse analysis approach to sociocognitive discourse studies, the present study builds a multilayer framework that encompasses van Dijk's triangle: discourse, society, and cognition. Specifically, the study attempts to analyze, at both the micro and macro levels, the underlying cognitive processes and structures, mainly ideology and discursive strategies, that are functional in the production of women's discourse in terms of meaning, forms, and functions. The cognitive processes that social actors adopt are underlain by experience/context and semantic mental models on the one hand and by social cognition on the other. This study is based on qualitative research and adopts purposive sampling, taking as an example an opinion article written by Lana Mamkgh in the Arabic Jordanian daily Al Rai. In her role as an agent in the public sphere, she stresses national and feminist ideologies, demonstrating the use of assertive, evaluative, and expressive linguistic and rhetorical devices that appeal to the logic, ethics, and emotions of the addressee. Highlighting the agency of Jordanian women writers in the media, the study pursues the macro goal of dispensing political and social justice to the underprivileged. Further, the study seeks to show that the voice of Jordanian women, often viewed as underrepresented and invisible in the public arena, comes through clearly.

Keywords: critical discourse analysis, sociocognitive theory, ideology, women discourse, media

Procedia PDF Downloads 75
1070 Practical Ways to Acquire the Arabic Language through Electronic Means

Authors: Hondozi Jahja

Abstract:

There is an obvious need to learn the Arabic language and to teach it to speakers of other languages through new curricula. The idea is to bridge the gap between theory and practice. To that end, we have sought to offer some means of helping students master the Arabic language, and to apply these means in ways that enrich the student's culture and develop his vocabulary. There is no doubt that attending to the practical aspect of grammar was our constant goal, for it is this aspect that builds the student's positive values, refines his taste and develops his language. In addressing these issues, we have adopted a school-based approach founded primarily on the active and positive participation of the student. Theoretical linguistic issues are not, in our opinion, a primary goal; the goal is for students to use them in speaking and in practice. Among the objectives of this research is to establish students' basic language skills using new means that help the student acquire these skills and apply them in various subjects, supporting his progress and development. Unfortunately, some of our students consider grammar 'difficult', 'complex' and 'heavy' in itself. This is one of the obstacles that stand in the way of their desired results; as a consequence, they end up complaining about the difficulties they face in applying those rules. Some of our students therefore finish their university studies unable to express what they feel using language correctly. For this purpose, we have sought in this research to follow a new integrated approach: to study the grammar of the language through modern means, consolidating the principle of functional language, in which rules serve to regulate speech and linguistic expression properly.
This research is the result of practical experience teaching the Arabic language to non-native speakers at the 'Hassan Pristina' University, located in Pristina, the capital of Kosovo, and at the Qatar Training Center since its establishment in 2012.

Keywords: Arabic, applied methods, acquisition, learning

Procedia PDF Downloads 131
1069 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, it demonstrates how computerized native and learner corpora can be used to improve the treatment of modal verbs in EFL textbooks. The linguistic focus is on will, would, can, could, may, might, shall, should and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL), from which all the essays under the "secondary school" section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbook on learners' use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs, to uncover potential difficulties confronting learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of frequency distributions, semantic functions, and co-occurring structures.
Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be intercepted by L1 interference. In addition, error analysis revealed that could, would, should and must are the most difficult for Chinese learners, due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, meanings, and verb-phrase structures. Along with adjusting the treatment of modal verbs to reflect authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners' difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the authenticity and appropriateness of EFL textbook language for learners.
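The distributional comparison at the heart of such a study can be sketched as follows: count the nine modals in each corpus, normalize to relative frequencies, and measure how far the textbook profile departs from the native one. The two one-sentence "corpora" and the absolute-difference gap measure below are toy assumptions; the study itself queried full corpora with CQPweb and WordSmith Tools.

```python
import re
from collections import Counter

MODALS = ("will", "would", "can", "could", "may",
          "might", "shall", "should", "must")

def modal_profile(text):
    """Relative frequency of each modal verb (per total modal tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values()) or 1
    return {m: counts[m] / total for m in MODALS}

native = "You can go, but you might want to wait; we should check first."
textbook = "Students must obey. You must listen. We must work and we can try."

native_p = modal_profile(native)
textbook_p = modal_profile(textbook)
# A crude distance between the two modal distributions:
gap = sum(abs(native_p[m] - textbook_p[m]) for m in MODALS)
```

A large contribution to the gap from a particular modal (here, an overused must) flags that modal as a candidate for adjustment in the textbook's treatment.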

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 99
1068 Cognition in Crisis: Unravelling the Link Between COVID-19 and Cognitive-Linguistic Impairments

Authors: Celine Davis

Abstract:

The novel coronavirus disease 2019 (COVID-19) is an infectious disease caused by the virus SARS-CoV-2, with detrimental respiratory, cardiovascular, and neurological effects that have impacted over one million lives in the United States. New research has emerged indicating long-term neurologic consequences in those who survive COVID-19 infection, including more than seven million Americans and another 27 million people worldwide. These consequences include attentional deficits, memory impairments, executive function deficits, and aphasia-like symptoms, which fall within the purview of speech-language pathology. The National Health Interview Survey (NHIS) is a comprehensive annual survey conducted by the National Center for Health Statistics (NCHS), a branch of the Centers for Disease Control and Prevention (CDC) in the United States. The NHIS is one of the most significant sources of health-related data in the country and has been conducted since 1957. The longitudinal nature of the survey allows for analysis of trends in various variables over the years, which is essential for understanding societal changes and making treatment recommendations. The current study will utilize NHIS data from 2020-2022, which contain interview questions specifically related to COVID-19. Adult cases of individuals between the ages of 18 and 50 diagnosed with COVID-19 in the United States during 2020-2022 will be identified using the NHIS. Multiple regression analysis of self-reported data on COVID-19 infection status and challenges with concentration, communication, and memory will be performed. Latent class analysis will be used to identify subgroups in the population and indicate whether certain demographic groups have higher susceptibility to the cognitive-linguistic deficits associated with COVID-19.
Completion of this study will reveal whether there is an association between a confirmed COVID-19 diagnosis and a heightened incidence of cognitive deficits, and its implications, if any, for activities of daily living. This study is distinct in its aim to use national survey data to explore the relationship between confirmed COVID-19 diagnosis and the prevalence of cognitive-communication deficits, with a secondary focus on resulting activity limitations. To the best of the author's knowledge, this will be the first large-scale epidemiological study investigating the associations between cognitive-linguistic deficits, COVID-19, and activities of daily living in the United States population. The findings will highlight the need for targeted interventions and support services to address the cognitive-communication needs of individuals recovering from COVID-19, thereby enhancing their overall well-being and functional outcomes.

Keywords: cognition, COVID-19, language, limitations, memory, NHIS

Procedia PDF Downloads 25
1067 Brown-Spot Needle Blight: An Emerging Threat Causing Loblolly Pine Needle Defoliation in Alabama, USA

Authors: Debit Datta, Jeffrey J. Coleman, Scott A. Enebak, Lori G. Eckhardt

Abstract:

Loblolly pine (Pinus taeda) is a leading productive timber species in the southeastern USA. Over the past three years, an emerging threat has manifested as successive needle defoliation followed by stunted growth and tree mortality in loblolly pine plantations. Given its economic significance, it has become a rising concern among landowners, forest managers, and forest health state cooperators. However, the symptoms of the disease have sometimes been confused with root disease(s) and recurrently attributed to invasive Phytophthora species due to the similar nature and devastation of the diseases. Therefore, the study investigated the potential causal agent of this disease and characterized the fungi associated with loblolly pine needle defoliation in the southeastern USA. In addition, 70 trees were selected at seven long-term monitoring plots at Chatom, Alabama, to monitor and record the annual disease incidence and severity. Based on colony morphology and ITS-rDNA sequence data, a total of 28 species of fungi representing 17 families were recovered from diseased loblolly pine needles. The native brown-spot pathogen, Lecanosticta acicola, was the species most frequently recovered from unhealthy loblolly pine needles, in combination with some other common needle cast and rust pathogens. Identification was confirmed using morphological similarity and amplification of the translation elongation factor 1-alpha gene region of interest. Tagged trees were consistently found chlorotic and defoliated from 2019 to 2020. The current emergence of the brown-spot pathogen causing loblolly pine mortality necessitates investigating the role of changing climatic conditions, which might be associated with increased pathogen pressure on loblolly pines in the southeastern USA.

Keywords: brown-spot needle blight, loblolly pine, needle defoliation, plantation forestry

Procedia PDF Downloads 120
1066 A Comparative Analysis of (De)legitimation Strategies in Selected African Inaugural Speeches

Authors: Lily Chimuanya, Ehioghae Esther

Abstract:

Language, a versatile and sophisticated tool, is fundamental to mankind, especially within the realm of politics. In this dynamic world, political leaders adroitly use language in a strategic performance aimed at shaping or manipulating the opinions of a discerning public. This nuanced interplay is marked by different rhetorical strategies, meticulously aligned with contextual factors ranging from the cultural and ideological to the political, to achieve multifaceted persuasive objectives. This study investigates the (de)legitimation strategies inherent in African presidential inaugural speeches, as African leaders not only state their policy agenda through inaugural speeches but also subtly engage in a dance of legitimation and delegitimation, pursuing a twofold objective of strengthening the credibility of their administration and, at times, undermining the performance of the past administration. Drawing insights from two different legitimation models and a dataset of four African presidential inaugural speeches obtained from authentic websites, the study describes the roles of authorisation, rationalisation, moral evaluation, altruism, and mythopoesis in unmasking the structure of political discourse. The analysis takes a mixed-method approach to unpack the (de)legitimation strategies embedded in the carefully chosen speeches. The focus extends beyond a superficial exploration and delves into the linguistic elements that form the basis of presidential discourse. In conclusion, this examination underscores the potency of language as a tool in politics, with each strategy contributing to the overall rhetorical impact and shaping the narrative. From this perspective, the study argues that presidential inaugural speeches are not merely linguistic exercises but also viable weapons that influence perceptions and legitimise authority.

Keywords: CDA, legitimation, inaugural speeches, delegitimation

Procedia PDF Downloads 26
1065 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus, and potassium values in the soil of the study area. A limited number of samples were used in the training, validation, and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). 
For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59, and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59, and 30 real samples have 78%, 62%, and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased similarity to 87%, 77%, and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve a specified level of accuracy.
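The augmentation idea above, training a back-propagation feed-forward network on the actual samples and then treating its predictions at un-sampled locations as "virtual" values pooled with the real ones, can be sketched as follows. All data here are synthetic stand-ins; the network size, learning rate, and field shape are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for surveyed soil data: (x, y) coordinates -> nitrogen value.
# The "true" field is a smooth synthetic surface; names and shapes are illustrative.
coords = rng.uniform(0, 1, size=(118, 2))                 # 118 "actual" sampled points
nitrogen = np.sin(3 * coords[:, 0]) + 0.5 * coords[:, 1]  # surrogate ground truth

# One-hidden-layer feed-forward net trained by back-propagation.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for _ in range(2000):
    h, pred = forward(coords)
    err = pred[:, 0] - nitrogen                 # prediction error per sample
    # Back-propagate mean-squared-error gradients.
    gW2 = h.T @ err[:, None] / len(coords)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # gradient through tanh layer
    gW1 = coords.T @ dh / len(coords)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Predict "virtual" values at 118 un-sampled locations and pool them with the
# actual samples, mimicking the 118-actual + 118-virtual map configuration.
virtual_coords = rng.uniform(0, 1, size=(118, 2))
_, virtual_vals = forward(virtual_coords)
pooled = np.concatenate([nitrogen, virtual_vals[:, 0]])
print(pooled.shape)  # (236,) -- the sample count then fed to kriging
```

The pooled 236 values would then be interpolated by kriging exactly as the 236 all-actual base map is.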

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 45
1064 Analyzing Apposition and the Typology of Specific Reference in Newspaper Discourse in Nigeria

Authors: Monday Agbonica Bello Eje

Abstract:

The language of the print media is characterized by the use of apposition. This linguistic element functions strategically in journalistic discourse, where it is communicatively necessary to name individuals and provide information about them. Linguistic studies on the language of the print media with a bias for apposition have largely dwelt on other areas rather than the examination of the typology of appositive reference in newspaper discourse. Yet, such an examination is capable of revealing the ways writers communicate and provide the information necessary for readers to follow and understand the message. The study, therefore, analyses the patterns of appositional occurrence and the typology of reference in newspaper articles. The data were obtained from The Punch and Daily Trust newspapers. A total of six editions of these newspapers were collected randomly, spread over three months. News and feature articles were used in the analysis. Guided by the referential theory of meaning in discourse, the appositions identified were subjected to analysis. The findings show that the semantic relations of coreference and speaker coreference have the highest percentage and frequency of occurrence in the data. This is because the subject matter of news reports and feature articles focuses on humans and the events around them; as a result, readers need to be provided with some form of detail and background information in order to identify as well as follow the discourse. Also, the non-referential relations of absolute synonymy and speaker synonymy have fewer occurrences and lower percentages in the analysis. This is tied to a major feature of the language of the media: simplicity. The paper concludes that apposition is mainly used for the purpose of providing the reader with detail. In this way, the writer transmits information that not only allows detailed yet concise descriptions but also, in some ways, helps the reader to follow the discourse.

Keywords: apposition, discourse, newspaper, Nigeria, reference

Procedia PDF Downloads 133
1063 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators

Authors: Nur Aziza Luxfiati

Abstract:

Clustering is the partitioning of a data set into sub-sets or groups in such a way that elements share properties with a high level of similarity within one group and a low level of similarity between groups. The K-Means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications because its basic idea is very simple. This research applies the clustering technique, using the K-Means algorithm, as a method of addressing the problem of national development imbalances between regions in Central Java Province based on macroeconomic indicators. The data sample used is secondary data obtained from the Central Java Provincial Statistics Agency regarding macroeconomic indicators, which are part of the publication of the 2019 National Socio-Economic Survey (Susenas) data. Pre-processing included outlier detection using the z-score and determining the number of clusters (k) using the elbow method. After the clustering process was carried out, validation was tested using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. The results showed that outlier detection using z-score normalization found no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province that have economic similarities based on the variables used, namely a first cluster with a high economic level consisting of 13 districts/cities and a second cluster with a low economic level consisting of 22 districts/cities. Within the second, low-economy cluster, the authors further grouped districts/cities based on similarity in individual macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development Index, and 10 districts by Open Unemployment Rate.
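The pipeline the abstract describes, z-score normalisation, k-means with k = 2, and a within-/between-class variation check, can be sketched on synthetic indicator data. The indicator values, group sizes, and deterministic initialisation below are illustrative assumptions, not the Susenas figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical macroeconomic indicators for 35 districts/cities (illustrative, not
# the Susenas data): columns ~ GRDP, poverty depth index, HDI, open unemployment.
low = rng.normal([2.0, 8.0, 60.0, 7.0], 1.0, size=(22, 4))   # low-economy group
high = rng.normal([9.0, 3.0, 75.0, 4.0], 1.0, size=(13, 4))  # high-economy group
X = np.vstack([low, high])

# z-score normalisation (the study also screens for outliers via the z-score).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(Z, init, iters=50):
    """Plain k-means: assign each point to the nearest centre, recompute centres."""
    centers = Z[list(init)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([Z[labels == j].mean(axis=0) for j in range(len(init))])
    return labels, centers

# Deterministic initialisation for the sketch: one seed point from each group.
labels, centers = kmeans(Z, init=(0, 34))

# Validity check in the spirit of the study's BCV/WCV ratio.
wcv = sum(np.sum((Z[labels == j] - centers[j]) ** 2) for j in range(2))
bcv = np.sum((centers[0] - centers[1]) ** 2)
print(wcv / bcv)  # WCV/BCV ratio; lower means better-separated clusters
```

In the study, k itself would first be chosen with the elbow method (plotting within-cluster sum of squares against k); here k = 2 is fixed to mirror the reported two-cluster result.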

Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development

Procedia PDF Downloads 133
1062 English Language Proficiency and Use as Determinants of Transactional Success in Gbagi Market, Ibadan, Nigeria

Authors: A. Robbin

Abstract:

Language selection can be an efficient negotiation strategy employed by both service or product providers and their customers to achieve transactional success. The transactional scenario in Gbagi Market, Ibadan, Nigeria provides an appropriate setting for the exploration of the Nigerian multilingual situation, with its own interesting linguistic peculiarities that question the functionality of the 'lingua franca' in trade situations. This study examined English language proficiency among Yoruba traders in Gbagi Market, Ibadan and its use as a determinant of transactional success during service encounters. Randomly selected Yoruba-English bilingual traders and customers were administered questionnaires, and the data were subjected to statistical and descriptive analysis using Giles' Communication Accommodation Theory. Findings reveal that only fifty percent of the traders used for the study were proficient in speaking the English language. Traders with minimal proficiency in Standard English, however, resorted to the use of Nigerian Pidgin English. Both traders and customers select the mother tongue, which is the Yoruba language, during service encounters but are quick to converge to the other's preferred language as the transactional exchange demands. The selection of English is not so much for the prestige or lingua franca status of the language as for its functions, which include ease of communication, negotiation, and increased sales. The use of English during service encounters is mostly determined by the customer's linguistic preference, which the trader accommodates for better negotiation, and never as a first choice. This convergence is found to be beneficial as it ensures sales and return patronage. Although the English language is not a preferred code choice in Gbagi Market, it serves as a functional trade strategy for transactional success during service encounters in the market.

Keywords: communication accommodation theory, language selection, proficiency, service encounter, transaction

Procedia PDF Downloads 130
1061 Translating Silence: An Analysis of Dhofar University Student Translations of Elliptical Structures from English into Arabic

Authors: Ali Algryani

Abstract:

Ellipsis involves the omission of an item or items that can be recovered from the preceding clause. Ellipsis is used as a cohesion marker; it enhances the cohesiveness of a text/discourse, as a clause is interpretable only through making reference to an antecedent clause. The present study attempts to investigate the linguistic phenomenon of ellipsis from a translation perspective. It is mainly concerned with how ellipsis is translated from English into Arabic. The study covers different forms of ellipsis, such as noun phrase ellipsis, verb phrase ellipsis, gapping, pseudo-gapping, stripping, and sluicing. The primary aim of the study, apart from discussing the use and function of ellipsis, is to find out how such ellipsis phenomena are dealt with in English-Arabic translation and to determine the implications of the translations of elliptical structures into Arabic. The study is based on the analysis of Dhofar University (DU) students' translations of sentences containing different forms of ellipsis. The initial findings of the study indicate that due to differences in syntactic structures and stylistic preferences between English and Arabic, Arabic tends to use lexical repetition in the translation of some elliptical structures, thus achieving a higher level of explicitness. This implies that Arabic tends to prefer lexical repetition to create cohesion more than English does. Furthermore, the study also reveals that the improper translation of ellipsis leads to interpretations different from those understood from the source text. Such mistranslations can be attributed to student translators' lack of awareness of the use and function of ellipsis as well as of the stylistic preferences of both languages. This has pedagogical implications for the teaching and training of translation students at DU. Students' linguistic competence needs to be enhanced through teaching linguistics-related issues with reference to translation and both languages, i.e., the source and target languages, with special emphasis on their use, function, and stylistic preferences.

Keywords: cohesion, ellipsis, explicitness, lexical repetition

Procedia PDF Downloads 99
1060 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective

Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice

Abstract:

The dynamics of languages in contact necessitates a close study of how their users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts at connecting cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multilingual setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres piloted by ongoing globalisation trends with their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive, inclusive global culture achieved through bridging gaps in cross-cultural communication would have significant potential to contribute to achieving global social development goals, if inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.

Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation

Procedia PDF Downloads 258
1059 On q-Non-extensive Statistics with Non-Tsallisian Entropy

Authors: Petr Jizba, Jan Korbel

Abstract:

We combine an axiomatics of Rényi with the q-deformed version of the Khinchin axioms to obtain a measure of information (i.e., entropy) which accounts both for systems with embedded self-similarity and for non-extensivity. We show that the entropy thus obtained is uniquely determined in terms of a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding 'high-' and 'low-temperature' asymptotics and reveal a non-trivial structure of the parameter space.
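As a minimal numerical aside (not the paper's actual distribution), the Lambert W-function appearing in the maximal-entropy solution is defined on its principal branch by W(x)·e^{W(x)} = x, and can be evaluated by Newton iteration:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x > 0, via Newton iteration on w*exp(w) = x."""
    w = math.log(1 + x)  # reasonable starting point on the principal branch
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))  # Newton step for f(w) = w*e^w - x
        w -= step
        if abs(step) < tol:
            break
    return w

# Check the defining identity W(x) * exp(W(x)) == x at x = 1.
x = 1.0
w = lambert_w(x)
print(w)  # ~0.5671432904 (the omega constant, W(1))
```

Library implementations (e.g. `scipy.special.lambertw`) exist and also expose the secondary branch needed in some parameter regimes; the hand-rolled version above is only meant to make the function concrete.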

Keywords: multifractals, Rényi information entropy, THC entropy, MaxEnt, heavy-tailed distributions

Procedia PDF Downloads 416
1058 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease

Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah

Abstract:

Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of the individuals involved. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. Fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor by using FD and other correlated parameters in patients who suffer from PD. In this study, we used the tremor data of PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight patients with low-amplitude tremor. We tried to extract features from the data which can distinguish between patients before and after medication. We selected fractal dimensions, including correlation dimension, box dimension, and information dimension. The Lilliefors test was used as a normality test. Paired t-tests or Wilcoxon signed rank tests were also done to find differences between patients before and after medication, depending on whether normality was detected or not. In addition, two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Also, two-way ANOVA demonstrated significant differences only in the medication effect (p=0.033), and no significant differences were found for subject differences (p=0.34) or the interaction (p=0.97). The most striking result to emerge from the data is that the correlation dimension could quantify medication treatment based on tremor. 
This study has provided a technique to evaluate a non-linear measure for quantifying medication, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
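A common estimator for the correlation dimension is the Grassberger–Procaccia correlation sum; the sketch below applies it to a synthetic planar point cloud standing in for delay-embedded tremor data. The dataset, embedding, and scaling range are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a tremor recording: points on a 2-D set embedded in 3-D
# (a real analysis would delay-embed the accelerometer signal instead).
n = 400
pts = np.column_stack([rng.uniform(size=n), rng.uniform(size=n), np.zeros(n)])

# Grassberger-Procaccia correlation sum C(r): fraction of point pairs closer than r.
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
pairs = dists[np.triu_indices(n, k=1)]  # each unordered pair once

def C(r):
    return np.mean(pairs < r)

# Correlation dimension ~ slope of log C(r) vs log r over a scaling range.
r1, r2 = 0.05, 0.2
dim = (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))
print(f"correlation dimension ~ {dim:.2f}")  # near 2 for a planar cloud
```

For a genuinely fractal set the same slope estimate would fall between integer values, which is what makes it a usable tremor feature.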

Keywords: correlation dimension, non-linear measure, Parkinson’s disease, tremor

Procedia PDF Downloads 222
1057 Misconception on Multilingualism in Glorious Quran

Authors: Muhammed Unais

Abstract:

The holy Quran is a purely Arabic book, entirely free of non-Arabic terms. Had it been revealed in a multilingual way, incorporating various foreign languages besides Arabic, it could easily be misunderstood that the Arabs were helpless to compile such a work in response to the challenge of Allah because they lacked knowledge of the other languages in which the Quran was compiled. Based on the presence in the Quran of some apparently non-Arabic terms, such as istabraq, saradiq, and rabbaniyyoon, some orientalist scholars argued that the holy Quran is not a book revealed in Arabic. Some Muslim scholars either support or deny the presence of foreign terms in the Quran, but all of them agree that the roots of the words suspected to be non-Arabic come from foreign languages and were assimilated into Arabic, being used in the same way as in those foreign languages. After this linguistic assimilation occurred and the assimilated non-Arabic words became familiar among the Arabs, the Quran was revealed using these words; thus, all the words it contains are Arabic, whether pure or assimilated. Hence, both opinions about the authenticity and etymology of these words are right: those who affirm the presence of foreign words are right insofar as the roots of those words are foreign, and those who deny it are right insofar as those words had been assimilated into pure Arabic. The possibility of multilingualism in a monolingual book is logically negative, but its significance changes according to time and place. The problem of multilingualism in the Quran is the misconception, raised by some orientalist scholars, that the Arabs were helpless to compile a book equal to the Quran not because of any weakness in Arabic but because the Quran was revealed in languages of which they were ignorant. In reality, the Quran was revealed in pure Arabic, the most literate language of the Arabs, and all its words and their meanings were familiar to them. 
Anyone positively aware of the linguistic and cultural assimilation found throughout civilizations and cultural groups will have no question in this respect. In this paper, the researcher intends to shed light on the possibility of multilingualism in a monolingual book and the debates among scholars on this issue, on foreign terms in the Quran, and on the logical justifications, along with the exclusive features of the Quran.

Keywords: Quran, foreign Terms, multilingualism, language

Procedia PDF Downloads 361
1056 Silent Struggles: Unveiling Linguistic Insights into Poverty in Ancient Egypt

Authors: Hossam Mohammed Abdelfattah

Abstract:

In ancient Egypt, poverty, recognized as the foremost challenge, was extensively addressed in teachings, wisdom literature, and literary texts. These sources vividly depicted the suffering of a class deprived of life's pleasures. The ancient Egyptian language evolved to include terms reflecting poverty and hunger, underscoring the society's commitment to acknowledging and cautioning against this prevalent issue. Among the notable expressions, iwty.f emerged during the Middle Kingdom, symbolizing "the one without property" and signifying the destitute poor. iwty n.f, traced back to the Pyramid Texts era, referred to "the one who has nothing", or simply the poor. Another term, iwty-sw, emphasized the state of possessing nothing. rA-awy, originating in the Middle Kingdom Period, initially meant "poverty and poor" and expanded to signify poverty in various texts; with the addition of the preposition "in", it conveyed strength given to the poor. During the First Intermediate Period, sny-mnt denoted going through a crisis or suffering, possibly referencing a widespread disease or plague; it encompassed meanings of sickness, pain, and anguish. The term sq-sn, introduced in Middle Kingdom texts, conveyed the notion of becoming miserable. sp-Xsy represented a temporal expression reflecting a period of misery or poverty, with Xsy indicating distress or misery. The term qsnt, appearing in Middle Kingdom texts, held meanings of painful, difficult, harsh, miserable, emaciated, and in bad condition; its related form, qsn, denoted anxiety and turmoil. Finally, tp-qsn encapsulated the essence of misery and unhappiness. In essence, these expressions provide linguistic insights into the multifaceted experience of poverty in ancient Egypt, illustrating the society's keen awareness of and efforts to address this pervasive challenge.

Keywords: poverty, poor, suffering, misery, painful, ancient Egypt

Procedia PDF Downloads 24
1055 Construction and Analysis of Tamazight (Berber) Text Corpus

Authors: Zayd Khayi

Abstract:

This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and developed a program tool for it. This work is devoted to building a tool to analyze the different aspects of Tamazight, with its different dialects used in North Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts covering a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format, with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web-design technologies and Python. The interface provides users with the ability to distinguish easily between feminine and masculine nouns and verbs, and is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this classification was done manually, since within corpus linguistics there is currently no commonly accepted approach to the classification of texts. Texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function can retrieve the types of words a user searches for, such as feminine/masculine nouns and verbs. Nouns are divided into two parts. 
Gender in the corpus has two forms. The neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., tarbat (girl), tamtut (woman), taxamt (tent), and tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., asklu (tree), udi (cheese), ighef (head). Verbs in the corpus in the first person singular and plural carry the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following operations on the corpus: listing all tokens; listing unique words; measuring lexical diversity; and realizing various grammatical queries. To conclude, this corpus has focused on only a small group of parts of speech in the Tamazight language: verbs and nouns. Work continues on adjectives, pronouns, adverbs, and others.
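The affix rules quoted above can be encoded as a toy heuristic. This is illustrative only, since real Tamazight morphology has many exceptions the corpus tool would have to handle; the word lists are the ones given in the abstract.

```python
# Toy gender classifier encoding the affix rules described in the abstract.
# Illustrative only: real Tamazight morphology has exceptions to these patterns.

def guess_gender(word: str) -> str:
    w = word.lower()
    # Feminine: circumfix t-...-t (tarbat, taxamt), or prefix t- with
    # suffix -a for the smaller class (tasa, tawja, tarwa).
    if w.startswith("t") and (w.endswith("t") or w.endswith("a")):
        return "feminine"
    # Masculine nouns typically begin with a-, u-, or i- (asklu, udi, ighef).
    if w[:1] in ("a", "u", "i"):
        return "masculine"
    return "unknown"

for w in ["tarbat", "tamtut", "taxamt", "tislit", "tasa", "tawja", "tarwa"]:
    assert guess_gender(w) == "feminine"
for w in ["asklu", "udi", "ighef"]:
    assert guess_gender(w) == "masculine"
print("ok")
```

In the actual corpus tool, such surface-affix rules would be one layer of the query interface that lets users filter feminine/masculine nouns.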

Keywords: Tamazight (Berber) language, corpus linguistic, grammar rules, statistical methods

Procedia PDF Downloads 39
1054 Enabling Translanguaging in the EFL Classroom, Affordances of Learning and Reflections

Authors: Nada Alghali

Abstract:

Translanguaging pedagogy suggests a new perspective in language education relating to multilingualism; multilingual learners have one linguistic repertoire and not two or more separate language systems (García and Wei, 2014). When learners translanguage, they are able to draw on all their language features in a flexible and integrated way (Otheguy, García, & Reid, 2015). In the Foreign Language Classroom, however, the tendency to use the target language only is still advocated as a pedagogy. This study attempts to enable learners in the English as a foreign language classroom to draw on their full linguistic repertoire through collaborative reading lessons. In observations prior to this study, in a classroom where English only policy prevails, learners still used their first language in group discussions yet were constrained at times by the teacher’s language policies. Through strategically enabling translanguaging in reading lessons (Celic and Seltzer, 2011), this study has revealed that learners showed creative ways of language use for learning and reflected positively on thisexperience. This case study enabled two groups in two different proficiency level classrooms who are learning English as a foreign language in their first year at University in Saudi Arabia. Learners in the two groups wereobserved over six weeks and wereasked to reflect their learning every week. The same learners were also interviewed at the end of translanguaging weeks after completing a modified model of the learning reflection (Ash and Clayton, 2009). This study positions translanguaging as collaborative and agentive within a sociocultural framework of learning, positioning translanguaging as a resource for learning as well as a process of learning. 
Translanguaging learning episodes are elicited from classroom observations, artefacts, interviews, reflections, and focus groups, where they are analysed qualitatively following sociocultural discourse analysis (Fairclough & Wodak, 1997; Mercer, 2004). Initial outcomes suggest functions of translanguaging in collaborative reading tasks and recommendations for a collaborative translanguaging pedagogy approach in the EFL classroom.

Keywords: translanguaging, EFL, sociocultural theory, discourse analysis

Procedia PDF Downloads 148
1053 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance in determining entailment between the sentences the translator has rendered into the classifier’s language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, we introduce a number of improvements. Whereas prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with surface semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
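The training loop the abstract describes (sample a translation, have a pretrained entailment classifier score it, and update the translator from that reward) can be sketched minimally. The toy below is a hypothetical illustration, not the authors' implementation: the "classifier" is a stub that rewards meaning-preserving translations using a small gold mapping (present only to simulate an informative reward signal; in the real setup the signal comes from a pretrained NLI model), and the translator is a tabular softmax policy updated with a plain REINFORCE rule.

```python
import math
import random

SRC_VOCAB = ["chat", "chien"]   # toy source-language words (hypothetical)
TGT_VOCAB = ["cat", "dog"]      # toy target-language words (hypothetical)

def entailment_classifier(premise: str, hypothesis: str) -> float:
    # Stand-in for a pretrained NLI model in the target language:
    # returns a high reward when the translation preserves meaning.
    return 1.0 if premise == hypothesis else 0.0

# Translator policy: per source word, unnormalized scores over target words.
weights = {s: [0.0 for _ in TGT_VOCAB] for s in SRC_VOCAB}

def sample_translation(src: str):
    # Softmax over the scores, then sample one target word (policy action).
    exps = [math.exp(w) for w in weights[src]]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs

# Gold mapping used ONLY by the reward stub above, to mimic the signal a
# real entailment classifier would provide; the policy never sees it.
gold = {"chat": "cat", "chien": "dog"}

random.seed(0)
lr = 0.5
for step in range(500):
    src = random.choice(SRC_VOCAB)
    idx, probs = sample_translation(src)
    reward = entailment_classifier(gold[src], TGT_VOCAB[idx])
    # REINFORCE update: move probability mass toward rewarded actions.
    for i in range(len(TGT_VOCAB)):
        grad = (1.0 if i == idx else 0.0) - probs[i]
        weights[src][i] += lr * reward * grad

learned = {s: TGT_VOCAB[max(range(len(TGT_VOCAB)),
                            key=lambda i: weights[s][i])]
           for s in SRC_VOCAB}
print(learned)
```

Because the reward is zero for meaning-destroying translations, only rewarded samples shift the policy, and after a few hundred steps the argmax translation converges to the meaning-preserving mapping; the actor-critic variants the abstract mentions would replace the raw reward with an advantage estimate.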

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 105
1052 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for reception in countries where local audiovisual fiction consumption is far lower than that of American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science fiction, the Western, or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who more readily accept the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may originally belong to other cultures. The non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in national productions in order to appeal to audiences of the former. However, this commercial strategy had already taken place decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s.
The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, chronologically ranging from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano, anglified as William Hawkins; English title Lone and Angry Man) to Tu fosa será la exacta, amigo (Juan Bosch, 1972, anglified as John Wood; English title My Horse, My Gun, Your Widow). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 139
1051 Linguistic Analysis of the Concept ‘Relation’ in Russian and English Languages

Authors: Nadezhda Obvintceva

Abstract:

The article analyses the concept ‘relation’ from the point of view of its realization in the Russian and English languages on the basis of dictionary entries. The analysis reveals the main difference in the representation of this concept in the two languages: the number of lexemes that express its general meanings. At the end of the article the author gives an explanation of possible causes of this difference and touches upon analytical phenomena in the vocabulary.

Keywords: concept, comparison, lexeme, meaning, relation, semantics

Procedia PDF Downloads 472