Search results for: language error
3586 Exploring the Impact of ChatGPT on the English Writing Skills of a Group of International EFL Uzbek Students: A Qualitative Case Study Conducted at a Private University College in Malaysia
Authors: Uranus Saadat
Abstract:
ChatGPT, as one of the well-known artificial intelligence (AI) tools, has recently been integrated into English language education and has had several impacts on learners. Accordingly, concerns regarding the overuse of this tool among EFL/ESL learners are rising, which could lead to several disadvantages in their writing skills development. The use of ChatGPT in facilitating writing skills is a novel concept that demands further studies in different contexts and learners. In this study, a qualitative case study is applied to investigate the impact of ChatGPT on the writing skills of a group of EFL bachelor’s students from Uzbekistan studying Teaching English as the Second Language (TESL) at a private university in Malaysia. The data was collected through the triangulation of document analysis, semi-structured interviews, classroom observations, and focus group discussions. Subsequently, the data was analyzed by using thematic analysis. Some of the emerging themes indicated that ChatGPT is helpful in engaging students by reducing their anxiety in class and providing them with constructive feedback and support. Conversely, certain emerging themes revealed excessive reliance on ChatGPT, resulting in a decrease in students’ creativity and critical thinking skills, memory span, and tolerance for ambiguity. The study suggests a number of strategies to alleviate its negative impacts, such as peer review activities, workshops for familiarizing students with AI, and gradual withdrawal of AI support activities. This study emphasizes the need for cautious AI integration into English language education to cultivate independent learners with higher-order thinking skills.Keywords: ChatGPT, EFL/ESL learners, English writing skills, artificial intelligence tools, critical thinking skills
Procedia PDF Downloads 35
3585 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter
Authors: Vasupradaa Manivannan, Santosh Maruthy
Abstract:
Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS). Further, it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Both groups were given two tasks: a) a general conversation (GC) task with 10 random questions, and b) a narration (NAR) task (a story or a general topic, for example, a memorable life event) in three different conditions: Mono Kannada (MK), Mono English (ME), and Bilingual (BIL). The children and adults were assessed online (via a Zoom session) with a high-quality internet connection. The audio and video samples of the full assessment session were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies using SSI-4, and the CM and CS exhibited by each participant were analyzed using the Matrix Language Frame (MLF) model parameters. The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software package (Version 20.0). Results: The mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter for various parameters obtained through the MLF model. The inferential results indicated that %SS varied significantly between populations (AWS vs CWS), languages (L1 vs L2), and tasks (GC vs NAR), but not across free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than CWS, which may be due to their more extensive coping skills. Language mixing patterns were observed more in L1 than in L2, and this was significant for most of the MLF parameters. However, there was a significantly higher (p<0.05) %SS in L2 than in L1. The CM and CS patterns occurred more in conditions 1 and 3 than in condition 2, which may be due to the higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS between CWS and AWS on MLF parameters in two different tasks across three conditions. The results help us to understand CM and CS strategies in bilingual persons who stutter. Keywords: bilinguals, code mixing, code switching, stuttering
Procedia PDF Downloads 85
3584 Employing Motivation, Enjoyment and Self-Regulation to Predict Aural Vocabulary Knowledge
Authors: Seyed Mohammad Reza Amirian, Seyedeh Khadije Amirian, Maryam Sabouri
Abstract:
The present study aimed to investigate second language (L2) motivation, enjoyment, and self-regulation as the main variables explaining variance in the process and outcome of L2 Aural Vocabulary Knowledge (AVK) development, focusing on Iranian EFL students at Hakim Sabzevari University. To this end, 122 EFL students (86 females and 36 males) participated in this study. The students filled out the Motivation Questionnaire, the Foreign Language Enjoyment Questionnaire, and the Self-Regulation Questionnaire, and also took an Aural Vocabulary Knowledge (AVK) test. Using SPSS software, the data were analyzed through multiple regression and path analysis. A preliminary Pearson correlation analysis revealed that two out of the three independent variables were significantly linked to AVK. According to the obtained regression model, self-regulation was a significant predictor of aural vocabulary knowledge test scores. Finally, the results of the mediation analysis showed that the indirect effect of enjoyment on AVK through self-regulation was significant. These findings are discussed, and implications are offered. Keywords: aural vocabulary knowledge, enjoyment, motivation, self-regulation
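A minimal sketch of how the reported regression and mediation analysis could be reproduced in code; this is an assumed re-implementation, not the authors' SPSS analysis, and the CSV file and column names (motivation, enjoyment, self_regulation, avk) are hypothetical.

```python
# Hypothetical sketch: multiple regression plus a bootstrapped test of the
# indirect effect enjoyment -> self-regulation -> AVK, analogous to the
# analysis described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("avk_survey.csv")  # hypothetical file: one row per student

# Multiple regression: do motivation, enjoyment and self-regulation predict AVK?
model = smf.ols("avk ~ motivation + enjoyment + self_regulation", data=df).fit()
print(model.summary())

# Indirect effect of enjoyment on AVK via self-regulation (a*b), bootstrapped.
rng = np.random.default_rng(0)
indirect = []
for _ in range(1000):
    boot = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
    a = smf.ols("self_regulation ~ enjoyment", data=boot).fit().params["enjoyment"]
    b = smf.ols("avk ~ self_regulation + enjoyment", data=boot).fit().params["self_regulation"]
    indirect.append(a * b)
lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # significant if 0 is excluded
```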
Procedia PDF Downloads 157
3583 The Phenomena of False Cognates and Deceptive Cognates: Issues to Foreign Language Learning and Teaching Methodology Based on Set Theory
Authors: Marilei Amadeu Sabino
Abstract:
The aim of this study is to establish differences between the terms ‘false cognates’, ‘false friends’ and ‘deceptive cognates’, usually considered to be synonyms. It will be shown they are not synonyms, since they do not designate the same linguistic process or phenomenon. Despite their differences in meaning, many pairs of formally similar words in two (or more) different languages are true cognates, although they are usually known as ‘false’ cognates – such as, for instance, the English and Italian lexical items ‘assist x assistere’; ‘attend x attendere’; ‘argument x argomento’; ‘apology x apologia’; ‘camera x camera’; ‘cucumber x cocomero’; ‘fabric x fabbrica’; ‘factory x fattoria’; ‘firm x firma’; ‘journal x giornale’; ‘library x libreria’; ‘magazine x magazzino’; ‘parent x parente’; ‘preservative x preservativo’; ‘pretend x pretendere’; ‘vacancy x vacanza’, to name but a few examples. Thus, one of the theoretical objectives of this paper is firstly to elaborate definitions establishing a distinction between the words that are definitely ‘false cognates’ (derived from different etyma) and those that are just ‘deceptive cognates’ (derived from the same etymon). Secondly, based on Set Theory and on the concepts of equal sets, subsets, intersection of sets and disjoint sets, this study is intended to elaborate some theoretical and practical questions that will be useful in identifying more precisely similarities and differences between cognate words of different languages, and according to graphic interpretation of sets it will be possible to classify them and provide discernment about the processes of semantic changes. Therefore, these issues might be helpful not only to the Learning of Second and Foreign Languages, but they could also give insights into Foreign and Second Language Teaching Methodology. Acknowledgements: FAPESP – São Paulo State Research Support Foundation – the financial support offered (proc. n° 2017/02064-7).Keywords: deceptive cognates, false cognates, foreign language learning, teaching methodology
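As an illustration of the set-theoretic classification the abstract outlines, the sketch below labels word pairs by whether they share an etymon and by how their meaning sets relate (equal, intersecting, or disjoint). The word pairs, sense sets, and etymology flags are simplified illustrations, not data from the study.

```python
# Toy illustration of classifying word pairs with set operations: pairs with
# different etyma are 'false cognates', pairs sharing an etymon but diverging
# in meaning are 'deceptive cognates' (per the distinction drawn in the abstract).
pairs = [
    # (word A, word B, same_etymon, senses A, senses B) -- illustrative data only
    ("have (Eng.)", "habere (Lat.)", False, {"possess"}, {"possess"}),
    ("library (Eng.)", "libreria (It.)", True, {"book collection"}, {"bookshop"}),
    ("journal (Eng.)", "giornale (It.)", True, {"periodical", "diary"}, {"newspaper"}),
]

for w1, w2, same_etymon, senses1, senses2 in pairs:
    shared = senses1 & senses2          # intersection of the meaning sets
    if not same_etymon:
        label = "false cognate (different etyma)"
    elif senses1 == senses2:
        label = "true cognate (equal meaning sets)"
    elif shared:
        label = "deceptive cognate (intersecting meaning sets)"
    else:
        label = "deceptive cognate (disjoint meaning sets)"
    print(f"{w1} x {w2}: {label}; shared senses: {shared or 'none'}")
```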
Procedia PDF Downloads 340
3582 Designing a Corpus Database to Enhance the Learning of Old English Language
Authors: Raquel Mateo Mendaza, Carmen Novo Urraca
Abstract:
The current paper presents the elaboration of a corpus database that aligns two different corpora in order to simplify the search of information both for researchers and students of Old English. This database comprises the information contained in two main reference corpora, namely the Dictionary of Old English Corpus (DOEC), compiled at the University of Toronto, and the York-Toronto-Helsinki Parsed Corpus of Old English (YCOE). The first one provides information on all surviving texts written in the Old English language. The latter offers the syntactical and morphological annotation of several texts included in the DOEC. Although both corpora are closely related, as the YCOE includes the DOE source text identifier, the main problem detected is that there is not an alignment of texts that allows for the search of whole fragments to be further analysed in terms of morphology and syntax. The database proposed in this paper gathers all this information and presents it in a simple, more accessible, visual, and educational way. The alignment of fragments has been done in an automatized way. However, some problems have emerged during the creating process particularly related to the lack of correspondence in the division of fragments. For this reason, it has been necessary to revise the whole entries manually to obtain a truthful high-quality product and to carefully indicate the gaps encountered in these corpora. All in all, this database contains more than 60,000 entries corresponding with the DOE fragments annotated by the YCOE. The main strength of the resulting product is its research and teaching implications in the study of Old English. The use of this database will help researchers and students in the study of different aspects of the language, such as inflectional morphology, syntactic behaviour of given words, or translation studies, among others. By means of the search of words or fragments, the annotated information on morphology and syntax will be automatically displayed, automatizing, and speeding up the search of data.Keywords: alignment, corpus database, morphosyntactic analysis, Old English
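A rough sketch of the kind of aligned lookup such a database could support: a DOEC-style fragment table joined to a YCOE-style annotation table on a shared fragment identifier. The schema, identifiers, and annotation values below are invented placeholders, not the actual corpus formats.

```python
# Hypothetical schema: one table of DOEC text fragments and one table of YCOE
# annotations, joined on a shared fragment identifier so that a word search
# returns both the running text and its morphosyntactic annotation.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE doec_fragments (frag_id TEXT PRIMARY KEY, source_text TEXT, fragment TEXT);
CREATE TABLE ycoe_annotations (frag_id TEXT, token TEXT, lemma TEXT, pos TEXT, parse TEXT);
""")
con.execute("INSERT INTO doec_fragments VALUES ('frag_001', 'AElfric, Lives of Saints', 'se cyning com')")
con.executemany("INSERT INTO ycoe_annotations VALUES (?, ?, ?, ?, ?)", [
    ("frag_001", "se", "se", "DET-NOM", "(NP-NOM (D se))"),       # illustrative tags
    ("frag_001", "cyning", "cyning", "N-NOM", "(NP-NOM (N cyning))"),
    ("frag_001", "com", "cuman", "VBD", "(VBD com)"),
])

# Look up every aligned fragment containing 'cyning' together with its annotation.
rows = con.execute("""
SELECT d.source_text, d.fragment, a.token, a.pos, a.parse
FROM doec_fragments d JOIN ycoe_annotations a ON d.frag_id = a.frag_id
WHERE d.fragment LIKE '%cyning%'
""").fetchall()
for row in rows:
    print(row)
```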
Procedia PDF Downloads 136
3581 Effects of Unfamiliar Orthography on the Lexical Encoding of Novel Phonological Features
Authors: Asmaa Shehata
Abstract:
Prior research indicates that second language (L2) learners encounter difficulty in distinguishing novel L2 contrasting sounds that are not contrastive in their native languages. L2 orthographic information, however, has been found to play a positive role in the acquisition of non-native phoneme contrasts. While most studies have mainly involved a familiar written script (i.e., the Roman script), the influence of a foreign, unfamiliar script is still unknown. Therefore, the present study asks: Does an unfamiliar L2 script play a role in creating distinct phonological representations of novel contrasting phonemes? It is predicted that subjects in the unfamiliar orthography group will outperform their counterparts in the control group. Thus, training that entails orthographic input can yield a significant improvement in L2 adult learners' identification and lexical encoding of novel L2 consonant contrasts. Results are discussed in terms of their implications for the type of input introduced to L2 learners to improve their language learning. Keywords: Arabic, consonant contrasts, foreign script, lexical encoding, orthography, word learning
Procedia PDF Downloads 262
3580 Using iPads and Tablets in Language Teaching and Learning Process
Authors: Ece Sarigul
Abstract:
It is an undeniable fact that teachers need new strategies to communicate with students of the next generation and to shape enticing educational experiences for them. Many schools have launched iPad and tablet initiatives in an effort to enhance student learning. Despite their rapid adoption, the extent to which iPads and tablets increase student engagement and learning is not well understood. This presentation aims to examine the use of iPads and tablets in primary and high schools in Turkey, as well as elsewhere in the world, to increase academic achievement through the promotion of higher-order thinking skills. In addition to presenting the views of school teachers and students who use iPads or tablets, various applications used in schools will be discussed and demonstrated in this study. The specific iPad and tablet applications discussed in this presentation can be incorporated into the curriculum to assist in developing transformative practices and programs to meet the needs of a diverse student population. In the conclusion section of the presentation, there will be some suggestions for teachers about the effective use of technological devices in the classroom. This study can help educators better understand how students are currently using iPads and tablets and shape their future use. Keywords: ipads, language teaching, tablets, technology
Procedia PDF Downloads 257
3579 Text-to-Speech in Azerbaijani Language via Transfer Learning in a Low Resource Environment
Authors: Dzhavidan Zeinalov, Bugra Sen, Firangiz Aslanova
Abstract:
Most text-to-speech models cannot operate well in low-resource languages and require a great amount of high-quality training data to be considered good enough. Yet, with the improvements made in ASR systems, it is now much easier than ever to collect data for the design of custom text-to-speech models. In this work, our work on using the ASR model to collect data to build a viable text-to-speech system for one of the leading financial institutions of Azerbaijan will be outlined. NVIDIA’s implementation of the Tacotron 2 model was utilized along with the HiFiGAN vocoder. As for the training, the model was first trained with high-quality audio data collected from the Internet, then fine-tuned on the bank’s single speaker call center data. The results were then evaluated by 50 different listeners and got a mean opinion score of 4.17, displaying that our method is indeed viable. With this, we have successfully designed the first text-to-speech model in Azerbaijani and publicly shared 12 hours of audiobook data for everyone to use.Keywords: Azerbaijani language, HiFiGAN, Tacotron 2, text-to-speech, transfer learning, whisper
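A highly simplified, runnable sketch of the transfer-learning recipe the abstract describes: train an acoustic model on large web-collected data, then fine-tune the same weights at a lower learning rate on single-speaker data. The tiny stand-in model and random tensors below are placeholders for the real Tacotron 2 and the (text, mel-spectrogram) pairs; in the actual system the predicted spectrograms would then be passed to a HiFi-GAN vocoder.

```python
# Sketch with stand-in components (not the real Tacotron 2 / HiFi-GAN):
# base training on a large corpus, then low-learning-rate fine-tuning.
import torch
from torch import nn

class TinyAcousticModel(nn.Module):
    """Stand-in for Tacotron 2: maps character ids to an 80-bin mel-spectrogram."""
    def __init__(self, vocab=60, mel_bins=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, 128)
        self.rnn = nn.GRU(128, 128, batch_first=True)
        self.proj = nn.Linear(128, mel_bins)

    def forward(self, ids):
        x, _ = self.rnn(self.embed(ids))
        return self.proj(x)

def run_epochs(model, batches, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for ids, mel in batches:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(ids), mel)
            loss.backward()
            opt.step()
    return model

# Random tensors standing in for (text, mel-spectrogram) training pairs.
pretrain = [(torch.randint(0, 60, (8, 40)), torch.randn(8, 40, 80)) for _ in range(20)]
finetune = [(torch.randint(0, 60, (8, 40)), torch.randn(8, 40, 80)) for _ in range(5)]

model = run_epochs(TinyAcousticModel(), pretrain, lr=1e-3, epochs=2)   # base training
model = run_epochs(model, finetune, lr=1e-4, epochs=2)                 # single-speaker fine-tuning
# In the described system, the fine-tuned model's mel-spectrograms would be fed
# to a separately trained HiFi-GAN vocoder to produce the output waveform.
```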
Procedia PDF Downloads 51
3578 Implementation of Language Policy in a Swedish Multicultural Early Childhood School: A Development Project
Authors: Carina Hermansson
Abstract:
This presentation focuses a development project aiming at developing and documenting the steps taken at a multilingual, multicultural K-5 school, with the aim to improve the achievement levels of the pupils by focusing language and literacy development across the schedule in a digital classroom, and in all units of the school. This pre-formulated aim, thus, may be said to adhere to neoliberal educational and accountability policies in terms of its focus on digital learning, learning results, and national curriculum standards. In particular the project aimed at improving the collaboration between the teachers, the leisure time unit, the librarians, the mother tongue teachers and bilingual study counselors. This is a school environment characterized by cultural, ethnic, linguistic, and professional pluralization. The overarching aims of the research project were to scrutinize and analyze the factors enabling and obstructing the implementation of the Language Policy in a digital classroom. Theoretical framework: We apply multi-level perspectives in the analyses inspired by Uljens’ ideas about interactive and interpersonal first order (teacher/students) and second order(principal/teachers and other staff) educational leadership as described within the framework of discursive institutionalism, when we try to relate the Language Policy, educational policy, and curriculum with the administrative processes. Methodology/research design: The development project is based on recurring research circles where teachers, leisure time assistants, mother tongue teachers and study counselors speaking the mother tongue of the pupils together with two researchers discuss their digital literacy practices in the classroom. The researchers have in collaboration with the principal developed guidelines for the work, expressed in a Language Policy document. In our understanding the document is, however, only a part of the concept, the actions of the personnel and their reflections on the practice constitute the major part of the development project. One and a half years out of three years have now passed and the project has met with a row of difficulties which shed light on factors of importance for the progress of the development project. Field notes and recordings from the research circles, a survey with the personnel, and recorded group interviews provide data on the progress of the project. Expected conclusions: The problems experienced deal with leadership, curriculum, interplay between aims, technology, contents and methods, the parents as customers taking their children to other schools, conflicting values, and interactional difficulties, that is, phenomena on different levels, ranging from school to a societal level, as for example teachers being substituted as a result of the marketization of schools. Also underlying assumptions from actors at different levels create obstacles. We find this study and the problems we are facing utterly important to share and discuss in an era with a steady flow of refugees arriving in the Nordic countries.Keywords: early childhood education, language policy, multicultural school, school development project
Procedia PDF Downloads 147
3577 Design and Simulation of All Optical Fiber to the Home Network
Authors: Rahul Malhotra
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper aims to show the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate. Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
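As a toy illustration of the reported trend (not the paper's optical-system simulation), the snippet below uses a simple Gaussian-noise receiver model: with a fixed received power, raising the data rate widens the receiver bandwidth, lowers the Q factor, and increases the BER. All numerical values are assumed for illustration only.

```python
# Toy illustration: for a fixed power budget per receiver, a higher data rate
# means more noise bandwidth per bit, a lower Q factor, and hence a higher BER.
import math

def ber_from_q(q):
    """BER = 0.5 * erfc(Q / sqrt(2)) for a simple on-off keyed receiver."""
    return 0.5 * math.erfc(q / math.sqrt(2))

received_power_w = 2e-6         # assumed optical power reaching one receiver (W)
noise_spectral_density = 1e-22  # assumed equivalent receiver noise density
responsivity = 0.9              # assumed photodiode responsivity (A/W)

for rate_gbps in (1.25, 2.5, 5.0, 10.0):
    bandwidth = rate_gbps * 1e9 * 0.7                  # receiver bandwidth ~0.7 x bit rate
    signal_current = responsivity * received_power_w
    noise_rms = math.sqrt(noise_spectral_density * bandwidth)
    q = signal_current / noise_rms
    print(f"{rate_gbps:5.2f} Gb/s  Q = {q:5.2f}  BER = {ber_from_q(q):.2e}")
```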
Procedia PDF Downloads 561
3576 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model which establishes the initial innate strategies used by second language learners to connect form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learner’s cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants’ default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received Processing Instruction treatment except the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed the positive change in learners’ processing of the French target features after instruction with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt) and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically there was no significant difference between the pre-test and post-test scores in the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogenously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity for the study of unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrates changes in participants’ processing strategies. Gaze plots from pre- and post-tests display participants fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences in both primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. 
It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language. Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
Procedia PDF Downloads 282
3575 Preparation on Sentimental Analysis on Social Media Comments with Bidirectional Long Short-Term Memory Gated Recurrent Unit and Model Glove in Portuguese
Authors: Leonardo Alfredo Mendoza, Cristian Munoz, Marco Aurelio Pacheco, Manoela Kohler, Evelyn Batista, Rodrigo Moura
Abstract:
Natural Language Processing (NLP) techniques are becoming increasingly powerful at interpreting the feelings and reactions of a person to a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment analysis classification in Portuguese, based on comments from social networks in Portuguese. A word-embedding representation was used with a 50-dimension GloVe pre-trained model, generated from a corpus entirely in Portuguese. To generate this classification, bidirectional long short-term memory (Bi-LSTM) and bidirectional Gated Recurrent Unit (GRU) models are used, reaching results of 99.1%. Keywords: natural language processing, sentiment analysis, bidirectional long short-term memory, BI-LSTM, gated recurrent unit, GRU
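A condensed sketch of the kind of architecture the abstract describes: 50-dimensional pretrained embeddings feeding a bidirectional LSTM sentiment classifier. The vocabulary size, sequence length, and the random matrix standing in for the Portuguese GloVe vectors are placeholders, not the authors' settings.

```python
# Minimal Bi-LSTM sentiment classifier over 50-d pretrained embeddings, standing
# in for the Bi-LSTM/Bi-GRU models described (the embedding weights here are a
# random placeholder for the Portuguese GloVe matrix).
import torch
from torch import nn

VOCAB, EMB_DIM, HIDDEN, N_CLASSES = 20000, 50, 128, 2
glove_matrix = torch.randn(VOCAB, EMB_DIM)   # placeholder for pretrained GloVe vectors

class BiLSTMSentiment(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(glove_matrix, freeze=False)
        self.lstm = nn.LSTM(EMB_DIM, HIDDEN, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * HIDDEN, N_CLASSES)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                 # (batch, seq, 50)
        _, (h, _) = self.lstm(emb)                  # h: (2, batch, hidden)
        sentence = torch.cat([h[0], h[1]], dim=1)   # concatenate both directions
        return self.out(sentence)

model = BiLSTMSentiment()
logits = model(torch.randint(0, VOCAB, (4, 30)))    # 4 dummy comments, 30 tokens each
print(logits.shape)                                  # torch.Size([4, 2]) -> class scores
```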
Procedia PDF Downloads 163
3574 Speech Motor Processing and Animal Sound Communication
Authors: Ana Cleide Vieira Gomes Guimbal de Aquino
Abstract:
Sound communication is present in most vertebrates, from fish (mainly in species that live in murky waters) to some species of reptiles, anuran amphibians, birds, and mammals, including primates. There are, in fact, relevant similarities between human language and animal sound communication, and among these similarities are the vocalizations called calls. The first specific call in human babies is crying, which has a characteristic prosodic contour, is motivated most of the time by the need for food, and shapes the infant-caregiver interaction, communicating needs and food requests and helping to guarantee the survival of the species. The present work aims to articulate speech processing in the motor context with aspects of the project entitled 'Emotional states and vocalization: a comparative study of the prosodic contours of crying in human and non-human animals'. First, concepts of speech motor processing and general aspects of speech evolution will be presented in order to relate these two approaches to animal sound communication. Keywords: speech motor processing, animal communication, animal behaviour, language acquisition
Procedia PDF Downloads 94
3573 The Design of Multiple Detection Parallel Combined Spread Spectrum Communication System
Authors: Lixin Tian, Wei Xue
Abstract:
Many jobs in society go underground, such as mine mining, tunnel construction and subways, which are vital to the development of society. Once accidents occur in these places, the interruption of traditional wired communication is not conducive to the development of rescue work. In order to realize the positioning, early warning and command functions of underground personnel and improve rescue efficiency, it is necessary to develop and design an emergency ground communication system. It is easy to be subjected to narrowband interference when performing conventional underground communication. Spreading communication can be used for this problem. However, general spread spectrum methods such as direct spread communication are inefficient, so it is proposed to use parallel combined spread spectrum (PCSS) communication to improve efficiency. The PCSS communication not only has the anti-interference ability and the good concealment of the traditional spread spectrum system, but also has a relatively high frequency band utilization rate and a strong information transmission capability. So, this technology has been widely used in practice. This paper presents a PCSS communication model-multiple detection parallel combined spread spectrum (MDPCSS) communication system. In this paper, the principle of MDPCSS communication system is described, that is, the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. The block diagrams of the transmitter and receiver of the MDPCSS communication system are introduced. At the same time, the calculation formula of the system bit error rate (BER) is introduced, and the simulation and analysis of the BER of the system are completed. By comparing with the common parallel PCSS communication, we can draw a conclusion that it is indeed possible to reduce the BER and improve the system performance. Furthermore, the influence of different pseudo-code lengths selected on the system BER is simulated and analyzed, and the conclusion is that the larger the pseudo-code length is, the smaller the system error rate is.Keywords: cyclic shift, multiple detection, parallel combined spread spectrum, PN code
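To make the cyclic-shift and multiple-detection idea concrete, here is a toy baseband sketch. It is an illustrative reconstruction, not the authors' MDPCSS system, and it omits the parallel combination of several PN sequences: each block of data bits selects one cyclic shift of a single PN sequence, and the receiver correlates the noisy chips against all candidate shifts and picks the strongest.

```python
# Toy baseband illustration of multiple detection: k data bits select one of 2**k
# cyclic shifts of a PN sequence; the receiver correlates the received chips
# against every candidate shift and decodes the best match.
import numpy as np

rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=63)          # placeholder PN sequence (+/-1 chips)
BITS_PER_BLOCK = 4                              # 16 candidate cyclic shifts per block
SHIFTS = [np.roll(pn, s) for s in range(2 ** BITS_PER_BLOCK)]

def transmit(bits):
    index = int("".join(map(str, bits)), 2)     # block of bits -> shift index
    return SHIFTS[index]

def detect(received):
    correlations = [float(np.dot(received, s)) for s in SHIFTS]
    index = int(np.argmax(correlations))        # multiple detection: best correlation wins
    return [int(b) for b in format(index, f"0{BITS_PER_BLOCK}b")]

errors, total = 0, 0
for _ in range(2000):
    bits = list(rng.integers(0, 2, BITS_PER_BLOCK))
    noisy = transmit(bits) + rng.normal(0, 2.0, size=pn.size)   # AWGN channel
    decoded = detect(noisy)
    errors += sum(b != d for b, d in zip(bits, decoded))
    total += BITS_PER_BLOCK
print(f"simulated BER: {errors / total:.4f}")
```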
Procedia PDF Downloads 142
3572 Redundancy in Malay Morphology: School Grammar versus Corpus Grammar
Authors: Zaharani Ahmad, Nor Hashimah Jalaluddin
Abstract:
The aim of this paper is to examine and identify the issue of linguistic redundancy in two competing grammars of Malay, namely the school grammar and the corpus grammar. The former is a normative grammar which is formally and prescriptively taught in the classroom, whereas the latter is a descriptive grammar that is informally acquired and mastered by the students as native speakers of the language outside the classroom. Corpus grammar is depicted based on its actual used in natural occurring texts, as attested in the corpus. It is observed that the grammar taught in schools is incompatible with the grammar used in the corpus. For instance, a noun phrase containing nominal reduplicated form which denotes plurality (i.e. murid-murid ‘students’ which is derived from murid ‘student’) and a modifier categorized as quantifiers (i.e. semua ‘all’, seluruh ‘entire’, and kebanyakan ‘most’) is not acceptable in the school grammar because the formation (i.e. semua murid-murid ‘all the students’ kebanyakan pelajar-pelajar ‘most of the students’) is claimed to be redundant, and redundancy is prohibited in the grammar. Redundancy is generally construed as the property of speech and language by which more information is provided than is precisely required for the message to be understood, so that, if some information is omitted, the remaining information will still be sufficient for the message to be comprehended. Thus, the correct construction to be used is strictly the reduplicated form (i.e. murid-murid ‘students’) or the quantifier plus the root (i.e. semua murid ‘all the students’) with the intention that the grammatical meaning of plural is not repeated. Nevertheless, the so-called redundant form (i.e. kebanyakan pelajar-pelajar ‘most of the students’) is frequently used in the corpus grammar. This study shows that there are a number of redundant forms occur in the morphology of the language, particularly in affixation, reduplication and combination of both. Apparently, the so-called redundancy has grammatical and socio-cultural functions in communication that is to give emphasis and to stress the importance of the information delivered by the speakers or writers.Keywords: corpus grammar, morphology, redundancy, school grammar
Procedia PDF Downloads 346
3571 Intercultural Sensitivity in Iran: A Case Study of Intercultural Relations between Turks and Lors
Authors: Sepideh Mohammadi
Abstract:
Iran is a country that boasts of ethnic diversity, comprising various groups such as Turks, Lors, Arabs, Baluchs, Persians, Kurds, Gliks, Azaris, and Tabaris. The majority of people in Iran are Persians and as such, the Persian language is the official language of the country. However, it is also a common language among different ethnic groups. It is worth noting that there is a longstanding history of coexistence and cultural relations between the Turkic and Lor ethnic groups. The purpose of this article is to study the range of intercultural sensitivities of Turks and Lor peoples to identify the state of intercultural competence and reduce conflicts in the direction of cultural policy. It is important to gain insight into the mutual perceptions of Lor and Turkic people towards each other. Understanding these perceptions can greatly aid in fostering stronger relationships and promoting effective communication between the two ethnic groups. The study employed a qualitative content analysis approach to gather data using a semi-structured interview tool. The participants consisted of ten individuals from the Lor ethnic and ten individuals from the Turk ethnic. According to Milton Bennett's six-stage model, our findings reveal that the Turkish and Lor ethnic groups tend to exhibit higher intercultural sensitivity in the second stage, which consists of defense against differences. Both groups tend to emphasize the differences between them, and the notion of "us and the other" holds significant importance for them. It is important to acknowledge that both the Turk and Lor ethnicities consist of various clans, which significantly shape intercultural relations between them. A common stereotype in this regard is that the Turks of Tabriz province often do not recognize the Turks of other provinces of the country as their own. Moreover, our study indicates that an increase in interaction and communication between the Lor and Turk ethnic groups may lead to a reduction in cultural sensitivities between them.Keywords: intercultural communication, intercultural sensitivity, Iran, Lor, Turk
Procedia PDF Downloads 53
3570 Different Cognitive Processes in Selecting Spatial Demonstratives: A Cross-Linguistic Experimental Survey
Authors: Yusuke Sugaya
Abstract:
Our research conducts a cross-linguistic experimental investigation into the cognitive processes involved in distance judgment necessary for selecting demonstratives in deictic usage. Speakers may consider the addressee's judgment or apply certain criteria for distance judgment when they produce demonstratives. While it can be assumed that there are language and cultural differences, it remains unclear how these differences manifest across languages. This research conducted online experiments involving speakers of six languages—Japanese, Spanish, Irish, English, Italian, and French—in which a wide variety of drawings were presented on a screen, varying conditions from three perspectives: addressee, comparisons, and standard. The results of the experiments revealed various distinct features associated with demonstratives in each language, highlighting differences from a comparative standpoint. For one thing, there was an influence of a specific reference point (i.e., Standard) on the selection in Japanese and Spanish, whereas there was relatively an influence of competitors in English and Italian.Keywords: demonstratives, cross-linguistic experiment, distance judgment, social cognition
Procedia PDF Downloads 57
3569 A Comparative Analysis of (De)legitimation Strategies in Selected African Inaugural Speeches
Authors: Lily Chimuanya, Ehioghae Esther
Abstract:
Language, a versatile and sophisticated tool, is fundamentally sacrosanct to mankind especially within the realm of politics. In this dynamic world, political leaders adroitly use language to engage in a strategic show aimed at manipulating or mechanising the opinion of discerning people. This nuanced synergy is marked by different rhetorical strategies, meticulously synced with contextual factors ranging from cultural, ideological, and political to achieve multifaceted persuasive objectives. This study investigates the (de)legitimation strategies inherent in African presidential inaugural speeches, as African leaders not only state their policy agenda through inaugural speeches but also subtly indulge in a dance of legitimation and delegitimation, performing a twofold objective of strengthening the credibility of their administration and, at times, undermining the performance of the past administration. Drawing insights from two different legitimation models and a dataset of 4 African presidential inaugural speeches obtained from authentic websites, the study describes the roles of authorisation, rationalisation, moral evaluation, altruism, and mythopoesis in unmasking the structure of political discourse. The analysis takes a mixed-method approach to unpack the (de)legitimation strategy embedded in the carefully chosen speeches. The focus extends beyond a superficial exploration and delves into the linguistic elements that form the basis of presidential discourse. In conclusion, this examination goes beyond the nuanced landscape of language as a potent tool in politics, with each strategy contributing to the overall rhetorical impact and shaping the narrative. From this perspective, the study argues that presidential inaugural speeches are not only linguistic exercises but also viable weapons that influence perceptions and legitimise authority.Keywords: CDA, legitimation, inaugural speeches, delegitmation
Procedia PDF Downloads 74
3568 Wasting Human and Computer Resources
Authors: Mária Csernoch, Piroska Biró
Abstract:
The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led us to the extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) The lack of the definition of the correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not. They are not aware of their ignorance. They are so ignorant that their ignorance does not allow them to realize their lack of knowledge. (2) The end-users’ problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface approach metacognitive methods to carry out their computer related activities, which are proved less effective than deep approach methods. Based on these findings we have developed deep approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, the text-based documents. We have provided the definition of the correctly edited text, and based on this definition, adapted the debugging method known in programming. According to the method, before the realization of text editing, a thorough debugging of already existing texts and the categorization of errors are carried out. With this method in advance to real text editing users learn the requirements of text-based documents and also of the correctly formatted text. The method has been proved much more effective than the previously applied surface approach methods. The advantages of the method are that the real text handling requires much less human and computer sources than clicking aimlessly in the GUI (Graphical User Interface), and the data retrieval is much more effective than from error-prone documents.Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources
Procedia PDF Downloads 383
3567 Enhancing Code Security with AI-Powered Vulnerability Detection
Authors: Zzibu Mark Brian
Abstract:
As software systems become increasingly complex, ensuring code security is a growing concern. Traditional vulnerability detection methods often rely on manual code reviews or static analysis tools, which can be time-consuming and prone to errors. This paper presents a distinct approach to enhancing code security by leveraging artificial intelligence (AI) and machine learning (ML) techniques. Our proposed system utilizes a combination of natural language processing (NLP) and deep learning algorithms to identify and classify vulnerabilities in real-world codebases. By analyzing vast amounts of open-source code data, our AI-powered tool learns to recognize patterns and anomalies indicative of security weaknesses. We evaluated our system on a dataset of over 10,000 open-source projects, achieving an accuracy rate of 92% in detecting known vulnerabilities. Furthermore, our tool identified previously unknown vulnerabilities in popular libraries and frameworks, demonstrating its potential for improving software security. Keywords: AI, machine learning, code security
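As a much simpler stand-in for the NLP-plus-deep-learning pipeline the abstract describes (not the authors' model), the sketch below shows the general shape of such a system: code snippets are vectorised as character n-grams and a classifier is trained to flag vulnerable code. The snippets and labels are invented toy data; a real system would use a large labelled corpus and a deep model rather than this baseline.

```python
# Simplified stand-in for the described approach: treat code as text, vectorise it,
# and train a classifier to separate vulnerable from safe snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"',  # string-built SQL
    "os.system('ping ' + request.args['host'])",                          # command injection
    "cursor.execute('SELECT * FROM users WHERE name = %s', (name,))",     # parameterised query
    "subprocess.run(['ping', host], check=True)",                          # no shell string
]
labels = [1, 1, 0, 0]   # 1 = vulnerable, 0 = safe (toy labels)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # character n-grams of code
    LogisticRegression(max_iter=1000),
)
model.fit(snippets, labels)

candidate = "db.execute('DELETE FROM logs WHERE id = ' + str(log_id))"
print(model.predict_proba([candidate])[0][1])   # estimated probability of 'vulnerable'
```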
Procedia PDF Downloads 44
3566 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These models include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model basically combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. An experiment on the Stanford Natural Language Inference (SNLI) corpus suggests that this is a promising technique for RTE. Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
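The sketch below is one possible reading of the described architecture; the dimensions, the mean-pooled phrase vectors, and the element-wise difference/product form of the relation vectors are assumptions, and the original paper's exact attention formulation may differ. A convolutional layer yields phrase representations, a Bi-LSTM encodes them, and the combined relation vector weights the Bi-LSTM states, attention-style, before classification.

```python
# Sketch of a premise/hypothesis encoder in the spirit of the described model.
import torch
from torch import nn

class RelationRTE(nn.Module):
    def __init__(self, vocab=10000, emb=100, phrase=64, hidden=64, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, phrase, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(phrase, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden + 2 * phrase + 4 * hidden, 1)
        self.cls = nn.Linear(4 * hidden, n_classes)

    def encode(self, ids):
        # Convolution over word embeddings -> higher-level phrase representations.
        phrases = torch.relu(self.conv(self.embed(ids).transpose(1, 2))).transpose(1, 2)
        states, _ = self.lstm(phrases)                 # Bi-LSTM over the phrase sequence
        return phrases.mean(dim=1), states

    def forward(self, premise_ids, hypothesis_ids):
        p_phrase, p_states = self.encode(premise_ids)
        h_phrase, h_states = self.encode(hypothesis_ids)
        # Relation vectors from the phrase level and the sentence level (assumed form).
        relation = torch.cat([p_phrase - h_phrase, p_phrase * h_phrase,
                              p_states.mean(1) - h_states.mean(1),
                              p_states.mean(1) * h_states.mean(1)], dim=1)

        def pool(states):
            # Relation vector used attention-style over the Bi-LSTM outputs.
            expanded = relation.unsqueeze(1).expand(-1, states.size(1), -1)
            weights = torch.softmax(self.attn(torch.cat([states, expanded], dim=2)), dim=1)
            return (weights * states).sum(dim=1)

        return self.cls(torch.cat([pool(p_states), pool(h_states)], dim=1))

model = RelationRTE()
logits = model(torch.randint(0, 10000, (2, 20)), torch.randint(0, 10000, (2, 12)))
print(logits.shape)   # torch.Size([2, 3]): entailment / contradiction / neutral scores
```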
Procedia PDF Downloads 352
3565 Requests and Responses to Requests in Jordanian Arabic
Authors: Raghad Abu Salma, Beatrice Szczepek Reed
Abstract:
Politeness is one of the most researched areas in pragmatics as it is key to interpersonal interactional phenomena. Many studies, particularly in linguistics, have focused on developing politeness theories and exploring linguistic devices used in communication to construct and establish social norms. However, the question of what constitutes polite language remains a point of ongoing debate. Prior research primarily examined politeness in English and its native speaking communities, oversimplifying the notion of politeness and associating it with surface-level language use. There is also a dearth of literature on politeness in Arabic, particularly in the context of Jordanian Arabic. Prior research investigating politeness in Arabic make generalized claims about politeness in Arabic without taking the linguistic variations into account or providing empirical evidence. This proposed research aims to explore how Jordanian Arabic influences its first language users in making and responding to requests, exploring participants' perceptions of politeness and the linguistic choices they make in their interactions. The study focuses on Jordanian expats living in London, UK providing an intercultural perspective that prior research does not consider. This study employs a mixed-methods approach combining discourse completion tasks (DCTs) with semi-structured interviews. While DCTs provide insight into participants’ linguistic choices, semi-structured interviews glean insight into participants' perceptions of politeness and their linguistic choices impacted by cultural norms and diverse experiences. This paper discusses previous research on politeness in Arabic, identifies research gaps, and discusses different methods for data collection. This paper also presents preliminary findings from the ongoing study.Keywords: politeness, pragmatics, jordanian arabic, intercultural politeness
Procedia PDF Downloads 83
3564 Assessment of E-Portfolio on Teacher Reflections on English Language Education
Authors: Hsiaoping Wu
Abstract:
With the wide use of the Internet, learners are exposed to the wider world. This exposure permits learners to discover new information and combine a variety of media in order to reach an in-depth and broader understanding of their literacy and the world. Many paper-based teaching, learning, and assessment modalities can be transferred to a digital platform. This study examines the use of e-portfolios for ESL (English as a second language) pre-service teachers. The data were collected by reviewing 100 e-portfolios from 2013 to 2015 in order to synthesize meaningful information about e-portfolios for ESL pre-service teachers. Participants were generalist, bilingual, and ESL pre-service teachers. The studies were coded into two main categories: learning gains, including assessment, and technical skills. The findings showed that using e-portfolios enhanced and developed ESL pre-service teachers' teaching and assessment skills. The e-portfolio also developed the pre-service teachers' technical skills in preparing a comprehensible portfolio to present who they are. Finally, the study and presentation suggest e-portfolios for ecological issues and educational purposes. Keywords: assessment, e-portfolio, pre-service teacher, reflection
Procedia PDF Downloads 320
3563 Hierarchical Tree Long Short-Term Memory for Sentence Representations
Authors: Xiuying Wang, Changliang Li, Bo Xu
Abstract:
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents them from reaching a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split each sentence into three hierarchies: short phrase, long phrase, and full sentence level. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task. The results show that our model significantly outperforms several existing state-of-the-art approaches. Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis
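A rough, simplified sketch of the hierarchical idea: a lower LSTM encodes short word groups and a higher LSTM composes the resulting vectors into a sentence vector for sentiment classification. The fixed-size chunks below are an assumption made for brevity; the model described splits sentences by syntactic phrase levels rather than by position.

```python
# Two-level hierarchy standing in for the HTLSTM idea: encode short word chunks
# with a lower LSTM, then compose the chunk vectors with a higher LSTM.
import torch
from torch import nn

class HierarchicalSentenceEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=64, chunk=4, n_classes=2):
        super().__init__()
        self.chunk = chunk
        self.embed = nn.Embedding(vocab, emb)
        self.phrase_lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.sentence_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, ids):                        # ids: (batch, seq), seq % chunk == 0
        b, seq = ids.shape
        emb = self.embed(ids).view(b * seq // self.chunk, self.chunk, -1)
        _, (h, _) = self.phrase_lstm(emb)          # one vector per chunk ("phrase")
        phrase_vecs = h[-1].view(b, seq // self.chunk, -1)
        _, (h, _) = self.sentence_lstm(phrase_vecs)
        return self.cls(h[-1])                     # sentence-level sentiment logits

model = HierarchicalSentenceEncoder()
print(model(torch.randint(0, 10000, (3, 16))).shape)   # torch.Size([3, 2])
```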
Procedia PDF Downloads 353
3562 Fixed Points of Contractive-Like Operators by a Faster Iterative Process
Authors: Safeer Hussain Khan
Abstract:
In this paper, we prove a strong convergence result using a recently introduced iterative process with contractive-like operators. This improves and generalizes corresponding results in the literature in two ways: the iterative process is faster, and the operators are more general. In the end, we indicate that the results can also be proved with an iterative process with error terms. Keywords: contractive-like operator, iterative process, fixed point, strong convergence
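As background for readers unfamiliar with the terminology (the abstract does not name its specific scheme, so the display below is illustrative rather than the paper's construction): a contractive-like operator in the sense commonly used in this literature, and one well-known two-step iteration of the kind usually shown to converge faster than the Picard scheme.

```latex
% Contractive-like operator (Imoru--Olatinwo type condition): there exist
% \delta \in [0,1) and a monotone increasing \varphi with \varphi(0)=0 such that
\[
  \|Tx - Ty\| \le \delta\,\|x - y\| + \varphi\bigl(\|x - Tx\|\bigr) \qquad \text{for all } x, y.
\]
% A typical "faster" two-step iteration (the S-iteration of Agarwal et al.),
% compared with the Picard scheme $x_{n+1} = Tx_n$:
\[
  y_n = (1-\beta_n)\,x_n + \beta_n\,T x_n, \qquad
  x_{n+1} = (1-\alpha_n)\,T x_n + \alpha_n\,T y_n, \qquad \alpha_n, \beta_n \in [0,1].
\]
```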
Procedia PDF Downloads 437
3561 Diverging Strategies for Processing Permissive Subjects in Dutch, German and English: Evidence from Event-Related Brain Potentials
Authors: Anne Renzel, Jens Bölte, Gunther de Vogelaer, Stefan Frank, Peter de Swart, Niko Busch
Abstract:
Permissive subjects are non-agentive subjects combined with action verbs in the active form (e.g., ‘A few years ago a penny would buy you two or three pins’, ‘The tent sleeps four people’), hardly found in German compared to English. This contrast can be related to processing constraints, proposing that distinct processing strategies account for varying efficiency of processing permissive subjects. The differences in processing strategies are linked to basic typological language properties, specifically basic word order. If a language has SVO order (like English), permissive subjects can be processed easier due to routinized look ahead parsing strategies. In contrast, if a language is SOV (like German), parsers are used to look back at parsing strategies, leading to difficulties in processing permissive subjects. The present study addresses the question of how to look ahead versus look back parsing strategies for permissive subjects depending on typological features like SVO/SOV. Additionally to English and German, we investigate Dutch, as it is clearly SOV but seems to allow more diverse roles in the grammatical subject than German. In order to demonstrate cross-linguistic differences in the processing of permissive subjects, we conduct an experiment where we record event-related brain potentials (ERPs) while native speakers of English, Dutch, and German read sentences with non-agentive permissive subjects and agentive control sentences. Test items were carefully designed considering, i.a. word frequency in the three languages. We hypothesize that in German, a non-agentive subject leads to an N400 effect on the following action verb, since in German as an SOV language, speakers apply look back strategies in processing, relying on sequence-independent non-word-order cues like case marking and animacy. Due to errors in form-to-meaning mappings, this could evoke surprisal effects, which are known to manifest in N400 amplitudes. In English, as an SVO language, speakers are more used to apply look ahead processing strategies and mostly exclusively rely on word order cues. Within the predictive coding framework grounded in research on semantic reversal anomalies (SRA, e.g., ‘The meal was devouring the kids’), we expect that the processing of permissive subjects in English elicits P600 effects. As regards Dutch, we should find N400 effects similar to German since speakers of Dutch should equally use look back strategies due to the SOV word order. However, research on SRA suggests differences in the processing of permissive subjects in Dutch and German. The results give insights into how fundamental differences in processing strategies are present in speakers of different languages and the question of whether these strategies correlate with contrasts in basic language properties. This allows for a typological classification of the West Germanic languages based on processing contrasts, which not only helps explain if and how distinct typological features between the related languages lead to varying strategies for processing grammatical structures but also sheds light on how language systems may evolve differently over time influenced by processing mechanisms. Also, the results enable us to contribute to the understanding of cross-linguistic trade-offs between linguistic variables and diachronic-causal relations from a efficiency-related processing perspective.Keywords: ERPs, look ahead vs. 
look back, N400, P600, permissive subjects, semantics, sentence parsing, syntax, West Germanic languages, word order
Procedia PDF Downloads 0
3560 Analyzing Apposition and the Typology of Specific Reference in Newspaper Discourse in Nigeria
Authors: Monday Agbonica Bello Eje
Abstract:
The language of the print media is characterized by the use of apposition. This linguistic element function strategically in journalistic discourse where it is communicatively necessary to name individuals and provide information about them. Linguistic studies on the language of the print media with bias for apposition have largely dwelt on other areas but the examination of the typology of appositive reference in newspaper discourse. Yet, it is capable of revealing ways writers communicate and provide information necessary for readers to follow and understand the message. The study, therefore, analyses the patterns of appositional occurrences and the typology of reference in newspaper articles. The data were obtained from The Punch and Daily Trust Newspapers. A total of six editions of these newspapers were collected randomly spread over three months. News and feature articles were used in the analysis. Guided by the referential theory of meaning in discourse, the appositions identified were subjected to analysis. The findings show that the semantic relation of coreference and speaker coreference have the highest percentage and frequency of occurrence in the data. This is because the subject matter of news reports and feature articles focuses on humans and the events around them; as a result, readers need to be provided with some form of detail and background information in order to identify as well as follow the discourse. Also, the non-referential relation of absolute synonymy and speaker synonymy no doubt have fewer occurrences and percentages in the analysis. This is tied to a major feature of the language of the media: simplicity. The paper concludes that appositions is mainly used for the purpose of providing the reader with much detail. In this way, the writer transmits information which helps him not only to give detailed yet concise descriptions but also in some way help the reader to follow the discourse.Keywords: apposition, discourse, newspaper, Nigeria, reference
Procedia PDF Downloads 181
3559 Exploring the Use of Discourse Markers by American Male and Female Politicians: A Corpus Based Study
Authors: Gohar Rahman, Rabia Saad Ullah
Abstract:
This research aims to examine the use of discourse markers in political speeches, differentiating between genders. The analysis centers on twelve speakers, comprising six males and six females. The speeches selected include commencement, victory, State of the Union, campaign, and presidential speeches. Halliday and Hasan's cohesion framework, specifically discourse markers, is utilized as the theoretical framework. The data are quantitatively analyzed using AntConc to identify marker frequency, and the findings are presented through tables and graphs produced in Excel. The findings suggest a divergence in discourse marker preferences between males and females. However, asserting that females utilize discourse markers more frequently due to an increased use of filler words, face-threat mitigation, and polite speech would be an exaggeration. The disparity in frequency is not substantial, suggesting that males and females exhibit varying language inclinations only to some degree. Keywords: discourse markers, political discourse, gender, speeches, language
Procedia PDF Downloads 60
3558 A Critical Discourse Analysis of President Muhammad Buhari's Speeches
Authors: Joy Aworo-Okoroh
Abstract:
Politics is about trust, and trust is challenged by the speaker's ability to manipulate language before the electorate. Critical discourse analysis investigates the role of language in constructing social relationships between a political speaker and his audience. This paper explores the linguistic choices made by President Muhammad Buhari that enshrine his ideologies, as well as the socio-political relations of power between him and Nigerians, in his speeches. Two of President Buhari's speeches, the inaugural and Independence Day speeches, are analyzed using Norman Fairclough's perspective on Halliday's systemic functional grammar. The analysis is at two levels. The first level is the identification of transitivity and modality choices in the speeches and how they reveal covert ideologies. The second level is premised on Norman Fairclough's model: the clauses are analyzed to identify elements of power, hesitation, persuasion, threat, and religious statements. It was discovered that Buhari is a dominant character who frequently manipulates material processes. Keywords: politics, critical discourse analysis, Norman Fairclough, systemic functional grammar
Procedia PDF Downloads 554
3557 The Effect of Unconscious Exposure to Religious Concepts on Mutual Stereotypes of Jews and Muslims in Israel
Authors: Lipaz Shamoa-Nir, Irene Razpurker-Apfeld
Abstract:
This research examined the impact of subliminal exposure to religious content on the mutual attitudes of majority group members (Jews) and minority group members (Muslims). Participants were subliminally exposed to religious concepts (e.g., Mezuzah, yarmulke or veil) and then they filled questionnaires assessing their stereotypes towards the out-group members. Each participant was primed with either in-group religious concepts, out-group concepts or neutral ones. The findings show that the Muslim participants were not influenced by the religious content to which they were exposed while the Jewish participants perceived the Muslims as less 'hostile' when subliminally exposed to religious concepts, regardless of concept type (out-group/in-group). This research highlights the influence of evoked religious content on out-group attitudes even when the perceiver is unaware of prime content. The power that exposure to content in a non-native language has in activating attitudes towards the out-group is also discussed.Keywords: intergroup attitudes, stereotypes, majority-minority, religious out-group, implicit content, native language
Procedia PDF Downloads 247