Search results for: pre-trained language models

9733 JaCoText: A Pretrained Model for Java Code-Text Generation

Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri

Abstract:

Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged: automatic programming language code generation. This task consists of translating natural language instructions into source code. Although well-known pre-trained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network. It aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pre-trained models, (2) explore additional pretraining on our Java dataset, (3) conduct experiments combining unimodal and bimodal data in training, and (4) scale the input and output length during fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
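
To make the recipe concrete, here is a minimal sketch of initializing from a pretrained seq2seq checkpoint and fine-tuning on natural-language/Java pairs. The checkpoint name ("t5-small"), the toy pair, and the generation settings are placeholders, not JaCoText's actual configuration.

```python
# Hedged sketch: fine-tuning a pretrained seq2seq Transformer on NL -> Java pairs.
# "t5-small" is a stand-in checkpoint; JaCoText's actual initialization and data
# pipeline are not reproduced here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

nl = "Returns the maximum of two integers."
inputs = tokenizer(nl, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer("int max(int a, int b) { return a > b ? a : b; }",
                   return_tensors="pt").input_ids

# One training step (a real run would loop over the CONCODE training split).
outputs = model(**inputs, labels=labels)
outputs.loss.backward()

# Generation after fine-tuning, with a scaled-up output length as in the paper.
generated = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```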

Keywords: java code generation, natural language processing, sequence-to-sequence models, transformer neural networks

Procedia PDF Downloads 237
9732 Models and Metamodels for Computer-Assisted Natural Language Grammar Learning

Authors: Evgeny Pyshkin, Maxim Mozgovoy, Vladislav Volkov

Abstract:

The paper follows a discourse on computer-assisted language learning. We examine problems of foreign language teaching and learning and introduce a metamodel that can be used to define learning models of language grammar structures in order to support teacher/student interaction. Special attention is paid to the concept of a virtual language lab. Our approach to language education aims to encourage learners to experiment with a language and to learn by discovering patterns of grammatically correct structures created and managed by a language expert.

Keywords: computer-assisted instruction, language learning, natural language grammar models, HCI

Procedia PDF Downloads 479
9731 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems that aims to determine the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has been performed mainly by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream-task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
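
For reference, a minimal sketch of the classical baseline the study compares against: a linear SVM over bag-of-n-gram features. The toy Turkish comments and labels are invented stand-ins for the company's 76,000-comment corpus; the BERT and MUSE pipelines are not reproduced here.

```python
# Hedged sketch of the classical baseline: bag-of-n-grams features + linear SVM.
# The toy comments stand in for the company's 76k Turkish comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

comments = ["harika bir hizmet",      # positive
            "cok kotu bir deneyim",   # negative
            "fena degil idare eder"]  # neutral
labels = ["positive", "negative", "neutral"]

# Word n-grams (1-2); no stemming or spelling correction, mirroring the paper's
# finding that such pre-processing added little even for agglutinative Turkish.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(comments, labels)
print(clf.predict(["urun gercekten harika"]))
```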

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 114
9730 The Content-Based Classroom: Perspectives on Integrating Language and Content

Authors: Mourad Ben Bennani

Abstract:

Views of language and language learning have undergone a tremendous change over the last decades. Language is no longer seen as a set of structured rules; it is rather viewed as a tool of interaction and communication. This shift has changed how language learning is viewed, giving birth to various approaches and methodologies of language teaching. Two of these approaches are content-based instruction (CBI) and content and language integrated learning (CLIL). These are similar approaches that integrate content and foreign/second language learning through various methodologies and models, as a result of different implementations around the world. This presentation deals with the sociocultural view of CBI and CLIL. It also defines language and content as vital components of CBI and CLIL. Next, it reviews the origins of CBI, the continuum perspectives, and the CLIL definitions and models featured in the literature. Finally, it summarizes current research on program evaluation, with a focus on the benefits and challenges of these innovative approaches for second language teaching.

Keywords: CBI, CLIL, CBI continuum, CLIL models

Procedia PDF Downloads 386
9729 Dual Language Immersion Models in Theory and Practice

Authors: S. Gordon

Abstract:

Dual language immersion is growing fast in language teaching today. This study provides an overview and evaluation of the different models of dual language immersion (DLI) programs in US K-12 schools. First, the paper provides a brief review of the current literature on the theory of DLI in second language acquisition (SLA) studies. Second, examples of several types of DLI teaching models in US K-12 public schools are presented (including 50/50 models, 90/10 models, etc.). Third, we focus on the unique example of DLI education in the state of Utah, a successful, growing program in K-12 schools that includes French, Chinese, Spanish, and Portuguese. The project investigates the theory and practice, particularly in the case of public elementary and secondary school children who study half their school day in the L1 and the other half in the chosen L2, from kindergarten (age 5-6) through high school (age 17-18). Finally, the project takes observations of Utah French DLI elementary through secondary programs as a case study. To conclude, we look at the principal challenges, pedagogical objectives and outcomes, and important implications for other US states and other countries (such as France currently) that are in the process of developing similar language learning programs.

Keywords: dual language immersion, second language acquisition, language teaching, pedagogy, teaching, French

Procedia PDF Downloads 141
9728 Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models

Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev

Abstract:

Nowadays, natural language processing (NLP) tools play a crucial role in our daily lives, including various text processing techniques. Very advanced models exist for widely resourced languages such as English and Russian, but in some languages, such as Uzbek, NLP models have been developed only recently, and only a few are available. Moreover, no prior work shows how the existing Uzbek NLP models behave in different situations and when to use them. This work tries to close this gap and compares the Uzbek NLP models existing as of the time this article was written. The authors compare the models in two scenarios, sentiment analysis and sentence similarity, which are implementations of the two most common problems in industry: classification and similarity. Another outcome of this work is two datasets for classification and sentence similarity in the Uzbek language, which we generated ourselves and which can be useful in both industry and academia.

Keywords: NLP, benchmark, BERT, vectorization

Procedia PDF Downloads 25
9727 Probing Language Models for Multiple Linguistic Information

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form. However, it is unknown how much linguistic information is encoded, and how. In this paper, we construct corresponding probing tasks for multiple kinds of linguistic information to clarify the encoding capabilities of different language models, and provide a visual analysis. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa, and GPT. Classifiers with a small number of parameters and unsupervised tasks are then applied to these word vectors to assess their capability to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects. The semantic aspect includes the model's ability to understand semantic entities such as numbers, times, and characters, and the syntactic aspect includes the language model's ability to understand grammatical structures such as dependency relations and reference relations. We also compare the encoding capabilities of different layers in the same language model to infer how linguistic information is encoded in the model.
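
A minimal sketch of the probing setup, assuming a BERT checkpoint as the frozen encoder and an invented toy task (detecting numbers); the layer index, pooling choice, and data are illustrative only.

```python
# Hedged sketch of a probing classifier: freeze a pretrained encoder, train a
# small linear probe on its layer activations to test for linguistic information.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

# Toy probe task: does the sentence contain a number? (labels are illustrative)
sentences = ["I bought three apples.", "She sings beautifully.",
             "The meeting is at 10 am.", "He walked home."]
labels = [1, 0, 1, 0]

def layer_embedding(sentence, layer=8):
    """Mean-pool the hidden states of one encoder layer (no gradient updates)."""
    with torch.no_grad():
        enc = tokenizer(sentence, return_tensors="pt")
        hidden = encoder(**enc).hidden_states[layer]  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

X = [layer_embedding(s) for s in sentences]
probe = LogisticRegression(max_iter=1000).fit(X, labels)  # small-capacity probe
print(probe.score(X, labels))  # compare across layers/models to localize information
```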

Keywords: language models, probing task, text representation, linguistic information

Procedia PDF Downloads 70
9726 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources

Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha

Abstract:

Training a language model for a minority language has been a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in the area of natural language processing (NLP). Moreover, the need for high computational resources and bulk data limits the attainment of this task. In this paper, we present the following contributions: (1) we introduce and use a translation pair set of Tagalog and English (TL-EN) in pre-training a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking pre-trained semantic textual similarity binary task (STSB) models on both TL-EN and STS dataset pairs; and (3) we reduce the size of the model to offset the need for high computational resources. Based on our results, the models that were pre-trained on translation pairs and STS pairs can perform well on the STSB task. Also, reducing the model to a smaller dimension has no negative effect on performance but rather yields a notable increase in the similarity scores. Moreover, models that were pre-trained on a similar dataset have a tremendous effect on the model's performance scores.
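
A minimal sketch of the dimension-reduction step, assuming a public multilingual sentence-transformer checkpoint and toy Tagalog-English pairs; the paper's actual models, target dimension, and data differ.

```python
# Hedged sketch: shrink sentence embeddings with PCA and check that similarity
# scores survive, as the paper reports. Model name and data are placeholders.
from sentence_transformers import SentenceTransformer, util
from sklearn.decomposition import PCA

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
pairs = [("Kumusta ka?", "How are you?"),
         ("Mahal kita", "I love you")]  # toy Tagalog-English pairs

texts = [t for pair in pairs for t in pair]
emb = model.encode(texts)                  # e.g. 384-dim vectors

pca = PCA(n_components=3).fit(emb)         # tiny toy dimension; a real run
reduced = pca.transform(emb)               # would pick a larger target size

for i, (a, b) in enumerate(pairs):
    full = util.cos_sim(emb[2 * i], emb[2 * i + 1]).item()
    small = util.cos_sim(reduced[2 * i], reduced[2 * i + 1]).item()
    print(f"{a!r} vs {b!r}: full={full:.3f} reduced={small:.3f}")
```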

Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models

Procedia PDF Downloads 176
9725 Semantic Textual Similarity on Contracts: Exploring Multiple Negative Ranking Losses for Sentence Transformers

Authors: Yogendra Sisodia

Abstract:

Researchers are becoming more interested in extracting useful information from legal documents thanks to the development of large-scale language models in natural language processing (NLP), and deep learning has accelerated the creation of powerful text mining models. Legal fields like contracts benefit greatly from semantic text search, since it makes it quick and easy to find related clauses. After collecting sentence embeddings, it is relatively simple to locate sentences with a comparable meaning throughout the entire legal corpus. The author of this research investigated two pre-trained language models for this task, MiniLM and RoBERTa, and further fine-tuned them on legal contracts. The author used Multiple Negative Ranking Loss for the creation of sentence transformers. The fine-tuned language models and sentence transformers showed promising results.
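
The loss described here corresponds to MultipleNegativesRankingLoss in the sentence-transformers library. A minimal sketch, with invented contract clauses and a generic base checkpoint standing in for the fine-tuned MiniLM/RoBERTa models:

```python
# Hedged sketch of fine-tuning a sentence transformer with Multiple Negatives
# Ranking Loss; the contract clauses here are invented placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-MiniLM-L6-v2")

# Positive pairs only: the loss treats the other in-batch examples as negatives.
train_examples = [
    InputExample(texts=["The supplier shall deliver goods within 30 days.",
                        "Delivery must occur no later than thirty days."]),
    InputExample(texts=["This agreement terminates upon written notice.",
                        "Either party may end the contract by notifying in writing."]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```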

Keywords: legal contracts, multiple negative ranking loss, natural language inference, sentence transformers, semantic textual similarity

Procedia PDF Downloads 69
9724 Literacy in First and Second Language: Implication for Language Education

Authors: Inuwa Danladi Bawa

Abstract:

One of the challenges facing African states in the development of education, past and present, is the problem of literacy. Literacy in the first language is seen as a strong base for the development of the second language, which is usually the language of education. Language development is an offshoot of language planning, so the need to develop literacy in both the first and second language affects language education and predicts the extent of achievement of the entire education sector. The need to balance literacy acquisition in the first language so as to condition the acquisition of the second language is paramount. Likely constraints include non-standardized, underdeveloped, and undeveloped first languages, among many others. Solutions include the development of materials and the use of the stages and levels of literacy acquisition. This rests on the belief that a child writes well in the second language if he or she is literate in the first language.

Keywords: first language, second language, literacy, English language, linguistics

Procedia PDF Downloads 405
9723 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: ‘Reddit’

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields in natural language processing (NLP). The task is mainly concerned with predicting the native language of an author from their writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: the baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones, both when tested within the corpus and across corpora.
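
A minimal sketch of the cross-corpus protocol: train on one corpus, evaluate on another. The sentences and labels are invented, and character n-grams merely approximate the paper's content-independent features.

```python
# Hedged sketch of cross-corpus NLI: fit on TOEFL-like data, test on Reddit-like
# data. All texts and native-language labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["I am agree with this opinion.", "He explained me the problem."]
train_l1 = ["Spanish", "French"]             # native-language labels (toy)
test_texts = ["She suggested me a movie."]   # drawn from a different corpus

# Character n-grams approximate content-independent features: they capture
# spelling and grammar patterns rather than topic words.
for clf in (LinearSVC(), LogisticRegression(max_iter=1000)):
    pipe = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), clf)
    pipe.fit(train_texts, train_l1)
    print(type(clf).__name__, pipe.predict(test_texts))
```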

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 103
9722 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction

Authors: Zhengrong Wu, Haibo Yang

Abstract:

In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. Drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, this study constructs question-answer templates based on large language models. Utilizing the P-Tune method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as a knowledge-base support for disaster emergency response.
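
A hedged sketch of the template-driven triple extraction; the `chat` callable stands in for the P-Tune fine-tuned ChatGLM2-6B, whose actual API and prompts are not reproduced here.

```python
# Hedged sketch: question-answer prompt templates for extracting knowledge-graph
# triples from disaster reports. `chat` is a placeholder for any fine-tuned
# instruction-following LLM.
TRIPLE_TEMPLATE = (
    "Read the disaster report below and list (subject, relation, object) "
    "triples, one per line.\n\nReport: {report}\n\nTriples:"
)

def extract_triples(report, chat):
    """Parse the model's line-per-triple answer into tuples for graph insertion."""
    answer = chat(TRIPLE_TEMPLATE.format(report=report))
    triples = []
    for line in answer.splitlines():
        parts = [p.strip(" ()") for p in line.split(",")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

# Example with a stubbed model response:
fake_chat = lambda prompt: ("(Typhoon Haiyan, struck, Philippines)\n"
                            "(Typhoon Haiyan, occurred_on, 2013-11-08)")
print(extract_triples("Typhoon Haiyan struck the Philippines on 8 Nov 2013.",
                      fake_chat))
```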

Keywords: large language model, knowledge graph, disaster, deep learning

Procedia PDF Downloads 23
9721 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. Early natural language processing long used word vector methods such as Word2Vec to encode text; these word vector methods can also be regarded as static pre-training techniques. However, this context-free text representation brings very limited improvement to subsequent natural language processing tasks and cannot solve the problem of word polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model has significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field of natural language processing and ushering it into the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have since continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training techniques in the field, including the early static techniques and the classic dynamic ones; it then briefly surveys a series of subsequent pre-training technologies, including improved models based on BERT and XLNet; on this basis, it analyzes the problems faced by current pre-training research; finally, it looks ahead to future development trends of pre-training technology.

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 24
9720 Transportation Language Register as One of Language Community

Authors: Diyah Atiek Mustikawati

Abstract:

Language register refers to a variety of a language used for a particular purpose or in a particular social setting. Language register also means adapting one's use of language to conform to standards or traditions in a given professional or social situation. This descriptive study discusses the forms of language register in the transportation domain, the factors behind them, and their functions. Language register in transportation mostly uses short sentences in an informal register. The factors behind the register used are the speaker, word choice, and language background. The functions of language register in transportation are to make communication between crew members easy and to maintain safety in dangerous conditions. The transportation language register developed naturally as one variety of language in use.

Keywords: language register, language variety, communication, transportation

Procedia PDF Downloads 444
9719 A Comparative Study of Approaches in User-Centred Health Information Retrieval

Authors: Harsh Thakkar, Ganesh Iyer

Abstract:

In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted to CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval, namely the language model (LM) and the vector space model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, MetaMap, UMLS, etc. We observed that the LM-based retrieval systems outperform VSM-based systems on various fronts. From the results, we conclude that the state-of-the-art system scored a MAP of 0.4146, a P@10 of 0.7560, and an NDCG@10 of 0.7445. All of these scores were achieved by systems built on language modeling approaches.
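
To make the contrast concrete, a minimal sketch of the LM side: query likelihood with Dirichlet smoothing over a toy two-document collection (the smoothing constant mu and the documents are illustrative).

```python
# Hedged sketch contrasting the two model families: a query-likelihood language
# model with Dirichlet smoothing, versus TF-IDF cosine (vector space model).
import math
from collections import Counter

docs = {"d1": "chest pain and shortness of breath".split(),
        "d2": "treatment options for type 2 diabetes".split()}
query = "chest pain treatment".split()

collection = Counter(w for d in docs.values() for w in d)
coll_len = sum(collection.values())

def lm_score(doc, mu=2000.0):
    """log P(query | doc) under a Dirichlet-smoothed unigram language model."""
    tf, dl = Counter(doc), len(doc)
    return sum(math.log((tf[w] + mu * collection[w] / coll_len) / (dl + mu))
               for w in query if collection[w] > 0)

for name, doc in docs.items():
    print(name, round(lm_score(doc), 3))
# A VSM run would instead rank by cosine similarity of TF-IDF vectors; the
# CLEF eHealth results above favour the language-modeling side.
```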

Keywords: clinical document retrieval, concept-based information retrieval, query expansion, language models, vector space models

Procedia PDF Downloads 291
9718 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9

Authors: Ulrich Wake, Eniman Syamsuddin

Abstract:

The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone and is divided into 1280 photos for training, 320 photos for validation, and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using test-time augmentation. Furthermore, the model is trained for 50 epochs using regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with test-time augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.
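
A minimal PyTorch sketch of this recipe, assuming a torchvision ResNet50, a dropout head, and the OneCycleLR scheduler; batch sizes, learning rates, and the placeholder tensors are illustrative, not the paper's exact settings.

```python
# Hedged sketch of the training recipe: a pretrained ResNet50, lightly
# fine-tuned with the One Cycle learning-rate policy and dropout.
import torch
import torch.nn as nn
from torchvision import models

num_phones = 4  # Mi Pad 4, Oppo A37f, Samsung M20, Oppo F9
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Sequential(nn.Dropout(0.5),
                         nn.Linear(model.fc.in_features, num_phones))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
steps_per_epoch, epochs = 40, 50  # e.g. 1280 training photos / batch size 32
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=epochs, steps_per_epoch=steps_per_epoch)

criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)            # placeholder batch
labels = torch.randint(0, num_phones, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
scheduler.step()  # called once per batch under the One Cycle policy
```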

Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation

Procedia PDF Downloads 172
9717 A Graph-Based Retrieval Model for Passage Search

Authors: Junjie Zhong, Kai Hong, Lei Wang

Abstract:

Passage retrieval (PR) plays an important role in many natural language processing (NLP) tasks. Traditional efficient retrieval models relying on exact term matching, such as TF-IDF or BM25, have nowadays been surpassed by pre-trained language models, which match by semantics. Though they gain effectiveness, deep language models often incur large memory and time costs. To tackle the trade-off between efficiency and effectiveness in PR, this paper proposes Graph Passage Retriever (GraphPR), a graph-based model inspired by the development of graph learning techniques. Different from existing works, GraphPR is end-to-end and integrates both term-matching information and semantics. GraphPR constructs a passage-level graph from BM25 retrieval results and trains a GCN-like model on the graph with graph-based objectives. Passages are regarded as nodes in the constructed graph and are embedded as dense vectors. PR can then be implemented using the embeddings and a fast vector-similarity search. Experiments on a variety of real-world retrieval datasets show that the proposed model outperforms related models on several evaluation metrics (e.g., mean reciprocal rank, accuracy, F1-score) while maintaining relatively low query latency and memory usage.
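
A minimal sketch of the propagation idea, with a hard-coded toy graph standing in for edges derived from BM25 top-k lists; GraphPR's actual architecture and training objectives are not reproduced here.

```python
# Hedged sketch of the core idea: build a passage graph from BM25 neighbours,
# then propagate embeddings with one GCN-style layer. Shapes are toy-sized.
import torch

num_passages, dim = 5, 16
features = torch.randn(num_passages, dim)        # initial passage embeddings

# Edges between passages that appear in each other's BM25 top-k lists
# (hard-coded here for illustration).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
A = torch.eye(num_passages)                      # self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in Kipf & Welling's GCN.
deg = A.sum(dim=1)
D_inv_sqrt = torch.diag(deg.pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

W = torch.nn.Linear(dim, dim, bias=False)        # trainable layer weight
node_emb = torch.relu(A_hat @ W(features))       # one propagation step

# Retrieval then reduces to a fast vector-similarity search over node_emb.
query_emb = torch.randn(dim)
scores = node_emb @ query_emb
print(scores.argsort(descending=True))
```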

Keywords: efficiency, effectiveness, graph learning, language model, passage retrieval, term-matching model

Procedia PDF Downloads 91
9716 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models

Authors: Tejush Badal, Sanaa Alwidian

Abstract:

Class diagrams are useful tools within the Unified Modelling Language (UML) for modeling and visualizing the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. In circumstances where there are several versions of a model, examining each model individually becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model, i.e., a single generic model. The paper aims to analyze and reason about a family of class diagrams using union models, as opposed to individual analysis of each member model in the family. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach. This, in turn, speeds up the analysis by examining the family of models together rather than one model at a time.
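
A minimal sketch of the union-model idea under a simplifying assumption: each class diagram is reduced to a set of named elements, and the union annotates every element with the versions it appears in.

```python
# Hedged sketch of a union model: every element of every class diagram in the
# family is kept once, annotated with the set of member models it occurs in.
from collections import defaultdict

family = {
    "v1": {("class", "Order"), ("class", "Customer"), ("assoc", "Order-Customer")},
    "v2": {("class", "Order"), ("class", "Invoice"), ("assoc", "Order-Invoice")},
}

union_model = defaultdict(set)
for model_id, elements in family.items():
    for element in elements:
        union_model[element].add(model_id)   # provenance annotation

# One pass over the union answers family-wide queries without re-analyzing
# each member model, e.g. elements common to every version:
common = [e for e, ids in union_model.items() if ids == set(family)]
print(common)   # [('class', 'Order')]
```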

Keywords: analysis, class diagram, model family, unified modeling language, union model

Procedia PDF Downloads 46
9715 Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof

Authors: Cheikh Bamba Dione, Alla Lo, Elhadji Mamadou Nguer, Siley O. Ba

Abstract:

In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back-translation, and the copied corpus method. We evaluate the models using the BLEU score and find that the transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level units. For subword-level models, using back-translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied-corpus data with back-translation leads to a substantial improvement in translation quality.
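
A minimal sketch of the back-translation step, with a stubbed reverse model standing in for the trained Wolof-to-French system; the interface is a placeholder, not a specific library.

```python
# Hedged sketch of back-translation: a reverse (Wolof-to-French) model turns
# monolingual Wolof text into synthetic French-Wolof training pairs.
def back_translate(monolingual_wo, reverse_model):
    """Create synthetic (fr, wo) pairs from Wolof-only sentences."""
    synthetic_pairs = []
    for wo_sentence in monolingual_wo:
        fr_synthetic = reverse_model.translate(wo_sentence)  # may be noisy
        synthetic_pairs.append((fr_synthetic, wo_sentence))  # target side is real
    return synthetic_pairs

class StubReverseModel:
    """Placeholder for the trained WO->FR model."""
    def translate(self, sentence):
        return f"<fr translation of: {sentence}>"

# The synthetic pairs are then mixed with the 83k genuine pairs; only the
# source side is machine-generated, so the decoder still learns from clean text.
print(back_translate(["Nanga def?"], StubReverseModel()))
```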

Keywords: backtranslation, low-resource language, neural machine translation, sequence-to-sequence, transformer, Wolof

Procedia PDF Downloads 115
9714 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 90
9713 Diagonal Vector Autoregressive Models and Their Properties

Authors: Usoro Anthony E., Udoh Emediong

Abstract:

Diagonal vector autoregressive (VAR) models are special classes of the general VAR models, identified under certain conditions, where parameters are restricted to the diagonal elements of the coefficient matrices. Variance, autocovariance, and autocorrelation properties of the upper- and lower-diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably against the general VAR models. The advantage of the diagonal models over the existing models is that they are parsimonious, given the reduction in the interactive coefficients of the general VAR models.
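
For the simplest member of the class, a VAR(1) whose coefficient matrix is assumed purely diagonal, the quoted properties reduce to those of component-wise AR(1) processes:

```latex
% Diagonal VAR(1): the coefficient matrix is restricted to its diagonal,
% so each component reduces to a univariate AR(1).
\[
\mathbf{y}_t = A\,\mathbf{y}_{t-1} + \boldsymbol{\varepsilon}_t,
\qquad
A = \operatorname{diag}(a_{11},\dots,a_{kk}),
\qquad
\boldsymbol{\varepsilon}_t \sim \mathrm{WN}(\mathbf{0},\Sigma).
\]
% For component i (with |a_{ii}| < 1), variance and autocovariance follow as
\[
\gamma_i(0) = \frac{\sigma_{ii}}{1-a_{ii}^2},
\qquad
\gamma_i(h) = a_{ii}^{\,h}\,\gamma_i(0), \quad h \ge 0,
\]
% so the autocorrelation \rho_i(h) = a_{ii}^h decays geometrically, exactly as
% in a univariate AR(1).
```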

Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations

Procedia PDF Downloads 82
9712 Self-Supervised Pretraining on Sequences of Functional Magnetic Resonance Imaging Data for Transfer Learning to Brain Decoding Tasks

Authors: Sean Paulsen, Michael Casey

Abstract:

In this work we present a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.

Keywords: transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training

Procedia PDF Downloads 60
9711 Models of Bilingual Education in Majority Language Contexts: An Exploratory Study of Bilingual Programmes in Qatari Primary Schools

Authors: Fatma Al-Maadheed

Abstract:

Following an ethnographic approach, this study explored bilingual programmes offered by two types of primary schools in Qatar: international and independent schools. Qatar, with its unique linguistic and socio-economic situation, launched a new initiative for educational development in 2001, but with hardly any research linked to these changes. The study reveals that the Qatari bilingual schools context was one of heteroglossia, with three codes in operation: Modern Standard Arabic, colloquial Arabic dialects, and English. The two schools adopted different models of bilingualism. The international school adopted a strict separation policy between the two languages, following a monoglossic belief. The independent school was found to apply a flexible language policy. The study also highlighted the daily challenges produced by the diglossia situation in Qatar and the difference between students' and teachers' dialects, as well as acquiring literacy in the formal language. In addition to the absence of a clear language policy in schools, the study brought attention to the instructional methods utilised in language teaching, which are mostly associated with successful bilingual education.

Keywords: diglossia, instructional methods, language policy, Qatari primary schools

Procedia PDF Downloads 451
9710 Improving Academic Literacy in the Secondary History Classroom

Authors: Wilhelmina van den Berg

Abstract:

By intentionally developing the register continuum and the functional model of language in the secondary history classroom, teachers can effectively build a teaching and learning cycle geared towards literacy improvement and EAL differentiation. Developing an understanding of, and engaging students in, the field, tenor, and mode of written and spoken language allows students to build the foundation for greater academic achievement through literacy skills integrated into the history classroom. Building a variety of scaffolds into lessons within these models means students can improve their academic language and communication skills.

Keywords: academic language, EAL, functional model of language, international baccalaureate, literacy skills

Procedia PDF Downloads 38
9709 Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language

Authors: Daleesha M. Viswanathan, Sumam Mary Idicula

Abstract:

Dynamic hand gestures are an intrinsic component of sign language communication. Extracting spatio-temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on orientation features is the main concern of this paper. The Kalman filter algorithm and hidden Markov models (HMMs) are incorporated into this recognition system for hand trajectory tracking and for spatio-temporal classification, respectively.
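
A minimal sketch of the tracking component: a constant-velocity Kalman filter over toy 2D hand positions (the noise covariances and measurements are illustrative); orientation features would then be computed from the smoothed trajectory.

```python
# Hedged sketch: a constant-velocity Kalman filter smoothing a 2D hand
# trajectory before orientation features are extracted from the smoothed path.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # position is observed
Q, R = np.eye(4) * 1e-3, np.eye(2) * 1e-1           # process / measurement noise

x, P = np.zeros(4), np.eye(4)                       # state: (px, py, vx, vy)
for z in [np.array([0.0, 0.0]), np.array([1.1, 0.4]), np.array([2.0, 1.1])]:
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    print("smoothed position:", x[:2])
```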

Keywords: orientation features, discrete feature vector, HMM, Indian sign language

Procedia PDF Downloads 342
9708 User Intention Generation with Large Language Models Using Chain-of-Thought Prompting

Authors: Gangmin Li, Fan Yang

Abstract:

Personalized recommendation is crucial for any recommendation system. One technique for personalization is to identify the user's intention. Traditional user intention identification uses the user's selections when facing multiple items. This modeling relies primarily on historical behaviour data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in large language models (LLMs) like ChatGPT, we present an approach to user intention identification that embraces LLMs with chain-of-thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's responses across various recommendation tasks, encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation from explicit user intention identification and from merging that intention into the user model.
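
A hedged sketch of what such a CoT prompt might look like; the template wording, profile fields, and the commented-out `llm` call are illustrative assumptions, not the paper's prompts.

```python
# Hedged sketch of a chain-of-thought prompt for intention identification.
COT_PROMPT = """User profile: {profile}
Recent ratings: {ratings}
Search and browse history: {history}

Let's reason step by step:
1. What topics recur across the history?
2. Do the ratings confirm or contradict those topics?
3. Given 1 and 2, state the user's current intention in one sentence.
Intention:"""

prompt = COT_PROMPT.format(
    profile="new user, stated interests: hiking, photography",
    ratings="camera bag 5/5, tripod 4/5",
    history="searched 'lightweight tent', browsed trail cameras",
)
# response = llm(prompt)  # any chat-completion API; the parsed intention is
# then merged into the user model that drives recommendation.
print(prompt)
```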

Keywords: personalized recommendation, generative user modelling, user intention identification, large language models, chain-of-thought prompting

Procedia PDF Downloads 9
9707 Enhancing Technical Trading Strategy on the Bitcoin Market using News Headlines and Language Models

Authors: Mohammad Hosein Panahi, Naser Yazdani

Abstract:

We present a technical trading strategy that leverages the FinBERT language model and financial news analysis, with a focus on news related to a subset of Nasdaq 100 stocks. Our approach surpasses the baseline range break-out strategy in the Bitcoin market, yielding a remarkable 24.8% increase in the win ratio for all Friday trades and an impressive 48.9% surge for short trades specifically on Fridays. Moreover, we conduct rigorous hypothesis testing to establish the statistical significance of these improvements. Our findings underscore the considerable potential of our NLP-driven approach for enhancing trading strategies and achieving greater profitability within financial markets.
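
A minimal sketch of the sentiment-gating idea, assuming the public "ProsusAI/finbert" checkpoint and an invented trade rule; the paper's exact model, headlines, and break-out logic are not reproduced here.

```python
# Hedged sketch: score news headlines with a FinBERT sentiment model to gate a
# range break-out trade. The gating rule below is illustrative only.
from transformers import pipeline

finbert = pipeline("text-classification", model="ProsusAI/finbert")

headlines = ["Nasdaq futures slide as chipmakers warn on demand",
             "Tech rally lifts megacap stocks to record highs"]
scores = finbert(headlines)

# Toy rule: only take the Friday short trade when aggregate headline
# sentiment is negative.
net = sum(+1 if s["label"] == "positive" else -1 if s["label"] == "negative" else 0
          for s in scores)
take_short = net < 0
print(scores, "=> take_short:", take_short)
```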

Keywords: quantitative finance, technical analysis, bitcoin market, NLP, language models, FinBERT, technical trading

Procedia PDF Downloads 34
9706 Mask-Prompt-Rerank: An Unsupervised Method for Text Sentiment Transfer

Authors: Yufen Qin

Abstract:

Text sentiment transfer is an important branch of text style transfer. The goal is to generate text with a different sentiment attribute from a text with a given sentiment attribute, while keeping the sentiment-unrelated content and semantic information unchanged. There are currently two main challenges in this field: the lack of parallel corpora and text attribute entanglement. In response, this paper proposes a novel solution: Mask-Prompt-Rerank, which masks the sentiment words and then uses prompt-based regeneration to transfer the sentence's sentiment. Experiments on two sentiment benchmark datasets and one formality transfer benchmark dataset show that this approach makes the performance of small pre-trained language models comparable to that of the most advanced large models, while consuming two orders of magnitude less compute and memory.
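
A hedged sketch of the three stages using off-the-shelf pipelines; the sentiment lexicon, the roberta-base filler, and the default sentiment reranker are toy stand-ins for the paper's components.

```python
# Hedged sketch of the Mask-Prompt-Rerank idea: mask sentiment words,
# regenerate candidates with a masked LM, rerank by target sentiment.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
senti = pipeline("sentiment-analysis")  # default English sentiment model

SENTIMENT_WORDS = {"terrible", "awful", "great", "wonderful"}

def transfer(sentence, target="POSITIVE"):
    # 1. Mask: replace sentiment words with the model's mask token.
    masked = " ".join(fill.tokenizer.mask_token
                      if w.lower().strip(".,") in SENTIMENT_WORDS else w
                      for w in sentence.split())
    # 2. Prompt: let the masked LM propose candidate fillings.
    candidates = [c["sequence"] for c in fill(masked)]
    # 3. Rerank: keep the candidate scored most strongly toward the target.
    scored = senti(candidates)
    best = max(zip(candidates, scored),
               key=lambda cs: cs[1]["score"] if cs[1]["label"] == target else -1)
    return best[0]

print(transfer("The food was terrible."))
```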

Keywords: language model, natural language processing, prompt, text sentiment transfer

Procedia PDF Downloads 48
9705 ‘Non-Legitimate’ Voices as L2 Models: Towards Becoming a Legitimate L2 Speaker

Authors: M. Rilliard

Abstract:

Based on a Multiliteracies-inspired and sociolinguistically informed advanced French composition class, this study employed autobiographical narratives from speakers traditionally considered non-legitimate models for L2 teaching, with the purpose of inspiring students to develop an authentic L2 voice and to see themselves as legitimate L2 speakers. Students explored their L2 identities in French through a self-inspired fictional character. Two autobiographical narratives of identity quest by non-traditional French speakers guided them through this process: the novel Le Bleu des Abeilles (2013) and the film Qu'Allah Bénisse la France (2014). Written and oral productions in French across different genres, as well as metalinguistic reflections in English, were collected and analyzed. Results indicate that ideas and materials students could relate to, namely relatable experiences and relatable language, were most useful in developing their L2 voices and achieving authentic and legitimate L2 speakership. These results point towards the benefits of using non-traditional speakers as pedagogical models, as they serve to legitimize students' sense of their own L2 speakership, which ultimately leads them towards a better, more informed mastery of the language.

Keywords: foreign language classroom, L2 identity, L2 learning and teaching, L2 writing, sociolinguistics

Procedia PDF Downloads 106
9704 Enhancing English Language Learning through Learners Cultural Background

Authors: A. Attahiru, Rabi Abdullahi Danjuma, Fatima Bint

Abstract:

Language and culture are two concepts so closely related that one affects the other. This paper attempts to define language and culture and discusses the relationship between them. The paper further presents some instructional strategies for the teaching of language and culture, as well as the influence of culture on language. It also looks at the implications for language education, and finally draws recommendations and conclusions.

Keywords: culture, language, relationship, strategies, teaching

Procedia PDF Downloads 377