World Academy of Science, Engineering and Technology
[Cognitive and Language Sciences]
Online ISSN : 1307-6892
2166 A Theragnostic Approach for Alzheimer’s Disease Focused on Phosphorylated Tau
Authors: Tomás Sobrino, Lara García-Varela, Marta Aramburu-Núñez, Mónica Castro, Noemí Gómez-Lado, Mariña Rodríguez-Arrizabalaga, Antía Custodia, Juan Manuel Pías-Peleteiro, José Manuel Aldrey, Daniel Romaus-Sanjurjo, Ángeles Almeida, Pablo Aguiar, Alberto Ouro
Abstract:
Introduction: Alzheimer’s disease (AD) and other tauopathies are primary causes of dementia, causing progressive cognitive deterioration that entails serious repercussions for the patients' performance of daily tasks. Currently, there is no effective approach for the early diagnosis and treatment of AD and tauopathies. This study suggests a theragnostic approach based on the importance of phosphorylated tau protein (p-Tau) in the early pathophysiological processes of AD. We have developed a novel theragnostic monoclonal antibody (mAb) to provide both diagnostic and therapeutic effects. Methods/Results: We have developed a p-Tau mAb, which was conjugated with deferoxamine for radiolabeling with Zirconium-89 (89Zr) for PET imaging, as well as with fluorescent dyes for immunofluorescence assays. The p-Tau mAb was evaluated in vitro for toxicity by MTT assay, LDH activity, propidium iodide/Annexin V assay, caspase-3, and mitochondrial membrane potential (MMP) assay in both a mouse endothelial cell line (bEnd.3) and primary cortical neuron cultures. Importantly, no toxic effects were detected, even at p-Tau mAb concentrations above 100 µg/mL. In vivo experiments in the PS19 tauopathy mouse model show that the 89Zr-pTau-mAb and 89Zr-Fragments-pTau-mAb are stable in circulation for up to 10 days without toxic effects. However, less than 0.2% reached the brain, so further strategies have to be designed for crossing the blood-brain barrier (BBB). Moreover, an intraparenchymal treatment strategy was carried out: the PS19 mice were implanted with osmotic pumps (Alzet 1004) at two different time points, 4 and 7 months, providing the controlled release, for one month each, of the B6 antibody or the IgG1 control antibody. We demonstrated that B6-treated mice maintained their motor and memory abilities significantly better than IgG1-treated mice. In addition, we observed a significant reduction in p-Tau deposits in the brain. Conclusions/Discussion: A theragnostic pTau-mAb was developed. Moreover, we demonstrated that our p-Tau mAb recognizes very early pathological forms of p-Tau by non-invasive techniques, such as PET. In addition, the p-Tau mAb has no toxic effects, either in vitro or in vivo. Although the p-Tau mAb is stable in circulation, only 0.2% reaches the brain. However, direct intraventricular treatment significantly reduces cognitive impairment in Alzheimer's animal models, as well as the accumulation of toxic p-Tau species.
Keywords: Alzheimer's disease, theragnosis, tau, PET, immunotherapy, tauopathies
Procedia PDF Downloads 70
2165 Denoising Convolutional Neural Network Assisted Electrocardiogram Signal Watermarking for Secure Transmission in E-Healthcare Applications
Authors: Jyoti Rani, Ashima Anand, Shivendra Shivani
Abstract:
In recent years, physiological signals obtained in telemedicine have been stored independently from patient information. In addition, people have increasingly turned to mobile devices for information on health-related topics. Major authentication and security issues may arise from this storage, degrading the reliability of diagnostics. This study introduces a reversible watermarking approach that ensures security by utilizing the electrocardiogram (ECG) signal as a carrier for embedding patient information. In the proposed work, Pan-Tompkins++ is employed to convert the 1D ECG signal into a 2D signal. The frequency subbands of the signal are extracted using the redundant discrete wavelet transform (RDWT), and one of the subbands is then masked using multiresolution singular value decomposition (MSVD). Finally, the encrypted watermark is embedded within the signal. The experimental results show that the watermarked signal is indistinguishable from the original signal, ensuring the preservation of all diagnostic information. In addition, a denoising convolutional neural network (DnCNN) is used to denoise the retrieved watermark for improved accuracy. The proposed ECG signal-based watermarking method is supported by experimental results and evaluations of its effectiveness. The results of the robustness tests demonstrate that the watermark withstands the most prevalent watermarking attacks.
Keywords: ECG, VMD, watermarking, Pan-Tompkins++, RDWT, DnCNN, MSVD, chaotic encryption, attacks
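A minimal sketch of the embedding idea described above, assuming a standard 2-D DWT (via PyWavelets) as a stand-in for the paper's RDWT and plain SVD as a stand-in for MSVD; the Pan-Tompkins++ 1D-to-2D conversion, chaotic encryption, and DnCNN denoising stages are omitted, so this is illustrative rather than the authors' pipeline:
```python
# Illustrative sketch: embed a small watermark in the singular values of one wavelet
# subband of a 2-D ECG representation, then extract it (semi-blind, needs original S).
import numpy as np
import pywt

def embed(ecg_2d, watermark, alpha=0.05, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(ecg_2d, wavelet)        # frequency subbands
    U, S, Vt = np.linalg.svd(cH, full_matrices=False)     # mask one subband via SVD
    S_marked = S + alpha * watermark[: len(S)]            # additive embedding
    cH_marked = (U * S_marked) @ Vt
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet), S

def extract(marked_2d, original_S, alpha=0.05, wavelet="haar"):
    _, (cH, _, _) = pywt.dwt2(marked_2d, wavelet)
    S_marked = np.linalg.svd(cH, compute_uv=False)
    return (S_marked - original_S) / alpha                # recovered watermark values

rng = np.random.default_rng(0)
ecg_2d = rng.normal(size=(64, 64))                        # stand-in 2-D ECG "image"
wm = rng.integers(0, 2, size=64).astype(float)
marked, S0 = embed(ecg_2d, wm)
recovered = extract(marked, S0)
print(np.round(recovered[:8]), wm[:8])                    # bits round-trip intact
```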
Procedia PDF Downloads 101
2164 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
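A minimal sketch of the reconstruction-error check at the core of this defense, with an illustrative convolutional architecture and threshold rather than the authors' multi-modal spectral autoencoder:
```python
# Train on benign images only, then flag inputs whose per-image MSE reconstruction
# error exceeds a threshold chosen on held-out benign data.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def is_adversarial(model, x, threshold):
    """Per-image MSE reconstruction error compared against a benign-data threshold."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
    return err > threshold

model = ConvAE()
batch = torch.rand(4, 3, 256, 256)          # stand-in for benign/adversarial images
print(is_adversarial(model, batch, threshold=0.02))
```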
Procedia PDF Downloads 112
2163 Exploring Reading into Writing: A Corpus-Based Analysis of Postgraduate Students’ Literature Review Essays
Authors: Tanzeela Anbreen, Ammara Maqsood
Abstract:
Reading into writing is one of the academic skills university students need most. The current study explored postgraduate university students’ writing quality using a corpus-based approach. Twelve postgraduate students’ literature review essays were chosen for the corpus-based analysis. These essays were chosen because students had to incorporate multiple reading sources, which was a new writing exercise for them. The students were provided feedback at least twice, comprising written comments by the tutor highlighting areas for improvement and edits made using the ‘track changes’ function. This exercise was repeated twice, and students submitted two drafts; only the finally submitted work was included in this investigation. A corpus-based approach was adopted to analyse the essays because it promotes autonomous discovery and personalised learning. The aim of this analysis was to understand the existing level of students’ writing before the start of their postgraduate thesis. Text Inspector was used to analyse the quality of the essays. With the help of the Text Inspector tool, the vocabulary used in the essays was compared to the English Vocabulary Profile (EVP), which describes what learners know and can do at each Common European Framework of Reference (CEFR) level. Writing quality was also measured with the Flesch reading ease score, a standard measure of how easy written content is to understand. The results reflected that students found writing essays using multiple sources challenging. In most essays, the vocabulary level achieved was between the B1 and B2 CEFR levels. The study recommends that students need extensive training in developing academic writing skills, particularly in writing literature review assignments, which require citing multiple sources.
Keywords: literature review essays, postgraduate students, corpus-based analysis, vocabulary proficiency
Procedia PDF Downloads 73
2162 Interculturalizing Ethiopian Universities: Between Initiation and Institutionalization
Authors: Desta Kebede Ayana, Lies Sercu, Demelash Mengistu
Abstract:
The study is set in Ethiopia, a sub-Saharan multilingual, multiethnic African country, which has seen a significant increase in the number of universities in recent years. The aim of this growth is to provide access to education for all cultural and linguistic groups across the country. However, there are challenges in promoting intercultural competence among students in this diverse context. The aim of the study is to investigate the interculturalization of Ethiopian Higher Education Institutions as perceived by university lecturers and administrators. In particular, the study aims to determine the level of support for this educational innovation and gather suggestions for its implementation and institutionalization. The researchers employed semi-structured interviews with administrators and lecturers from two large Ethiopian universities to gather data. Thematic analysis was utilized for coding and analyzing the interview data, with the assistance of the NVIVO software. The findings obtained from the grounded analysis of the interview data reveal that while there are opportunities for interculturalization in the curriculum and campus life, support for educational innovation remains low. Administrators and lecturers also emphasize the government's responsibility to prioritize interculturalization over other educational innovation goals. The study contributes to the existing literature by examining an under-researched population in an under-researched context. Additionally, the study explores whether Western perspectives of intercultural competence align with the African context, adding to the theoretical understanding of intercultural education. The data for this study was collected through semi-structured interviews conducted with administrators and lecturers from two large Ethiopian universities. The interviews allowed for an in-depth exploration of the participants' views on interculturalization in higher education. Thematic analysis was applied to the interview data, allowing for the identification and organization of recurring themes and patterns. The analysis was conducted using the NVIVO software, which aided in coding and analyzing the data. The study addresses the extent to which administrators and lecturers support the interculturalization of Ethiopian Higher Education Institutions. It also explores their suggestions for implementing and institutionalizing intercultural education, as well as their perspectives on the current level of institutionalization. The study highlights the challenges in interculturalizing Ethiopian universities and emphasizes the need for greater support and prioritization of intercultural education. It also underscores the importance of considering the African context when conceptualizing intercultural competence. This research contributes to the understanding of intercultural education in diverse contexts and provides valuable insights for policymakers and educational institutions aiming to promote intercultural competence in higher education settings.
Keywords: administrators, educational change, Ethiopia, intercultural competence, lecturers
Procedia PDF Downloads 97
2161 Teaching How to Speak ‘Correct’ English in No Time: An Assessment of the ‘Success’ of Professor Higgins’ Motivation in George Bernard Shaw’s Pygmalion
Authors: Armel Mbon
Abstract:
This paper examines the ‘success’ of George Bernard Shaw's main character Professor Higgins' motivation in teaching Eliza Doolittle, a young Cockney flower girl, how to speak 'correct' English in no time in Pygmalion. It should be noted that Shaw, in whose writings language issues feature prominently, did not believe there is such a thing as perfectly correct English, but rather saw the varieties of spoken English as a source of the language's richness. Indeed, along with his fellow phonetician Colonel Pickering, Henry Higgins succeeds in teaching Eliza, whom he at first judges unfairly, the dialect of the upper classes and Received Pronunciation, thereby facilitating her social advancement. So, after six months of rigorous learning, Eliza's speech and manners are transformed, and she is able to pass herself off as a lady. Such is the success of Professor Higgins’ motivation in linguistically transforming his learner in record time. On the other hand, his motivation is unsuccessful since, by the end of the play, he cannot take Eliza, whom he believes he has shaped in his so-called good image, as his wife. So, this paper aims to show, in support of the psychological approach, that in motivation, feelings, pride, and prejudice cannot be combined, and that one should not prejudge someone’s attitude based purely on how well they speak English.
Keywords: teaching, speak, in no time, success
Procedia PDF Downloads 69
2160 Enhancing Technical Trading Strategy on the Bitcoin Market using News Headlines and Language Models
Authors: Mohammad Hosein Panahi, Naser Yazdani
Abstract:
We present a technical trading strategy that leverages the FinBERT language model and financial news analysis, with a focus on news related to a subset of Nasdaq 100 stocks. Our approach surpasses the baseline Range Break-out strategy in the Bitcoin market, yielding a remarkable 24.8% increase in the win ratio for all Friday trades and an impressive 48.9% surge in short trades specifically on Fridays. Moreover, we conduct rigorous hypothesis testing to establish the statistical significance of these improvements. Our findings underscore the considerable potential of our NLP-driven approach in enhancing trading strategies and achieving greater profitability within financial markets.
Keywords: quantitative finance, technical analysis, bitcoin market, NLP, language models, FinBERT, technical trading
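An illustrative sketch of the headline-scoring step, assuming the publicly available ProsusAI/finbert checkpoint and a simple sentiment-gating rule, neither of which is necessarily the paper's exact choice:
```python
# Score headlines with a FinBERT checkpoint and derive a simple sentiment filter for
# a range-breakout signal. Model id, aggregation, and gating rule are assumptions.
from transformers import pipeline

finbert = pipeline("text-classification", model="ProsusAI/finbert")  # assumed checkpoint

headlines = [
    "Nasdaq 100 tech stocks rally on strong earnings",
    "Chipmaker shares slump as guidance disappoints",
]
scores = finbert(headlines)
# Map labels to signed scores and average them into one daily sentiment value.
signed = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
daily_sentiment = sum(signed[s["label"]] * s["score"] for s in scores) / len(scores)

breakout_signal = "long"   # stand-in for the baseline Range Break-out signal
take_trade = (breakout_signal == "long" and daily_sentiment > 0) or \
             (breakout_signal == "short" and daily_sentiment < 0)
print(daily_sentiment, take_trade)
```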
Procedia PDF Downloads 75
2159 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target
Authors: Vishal Raj, Noorhan Abbas
Abstract:
Ambiguity in NLP (natural language processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric, and referential ambiguity. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used the lemma and part-of-speech (POS) tokens of words for training and testing: the lemma adds generality, and the POS adds the word's grammatical properties to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms are devised to predict the sense of a target word using the affinity/similarity values. Each contextual token contributes some value to the sense of the target word, and whichever sense receives the highest value becomes the sense of the target word. Thus, contextual tokens play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging interpretation. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
Keywords: word sense disambiguation (WSD), contextual sense model (CSM), most frequent sense (MFS), part of speech (POS), natural language processing (NLP), out of vocabulary (OOV), lemma_POS (a token where the lemma and POS of a word are joined by an underscore), information retrieval (IR), machine translation (MT)
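A toy sketch of the affinity idea, with hand-made sense clusters standing in for the clusters the paper derives from the affinity matrix under the POS hierarchy:
```python
# Build a co-occurrence "affinity matrix" over lemma_POS tokens, then score each
# candidate sense of a target word by summing its affinity with the context tokens.
from collections import defaultdict
from itertools import combinations

training = [
    ["deposit_NOUN", "bank_NOUN", "money_NOUN"],
    ["loan_NOUN", "bank_NOUN", "interest_NOUN"],
    ["river_NOUN", "bank_NOUN", "water_NOUN"],
]
affinity = defaultdict(float)
for sent in training:
    for a, b in combinations(sent, 2):                 # symmetric pairwise counts
        affinity[(a, b)] += 1.0
        affinity[(b, a)] += 1.0

sense_clusters = {                                      # hand-made clusters for the demo
    "bank#finance": ["deposit_NOUN", "money_NOUN", "loan_NOUN", "interest_NOUN"],
    "bank#river": ["river_NOUN", "water_NOUN"],
}

def predict_sense(context_tokens):
    scores = {
        sense: sum(affinity[(c, m)] for c in context_tokens for m in members)
        for sense, members in sense_clusters.items()
    }
    return max(scores, key=scores.get), scores

print(predict_sense(["money_NOUN", "interest_NOUN"]))   # -> bank#finance
```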
Procedia PDF Downloads 107
2158 The Role of Named Entity Recognition for Information Extraction
Authors: Girma Yohannis Bade, Olga Kolesnikova, Grigori Sidorov
Abstract:
Named entity recognition (NER) is a building block for information extraction. Though the information extraction process has been automated using a variety of techniques to find and extract relevant information from unstructured documents, the discovery of targeted knowledge still poses a number of research difficulties because of the variability and lack of structure in Web data. NER, a subtask of information extraction (IE), emerged to ease this difficulty. It deals with finding proper names (named entities) in a document, such as the names of persons, countries, locations, organizations, dates, and events, and categorizing them under predetermined labels, which is an initial step in IE tasks. This survey paper presents the roles and importance of NER for IE from the perspective of different algorithms and application-area domains. It summarizes how researchers have implemented NER in particular application areas such as finance, medicine, defense, business, food science, and archaeology. It also outlines three types of sequence labeling algorithms for NER: feature-based, neural-network-based, and rule-based. Finally, the state of the art and evaluation metrics of NER are presented.
Keywords: the role of NER, named entity recognition, information extraction, sequence labeling algorithms, named entity application area
Procedia PDF Downloads 80
2157 Self-focused Language and the Reversive Impact of Depression in Negative Mood
Authors: Soheil Behdarvandirad
Abstract:
The relationship between depression and self-focused language has been well documented: the more depressed a person is, the more "I"s, "me"s, and "my"s they will use. The present study attempted to factor in the impact of mood and examine whether negative mood has self-focusing effects similar to those of depression. For this purpose, 160 Iranian native speakers of Farsi were divided into three experimental groups: negative, neutral, and positive mood. After completing the BDI-II depression inventory, they were presented with pretested mood stimuli (three separate videos inducing the target moods). Finally, they were asked to write for 10 to 20 minutes in response to a prompt inviting them to write freely about their state of life, how they felt about it, and the reasons that had shaped their current life circumstances. While a significant correlation between depression and I-talk was observed, negative mood led to more we-talk in general and even seemed to push participants away from self-rumination. This appears to be an emotion-regulatory strategy that participants subconsciously adopt to distract themselves from the disturbing mood. However, negative mood intensified self-focused language among depressed participants. Such results can be further studied by examining brain areas involved in self-perception, particularly the precuneus.
Keywords: self-focused language, depression, mood, precuneus
Procedia PDF Downloads 84
2156 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization
Authors: Yihao Kuang, Bowen Ding
Abstract:
With their increasing scale and complexity, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend in knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve inference performance and interpretability. The Proximal Policy Optimization (PPO) algorithm takes a proximal approach to policy optimization, allowing more extensive updates of policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved policies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO has the advantage of offline learning, effectively utilizing historical experience data for training and enhancing sample utilization; this means that, even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain better and more efficient inference by introducing PPO into knowledge graph inference technology.
Keywords: reinforcement learning, PPO, knowledge inference, supervised learning
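A minimal sketch of PPO's clipped surrogate loss, the mechanism that constrains each policy update; the knowledge-graph environment (entity/relation path rollouts and their rewards) is abstracted into stand-in tensors:
```python
# Clipped surrogate objective: penalize policy ratios that move outside [1-eps, 1+eps].
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    ratio = torch.exp(new_logp - old_logp)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()        # negate to maximize the surrogate

# Stand-in rollout data: log-probs of the chosen relation at each step and advantages.
old_logp = torch.tensor([-1.2, -0.7, -2.1])
new_logp = torch.tensor([-1.0, -0.9, -1.5])
advantages = torch.tensor([0.5, -0.3, 1.2])
print(ppo_clip_loss(new_logp, old_logp, advantages))
```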
Procedia PDF Downloads 67
2155 A Semiotic Approach to Vulnerability in Conducting Gesture and Singing Posture
Authors: Johann Van Niekerk
Abstract:
The disciplines of conducting (instrumental or choral) and of singing presume a willingness toward an open posture and, in many cases, demand it for effective communication and technique. Yet this very openness, with the "spread-eagle" gesture as an extreme, is oftentimes counterintuitive for musicians and within the trajectory of human evolution. Conversely, it is in this very gesture of "taking up space" that confidence-gaining techniques such as the popular "power pose" are based. This paper consists primarily of a literature review exploring the topics of physical openness and vulnerability and considering the semiotics of the "spread-eagle" and its accompanying letter X. A major finding of this research is the discrepancy between the evolutionary instinct toward physical self-protection and "folding in" and the demands of a discipline that calls for physical and gestural openness, expansiveness, and vulnerability. A secondary finding concerns ways in which encouraging confidence-gaining techniques may be more effective in obtaining the required results than insisting on vulnerability, which is shaped by cultural context and socialization. Choral conductors and music educators are constantly seeking ways to promote engagement and healthy singing, and much of the information and direction toward this goal is gleaned by students from conducting gestures and other pedagogies employed in the rehearsal. The findings of this research provide yet another avenue toward effective teaching and artistry on the part of instructors and students alike.
Keywords: conducting, gesture, music, pedagogy, posture, vulnerability
Procedia PDF Downloads 82
2154 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
To address memorization overfitting in the MAML meta-learning algorithm, a method for generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth in computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. Experiments show that the proposed mutex task generation method can effectively alleviate memorization overfitting in the MAML meta-learning algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
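A toy sketch of the idea, assuming a simple label permutation as the feature-to-multiple-labels mapping and random subsampling as a stand-in for the key data extraction step:
```python
# The same inputs are paired with a permuted label assignment in each generated task,
# so a meta-learner cannot memorize a fixed input->label mapping and must adapt from
# each task's support set. Details differ from the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

def make_mutex_task(texts, labels, n_classes, subsample=4):
    idx = rng.choice(len(texts), size=subsample, replace=False)   # "key data" subset
    perm = rng.permutation(n_classes)                             # remap class ids
    return [texts[i] for i in idx], [int(perm[labels[i]]) for i in idx]

texts = ["great movie", "terrible plot", "loved it", "boring film", "fine acting", "awful sound"]
labels = [1, 0, 1, 0, 1, 0]                                       # original sentiment labels
for t in range(3):                                                # three mutex tasks
    print(f"task {t}:", make_mutex_task(texts, labels, n_classes=2))
```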
Procedia PDF Downloads 93
2153 Discourse Markers in Chinese University Students and Native English Speakers: A Corpus-Based Study
Authors: Dan Xie
Abstract:
The use of discourse markers (DMs) can play a crucial role in representing discourse interaction and pragmatic competence. Learners’ use of DMs, and differences between native speakers (NSs) and non-native speakers (NNSs) in the use of various DMs, have been the focus of considerable research attention. However, some commonly used DMs, such as you know, have not received as much attention in comparative studies, especially in the Chinese context. This study analyses data from two corpora (COLSEC and the Spoken BNC 2014 (14-25)) to investigate how Chinese learners differ from NSs in their use of the DM you know and its functions in speech. The results show that there is a significant difference between the two corpora in the frequency of use of you know. In terms of its functions, the study shows that all six functions are present in both corpora, although there are significant differences across five functional dimensions, especially in introducing a claim linked to the prior discourse and in highlighting particular points in the discourse. The study aims to show empirically how Chinese learners and NSs use DMs differently.
Keywords: you know, discourse marker, native speaker, Chinese learner
Procedia PDF Downloads 81
2152 Mask-Prompt-Rerank: An Unsupervised Method for Text Sentiment Transfer
Authors: Yufen Qin
Abstract:
Text sentiment transfer is an important branch of text style transfer. The goal is to generate text with a different sentiment attribute from text with a given sentiment attribute, while keeping the content and semantic information unrelated to sentiment unchanged. There are currently two main challenges in this field: the lack of parallel corpora and text attribute entanglement. In response to these problems, this paper proposes a novel solution, Mask-Prompt-Rerank, which masks the sentiment words and then uses prompt-based regeneration to transfer the sentence's sentiment. Experiments on two sentiment benchmark datasets and one formality transfer benchmark dataset show that this approach makes the performance of small pre-trained language models comparable to that of the most advanced large models, while consuming two orders of magnitude less compute and memory.
Keywords: language model, natural language processing, prompt, text sentiment transfer
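An illustrative sketch of the mask-then-regenerate-then-rerank loop, assuming a small hand-made sentiment lexicon and generic off-the-shelf masked-LM and sentiment checkpoints rather than the paper's exact components:
```python
# Mask the sentiment-bearing word, let a masked LM propose replacements, then rerank
# candidates by a target-sentiment score. Lexicon, models, and scoring are assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
sentiment = pipeline("text-classification",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

NEGATIVE_WORDS = {"terrible", "awful", "boring"}          # toy sentiment lexicon

def transfer_to_positive(sentence):
    masked = " ".join(fill.tokenizer.mask_token if w in NEGATIVE_WORDS else w
                      for w in sentence.split())
    candidates = [c["sequence"] for c in fill(masked, top_k=5)]
    # Rerank: keep the candidate the sentiment classifier scores as most positive.
    def positivity(text):
        out = sentiment(text)[0]
        return out["score"] if out["label"] == "POSITIVE" else -out["score"]
    return max(candidates, key=positivity)

print(transfer_to_positive("the food was terrible"))
```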
Procedia PDF Downloads 81
2151 Unsupervised Domain Adaptive Text Retrieval with Query Generation
Authors: Rui Yin, Haojie Wang, Xun Li
Abstract:
Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to only a few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and then the generated queries are used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains while requiring less unlabeled data.
Keywords: dense retrieval, query generation, unsupervised training, text retrieval
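A condensed sketch of the pipeline under stated assumptions: the query generator, cross-encoder, and BM25 negative miner named below are illustrative choices, and the final dense-retriever fine-tuning step is left out:
```python
# Generate a query per target-domain passage, mine a hard negative with BM25, and
# pseudo-label the pair with a cross-encoder; the resulting triples would then be
# used to fine-tune the dense retriever. Checkpoints are assumptions for illustration.
from transformers import pipeline
from sentence_transformers import CrossEncoder
from rank_bm25 import BM25Okapi

passages = [
    "The warranty covers battery replacement within the first two years.",
    "Firmware updates are installed automatically over Wi-Fi.",
    "The device is rated IP68 for dust and water resistance.",
]

qgen = pipeline("text2text-generation", model="BeIR/query-gen-msmarco-t5-base-v1")
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
bm25 = BM25Okapi([p.lower().split() for p in passages])

training_triples = []
for i, passage in enumerate(passages):
    query = qgen(passage, max_length=32)[0]["generated_text"]
    ranked = bm25.get_scores(query.lower().split()).argsort()[::-1]
    negative = next(passages[j] for j in ranked if j != i)       # top non-source passage
    pos_score, neg_score = cross_encoder.predict([(query, passage), (query, negative)])
    training_triples.append((query, passage, negative, float(pos_score - neg_score)))

for t in training_triples:
    print(t)
```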
Procedia PDF Downloads 73
2150 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge
Authors: Yulan Wu
Abstract:
The spread of fake news on social media has caused significant harm to the public and to society, with threats spanning various domains, including politics, economics, and health. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms with news spanning multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data. However, these methods overlook the fact that a news article typically belongs to multiple domains, leading to the loss of domain knowledge contained within the news text. Research has also found that news records in different domains often use different vocabularies to describe their content. To address this, we propose a fake news detection framework that combines domain knowledge and expert knowledge. First, an unsupervised domain discovery module generates a low-dimensional domain embedding for each news article, which retains multi-domain knowledge of the news content. Then, a feature extraction module uses these unsupervised domain embeddings to guide multiple experts in extracting news features for the overall representation. Finally, a classifier determines whether the news is fake or not. Experiments show that this approach improves multi-domain fake news detection performance while reducing the cost of manually annotating domain labels.
Keywords: fake news, deep learning, natural language processing, multiple domains
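A minimal sketch of the domain-gated expert mixture, with stand-in tensors in place of the text encoder and the unsupervised domain discovery module, and illustrative layer sizes:
```python
# A low-dimensional domain embedding gates several expert extractors applied to the
# encoded news text; the gated mixture feeds a fake/real classifier.
import torch
import torch.nn as nn

class DomainGatedExperts(nn.Module):
    def __init__(self, text_dim=768, domain_dim=8, n_experts=4, hidden=128):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU()) for _ in range(n_experts)]
        )
        self.gate = nn.Sequential(nn.Linear(domain_dim, n_experts), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(hidden, 2)            # fake vs. real

    def forward(self, text_emb, domain_emb):
        expert_out = torch.stack([e(text_emb) for e in self.experts], dim=1)  # (B, E, H)
        weights = self.gate(domain_emb).unsqueeze(-1)                         # (B, E, 1)
        fused = (weights * expert_out).sum(dim=1)                             # (B, H)
        return self.classifier(fused)

model = DomainGatedExperts()
text_emb = torch.randn(2, 768)     # stand-in encoder output for two news items
domain_emb = torch.randn(2, 8)     # stand-in unsupervised domain embeddings
print(model(text_emb, domain_emb).shape)   # torch.Size([2, 2])
```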
Procedia PDF Downloads 73
2149 Probing Syntax Information in Word Representations with Deep Metric Learning
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, with the development of large-scale pre-trained language models, building vector representations of text through deep neural network models has become standard practice for natural language processing tasks. Performance on downstream tasks indicates that the text representations constructed by these models contain linguistic information, but how, and to what extent, it is encoded remains unclear. In this work, a structural probe is proposed to detect whether the vector representations produced by a deep neural network encode a syntax tree. The probe is trained with deep metric learning so that distances between word vectors in the metric space it defines encode distances between words on the syntax tree, and the norms of word vectors encode the depths of words in the syntax tree. The experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
Keywords: deep metric learning, syntax tree probing, natural language processing, word representations
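A compact sketch of a structural distance probe in this spirit: a learned linear map whose squared L2 distances are fitted to tree distances; the depth (norm) component and the paper's specific deep metric learning objective are simplified away:
```python
# Learn a linear map B so that squared L2 distances between projected word vectors
# approximate syntax-tree distances. Word vectors and tree distances are stand-ins.
import torch

torch.manual_seed(0)
n_words, model_dim, probe_rank = 6, 32, 16
word_vecs = torch.randn(n_words, model_dim)              # stand-in ELMo/BERT vectors
tree_dist = torch.randint(1, 5, (n_words, n_words)).float()
tree_dist = (tree_dist + tree_dist.T) / 2
tree_dist.fill_diagonal_(0)                              # stand-in syntax-tree distances

B = torch.randn(model_dim, probe_rank, requires_grad=True)
opt = torch.optim.Adam([B], lr=0.01)
for step in range(200):
    proj = word_vecs @ B
    pred = torch.cdist(proj, proj) ** 2                  # squared L2 in probe space
    loss = (pred - tree_dist).abs().mean()               # L1 fit to tree distances
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))                                       # fit error after training
```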
Procedia PDF Downloads 68
2148 3D Reconstruction of Human Body Based on Gender Classification
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
SMPL-X is a powerful parametric human body model that includes male, neutral, and female models, with significant gender differences between the three. During 3D human body reconstruction, the correct selection of the standard template is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm is the precise analysis of human body features: using the SMPL-X model, the algorithm detects and identifies gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as it can adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advancements to the field of 3D human body reconstruction. By accurately selecting standard templates, they improve the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing more realistic and accurate human body models.
Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction
Procedia PDF Downloads 70
2147 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-Fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; and 3) focusing on the parts of the representations of a claim and its references that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include i) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and ii) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: fact checking, claim verification, deep learning, natural language processing
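A small sketch of the attention step in point 3), assuming encoder outputs as stand-in tensors and omitting the knowledge-base augmentation and reference search:
```python
# Claim token vectors attend over reference token vectors with multi-head attention
# before a veracity classifier; encoder outputs here are random stand-ins.
import torch
import torch.nn as nn

d_model = 256
claim_tokens = torch.randn(1, 20, d_model)      # (batch, claim_len, dim) from an encoder
reference_tokens = torch.randn(1, 80, d_model)  # (batch, ref_len, dim) from an encoder

attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)
attended, weights = attn(query=claim_tokens, key=reference_tokens, value=reference_tokens)

classifier = nn.Linear(d_model, 2)               # supported vs. refuted
logits = classifier(attended.mean(dim=1))        # pool the attended claim representation
print(logits.shape, weights.shape)               # (1, 2) and (1, 20, 80)
```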
Procedia PDF Downloads 62
2146 Didacticization of Code Switching as a Tool for Bilingual Education in Mali
Authors: Kadidiatou Toure
Abstract:
Mali started experimenting with teaching the national languages at school through the convergent pedagogy in 1987. It was in 1994 that this became widespread, with eleven of the thirteen former national languages used at primary school. The aim was to improve the Malian educational system, because the use of French as the only medium of instruction was considered a contributing factor to the significant number of student dropouts and the high rate of repetition. The convergent pedagogy highlights the knowledge acquired by children at home, their vision of the world, and especially the knowledge they have of their mother tongue. This pedagogy requires the use of a specific medium during classroom practice, and teachers have been trained in this sense. The specific medium depends on the learning content: sometimes it is French, other times it is the national language. Research has shown that bilingual learners do not use only the required medium in their learning activities; they also code-switch, which is part of their learning processes. Currently, many scholars agree on the importance of CS in bilingual classes, and teachers have been told about the necessity of integrating it into their classroom practices. One of the challenges of the Malian bilingual education curriculum is the question of ‘effective language management’. Theoretically, depending on the classroom, an average has been established for each of the languages involved. In practice, teachers make use of CS differently: sometimes it favors the learners; other times it contributes to the development of linguistic weaknesses. The present research tries to fill that gap through a tentative model of didactization of CS, which simply means the practical management of the languages involved in bilingual classrooms, that is, knowing how to use CS for effective learning. Moreover, the didactization of CS tends to sensitize teachers to the functional role of CS so that they may overcome their own weaknesses. The overall goal of this research is to make code switching a real tool for bilingual education. The specific objectives are: to identify the types of CS used during classroom activities; to present the functional role of CS for teachers as well as pupils; and to develop a tentative model of code-switching that will help teachers in the transitional classes of bilingual schools to recognize the appropriate moments for making use of code switching in their classrooms. The methodology adopted is qualitative. The study is based on recorded videos of third-year primary school teachers during their classroom activities and on interviews with the teachers in order to confirm the functional role of CS in bilingual classes. The theoretical framework adopted is the typology of CS proposed by Poplack (1980), used to identify the types of CS observed. The study reveals that teachers need to be trained on the types of CS, the different functions they assume, and the consequences of inappropriate use of language alternation.
Keywords: bilingual curriculum, code switching, didactization, national languages
Procedia PDF Downloads 71
2145 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review
Authors: Andrei Nosov
Abstract:
This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI either are already applied or will potentially be applied. Conversational AI is one of the popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP that rely on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review state-of-the-art research results on dialogue systems and analyse them from three perspectives: the type of problem to be solved, the type of model, and the type of system. In particular, from the perspective of the types of tasks to be solved, we discuss their characteristics and applications, which helps in understanding how to prioritise tasks. In terms of the types of models, we give an overview that will allow researchers to become familiar with how to apply them in dialogue systems. By analysing the types of dialogue systems, we propose an unconventional approach, in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems; our view focuses on considering retrieval and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent research results.
Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation
Procedia PDF Downloads 63
2144 Human Posture Estimation Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this information to obtain a set of high-confidence human key points. These were used as the input for a Spatio-Temporal Graph Convolutional Network (ST-GCN). ST-GCN is a deep learning model for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and feeding it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and offers strong support for further research and application in related fields.
Keywords: multi-view, pose estimation, ST-GCN, joint fusion
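A minimal sketch of one ST-GCN-style block operating on fused key points, with a stand-in adjacency matrix and illustrative layer sizes:
```python
# Spatial graph convolution over the skeleton adjacency, then a temporal convolution
# across frames; the fused multi-view key points from MvP are a random stand-in tensor.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        deg = A.sum(dim=1)
        self.register_buffer("A_norm", A / deg.clamp(min=1).unsqueeze(1))  # row-normalized
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, frames, joints)
        x = torch.einsum("nctv,vw->nctw", x, self.A_norm)   # aggregate over neighbors
        x = self.relu(self.spatial(x))
        return self.relu(self.temporal(x))

n_joints = 17
A = torch.eye(n_joints)                        # stand-in adjacency (self-loops only)
block = STGCNBlock(in_ch=3, out_ch=64, A=A)
keypoints = torch.randn(2, 3, 30, n_joints)    # 2 clips, xyz, 30 frames, 17 joints
print(block(keypoints).shape)                  # torch.Size([2, 64, 30, 17])
```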
Procedia PDF Downloads 70
2143 Parametric Template-Based 3D Reconstruction of the Human Body
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo, Linhang Zhu
Abstract:
This study proposes a 3D human body reconstruction method that integrates multi-view joint information into a set of joints and processes it with a parametric human body template. First, we obtained human body images captured from multiple perspectives; multi-view information helps avoid self-occlusion and occlusion problems during the reconstruction process. Then, we used the MvP algorithm to integrate the multi-view joint information into a set of joints. Next, we used the parametric human body template SMPL-X to obtain more accurate three-dimensional human body reconstruction results. Compared with traditional single-view parametric human body template reconstruction, this method significantly improves the accuracy and stability of the reconstruction.
Keywords: parametric human body templates, reconstruction of the human body, multi-view, joint
Procedia PDF Downloads 79
2142 An Overview of College English Writing Teaching Studies in China Between 2002 and 2022: Visualization Analysis Based on CiteSpace
Authors: Yang Yiting
Abstract:
This paper employs CiteSpace to conduct a visualization analysis of the literature on college English writing teaching published in core journals from the CNKI database and in CSSCI journals between 2002 and 2022. It aims to explore the characteristics of research on college English writing teaching and its future directions. The present study yielded the following major findings: the field primarily focuses on innovative writing teaching models and methods, the integration of traditional classroom teaching and information technology, and instructional strategies to enhance students' writing skills. Future research is anticipated to involve a hybrid writing teaching approach combining online and offline teaching methods and leveraging the "Internet+" digital platform, with the aim of elevating students' writing proficiency. This paper also presents a prospective outlook for college English writing teaching research in China.
Keywords: CiteSpace, college English, writing teaching, visualization analysis
Procedia PDF Downloads 70
2141 Track Initiation Method Based on Multi-Algorithm Fusion Learning of 1DCNN And Bi-LSTM
Abstract:
In electronic countermeasure (ECM) environments and complex radar missions, high-density clutter and interference affect track initiation for radar-detected targets, and traditional track initiation methods have difficulty adapting. To this end, we propose a multi-algorithm fusion learning track initiation algorithm, which transforms the track initiation problem into a true/false track discrimination problem and designs a fusion classifier based on a one-dimensional convolutional neural network (1DCNN) combined with a bidirectional long short-term memory network (Bi-LSTM). The experimental dataset consists of real trajectories obtained from measurements of a certain type of three-coordinate radar, and the experiments compare the proposed method with traditional track initiation methods such as the rule-based, logic-based, and Hough-transform-based methods. The simulation results show that the overall performance of the multi-algorithm fusion learning track initiation algorithm is significantly better than that of the traditional methods, and the true track initiation rate is effectively improved under high clutter density, with an average initiation time similar to that of the logic method.
Keywords: track initiation, multi-algorithm fusion, 1DCNN, Bi-LSTM
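An illustrative sketch of the fusion classifier, with assumed sequence length, measurement features, and layer sizes:
```python
# A 1DCNN branch and a Bi-LSTM branch both read a candidate track's measurement
# sequence; their features are concatenated for the true/false-track decision.
import torch
import torch.nn as nn

class TrackInitNet(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(16 + 2 * hidden, 2)       # true track vs. false track

    def forward(self, seq):                             # seq: (batch, time, features)
        cnn_feat = self.cnn(seq.transpose(1, 2)).squeeze(-1)      # (batch, 16)
        _, (h, _) = self.bilstm(seq)                              # h: (2, batch, hidden)
        lstm_feat = torch.cat([h[0], h[1]], dim=-1)               # forward + backward
        return self.head(torch.cat([cnn_feat, lstm_feat], dim=-1))

net = TrackInitNet()
tracks = torch.randn(5, 8, 4)    # 5 candidate tracks, 8 scans, 4 measurement features
print(net(tracks).shape)         # torch.Size([5, 2])
```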
Procedia PDF Downloads 94
2140 Balancing Independence and Guidance: Cultivating Student Agency in Blended Learning
Authors: Yeo Leng Leng
Abstract:
Blended learning, with its combination of online and face-to-face instruction, presents a unique set of challenges and opportunities in terms of cultivating student agency. While it offers flexibility and personalized learning pathways, it also demands a higher degree of self-regulation and motivation from students. This paper presents the design of blended learning in a Chinese lesson and discusses the framework involved. It also describes the edtech tools adopted to engage the students, and some of the students' work is showcased. A qualitative case study research method was employed to find out more about students' learning experiences and to give them a voice. The purpose is to seek improvement in the blended learning design of the Chinese lessons and to encourage students' self-directed learning.
Keywords: blended learning, student agency, ed-tech tools, self-directed learning
Procedia PDF Downloads 78
2139 The Effectiveness of Using Picture Storybooks on Young English as a Foreign Language Learners for English Vocabulary Acquisition and Moral Education: A Case Study
Authors: Tiffany Yung Hsuan Ma
Abstract:
The Whole Language Approach, which gained prominence in the 1980s, and the increasing emphasis on multimodal resources in educational research have elevated the utilization of picture books in English as a foreign language (EFL) instruction. This approach underscores real-world language application, providing EFL learners with a range of sensory stimuli, including visual elements. Additionally, the substantial impact of picture books on fostering prosocial behaviors in children has garnered recognition. These narratives offer opportunities to impart essential values such as kindness, fairness, and respect. Examining how picture books enhance vocabulary acquisition can offer valuable insights for educators in devising engaging language activities conducive to a positive learning environment. This research entails a case study involving two kindergarten-aged EFL learners and employs qualitative methods, including worksheets, observations, and interviews with parents. It centers on three pivotal inquiries: (1) the extent of young learners' acquisition of essential vocabulary, (2) the influence of these books on their behavior at home, and (3) effective teaching strategies for the seamless integration of picture storybooks into EFL instruction for young learners. The findings can provide guidance to parents, educators, curriculum developers, and policymakers regarding the advantages of and optimal approaches to incorporating picture books into language instruction. Ultimately, this research has the potential to enhance English language learning outcomes and promote moral education within the Taiwanese EFL context.
Keywords: EFL, vocabulary acquisition, young learners, picture book, moral education
Procedia PDF Downloads 69
2138 3D Human Body Reconstruction Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
The aim of this study was to improve the quality of 3D human body reconstruction. The MvP algorithm was adopted to obtain key point information from multiple perspectives. This algorithm allowed the capture of human posture and joint positions from multiple angles, providing more comprehensive and accurate data. The study also incorporated the SMPL-X model, which has been widely used for human body modeling, to achieve more accurate 3D reconstruction results. The use of the MvP algorithm made it possible to observe the reconstructed subject from multiple angles, thus reducing the problems of blind spots and missing information. This algorithm was able to effectively capture key point information, including the positions and rotation angles of limbs, providing key data for subsequent 3D reconstruction. Compared with traditional single-view methods, multi-view fusion significantly improved the accuracy and stability of the reconstruction. By combining the MvP algorithm with the SMPL-X model, we achieved better 3D human body reconstruction results. The SMPL-X model is highly scalable and can generate highly realistic 3D human body models, thus providing more detail and shape information.
Keywords: 3D human reconstruction, multi-view, joint point, SMPL-X
Procedia PDF Downloads 70
2137 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose estimation algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image frames that conform to the Manhattan world assumption; when similar frames appear later, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of those environments. This provides effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loop closure detection
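A small sketch of the drift-correction idea for a frame assumed to satisfy the Manhattan world hypothesis: snap the estimated camera rotation to the nearest axis-aligned rotation; detecting such frames and the full mapping pipeline are out of scope:
```python
# For a frame whose true orientation is aligned with the Manhattan frame, small
# rotational drift in the estimate can be removed by snapping each rotation column
# to its dominant signed axis and projecting back onto SO(3).
import numpy as np

def nearest_rotation(M):
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

def snap_to_manhattan(R_est):
    snapped = np.zeros_like(R_est)
    for c in range(3):                       # round each column to a signed axis
        axis = np.argmax(np.abs(R_est[:, c]))
        snapped[axis, c] = np.sign(R_est[axis, c])
    return nearest_rotation(snapped)

theta = np.deg2rad(4.0)                      # simulated accumulated rotational drift
drift = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
R_true = np.eye(3)                           # Manhattan-aligned ground-truth rotation
R_corrected = snap_to_manhattan(drift @ R_true)
print(np.allclose(R_corrected, R_true))      # True: drift removed for this frame
```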
Procedia PDF Downloads 61