Search results for: Bert Van Bavel
45 BERT-Based Chinese Coreference Resolution
Authors: Li Xiaoge, Wang Chaodong
Abstract:
We introduce the first Chinese Coreference Resolution Model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to incorporate mention features such as part of speech, span width, and distance between spans, and to analyze the influence of each feature on the model. The model computes mention embeddings that combine BERT representations with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.
Keywords: BERT, coreference resolution, deep learning, natural language processing
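As a rough illustration of the mention-embedding idea described above (the abstract does not publish the architecture, so the endpoint-concatenation scheme, dimensions, and feature choices below are all assumptions), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    """Combine contextual BERT token states with learned feature
    embeddings (part of speech, span width) into a mention vector."""
    def __init__(self, hidden=768, n_pos_tags=40, n_width_buckets=10, feat_dim=20):
        super().__init__()
        self.pos_emb = nn.Embedding(n_pos_tags, feat_dim)
        self.width_emb = nn.Embedding(n_width_buckets, feat_dim)
        self.proj = nn.Linear(2 * hidden + 2 * feat_dim, hidden)

    def forward(self, bert_states, start, end, pos_id, width_bucket):
        # Span endpoint representation, then concatenate feature embeddings.
        span = torch.cat([bert_states[start], bert_states[end]], dim=-1)
        feats = torch.cat([self.pos_emb(pos_id), self.width_emb(width_bucket)], dim=-1)
        return self.proj(torch.cat([span, feats], dim=-1))

encoder = MentionEncoder()
states = torch.randn(12, 768)  # 12 contextual token vectors from BERT
m = encoder(states, 3, 5, torch.tensor(7), torch.tensor(2))
print(m.shape)  # torch.Size([768])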
Procedia PDF Downloads 216
44 A Context-Centric Chatbot for Cryptocurrency Using the Bidirectional Encoder Representations from Transformers Neural Networks
Authors: Qitao Xie, Qingquan Zhang, Xiaofei Zhang, Di Tian, Ruixuan Wen, Ting Zhu, Ping Yi, Xin Li
Abstract:
Inspired by the recent rise of digital currency, we are building a question answering system on the subject of cryptocurrency using Bidirectional Encoder Representations from Transformers (BERT). The motivation behind this work is to properly assist digital currency investors by directing them to the corresponding knowledge bases that can offer them help and by increasing the querying speed. BERT, one of the newest language models in natural language processing, was investigated to improve the quality of generated responses. We studied different combinations of hyperparameters of the BERT model to obtain the best-fit responses. Further, we created an intelligent chatbot for cryptocurrency using BERT. A chatbot using BERT shows great potential for the further advancement of cryptocurrency market tools. We show that BERT generalizes well to other tasks by applying it successfully to the cryptocurrency domain.
Keywords: bidirectional encoder representations from transformers, BERT, chatbot, cryptocurrency, deep learning
Procedia PDF Downloads 147
43 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors
Authors: Dylan Elliott, Katya Pertsova
Abstract:
Preposition errors are among the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in Natural Language Processing (NLP). This research investigates whether the bidirectional encoder representations from transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT’s ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT’s performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgments of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT’s ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT’s false positives were grammatical suggestions, we plan a further crowd-sourcing study to test the grammaticality of BERT’s suggested sentence corrections against native speakers’ judgments.
Keywords: BERT, grammatical error correction, preposition error detection, prepositions
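A minimal sketch of the masking idea behind such suggestion generation, using the Hugging Face fill-mask pipeline (single-mask only; the paper's multi-level masking procedure is not specified in the abstract, and the model choice and preposition list are assumptions):

```python
from transformers import pipeline

# Mask a prepositional slot and let a masked language model
# rank candidate prepositions by probability.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "She arrived [MASK] the station ten minutes late."
prepositions = {"at", "in", "on", "to", "by", "from", "with"}

for cand in fill(sentence, top_k=50):
    token = cand["token_str"].strip()
    if token in prepositions:
        print(f"{token}\t{cand['score']:.4f}")
```

Comparing the top-scoring preposition against the one actually written gives a simple detection signal: a large probability gap suggests a possible error.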
Procedia PDF Downloads 147
42 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach
Authors: Adeep Hande, Shubham Agarwal
Abstract:
This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of a pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, with the multilingual variant achieving an F1 score of 0.83. Furthermore, pseudolabeling significantly improved the performance of the BERT-based models, which achieved the best overall results. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
Keywords: large language models, semi-supervised learning, sexism detection, data sparsity
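A minimal sketch of a generic pseudolabeling loop of the kind described (the abstract gives no thresholds or schedule, so those are assumptions; a logistic-regression stand-in replaces BERT so the example stays self-contained):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudolabel(clf, X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Iteratively add confidently predicted unlabeled examples
    to the training set and refit the classifier."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold          # only trust confident predictions
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, proba[keep].argmax(axis=1)])
        pool = pool[~keep]
    return clf

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(200, 5))
model = pseudolabel(LogisticRegression(), X_lab, y_lab, X_unlab)
```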
Procedia PDF Downloads 70
41 Relation between Low Thermal Stress and Antioxidant Enzymes Activity in a Sweetening Plant: Stevia Rebaudiana Bert
Authors: T. Bettaieb, S. Soufi, S. Arbaoui
Abstract:
Stevia rebaudiana Bert. is a naturally sweet plant. The leaves contain the diterpene glycosides stevioside, rebaudiosides A-F, steviolbioside, and dulcoside, which are responsible for its sweet taste and have commercial value all over the world as sugar substitutes in foods and medicines. Stevia rebaudiana Bert. is sensitive to temperatures lower than 9°C. The possibility of its outdoor culture under Tunisian conditions demands genotypes tolerant to low temperatures. In order to evaluate the low-temperature tolerance of eight genotypes of Stevia rebaudiana, the activities of superoxide dismutase (SOD), ascorbate peroxidase (APX), and catalase (CAT) were measured. Before the analyses were carried out, three genotypes of Stevia were exposed for one month to a temperature regime of 18°C during the day and 7°C at night, similar to winter conditions in Tunisia. In response to the stress generated by low temperature, antioxidant enzyme activity, revealed on native gels and quantified by spectrophotometry, showed variable levels according to the genotypes' degree of tolerance to low temperatures.
Keywords: chilling tolerance, enzymatic activity, Stevia rebaudiana Bert, low thermal stress
Procedia PDF Downloads 442
40 A Grey-Box Text Attack Framework Using Explainable AI
Authors: Esther Chiramal, Kelvin Soh Boon Kai
Abstract:
Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human-interpretable language. It provides the evidence required to deploy trustworthy and reliable AI systems. However, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word “poor” to “rich”. We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the model while using a set of surrogate Transformer/BERT models to perform the attack using Explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking a few surrogate transformer variants that are each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences by changing as little as one word, in a way that is not detectable by humans while still fooling other BERT models.
Keywords: BERT, explainable AI, grey-box text attack, transformer
Procedia PDF Downloads 136
39 One-Shot Text Classification with Multilingual-BERT
Authors: Hsin-Yang Wang, K. M. A. Salam, Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao
Abstract:
Detecting user intent from natural language expressions has a wide variety of use cases in different natural language processing applications. Recently, few-shot training has seen a spike in usage in commercial domains. Due to the lack of significant sample features, downstream task performance has been limited or unstable across different domains. As a state-of-the-art method, the pre-trained BERT model, which gathers sentence-level information from a large text corpus, shows improvement on several NLP benchmarks. In this research, we propose a method that converts multi-class classification tasks into binary classification tasks and then uses the confidence score to rank the results. As a language model, BERT performs well on sequence data. In our experiment, we change the objective from predicting labels to finding the relations between words in sequence data. Our proposed method achieved 71.0% accuracy on the internal intent detection dataset and 63.9% accuracy on the HuffPost dataset. Acknowledgment: This work was supported by NCKU-B109-K003, a collaboration between National Cheng Kung University, Taiwan, and SoftBank Corp., Tokyo.
Keywords: OSML, BERT, text classification, one shot
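A hedged sketch of recasting k-way intent detection as k binary (utterance, candidate-label) decisions ranked by confidence. The checkpoint, intent names, and the pair-classification head below are illustrative stand-ins, not the authors' model; the head here is untrained, so the printed scores are meaningless until fine-tuning:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"  # stand-in; the paper's checkpoint is not public
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

utterance = "How do I reset my password?"
intents = ["account recovery", "billing question", "small talk"]

scores = {}
with torch.no_grad():
    for intent in intents:
        enc = tok(utterance, intent, return_tensors="pt")  # sentence pair
        logits = model(**enc).logits
        scores[intent] = torch.softmax(logits, dim=-1)[0, 1].item()

# Rank candidate intents by the binary "does this label apply?" confidence.
for intent, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{intent}: {s:.3f}")
```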
Procedia PDF Downloads 101
38 Detecting Covid-19 Fake News Using Deep Learning Technique
Authors: Anjali A. Prasad
Abstract:
Nowadays, social media plays an important role in spreading misinformation or fake news. This study analyzes fake news related to the COVID-19 pandemic spread on social media. This paper aims to evaluate and compare different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and the BERT algorithm, for classification. To evaluate the models’ performance, we used accuracy, precision, recall, and F1-score as the evaluation metrics, and finally compared which algorithm shows better results among the four.
Keywords: BERT, CNN, LSTM, RNN
Procedia PDF Downloads 205
37 A Review of Research on Pre-training Technology for Natural Language Processing
Authors: Moquan Gong
Abstract:
In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long used word vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, this context-free text representation brings very limited improvement to subsequent natural language processing tasks and cannot resolve word polysemy. ELMo introduced a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, the BERT model has significantly improved performance on many typical downstream tasks, greatly promoting technological development in the field and ushering in the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have since emerged, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training techniques in natural language processing, including early static techniques and classic dynamic techniques; it then briefly surveys a series of subsequent pre-training technologies, including improved models based on BERT and XLNet; on this basis, it analyzes the problems faced by current pre-training research; finally, it looks ahead to future development trends of pre-training technology.
Keywords: natural language processing, pre-training, language model, word vectors
Procedia PDF Downloads 57
36 Bidirectional Encoder Representations from Transformers Sentiment Analysis Applied to Three Presidential Pre-Candidates in Costa Rica
Authors: Félix David Suárez Bonilla
Abstract:
A sentiment analysis service to detect polarity (positive, neutral, and negative), based on transfer learning, was built using a Spanish version of BERT and applied to tweets written in Spanish. The dataset used consisted of 11,975 reviews, extracted from Google Play using the google-play-scraper package. The fine-tuned BETO model used the AdamW optimizer, a batch size of 16, a learning rate of 2x10⁻⁵, and 10 epochs. The system was tested on tweets from three presidential pre-candidates in Costa Rica and finally validated using human-labeled examples, achieving an accuracy of 83.3%.
Keywords: NLP, transfer learning, BERT, sentiment analysis, social media, opinion mining
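A sketch of the reported fine-tuning setup using the Hugging Face Trainer. The optimizer, batch size, learning rate, and epoch count follow the abstract; the BETO checkpoint name and the dataset wiring are assumptions:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# BETO (Spanish BERT) fine-tuned for 3-class polarity.
name = "dccuchile/bert-base-spanish-wwm-cased"  # a public BETO checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

args = TrainingArguments(
    output_dir="beto-polarity",
    per_device_train_batch_size=16,
    learning_rate=2e-5,          # Trainer uses AdamW by default
    num_train_epochs=10,
)
# Dataset wiring omitted; `train_ds`/`eval_ds` are assumed tokenized datasets
# with `labels` in {0: negative, 1: neutral, 2: positive}.
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, tokenizer=tok)
# trainer.train()
```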
Procedia PDF Downloads 174
35 Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models
Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev
Abstract:
Nowadays, natural language processing tools, including various text processing techniques, play a crucial role in our daily lives. There are very advanced models for modern languages such as English and Russian, but in some languages, such as Uzbek, NLP models have been developed only recently; thus, there are only a few NLP models for the Uzbek language. Moreover, no prior work shows how the Uzbek NLP models behave in different situations and when to use each of them. This work tries to close this gap and compares the Uzbek NLP models existing as of the time this article was written. The authors compare the NLP models in two scenarios, sentiment analysis and sentence similarity, which are implementations of the two most common problems in industry: classification and similarity. Another outcome of this work is two datasets, for classification and sentence similarity in the Uzbek language, that we generated ourselves and that can be useful in both industry and academia.
Keywords: NLP, benchmark, BERT, vectorization
Procedia PDF Downloads 54
34 A BERT-Based Model for Financial Social Media Sentiment Analysis
Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe
Abstract:
The purpose of sentiment analysis is to determine the sentiment strength (e.g., positive, negative, neutral) of a textual source for good decision-making. Natural language processing in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models such as BERT have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services, and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and the large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work focuses on sentiment analysis using social media text from platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis. Extensive experimental results demonstrate the effectiveness and robustness of SkyBERT.
Keywords: BERT, financial markets, Twitter, sentiment analysis
Procedia PDF Downloads 152
33 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will significantly boost research in the above-mentioned areas. It can also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as an upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, on which we also compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing
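A minimal PyTorch sketch of the described BERT→Bi-LSTM→attention stack (hidden sizes and the additive-attention pooling are assumptions; BERT itself is omitted and represented by its output states):

```python
import torch
import torch.nn as nn

class BertBiLSTMAttention(nn.Module):
    """BERT states feed a BiLSTM; attention pools the sequence;
    a linear layer produces the subjective/neutral decision."""
    def __init__(self, bert_dim=768, lstm_dim=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(bert_dim, lstm_dim, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * lstm_dim, 1)
        self.out = nn.Linear(2 * lstm_dim, n_classes)

    def forward(self, bert_states):                 # (B, T, 768)
        h, _ = self.lstm(bert_states)               # (B, T, 512)
        w = torch.softmax(self.attn(h), dim=1)      # (B, T, 1) attention weights
        pooled = (w * h).sum(dim=1)                 # (B, 512) weighted sum
        return self.out(pooled)

model = BertBiLSTMAttention()
states = torch.randn(4, 32, 768)  # e.g. last hidden states from BERT
print(model(states).shape)        # torch.Size([4, 2])
```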
Procedia PDF Downloads 130
32 Topic Sentiments toward the COVID-19 Vaccine on Twitter
Authors: Melissa Vang, Raheyma Khan, Haihua Chen
Abstract:
The coronavirus disease 2019 (COVID‐19) pandemic has changed the lives of people all over the world. More people have turned to Twitter to engage online and discuss the COVID-19 vaccine. This study presents a text mining approach to identify people's attitudes towards the COVID-19 vaccine on Twitter. To achieve this purpose, we collected 54,268 COVID-19 vaccine tweets from September 1, 2020, to November 1, 2020, and then used the BERT model for sentiment and topic analysis. The results show that people had more negative than positive attitudes about the vaccine, and countries with an increasing number of confirmed cases had a higher percentage of negative attitudes. Additionally, the topics discussed in positive and negative tweets differ. The tweet datasets can help information professionals point the public to vaccine-related informational resources. Our findings may have implications for understanding people's cognitions and feelings about the vaccine.
Keywords: BERT, COVID-19 vaccine, sentiment analysis, topic modeling
Procedia PDF Downloads 150
31 The Impact of Recurring Events in Fake News Detection
Authors: Ali Raza, Shafiq Ur Rehman Khan, Raja Sher Afgun Usmani, Asif Raza, Basit Umair
Abstract:
Detection of fake news and missing information is gaining attention, especially after the advancement of social media and online news platforms. Social media platforms are the main and fastest source of fake news propagation, whereas online news websites contribute to fake news dissemination. In this study, we propose a framework to detect fake news using the temporal features of text, and we consider user feedback to identify whether the news is fake or not. Recent studies give valuable consideration to the temporal features of text documents in Natural Language Processing, along with user feedback, but only try to classify the textual data as fake or true. This research article examines the impact of recurring and non-recurring events on fake and true news. We use two models, BERT and Bi-LSTM, to investigate; we conclude that BERT gives better results, and that 70% of true news items are recurring while the remaining 30% are non-recurring.
Keywords: natural language processing, fake news detection, machine learning, Bi-LSTM
Procedia PDF Downloads 22
30 Toxic Chemicals from Industries into Pacific Biota. Investigation of Polychlorinated Biphenyls (PCBs), Dioxins (PCDD), Furans (PCDF) and Polybrominated Diphenyls (PBDE No. 47) in Tuna and Shellfish in Kiribati, Solomon Islands and the Fiji Islands
Authors: Waisea Votadroka, Bert Van Bavel
Abstract:
The most commonly consumed marine foods produced in the Pacific, shellfish and tuna, were investigated for the occurrence of a range of brominated and chlorinated contaminants in order to establish current levels. Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) were analysed in the muscle of the tuna species Katsuwonus pelamis, yellowfin tuna, and shellfish species from the Fiji Islands. The investigation of polychlorinated biphenyls (PCBs), furans (PCDFs) and polybrominated diphenyl ethers (PBDE No. 47) in tuna and shellfish in Kiribati, Solomon Islands and Fiji is necessary due to the lack of research data in the Pacific region. The health risks involved in the consumption of marine foods laced with toxic organochlorinated and brominated compounds make the analysis of these compounds in marine foods important, particularly when Pacific communities rely on these resources as their main diet. The samples were homogenized in a mortar with anhydrous sodium sulphate in the ratio of 1:3 (muscle) and 1:4-1:5 (roe and butter). The tuna and shellfish samples were homogenized and freeze-dried at the sampling location at the Institute of Applied Science, Fiji. All samples were stored in amber glass jars at -18 °C until extraction at Örebro University. PCDD/Fs, PCBs and pesticides were all analysed using an Autospec Ultima HRGC/HRMS operating at 10,000 resolution with EI ionization at 35 eV. All measurements were performed in selective ion recording (SIR) mode, monitoring the two most abundant ions of the molecular cluster (PCDD/Fs and PCBs). Results indicated that the Fiji composite sample for Batissa violacea ranged 0.7-238.6 pg/g lipid; the Fiji composite sample for Anadara antiquata ranged 1.6-808.6 pg/g lipid; Solomon Islands Katsuwonus pelamis, 7.5-3770.7 pg/g lipid; Solomon Islands yellowfin tuna, 2.1-778.4 pg/g lipid; Kiribati Katsuwonus pelamis, 4.8-1410 pg/g lipid. The study has demonstrated that these species are good bio-indicators of the presence of these toxic organic pollutants in edible marine foods. Our results suggest that among pesticides, p,p-DDE is the most dominant in all the groups and appears highest, at 565.48 pg/g lipid, in the composite Batissa violacea from Fiji. For PBDE No. 47, comparing all samples, the composite Batissa violacea from Fiji had the highest level, at 118.20 pg/g lipid. Based upon this study, the contamination levels found in the study species were considerably lower than levels reported in impacted ecosystems around the world.
Keywords: polychlorinated biphenyls, polybrominated diphenyl ethers, pesticides, organochlorinated pesticides, PBDEs
Procedia PDF Downloads 383
29 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and acting accordingly. OM comes to the fore here as the scale of discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm commonly used in computer vision (CV) has lately begun to shape NLP approaches and language models (LMs). This gave a sudden rise to the use of pretrained language models (PTMs), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches were evaluated for OM on a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT is a monolingual model in our case and is based on transformer neural networks. It uses a masked language model and a next-sentence prediction task that allow bidirectional training of the transformers. During training, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM but still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
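For reference, the traditional baseline described above, a bag of word n-grams fed to a linear SVM, can be sketched in a few lines of scikit-learn (toy Turkish examples here; the paper's 76,000-comment corpus is private, and the TF-IDF weighting is an assumption):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Bag of word 1- to 3-grams (TF-IDF weighted) into a linear SVM.
texts = ["harika bir ürün", "çok kötü deneyim", "fena değil ama pahalı",
         "mükemmel hizmet", "berbat, tavsiye etmem", "idare eder"]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["kötü ama ucuz"]))
```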
Procedia PDF Downloads 146
28 Legal Judgment Prediction through Indictments via Data Visualization in Chinese
Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun
Abstract:
Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, covering 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the big performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges and found that the poor prediction performance was actually due to wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as “negligent injury”, “negligent death”, “business injury”, “driving business injury”, or “non-driving business injury”. As another example, Article 10 of the Drug Hazard Control Regulations can be charged as “Drug Control Regulations” or “Drug Hazard Control Regulations”. In order to solve the above problems and predict the article and charge more accurately, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve performance. We will also consider a sequence-to-sequence approach to charge prediction.
Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization
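A minimal sketch of the planned sentence-pair formulation, pairing crime facts with a candidate charge name so a BERT classifier can score whether the charge applies (the checkpoint and example strings are illustrative assumptions, not the authors' data):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-chinese")

facts = "被告人駕駛車輛因過失致人受傷"   # illustrative crime facts
charge = "過失傷害"                      # candidate charge name

# BERT-style pair encoding: [CLS] facts [SEP] charge [SEP].
# A sequence-classification head on [CLS] would then score
# the binary question "does this charge apply to these facts?"
enc = tok(facts, charge)
print(tok.convert_ids_to_tokens(enc["input_ids"]))
```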
Procedia PDF Downloads 121
27 Ontology Expansion via Synthetic Dataset Generation and Transformer-Based Concept Extraction
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology expansion, synthetic dataset, transformer fine-tuning, concept extraction, DOLCE, BERT, taxonomy, LLM, NER
Procedia PDF Downloads 14
26 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework
Authors: Ma Cecilia Siva
Abstract:
This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, including personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the dataset, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
Keywords: tokenized, sigmoid activation, transformer, multi category classification
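A sketch of a multi-category classification layer with sigmoid activation for independent strand predictions, as described above. The strand names, threshold, and checkpoint are assumptions; the real model was fine-tuned on the study's student data:

```python
import torch
from transformers import AutoModelForSequenceClassification

# One sigmoid output per strand so each prediction is independent.
strands = ["STEM", "ABM", "HUMSS", "GAS", "TVL"]  # assumed label set
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(strands),
    problem_type="multi_label_classification",  # BCE-with-logits loss
)

logits = torch.randn(1, len(strands))   # stand-in for model(**enc).logits
probs = torch.sigmoid(logits)[0]
recommended = [s for s, p in zip(strands, probs) if p > 0.5]
print(recommended)
```

Because each strand gets its own sigmoid, the model can recommend zero, one, or several strands for the same student, unlike a softmax layer, which would force exactly one.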
Procedia PDF Downloads 8
25 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not far-fetched, yet properly classifying this textual information in a given context has been very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to better understand how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text of a given context as hate speech or inverted compliments with a high level of accuracy, by assessing different artificial intelligence techniques. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number down to 31 relevant studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier; suitable data can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-based library functionalities. Based on some of the important findings from this study, we make recommendations for future research.
Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text
Procedia PDF Downloads 115
24 The Platform for Digitization of Georgian Documents
Authors: Erekle Magradze, Davit Soselia, Levan Shughliashvili, Irakli Koberidze, Shota Tsiskaridze, Victor Kakhniashvili, Tamar Chaghiashvili
Abstract:
Since the beginning of active publishing activity in Georgia, voluminous printed material has accumulated, the digitization of which is an important task. Digitized materials will be available to the audience, and it will be possible to search their text and conduct various factual research. Digitizing scanned documents means scanning documents, extracting text from the scanned images, and processing the text with a corresponding language model to detect inaccuracies and grammatical errors. Implementing these stages requires a unified, scalable, and automated platform, where the digital service developed for each stage performs the task assigned to it; at the same time, it is possible to develop these services dynamically so that there is no interruption in the work of the platform.
Keywords: NLP, OCR, BERT, Kubernetes, transformers
Procedia PDF Downloads 143
23 Hate Speech Detection Using Deep Learning and Machine Learning Models
Authors: Nabil Shawkat, Jamil Saquer
Abstract:
Social media has accelerated our ability to engage with others and eliminated many communication barriers. On the other hand, the widespread use of social media has resulted in an increase in online hate speech, which has drastic impacts on vulnerable individuals and societies. Therefore, it is critical to detect hate speech to prevent innocent users and vulnerable communities from becoming its victims. We investigate the performance of different deep learning and machine learning algorithms on three different datasets. Our results show that the BERT model gives the best performance among all the models, achieving an F1-score of 90.6% on one of the datasets and F1-scores of 89.7% and 88.2% on the other two.
Keywords: hate speech, machine learning, deep learning, abusive words, social media, text classification
Procedia PDF Downloads 136
22 Design and Optimization of a 6 Degrees of Freedom Co-Manipulated Parallel Robot for Prostate Brachytherapy
Authors: Aziza Ben Halima, Julien Bert, Dimitris Visvikis
Abstract:
In this paper, we propose the design and evaluation of a parallel co-manipulated robot dedicated to low-dose-rate prostate brachytherapy. We developed a compact and lightweight robot with 6 degrees of freedom that is easy to install in the operating room thanks to its parallel design. This robotic system provides co-manipulation, allowing the surgeon to keep control of the needle’s insertion and consequently improving the clinical acceptability of the plan. The best dimensional configuration was found by solving the geometric model and using an optimization approach. The aim was to ensure full coverage of the prostate volume while respecting the free space allowed around the patient, which includes the ultrasound probe. The final robot dimensions fit in a cube of 300 × 300 × 300 mm³. A prototype was 3D printed, and the robot workspace was measured experimentally. The results show that the proposed robotic system satisfies the medical application requirements and allows the needle to reach any point within the prostate.
Keywords: medical robotics, co-manipulation, prostate brachytherapy, optimization
Procedia PDF Downloads 205
21 A Comparison of Performance Indicators Between University-Level Rugby Union and Rugby Union Sevens Matches
Authors: Pieter van den Berg, Retief Broodryk, Bert Moolman
Abstract:
Firstly, this study aimed to identify which performance indicators (PIs) discriminate between winning and losing university-level Rugby Union (RU) teams and, secondly, to compare the significant PIs in RU and Rugby Union Sevens (RS) at university level. Understanding the importance of PIs and their effect on match outcomes could assist coaching staff in prioritising specific game aspects during training to increase performance. Twenty randomly selected round-robin matches of the 2018 Varsity Cup (n=20) and Varsity Sports sevens (n=20) tournaments were analysed. A linear mixed model was used to determine statistically significant differences, set at p≤0.05, while effect sizes were reported as Cohen's d values. Results revealed that various PIs discriminated between winning and losing RU teams and that specific PIs were significant in both RU and RS. Therefore, the identified tactical aspects of RU and RS should be prioritised to optimise performance.
Keywords: match success, notational analysis, performance analysis, rugby, video analysis
Procedia PDF Downloads 70
20 ViraPart: A Text Refinement Framework for Automatic Speech Recognition and Natural Language Processing Tasks in Persian
Authors: Narges Farokhshad, Milad Molazadeh, Saman Jamalabbasi, Hamed Babaei Giglou, Saeed Bibak
Abstract:
The Persian language is an inflectional subject-object-verb language, a fact that makes Persian more ambiguous. However, using techniques such as Zero-Width Non-Joiner (ZWNJ) recognition, punctuation restoration, and Persian Ezafe construction leads to more understandable and precise text. In most Persian work, these techniques are addressed individually; despite that, we believe that all of these tasks are necessary for text refinement in Persian. In this work, we propose the ViraPart framework, which uses an embedded ParsBERT at its core for text clarification. First, we used the BERT variant for Persian, followed by a classifier layer for the classification procedures. Next, we combined the models' outputs to produce clear text. In the end, the proposed model performs ZWNJ recognition, punctuation restoration, and Persian Ezafe construction with averaged macro F1 scores of 96.90%, 92.13%, and 98.50%, respectively. Experimental results show that our proposed approach is very effective for text refinement in the Persian language.
Keywords: Persian Ezafe, punctuation, ZWNJ, NLP, ParsBERT, transformers
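A sketch of the kind of ParsBERT-based token classifier such a framework builds on (the label set below is an assumption for illustration; the paper's actual heads and training data are not public):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# ParsBERT with a token-classification head: one decision per token,
# e.g. whether to insert a ZWNJ or a punctuation mark after it.
name = "HooshvareLab/bert-base-parsbert-uncased"
labels = ["KEEP", "INSERT_ZWNJ", "INSERT_COMMA", "INSERT_PERIOD"]

tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(
    name, num_labels=len(labels))

enc = tok("این یک جمله آزمایشی است", return_tensors="pt")
out = model(**enc)
print(out.logits.shape)  # (1, seq_len, 4): one label score per token
```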
Procedia PDF Downloads 214
19 Probing Syntax Information in Word Representations with Deep Metric Learning
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, with the development of large-scale pre-trained language models, building vector representations of text through deep neural network models has become standard practice in natural language processing. From performance on downstream tasks, we know that the text representations constructed by these models contain linguistic information, but its encoding mode and extent are unclear. In this work, a structural probe is proposed to detect whether the vector representation produced by a deep neural network embeds a syntax tree. The probe is trained with a deep metric learning method, so that the distance between word vectors in the metric space it defines encodes the distance between words on the syntax tree, and the norm of a word vector encodes the depth of the word on the syntax tree. Experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
Keywords: deep metric learning, syntax tree probing, natural language processing, word representations
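A toy sketch of the probe objective described above: learn a linear map under which squared distances between word vectors match syntax-tree distances (random stand-in data; the abstract's companion signal, encoding depth via vector norms, is omitted here for brevity):

```python
import torch

T, d, k = 8, 768, 64                     # sentence length, BERT dim, probe rank
h = torch.randn(T, d)                    # stand-in contextual word vectors
tree_dist = torch.randint(1, 6, (T, T)).float()
tree_dist = (tree_dist + tree_dist.T) / 2  # make the toy target symmetric
tree_dist.fill_diagonal_(0)

B = torch.randn(d, k, requires_grad=True)  # the linear probe
opt = torch.optim.Adam([B], lr=1e-3)

for step in range(200):
    z = h @ B                            # map vectors into the probe space
    diff = z.unsqueeze(1) - z.unsqueeze(0)
    pred = (diff ** 2).sum(-1)           # predicted squared L2 distances
    loss = (pred - tree_dist).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final probe loss: {loss.item():.3f}")
```

If the representations truly encode the tree, the loss can be driven low with a low-rank B; if not, no linear map fits, which is what makes the probe a test rather than a classifier.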
Procedia PDF Downloads 67
18 Text2Time: Transformer-Based Article Time Period Prediction
Authors: Karthick Prasad Gunasekaran, B. Chase Babrich, Saurabh Shirodkar, Hee Hwang
Abstract:
Construction preparation is crucial for the success of a construction project. By involving project participants early in the construction phase, project managers can plan ahead and resolve issues early, resulting in project success and satisfaction. This study uses quantitative data from construction management projects to determine the relationship between the pre-construction phase, the construction schedule, and customer satisfaction. It examined a total of 65 construction projects and 93 clients across those projects to (a) identify the relationship between the pre-construction phase and schedule reduction and (b) the relationship between the pre-construction phase and customer retention. Based on a quantitative analysis, this study found a negative correlation between pre-construction status and project schedule across the 65 construction projects, meaning that the more preparatory work done on a particular project, the shorter the total construction time. The Net Promoter Score of the 93 clients from the 65 projects was then used to determine the relationship between construction preparation and client satisfaction. The pre-construction status and the projects were further analyzed, and a positive correlation between them was found, showing that customers are happier with projects with a higher ready-to-build ratio than with projects that are less ready to build.
Keywords: NLP, BERT, LLM, deep learning, classification
Procedia PDF Downloads 104
17 From Parents to Pioneers: Examining Parental Impact on Entrepreneurial Traits in Latin America
Authors: Bert Seither
Abstract:
Entrepreneurship is a critical driver of economic growth, especially in emerging regions such as Latin America. This study investigates how parental influences, particularly parental individual entrepreneurial orientation (IEO), shape the entrepreneurial traits of Latin American entrepreneurs. By examining key factors like parental IEO, work ethic, parenting style, and family support, this research assesses how much of an entrepreneur's own IEO can be attributed to parental influence. The study also explores how socio-economic status and cultural context moderate the relationship between parental traits and entrepreneurial orientation. Data will be collected from 600 Latin American entrepreneurs via an online survey. This research aims to provide a comprehensive understanding of the intergenerational transmission of entrepreneurial traits and the broader socio-cultural factors that contribute to entrepreneurial success in diverse contexts. Findings from this study will offer valuable insights for policymakers, educators, and business leaders on fostering entrepreneurship across Latin America, providing practical applications for shaping entrepreneurial behavior through family influences.
Keywords: entrepreneurial orientation, parental influence, Latin American entrepreneurs, socio-economic status, cultural context
Procedia PDF Downloads 18
16 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection
Authors: Maryam Heidari, James H. Jones Jr.
Abstract:
Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event, or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of it be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection that uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying content as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
Keywords: bot detection, natural language processing, neural network, social media
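A sketch of the sentiment-feature extraction step on which such a bot detector could be built (the public checkpoint below is a stand-in, not the authors' model; per-account aggregation and the downstream bot/human classifier are omitted):

```python
from transformers import pipeline

# Score each tweet's sentiment with a BERT-family classifier; the
# distribution of labels/scores per account then serves as a
# topic-independent feature vector for a separate bot detector.
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

tweets = ["Best. Product. Ever. Buy now!!!",
          "Honestly not sure how I feel about this update.",
          "Total scam, avoid at all costs."]

features = [(r["label"], r["score"]) for r in sentiment(tweets)]
print(features)  # e.g. [('POSITIVE', 0.99), ...] -> aggregate per account
```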
Procedia PDF Downloads 116