Search results for: text obfuscation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1322

1082 Identifying Concerned Citizen Communication Style During the State Parliamentary Elections in Bavaria

Authors: Volker Mittendorf, Andre Schmale

Abstract:

In this case study, we explore the Twitter use of candidates during the 2018 state parliamentary election year in Bavaria, Germany. The paper focuses on the seven parties that were likely to enter the parliament. Against this background, the paper classifies the use of language as populism, which is itself considered a political communication style. First, we identify the election campaigns that started on Twitter in 2017 and categorize the posting times of the different direct candidates in order to derive ideal types from our empirical data. Second, we carry out the exploration on the basis of a dictionary of concerned citizens, which contains German political language of the right and the far right. Accordingly, we analyze the corpus with methods of text mining and social network analysis and display the results as a network of words of the concerned citizen communication style (CCCS).
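A minimal sketch of the final step, the network of words, follows; the dictionary entries and tweets below are placeholders, not the study's data, and the construction simply counts pairwise co-occurrence of dictionary terms within tweets.

```python
# Minimal sketch: build a co-occurrence network of "concerned citizen" dictionary
# terms found in tweets. Terms and tweets below are placeholders, not the study's data.
from itertools import combinations
import networkx as nx

dictionary_terms = {"volk", "altparteien", "luegenpresse"}   # hypothetical dictionary entries
tweets = [
    "das volk gegen die altparteien",
    "altparteien und luegenpresse",
]

G = nx.Graph()
for tweet in tweets:
    tokens = set(tweet.lower().split())
    found = tokens & dictionary_terms                         # dictionary terms in this tweet
    for a, b in combinations(sorted(found), 2):               # count pairwise co-occurrence
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(G.edges(data=True))   # edges form the network of words (CCCS)
```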

Keywords: populism, communication style, election, text mining, social media

Procedia PDF Downloads 149
1081 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition

Authors: Mohamed Lotfy, Ghada Soliman

Abstract:

Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on Convolutional Neural Networks (CNN) that take the dot-matrix image as input and generate embeddings that are rounded to produce binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the challenge of the limited availability of dotted Arabic expiration date images, we developed a True Type Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on a synthetic dataset of 3,287 images and tested on 658 synthetic images representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry date recognition in Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. Our proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to other text recognition tasks, such as text classification and sentiment analysis.
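The sketch below illustrates the core idea in PyTorch: a small CNN maps a digit crop to an embedding squashed into (0, 1), and rounding that embedding yields the binary code. The layer sizes and the 4-bit code width are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of the idea: a CNN maps a digit image to an embedding whose values
# are squashed to (0, 1) and rounded to a binary code used for recognition.
# Architecture sizes and the 4-bit code are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class BinaryEmbeddingCNN(nn.Module):
    def __init__(self, code_bits: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, code_bits)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))       # values in (0, 1)

model = BinaryEmbeddingCNN()
digit = torch.rand(1, 1, 32, 32)                 # a dot-matrix digit crop (dummy input)
binary_code = model(digit).round()               # rounding yields the binary embedding
print(binary_code)
```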

Keywords: computer vision, pattern recognition, optical character recognition, deep learning

Procedia PDF Downloads 95
1080 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, more commonly known as bird flu, is a viral zoonotic respiratory disease stemming from various species of poultry, including pets and migratory birds. Researchers have argued that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the ways in which epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks through text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter between 01/01/2021 and 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu) and word occurrences (avian flu, bird flu, H5N1), and then refined further by location to include only results from within the UK. The text was analysed in a time-series manner to determine keyword frequencies, and topic modeling was applied to uncover insights in the text prior to a confirmed outbreak. Further analysis examined clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the avian flu outbreak confirmed by the Animal and Plant Health Agency (APHA). Results: Increased Google search results and avian flu-related tweets correlated in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can be useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. However, the small sample size of tweets in certain weekly periods, together with a great amount of textual noise in the data, makes it difficult to provide statistically robust results.
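As an illustration of the keyword time-series step, the sketch below counts weekly keyword frequencies in a small dummy tweet table with pandas; the real study drew UK tweets from the Twitter API and compared the resulting series against confirmed APHA outbreak dates.

```python
# Minimal sketch of the time-series step: count weekly keyword frequencies in tweets.
# The records below are dummy data; the real study pulled UK tweets via the Twitter API.
import pandas as pd

tweets = pd.DataFrame({
    "created_at": ["2021-10-26", "2021-11-02", "2021-11-03"],
    "text": ["swollen head reported on farm",
             "bird flu confirmed nearby #birdflu",
             "blue comb seen in flock"],
})
tweets["created_at"] = pd.to_datetime(tweets["created_at"])

keywords = ["bird flu", "avian flu", "h5n1", "swollen head", "blue comb", "dullness"]
for kw in keywords:
    tweets[kw] = tweets["text"].str.lower().str.contains(kw, regex=False).astype(int)

weekly = tweets.set_index("created_at")[keywords].resample("W").sum()
print(weekly)   # weekly frequencies to compare against confirmed APHA outbreak dates
```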

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 105
1079 L1 Poetry and Moral Tales as a Factor Affecting L2 Acquisition in EFL Settings

Authors: Arif Ahmed Mohammed Al-Ahdal

Abstract:

Poetry, tales, and fables have always been part of the L1 repertoire, one that takes learners to another amazing and fascinating world of imagination. The storytelling class and the genre of poems are activities greatly enjoyed by all age groups. The very significant idea behind their inclusion in the language curriculum is to sensitize young minds to a wide range of human emotions that are believed to contribute greatly to building their social resilience, emotional stability, empathy towards fellow creatures, and literacy. Quite certainly, the learning objective at this stage is not language acquisition (though it happens as an automatic process) but getting young learners acquainted with an entire spectrum of what may be called the 'noble' abilities of the human race. These enrich their very existence, inspiring them to unearth 'selves' that help them as adults and enable them to co-exist fruitfully and symbiotically with their fellow human beings. By extension, 'higher' training in these literary genres shows the universality of human emotions, sufferings, aspirations, and hopes. The current study is anchored in the Reader-Response Theory of literature learning, which suggests that the reader reconstructs the work and re-enacts the author's creative role. To reiterate, literary works provide clues or verbal symbols in a linguistic system widely accepted by everyone who shares the language, but everyone reads their own life experiences and situations into them. The significance of words depends on the reader, even if the words have a typical relationship. In every reading, there is an interaction between the reader and the text. The process of reading is an experience in which the reader tries to comprehend the literary work; that experience surpasses the text's own potential, since it produces emotional and intellectual reactions that are not anticipated by the document alone, yet cannot be attributed solely to the reader as part of the text. The idea is that the text forms the basis of a unifying experience. A reinterpretation of the literary text may transform it into a guiding principle for responding to actual experiences and personal memories. The impulses delivered to the reader vary according to the poem or text; nevertheless, readers differ considerably even with the same material. Previous studies confirm that poetry is a useful tool for learning a language. The present paper works from these hypotheses and proposes to study the impetus given to L2 learning as a factor of exposure to poetry and meaningful stories in L1. The driving force behind the choice of this topic is the first-hand experience the researcher had while teaching a literary text to a group of BA students who, as a reaction to the text, initially burst into tears and ultimately turned the class into an interactive session. The study also intends to compare the performance of male and female students post-intervention using pre- and post-tests, apart from undertaking a detailed inquiry via interviews with college learners of English to understand how L1 literature plays a great role in the acquisition of L2.

Keywords: SLA, literary text, poetry, tales, affective factors

Procedia PDF Downloads 77
1078 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification

Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar

Abstract:

Recognition of emotional information is essential in any form of communication. The growth of human-computer interaction (HCI) in recent times underlines the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data is used for emotion recognition. Text, being the least expressive among the multimodal resources, poses various challenges, such as the handling of contextual information and the sequential nature of language construction. We propose a neural architecture to recognize no fewer than eight emotions from textual data drawn from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are anger, disgust, fear, guilt, joy, sadness, shame, and surprise. Textual data from multiple datasets, such as the ISEAR, GoEmotions, and Affect datasets, were used to create the emotions dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with the proposed architecture, including as much as a 10-point improvement in recognizing some emotions.
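A minimal PyTorch sketch of the modelling idea follows: pre-trained word vectors feed a bidirectional LSTM, a multi-head attention layer attends over its outputs, and a linear head scores the eight emotions (to be trained one-vs-all). The dimensions and vocabulary size are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: word embeddings -> bidirectional LSTM -> multi-head self-attention ->
# per-emotion scores. Dimensions are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class EmotionBiLSTMAttention(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden=128, heads=4, n_emotions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # load word2vec weights here
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_emotions)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        h, _ = self.bilstm(x)
        a, _ = self.attn(h, h, h)                        # self-attention over LSTM states
        return self.out(a.mean(dim=1))                   # one logit per emotion (one-vs-all)

model = EmotionBiLSTMAttention()
logits = model(torch.randint(0, 10000, (2, 20)))         # batch of 2 dummy sentences
print(torch.sigmoid(logits).shape)                        # (2, 8) per-emotion scores
```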

Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings

Procedia PDF Downloads 174
1077 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented that provides a mechanism for network functions to negotiate a new capability when communication starts up and to reduce 5G application data according to the information negotiated with the peer network function. Without losing the advantages of 5G text-based payloads, this method demonstrates an excellent result on application payload size reduction and does not increase the consumption of computing resources. Implementation of this method does not impact any standards or specifications and does not change any encoding or decoding functionality either. In a real 5G network, this method will contribute to network efficiency and eventually save considerable computing resources.
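One way such a negotiated reduction could work is sketched below: the two network functions agree on a short-key mapping at startup and then exchange compacted JSON. The field names and the mapping are hypothetical illustrations; the abstract does not publish the exact negotiation scheme.

```python
# Minimal sketch of one possible reduction mechanism: two network functions agree on a
# short-key mapping at startup and then exchange compacted JSON. The field names and the
# mapping are hypothetical; the paper's actual negotiation scheme is not reproduced here.
import json

negotiated_map = {"subscriberIdentity": "s", "servingNetworkName": "n", "requestedSliceInfo": "r"}
reverse_map = {v: k for k, v in negotiated_map.items()}

def compact(payload: dict) -> str:
    return json.dumps({negotiated_map.get(k, k): v for k, v in payload.items()})

def expand(wire: str) -> dict:
    return {reverse_map.get(k, k): v for k, v in json.loads(wire).items()}

original = {"subscriberIdentity": "imsi-208930000000001",
            "servingNetworkName": "5G:mnc093.mcc208",
            "requestedSliceInfo": [1, 2]}
wire = compact(original)
print(len(json.dumps(original)), "->", len(wire))    # payload bytes before and after
assert expand(wire) == original                       # lossless at the receiving side
```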

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 184
1076 Ancient Port Towns of Western Coastal Plain in Kerala, India: From Manuscripts to Material Remains

Authors: Saravanan R.

Abstract:

The landscape of Kerala paved the way for the growth of maritime contacts with foreigners. Pepper was the most important export from here, because this was the only pepper-producing region on the west coast of India. The paper attempts to analyse the available references to ancient port towns in Kerala. It is a preliminary investigation of Early Historic urban centres based on the available literary evidence and excavation reports, which would help us understand the ancient port towns on the Kerala coast. A number of ancient port towns are mentioned in classical Greek and Sangam literature. For instance, Naura, Tyndis, Nelcynda, Bacare, and Muziris were the major sites of Kerala, which are represented only in the texts; it has not been possible to locate these sites on the ground so far. There are many site-based as well as state-based studies on various aspects of ancient port towns, but they mainly focus on factual narration and theoretical interpretation.

Keywords: urban centre, amphora, Muziris, port town, Sangam text and trade

Procedia PDF Downloads 71
1075 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and can complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria; the most important are accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity, and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is; a finer step consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm presents a solution to the contradiction between memory space, running time, and accuracy. Our algorithm starts with a first step of angle estimation accurate to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. Then, to increase accuracy, supposing the angle estimated using the basic Hough algorithm is x degrees, we run the basic algorithm again over the range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented in MATLAB. The memory space estimation and processing time are also tabulated, with the skew angle assumed to lie between 0° and 45°. The simulation results in MATLAB show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
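A minimal NumPy sketch of the coarse-to-fine idea follows (in Python rather than the MATLAB used in the paper, and with a simplified Hough-style accumulator that projects foreground pixels at each candidate angle): a one-degree sweep gives a coarse estimate, and a second sweep with a 0.1-degree step refines it around that estimate.

```python
# Minimal sketch of coarse-to-fine skew estimation with a simplified Hough-style
# accumulator; the dummy image and angle ranges are placeholders, not the paper's data.
import numpy as np

def hough_skew(binary_img: np.ndarray, angles: np.ndarray) -> float:
    ys, xs = np.nonzero(binary_img)                        # foreground (text) pixels
    best_angle, best_score = 0.0, -1.0
    for theta in angles:
        t = np.deg2rad(theta)
        # distance of each pixel from the x-axis after rotating by -theta
        rho = np.round(-xs * np.sin(t) + ys * np.cos(t)).astype(int)
        counts = np.bincount(rho - rho.min())
        score = counts.max()                               # text rows collapse into sharp peaks
        if score > best_score:
            best_angle, best_score = theta, score
    return best_angle

img = np.zeros((100, 100), dtype=np.uint8)
img[50, 10:90] = 1                                         # a dummy horizontal text line
coarse = hough_skew(img, np.arange(-45, 46, 1.0))          # step of one degree
fine = hough_skew(img, np.arange(coarse - 1, coarse + 1, 0.1))  # refine around the estimate
print(coarse, fine)
```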

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 159
1074 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we extract index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model, in an attempt to discover the non-taxonomic (contextual) relations between the concepts of a document. These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. Next, we apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
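The sketch below illustrates the classification step with crisp association rules over a dummy document-concept matrix, as a simplified stand-in for the fuzzy association rules used in the paper; the concepts and occurrences are invented for illustration.

```python
# Minimal sketch of the classification step, using crisp association rules over a dummy
# document-concept matrix as a simplified stand-in for the paper's fuzzy association rules.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# rows = documents, columns = concepts extracted via EuroWordNet (dummy occurrences)
docs = pd.DataFrame(
    [[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 1, 0]],
    columns=["finance", "bank", "river"],
).astype(bool)

frequent = apriori(docs, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
# rules such as {finance} -> {bank} suggest contextual (non-taxonomic) relations
print(rules[["antecedents", "consequents", "confidence"]])
```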

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 141
1073 Direct Blind Separation Methods for Convolutive Image Mixtures

Authors: Ahmed Hammed, Wady Naanaa

Abstract:

In this paper, we propose a general approach to deal with the problem of a convolutive mixture of images. We use a direct blind source separation method, adding only one non-statistical, justified constraint describing the relationships between the different mixing matrices, with the aim of making the resolution easy. This method can be applied, provided that this constraint is known, to degraded documents affected by the overlapping of text patterns and images. Such degradation is due to chemical and physical reactions of the materials (paper, inks, etc.) occurring during the documents' aging, and to other unpredictable causes such as humidity, microorganism infestation, human handling, etc. We demonstrate that this problem corresponds to a convolutive mixture of images, and we then show the validation of our method through numerical examples. We can thus obtain clear images from unreadable ones caused by page superposition, a phenomenon found very often in archival documents.
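For orientation, the sketch below separates a dummy two-image mixture with FastICA, i.e., the simplified instantaneous (non-convolutive) case; the paper's own method addresses convolutive mixtures with an additional constraint on the mixing matrices, which is not reproduced here.

```python
# Minimal sketch of blind source separation on images, simplified to the instantaneous
# (non-convolutive) case with FastICA. The sources and the mixing matrix are dummies.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
h, w = 64, 64
text_side = (rng.random((h, w)) > 0.90).astype(float)    # dummy "text" source
verso_side = (rng.random((h, w)) > 0.95).astype(float)    # dummy bleed-through source

A = np.array([[1.0, 0.4], [0.3, 1.0]])                    # unknown mixing matrix
mixtures = A @ np.vstack([text_side.ravel(), verso_side.ravel()])

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(mixtures.T).T                 # rows are the recovered images
recovered = [s.reshape(h, w) for s in sources]
print(recovered[0].shape, recovered[1].shape)
```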

Keywords: blind source separation, convoluted mixture, degraded documents, text-patterns overlapping

Procedia PDF Downloads 322
1070 Scattered Places in Stories: Singularity and Pattern in Geographic Information

Authors: I. Pina, M. Painho

Abstract:

Increased knowledge about the nature of place, and about the conditions under which space becomes place, is a key factor for better urban planning and place-making. Although there is broad consensus on the relevance of this knowledge, difficulties remain in relating the theoretical framework about place to urban management. Issues related to the representation of places are among the greatest obstacles to overcoming this gap. In this critical discussion, based on a literature review, we intend to explore, within a common framework for geographical analysis, the potential of stories to spell out place meanings, bringing together qualitative text analysis and text mining in order to capture and represent the singularity contained in each person's life history and the patterns of social processes that shape places. The development of this reasoning is based on extensive geographical thought about place and on theoretical advances in the field of Geographic Information Science (GISc).

Keywords: discourse analysis, geographic information science, place, place-making, stories

Procedia PDF Downloads 198
1071 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features

Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim

Abstract:

Knowledge of the relationships between characters can help readers to understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult to extract directed relationships from text, such as one-sided love, because existing methods consider only the weight of a relationship without considering its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to relationship analysis between characters in other content, such as movies or TV dramas.
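A minimal sketch of the output representation follows: a weighted directed graph built with networkx from (speaker, target, relation) triples. The triples here are invented examples; in the paper they would come from Korean syntactic patterns and speech verbs.

```python
# Minimal sketch of the output representation: a weighted directed graph of character
# relations built from (speaker, target, relation) triples. The triples are dummies.
import networkx as nx

triples = [
    ("Cheolsu", "Younghee", "love"),
    ("Cheolsu", "Younghee", "love"),
    ("Younghee", "Minsu", "trust"),
]

G = nx.DiGraph()
for src, dst, rel in triples:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1, relation=rel)

# direction captures one-sided relations: Cheolsu -> Younghee exists, the reverse does not
print(G.edges(data=True))
```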

Keywords: data mining, Korean linguistic feature, literary fiction, relationship extraction

Procedia PDF Downloads 382
1070 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education

Authors: Maria Antonietta Marongiu

Abstract:

Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context. Besides, they help the audience to connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teacher courses taught in English to non-native learners in higher education. In point of fact, the global spread of Covid-19 has forced universities to transition their in-class courses to online delivery. This has inevitably placed on the instructor a heavier interactional responsibility compared to in-class courses. Accordingly, online delivery needs greater structuring as regards establishing the reader/listener's resources for text understanding and negotiation. Indeed, in online as well as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland's Interactional Model of Metadiscourse (2005), this study intends to investigate Teacher Talk in online academic courses during the Covid-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and subject-teachers' online courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject). Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, as well as in online teaching methodology courses and MOOCs (Massive Open Online Courses), has shown that instructors use a vast array of metadiscoursal features intended to express the speakers' intentions and standing with respect to discourse. Besides, they tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate, and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. Thus, the use of metadiscourse can contribute to the informative and persuasive impact of discourse, and to the effectiveness of online communication, especially in learning contexts.

Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric

Procedia PDF Downloads 129
1069 The Oral Production of University EFL Students: An Analysis of Tasks, Format, and Quality in Foreign Language Development

Authors: Vera Lucia Teixeira da Silva, Sandra Regina Buttros Gattolin de Paula

Abstract:

The present study focuses on academic literacy and addresses the impact of semantic-discursive resources on the constitution of the genres produced in that context. The research considers the development of writing in the academic context in Portuguese. Studies that address academic literacy and the characteristics of the texts produced in this context are rare, especially those focusing on the development of writing and considering three variables: the constitution of the writer, the perception of the reader/interlocutor, and the organization of the informational flow of the text. The research aims to map the semantic-discursive resources of the written register in texts of several genres produced by students in the first semester of an undergraduate course in Letters. The hypothesis raised is that writing in the academic environment is not a recurrent literacy practice for these learners, which can be explained by the ontogenetic and phylogenetic nature of language development. Qualitative in nature, the present research takes as empirical data texts produced in a half-yearly course of Reading and Textual Production; these data result from four different writing proposals, totalling 600 texts. The corpus is analyzed on the basis of semantic-discursive resources, seeking to contemplate relevant aspects of language (grammar, discourse, and social context) that reveal the choices made in the reader/writer interrelationship and the organizational flow of the text. The analysis covers three such resources: (a) appraisal and negotiation, to understand the attitudes negotiated (the roles of the participants of the discourse and their relationship with the other); (b) ideation, to explain the construction of experience (activities performed and participants); and (c) periodicity, to outline the flow of information in the organization of the text according to the genre it instantiates. The results indicate the students' difficulties in organizing the informational flow of the text. This mapping contributes to the understanding of the way writers use language in an effort to present themselves, evaluate someone else's work, and communicate with readers.

Keywords: academic writing, Portuguese mother tongue, semantic-discursive resources, academic context

Procedia PDF Downloads 126
1068 Topic-to-Essay Generation with Event Element Constraints

Authors: Yufen Qin

Abstract:

Topic-to-essay generation is a challenging task in natural language processing that aims to generate novel, diverse, and topic-related text based on user input. Previous research has overlooked the generation of articles under the constraints of event elements, resulting in issues such as incomplete event elements and logical inconsistencies in the generated results. To fill this gap, this paper proposes an event-constrained approach to topic-to-essay generation that enforces the completeness of event elements during the generation process. Additionally, a language model is employed to verify the logical consistency of the generated results. Experimental results demonstrate that the proposed model achieves a better BLEU-2 score and performs better than the baseline in terms of subjective evaluation on a real dataset, indicating its capability to generate higher-quality topic-related text.
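For reference, the sketch below computes a BLEU-2 score (unigram and bigram precision only) with NLTK on a pair of dummy sentences; it only illustrates the metric, not the paper's dataset or exact scoring setup.

```python
# Minimal sketch of the BLEU-2 evaluation mentioned above, on dummy sentences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the storm forced the villagers to leave their homes".split()]
generated = "the storm forced villagers to flee their homes".split()

bleu2 = sentence_bleu(reference, generated,
                      weights=(0.5, 0.5),                 # unigram + bigram precision only
                      smoothing_function=SmoothingFunction().method1)
print(round(bleu2, 3))
```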

Keywords: event element, language model, natural language processing, topic-to-essay generation.

Procedia PDF Downloads 236
1067 Examining the Dubbing Strategies Used in the Egyptian Dubbed Version of Mulan (1998)

Authors: Shaza Melies, Saadeya Salem, Seham Kareh

Abstract:

Cartoon films are multisemiotic, as various modes integrate in the production of meaning. This study aims to examine the culturally and linguistically specific references in the Egyptian dubbed cartoon film Mulan and the translation strategies implemented in the Egyptian dubbed version to meet the cultural preferences of the audience. The study reached the following findings: using the traditional translation strategies does not deliver the intended meaning of the source text and causes loss of the intended humor. As a result, in the dubbed version, translators tend to omit, change, or add information to the target text so that it is accepted by the audience. The contrastive analysis of Mulan (English and dubbed versions) demonstrates the connotations that the dubbing has taken on in order to be accepted by the target audience.

Keywords: domestication, dubbing, Mulan, translation theories

Procedia PDF Downloads 136
1066 Translation Quality Assessment: Proposing a Linguistic-Based Model for Translation Criticism with Considering Ideology and Power Relations

Authors: Mehrnoosh Pirhayati

Abstract:

In this study, the researcher proposes a model of Translation Criticism (TC) in relation to the phenomenon of Translation Quality Assessment (TQA). Moving away from the general view of re/writing as an illegitimate act, the researcher defined a scale for the act of translation and determined the red line separating translation from other products. This research attempts to show that TC is a phenomenon related to TQA. The study shows that TQA, using the rules and factors of TC as depicted in both product-oriented and process-oriented analysis, determines the orientation, or level, of the quality of a translation. It also shows that TC, from TQA's perspective, reveals the aim of the translation of the original text and the root of ideological manipulation and re/writing. On the other hand, the study stresses the existence of a direct relationship between the linguistic materials and the semiotic codes of a text or book. This study can be fruitful for translators, scholars, translation critics, and translation quality assessors, and it is also applicable in the area of pedagogy.

Keywords: a model of translation criticism, a model of translation quality assessment, critical discourse analysis (CDA), re/writing, translation criticism (TC), translation quality assessment (TQA)

Procedia PDF Downloads 320
1065 Intentionality and Context in the Paradox of Reward and Punishment in the Meccan Surahs

Authors: Asmaa Fathy Mohamed Desoky

Abstract:

The subject of this research is the inference of intentionality and context from the verses of the Meccan surahs that include the paradox of reward and punishment, applied to the duality of disbelief and faith. The Holy Quran is the most important sacred linguistic reference in the Arabic language, because it is rich in all the rules of the language in addition to the linguistic miracle. The Quranic text is a first-class intentional text, sent down to convey something to the recipient (Muhammad first, who then communicates it to Muslims) and to influence and convince him, which opens the door to much ijtihad, out of a desire to reach the will of Allah and His intention from His words. Intentionality as a term is one of the most important deliberative terms, but it will be modified here to suit the Quranic discourse, especially since intentionality is related to intention, as noted earlier; that is, it turns the reader or recipient into a predictor of the unseen, which does not correspond to the Quranic discourse. Hence, in this research, a set of dualities related to the paradox of reward and punishment, such as the duality of disbelief and faith, will be identified and studied in order to clarify their meaning according to the opinions of previous interpreters and in accordance with the sanctity of the Quranic discourse. It should be noted that this duality combines opposites and paradox on one level, because it may be an external paradox between action and reaction, an internal paradox in matters related to faith, or a situational paradox in a specific event or a certain fact. It should also be noted that the intention of the Qur'anic text is fully realized in form and content, in whole and in part, and this research presents some applied models of the issues of intentionality and context that appear in the verses of the paradox of reward and punishment in the Meccan surahs of the Quran.

Keywords: intentionality, context, the paradox, reward, punishment, Meccan surahs

Procedia PDF Downloads 79
1064 Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods

Authors: Ainagul Yermekova, Liudmila Goncharenko, Ali Baghirzade, Sergey Sybachin

Abstract:

In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. In connection with these events, the problem of recognizing the characters of the new alphabet arises. Well-known character recognition programs such as ABBYY FineReader, FormReader, and MyScript Stylus did not recognize the specific Kazakh letters that were used in Cyrillic. The authors try to assess the well-known character recognition methods that could be in demand as part of the country's transition to the Latin alphabet. Three methods of character recognition (template, structured, and feature-based) are considered through their algorithms of operation. At the end of the article, a general conclusion is drawn about the applicability of a certain method to a particular recognition task: for example, in the process of a population census, recognition of typographic text in Latin script, or recognition of photos of car number plates, store signs, etc.
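As a concrete illustration of the template method, the sketch below slides a single glyph template over a scanned line with OpenCV and reports candidate matches; the file names and the match threshold are placeholders, not data from the article.

```python
# Minimal sketch of the template method for one character: slide a glyph template over a
# scanned line and report where it matches. File names and the threshold are placeholders.
import cv2
import numpy as np

page = cv2.imread("scanned_line.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
template = cv2.imread("template_A.png", cv2.IMREAD_GRAYSCALE)    # template of one Latin letter

scores = cv2.matchTemplate(page, template, cv2.TM_CCOEFF_NORMED)  # normalized correlation map
ys, xs = np.where(scores >= 0.8)                                   # assumed match threshold
for x, y in zip(xs, ys):
    print(f"letter candidate at x={x}, y={y}, score={scores[y, x]:.2f}")
```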

Keywords: text detection, template method, recognition algorithm, structured method, feature method

Procedia PDF Downloads 187
1063 Unsupervised Domain Adaptive Text Retrieval with Query Generation

Authors: Rui Yin, Haojie Wang, Xun Li

Abstract:

Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to the few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and the generated queries are then used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains while requiring less unlabeled data.
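The three stages described above can be sketched as follows. The checkpoints named in the code are common public models used here as assumptions; they are not necessarily the ones used in the paper, and the two-passage corpus is a dummy.

```python
# Minimal sketch of the three stages: query generation, negative mining, cross-encoder
# labelling. Model names are public checkpoints assumed for illustration.
from transformers import pipeline
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

corpus = ["Insulin regulates blood glucose levels.",
          "The Hough transform detects lines in images."]

# 1) generate a synthetic query for each passage in the target corpus
generator = pipeline("text2text-generation", model="BeIR/query-gen-msmarco-t5-base-v1")
queries = [generator(p, max_length=32)[0]["generated_text"] for p in corpus]

# 2) mine negatives with BM25: top-ranked passages that are not the source passage
bm25 = BM25Okapi([p.lower().split() for p in corpus])
negatives = [max((j for j in range(len(corpus)) if j != i),
                 key=lambda j: bm25.get_scores(q.lower().split())[j])
             for i, q in enumerate(queries)]

# 3) label (query, passage) pairs with a cross-encoder; the scores supervise the retriever
scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
labels = scorer.predict([(q, corpus[i]) for i, q in enumerate(queries)])
print(list(zip(queries, labels)))
```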

Keywords: dense retrieval, query generation, unsupervised training, text retrieval

Procedia PDF Downloads 73
1062 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts need to be extracted from unseen medical text. The system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and annotates the most semantically similar ones with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: "The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of Diabetes." Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
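A minimal sketch of step 2 follows: a seed set of symptom terms is expanded by cosine similarity in a phrase-embedding space. The toy vectors, the candidate terms, and the similarity threshold are assumptions standing in for a Phrase2Vec model trained on an in-domain corpus such as PubMed.

```python
# Minimal sketch of step 2: expand a seed set of symptom terms by cosine similarity in a
# phrase-embedding space. The vectors below are tiny dummies standing in for Phrase2Vec.
import numpy as np

phrase_vectors = {                      # dummy 3-d "contextual model"
    "dry mouth":      np.array([0.9, 0.1, 0.0]),
    "itchy skin":     np.array([0.8, 0.2, 0.1]),
    "blurred vision": np.array([0.7, 0.3, 0.0]),
    "weight loss":    np.array([0.8, 0.1, 0.2]),   # unseen term, semantically close
    "metformin":      np.array([0.0, 0.1, 0.9]),   # unrelated term (a medication)
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

seed = ["dry mouth", "itchy skin", "blurred vision"]
centroid = np.mean([phrase_vectors[t] for t in seed], axis=0)

expanded = [t for t, v in phrase_vectors.items()
            if t not in seed and cosine(centroid, v) > 0.9]   # assumed similarity threshold
print(expanded)   # terms added to the "symptom" concept set, e.g. "weight loss"
```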

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 135
1061 Use and Relationship of Shell Nouns as Cohesive Devices in the Quality of Second Language Writing

Authors: Kristine D. de Leon, Junifer A. Abatayo, Jose Cristina M. Pariña

Abstract:

The current study is a comparative analysis of the use of shell nouns as cohesive devices (CD) in an English as a Second Language (ESL) setting, in order to identify their use and their relationship to the quality of second language (L2) writing. As these nouns have been established to anticipate meaning within, across, or outside the text, their use has fascinated writing researchers. The corpus of the study included published articles from reputable journals and graduate students' papers, in order to analyze the frequency of shell nouns using nouns that are 'highly prevalent' in the academic community, to identify the different lexicogrammatical patterns in which these nouns occur, and to identify the functions connected with these patterns. The results of the study imply that published authors used more shell nouns in their papers than graduate students did. However, the functions of the different lexicogrammatical patterns for the frequently occurring shell nouns are somewhat similar. These results could help students enhance the cohesion of their texts and comprehend them.

Keywords: anaphoric, cataphoric, lexico-grammatical, shell nouns

Procedia PDF Downloads 186
1060 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with the high accuracy required in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based content such as geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures and suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
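A minimal PyTorch sketch of a U-Net-style segmentation network for diagram masks follows; the depth, channel sizes, and input resolution are illustrative choices, not the paper's configuration.

```python
# Minimal sketch of a U-Net-style segmentation network predicting a diagram mask.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)          # 1-channel diagram mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

page = torch.rand(1, 1, 128, 128)                # a grayscale page crop (dummy input)
mask = TinyUNet()(page)                          # per-pixel probability of "diagram"
print(mask.shape)                                # (1, 1, 128, 128)
```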

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 138
1059 An End-to-end Piping and Instrumentation Diagram Information Recognition System

Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha

Abstract:

A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company as portable document format (PDF) files, which are hard to modify. Therefore, engineering companies have to spend a great deal of time and human resources just to manually convert P&ID images into a computer aided design (CAD) file format. To reduce the inefficiency of the P&ID conversion, the various symbols and texts in P&ID images should be automatically recognized. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects. Most objects are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable enough to recognize every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for the P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
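For the final serialization step, the sketch below writes recognized symbols and texts to an XML file with Python's standard library. The element and attribute names are assumptions; the system's actual pre-defined schema is not published in the abstract.

```python
# Minimal sketch of the output step: write recognized symbols and texts to XML.
# The element and attribute names are assumptions, not the system's actual schema.
import xml.etree.ElementTree as ET

detections = [
    {"kind": "equipment_symbol", "label": "pump",  "bbox": (120, 340, 180, 400)},
    {"kind": "text",             "label": "P-101", "bbox": (125, 405, 175, 420)},
]

root = ET.Element("pid_recognition_result", drawing="demo_sheet_01")
for d in detections:
    el = ET.SubElement(root, d["kind"], label=d["label"])
    el.set("bbox", ",".join(str(v) for v in d["bbox"]))

ET.ElementTree(root).write("pid_result.xml", encoding="utf-8", xml_declaration=True)
```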

Keywords: object recognition system, P&ID, symbol recognition, text recognition

Procedia PDF Downloads 153
1058 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process usually involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to N-gram feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
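The core operation, Smith-Waterman local alignment applied to token sequences, can be sketched as below; the scoring values and the example sentences are assumptions for illustration.

```python
# Minimal sketch of Smith-Waterman local alignment applied to token sequences, the core
# operation behind the feature extraction described above. Scoring values are assumptions.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best   # local-alignment score: shared word sequences between two documents

doc1 = "patient reports increased body mass index and fatigue".split()
doc2 = "increased body mass index noted during examination".split()
print(smith_waterman(doc1, doc2))   # higher scores suggest shared, category-typical phrasing
```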

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 297
1057 Composite Kernels for Public Emotion Recognition from Twitter

Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang

Abstract:

The Internet has grown into a powerful medium for information dispersion and social interaction, leading to the rapid growth of social media, which allows users to easily post their emotions and perspectives on certain topics online. Our research aims to use natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with a linear kernel to simultaneously exploit both the tree representation and the distributed emotion-keyword representation, thereby analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
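A minimal sketch of the composite-kernel mechanism follows: two kernel matrices are combined with a mixing weight and passed to an SVM as a precomputed kernel. Here both components are simple linear kernels over different dummy feature views (content features and emotion-keyword features), standing in for the paper's tree kernel plus linear kernel.

```python
# Minimal sketch of a composite kernel: a weighted sum of two kernel matrices fed to an
# SVM as a precomputed kernel. Both component kernels here are simple linear kernels over
# dummy feature views, standing in for the paper's tree kernel plus linear kernel.
import numpy as np
from sklearn.svm import SVC

content_feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])  # dummy view 1
keyword_feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])  # dummy view 2
labels = np.array([0, 0, 1, 1])                                             # two emotions

alpha = 0.5   # mixing weight between the two kernels
K = alpha * content_feats @ content_feats.T + (1 - alpha) * keyword_feats @ keyword_feats.T

clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))   # at test time, K would hold kernel values between test and train tweets
```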

Keywords: emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining

Procedia PDF Downloads 218
1056 Psychological Nano-Therapy: A New Method in Family Therapy

Authors: Siamak Samani, Nadereh Sohrabi

Abstract:

Psychological nano-therapy is a new method based on systems theory. According to the theory, systems with severe dysfunctions are resistant to change, and psychological nano-therapy helps therapists break this ice. Two key concepts in psychological nano-therapy are nano-functions and nano-behaviors. The most important step in applying psychological nano-therapy to family therapy is selecting the most effective nano-function and nano-behavior. The aim of this study was to check the effectiveness of psychological nano-therapy for family therapy. A one-group pre-test/post-test design (quasi-experimental design) was applied. The sample consisted of ten families with severe marital conflict. An important characteristic of these families was their resistance to participating in family therapy. In this study, sending respectful (nano-function) text messages (nano-behavior) by cell phone was applied as the treatment. The cohesion/respect subscale from the self-report family processes scale and the family readiness for therapy scale were used to assess all family members at pre-test and post-test. One family member was asked to send a respectful text message to the other family members every day for a week; the content of the text messages was selected and checked by the therapist. A paired-sample t-test was used to compare the families' pre-test and post-test scores. The results showed significant differences in both the cohesion/respect score and family readiness for therapy between pre-test and post-test, revealing that these families had found a better atmosphere for participating in a complete family therapy program. Indeed, this study showed that psychological nano-therapy is an effective method for building family readiness for therapy.

Keywords: family therapy, family conflicts, nano-therapy, family readiness

Procedia PDF Downloads 659
1055 Jalal-Ale-Ahmad and ‘Critical Consciousness’: A Comparative Study

Authors: Zohreh Ramin

Abstract:

One of the most important contributions that Edward Said has made to the realm of critical theory is his insistence on the worldliness of the text and the critic. By this, Said meant that the critic and the text must be considered in their 'material' contexts. Foregrounding the substantial role of the critic as embodying what he refers to as 'critical consciousness', Said maintains that a true critic is one who can stand between the 'dominant culture' and 'the totalizing forms of critical systems.' Considered one of Iran's major contemporary intellectuals, Jalal Al-e Ahmad is responsible for introducing the idea of 'Westoxication' in Iran, constructing a social paradigm of the necessity of returning to tradition in contemporary Iran. The present paper intends to study Al-e Ahmad's definition of the Orient versus the Occident, his criticism of the 'machination' of contemporary Iranian society, and his solution to the problem of 'Westoxication'. The objective of this study is to see whether Al-e Ahmad can be considered as embodying the spirit of 'critical consciousness' as described by Said, the necessary tool in the hands of an intellectual who is simultaneously attached filiatively to his culture but can detach himself affiliatively by employing critical consciousness.

Keywords: Westoxication, filiative, affiliative, machination

Procedia PDF Downloads 184
1054 Ancient Latin Language and Haiku Poetry: A Case Study between Teaching and Translation Studies

Authors: Arianna Sacerdoti

Abstract:

The translation of Haiku poetry into Latin is fundamentally experimental in nature. One of the first seminal books containing such translations, alongside translations into different modern languages, 'A Piedi Scalzi', was written by Tartamella in 2016. The results of a text-oriented study of this book will be commented upon and analyzed. The author, Arianna Sacerdoti, made similar translations with high school students. Such an experiment garners interest across a diverse range of disciplines, such as teaching, translation studies, and classics reception studies. The methodology employed is text-oriented, as the Haiku translations will be commented on by considering their relationship with the originals. The results of this investigation, conducted within the field of experimental teaching, are expected to confirm the usefulness of this approach to the teaching of Latin and its potential to actively involve students in identifying the diachronic differences between the world of classical antiquity and the contemporary one.

Keywords: ancient latin, Haiku, translation studies, reception of classics

Procedia PDF Downloads 134
1053 Move Analysis of Death Row Statements: An Explanatory Study Applied to Death Row Statements in Texas Department of Criminal Justice Website

Authors: Giya Erina

Abstract:

Linguists have analyzed the rhetorical structure of various forensic genres, but only a few have investigated the complete structure of death row statements. Unlike other forensic text types, such as suicide or ransom notes, the focus of death row statement analysis is not the authenticity or falsity of the text, but its intended meaning and its communicative purpose. As it constitutes their last statement before their execution, there are probably many things that inmates would like to express. This study mainly examines the rhetorical moves of 200 death row statements from the Texas Department of Criminal Justice website using rhetorical move analysis. The rhetorical moves identified in the statements will be classified based on their communicative purpose, and they will be grouped into moves and steps. A move structure will finally be suggested from the most common or characteristic moves and steps, as well as some sub-moves. However, because of some statements’ atypicality, some moves may appear in different parts of the texts or not at all.

Keywords: Death row statements, forensic linguistics, genre analysis, move analysis

Procedia PDF Downloads 296