Search results for: textual data

24264 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only those that carry information. It may, however, produce noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
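
As a rough illustration of the denoising step, the sketch below applies 2D wavelet shrinkage to a synthetic contingency table using PyWavelets; the wavelet family, threshold rule and table are assumptions, not the settings used in the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
table = rng.poisson(lam=5, size=(64, 64)).astype(float)      # stand-in contingency table
noisy = table + rng.normal(scale=2.0, size=table.shape)       # simulated noise from pretreatment

# Decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec2(noisy, wavelet="db2", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745            # noise estimate from finest diagonal details
threshold = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold (assumed rule)
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, wavelet="db2")
print(denoised.shape)  # same shape as the input table, with reduced noise
```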

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 252
24263 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and thus generate new concepts and new semantic links. Even when using the more specific vocabularies of the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages with only minor adjustments.

Keywords: semantic links, data mining, linked data, SKOS

Procedia PDF Downloads 138
24262 Systemic Functional Grammar Analysis of Barack Obama's Second Term Inaugural Speech

Authors: Sadiq Aminu, Ahmed Lamido

Abstract:

This research studies Barack Obama’s second inaugural speech using Halliday’s Systemic Functional Grammar (SFG). SFG is a text grammar which describes how language is used so that the meaning of the text can be better understood. The primary source of data in this research is Barack Obama’s second inaugural speech, which was obtained from the internet. The analysis of the speech was based on the ideational and textual metafunctions of Systemic Functional Grammar. Specifically, the researcher analyses the Process Types and Participants (ideational) and the Theme/Rheme (textual). It was found that the material process (process of doing) was the most frequently used process type and that ‘We’, referring to the people of America, was the most frequently used Theme. Application of SFG theory, therefore, gives a better account of the meaning of Barack Obama’s speech.

Keywords: ideational, metafunction, rheme, textual, theme

Procedia PDF Downloads 125
24261 Discrimination in Insurance Pricing: A Textual-Analysis Perspective

Authors: Ruijuan Bi

Abstract:

Discrimination in insurance pricing is a topic of increasing concern, particularly in the context of the rapid development of big data and artificial intelligence. There is a need to explore its various forms, such as direct and indirect discrimination, proxy discrimination, algorithmic discrimination, and unfair discrimination, and to understand their implications for insurance pricing models. This paper aims to analyze and interpret the definitions of discrimination in insurance pricing and to explore measures to reduce it. It utilizes a textual analysis methodology, gathering qualitative data from the relevant literature on definitions of discrimination. Through textual analysis, the paper identifies the specific characteristics and implications of each form of discrimination in the general insurance industry. This research contributes to the theoretical understanding of discrimination in insurance pricing: by analyzing and interpreting the relevant literature, it provides insights into the definitions of discrimination and the laws and regulations surrounding it. This theoretical foundation can inform future empirical research on discrimination in insurance pricing using relevant tools from probability theory.

Keywords: algorithmic discrimination, direct and indirect discrimination, proxy discrimination, unfair discrimination, insurance pricing

Procedia PDF Downloads 33
24260 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification

Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar

Abstract:

Recognition of emotional information is essential in any form of communication. The growth of HCI (Human-Computer Interaction) in recent times underlines the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data is used for emotion recognition. Text, being the least expressive of the multimodal resources, poses various challenges, such as limited contextual information and the sequential nature of language construction. This work proposes a neural architecture to resolve no fewer than eight emotions from textual data sources derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted are Anger, Disgust, Fear, Guilt, Joy, Sadness, Shame, and Surprise. Textual data from multiple datasets, such as ISEAR, GoEmotions, and Affect, were combined to create the emotion dataset; overlapping or conflicting samples were handled with careful preprocessing. Our results show a significant improvement with the proposed architecture, including as much as a 10-point improvement in recognizing some emotions.
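
A minimal Keras sketch of this kind of architecture is given below: embeddings feed a bidirectional LSTM, a multi-head self-attention layer, and sigmoid (one-vs-all) outputs for eight emotions. The layer sizes, vocabulary and training setup are assumptions; the authors' pre-trained word2vec weights are not reproduced.

```python
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB_DIM, N_EMOTIONS = 20000, 60, 300, 8

tokens = layers.Input(shape=(MAXLEN,), dtype="int32")
# In the paper this layer would be initialised with pre-trained word2vec vectors.
emb = layers.Embedding(VOCAB, EMB_DIM)(tokens)
seq = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(emb)
att = layers.MultiHeadAttention(num_heads=4, key_dim=32)(seq, seq)  # self-attention over the LSTM states
pooled = layers.GlobalAveragePooling1D()(att)
probs = layers.Dense(N_EMOTIONS, activation="sigmoid")(pooled)      # one-vs-all scores, one per emotion

model = Model(tokens, probs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```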

Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings

Procedia PDF Downloads 145
24259 The Interleaving Effect of Subject Matter and Perceptual Modality on Students’ Attention and Learning: A Portable EEG Study

Authors: Wen Chen

Abstract:

To investigate the interleaving effect of subject matter (mathematics vs. history) and perceptual modality (visual vs. auditory materials) on students’ attention and learning outcomes, the present study collected self-reported data on subjective cognitive load (SCL) and attention level, EEG data, and learning outcomes from micro-lectures. Eighty-one 7th-grade students were randomly assigned to four learning conditions: blocked (by subject matter) micro-lectures with auditory textual information (B-A condition), blocked (by subject matter) micro-lectures with visual textual information (B-V condition), interleaved (by subject matter) micro-lectures with auditory textual information (I-A condition), and micro-lectures interleaved by both perceptual modality and subject matter (I-all condition). The results showed that although other interleaved conditions may show advantages on certain indices, the I-all condition produced the best overall outcomes (best performance, low SCL, and high attention). This study suggests that interleaving by both subject matter and perceptual modality should be preferred when scheduling and planning classes.

Keywords: cognitive load, interleaving effect, micro-lectures, sustained attention

Procedia PDF Downloads 108
24258 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources

Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha

Abstract:

Training a language model for a minority language is a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in Natural Language Processing (NLP), and the need for high computational resources and bulk data further limits this task. In this paper, we present the following contributions: (1) we introduce and use a translation pair set of Tagalog and English (TL-EN) in pre-training a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking, pre-trained semantic textual similarity binary task (STSB) models on both TL-EN and STS dataset pairs; and (3) we reduce the size of the model to offset the need for high computational resources. Based on our results, the models pre-trained on translation pairs and STS pairs perform well on the STSB task. Moreover, reducing the model to a smaller dimension has no negative effect on performance; rather, it yields a notable increase in the similarity scores. Finally, models pre-trained on a similar dataset have a tremendous effect on the model’s performance scores.
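
The sketch below illustrates the general pipeline, assuming a publicly available multilingual checkpoint and a toy pair set: encode sentence pairs, reduce the embedding dimension, and score the binary similarity decision by cosine similarity. It does not reproduce the authors' TL-EN pre-training or compression method.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

pairs = [("Ang bata ay kumakain.", "The child is eating."),
         ("Umuulan ngayon.", "The sun is shining brightly.")]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed checkpoint
left = model.encode([p[0] for p in pairs])
right = model.encode([p[1] for p in pairs])

# Dimension reduction: fit PCA on all embeddings, then re-project both sides.
pca = PCA(n_components=2).fit(np.vstack([left, right]))   # toy size; larger (e.g. 128) in practice
left_r, right_r = pca.transform(left), pca.transform(right)

for (s1, s2), l, r in zip(pairs, left_r, right_r):
    score = cosine_similarity([l], [r])[0, 0]
    print(f"{score:+.2f}  {'similar' if score > 0.5 else 'different'}  | {s1} / {s2}")
```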

Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models

Procedia PDF Downloads 173
24257 Resource Framework Descriptors for Interestingness in Data

Authors: C. B. Abhilash, Kavi Mahesh

Abstract:

Human beings are the most advanced species on earth, largely because of the ability to communicate and share information via human language. In today's world, a huge amount of data is available on the web in text format, which has also resulted in the generation of big data in structured and unstructured formats. In general, this data is textual and highly unstructured. To get insights and actionable content from it, we need to incorporate the concepts of text mining and natural language processing. In our study, we mainly focus on interesting data, from which interesting facts are generated for the knowledge base. The approach is to derive analytics from the text via the application of natural language processing. Using the semantic web's Resource Description Framework (RDF), we generate triples from the given data and derive interesting patterns. The methodology also illustrates data integration using RDF for reliable, interesting patterns.
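
A small illustrative sketch of the triple-generation idea follows, using rdflib with hypothetical facts and an example namespace; the study's actual knowledge base and interestingness measures are not reproduced here.

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/facts/")
g = Graph()
g.bind("ex", EX)

# Triples that text analytics might have extracted (hypothetical examples).
facts = [
    (EX.Bangalore, EX.locatedIn, EX.Karnataka),
    (EX.Karnataka, EX.locatedIn, EX.India),
    (EX.Bangalore, EX.population, Literal(8443675)),
]
for s, p, o in facts:
    g.add((s, p, o))

# A simple pattern query: transitive "locatedIn" chains as candidate interesting facts.
query = """
SELECT ?a ?c WHERE {
    ?a ex:locatedIn ?b .
    ?b ex:locatedIn ?c .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(f"{row.a} is (indirectly) located in {row.c}")
```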

Keywords: RDF, interestingness, knowledge base, semantic data

Procedia PDF Downloads 124
24256 Neural Networks Models for Measuring Hotel Users Satisfaction

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Nowadays, user comments on the Internet have an important impact on hotel bookings, confirming that e-reputation can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels, and the opinion mining field offers a unique opportunity to analyze the comments: it provides the possibility of extracting information related to the polarity of user reviews. This sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier to better manage his marketing strategy, and the score obtained is translated into an image of the hotel that differentiates it from others. This research therefore highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, sentiment analysis can be performed with several machine learning techniques; this study treats the extracted textual data using the Artificial Neural Networks (ANNs) approach. In this context, we adopt this technique to extract information from the comments available on the TripAdvisor website. The paper details the description and modeling of the ANN approach for scoring online hotel reviews. The validation of the method provides a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users' reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis and that the proposed approach also serves dimensionality reduction for clustering textual data. This study thus provides researchers with a useful exploration of the technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
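
As a compact sketch of this pipeline, assuming a handful of invented reviews and polarity labels, a small feed-forward network can be trained on TF-IDF features and its predicted probabilities averaged into a hotel-level score; the authors' TripAdvisor data and exact network are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

reviews = ["Lovely staff and a very clean room",
           "Dirty bathroom, rude reception, never again",
           "Great location, and the breakfast was excellent",
           "Terrible noise all night and a broken air conditioner"]
polarity = [1, 0, 1, 0]  # 1 = positive, 0 = negative (assumed labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviews)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(X, polarity)

new_reviews = ["friendly staff but the room was dirty", "clean, quiet and great value"]
scores = ann.predict_proba(vectorizer.transform(new_reviews))[:, 1]
print(f"hotel e-reputation score: {scores.mean():.2f}")  # average polarity of its reviews
```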

Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring

Procedia PDF Downloads 103
24255 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: it can capture the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can be a key input for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision trees, random forests, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole dataset into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole dataset and the new data subsets to automatically detect their sentiment. The significance of this paper is in comparing the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the dataset, selecting and running the data mining algorithms, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task; on the second task, however, models that used the part-of-speech feature underperformed in comparison with models that used unigrams and bigrams.
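
A simplified sketch of the part-of-speech representation follows, with toy feedback sentences and assumed labels: each token is replaced by its PoS tag before vectorization, so the same classifier can be compared against a plain n-gram model. The full KDDM framework and the study's data are not reproduced.

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

feedback = ["The exam questions were far too long",
            "Great lab facilities and friendly staff",
            "Coursework deadlines clashed with other units",
            "The lecture rooms were always overcrowded"]
assessment_related = [1, 0, 1, 0]  # assumed labels for illustration

def pos_transform(text):
    """Replace each token with its PoS tag, e.g. 'DT NN NNS VBD ...'."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)

pos_docs = [pos_transform(t) for t in feedback]

pos_model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
pos_model.fit(pos_docs, assessment_related)

ngram_model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
ngram_model.fit(feedback, assessment_related)

test = "The test was unfair and badly timed"
print("PoS features :", pos_model.predict([pos_transform(test)])[0])
print("n-gram feats :", ngram_model.predict([test])[0])
```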

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 103
24254 Anomaly Detection Based on System Log Data

Authors: M. Kamel, A. Hoayek, M. Batton-Hubert

Abstract:

With the increase of network virtualization and the disparity of vendors, the continuous monitoring and detection of anomalies cannot rely on static rules. An advanced analytical methodology is needed to discriminate between ordinary events and unusual anomalies. In this paper, we focus on log data (textual data), which is a crucial source of information for network performance. We introduce an algorithm used as a pipeline to help with the pretreatment of such data, group it into patterns, and dynamically label each pattern as an anomaly or not. Such tools will provide users and experts with continuous real-time log monitoring capability to detect anomalies and failures in the underlying system that can affect performance. An application to real-world data illustrates the algorithm.
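
A bare-bones sketch of the pretreatment idea is shown below, with invented log lines: variable parts of each message are masked to obtain a pattern, messages are grouped by pattern, and rare patterns are flagged as candidate anomalies. The actual pipeline and its dynamic labelling are more elaborate.

```python
import re
from collections import Counter

logs = [
    "2023-01-02 10:00:01 iface eth0 up, speed 1000Mb/s",
    "2023-01-02 10:00:05 iface eth1 up, speed 1000Mb/s",
    "2023-01-02 10:01:12 iface eth0 up, speed 1000Mb/s",
    "2023-01-02 10:02:47 kernel panic at address 0x7f3a21",
]

def to_pattern(line: str) -> str:
    line = re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TS>", line)  # timestamps
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)                      # hex addresses
    line = re.sub(r"\d+", "<NUM>", line)                                 # other numbers
    return line

patterns = [to_pattern(l) for l in logs]
counts = Counter(patterns)

for line, pat in zip(logs, patterns):
    share = counts[pat] / len(logs)
    label = "ANOMALY" if share < 0.3 else "normal"   # assumed rarity threshold
    print(f"[{label}] {line}")
```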

Keywords: logs, anomaly detection, ML, scoring, NLP

Procedia PDF Downloads 60
24253 A Study on Puzzle-Based Game to Teach Elementary Students to Code

Authors: Jaisoon Baek, Gyuhwan Oh

Abstract:

In this study, we developed a puzzle game based on coding and a web-based management system to observe the user's learning status in real time and maximize elementary students' understanding of coding. We have improved upon an existing coding game that could neither be connected to textual language coding nor track learning state. We analyzed the syntax of various coding languages for the curriculum and provided a menu to convert icons into textual coding languages. In addition, the management system includes multiple types of tutoring, real-time analysis of user play data, and feedback. Following its application in regular elementary school software classes, students reported positive effects on their understanding of and interest in coding. It is expected that this will contribute to quality improvement in software education by providing contents with proven educational value, breaking away from simple learning-oriented coding games.

Keywords: coding education, serious game, coding, education management system

Procedia PDF Downloads 113
24252 Text Mining Past Medical History in Electrophysiological Studies

Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly

Abstract:

Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, mainly unstructured and lacking normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, which makes it especially relevant in medicine. The neurological patient’s history allows the clinician to define the patient’s symptoms and, along with the results of the nerve conduction study (NCS) and electromyography (EMG) tests, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine the differences by sex and age in a large cohort, based on free-text notes. Methods: We retrospectively identified all patients with abnormal NCS between May 2016 and February 2022. Age, gender, and all NCS attribute reports, including the summary text, were recorded. All patients’ histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very common words (like ‘left’ and ‘right’) were deleted, and several words were replaced with their abbreviations. A bag-of-words approach was used to perform the analyses, and visualizations common in text analysis were created to easily grasp the results. Results: We identified 5282 unique patients, 3005 (57%) of whom had documented PMH; 60.4% (n=1817) of these were males. The total median age was 62 years (range 0.12–97.2 years), and the majority of patients (83%) presented after the age of forty years. The top two documented medical histories were diabetes mellitus (DM) and surgery: DM was observed in 16.3% of the patients and surgery in 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back, and carpal tunnel release (CTR). When separating the data by sex, DM and MVA are more frequent among males, while cancer and CTR are less frequent. The top medical history in females was surgery, followed by DM; other frequent histories among females are breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMH were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are most relevant in these circumstances and to gain additional insights regarding sex and age differences. These insights might help to collect epidemiological and demographic data as well as raise new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition; therefore, using a coding system could be beneficial.
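
A toy sketch of the bag-of-words comparison follows, using fabricated history notes rather than the study's clinical data: clean the text, normalise a few terms, and count history terms separately for male and female patients.

```python
import re
from collections import Counter

records = [
    ("M", "PMH: diabetes mellitus, s/p knee surgery 2015"),
    ("M", "History of MVA with lumbar fracture"),
    ("F", "Breast ca, s/p surgery, carpal tunnel release"),
    ("F", "DM type 2, discopathy, low back pain"),
]

STOPWORDS = {"pmh", "history", "of", "with", "type", "s", "p", "low"}
ABBREV = {"diabetes": "dm", "mellitus": "", "ca": "cancer"}  # assumed normalisations

def tokens(text):
    words = re.findall(r"[a-z]+", text.lower())
    words = [ABBREV.get(w, w) for w in words]
    return [w for w in words if w and w not in STOPWORDS]

by_sex = {"M": Counter(), "F": Counter()}
for sex, note in records:
    by_sex[sex].update(tokens(note))

for sex, counter in by_sex.items():
    print(sex, counter.most_common(5))  # most frequent history terms per sex
```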

Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis

Procedia PDF Downloads 62
24251 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition

Authors: Anes Enakoa, Yawei Liang

Abstract:

Question Answering (QA) is one of the most important and demanding tasks in the field of Natural Language Processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one is correct. Answer selection is one of the main components of QA and is concerned with selecting the best answer choice from the candidate answers suggested by the system. However, the selection process can be very challenging, especially in Arabic, due to its particularities. To address this challenge, an approach is proposed to answer questions with multiple answer choices for Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic, and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments was conducted for performance evaluation, and the overall performance of the proposed method reached an accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
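
A first-cut sketch of the classification setup is shown below, using toy English pairs and simple lexical-overlap features in place of the paper's Arabic data and richer semantic and syntactic features.

```python
import numpy as np
from sklearn.svm import SVC

pairs = [
    ("The committee approved the new budget", "The budget was approved", 1),
    ("The committee approved the new budget", "The budget was rejected", 0),
    ("Rain flooded the old town yesterday", "The town was flooded", 1),
    ("Rain flooded the old town yesterday", "The town stayed dry", 0),
]

def features(text, hyp):
    t, h = set(text.lower().split()), set(hyp.lower().split())
    overlap = len(t & h) / len(h)   # lexical coverage of the hypothesis by the text
    len_ratio = len(h) / len(t)     # crude structural proxy
    return [overlap, len_ratio]

X = np.array([features(t, h) for t, h, _ in pairs])
y = np.array([label for _, _, label in pairs])
clf = SVC(kernel="rbf").fit(X, y)

text, hyp = "The committee approved the new budget", "Someone approved a budget"
print("entailed" if clf.predict([features(text, hyp)])[0] == 1 else "not entailed")
```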

Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment

Procedia PDF Downloads 115
24250 The Challenges of Hyper-Textual Learning Approach for Religious Education

Authors: Elham Shirvani–Ghadikolaei, Seyed Mahdi Sajjadi

Abstract:

State-of-the-art technology has had a tremendous impact on our lives, and the education system has been influenced as well. This paper compares the two learning spaces of text and hypertext and discusses some challenges of using hypertext in religious education. Hypertext is an undeniable part of learning in today's world and is highly beneficial for the education process, from the classroom to the office and the home. The paper addresses the question of the consequences and challenges of applying hypertext in religious education. The findings of this survey also demonstrate the role of the curriculum designer and education planner in addressing these challenges.

Keywords: hyper-textual, learning, religious education, learning text

Procedia PDF Downloads 282
24249 A Contrastive Rhetoric Study: The Use of Textual and Interpersonal Metadiscoursal Markers in Persian and English Newspaper Editorials

Authors: Habibollah Mashhady, Moslem Fatollahi

Abstract:

This study contrasts the use of metadiscoursal markers in English and Persian newspaper editorials as persuasive text types. These markers are linguistic elements in the text which do not add to its propositional content; rather, they serve to realize Halliday’s (1985) textual and interpersonal functions of language. First, some of the most common markers from five subcategories (Text Connectives, Illocution Markers, Hedges, Emphatics, and Attitude Markers) were identified in both English and Persian newspapers. Then, the frequency of occurrence of these markers was recorded in an English and a Persian corpus consisting of 44 randomly selected editorials (18,000 words each) from several English and Persian newspapers. A two-way chi-square analysis showed that the overall observed χ² was highly significant, so the null hypothesis of no difference was confidently rejected. Finally, to determine the contribution of each subcategory to the overall χ² value, one-way chi-square analyses were applied to the individual subcategories. The results indicated that only two of the five subcategories of markers were statistically significant. This difference is attributed to the differing spirits prevailing in the linguistic communities involved. Regarding the minor research question, it was found that, in contrast to English writers, Persian writers are more writer-oriented in their writing.
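
The statistical step can be sketched as follows with invented counts (the real frequencies come from the 44-editorial corpora): a two-way chi-square test over languages and subcategories, followed by one-way tests per subcategory.

```python
import numpy as np
from scipy.stats import chi2_contingency, chisquare

subcategories = ["Text Connectives", "Illocution Markers", "Hedges",
                 "Emphatics", "Attitude Markers"]
# Rows: English, Persian editorials; the counts below are assumed, for illustration only.
observed = np.array([
    [310, 120, 95, 60, 45],
    [280, 150, 70, 110, 65],
])

chi2, p, dof, _ = chi2_contingency(observed)       # two-way analysis
print(f"overall chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

for name, counts in zip(subcategories, observed.T):  # one-way analysis per subcategory
    chi2_j, p_j = chisquare(counts)
    print(f"{name:20s} chi-square = {chi2_j:6.2f}  p = {p_j:.4f}")
```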

Keywords: metadiscoursal markers, textual meta-function, interpersonal meta-function, persuasive texts, English and Persian newspaper editorials

Procedia PDF Downloads 545
24248 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data

Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis

Abstract:

Policy making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics is, therefore, key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support modelling of public policy problem situations in a more objective way, based on domain experts' knowledge and scientific evidence. The objective of this study is to support the modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for analysing qualitative data from multiple information sources on a policy problem in order to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.
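
The inference-extraction step can be sketched with a few hand-written sentences and trigger patterns, as below; the actual algorithm relies on richer NLP than this regular-expression illustration.

```python
import re

sentences = [
    "Rising unemployment leads to lower household consumption.",
    "Air pollution causes respiratory disease in urban areas.",
    "Budget cuts result in longer waiting times for patients.",
]

TRIGGERS = r"(?:leads to|causes|results? in|contributes to)"
pattern = re.compile(rf"(?P<cause>.+?)\s+{TRIGGERS}\s+(?P<effect>.+?)[.]")

edges = []
for s in sentences:
    m = pattern.search(s)
    if m:
        edges.append((m.group("cause").strip(), m.group("effect").strip()))

# Each (cause, effect) pair becomes a directed edge in the causal diagram.
for cause, effect in edges:
    print(f"{cause}  -->  {effect}")
```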

Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction

Procedia PDF Downloads 556
24247 Integrating Critical Stylistics and Visual Grammar: A Multimodal Stylistic Approach to the Analysis of Non-Literary Texts

Authors: Shatha Khuzaee

Abstract:

The study develops a multimodal stylistic approach to analyse a number of BBC online news articles reporting key events from the so-called ‘Arab Uprisings’. Critical stylistics (CS) and visual grammar (VG) provide insightful arguments about the ways ideology is projected through different verbal and visual modes, yet they are mode-specific: they examine how each mode projects its meaning separately and do not attempt to clarify what happens intersemiotically when the two modes co-occur. It is therefore the task undertaken in this research to propose a multimodal stylistic approach that addresses the issue of ideology construction when the two modes co-occur. Informed by functional grammar and social semiotics, the analysis integrates three linguistic models developed in critical stylistics, namely transitivity choices, prioritizing, and hypothesizing, along with their visual equivalents adopted from visual grammar, to investigate the way ideology is constructed in multimodal texts when text and image participate and interrelate in the process of meaning-making at the textual level of analysis. The analysis provides comprehensive theoretical and analytical elaboration of the different points of integration between CS linguistic models and their VG equivalents, which operate on the textual level of analysis, to better account for ideology construction in news as non-literary multimodal texts. It is argued that the analysis marks a first step towards integrating the well-established linguistic models of critical stylistics with those of visual analysis to analyse multimodal texts on the textual level. Both approaches are compatible for producing a multimodal stylistic approach because they intend to analyse text and image depending on whatever textual evidence is available. This helps the analysis maintain the rigour and replicability needed for a stylistic analysis such as the one undertaken in this study.

Keywords: multimodality, stylistics, visual grammar, social semiotics, functional grammar

Procedia PDF Downloads 192
24246 Some Theoretical Approaches on the Style of Lyrical Subject of the Confessional Poetry

Authors: Lemac Tin

Abstract:

This paper deals with the lyrical subject of confessional poetry, which is the main part of its stylistic structure. We identify two types of this subject in the classical confessional poetic discourse: the reflexive and the authentic subject. We offer a model of their genesis, textual features, and appearance realisations. The genesis is related to theories deriving poetry from emotion and magic and to their similar position in primitive lyrics and the lyrics of the ancient civilizations. The textual features are related to the emotive and semiotic analysis of each type. The appearance realisations of these two types are the I-subject, the We-subject, the transvocal subject, and the objectified subject. We test these approaches on selected poems from world literature.

Keywords: confessional poetry, confessional lyrical subject, magic, emotion, emotive analysis, semiotic analysis

Procedia PDF Downloads 241
24245 Linguistic Analysis of Holy Scriptures: A Comparative Study of Islamic Jurisprudence and the Western Hermeneutical Tradition

Authors: Sana Ammad

Abstract:

The traditions of linguistic analysis in Islam and Christianity have developed independently of each other, in light of the social developments specific to their historical contexts. Recently, however, an increasing number of Muslim academics educated in the West have tried to apply the Western tradition of linguistic interpretation to the Qur’anic text while completely disregarding the Islamic linguistic tradition used and developed by traditional scholars over the centuries. The aim of the paper is to outline the linguistic tools and methods used by traditional Islamic scholars for interpreting the Holy Qur’an and to shed light on how they contribute to a better understanding of the text compared to their Western counterparts. The paper carries out a descriptive-comparative study of the linguistic tools developed and perfected by traditional scholars in Islam for the textual analysis of the Qur’an, as described in the authentic works of Usul al-Fiqh (jurisprudence), and the principles of textual analysis employed by the Western hermeneutical tradition for the study of the Bible. First, it briefly outlines the independent historical development of the two traditions, emphasizing the final normative shape each has taken. It then draws a comparison of the two traditions, highlighting the similarities and differences between them. Finally, the paper demonstrates the level of academic excellence achieved by traditional linguistic scholars in developing appropriate tools of textual interpretation and argues that these tools are more suitable for interpreting the Qur’an than the Western principles. Since interpreters in both traditions aim to attain an objective understanding of the Scriptures, the emphasis of the paper is on how well the Islamic method of linguistic interpretation contributes to an objective understanding of the Qur’anic text. The paper concludes with the following findings: the Western hermeneutical tradition of linguistic analysis developed within the Western historical context, whereas the Islamic method of linguistic analysis is more highly developed and complex and better serves the purpose of an objective understanding of the Holy text.

Keywords: Islamic jurisprudence, linguistic analysis, textual interpretation, western hermeneutics

Procedia PDF Downloads 289
24244 Exploring the Difficulties of Acceleration Concept from the Perspective of Historical Textual Analysis

Authors: Yun-Ju Chiu, Feng-Yi Chen

Abstract:

Kinematics is the starting point for learning mechanics in physics courses, and the concept of acceleration plays an important role in learning kinematics. Teachers usually teach the concept through the formulas and graphs of kinematics and the well-known law F = ma. However, over the past few decades, many researchers have revealed numerous student difficulties in learning acceleration. One of these difficulties is that students frequently confuse acceleration with velocity and force. Why is the concept of acceleration so difficult to learn? The aim of this study is to understand the conceptual evolution of acceleration through historical textual analysis. Text analysis and one-to-one interviews with high school students and teachers are used in this study. This study finds that the history of science constructed from textbooks is usually quite different from the real evolution of history. For example, most teachers and students believe that the best-known law F = ma was written down by Newton, yet the second law is not expressed as F = ma in Newton's best-known book, the Principia (1687). Even more than one hundred years later, a famous Cambridge textbook, An Elementary Treatise on Mechanics by Whewell of Trinity College, did not express this law as F = ma. In Whewell's time, early to mid-nineteenth-century Britain, the concept of acceleration was not only ambiguous but also confused with the concept of force. The process of learning the concept of acceleration is analogous to its conceptual development in history. The study, from the perspective of historical textual analysis, will promote the understanding of concept-learning difficulties, the development of professional physics teaching, and the improvement of physics textbooks.

Keywords: acceleration, textbooks, mechanics, misconception, history of science

Procedia PDF Downloads 223
24243 Grammatical Parallelism in the Qurʼān

Authors: Yehudit Dror

Abstract:

Parallelism, or as it is called in Arabic, al-muqābala, occupies a central position in the rhetorical discipline of ʻilm al-bayān. Parallelism is used as a figure of textual ornamentation or embellishment and can be divided into several types based on the semantics of parallelism and its formative structure. Parallelism in Arabic has received a considerable amount of attention from Arab rhetoricians, which enables an understanding of the essence of parallelism in Arabic: its types, structure, and meaning. However, there are some lacunae in their descriptions concerning the function and thematic restrictions of parallelism in the Qur’ān. This presentation focuses on grammatical parallelism, where the two stichoi of the parallelism are identical with respect to syntax and morphology, and shows that parallelism has some important roles in the textual arrangement; it may, for example, conclude a thematic section, indicate a turning point in the text, or clarify what has been said previously. In addition, it will be shown that parallelism is not used randomly in the Qurʼān but rather is restricted to repeated themes which carry the most important messages of the Qurʼān, such as God's might or the behavioral patterns of the believers and the non-believers; it can also be used as a stylistic device.

Keywords: grammatical parallelism, half-line, symmetry, Koran

Procedia PDF Downloads 298
24242 Unsupervised Text Mining Approach to Early Warning System

Authors: Ichihan Tai, Bill Olson, Paul Blessner

Abstract:

Traditional early warning systems that alarm against crisis are generally based on structured or numerical data; therefore, a system that can make predictions based on unstructured textual data, an uncorrelated data source, is a great complement to traditional early warning systems. The Chicago Board Options Exchange (CBOE) Volatility Index (VIX), commonly referred to as the fear index, measures the cost of insurance against market crash and spikes in the event of a crisis. In this study, news data is used to predict whether there will be a market-wide crisis by predicting the movement of the fear index, and historical references to similar events are presented in an unsupervised manner. Topic modeling-based prediction and representation are built on daily news data between 1990 and 2015 from The Wall Street Journal against VIX index data from the CBOE.
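
A toy sketch of the topic-modeling step follows, with four invented headlines standing in for the WSJ corpus: fit LDA, inspect the top terms per topic, and obtain per-document topic mixtures that a downstream model of fear-index movements could consume.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

headlines = [
    "Bank failures spark fears of credit crisis and market crash",
    "Tech stocks rally as earnings beat expectations",
    "Mortgage defaults rise sharply amid housing slump",
    "Central bank holds rates steady as inflation cools",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(headlines)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")

# Per-document topic mixtures (features for a downstream VIX-movement model).
print(lda.transform(X).round(2))
```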

Keywords: early warning system, knowledge management, market prediction, topic modeling

Procedia PDF Downloads 298
24241 Aspects of the Promotional Language of Tourism in Social Media. A Case Study of Romanian Accommodation Industry

Authors: Sanda-Maria Ardeleanu, Ana Crăciunescu

Abstract:

This paper builds on our previous research on discursive strategies, which demonstrated that tourism has developed and employed a promotional language per se. We have studied this concept within the framework of audio-visual advertising by analyzing its discursive structures at the level of three main strategies (textual, visual, and mixed textual-visual) and confirmed the applicability of the promotional language per se within the field. Tourism at large is a field with great interdisciplinary potential, which allowed us to use qualitative methods of research such as Discourse Analysis (DA). Further research showed that in the third phase of qualitative research methodologies, scholars in tourism recognized semiotics and DA as potential paths to follow that were insufficiently explored at the time, so we realized that the natural next step is to bring together common qualitative methodologies for both fields, such as the method of observation, triangulation, and Discourse Analysis. Therefore, and in light of the fast transformation of the medium that intermediates the message, in this paper we focus on the manifestations of the promotional language in social media texts which advertise the urban accommodation industry in Romania. We constitute a corpus of study as the basis for our research methodology and, through the empirical method of observation and DA, we propose to recognize or discover new patterns developed at the textual level (mainly), the visual level, or the mix of the two, known as strategies of the promotional language of tourism.

Keywords: discourse analysis, promotional language of tourism, social media, urban accommodation industry, tourism

Procedia PDF Downloads 120
24240 The Influence of Modern Islamic Thought Liberalization to the Improvement of Science

Authors: Muhammad Ilham Agus Salim

Abstract:

The liberalization of Islamic thought has had an impact not only on the Muslim community's worldview but also on the reconstruction of contemporary science. This can be seen in the emergence of Western and Eastern intellectual movements that try to reconstruct contemporary science, arguing that the current scientific culture is unable to lead society to a better order. Such liberalization of Islamic thought has a huge influence on the multidimensional crisis in various sectors, such as the economy, culture, politics, and ecology. Therefore, this paper examines the effects of the liberalization of contemporary Islamic thought on the development of modern science. The method used is a textual study of the Qur'an, the Hadith (prophetic tradition), and the history of contemporary Islamic thought, comparing them with the reality of the development of science today. The study finds that the liberalization of Islamic thought has created a crisis and stagnation in the development of scientific disciplines.

Keywords: liberalization, science, Islam, al-Qur’an textual studies

Procedia PDF Downloads 371
24239 Acausal and Causal Model Construction with FEM Approach Using Modelica

Authors: Oke Oktavianty, Tadayuki Kyoutani, Shigeyuki Haruyama, Junji Kaneko, Ken Kaminishi

Abstract:

Modelica has many advantages and is very useful for modeling and simulation, especially for multi-domain, complex technical systems. However, the big obstacle for a beginner is understanding the basic concepts and building a new model for a real system. In order to show how to solve a simple circuit model by hand translation and to give a better understanding of how Modelica works, we provide a detailed explanation of the solver ordering system, with horizontal and vertical sorting, and make some proposals for improvement. In this study, some difficulties in using the Modelica software with its original concepts are discussed and compared with the Finite Element Method (FEM) approach. We also present our textual modeling approach using the FEM concept for acausal and causal model construction. Furthermore, simulation results are provided that compare textual modeling with original Modelica code and with the FEM concept.

Keywords: FEM, acausal model, Modelica, horizontal and vertical sorting

Procedia PDF Downloads 282
24238 A Novel Probabilistic Spatial Locality of Reference Technique for Automatic Cleansing of Digital Maps

Authors: A. Abdullah, S. Abushalmat, A. Bakshwain, A. Basuhail, A. Aslam

Abstract:

GIS (Geographic Information System) applications require geo-referenced data, which may be available as databases or in the form of digital or hard-copy agro-meteorological maps. These parameter maps are color-coded, with different regions corresponding to different parameter values, and converting such maps into a database is not very difficult. However, the text and other planimetric elements overlaid on these maps make an accurate image-to-database conversion a challenging problem: it is almost impossible to exactly replace what was underneath the text or icons, which points to the need for inpainting. In this paper, we propose a probabilistic inpainting approach that uses the probability of spatial locality of colors in the map to replace overlaid elements with the underlying color. We tested the limits of the proposed technique using non-textual simulated data and compared the text-removal results with a popular image editing tool on public domain data, with promising results.
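
A simplified sketch of the spatial-locality idea is given below, on a synthetic two-color 'map' with a masked patch: each masked pixel is filled with the most frequent color among its already-known neighbours, iterating until the mask is empty. The paper's probabilistic formulation is richer than this majority-vote version.

```python
import numpy as np
from collections import Counter

map_img = np.zeros((8, 8), dtype=int)
map_img[:, 4:] = 1                        # two parameter regions (colors 0 and 1)
mask = np.zeros_like(map_img, dtype=bool)
mask[3:5, 3:6] = True                     # pixels hidden by overlaid text/icons

def fill(img, mask):
    img, mask = img.copy(), mask.copy()
    while mask.any():
        for y, x in zip(*np.where(mask)):
            neigh = [img[y + dy, x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy or dx)
                     and 0 <= y + dy < img.shape[0] and 0 <= x + dx < img.shape[1]
                     and not mask[y + dy, x + dx]]
            if neigh:                                      # only fill from known neighbours
                img[y, x] = Counter(neigh).most_common(1)[0][0]
                mask[y, x] = False
    return img

print(fill(map_img, mask))
```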

Keywords: noise, image, GIS, digital map, inpainting

Procedia PDF Downloads 318
24237 Literary Translation Human vs Machine: An Essay about Online Translation

Authors: F. L. Bernardo, R. A. S. Zacarias

Abstract:

The ways to translate are manifold, since the textual genres undergoing translation are diverse. In this essay, we give special attention to the literary genre and to the online translation tool Google Translate (GT), widely used by non-professionals and scholars alike, in order to show evidence of the indispensability of human wit in a good translation. Our study is based on a literature review of prominent authors, with emphasis on translation categories. Also highlighting the issue of polysemy in literary translation, we aim to shed light on the translator's craft and the fallible nature of online translation. To illustrate these principles, the methodology consisted of a comparative analysis involving the original English text of Moll Flanders by Daniel Defoe, its online translation by GT, and a translation into Brazilian Portuguese performed by a human. We proceeded to identify and analyze the degrees of textual equivalence according to the following categories: volume, levels, and order. The results attest to the unsuitability of a translation done by a computer connected to the World Wide Web.

Keywords: Google Translator, human translation, literary translation, Moll Flanders

Procedia PDF Downloads 606
24236 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide number of knowledge-driven domains such as science, education, and policy making. Nowadays, we are daily fueled by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge: a given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings call out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse, and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads between the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017. We build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles, with the goal of reconstructing the temporal evolution of online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext, where we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
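
A minimal sketch of the inter-temporal matching behind phylomemies follows, with two invented time periods and hand-made term clusters: each period's topics are represented as term sets and linked across periods when their Jaccard similarity exceeds an assumed threshold. Gargantext's actual reconstruction is far more complete.

```python
period_1 = {"t1a": {"election", "debate", "candidate"},
            "t1b": {"labour", "law", "reform"}}
period_2 = {"t2a": {"election", "vote", "candidate", "turnout"},
            "t2b": {"strike", "labour", "union"}}

def jaccard(a, b):
    return len(a & b) / len(a | b)

THRESHOLD = 0.2  # assumed matching threshold
links = [(p, q, jaccard(s, t))
         for p, s in period_1.items()
         for q, t in period_2.items()
         if jaccard(s, t) >= THRESHOLD]

for p, q, score in links:
    print(f"{p} --({score:.2f})--> {q}")   # a conceptual lineage across periods
```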

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 155
24235 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. First, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Second, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 321