Search results for: text information retrieval
11279 Coronavirus Academic Paper Sorting Application
Authors: Christina A. van Hal, Xiaoqian Jiang, Luyao Chen, Yan Chu, Robert D. Jolly, Yaobin Lin, Jitian Zhao, Kang Lin Hsieh
Abstract:
The COVID-19 Literature Summary App was created for the primary purpose of enabling academicians and clinicians to quickly sort through the vast array of recent coronavirus publications by topics of interest. Multiple methods of summarizing and sorting the manuscripts were created. A summary page introduces the application's functions and capabilities, while an interactive map provides daily updates on infection, death, and recovery rates. A page with a pivot table allows publication sorting by topic, with an interactive data table that allows sorting topics by columns, as well as the capability to view abstracts. Additionally, publications may be sorted by the medical topics they cover. We used the CORD-19 database to compile lists of publications. The data table can sort binary variables, allowing the user to pick desired publication topics, such as papers that describe COVID-19 symptoms. The application is primarily designed for use by researchers but can be used by anybody who wants a faster and more efficient means of locating papers of interest.
Keywords: COVID-19, literature summary, information retrieval, Snorkel
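As a minimal sketch of the kind of binary-variable filtering the data table performs (the column names below are hypothetical, not the app's actual schema):

```python
import pandas as pd

# Hypothetical subset of the CORD-19-derived table: one row per paper,
# binary columns flagging the topics each paper covers.
papers = pd.DataFrame({
    "title": ["Paper A", "Paper B", "Paper C"],
    "describes_symptoms": [1, 0, 1],
    "reports_mortality": [0, 1, 1],
})

# Pick papers that describe COVID-19 symptoms, as in the interactive data table.
selected = papers[papers["describes_symptoms"] == 1]
print(selected["title"].tolist())  # ['Paper A', 'Paper C']
```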
Procedia PDF Downloads 152
11278 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining
Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri
Abstract:
In designing learning environments, the instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, the information shared on social media like Facebook is used to predict the learning style of a learner. Previous research studies have shown that Facebook data can be used to predict user personality, as users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate the users' personality, predicted from Facebook data, to their learning styles, predicted through questionnaires. For Millennial learners, Facebook has become a primary means for information sharing and interaction with peers. Thus, it can serve as a rich bed for research and direct the design of learning environments. The authors have conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users was collected. The same users also participated in the learning style and personality prediction survey. The Kolb's Learning Style questionnaire and the Big 5 Personality Inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data like personal details, status updates, comments, demographic characteristics, and egocentric network parameters. This data was captured by an application created using a Python program. The data captured from Facebook was subjected to a text analysis process using the Linguistic Inquiry and Word Count dictionary. An analysis of the data collected from the questionnaires reveals individual student personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into an automatic classifier trained by using data mining techniques like rule-based classifiers and decision trees, which helps to predict user personality and learning styles by analysing the common patterns. Rule-based classifiers applied for text analysis help to categorize Facebook data into positive, negative, and neutral. In total, two models were trained: one to predict the personality from Facebook data, another to predict the learning styles from the personalities. The results show that the classifier model has high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.
Keywords: educational data mining, Facebook, learning styles, personality traits
Procedia PDF Downloads 231
11277 Band Characterization and Development of Hyperspectral Indices for Retrieving Chlorophyll Content
Authors: Ramandeep Kaur M. Malhi, Prashant K. Srivastava, G. Sandhya Kiran
Abstract:
Quantitative estimates of foliar biochemicals, namely chlorophyll content (CC), serve as key information for the assessment of plant productivity, stress, and the availability of nutrients. This also plays a critical role in predicting the dynamic response of any vegetation to altering climate conditions. The advent of hyperspectral data with an enhanced number of available wavelengths has increased the possibility of acquiring improved information on CC. Retrieval of CC is extensively carried out through well-known spectral indices derived from hyperspectral data. In the present study, an attempt is made to develop hyperspectral indices by identifying optimum bands for CC estimation in Butea monosperma (Lam.) Taub growing in forests of Shoolpaneshwar Wildlife Sanctuary, Narmada district, Gujarat State, India. 196 narrow bands of EO-1 Hyperion images were screened, and the best optimum wavelengths from the blue, green, red, and near infrared (NIR) regions were identified based on the coefficient of determination (R²) between band reflectance and laboratory-estimated CC. The identified optimum wavelengths were then employed for developing 12 hyperspectral indices. These spectral index values and CC values were then correlated to investigate the relation between laboratory-measured CC and the spectral indices. Band 15 in the blue range, Band 22 in the green range, Band 40 in the red region, and Band 79 in the NIR region were found to be the optimum bands for estimating CC. The optimum-band-based combinations proved to be the most effective indices for quantifying Butea CC, with NDVI and TVI identified as the best (R² > 0.7, p < 0.01). The study demonstrated the significance of band characterization in the development of the best hyperspectral indices for chlorophyll estimation, which can aid in monitoring the vitality of forests.
Keywords: band, characterization, chlorophyll, hyperspectral, indices
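A hedged sketch of how such indices are computed from the optimum band reflectances; the reflectance values are illustrative, and TVI is assumed here in its common sqrt(NDVI + 0.5) form, which may differ from the exact formulation used in the paper:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def tvi(nir, red):
    """Transformed Vegetation Index, assumed here as sqrt(NDVI + 0.5)."""
    return np.sqrt(ndvi(nir, red) + 0.5)

# Illustrative reflectances at the optimum Hyperion bands (red: Band 40, NIR: Band 79).
red_refl = np.array([0.06, 0.08, 0.05])
nir_refl = np.array([0.42, 0.38, 0.45])
print(ndvi(nir_refl, red_refl))
print(tvi(nir_refl, red_refl))
```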
Procedia PDF Downloads 153
11276 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity satellite (SMOS) have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. The comparison of both products (SMOSL3 and AMSRM) was made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at the global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.), while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performances up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.
Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
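A minimal sketch of the three evaluation criteria as they are commonly computed per pixel; the time-series values below are illustrative, not data from the study:

```python
import numpy as np

def evaluate(ssm_satellite, ssm_reference):
    """Correlation (R), root-mean-squared difference (RMSD), and bias
    between a satellite SSM time series and a reference (e.g., SM-DAS-2)."""
    r = np.corrcoef(ssm_satellite, ssm_reference)[0, 1]
    rmsd = np.sqrt(np.mean((ssm_satellite - ssm_reference) ** 2))
    bias = np.mean(ssm_satellite - ssm_reference)
    return r, rmsd, bias

# Illustrative series (m^3/m^3); real inputs are per-pixel time series.
ref = np.array([0.20, 0.25, 0.22, 0.30])
sat = np.array([0.18, 0.27, 0.20, 0.33])
print(evaluate(sat, ref))
```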
Procedia PDF Downloads 357
11275 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration; here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
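A hedged, minimal sketch of truncated SVD regularization for a discretized ill-posed inversion g = A f; the forward operator, noise level, and truncation level below are toy placeholders, not the network's actual kernel or parameter triple:

```python
import numpy as np

def tsvd_solve(A, g, k):
    """Solve A f = g by truncated SVD: keep only the k largest singular values,
    suppressing the noise-amplifying small-singular-value components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # truncation acts as the regularization
    return Vt.T @ (s_inv * (U.T @ g))

# Toy ill-conditioned forward operator and noisy optical data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.1, 1.0, 20), 10, increasing=True)
f_true = rng.random(10)
g = A @ f_true + 0.01 * rng.standard_normal(20)
print(tsvd_solve(A, g, k=6))
```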
Procedia PDF Downloads 343
11274 Archaeological Study of Statues of King Thutmosis III from Luxor
Authors: Mahmoud Abualsoud
Abstract:
The era of Thutmosis III represents a transitional period between Thutmoside art and the Amarna period, so we intend to show that it serves as the cradle of Amarna art. The study will examine the statues of King Thutmose III that were discovered in Luxor by an Egyptian mission. These statues have been transferred to the Conservation Center of the Grand Egyptian Museum (GEM) to be conserved and made ready for display at the new museum (the project of the century). We focus on three statues, chosen because they relate to different years of the king's reign; all were made of granite. The first is a kneeling statue representing the god Amun and showing King Thutmose III offering to the goddess Hathor. The second is decorated with King Thutmose III wearing the red crown, between the goddess Hathor and the royal wife, Nefertari. The third shows the king offering NW vessels and bread to the god Seker. Each statue is divided into registers containing a description and decorated with scenes of the king presenting offerings to gods. The proposed study will focus on the development which happened sequentially according to differences that occur in each statue. We will use comparative research to determine the workshops of these statues, whether one or several, and what the distinguishing features of each one are. We will examine what innovations the artisans added to royal art. The description and the texts will be translated with linguistic comments. This research focuses on text analyses and technology; paleographic information found on these objects includes the names and titles of the king. The study aims to create a manual that may help in dating the artwork of Thutmosis III. This research will be beneficial and useful for heritage and ancient civilizations, particularly when we talk about opening museums like the Grand Egyptian Museum, which will exhibit a collection of statues. Indeed, this kind of study will open a new path toward knowing how to identify these collections and how to exhibit them in a way commensurate with the nature of ancient Egyptian history and heritage.
Keywords: archaeological study, Giza, new kingdom, statues, royal art
Procedia PDF Downloads 70
11273 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution
Authors: Haiyan Wu, Ying Liu, Shaoyun Shi
Abstract:
Authorship attribution is the task of extracting features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or some transparent machine learning methods gives a portrait of the authors' writing style, but these methods do not capture syntactic (e.g., dependency relationship) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information by neural networks. However, few works take them together. Moreover, predictions by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize the statistical style and content features but also take advantage of both syntactic and semantic features. Different from an end-to-end neural model, feature selection and prediction are two separate steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give the prediction and an understandable representation of writing style. Experiments show that our extracted features can improve on the state-of-the-art methods on three benchmark datasets.
Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction
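A hedged sketch of the two-step idea (select features, then classify with an interpretable model); note that it substitutes a simple chi-squared filter for the paper's attentive n-gram network, and the corpus is a toy placeholder:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["the old man and the sea went out", "it was the best of times indeed"]
authors = ["hemingway", "dickens"]  # toy labels

# Step 1: extract character n-gram style features and keep the most useful ones.
# Step 2: logistic regression gives per-feature weights, i.e. a readable style portrait.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    SelectKBest(chi2, k=20),
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, authors)
print(clf.predict(["the sea was calm that morning"]))
```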
Procedia PDF Downloads 136
11272 Towards A New Maturity Model for Information System
Authors: Ossama Matrane
Abstract:
The Information System has become a strategic lever for enterprises. It contributes effectively to aligning business processes with the strategies of enterprises and is regarded as a driver of productivity and effectiveness. Many organizations are therefore currently involved in implementing sustainable Information Systems, and a large number of studies have been conducted over the last decade in order to define the success factors of information systems. Thus, many studies on maturity models have been carried out, some of which concern maturity models for the Information System. In this article, we report on the development of a maturity model specifically designed for information systems. This model is built on three components, derived from the Maturity Model for Information Security Management, the OPM3 Project Management Maturity Model, and the processes of COBIT for IT governance. Our proposed model defines three maturity stages through which organizations can build a strong Information System to support their objectives. It provides a very practical structure with which to assess and improve Information System implementation.
Keywords: information system, maturity models, information security management, OPM3, IT governance
Procedia PDF Downloads 447
11271 A Teaching Method for Improving Sentence Fluency in Writing
Authors: Manssour Habbash, Srinivasa Rao Idapalapati
Abstract:
Although writing is a multifaceted task, teaching writing is a demanding task, mainly for two reasons: grammar and syntax. This article provides a method of teaching writing that was found to be effective in improving students' academic writing composition skills. The article explains the concepts of 'guided discovery' and 'guided construction', upon which the method of teaching writing is grounded and developed. After a brief commentary on what the core of a text could mean, the article presents an exposition of understanding and identifying the core and building upon it, demonstrating the way a teacher can make use of these concepts to improve the writing skills of their students. The method is an adaptation of the grammar-translation method, improvised to suit a student-centered classroom environment. An intervention of teaching writing through this method was tried out with positive outcomes in a formal classroom research setup, and in view of the content's close relation to classroom practice and its usefulness to practicing teachers, the process and the findings are presented in narrative form, along with the results in tabular form.
Keywords: core of a text, guided construction, guided discovery, theme of a text
Procedia PDF Downloads 380
11270 Visualisation in Health Communication: Taking Weibo Interaction in COVID-19 as an Example
Authors: Zicheng Zhang, Linli Zhang
Abstract:
As China's biggest social media platform, Weibo has taken on essential health communication responsibilities during the pandemic. This research takes 105 posters from 15 health-related official Weibo accounts as its objects of analysis to explore COVID-19 health information communication and visualisation. First, the interaction between the audiences and Weibo, including forwarding, comments, and likes, is statistically analysed. The comments about the information design are extracted manually, and sentiment analysis is then carried out to gauge audiences' views about the posters' design. Forwarding and comments are quantified as an attention index, used as a reference alongside the number of likes. In addition, this study designed an evaluation scale based on the standards of the Health Literacy Resource by the Centers for Medicare & Medicaid Services (US), and designers scored all selected posters one by one. Finally, combining the data of the two parts, it is concluded that: 1. to a certain extent, people think that the posters do not deliver substantive and practical information; 2. non-knowledge posters (i.e., cartoon posters) gained more forwarding and likes, such as the Go, Wuhan poster; 3. the COVID-19 posters analysed are still mainly picture-oriented, mostly encouraging people to overcome difficulties; 4. posters for pandemic prevention usually contain more text and fewer illustrations and do not clearly reflect cultural differences. In conclusion, health communication usually involves a lot of professional knowledge, so visualising that knowledge in an accessible way for the general public is challenging. The relevant posters still have the problems of a lack of effective communication, superficial design, and insufficient content accessibility.
Keywords: weibo, visualisation, covid posters, poster design
Procedia PDF Downloads 127
11269 'Wandering Uterus': An Analogy of Perception of Women in Hippocratic Corpus and Post-Modern Times
Authors: Ankita Sharma
Abstract:
The study proposes to review the perception of women in the Classical Age (500-336 BC), when Greek philosophy was in bloom. It was observed that women had very few rights and were still under the control of men. One of the possible reasons for this exclusion was woman's biology, which had a huge influence on her being seen as inferior to men. The text 'Hippocratic Corpus' focuses on the biological construct of the female body in classical Greek science, which perpetuated the idea of women as second-class citizens who were considered inherently weaker than men. The research highlights the significance of the text that was used to encourage women of that time to get married and produce children, and how the perception remains much the same today. The Greek belief in the need for confinement and control of the 'wandering uterus' fostered an understanding of men as superior. The pivotal emphasis of this research is on women and their bodies, which are depicted in a misogynistic way that paved the way for Hippocratic writers to influence society's attitude towards women in their writings. It is intended to draw attention to the prevailing cultural assumptions and preconceived notions about female anatomy that had a pervasive influence in the following centuries, with their roots in ancient science.
Keywords: classical Greek theory, women, wandering womb, modern ideology
Procedia PDF Downloads 194
11268 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography
Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway
Abstract:
This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of which equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, with the embedding done in the cover image's color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the peak signal-to-noise ratio (PSNR) of the stego image, and the embedding algorithm's CPU time. Experimental results confirm that the proposed technique is more secure compared with the other traditional techniques.
Keywords: steganography, stego, LSB, crop
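A minimal, hedged sketch of the LSB step on a single crop; the crop contents, message, and channel layout here are placeholders, not the paper's secret coordinates or sequence:

```python
import numpy as np

def embed_lsb(crop, message):
    """Embed message bytes into the least significant bits of a uint8 crop."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    flat = crop.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("crop too small for this message section")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite only the LSB
    return flat.reshape(crop.shape)

# Toy 8x8 RGB crop; a real crop comes from secret coordinates in the cover image.
crop = np.random.default_rng(1).integers(0, 256, (8, 8, 3), dtype=np.uint8)
stego_crop = embed_lsb(crop, "hi")
print(int(np.abs(stego_crop.astype(int) - crop.astype(int)).max()))  # at most 1
```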
Procedia PDF Downloads 269
11267 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics in opinion mining research, merged with behavior analysis for affiliation determination in text, constitutes the subject of this paper. This study aims to classify the text in news/blogs as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimensions of the feature vector are reduced based on 7 feature selection algorithms. The results show that Decision Tree, Rule Induction, and M5 Rule classifiers, when used with SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature 'function', as an aggregate feature of the linguistic category, is obtained as the most differentiating feature among the 68 features, achieving 81% accuracy by itself in classifying articles as either Republican or Democrat.
Keywords: feature selection, LIWC, machine learning, politics
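A hedged sketch of the reduced-feature pipeline; scikit-learn's mutual-information filter stands in for the IGR selector used in the paper, and the feature matrix is a random placeholder for the LIWC counts:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((40, 68))        # 40 articles x 68 LIWC-style features (toy data)
y = rng.integers(0, 2, 40)      # 0 = Democrat, 1 = Republican (toy labels)

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),  # reduce 68 features to 10
    DecisionTreeClassifier(max_depth=5),
)
clf.fit(X, y)
print(clf.score(X, y))
```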
Procedia PDF Downloads 382
11266 The Processing of Implicit Stereotypes in Contexts of Reading, Using Eye-Tracking and Self-Paced Reading Tasks
Authors: Magali Mari, Misha Muller
Abstract:
The present study's objectives were to determine how diverse implicit stereotypes affect the processing of written information and linguistic inferential processes, such as presupposition accommodation. When reading a text, one constructs a representation of the described situation, which is then updated according to new outputs and based on stereotypes inscribed within society. If the new output contradicts stereotypical expectations, the representation must be corrected, resulting in longer reading times. A similar process occurs in cases of linguistic inferential processes like presupposition accommodation. Presupposition accommodation is traditionally regarded as fast, automatic processing of background information (e.g., 'Mary stopped eating meat' is quickly processed as Mary used to eat meat). However, very few accounts have investigated whether this process is likely to be influenced by domains of social cognition, such as implicit stereotypes. To study the effects of implicit stereotypes on presupposition accommodation, adults were recorded while they read sentences in French, combining two methods: an eye-tracking task and a classic self-paced reading task (where participants read sentence segments at their own pace by pressing a computer key). In one condition, presuppositions were activated with the French definite articles 'le/la/les', whereas in the other condition, the French indefinite articles 'un/une/des' were used, triggering no presupposition. Using a definite article presupposes that the object has already been mentioned and is thus part of background information, whereas using an indefinite article is understood as the introduction of new information. Two types of stereotypes were under examination in order to enlarge the scope of stereotypes traditionally analyzed. Study 1 investigated gender stereotypes linked to professional occupations to replicate previous findings. Study 2 focused on nationality-related stereotypes (e.g., 'the French are seducers' versus 'the Japanese are seducers') to determine if the effects of implicit stereotypes on reading are generalizable to other types of implicit stereotypes. The results show that reading is influenced by the two types of implicit stereotypes; in the two studies, the reading pace slowed down when a counter-stereotype was presented. However, presupposition accommodation did not affect participants' processing of information. Altogether, these results show that (a) implicit stereotypes affect the processing of written information, regardless of the type of stereotype presented, and (b) implicit stereotypes prevail over the superficial linguistic treatment of presuppositions, which suggests faster processing of social information compared to linguistic information.
Keywords: eye-tracking, implicit stereotypes, reading, social cognition
Procedia PDF Downloads 198
11265 Comics Scanlation and Publishing Houses Translation
Authors: Sharifa Alshahrani
Abstract:
Comics are a multimodal text wherein meaning is created by taking in all modes of expression at once. They use two different semiotic modes, the verbal and the visual, together to make meaning, and these different semiotic modes can be socially and culturally shaped to give meaning. Therefore, comics translation cannot treat comics as a monomodal text by translating only the verbal mode inside or outside the speech balloons, as the cultural differences are encoded in the visual mode as well. With the development of the internet and editing software, comics translation is no longer confined to publishing houses and official translation, as scanlation, or fan translation, has taken the initiative in translating comics out of emotional attachment to the culture and genre. Scanlation is carried out by volunteering fans who translate out of passion. However, quality is one of the debatable issues relating to scanlation and fan translation. This study will investigate how the dynamic multimodal relationship in comics is exploited and interpreted in translation by exploring the translation strategies and procedures adopted by publishing houses and scanlation in interpreting comics into Arabic, using three analytical frameworks: a cultural references model, a multimodal relation model, and translation strategies and procedures models.
Keywords: comics, multimodality, translation, scanlation
Procedia PDF Downloads 212
11264 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches
Authors: Mariam Matiashvili
Abstract:
Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalization of opinions requires the use of extraordinary syntactic-pragmatic structural quantities: arguments that add credibility to the statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing what elements make up an argumentative text in a particular language helps the users of that language improve their skills. Also, natural language processing (NLP) has become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. The research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of the argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help to identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates; consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician). The research uses the following approaches to identify and analyze the argumentative structures: (1) Lexical Classification and Analysis, identifying lexical items that are relevant in the process of creating argumentative texts and building a lexicon of argumentation (groups of words gathered from a semantic point of view); (2) Grammatical Analysis and Classification, the grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; (3) Argumentation Schemas, describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is 'Argument from Analogy', the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon
Procedia PDF Downloads 70
11263 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method, and the fact that only one BIM software package with its respective plugin is used for the scan-to-BIM modeling process, mean that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
Procedia PDF Downloads 142
11262 Death of the Author and Birth of the Adapter in a Literary Work
Authors: Slwa Al-Hammad
Abstract:
Adaptation studies have been closely aligned with translation studies, as both deal with the process of rendering meaning from one culture to another. The two disciplines are related to each other, but their theories are still being developed. This research aims to fill this gap and provide a contribution to the growing discipline of adaptation studies through a theoretical perspective, while investigating how different cultural interpretations of adaptation influence the final literary product. The research focuses on the theoretical concepts of Barthes's death of the author and Benjamin's afterlife of the text in translation, which are believed to lead to the birth of the adapter in a literary work. That is, in adaptation, the 'death' of the author allows for the 'birth' of the adapter, offering them all the creative possibilities of authorship. It also explores the differences between the meanings of adaptation in the West and the Arab world through the analysis of adapted texts in Arabic initially deriving from the European and American literature of the 19th and 20th centuries. The methodology of this thesis is based upon qualitative literary analysis, in which original and adapted works are compared and contrasted, with the additional insights of literary and adaptation theories and prior scholarship. The main works discussed are the Arabic adaptations of William Faulkner's novels. The analysis is guided by theories of adaptation studies to help explain the concepts of relocating, recreating, and rewriting in the process of adaptation. It draws on scholarship on adaptations to inquire into the status of the adapted texts in relation to the original texts. These theories also show that adaptation is the process used to transfer a source text into an adapted text, not some other analytical practice. Through the textual analysis, the concepts of the death of the author and the birth of the adapter are illustrated, as are the roles of the adapter, the task of rendering works for a different culture, and the understanding of adaptation and Arabization in Arabic literature.
Keywords: adaptation, Arabization, authorship, recreating, relocating
Procedia PDF Downloads 136
11261 A Physical Theory of Information vs. a Mathematical Theory of Communication
Authors: Manouchehr Amiri
Abstract:
This article introduces a general notion of physical bit information that is compatible with the basics of quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical information leads to the Binary Data Matrix model (BDM), which predicts the basic results of quantum mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic, information conservation, and Landauer's principles is investigated. After deriving the 'Bit Information principle' as a consequence of the BDM, the fundamental equations of Planck, De Broglie, and Bekenstein, as well as the mass-energy equivalence, are derived.
Keywords: physical theory of information, binary data matrix model, Shannon information theory, bit information principle
Procedia PDF Downloads 171
11260 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. These data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text that is impressed onto paper, providing evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on implementing watermarking in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
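A hedged sketch of one common DWT-DCT embedding recipe (additive embedding in the DCT of the wavelet approximation subband); the subband choice, wavelet, and strength alpha are assumptions for illustration, not necessarily the authors' exact scheme:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_watermark(cover, watermark, alpha=0.05):
    """Additively embed a small watermark into the DCT of the LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    coeffs = dctn(LL, norm="ortho")
    h, w = watermark.shape
    coeffs[:h, :w] += alpha * watermark          # additive spread in low frequencies
    LL_marked = idctn(coeffs, norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

cover = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
wm = np.random.default_rng(3).integers(0, 2, (8, 8)).astype(float)
stego = embed_watermark(cover, wm)
print(float(np.abs(stego - cover).mean()))       # small perturbation
```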
Procedia PDF Downloads 422
11259 Anaphora and Cataphora on the Selected State of the City Addresses of the Mayor of Dapitan
Authors: Mark Herman Sumagang Potoy
Abstract:
A State of the City Address (SOCA) is a speech, modelled after the State of the Nation Address, given not because it is mandated by law but usually as a matter of practice or tradition, delivered before the chief executive's constituents. Through this, the general public is made to know the performance of the local government unit and its agenda for the coming year. Therefore, it is imperative for SOCAs to convey their message clearly and carry out the myriad function of enlightening their readers, which can be achieved through the proper use of reference. Anaphora and cataphora are the two major types of reference; the former refers back to something that has already been mentioned, while the latter points forward to something which is yet to be said. This paper seeks to identify the types of reference employed in the SOCAs from 2014 to 2016 of Hon. Rosalina Garcia Jalosjos, Mayor of Dapitan City, and to look into how the references contribute to the clarity of the message of the text. The qualitative method of research is used in this study through an in-depth analysis of the corpus. As soon as copies of the SOCAs were secured from the Office of the City Mayor, they were analyzed using a documentary technique: categorizing the types of reference as anaphora or cataphora, counting each of these types, and describing the implications of the dominant types used in the addresses. After a thorough analysis, it is found that the two reference types, anaphora and cataphora, are both employed in the three SOCAs, the former being used more frequently than the latter, accounting for 80% and 20% of actual usage, respectively. Moreover, the use of anaphora and cataphora in the three addresses helps convey the message clearly, because they primarily serve to avoid the repetition of the same element in the text, especially when there is no need to emphasize a point. Finally, it is recommended that writers of State of the City Addresses have a broad knowledge of how reference should be used and the functions it takes in a text, since this is a vital tool for transmitting a message clearly. Moreover, English teachers should explicitly teach the proper usage of anaphora and cataphora, as instruments for developing cohesion in written discourse, to enable students to write not only with sense but also with fluidity in tying utterances together.
Keywords: anaphora, cataphora, reference, State of the City Address
Procedia PDF Downloads 192
11258 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification
Authors: Zhaoxin Luo, Michael Zhu
Abstract:
In natural languages, there are always complex semantic hierarchies. Obtaining feature representations based on these complex semantic hierarchies becomes the key to the success of a model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that uses only a single-layer latent indicator cannot capture the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training it hierarchically, our model solves the computational problem of stacking RNN layers and makes it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short-text classification problem but also achieves state-of-the-art results on the Chinese long-text classification problem.
Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese
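A drastically simplified, hedged sketch of stacking RNN layers for document classification; the latent-indicator machinery and hierarchical EM training that are the paper's actual contribution are omitted here:

```python
import torch
import torch.nn as nn

class StackedRNNClassifier(nn.Module):
    """Multi-layer GRU over token embeddings; the top hidden state feeds a classifier."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_layers, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))
        return self.out(h[-1])           # hidden state of the top layer

model = StackedRNNClassifier(vocab_size=5000, embed_dim=64, hidden_dim=128,
                             num_layers=3, num_classes=10)
logits = model(torch.randint(0, 5000, (2, 40)))  # batch of 2 docs, 40 tokens each
print(logits.shape)                               # torch.Size([2, 10])
```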
Procedia PDF Downloads 68
11257 Semantic Based Analysis in Complaint Management System with Analytics
Authors: Francis Alterado, Jennifer Enriquez
Abstract:
Semantic Based Analysis in Complaint Management System with Analytics is an enhanced tool through which clients submit complaints, as well as a mechanism for Palawan Polytechnic College to gather, process, and monitor the status of these complaints. The study includes a mobile application that serves as a remote communication facility between the students and the school management regarding the issues encountered by the students and the resolution of every complaint received. In processing the complaints, text mining and clustering algorithms were utilized. Every module of the system was tested and, based on the results, was 100% free from error before integration was done. System testing was also done by checking the expected functionality of the system, which was 100% functional. The system was tested by 10 students forwarding complaints to 10 departments. Based on the results, the students were able to submit complaints, the system was able to process them accordingly by identifying the department for which each complaint was intended, and the concerned department was able to give feedback on the complaint received to the student. With this, the system gained a 4.7 rating, which means Excellent.
Keywords: technology adoption, emerging technology, issues challenges, algorithm, text mining, mobile technology
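A hedged sketch of the text-mining step (clustering complaints so they can be routed); TF-IDF plus k-means is a plausible stand-in, since the abstract does not name the exact algorithms used:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

complaints = [
    "broken projector in room 101",
    "enrollment portal keeps crashing",
    "projector bulb needs replacement",
    "cannot log in to the enrollment site",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(complaints)

# Two clusters here; a real deployment would use one per department.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # complaints about the same issue share a cluster id
```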
Procedia PDF Downloads 199
11256 The Effect of Supply Chain Integration on Information Sharing
Authors: Khlif Hamadi
Abstract:
Supply chain integration has become a potentially valuable way of securing shared information and improving supply chain performance, since competition is no longer between organizations but among supply chains. This research conceptualizes and develops three dimensions of supply chain integration (integration with customers, integration with suppliers, and inter-organizational integration) and tests the relationships between supply chain integration, information sharing, and supply chain performance. Furthermore, it considers four types of information sharing, namely information sharing with customers, information sharing with suppliers, inter-functional information sharing, and intra-organizational information sharing, and four constructs of supply chain performance, representing costs, asset utilization, supply chain reliability, and supply chain flexibility and responsiveness. The theoretical and practical implications of the study, as well as directions for future research, are discussed.
Keywords: supply chain integration, supply chain management, information sharing, supply chain performance
Procedia PDF Downloads 261
11255 Gastric Foreign Bodies in Dogs
Authors: Naglaa A. Abd Elkader, Haithem A. Farghali
Abstract:
The present study was carried out on fifteen clinical cases of dogs of different breeds admitted to the surgical clinic of veterinary medicine with different symptoms (acute vomiting, hematemesis, and anorexia). The diagnostic workup included plain radiography and endoscopic examination. Treatment included surgical intervention and endoscopic retrieval, followed by medicinal treatment. This study aimed at the detection of different foreign bodies and their removal by the most suitable method according to the type of foreign body.
Keywords: stomach, endoscopy, foreign bodies, dogs
Procedia PDF Downloads 417
11254 Compilation and Statistical Analysis of an Arabic-English Legal Corpus in Sketch Engine
Authors: C. Brierley, H. El-Farahaty, A. Farhan
Abstract:
The Leeds Parallel Corpus of Arabic-English Constitutions is a parallel corpus for the Arabic legal domain. Analysis of legal language via corpus linguistics techniques is an important development. In legal proceedings, a corpus-based approach to disambiguating meaning is set to replace the dictionary as an interpretative tool, and legal scholarship in the States is now attuned to the potential for text analytics over vast quantities of text-based legal material, following the business and medical industries. This trend is reflected in Europe: the interdisciplinary research group in Computer Assisted Legal Linguistics mines big data collections of legal and non-legal texts to analyse legal interpretations, legal discourse, the comprehensibility of legal texts, conflict resolution, and linguistic human rights. This paper focuses on 'dignity' as an important aspect of the overarching concept of human rights in current constitutions across the Arab world. We have compiled a parallel, Arabic-English raw text corpus (169,861 Arabic words and 205,893 English words) from reputable websites such as the World Intellectual Property Organisation and CONSTITUTE, and uploaded and queried our corpus in Sketch Engine. Our most challenging task was sentence-level alignment of the Arabic-English data. This entailed manual intervention to ensure correspondence on a one-to-many basis, since Arabic sentences differ from English ones in length and punctuation. We have searched for morphological variants of 'dignity' (كرامة, karāma) in the Arabic data and inspected their English translation equivalents. The term occurs most frequently in the Sudanese constitution (10 instances), and not at all in the constitution of Palestine. Its most frequent collocate, determined via the logDice statistic in Sketch Engine, is 'human', as in 'human dignity'.
Keywords: Arabic constitution, corpus-based legal linguistics, human rights, parallel Arabic-English legal corpora
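For reference, a hedged sketch of the logDice collocation statistic in its standard form, logDice = 14 + log2(2·f(x,y) / (f(x) + f(y))); the counts below are illustrative, not figures from this corpus:

```python
import math

def log_dice(f_xy, f_x, f_y):
    """logDice collocation score: 14 + log2(2*f(x,y) / (f(x) + f(y)))."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

# Illustrative counts: 'dignity' co-occurring with 'human' in a corpus.
print(log_dice(f_xy=40, f_x=55, f_y=380))  # higher score = stronger collocation
```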
Procedia PDF Downloads 183
11253 Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching
Authors: Weitao Lin
Abstract:
To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs a regularized abstraction. It proposes a Chinese event detection technique based on dependency parsing and rule matching. The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of the dependency-syntactic analysis, filters out the utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method.
Keywords: natural language processing, Chinese event detection, rules matching, dependency parsing
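A hedged sketch of the parse-then-match idea using spaCy's Chinese pipeline; the trigger rule below (a verb governing a direct object) is a simplified stand-in for the paper's actual rule set, and the dependency labels checked are assumptions:

```python
import spacy

# Requires: python -m spacy download zh_core_web_sm
nlp = spacy.load("zh_core_web_sm")

def detect_events(text):
    """Dependency-parse the utterance, then match verb + object patterns."""
    doc = nlp(text)
    events = []
    for token in doc:
        if token.pos_ == "VERB":
            objs = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            if objs:  # a verb governing an object is kept as an event candidate
                events.append((token.text, objs[0].text))
    return events

print(detect_events("公司昨天发布了新产品。"))  # e.g. [('发布', '产品')]
```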
Procedia PDF Downloads 141
11252 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities– Case Study of Offshore Riser Integrity
Authors: Vahid Ebrahimipour
Abstract:
Word representation and the context meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operational experience during the production system life cycle. Context meaning representation is highly dependent upon word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for lexical semantic analysis and context meaning representation of maintenance activity in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports, to facilitate the translation, interpretation, and conversion of human-readable interpretations into computer-readable representations, with less heterogeneity and ambiguity. The methodology will enable users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure from which to obtain a generic context model that can be utilized during the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis exercises causality-driven logic on the keywords' senses to divulge the structural and meaning dependency relationships between the words in a context. The output is a word-contextualized representation of maintenance activity accommodating computer-based representation and inference using OWL/RDF.
Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation
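A hedged sketch of the first, co-occurrence-based step (count word co-occurrence within reports, then cluster words by the similarity of their co-occurrence profiles); the toy reports and cluster count are placeholders:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

reports = [
    "replace riser clamp inspect corrosion",
    "inspect riser corrosion record thickness",
    "replace pump seal check vibration",
]

vec = CountVectorizer()
X = vec.fit_transform(reports).toarray()
cooc = X.T @ X                        # word-by-word co-occurrence counts
np.fill_diagonal(cooc, 0)

# Cluster words by the similarity of their co-occurrence profiles.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(cooc)
for word, lab in zip(vec.get_feature_names_out(), labels):
    print(lab, word)
```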
Procedia PDF Downloads 105
11251 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients
Authors: Lise Paesen, Marielle Leijten
Abstract:
People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus allowing us to avoid a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images, belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once the set was validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and contains only elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥ 6 years), and low frequency were the criteria for the complex task, which uses 6 images (2 animals, 1 human, 2 objects, and 1 vehicle). The data were collected with the keystroke logging programme Inputlog. Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images led to similar or even faster reaction times compared to the original images; all the images were therefore used in the main study. The produced texts of the description tasks were significantly longer compared to previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when taking personal typing abilities (obtained in the typing task) into account.
Keywords: Alzheimer's disease, experimental design, language decline, writing process
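A hedged sketch of a standard pause-and-burst analysis over keystroke timestamps; the 2000 ms pause threshold is a common convention in writing research, not necessarily the one used in this study:

```python
import numpy as np

def burst_stats(timestamps_ms, pause_threshold_ms=2000):
    """Split a keystroke log into production bursts at long pauses and
    return mean burst length (in keystrokes) and the pause durations."""
    gaps = np.diff(timestamps_ms)
    pause_idx = np.where(gaps >= pause_threshold_ms)[0]
    bursts = np.split(np.arange(len(timestamps_ms)), pause_idx + 1)
    mean_burst_len = np.mean([len(b) for b in bursts])
    return mean_burst_len, gaps[pause_idx]

# Toy log: keystroke times in milliseconds.
log = np.array([0, 150, 320, 480, 3000, 3150, 3300, 7000, 7180])
print(burst_stats(log))  # (3.0, array([2520, 3700]))
```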
Procedia PDF Downloads 274
11250 1/Sigma Term Weighting Scheme for Sentiment Analysis
Authors: Hanan Alshaher, Jinsheng Xu
Abstract:
Large amounts of data on the web can provide valuable information. For example, product reviews help business owners measure customer satisfaction. Sentiment analysis classifies texts into two polarities: positive and negative. This paper examines movie reviews and tweets using a new term weighting scheme, called one-over-sigma (1/sigma), on benchmark datasets for sentiment classification. The proposed method aims to improve the performance of sentiment classification. The results show that 1/sigma is more accurate than the popular term weighting schemes. In order to verify whether the entropy reflects the discriminating power of terms, we report a comparison of entropy values for different term weighting schemes.
Keywords: 1/sigma, natural language processing, sentiment analysis, term weighting scheme, text classification
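A hedged sketch of the entropy check mentioned in the closing sentence; the 1/sigma formula itself is not given in this abstract, so only the entropy of a term's document distribution, a common proxy for discriminating power, is shown with toy counts:

```python
import numpy as np

def term_entropy(tf_over_docs):
    """Shannon entropy of a term's normalized frequency distribution over
    documents; lower entropy means the term concentrates in fewer documents
    and is therefore more discriminating."""
    p = tf_over_docs / tf_over_docs.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

spread_term = np.array([2.0, 2.0, 2.0, 2.0, 2.0, 2.0])   # appears everywhere
focused_term = np.array([0.0, 0.0, 9.0, 0.0, 3.0, 0.0])  # concentrated

print(term_entropy(spread_term))   # ~2.58 bits: weak discriminator
print(term_entropy(focused_term))  # ~0.81 bits: strong discriminator
```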
Procedia PDF Downloads 202