Search results for: latent semantic analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28314

28284 Fingerprints on Ballistics after Shooting

Authors: Narong Kulnides

Abstract:

This research examined fingerprints on ballistic evidence after shooting. The two objectives were: (1) to study how long latent fingerprints persist on .38, .45, 9 mm, and .223 cartridge cases after shooting, and (2) to compare the effectiveness of latent fingerprint detection with Black Powder, Super Glue, Perma Blue, and Gun Bluing. The appearance of latent fingerprints was studied on .38, .45, 9 mm, and .223 cartridge cases before and after shooting using Black Powder, Super Glue, Perma Blue, and Gun Bluing. The detection times were 3 minutes and 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 78, and 84 hours, respectively. From the study, it can be concluded that: (1) before shooting, latent fingerprints on .38, .45, 9 mm, and .223 cartridge cases could be detected with Black Powder, Super Glue, Perma Blue, and Gun Bluing at all detection times; (2) after shooting, latent fingerprints on .38, .45, 9 mm, and .223 cartridge cases could not be detected with Black Powder or Super Glue, whereas latent fingerprints on .38, .45, and 9 mm cartridge cases were detected with Perma Blue and Gun Bluing 100% of the time, and on .223 cartridge cases 40% and 46.67% of the time, respectively.

Keywords: ballistic, fingerprint, shooting, detection times

Procedia PDF Downloads 418
28283 A Secure System for Handling Information from Heterogeneous Sources

Authors: Shoohira Aftab, Hammad Afzal

Abstract:

Information integration is a well-known procedure for providing a consolidated view of heterogeneous information sources. It not only enables better statistical analysis of information but also allows users to query without any knowledge of the underlying heterogeneous sources. The problem of providing a consolidated view can be handled using semantic data (information stored so that it is machine-understandable and can be integrated without manual human intervention). However, integrating information with semantic web technology without enforcing any access management raises privacy and confidentiality concerns. In this research, we have designed and developed a framework that allows information from heterogeneous formats to be consolidated, thus resolving the issue of interoperability. We have also devised an access control system for defining explicit privacy constraints. We designed and applied our framework to both semantic and non-semantic data from heterogeneous sources. Our approach is validated using scenario-based testing.
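
A minimal sketch of the idea (not the authors' framework): two heterogeneous sources are consolidated into one RDF graph with rdflib, and query results are filtered through a simple access-control check. The namespace, predicates, sample data, and policy are illustrative assumptions.

```python
# Consolidate a semantic source (Turtle) and a non-semantic source (CSV) into
# one RDF graph, then apply a toy access-control rule when querying.
import csv, io
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/")
g = Graph()

# Source 1: already-semantic data.
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice a ex:Employee ; ex:salary 52000 .
""", format="turtle")

# Source 2: non-semantic CSV data lifted into the same vocabulary.
for row in csv.DictReader(io.StringIO("name,salary\nbob,48000\n")):
    person = URIRef(EX[row["name"]])
    g.add((person, RDF.type, EX.Employee))
    g.add((person, EX.salary, Literal(int(row["salary"]))))

# Toy access-control policy: only the "hr" role may see salary values.
def allowed(role, predicate):
    return predicate != EX.salary or role == "hr"

def query(role):
    for s, p, o in g.triples((None, EX.salary, None)):
        yield (s, o) if allowed(role, p) else (s, "ACCESS DENIED")

print(list(query("hr")))
print(list(query("guest")))
```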

Keywords: information integration, semantic data, interoperability, security, access control system

Procedia PDF Downloads 357
28282 A Network of Nouns and Their Features: A Neurocomputational Study

Authors: Skiker Kaoutar, Mounir Maouene

Abstract:

Neuroimaging studies indicate that a large fronto-parieto-temporal network supports nouns and their features, with some areas storing semantic knowledge (visual, auditory, olfactory, gustatory, etc.), others storing lexical representations, and others implicated in general semantic processing. However, it is not well understood how this fronto-parieto-temporal network is modulated by different semantic tasks and by different semantic relations between nouns. In this study, we combine a behavioral semantic network, functional MRI studies involving object-related nouns, and brain network studies to explain how different semantic tasks and different semantic relations between nouns can modulate activity within the brain network of nouns and their features. We first describe how nouns and their features form a large-scale brain network. To this end, we examine the connectivities between areas recruited during the processing of nouns to determine which configurations of interacting areas are possible. We can thus identify whether, for example, brain areas that store semantic knowledge communicate via functional/structural links with areas that store lexical representations. Second, we examine how this network is modulated by different semantic tasks involving nouns, and finally, we examine how category-specific activation may result from the semantic relations among nouns. The results indicate that the brain network of nouns and their features is highly flexible and strongly modulated by different semantic tasks and semantic relations. Ultimately, this study can serve as a guide to help neuroscientists interpret the pattern of fMRI activations detected in the semantic processing of nouns; in particular, it can help interpret the category-specific activations observed extensively in a large number of neuroimaging and clinical studies.

Keywords: nouns, features, network, category specificity

Procedia PDF Downloads 521
28281 Semantic Data Schema Recognition

Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia

Abstract:

The work covered in this paper aims at assisting users in their data quality approach. The goal is to better extract, mix, interpret, and reuse data. It deals with the semantic schema recognition of a data source, which enables the extraction of data semantics from all the available information, including the data and the metadata. It consists, firstly, of categorizing the data by assigning each element to a category and possibly a sub-category, and secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. The links detected between columns offer a better understanding of the source and of the options for correcting the data. This approach allows automatic detection of a large number of syntactic and semantic anomalies.
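
An illustrative sketch of the categorization step described above (not the authors' system): each column of a data source is assigned a category from its value patterns, and a simple inter-column dependency is checked. The categories, regular expressions, and sample data are assumptions.

```python
# Assign a semantic category to each column and test a functional dependency.
import re
import pandas as pd

PATTERNS = {
    "email":   re.compile(r"^[^@\s]+@[^@\s]+\.[a-z]{2,}$", re.I),
    "date":    re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "integer": re.compile(r"^-?\d+$"),
}

def categorize(values):
    """Return the category matching the majority of non-null values."""
    values = [str(v) for v in values if pd.notna(v)]
    for name, pat in PATTERNS.items():
        if values and sum(bool(pat.match(v)) for v in values) / len(values) > 0.8:
            return name
    return "string"

df = pd.DataFrame({
    "contact": ["a@x.org", "b@y.com", "c@z.net"],
    "signup":  ["2021-01-02", "2021-03-04", "2021-05-06"],
    "visits":  ["3", "7", "12"],
})

schema = {col: categorize(df[col]) for col in df.columns}
print(schema)  # {'contact': 'email', 'signup': 'date', 'visits': 'integer'}

# Simple inter-column check: does column a functionally determine column b?
def functionally_depends(df, a, b):
    return (df.groupby(a)[b].nunique() <= 1).all()

print(functionally_depends(df, "contact", "signup"))
```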

Keywords: schema recognition, semantic data profiling, meta-categorisation, inter-column semantic dependencies

Procedia PDF Downloads 418
28280 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and thus generate new concepts and new semantic links. Even when using highly specific vocabulary from the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages with only minor adjustments.
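
A minimal sketch of this kind of link discovery (not the authors' pipeline): after pre-processing, candidate semantic links are proposed between terms that co-occur across documents. The tiny corpus and the co-occurrence threshold are assumptions.

```python
# Propose semantic links between terms that co-occur in at least two documents.
from itertools import combinations
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "drilling riser connects the wellhead to the platform",
    "the riser transports oil from the wellhead",
    "platform crew inspects the drilling riser",
]

vec = CountVectorizer(stop_words="english", binary=True)
X = vec.fit_transform(docs)                      # document-term presence matrix
terms = vec.get_feature_names_out()

pair_counts = Counter()
for row in X.toarray():
    present = [terms[i] for i, v in enumerate(row) if v]
    pair_counts.update(combinations(sorted(present), 2))

links = [pair for pair, c in pair_counts.items() if c >= 2]
print(links)   # e.g. ('drilling', 'riser'), ('riser', 'wellhead'), ...
```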

Keywords: semantic links, data mining, linked data, SKOS

Procedia PDF Downloads 179
28279 Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments, which use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41-0.82) and 0.47 (95% CI: 0.14-0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows classification errors to be estimated, circumventing the limitations of the arbitrary cut-point approaches adopted by other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and in intervention studies to accurately estimate treatment effectiveness.
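
A simplified stand-in for the agreement analysis described above: observed test-retest agreement and ordinary Cohen's kappa on hypothetical limited/adequate classifications. Dumenci's latent kappa is model-based (it corrects for classification error) and is not implemented here; the data below are invented for illustration only.

```python
# Observed agreement and Cohen's kappa for a simulated test-retest classification.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 98                                 # two-week sample size reported above
time1 = rng.choice(["limited", "adequate"], size=n, p=[0.3, 0.7])
flip = rng.random(n) < 0.11            # ~11% of classifications change at retest
time2 = np.where(flip, np.where(time1 == "limited", "adequate", "limited"), time1)

agreement = np.mean(time1 == time2)
kappa = cohen_kappa_score(time1, time2)
print(f"observed agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```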

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 178
28278 Semantic Preference across Research Articles: A Corpus-Based Study of Adjectives in English

Authors: Valdênia Carvalho e Almeida

Abstract:

The goal of the present study is to investigate the semantic preference of the most frequent adjectives in research articles through a corpus-based analysis of texts published in Applied Linguistics (AL) journals. The corpus used in this study contains texts published from 2014 to 2018 in three journals: Language Learning and Technology, English for Academic Purposes, and TESOL Quarterly, totaling more than one million words. A corpus-based analysis was carried out to identify the most frequent adjectives that occurred across the three journals. By observing the concordance lines of the adjectives and analyzing the words they co-occurred with, the semantic preferences of each adjective were determined. The AL corpus analysis was then compared to an investigation of the same adjectives in a corpus of Chemistry. This second part of the study aimed to identify possible differences and similarities between the two corpora in the use of the adjectives in research articles from both areas. The results show that some preferences seem to be closely related not only to the academic genre of the texts but also to the specific domain of the discipline and, to a lesser extent, to the context of research in each journal. This research illustrates a possible contribution of Corpus Linguistics to exploring the concept of semantic preference in more detail, considering the complex nature of the phenomenon.
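
A sketch of the collocate-extraction step behind a semantic-preference analysis (not the author's procedure): words occurring within a small window to the right of a target adjective are counted and then inspected for shared semantic traits. The corpus, window size, and target adjective are assumptions.

```python
# Count right-hand collocates of a target adjective in a tokenized corpus.
from collections import Counter

corpus = """significant difference was found . significant improvement in scores .
a significant effect of feedback . significant correlation between variables""".split()

def right_collocates(tokens, node, window=2):
    hits = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node:
            hits.update(t.lower() for t in tokens[i + 1:i + 1 + window] if t.isalpha())
    return hits

print(right_collocates(corpus, "significant").most_common(5))
# A semantic preference emerges if the top collocates share a semantic trait,
# e.g. 'difference', 'effect', 'correlation' all denote research findings.
```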

Keywords: applied linguistics, corpus linguistics, chemistry, research article, semantic preference

Procedia PDF Downloads 185
28277 The Latent Model of Linguistic Features in Korean College Students’ L2 Argumentative Writings: Syntactic Complexity, Lexical Complexity, and Fluency

Authors: Jiyoung Bae, Gyoomi Kim

Abstract:

This study explores a range of linguistic features used in Korean college students' L2 argumentative writing in order to develop a model that identifies the variables that predict writing proficiency. The study investigated the latent variable structure of L2 linguistic features, including syntactic complexity, lexical complexity, and fluency. One hundred forty-six university students in Korea participated in this study. The results of the confirmatory factor analysis (CFA) showed that the indicators of linguistic features examined in this study provide a foundation for re-categorizing the indicators found in extant research on L2 Korean writers according to each latent variable of linguistic features. The CFA models indicated one measurement model of L2 syntactic complexity and L2 learners' writing proficiency; these two latent factors were correlated with each other. Based on the overall findings, the integrated linguistic features of L2 writing suggest some pedagogical implications for L2 writing instruction.

Keywords: linguistic features, syntactic complexity, lexical complexity, fluency

Procedia PDF Downloads 170
28276 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on conceptual network formalism. We propose to use this formalism as an indexing language to represent descriptive concepts and their weighting; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on a fuzzy association rules model, in an attempt to discover the non-taxonomic (contextual) relations between the concepts of a document. These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. We then apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
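
A sketch of the term-to-concept step only (not the paper's system): index terms are mapped to WordNet synsets and weighted by term frequency. Princeton WordNet via NLTK stands in for EuroWordNet, the naive first-sense disambiguation is an assumption, and the document is a toy example.

```python
# Map index terms to WordNet synsets and accumulate concept weights.
from collections import Counter
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

doc = "the bank approved the loan and the customer opened a new account"
tokens = [t for t in doc.split() if len(t) > 3]
tf = Counter(tokens)

concept_weights = Counter()
for term, freq in tf.items():
    synsets = wn.synsets(term, pos=wn.NOUN)
    if synsets:
        # naive disambiguation: take the first (most frequent) sense
        concept_weights[synsets[0].name()] += freq

for concept, weight in concept_weights.most_common():
    print(concept, weight)
```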

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 141
28275 Visualization of Latent Sweat Fingerprints Deposited on Paper by Infrared Radiation and Blue Light

Authors: Xiaochun Huang, Xuejun Zhao, Yun Zou, Feiyu Yang, Wenbin Liu, Nan Deng, Ming Zhang, Nengbin Cai

Abstract:

A simple infrared radiation (IR) device was developed for rapid visualization of sweat fingerprints deposited on paper with blue light (450 nm, 11 W). In this approach, IR serves as a pretreatment step before the sweat fingerprints are illuminated by blue light; an annular blue light source was adopted for visualizing the latent sweat fingerprints. Sample fingerprints were examined under various conditions after deposition, and the experimental results indicate that the recovery rate of the latent sweat fingerprints is in the range of 50%-100% without chemical treatment. A mechanism for the observed visibility is proposed based on the transportation and re-impregnation of fluorescer in the paper at the regions of water, and further exploratory experiments fully supported this mechanism. Therefore, IR pretreatment may be preferable for detecting latent fingerprints in cases where biological information from the samples is needed for subsequent testing.

Keywords: forensic science, visualization, infrared radiation, blue light, latent sweat fingerprints, detection

Procedia PDF Downloads 497
28274 Text Similarity in Vector Space Models: A Comparative Study

Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge

Abstract:

Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models on this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. TFIDF performs surprisingly well in the other cases, in particular for longer and more technical texts or for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
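
A small-scale illustration of the comparison above (not the paper's patent corpus): cosine similarity between documents under plain TFIDF and under a latent semantic indexing (truncated SVD) projection of the same matrix. The documents and the number of components are assumptions.

```python
# Compare TFIDF and LSI (truncated SVD) cosine similarities on toy documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "a method for coating a metal substrate with polymer film",
    "polymer film deposition on metallic substrates",
    "a neural network for classifying handwritten digits",
]

tfidf = TfidfVectorizer().fit_transform(docs)
print("TFIDF similarity(doc0, doc1):", cosine_similarity(tfidf[0], tfidf[1])[0, 0])

lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print("LSI   similarity(doc0, doc1):", cosine_similarity(lsi[0:1], lsi[1:2])[0, 0])
```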

Keywords: big data, patent, text embedding, text similarity, vector space model

Procedia PDF Downloads 175
28273 Development of Zinc Oxide Coated Carbon Nanoparticles from Pineapple Leaves Using the Sol-Gel Method for Optimal Adsorption of Copper Ions and Reuse in Latent Fingerprint Detection

Authors: Bienvenu Gael Fouda Mbanga, Zikhona Tywabi-Ngeva, Kriveshini Pillay

Abstract:

This work highlights a new sol-gel method for preparing a nanocomposite of nitrogen carbon nanoparticles fused onto zinc oxide nanoparticles (N-CNPs/ZnONPsNC) to remove copper ions (Cu²+) from wastewater, and the application of the metal-loaded adsorbent to latent fingerprint detection. The N-CNPs/ZnONPsNC proved to be an effective sorbent, with optimum Cu²+ sorption at pH 8 and a 0.05 g dose. The Langmuir isotherm was found to best fit the process, with a maximum adsorption capacity of 285.71 mg/g, higher than most values reported in other research on Cu²+ removal. Adsorption was spontaneous and endothermic at 25 °C. In addition, the Cu²+-loaded N-CNPs/ZnONPsNC was found to be sensitive and selective for latent fingerprint (LFP) recognition on a range of porous surfaces. It is therefore an effective chemical for latent fingerprint detection in forensic research.
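
A sketch of a Langmuir isotherm fit like the one reported above. The equilibrium data points below are synthetic placeholders, not the paper's measurements; only the fitting procedure is illustrated.

```python
# Fit the Langmuir isotherm q_e = qmax * KL * Ce / (1 + KL * Ce) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 10, 20, 40, 80, 160])        # equilibrium concentration [mg/L]
qe = np.array([55.0, 100, 160, 215, 255, 275])   # uptake [mg/g] (synthetic)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[300.0, 0.01])
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```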

Keywords: latent fingerprint, nanocomposite, adsorption, copper ions, metal loaded adsorption, adsorbent

Procedia PDF Downloads 83
28272 A Method for Semantic Image Auto-Annotation

Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou

Abstract:

Recently, due to the semantic gap between image visual features and human concepts, semantic image auto-annotation has become an important topic. Auto-annotation by search is a popular approach: low-level visual features are extracted from the image and mapped by a hash method into hash codes, which are then transformed into binary strings and stored. We use this approach to design and implement a method for semantic image auto-annotation. Finally, tests based on the Corel image set show that the method is effective.
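
A sketch of the hash-and-search idea above (not the paper's implementation): low-level feature vectors are mapped to binary codes with random projections, and the annotation is copied from the nearest stored image in Hamming distance. The feature vectors, labels, and hash size are toy assumptions.

```python
# Random-projection hashing of feature vectors and nearest-neighbor annotation.
import numpy as np

rng = np.random.default_rng(0)
n_bits, dim = 16, 64
planes = rng.normal(size=(n_bits, dim))          # random projection hash

def binary_code(feature):
    return (planes @ feature > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

# "Database" of already-annotated images (features stand in for color
# correlograms or other low-level descriptors).
database = {label: binary_code(rng.normal(size=dim))
            for label in ["beach", "tiger", "city"]}

query_code = binary_code(rng.normal(size=dim))
best = min(database, key=lambda lbl: hamming(query_code, database[lbl]))
print("predicted annotation:", best)
```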

Keywords: image auto-annotation, color correlograms, Hash code, image retrieval

Procedia PDF Downloads 497
28271 Phraseologisms with the Spices and Food Additives Component in Polish and Russian: Lexical and Semantic Aspects

Authors: Oliwia Bator

Abstract:

The subject of this paper is phraseologisms with the "spices and food additives" component in Polish and Russian. The purpose of the study is to analyze these phraseologisms from the point of view of lexis and semantics. The material for analysis was extracted from phraseological dictionaries of Polish and Russian. The phraseologisms were considered from the lexical point of view, taking into account the name of the "spices and food additives" component that forms them. From the semantic point of view, 12 semantic groups of phraseologisms were identified in Polish, while 9 semantic groups were identified in Russian. In addition, their functioning in contemporary Polish and Russian contexts is shown; the contexts were taken from the National Corpus of the Polish Language and the National Corpus of the Russian Language.

Keywords: phraseology, language, Slavic studies, linguistics

Procedia PDF Downloads 37
28270 Hybrid Approximate Structural-Semantic Frequent Subgraph Mining

Authors: Montaceur Zaghdoud, Mohamed Moussaoui, Jalel Akaichi

Abstract:

Frequent subgraph mining usually relies on graph matching and is widely used when analyzing big data with large graphs. Much research has dealt with exact or inexact structural graph matching, but little attention has been paid to semantic matching when graph vertices and/or edges are attributed and typed. It is therefore very interesting to integrate background knowledge into the analysis, so that the extracted frequent subgraphs are further pruned by a new semantic filter instead of relying only on structural similarity in the graph matching process. Consequently, this paper focuses on developing a new hybrid approximate structural-semantic graph matching method to discover a set of frequent subgraphs. It simultaneously uses an approximate structural similarity function based on graph edit distance and a possibilistic vertex similarity function based on an affinity function. The structural and semantic filters contribute together to pruning the extracted frequent set. The new hybrid structural-semantic frequent subgraph mining approach is suitable for several applications, such as community detection in social networks.
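
A tiny sketch of combining a structural and a semantic filter (not the authors' possibilistic method): networkx's graph edit distance supplies the structural score, and a simple label-overlap score stands in for the semantic affinity function. The graphs, node types, and thresholds are illustrative.

```python
# Combine graph edit distance (structural) with node-type overlap (semantic).
import networkx as nx

G1 = nx.Graph()
G1.add_edges_from([("a", "b"), ("b", "c")])
nx.set_node_attributes(G1, {"a": "person", "b": "company", "c": "city"}, "type")

G2 = nx.Graph()
G2.add_edges_from([("x", "y"), ("y", "z")])
nx.set_node_attributes(G2, {"x": "person", "y": "company", "z": "country"}, "type")

structural = nx.graph_edit_distance(G1, G2)        # exact GED, fine on tiny graphs

types1 = set(nx.get_node_attributes(G1, "type").values())
types2 = set(nx.get_node_attributes(G2, "type").values())
semantic = len(types1 & types2) / len(types1 | types2)   # Jaccard of node types

print(f"graph edit distance: {structural}, semantic similarity: {semantic:.2f}")
# Keep the candidate match only if both filters pass.
print("accepted:", structural <= 2 and semantic >= 0.5)
```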

Keywords: approximate graph matching, hybrid frequent subgraph mining, graph mining, possibility theory

Procedia PDF Downloads 402
28269 Online Topic Model for Broadcasting Contents Using Semantic Correlation Information

Authors: Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park, Sang-Jo Lee

Abstract:

This paper proposes a method of learning topics for broadcasting contents. There are two kinds of texts related to broadcasting contents. One is the broadcasting script, a series of texts including directions and dialogues. The other is blog posts, which contain relatively abstract content, stories, and diverse information about the broadcast. Although the two kinds of text cover similar broadcasting contents, the words used in blog posts and in the broadcasting script differ. To improve the quality of the topics, a method is needed that accounts for this word difference. In this paper, we introduce a semantic vocabulary expansion method to address it. We expand the topics of the broadcasting script by incorporating the words in blog posts: each blog-post word is added to the most semantically correlated topics, and we use word2vec to obtain the semantic correlation between blog-post words and script topics. The topic vocabularies are updated, and posterior inference is then performed to rearrange the topics. In experiments, we verified that the proposed method learns more salient topics for broadcasting contents.
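
A sketch of the vocabulary-expansion step only (not the authors' full topic model): each blog-post word is assigned to the topic whose top words it is most similar to under a word2vec model. The corpus, topics, and words are toy examples, and the training settings are assumptions.

```python
# Expand script topics with blog-post words via word2vec similarity.
from gensim.models import Word2Vec
import numpy as np

sentences = [
    ["detective", "investigates", "murder", "case"],
    ["chef", "cooks", "dinner", "kitchen"],
    ["detective", "finds", "clue", "kitchen"],
    ["chef", "tastes", "soup", "dinner"],
] * 50                                           # repeat so the toy model trains

model = Word2Vec(sentences, vector_size=32, min_count=1, window=2, seed=0, epochs=20)

script_topics = {"crime": ["detective", "murder"], "cooking": ["chef", "dinner"]}
blog_words = ["clue", "soup"]

def topic_similarity(word, top_words):
    return float(np.mean([model.wv.similarity(word, t) for t in top_words]))

for w in blog_words:
    best = max(script_topics, key=lambda k: topic_similarity(w, script_topics[k]))
    script_topics[best].append(w)                # expand the topic vocabulary

print(script_topics)
```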

Keywords: broadcasting script analysis, topic expansion, semantic correlation analysis, word2vec

Procedia PDF Downloads 251
28268 Using the Semantic Web Technologies to Bring Adaptability in E-Learning Systems

Authors: Fatima Faiza Ahmed, Syed Farrukh Hussain

Abstract:

The last few decades have seen a large proportion of the population turning to e-learning technologies, from learning tools used in primary and elementary schools to competency-based e-learning systems specifically designed for applications such as finance and marketing. The huge diversity of this audience raises a large number of challenges for the designers of e-learning systems, one of which is adaptability. This paper focuses on adaptability of the learning material in an e-learning course and on how artificial intelligence and the semantic web can be used as effective tools for this purpose. The study showed that the semantic web, still a hot topic in computer science, can be a powerful tool for designing and implementing adaptable e-learning systems.

Keywords: adaptable e-learning, HTMLParser, information extraction, semantic web

Procedia PDF Downloads 338
28267 The Influence of Noise on Aerial Image Semantic Segmentation

Authors: Pengchao Wei, Xiangzhong Fang

Abstract:

Noise is ubiquitous. Denoising is an essential technology, especially in image semantic segmentation, where noise is generally categorized into two main types: feature noise and label noise. The main focus of this paper is modeling label noise and investigating the behavior of different types of label noise in image semantic segmentation tasks using K-Nearest-Neighbor and Convolutional Neural Network classifiers. Performance with and without label noise is evaluated and illustrated. In addition, the influence of feature noise on the image semantic segmentation task is investigated, and a feature-noise reduction method is applied to mitigate its influence during learning.
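
A minimal sketch of a label-noise experiment of this kind (not the paper's setup): a KNN pixel classifier is trained on synthetic per-pixel features with clean labels and with symmetric label noise, and pixel accuracy against clean labels is compared. All data, features, and noise rates are synthetic assumptions.

```python
# Inject symmetric label noise into training labels and measure KNN pixel accuracy.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_pixels, n_classes = 4000, 3
features = rng.normal(size=(n_pixels, 4)) + rng.integers(0, n_classes, n_pixels)[:, None]
labels = (features.mean(axis=1) // 1).astype(int) % n_classes   # synthetic ground truth

def add_label_noise(y, rate):
    y = y.copy()
    flip = rng.random(len(y)) < rate
    y[flip] = rng.integers(0, n_classes, flip.sum())
    return y

split = n_pixels // 2
X_tr, X_te, y_tr, y_te = features[:split], features[split:], labels[:split], labels[split:]

for rate in [0.0, 0.2, 0.4]:
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, add_label_noise(y_tr, rate))
    acc = clf.score(X_te, y_te)                  # evaluated against clean labels
    print(f"label-noise rate {rate:.1f}: pixel accuracy {acc:.3f}")
```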

Keywords: convolutional neural network, denoising, feature noise, image semantic segmentation, k-nearest-neighbor, label noise

Procedia PDF Downloads 220
28266 Semantics of the Word “Nas” in Verse 24 of Surah Al-Baqarah Based on Izutsu’s Semantic Field Theory

Authors: Seyedeh Khadijeh Mirbazel, Masoumeh Arjmandi

Abstract:

Semantics is a linguistic approach and a scientific stream, and like all scientific streams, it is dynamic. The study of meaning is carried out over the broad semantic collections of words that form a discourse; in other words, meaning is not something that can be found in a single word. Rather, the formation of meaning is a process that takes place in the discourse as a whole. One of the contemporary semantic theories is Izutsu's Semantic Field Theory. According to this theory, the discovery of meaning depends on the function of words and takes place within the context of language. The purpose of this research is to identify the meaning of the word “Nas” in the discourse of verse 24 of Surah Al-Baqarah, which introduces “Nas” as the firewood of hell, although translators have rendered it as “people”. The present research investigated the semantic structure of the word “Nas” using the aforementioned theory through the descriptive-analytical method. By matching the semantic fields of the Quranic word “Nas”, this research came to the conclusion that “Nas” refers to those persons who have forgotten God and His covenant of believing in His Oneness. For this reason, God called them “Nas” (the forgetful), derived from the Arabic /næsiwoɔn/, which means “to forget”. Therefore, the intended meaning of “Nas” in the verses that contain the word is not equivalent to “people”, which is a general noun.

Keywords: Nas, people, semantics, semantic field theory

Procedia PDF Downloads 189
28265 Neural Correlates of Arabic Digits Naming

Authors: Fernando Ojedo, Alejandro Alvarez, Pedro Macizo

Abstract:

In the present study, we explored the electrophysiological correlates of Arabic digit naming to determine the semantic processing of numbers. Participants named Arabic digits grouped by category or intermixed with exemplars of other semantic categories while the N400 event-related potential was examined. Around 350-450 ms after the presentation of the Arabic digits, brain waves were more positive in anterior regions and more negative in posterior regions when stimuli were grouped by category relative to the mixed condition. Contrary to what was found in other studies, the electrophysiological results suggested that the production of numerals involved semantic mediation.

Keywords: Arabic digit naming, event-related potentials, semantic processing, number production

Procedia PDF Downloads 582
28264 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research project: the author usually starts with the analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, the three polysemantic synonyms true, loyal, and faithful were analyzed in terms of differences and similarities in their semantic structure. The corpus-based approach in the study of these adjectives involved the following. After the analysis of the dictionary data, two corpora were consulted to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study, there were no special requirements regarding the genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g., word lists, word-frequency statistics, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g., true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study. Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, phrases like An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distributional groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable, and convenient tool for obtaining data for further semantic study.
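
A sketch of the corpus steps described above (concordance lines and co-occurrence extraction) using NLTK on a toy token list; the BNC and COCA themselves are not freely downloadable, so the text, the target adjectives, and the scoring choice are assumptions.

```python
# Concordance lines and bigram collocations for target adjectives with NLTK.
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ("he remained true to his word and stayed loyal to the cause "
          "a faithful friend is true to you the dog was faithful and loyal").split()

text = nltk.Text(tokens)
text.concordance("true", width=40)               # prints concordance lines

finder = BigramCollocationFinder.from_words(tokens)
scored = finder.score_ngrams(BigramAssocMeasures().pmi)
adj_bigrams = [(bg, s) for bg, s in scored if bg[0] in {"true", "loyal", "faithful"}]
print(adj_bigrams[:5])                           # e.g. (('true', 'to'), ...)
```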

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 314
28263 Russian Spatial Impersonal Sentence Models in Translation Perspective

Authors: Marina Fomina

Abstract:

The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of the predicative feature. It is the multifaceted nature of the category of subject that 1) gives rise to a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions, sentence parts vs. parts of speech, etc.), and 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), and communicative approaches. Many linguists consider the prototypical approach to the category of subject to be the most instrumental, as it reveals the integrity of the denotative and linguistic components of the conceptual category. This approach treats the subject as a source of a non-passive predicative feature, an element of the subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category, with some members more central than others. Translation-wise, the most “treacherous” subject types are the peripheral ones. The paper 1) demonstrates the peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) provides a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, and 4) reveals the non-prototypical features of subjects in the English equivalents.

Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject

Procedia PDF Downloads 370
28262 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer, and epidermal growth factor receptor (EGFR) mutation is its main cause. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analysis of qualitative CT features is based on visual evaluation by a radiologist; however, the naked eye may not be able to assess all image features. Radiomics, on the other hand, provides the opportunity for quantitative analysis of CT image features. The aim of this review was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: The keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values from the ROC analyses of semantic and radiomics features were compared. Results: The reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: The accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
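
A sketch of the ROC-AUC comparison summarized above, on synthetic data (the reviewed studies' patient data are not reproduced here): an AUC is computed for a "semantic" score and a "radiomics" score against EGFR mutation status, with the radiomics score deliberately made slightly more informative.

```python
# Compare ROC AUC of two synthetic feature scores for predicting EGFR mutation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
egfr_mutation = rng.integers(0, 2, n)                 # 0 = wild type, 1 = mutant

semantic_score = egfr_mutation * 0.6 + rng.normal(0, 1.0, n)
radiomics_score = egfr_mutation * 1.0 + rng.normal(0, 1.0, n)

print("semantic  AUC:", round(roc_auc_score(egfr_mutation, semantic_score), 2))
print("radiomics AUC:", round(roc_auc_score(egfr_mutation, radiomics_score), 2))
```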

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 167
28261 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge of user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. These data are important both for potential customers and for the businesses providing the services. Data from social media are attracting significant attention, and social media has become the most prominent channel for expressing unregulated opinions. Prospective customers look for reviews from experienced customers before deciding to buy a product or service, and several websites provide a platform for users to post their feedback for the provider and for potential customers. However, the biggest challenge in analyzing such data lies in extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine opinions. The approach can analyze and classify the latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic, followed by sentiment analysis to assess the satisfaction level for each topic. Hotel reviews are classified using multiple machine learning techniques, and different classifiers are compared for mining the opinions in user reviews through sentiment analysis. The experiment concludes that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
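
A compact sketch of this kind of pipeline (not the author's full experiment): LDA assigns each review to a latent topic, and a Multinomial Naive Bayes classifier predicts its sentiment. The handful of reviews, labels, and topic count are invented for illustration.

```python
# Topic assignment with LDA and sentiment classification with Multinomial NB.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

reviews = [
    "the room was clean and the bed was comfortable",
    "dirty room and a very uncomfortable bed",
    "breakfast was delicious and the staff were friendly",
    "rude staff and the breakfast was cold",
]
sentiment = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Topic modeling: group reviews into 2 latent topics (e.g. room vs. service).
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topics = lda.transform(X).argmax(axis=1)
print("topic per review:", topics)

# Sentiment classification per review.
clf = MultinomialNB().fit(X, sentiment)
print("predicted sentiment:",
      clf.predict(vec.transform(["comfortable bed and friendly staff"])))
```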

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 97
28260 A Study on Bilingual Semantic Processing: Category Effects and Age Effects

Authors: Lai Yi-Hsiu

Abstract:

The present study addressed the nature of bilingual semantic processing in Mandarin Chinese and Southern Min and examined category effects and age effects. Nineteen bilingual adults of Mandarin Chinese and Southern Min, nine monolingual seniors of Mandarin Chinese, and ten monolingual seniors of Southern Min in Taiwan individually completed two semantic tasks: picture naming and category fluency. The instruments for the naming task were sixty black-and-white pictures, comprising thirty-five object pictures and twenty-five action pictures. The category fluency task also consisted of two semantic categories: objects (or nouns) and actions (or verbs). The reaction time for each picture/question was additionally calculated and analyzed. Oral productions in Mandarin Chinese and in Southern Min were compared and discussed to examine the category effects and age effects. The results of the category fluency task indicated that the informational content of the seniors was comparatively deteriorated, and thus they produced a smaller number of semantic-lexical items. Significant group differences were also found in the reaction time results. Category effects were significant for both adults and seniors in the semantic fluency task. The findings of the present study will help characterize the nature of the bilingual semantic processing of adults and seniors and contribute to the fields of contrastive and corpus linguistics.

Keywords: bilingual semantic processing, aging, Mandarin Chinese, Southern Min

Procedia PDF Downloads 571
28259 PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs

Authors: Renukswamy Chikkamath, Vishvapalsinhji Ramsinh Parmar, Christoph Hewel, Markus Endres

Abstract:

Given a patent document, identifying distinct semantic annotations is an interesting research problem. Text annotation helps patent practitioners such as examiners and patent attorneys to quickly identify the key arguments of an invention, subsequently providing a timely marking of the patent text. In manual patent analysis, marking paragraphs to recognize semantic information is common practice for better readability, but this semantic annotation process is laborious and time-consuming. To alleviate this problem, we propose a dataset for training machine learning algorithms to automate the highlighting process. The contributions of this work are: i) a multi-class dataset of 150k samples developed by traversing USPTO patents over a decade, ii) statistics and distributions of the data articulated through exploratory data analysis, iii) baseline machine learning models developed on the dataset to address the patent paragraph highlighting task, and iv) a future path for extending this work with deep learning and domain-specific pre-trained language models to develop a highlighting tool. This work assists patent practitioners in highlighting semantic information automatically and aids in creating sustainable and efficient patent analysis using the aptitude of machine learning.
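
A sketch of a baseline of the kind mentioned above: a TF-IDF plus logistic regression multi-class classifier over patent paragraphs. The paragraphs and label names below are invented placeholders, not samples or classes from the PaSA dataset.

```python
# Baseline multi-class paragraph classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

paragraphs = [
    "the prior art fails to provide a sealing member with sufficient elasticity",
    "in one embodiment the sealing member comprises a silicone elastomer",
    "it is an object of the invention to improve sealing under high pressure",
    "the disclosed elastomer advantageously reduces leakage by forty percent",
]
labels = ["problem", "embodiment", "objective", "advantage"]   # hypothetical classes

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(paragraphs, labels)

print(clf.predict(["the invention aims to provide improved sealing"]))
```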

Keywords: machine learning, patents, patent sentiment analysis, patent information retrieval

Procedia PDF Downloads 90
28258 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities: Case Study of Offshore Riser Integrity

Authors: Vahid Ebrahimipour

Abstract:

Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operational experience throughout the production system life cycle. The representation of contextual meaning is highly dependent on word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for the lexical semantic analysis and contextual meaning representation of maintenance activities in a mass production system. Our approach uses a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports to facilitate the translation, interpretation, and conversion of human-readable text into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage, and it provides a contextualized structure for obtaining a generic context model that can be utilized during the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis applies causality-driven logic to the keywords' senses to divulge the structural and meaning dependency relationships between the words in a context. The output is a contextualized word representation of maintenance activity that accommodates computer-based representation and inference using OWL/RDF.
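
A sketch of the final step described above: extracted keywords and a dependency relation between them are serialized as RDF so they can be reasoned over. The maintenance vocabulary, relation names, and triples below are assumptions, not the paper's ontology.

```python
# Emit extracted keyword relations as RDF triples with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

MAINT = Namespace("http://example.org/maintenance#")
g = Graph()
g.bind("maint", MAINT)

# Keywords extracted from a maintenance report and a causal dependency
# divulged by the analysis (corrosion -> riser failure).
g.add((MAINT.Corrosion, RDF.type, MAINT.FailureMode))
g.add((MAINT.RiserFailure, RDF.type, MAINT.FailureEvent))
g.add((MAINT.Corrosion, MAINT.causes, MAINT.RiserFailure))
g.add((MAINT.RiserFailure, RDFS.label, Literal("offshore riser integrity loss")))

print(g.serialize(format="turtle"))
```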

Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation

Procedia PDF Downloads 105
28257 Numerical Analysis of the Melting of Nano-Enhanced Phase Change Material in a Rectangular Latent Heat Storage Unit

Authors: Radouane Elbahjaoui, Hamid El Qarnia

Abstract:

The melting of paraffin wax (P116) dispersed with Al2O3 nanoparticles in a rectangular latent heat storage unit (LHSU) is numerically investigated. The storage unit consists of a number of identical vertical plates of nano-enhanced phase change material (NEPCM) separated by rectangular channels through which the heat transfer fluid (HTF: water) flows. A two-dimensional mathematical model is used to investigate numerically the heat transfer and flow characteristics of the LHSU. The melting problem is formulated using the enthalpy-porosity method, and the governing equations are solved with the finite volume approach. The effects of the nanoparticles' volumetric fraction and the Reynolds number on the thermal performance of the storage unit are investigated.
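
A heavily simplified sketch of an enthalpy-method melting calculation: a one-dimensional explicit finite volume slab with a hot wall, not the paper's two-dimensional enthalpy-porosity model with HTF channels. The material properties, geometry, and boundary conditions are illustrative assumptions only.

```python
# 1D explicit enthalpy-method melting of a PCM slab heated from the left wall.
import numpy as np

L, N = 0.05, 100                       # slab thickness [m], control volumes
dx = L / N
rho, cp, k = 800.0, 2500.0, 0.25       # assumed paraffin-like properties
Lf, Tm = 2.0e5, 320.0                  # latent heat [J/kg], melting temperature [K]
T_wall, T_init = 340.0, 300.0

dt = 0.4 * rho * cp * dx**2 / k        # stable explicit time step
H = rho * cp * (np.full(N, T_init) - Tm)   # volumetric enthalpy relative to Tm

def temperature(H):
    T = np.where(H < 0, Tm + H / (rho * cp), Tm)              # solid
    return np.where(H > rho * Lf, Tm + (H - rho * Lf) / (rho * cp), T)  # liquid

for step in range(20000):
    T = temperature(H)
    Tb = np.concatenate(([2 * T_wall - T[0]], T, [T[-1]]))    # hot left, adiabatic right
    flux = -k * np.diff(Tb) / dx                              # face heat fluxes
    H -= dt * np.diff(flux) / dx                              # energy balance per cell

liquid_fraction = np.clip(H / (rho * Lf), 0.0, 1.0).mean()
print(f"mean liquid fraction after {20000 * dt:.0f} s: {liquid_fraction:.2f}")
```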

Keywords: nano-enhanced phase change material (NEPCM), phase change material (PCM), nanoparticles, latent heat storage unit (LHSU), melting

Procedia PDF Downloads 407
28256 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis

Authors: Sidi Yang, Haiyi Zhang

Abstract:

Twitter is a microblogging platform where millions of users share their attitudes, views, and opinions daily. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in Twitter data is an effective way to analyze a large set of tweets and find a set of topics in a computationally efficient manner. Sentiment analysis provides an effective method for showing the emotions and sentiments found in each tweet and an efficient way to summarize the results in a clearly understandable manner. The primary goal of this paper is to explore text mining and to extract and analyze useful information from unstructured text using two approaches: LDA topic modelling and sentiment analysis, applied to English-language Twitter plain-text data. These two methods allow people to mine data more effectively and efficiently. The LDA topic model and sentiment analysis can also be applied to provide insight in business and scientific fields.
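
A sketch of the two approaches above on a handful of toy tweets (not real Twitter data): a gensim LDA model finds latent topics, and a small hand-made lexicon stands in for a full sentiment analyzer. The tweets, topic count, and lexicon are assumptions.

```python
# LDA topic modeling plus a tiny lexicon-based sentiment scorer on toy tweets.
from gensim import corpora, models

tweets = [
    "love the new phone camera great battery",
    "terrible battery life hate this phone",
    "great game last night amazing goal",
    "boring game awful referee decisions",
]
texts = [t.split() for t in tweets]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print("topic", topic_id, ":", words)

# Stand-in sentiment lexicon (a full analyzer would replace this).
LEXICON = {"love": 1, "great": 1, "amazing": 1,
           "terrible": -1, "hate": -1, "boring": -1, "awful": -1}
for t in texts:
    score = sum(LEXICON.get(w, 0) for w in t)
    print(" ".join(t), "->", "positive" if score > 0 else "negative")
```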

Keywords: text mining, Twitter, topic model, sentiment analysis

Procedia PDF Downloads 179
28255 Performance Evaluation of the Classic seq2seq Model versus a Proposed Semi-supervised Long Short-Term Memory Autoencoder for Time Series Data Forecasting

Authors: Aswathi Thrivikraman, S. Advaith

Abstract:

The study is aimed at designing encoders for deciphering intricacies in time series data by redescribing the dynamics operating on a lower-dimensional manifold. A semi-supervised LSTM autoencoder is devised and investigated to see if the latent representation of the time series data can better forecast the data. End-to-end training of the LSTM autoencoder, together with another LSTM network that is connected to the latent space, forces the hidden states of the encoder to represent the most meaningful latent variables relevant for forecasting. Furthermore, the study compares the predictions with those of a traditional seq2seq model.
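
A sketch of an LSTM autoencoder with a forecasting head attached to the latent space, in the spirit of the model described above, written with tensorflow.keras. The layer sizes, sine-wave data, and training settings are assumptions, not the study's configuration.

```python
# LSTM autoencoder with a reconstruction head and a forecasting head on the latent space.
import numpy as np
from tensorflow.keras import layers, Model

T_in, T_out, n_feat, latent_dim = 20, 5, 1, 16

inputs = layers.Input(shape=(T_in, n_feat))
latent = layers.LSTM(latent_dim)(inputs)                    # encoder -> latent space

# Reconstruction branch (autoencoder part).
rec = layers.RepeatVector(T_in)(latent)
rec = layers.LSTM(latent_dim, return_sequences=True)(rec)
recon = layers.TimeDistributed(layers.Dense(n_feat), name="recon")(rec)

# Forecasting branch connected to the same latent space (the supervised signal).
fc = layers.RepeatVector(T_out)(latent)
fc = layers.LSTM(latent_dim, return_sequences=True)(fc)
forecast = layers.TimeDistributed(layers.Dense(n_feat), name="forecast")(fc)

model = Model(inputs, [recon, forecast])
model.compile(optimizer="adam", loss="mse")

# Toy sine-wave series split into input windows and future targets.
series = np.sin(np.arange(0, 200, 0.1))
n_win = len(series) - T_in - T_out
X = np.stack([series[i:i + T_in] for i in range(n_win)])[..., None]
Y = np.stack([series[i + T_in:i + T_in + T_out] for i in range(n_win)])[..., None]

model.fit(X, {"recon": X, "forecast": Y}, epochs=2, batch_size=64, verbose=0)
print("forecast shape:", model.predict(X[:1], verbose=0)[1].shape)   # (1, T_out, 1)
```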

Keywords: LSTM, autoencoder, forecasting, seq2seq model

Procedia PDF Downloads 155