Search results for: semantic web technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2379

2349 Network Word Discovery Framework Based on Sentence Semantic Vector Similarity

Authors: Ganfeng Yu, Yuefeng Ma, Shanliang Yang

Abstract:

Word discovery is a key problem in text information retrieval technology. New word discovery methods tend to be closely tied to individual words because they generally obtain their results by analyzing words. With the popularity of social networks, individual netizens and online self-media have generated a wide variety of network texts for the convenience of online life, including network words that are far from standard Chinese expression. How to detect network words is therefore one of the important goals in the field of text information retrieval today. In this paper, we integrate a word embedding model and clustering methods to propose a network word discovery framework based on sentence semantic similarity (S³-NWD) to detect network words effectively from a corpus. This framework constructs sentence semantic vectors through a distributed representation model, uses the similarity of sentence semantic vectors to determine the semantic relationship between sentences, and finally realizes network word discovery through semantic replacement between sentences. Experiments verify that the framework not only discovers network words quickly but also recovers the standard-word meaning of the discovered network words, which reflects the effectiveness of our work.
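
The framework's core step, comparing sentence semantic vectors, can be illustrated with a minimal sketch: sentence vectors are built by averaging word embeddings and compared with cosine similarity. The toy embedding table and the sentences below are illustrative placeholders, not the authors' data or model.

```python
import numpy as np

# Toy word embeddings (placeholders); a real system would use a trained
# distributed representation model such as word2vec.
emb = {
    "the": np.array([0.1, 0.3, 0.2]),
    "movie": np.array([0.7, 0.1, 0.5]),
    "film": np.array([0.68, 0.12, 0.49]),
    "is": np.array([0.2, 0.2, 0.1]),
    "great": np.array([0.4, 0.9, 0.3]),
}

def sentence_vector(tokens):
    """Average the word vectors of known tokens to get a sentence vector."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = sentence_vector(["the", "movie", "is", "great"])
s2 = sentence_vector(["the", "film", "is", "great"])
print(f"sentence similarity: {cosine(s1, s2):.3f}")
```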

Keywords: text information retrieval, natural language processing, new word discovery, information extraction

Procedia PDF Downloads 62
2348 Hybrid Approximate Structural-Semantic Frequent Subgraph Mining

Authors: Montaceur Zaghdoud, Mohamed Moussaoui, Jalel Akaichi

Abstract:

Frequent subgraph mining usually relies on graph matching and is widely used when analyzing big data with large graphs. Many research works have dealt with exact or inexact structural graph matching, but little attention has been paid to semantic matching when graph vertices and/or edges are attributed and typed. It therefore seems very interesting to integrate background knowledge into the analysis, so that the extracted frequent subgraphs are further pruned by a new semantic filter instead of relying only on structural similarity in the graph matching process. Consequently, this paper focuses on developing a new hybrid approximate structural-semantic graph matching method to discover a set of frequent subgraphs. It simultaneously uses an approximate structural similarity function based on graph edit distance and a possibilistic vertex similarity function based on an affinity function. The structural and semantic filters together prune the extracted frequent set. The new hybrid structural-semantic frequent subgraph mining approach is suitable for several applications, such as community detection in social networks.
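
As a rough illustration of the hybrid idea, the sketch below combines a structural score derived from graph edit distance with a simple attribute-based vertex similarity; the weighting and the affinity function are simplified placeholders rather than the possibilistic formulation used in the paper.

```python
import networkx as nx

# Two small attributed graphs (illustrative only).
g1 = nx.Graph()
g1.add_nodes_from([(1, {"type": "person"}), (2, {"type": "person"}), (3, {"type": "post"})])
g1.add_edges_from([(1, 2), (1, 3)])

g2 = nx.Graph()
g2.add_nodes_from([("a", {"type": "person"}), ("b", {"type": "post"})])
g2.add_edges_from([("a", "b")])

# Structural part: approximate similarity derived from graph edit distance.
ged = nx.graph_edit_distance(g1, g2)
structural_sim = 1.0 / (1.0 + ged)

# Semantic part: naive affinity between vertex type distributions.
def type_counts(g):
    counts = {}
    for _, data in g.nodes(data=True):
        counts[data["type"]] = counts.get(data["type"], 0) + 1
    return counts

c1, c2 = type_counts(g1), type_counts(g2)
shared = sum(min(c1.get(t, 0), c2.get(t, 0)) for t in set(c1) | set(c2))
semantic_sim = shared / max(g1.number_of_nodes(), g2.number_of_nodes())

# Combine both filters (the 0.5/0.5 weights are arbitrary for the example).
hybrid_score = 0.5 * structural_sim + 0.5 * semantic_sim
print(f"structural={structural_sim:.2f} semantic={semantic_sim:.2f} hybrid={hybrid_score:.2f}")
```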

Keywords: approximate graph matching, hybrid frequent subgraph mining, graph mining, possibility theory

Procedia PDF Downloads 369
2347 A Study of Various Ontology Learning Systems from Text and a Look into Future

Authors: Fatima Al-Aswadi, Chan Yong

Abstract:

With the large volume of unstructured data that increases day by day on the web, the motivation to represent the knowledge in this data in a machine-processable form has increased. Ontology is one of the major cornerstones of representing information in a more meaningful way on the Semantic Web. The goal of ontology learning from text is to elicit and represent domain knowledge in a machine-readable form. This paper aims to give a follow-up review of ontology learning systems from text and some of their defects. Furthermore, it discusses how the ontology learning process may be enhanced in the future.

Keywords: concept discovery, deep learning, ontology learning, semantic relation, semantic web

Procedia PDF Downloads 479
2346 An Approach to Integrate Ontologies of Open Educational Resources in Knowledge Base Management Systems

Authors: Firas A. Al Laban, Mohamed Chabi, Sammani Danwawu Abdullahi

Abstract:

There is a real need to integrate different types of Open Educational Resources (OER) with an intelligent system able to extract information and knowledge at the semantic search level. This need arises because most current learning standards adopt web-based learning, and e-learning systems do not always serve all educational goals. Semantic Web systems provide educators, students, and researchers with intelligent queries based on a semantic knowledge-management learning system. An ontology-based learning system is an advanced system in which ontology plays the core role of the semantic web in a smart learning environment. The objective of this paper is to discuss the potential of ontologies and the mapping of different kinds of ontologies, heterogeneous or homogeneous, to manage and control different types of Open Educational Resources. The important contribution of this research is a methodology that uses logical rules and conceptual relations to map between ontologies of different educational resources. We expect this methodology to establish an intelligent educational system supporting student tutoring and self-directed, lifelong learning.

Keywords: knowledge management systems, ontologies, semantic web, open educational resources

Procedia PDF Downloads 466
2345 Russian Spatial Impersonal Sentence Models in Translation Perspective

Authors: Marina Fomina

Abstract:

The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of predicative feature. It is the multifaceted nature of the category of subject that 1) triggers a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions / sentence parts vs. parts of speech issues, etc.); 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), communicative approaches, etc. Many linguists consider the prototypical approach to the category of subject to be the most instrumental as it reveals the integrity of denotative and linguistic components of the conceptual category. This approach relates to subject as a source of non-passive predicative feature, an element of subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category with some members more central than others. Translation-wise, the most “treacherous” subject types are the peripheral ones. The paper 1) features a peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) makes a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, 4) reveals non-prototypical features of subjects in the English equivalents.

Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject

Procedia PDF Downloads 343
2344 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common lung cancer type. Epidermal growth factor receptor (EGFR) mutation is the main cause of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by a radiologist; however, the naked eye may not assess all image features. On the other hand, radiomics provides the opportunity for quantitative analyses of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD) and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
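
The comparison hinges on ROC AUC values; the snippet below shows how such an AUC is computed for a single feature against mutation status, using made-up labels and scores purely for illustration.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = EGFR-mutant, 0 = wild type.
egfr_status = [1, 0, 1, 1, 0, 0, 1, 0]

# Scores for one semantic feature (e.g. radiologist-rated ground glass opacity)
# and one radiomics feature (e.g. entropy); the values are invented placeholders.
semantic_feature = [0.8, 0.3, 0.6, 0.7, 0.4, 0.2, 0.5, 0.6]
radiomics_feature = [0.9, 0.2, 0.7, 0.8, 0.3, 0.1, 0.6, 0.4]

print("semantic AUC :", roc_auc_score(egfr_status, semantic_feature))
print("radiomics AUC:", roc_auc_score(egfr_status, radiomics_feature))
```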

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 127
2343 An Automatic Model Transformation Methodology Based on Semantic and Syntactic Comparisons and the Granularity Issue Involved

Authors: Tiexin Wang, Sebastien Truptil, Frederick Benaben

Abstract:

Model transformation, as a pivotal aspect of model-driven engineering, attracts more and more attention from both researchers and practitioners. Many domains (enterprise engineering, software engineering, knowledge engineering, etc.) use model transformation principles and practices to address their domain-specific problems; furthermore, model transformation can also be used to bridge the gap between different domains by sharing and exchanging knowledge. Since model transformation has become widely used, a new requirement has emerged: to define the transformation process effectively and efficiently and to reduce the manual effort involved. This paper presents an automatic model transformation methodology based on semantic and syntactic comparisons and focuses particularly on the granularity issue that exists in the transformation process. Compared to traditional model transformation methodologies, this methodology serves a general, cross-domain purpose. Semantic and syntactic checking measurements are combined into a refined transformation process that solves the granularity issue. Moreover, the semantic and syntactic comparisons are supported by a software tool, replacing manual effort.
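
A minimal sketch of how a syntactic (string-based) and a semantic (synonym-based) comparison could be combined when matching model element names; the synonym table and the 0.5/0.5 weighting are illustrative assumptions, not the measurements defined in the paper.

```python
from difflib import SequenceMatcher

# Tiny, hand-made synonym table standing in for a semantic resource (assumption).
SYNONYMS = {
    "client": {"customer", "buyer"},
    "order": {"purchase"},
}

def syntactic_sim(a: str, b: str) -> float:
    """Character-level similarity between two element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    """1.0 if the names are identical or listed as synonyms, else 0.0."""
    a, b = a.lower(), b.lower()
    if a == b:
        return 1.0
    return 1.0 if b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set()) else 0.0

def combined_sim(a: str, b: str, w_syn: float = 0.5, w_sem: float = 0.5) -> float:
    return w_syn * syntactic_sim(a, b) + w_sem * semantic_sim(a, b)

# Matching an element of a source model against candidates in a target model.
print(combined_sim("Client", "Customer"))    # high semantic, low syntactic
print(combined_sim("Client", "ClientInfo"))  # high syntactic, low semantic
```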

Keywords: automatic model transformation, granularity issue, model-driven engineering, semantic and syntactic comparisons

Procedia PDF Downloads 362
2342 Reverse Logistics Information Management Using Ontological Approach

Authors: F. Lhafiane, A. Elbyed, M. Bouchoum

Abstract:

The Reverse Logistics (RL) process is considered a complex and dynamic network involving many stakeholders such as suppliers, manufacturers, warehouses, retailers, and customers; this complexity is inherent in such a process due to the lack of perfect knowledge or to conflicting information. Ontologies, on the other hand, can be considered an approach to overcoming the problems of knowledge sharing and communication among the various reverse logistics partners. In this paper, we propose a semantic representation based on a hybrid architecture for building ontologies in a bottom-up way; this method facilitates semantic reconciliation between the heterogeneous information systems (ICT) that support reverse logistics processes and product data.

Keywords: reverse logistics, information management, heterogeneity, ontologies, semantic web

Procedia PDF Downloads 465
2341 On the Framework of Contemporary Intelligent Mathematics Underpinning Intelligent Science, Autonomous AI, and Cognitive Computers

Authors: Yingxu Wang, Jianhua Lu, Jun Peng, Jiawei Zhang

Abstract:

The fundamental demand in contemporary intelligent science towards Autonomous AI (AI*) is the creation of unprecedented formal means of Intelligent Mathematics (IM). It is discovered that natural intelligence is inductively created rather than exhaustively trained. Therefore, IM is a family of algebraic and denotational mathematics encompassing Inference Algebra, Real-Time Process Algebra, Concept Algebra, Semantic Algebra, Visual Frame Algebra, etc., developed in our labs. IM plays indispensable roles in training-free AI* theories and systems beyond traditional empirical data-driven technologies. A set of applications of IM-driven AI* systems will be demonstrated in contemporary intelligence science, AI*, and cognitive computers.

Keywords: intelligence mathematics, foundations of intelligent science, autonomous AI, cognitive computers, inference algebra, real-time process algebra, concept algebra, semantic algebra, applications

Procedia PDF Downloads 25
2340 Combining Instance-Based and Reasoning-Based Approaches for Ontology Matching

Authors: Abderrahmane Khiat, Moussa Benaissa

Abstract:

Due to the increasing number of information sources available on the web and their distribution and heterogeneity, ontology alignment has become a very important and unavoidable problem for ensuring semantic interoperability. Instance-based ontology alignment is based on the comparison of the extensions of concepts and represents a very promising technique for finding semantic correspondences between entities of different ontologies. In practice, two situations may arise: ontologies that share many common instances and ontologies that share few or no common instances. In this paper, we describe an approach to manage the latter case. This approach exploits reasoning on ontologies in order to create a corpus of common instances. We show that it is theoretically powerful, because it is based on description logics, and very useful in practice. We present the experimental results obtained by running our approach on the ontologies of the OAEI 2012 benchmark test. The results show the performance of our approach.
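
To make the instance-based idea concrete, here is a minimal sketch that scores two concepts by the overlap of their instance sets (a simple Jaccard measure); in the paper's setting the common instances would first be produced by description-logic reasoning, which is only simulated here by hand-written sets.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two concepts based on their shared instances."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical concept extensions after a reasoner has materialized the instances.
ontology1_author = {"turing", "hopper", "dijkstra"}
ontology2_writer = {"turing", "hopper", "austen"}

score = jaccard(ontology1_author, ontology2_writer)
print(f"Author ~ Writer similarity: {score:.2f}")  # 0.50 for these toy sets
```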

Keywords: description logic inference, instance-based ontology alignment, semantic interoperability, semantic web

Procedia PDF Downloads 414
2339 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used to identify technological trends and formulate technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques. Using this method, one can capture the technological implications of patent documents or extract keywords that indicate their important contents. However, it relies only on simple keyword-frequency counting, so it cannot take into account the semantic relationships among keywords or semantic information such as how technologies are used within their technology area and how they affect other technologies. To automatically analyze unstructured technological information in patents and extract semantic information, the text should be transformed into an abstracted form that includes the key technological concepts. The specific sentence structure 'SAO' (subject, action, object) has emerged to represent such key concepts and can be extracted with natural language processing (NLP). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that extracts SAO structures through technical-element extraction rules. Although sentence structures in patent texts have a unique format, prior studies have depended on general NLP tools designed for common documents such as newspapers, research papers, and Twitter mentions, so they cannot take into account the specific sentence structure types of patent documents. To overcome this limitation, we identified the particular forms of patent sentences and defined the SAO structures in patent text data. There are four types of technical elements: technology adoption purpose, application area, tool for technology, and technical components. These four sentence structure types in patents each have their own specific word structure, defined by the location or sequence of the parts of speech within the sentence. Finally, we developed algorithms for extracting SAOs, and the results offer insight into the technology innovation process by providing different perspectives on technology.
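
A minimal SAO extraction sketch using an off-the-shelf dependency parser; it illustrates the general idea only, not the patent-specific extraction rules developed in the paper. It assumes the spaCy English model en_core_web_sm is installed.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_sao(sentence: str):
    """Return (subject, action, object) triples found in one sentence."""
    doc = nlp(sentence)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    triples.append((s, token.lemma_, o))
    return triples

# Invented patent-like sentence for illustration.
print(extract_sao("The sensor module measures the battery temperature."))
# e.g. [('module', 'measure', 'temperature')]
```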

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 243
2338 The Impact of the Information Technologies on the Accounting Department of the Romanian Companies

Authors: Dumitru Valentin Florentin

Abstract:

The need to process high volumes of data and intense competition are only two of the reasons that make the use of information technologies necessary. The objective of our research is to establish the impact of information technologies on the accounting departments of Romanian companies. In order to achieve it, starting from a literature review, we carried out an empirical study based on a questionnaire. We investigated the types of technologies used, the reasons that led to the implementation of certain technologies, the benefits brought by the use of information technologies, the difficulties raised by the implementation, and the expected future effects of the applications. The conclusions show that there is an evolution in the degree of implementation of information technologies in Romanian companies compared with the results of other studies conducted a few years before.

Keywords: information technologies, impact, company, Romania, empirical study

Procedia PDF Downloads 394
2337 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman

Abstract:

With the development of hyperspectral imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is first applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands according to their contribution scores; the relevant bands are then adaptively selected based on the weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
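
The band-ranking step can be pictured with a small numpy sketch: given a projection matrix produced by some transform (a random placeholder stands in for TLPP here), each band is scored by the magnitude of its weights and the top-scoring bands are kept. The scoring rule and the selection threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_components = 50, 5

# Placeholder for the transformation matrix learned by TLPP (bands x components).
W = rng.normal(size=(n_bands, n_components))

# Score each spectral band by the total magnitude of its projection weights.
band_scores = np.abs(W).sum(axis=1)

# Adaptively keep the bands whose score exceeds the mean score (illustrative rule).
selected = np.where(band_scores > band_scores.mean())[0]
print(f"kept {selected.size} of {n_bands} bands:", selected[:10], "...")
```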

Keywords: band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation

Procedia PDF Downloads 329
2336 Parallel Querying of Distributed Ontologies with Shared Vocabulary

Authors: Sharjeel Aslam, Vassil Vassilev, Karim Ouazzane

Abstract:

Ontologies and various semantic repositories have become a convenient approach for implementing model-driven architectures of distributed systems on the Web, and SPARQL is the standard language for querying them. However, although SPARQL is a well-established standard for querying semantic repositories in RDF and OWL format, and there are commonly used APIs which support it, such as Jena for Java, a parallel option is not incorporated in them. This article presents a complete framework consisting of an object algebra for parallel RDF and an index-based implementation of a parallel query engine capable of dealing with distributed RDF ontologies that share a common vocabulary. It has been implemented in Java, and for validation the algorithms have been applied to the problem of organizing virtual exhibitions on the Web.
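
The system itself is built in Java on top of Jena; as a language-neutral illustration of the idea of querying distributed graphs in parallel and merging the results, here is a small Python sketch using rdflib and a thread pool. The data, the query, and the merge step are simplified placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
from rdflib import Graph

# Two tiny RDF fragments standing in for distributed ontologies with a shared vocabulary.
TTL_A = """@prefix ex: <http://example.org/> .
ex:MonaLisa ex:exhibitedIn ex:Louvre ."""
TTL_B = """@prefix ex: <http://example.org/> .
ex:StarryNight ex:exhibitedIn ex:MoMA ."""

QUERY = """PREFIX ex: <http://example.org/>
SELECT ?work ?museum WHERE { ?work ex:exhibitedIn ?museum }"""

def query_source(ttl: str):
    g = Graph()
    g.parse(data=ttl, format="turtle")
    return list(g.query(QUERY))

# Run the same SPARQL query against each source in parallel, then merge the rows.
with ThreadPoolExecutor(max_workers=2) as pool:
    partial_results = pool.map(query_source, [TTL_A, TTL_B])

merged = [row for rows in partial_results for row in rows]
for work, museum in merged:
    print(work, "->", museum)
```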

Keywords: distributed ontologies, parallel querying, semantic indexing, shared vocabulary, SPARQL

Procedia PDF Downloads 166
2335 The Use of Social Stories and Digital Technology as Interventions for Autistic Children; A State-Of-The-Art Review and Qualitative Data Analysis

Authors: S. Hussain, C. Grieco, M. Brosnan

Abstract:

Background and Aims: Autism is a complex neurobehavioural disorder characterised by impairments in the development of language and communication skills. The study involved a state-of-the-art systematic review, in addition to qualitative data analysis, to establish the evidence for social stories as an intervention strategy for autistic children. An up-to-date review of the use of digital technologies in the delivery of interventions to autistic children was also carried out, to assess the efficacy of digital technologies and the use of social stories in improving intervention outcomes for autistic children. Methods: Two student researchers reviewed a range of randomised controlled trials and observational studies. The aim of the review was to establish whether there was adequate evidence to justify recommending social stories to autistic patients. The students devised their own search strategies to be used across a range of search engines, including Ovid-Medline, Google Scholar and PubMed, and then critically appraised the generated literature. Additionally, qualitative data obtained from a comprehensive online questionnaire on social stories were thematically analysed. The thematic analysis was carried out independently by each researcher, using a 'bottom-up' approach: contributors read the responses to a given question, devised semantic themes from them, and then placed each response into a semantic theme or sub-theme. The students then met to discuss the merging of their theme headings. The inter-rater reliability (IRR) was calculated before and after the theme headings were merged, giving IRR values for pre- and post-discussion. Lastly, the thematic analysis was assessed by a third researcher, a professor of psychology and the director of the Centre for Applied Autism Research at the University of Bath. Results: The review of the literature, as well as the thematic analysis of qualitative data, found supporting evidence for social story use. The thematic analysis uncovered some interesting themes from the questionnaire responses, relating to the reasons why social stories were used and the factors influencing their effectiveness in each case. Overall, however, the evidence for digital technology interventions was limited, and the literature could not prove a causal link between better intervention outcomes for autistic children and the use of technologies, although it did offer plausible theories for the suitability of digital technologies for autistic children. Conclusions: Overall, the review concluded that there was adequate evidence to justify advising the use of social stories with autistic children. Digital technology is clearly a fast-emerging field and appears to be a promising method of intervention for autistic children; however, it should not yet be considered an evidence-based approach. Drawing on this research, the students developed ideas on social story interventions which aim to help autistic children.

Keywords: autistic children, digital technologies, intervention, social stories

Procedia PDF Downloads 94
2334 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), as a shallow semantic parsing approach, includes recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as 'Who', 'When', 'What', and 'Where' in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant's feature of being liquid is shared with the verb. This capture looks like a total fusion of meaning and cannot be decomposed in a direct way (in contrast to compound verbs like babysit or breastfeed). From this perspective, SRL is really too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure; otherwise we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; and Vpr is a primitive verb, i.e. a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement; in this case the verb drink can be represented as man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation unique to semantically identical sentences. This leads to the unification of semantic representation, and the critical flaw of SRL is thereby resolved.
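
The decomposition Vcp = Vpr + P can be mirrored in a tiny lookup-based sketch: verbs that capture a participant are rewritten into a primitive verb plus the unfolded participant, so that the two example sentences end up with the same role structure. The lexicon entries are illustrative assumptions.

```python
# Hypothetical decomposition lexicon: capturing verb -> (primitive verb, captured participant).
DECOMPOSITION = {
    "butter": ("put", "butter"),      # Vcp = Vpr + P
    "breastfeed": ("feed", "milk"),
}

def canonical_roles(verb: str, agent: str, goal: str) -> dict:
    """Unfold a capturing verb into its primitive form with explicit participants."""
    if verb in DECOMPOSITION:
        primitive, participant = DECOMPOSITION[verb]
        return {"verb": primitive, "Agent": agent, "Patient": participant, "Goal": goal}
    return {"verb": verb, "Agent": agent, "Goal": goal}

print(canonical_roles("butter", agent="John", goal="the bread"))
# -> {'verb': 'put', 'Agent': 'John', 'Patient': 'butter', 'Goal': 'the bread'}
# i.e. the same [Agent + Patient + Goal] structure as "John put butter on the bread."
```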

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 339
2333 Online Topic Model for Broadcasting Contents Using Semantic Correlation Information

Authors: Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park, Sang-Jo Lee

Abstract:

This paper proposes a method of learning topics for broadcasting contents. There are two kinds of texts related to broadcasting contents. One is the broadcasting script, which is a series of texts including directions and dialogues. The other is blog posts, which contain relatively abstracted content, stories, and diverse information about broadcasting contents. Although the two kinds of texts cover similar broadcasting contents, the words used in blog posts and broadcasting scripts differ. In order to improve the quality of topics, a method is needed to account for this word difference. In this paper, we introduce a semantic vocabulary expansion method to solve the word difference. We expand the topics of the broadcasting script by incorporating the words in blog posts: each word in the blog posts is added to the most semantically correlated topics. We use word2vec to obtain the semantic correlation between words in blog posts and topics of scripts. The vocabularies of the topics are updated, and posterior inference is then performed to rearrange the topics. In experiments, we verified that the proposed method can learn more salient topics for broadcasting contents.
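
The assignment step, attaching a blog-post word to its most semantically correlated topic, can be sketched with gensim's word2vec as below; the toy corpus and topic word lists are placeholders, and a real run would train on the actual scripts and blog posts.

```python
from gensim.models import Word2Vec

# Toy training corpus (placeholder); real vectors would come from the script/blog corpora.
corpus = [
    ["detective", "murder", "clue", "police"],
    ["recipe", "chef", "kitchen", "dish"],
    ["police", "arrest", "suspect", "detective"],
]
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, seed=1)

# Topics learned from the broadcasting script, each represented by its top words.
topics = {
    "crime_drama": ["detective", "police", "clue"],
    "cooking_show": ["chef", "recipe", "dish"],
}

def best_topic(blog_word: str) -> str:
    """Assign a blog-post word to the topic with the highest average similarity."""
    scores = {
        name: model.wv.n_similarity([blog_word], words)
        for name, words in topics.items()
        if blog_word in model.wv
    }
    return max(scores, key=scores.get) if scores else "unknown"

print(best_topic("suspect"))  # likely attaches to the crime_drama topic (toy vectors are noisy)
```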

Keywords: broadcasting script analysis, topic expansion, semantic correlation analysis, word2vec

Procedia PDF Downloads 230
2332 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings as numerical values) contained in a usage matrix as input data. This matrix can be constructed from users' behaviors or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users; it is a phenomenon that occurs at the beginning of use, when the system lacks the data needed to make recommendations. There are three types of cold start problems: cold start for a new item, for a new system, and for a new user. In this article, we are interested in the cold start for a new user. When the system welcomes a new user, the profile exists but does not have enough data, and its communities with other user profiles are still unknown. This leads to recommendations not adapted to the profile of the new user. In this paper, we propose an approach that mitigates cold start by using notions of similarity and semantic proximity between user profiles during the cold start phase. We use the available cold metadata (metadata extracted from the new user's data), which is useful for positioning the new user within a community. The aim is to look for similarities and semantic proximities with the old and current user profiles of the system. Proximity is represented by close concepts considered to belong to the same group, while similarity groups together elements that appear alike; similarity and proximity are two close but distinct concepts. This leads us to a construction of similarity based on: a) concepts (properties, terms, instances) independent of the ontology structure and b) the joint representation of the two concepts (relations, presence of terms in a document, simultaneous presence of the authorities). We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), in order to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data that was not initially provided, and ultimately associate a better first profile with the cold start. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
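
One way to picture the "attach the new user to a community" step is a cosine similarity between the new user's metadata vector and each community's centroid, as in the sketch below; the feature encoding and community profiles are invented for illustration and are not the OIVCSRS ontology itself.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Metadata encoded as simple features: [likes_scifi, likes_cooking, likes_sports]
community_centroids = {
    "scifi_fans":  np.array([0.9, 0.1, 0.2]),
    "food_lovers": np.array([0.1, 0.8, 0.1]),
    "sports_fans": np.array([0.2, 0.1, 0.9]),
}

# Cold metadata declared by the new user at sign-up (hypothetical).
new_user = np.array([1.0, 0.0, 0.3])

best = max(community_centroids, key=lambda c: cosine(new_user, community_centroids[c]))
print("attach new user to community:", best)
```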

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 195
2331 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need to use a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which works by considering the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information about static (e.g. top, bottom) and dynamic spatial relations (e.g. moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a huge matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used in the category of manipulation actions, which ultimately involve two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a joint activity representation structure. For this purpose, we need to carry out a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve these, we computed the importance of each matrix row in a statistical way, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the distinguishability of the predefined manipulation actions. By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a tremendous impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 83
2330 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test

Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi

Abstract:

Supported by neuroscientific findings, the so-called hub-and-spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which fits the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined from the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose the one of the two lower options that best fits the target word above, based on the semantic relation defined. We measured five types of semantic knowledge representation: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics in the further analysis. Using descriptive statistics, we determined the frequency of correct and incorrect responses, and with this knowledge we could later adjust or remove items of questionable reliability. Reliability was tested using Cronbach's α, and it can be safely said that all results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two groups (p > 0.05). Likewise, age had no effect on the results according to a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with a nonparametric Spearman's rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a linear relationship between the measured semantic functions. A margin of error of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the test can easily be conducted in the subclinical phase.
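
The statistical pipeline described above (normality check, then nonparametric group and correlation tests) can be reproduced with scipy as in the sketch below; the scores are randomly generated stand-ins for the real test data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in subtest scores for two groups (e.g. two gender groups).
group_a = rng.integers(10, 20, size=60)
group_b = rng.integers(10, 20, size=60)

# 1) Normality check; a small p-value argues for nonparametric tests.
print("Shapiro-Wilk p:", stats.shapiro(group_a).pvalue)

# 2) Nonparametric group comparison (Mann-Whitney U).
print("Mann-Whitney p:", stats.mannwhitneyu(group_a, group_b).pvalue)

# 3) Nonparametric correlation between two subtests (Spearman's rho).
subtest_taxonomy = rng.integers(10, 20, size=60)
rho, p = stats.spearmanr(group_a, subtest_taxonomy)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```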

Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge

Procedia PDF Downloads 71
2329 The Effect of Information Technologies on Business Performance: An Application on Small Hotels

Authors: Abdullah Karaman, Kursad Sayin

Abstract:

This research examines which information technologies are used in small hotel businesses and points out managers' perceptions of the relationship between information technologies and performance. During the research, a questionnaire was prepared, the small-scale hotel managers were interviewed face to face, they filled out the questionnaire, and the answers obtained were evaluated. As the result of the research, it was found that the managers do not pay much attention to the use of information technologies in practice, even though they accept that information technologies are important in terms of performance.

Keywords: information technologies, managers, performance, small hotels

Procedia PDF Downloads 459
2328 Semantic Processing in Chinese: Category Effects, Task Effects and Age Effects

Authors: Yi-Hsiu Lai

Abstract:

The present study aimed to elucidate the nature of semantic processing in Chinese. Language and cognition related to the issue of aging are examined from the perspective of picture naming and category fluency tasks. Twenty Chinese-speaking adults (ranging from 25 to 45 years old) and twenty Chinese-speaking seniors (ranging from 65 to 75 years old) in Taiwan participated in this study. Each of them individually completed two tasks: a picture naming task and a category fluency task. The instruments for the naming task were sixty black-and-white pictures: thirty-five object and twenty-five action pictures. The category fluency task also consisted of two semantic categories – objects (or nouns) and actions (or verbs). Participants were asked to report as many items within a category as possible in one minute. Scores for action fluency and for object fluency were the sums of correct responses in these two categories. Category effects (actions vs. objects) and age effects were examined in these tasks. Objects were further divided into two major types: living and non-living objects. Actions were also categorized into two major types: action verbs and process verbs. Reaction time to each picture/question was additionally calculated and analyzed. Results of the category fluency task indicated that the content of information in Chinese seniors was comparatively deteriorated, producing a smaller number of semantic-lexical items. A significant group difference was also found in the reaction time results. The category effect was significant for both Chinese adults and seniors in the semantic fluency task. The findings of the present study help characterize the nature of semantic processing in Chinese-speaking adults and seniors and contribute to the issue of language and aging.

Keywords: semantic processing, aging, Chinese, category effects

Procedia PDF Downloads 334
2327 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that uses semantic technology to retrieve information and display it meaningfully. Semantic search engines are not used widely over the web, as the majority are still in beta stage or under construction. Many problems face current applications in semantic search; the major one is analyzing and computing the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google's index, with some adaptations to its returned results achieved by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences; these are generated by randomly combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page's content generated by Google. The generated sentences are sent to the Google engine one by one, and all the results are then ranked together again with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We conducted experiments with six different queries and observed that the order of most results ranked with QESM differed from Google's originally generated pages. With our experimental queries, QESM frequently achieves better accuracy than Google; in the worst cases, it behaves like Google.
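
A tiny sketch of the two ingredients named above: WordNet-based synonym lookup for query expansion and a crude similarity between an expanded query and a page description. Token overlap stands in for the paper's sentence-level semantic similarity measure, no actual calls to Google are made, and the WordNet corpus must be downloaded once via nltk.

```python
from itertools import product
from nltk.corpus import wordnet as wn  # requires a one-time nltk.download('wordnet')

def synonyms(term: str) -> set:
    """Collect WordNet lemma names for a query term (plus the term itself)."""
    names = {term}
    for syn in wn.synsets(term):
        names.update(l.name().replace("_", " ") for l in syn.lemmas())
    return names

def expand(query_terms: list) -> list:
    """Generate semantically analogous queries by combining synonyms."""
    return [" ".join(combo) for combo in product(*(synonyms(t) for t in query_terms))]

def overlap_sim(query: str, description: str) -> float:
    """Very rough stand-in for a semantic similarity measure between sentences."""
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d) / len(q | d)

expanded = expand(["cheap", "car"])
page_description = "Affordable automobile deals and cheap auto listings"
best = max(expanded, key=lambda q: overlap_sim(q, page_description))
print(len(expanded), "expanded queries; best match:", best)
```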

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 400
2326 An Ontology for Semantic Enrichment of RFID Systems

Authors: Haitham S. Hamza, Mohamed Maher, Shourok Alaa, Aya Khattab, Hadeal Ismail, Kamilia Hosny

Abstract:

Radio Frequency Identification (RFID) has become a key technology in the emerging concept of the Internet of Things (IoT). Naturally, business applications require the deployment of various RFID systems that are developed by different vendors and use various data formats. This heterogeneity poses a real challenge in developing large-scale IoT systems with RFID, as integration becomes very complex and challenging. Semantic integration is a key approach to dealing with this challenge. To do so, an ontology for RFID systems needs to be developed in order to semantically annotate RFID systems and hence facilitate their integration. Accordingly, in this paper, we propose an ontology for RFID systems. The proposed ontology can be used to semantically enrich RFID systems and hence improve their usage and reasoning. The usage of the proposed ontology is explained through a simple scenario in the health care domain.
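
To illustrate what such semantic enrichment might look like, the sketch below annotates a single RFID reading with a small, made-up vocabulary in rdflib and retrieves it with SPARQL; the namespace and class names are invented for the example and are not the ontology proposed in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

# Hypothetical vocabulary for the example (not the paper's ontology).
EX = Namespace("http://example.org/rfid#")

g = Graph()
g.bind("ex", EX)

reading = URIRef("http://example.org/rfid#reading42")
g.add((reading, RDF.type, EX.TagReading))
g.add((reading, EX.tagId, Literal("E200-3412-0000-01")))
g.add((reading, EX.readBy, EX.WardReader1))
g.add((reading, EX.attachedTo, EX.InfusionPump7))  # simple health-care scenario
g.add((reading, EX.timestamp, Literal("2024-05-01T10:15:00", datatype=XSD.dateTime)))

# SPARQL query over the semantically annotated readings.
q = """PREFIX ex: <http://example.org/rfid#>
SELECT ?tag ?asset WHERE {
  ?r a ex:TagReading ; ex:tagId ?tag ; ex:attachedTo ?asset .
}"""
for tag, asset in g.query(q):
    print(tag, "is attached to", asset)
```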

Keywords: RFID, semantic technology, ontology, SPARQL query language, heterogeneity

Procedia PDF Downloads 440
2325 Information Technologies in Automotive Assembly Industry in Thailand

Authors: Jirarat Teeravaraprug, Usawadee Inklay

Abstract:

This paper attempts to prioritize the information technologies on which organizations should concentrate. The case study covers organizations in the automotive assembly industry in Thailand. Data were first collected to gather all information technologies known and used in the automotive assembly industry in Thailand. Five experts from the industry were then surveyed based on the concept of fuzzy DEMATEL. The information technologies were categorized into six groups: communication, transaction, planning, organization management, warehouse management, and transportation. The cause groups of information technologies for each group were analysed and presented. Moreover, the relationship between the information technologies in use and the significant information technologies is given. Discussions based on the information technologies in use and the research results are provided.
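
For readers unfamiliar with the technique, the sketch below runs the crisp core of DEMATEL (the fuzzy variant adds a fuzzification/defuzzification step around it) on an invented 3x3 expert influence matrix: normalize the direct-relation matrix, derive the total-relation matrix, and split factors into cause and effect groups.

```python
import numpy as np

# Invented averaged expert judgements: how strongly factor i influences factor j (0-4 scale).
D = np.array([
    [0, 3, 2],
    [1, 0, 3],
    [2, 1, 0],
], dtype=float)

# 1) Normalize the direct-relation matrix.
N = D / D.sum(axis=1).max()

# 2) Total-relation matrix T = N (I - N)^-1.
T = N @ np.linalg.inv(np.eye(len(D)) - N)

# 3) Prominence (R + C) and relation (R - C); a positive R - C marks a cause factor.
R = T.sum(axis=1)   # influence given
C = T.sum(axis=0)   # influence received
for i, name in enumerate(["communication", "transaction", "planning"]):
    group = "cause" if R[i] - C[i] > 0 else "effect"
    print(f"{name:14s} R+C={R[i]+C[i]:.2f} R-C={R[i]-C[i]:+.2f} -> {group} group")
```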

Keywords: information technology, automotive assembly industry, fuzzy DEMATEL

Procedia PDF Downloads 310
2324 New Ways of Vocabulary Enlargement

Authors: S. Pesina, T. Solonchak

Abstract:

Lexical invariants, being a sort of stereotype within the frames of ordinary consciousness, are created by the members of a language community as a result of a uniform division of reality. The invariant meaning is formed in a person's mind gradually, in the course of different actualizations of secondary meanings in various contexts. We understand the lexical invariant as an abstract language essence containing a set of semantic components. In one of its configurations, it is the basis of all, or of a number of, the meanings making up the semantic structure of the word.

Keywords: lexical invariant, invariant theories, polysemantic word, cognitive linguistics

Procedia PDF Downloads 285
2323 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution aims to extract features that identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or by transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks; however, few works combine them. Besides, predictions by neural networks are difficult to explain, while explainability is vital in authorship attribution tasks. In this paper, we utilize not only the statistical style and content features but also both syntactic and semantic features. Unlike an end-to-end neural model, our method consists of two steps: feature selection and prediction. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features improve on the state-of-the-art methods on three benchmark datasets.
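
The second step, interpretable prediction from selected n-gram features, can be approximated with the standard scikit-learn pipeline below; it uses plain TF-IDF n-grams in place of the paper's attentive selection network, and the two-author toy corpus is invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus: a few short documents per author.
docs = [
    "the ship sailed at dawn toward the grey horizon",
    "at dawn the crew watched the grey sea in silence",
    "profits rose sharply as the quarterly report exceeded forecasts",
    "the quarterly forecasts were revised after profits rose again",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

# Word n-grams stand in for the selected stylistic/content features.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, authors)

print(clf.predict(["the grey sea at dawn"]))  # likely attributed to author_a
```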

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 104
2322 Cerrado and Vereda: A Survey of Portuguese Lexicon for Brazilian Biomes

Authors: Daniel Marra

Abstract:

This paper analyses, from a semantic-diachronic viewpoint, the changes of meaning that two lexical items of the Brazilian Portuguese language have gone through. Cerrado and Vereda currently designate the second largest Brazilian biome and one of its most important subsystems. Nevertheless, these two words have long individual histories that can be traced back to their Latin etymons. Therefore, the purpose of this work is to highlight the process by which meaning instantiated itself in these words' formation and to discuss how semantic change subsequently took place in them. As this paper shows, the aforementioned words were created in different past synchronies and have undergone changes of meaning by metaphor and metonymy. Besides, it is argued here that semantic change takes place due to external causes, such as the generalization and specialization of meaning. It happens when a specialized use of a lexical item, restricted to a particular linguistic group, is adopted by other groups, which generalize its meaning. In these processes, the etymological idea of the word is generally lost, and the word gains, in the new group, a less specific meaning in relation to its etymology, sometimes with no relation to the original idea. As a final point, it is claimed that both the creation of a lexical item and its change of meaning involve pragmatic goals, such as the need language users have to express a new meaning related to a certain reality in the empirical world.

Keywords: Brazilian biomes, metaphor and metonymy, Portuguese lexicon, semantic change

Procedia PDF Downloads 94
2321 Understanding the Interactive Nature in Auditory Recognition of Phonological/Grammatical/Semantic Errors at the Sentence Level: An Investigation Based upon Japanese EFL Learners’ Self-Evaluation and Actual Language Performance

Authors: Hirokatsu Kawashima

Abstract:

One important element of teaching/learning listening is intensive listening, such as listening for precise sounds, words, and grammatical and semantic units. Several classroom-based investigations have been conducted to explore the usefulness of auditory recognition of phonological, grammatical and semantic errors in such a context. The current study reports the results of one such investigation, which targeted auditory recognition of phonological, grammatical, and semantic errors at the sentence level. 56 Japanese EFL learners participated in this investigation, in which their recognition performance for phonological, grammatical and semantic errors was measured on a 9-point scale by learners' self-evaluation from the perspective of 1) two types of similar English sounds (vowel and consonant minimal-pair words), 2) two types of sentence word order (verb-phrase-based and noun-phrase-based word orders), and 3) two types of semantic consistency (verb-purpose and verb-place agreement), respectively, and their general listening proficiency was examined using standardized tests. A number of findings have been made about the interactive relationships between the three types of auditory error recognition and general listening proficiency. Analyses based on the OPLS (Orthogonal Projections to Latent Structures) regression model have disclosed, for example, that the three types of auditory error recognition are linked in a non-linear way: the highest explanatory power for general listening proficiency may be attained when quadratic interactions between auditory recognition of errors related to vowel minimal-pair words and that of errors related to noun-phrase-based word order are included (R²=.33, p=.01).

Keywords: auditory error recognition, intensive listening, interaction, investigation

Procedia PDF Downloads 488
2320 Aspects of Semantics of Standard British English and Nigerian English: A Contrastive Study

Authors: Chris Adetuyi, Adeola Adeniran

Abstract:

The concept of meaning is a complex one in language study, especially when cultural features are added. This is unavoidable because language cannot be completely separated from culture; language and culture complement each other. When there are two varieties of a language in a society, i.e. two varieties functioning side by side in a speech community, there is a tendency to compare one variety with the other. There is, therefore, a need to make a comparative linguistic study of the varieties of such languages. In this paper, a semantic contrastive study is made between Standard British English (SBE) and Nigerian English (NE). The semantic study is limited to aspects of semantics: semantic extension (kinship terms, metaphors), semantic shift (the lexical items considered are 'drop', 'befriend', 'dowry' and 'escort'), acronyms (NEPA, JAMB, NTA), linguistic borrowing or loan words (Seriki, Agbada, Eba, Dodo, Iroko), and coinages (long leg, bush meat, bottom power and juju). In the study of these aspects of the semantics of SBE and NE lexical terms, contrastive statements are made, and problem areas and a hierarchy of difficulties are highlighted with a view to bringing out the areas of difference. The study will also serve as a guide for further contrastive studies in other areas of language.

Keywords: aspect, British, English, Nigeria, semantics

Procedia PDF Downloads 320