Search results for: semantic markup
76 Migration of the Relational Data Base (RDB) to the Object Relational Data Base (ORDB)
Authors: Alae El Alami, Mohamed Bahaj
Abstract:
This paper proposes an approach for translating an existing relational database (RDB) schema into an object-relational database (ORDB). The transition relies on methods that extract various constructs from the RDB, including aggregations, associations between tables, and reflexive relationships. These methods can even extract inheritance, although no reverse-engineering process can determine on its own that a relationship is an inheritance; in this respect, our approach goes beyond previous work on the transition from RDB to ORDB. In summary, we create a New Data Model (NDM) that stores the RDB in the form of a structured table; from the NDM we derive a navigational model that simplifies the object implementation, from which we develop the different types. From these types we proceed to the last step, the creation of the tables.
These steps do not require any human intervention: the whole process is automatic, and a prototype has already been built that demonstrates the effectiveness of this approach.
Keywords: Relational databases, Object-relational databases, Semantic enrichment.
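The abstract above describes the final step of generating object-relational tables from the derived types. The following is a minimal, hypothetical sketch of that step only: it turns an intermediate description of one relational table into Oracle-style object-relational DDL. The table, column, and type names are invented for illustration and are not taken from the paper's NDM or prototype.

```python
# A hypothetical sketch of the last step described above: turning an intermediate
# description of a relational table into object-relational DDL. Names are invented.

def relational_to_ordb(table):
    """Emit Oracle-style object-relational DDL for one table description."""
    type_name = f"{table['name']}_t"
    attrs = ",\n  ".join(f"{col} {sqltype}" for col, sqltype in table["columns"])
    # Foreign keys become REF attributes (associations turn into references).
    refs = ",\n  ".join(f"{fk} REF {target}_t" for fk, target in table.get("fks", []))
    body = attrs + (",\n  " + refs if refs else "")
    return (
        f"CREATE TYPE {type_name} AS OBJECT (\n  {body}\n);\n"
        f"CREATE TABLE {table['name']} OF {type_name}\n"
        f"  (PRIMARY KEY ({table['pk']}));"
    )

# Hypothetical input: an 'employee' table referencing a 'department' table.
employee = {
    "name": "employee",
    "columns": [("emp_id", "NUMBER"), ("name", "VARCHAR2(80)")],
    "pk": "emp_id",
    "fks": [("dept", "department")],
}
print(relational_to_ordb(employee))
```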
75 A Neural Model of Object Naming
Authors: Alessio Plebe
Abstract:
One astonishing capability of humans is to recognize thousands of different objects visually and to learn the semantic association between those objects and the words referring to them. This work is an attempt to build a computational model of that capacity, simulating the process by which infants learn to recognize objects and words through exposure to visual stimuli and vocal sounds. One of the main facts shaping the brain of a newborn is that lights and colors come from entities in the world. Gradually, the visual system learns which light sensations belong to the same entities, despite large changes in appearance. This experience is shared by humans and several other mammals, such as non-human primates. Only humans, however, can recognize a huge variety of objects, many of them man-made, and use sounds to identify and categorize them. The aim of this model is to reproduce these processes in a biologically plausible way, by reconstructing the essential hierarchy of cortical circuits along the visual and auditory neural pathways.
Keywords: Auditory cortex, object recognition, self-organizing maps.
74 Resources-Based Ontology Matching to Access Learning Resources
Authors: A. Elbyed
Abstract:
Nowadays, ontologies are used to achieve a common understanding within a user community and to share domain knowledge. However, the decentralized nature of the web makes it inevitable that small communities will use their own ontologies to describe their data and to index their own resources. Accessing resources described by various independently created ontologies is therefore an important challenge for answering end-user queries. Ontology mapping is thus required for combining ontologies; however, mapping complete ontologies at run time is a computationally expensive task. This paper proposes a system in which mappings between concepts are generated dynamically as the concepts are encountered during user queries. In this way, the interaction itself defines the context in which small and relevant portions of the ontologies are mapped. We illustrate the application of the proposed system in the context of Technology Enhanced Learning (TEL), where learners need to access learning resources covering specific concepts.
Keywords: Resources query, ontologies, ontology mapping, similarity measures, semantic web, e-learning.
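To illustrate the run-time mapping idea described above, here is a minimal sketch: only the concept mentioned in a query is compared against the other ontology. The similarity measure (a normalized edit-distance ratio over labels) and the concept labels are illustrative assumptions, not the paper's actual measures.

```python
# A minimal sketch of dynamic, query-time concept matching. The similarity measure
# and concept labels below are illustrative only.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_concept(query_concept: str, target_ontology: list[str], threshold: float = 0.7):
    """Return candidate mappings (concept, score) above a similarity threshold."""
    scored = [(c, label_similarity(query_concept, c)) for c in target_ontology]
    return sorted([s for s in scored if s[1] >= threshold], key=lambda s: -s[1])

# Hypothetical learner query against another community's ontology.
print(map_concept("LearningResource", ["Learning_Resource", "Course", "Lecturer"]))
```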
73 A Text Classification Approach Based on Natural Language Processing and Machine Learning Techniques
Authors: Rim Messaoudi, Nogaye-Gueye Gning, François Azelart
Abstract:
Automatic text classification applies mostly natural language processing (NLP) and other artificial intelligence (AI)-guided techniques to classify text automatically, in a faster and more accurate manner. This paper addresses the use of predictive maintenance to manage incident tickets within the company. It focuses on proposing a tool that processes and analyses the comments and notes written by administrators after resolving an incident ticket, with the goal of increasing the quality of these comments. The tool is based on NLP and machine learning techniques to perform textual analytics on the extracted data. The approach was tested on real data from the French National Railways (SNCF) and yielded high-quality results.
Keywords: Machine learning, text classification, NLP techniques, semantic representation.
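The paper's tool and the SNCF data are not available here, so the following is only a generic sketch of the kind of NLP plus machine-learning pipeline the abstract describes, using scikit-learn. The example comments and quality labels are invented.

```python
# A generic sketch of a text classification pipeline in the spirit of the abstract
# above; the resolution notes and their labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Rebooted the ticketing server, incident resolved.",
    "ok",
    "Replaced faulty switch in cabinet B3, monitored for 2 hours.",
    "done",
]
labels = ["good", "poor", "good", "poor"]  # quality of the resolution note

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)
print(model.predict(["restarted the service, all checks green"]))
```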
72 FCA-based Conceptual Knowledge Discovery in Folksonomy
Authors: Yu-Kyung Kang, Suk-Hyung Hwang, Kyoung-Mo Yang
Abstract:
Tagging data (users, tags, and resources) constitute a folksonomy, the user-driven, bottom-up approach to organizing and classifying information on the Web. The tagging data stored in a folksonomy contain a great deal of useful information and knowledge. However, an appropriate approach for analyzing tagging data and discovering hidden knowledge from them remains one of the main problems in folksonomy mining research. In this paper, we propose a folksonomy data mining approach based on formal concept analysis (FCA) for easily discovering hidden knowledge from a folksonomy, and we demonstrate through an experiment how the proposed approach can be applied in a collaborative tagging system. The approach can also be applied to related areas such as social network analysis and semantic web mining.
Keywords: Folksonomy data mining, formal concept analysis, collaborative tagging, conceptual knowledge discovery, classification.
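As a small illustration of FCA applied to tagging data, the sketch below enumerates formal concepts (maximal pairs of resources and shared tags) by brute force from a tiny incidence relation. The user dimension of the folksonomy is folded away for brevity, and the resources and tags are invented; the paper's actual mining procedure may differ.

```python
# A tiny sketch of FCA on tagging data: formal concepts are (objects, attributes)
# pairs that are closed under the incidence relation. Data are invented.
from itertools import chain, combinations

incidence = {                      # resource -> set of tags
    "photo1": {"beach", "summer"},
    "photo2": {"beach", "sunset"},
    "doc1":   {"fca", "mining"},
}

def intent(objects):               # tags common to a set of resources
    sets = [incidence[o] for o in objects]
    return set.intersection(*sets) if sets else set(chain(*incidence.values()))

def extent(tags):                  # resources carrying all the given tags
    return {o for o, t in incidence.items() if tags <= t}

concepts = set()
for r in range(len(incidence) + 1):
    for objs in combinations(incidence, r):
        a = intent(set(objs))
        concepts.add((frozenset(extent(a)), frozenset(a)))

for ext, inten in sorted(concepts, key=lambda c: len(c[0])):
    print(set(ext) or "{}", "<->", set(inten) or "{}")
```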
71 On the Interactive Search with Web Documents
Authors: Mario Kubek, Herwig Unger
Abstract:
Due to the large amount of information in the World Wide Web (WWW, web) and the lengthy, usually linearly ordered result lists of web search engines, which do not indicate semantic relationships between their entries, the search for topically similar and related documents can become a tedious task. In particular, the process of formulating queries with proper terms representing specific information needs requires much effort from the user. The problem becomes even harder when the user's knowledge of a subject and its technical terms is insufficient to do so. This article presents DocAnalyser, a new interactive search application that addresses this problem by enabling users to find similar and related web documents based on automatic query formulation and state-of-the-art search word extraction. Additionally, the tool can be used to track topics across semantically connected web documents.
Keywords: DocAnalyser, interactive web search, search word extraction, query formulation, source topic detection, topic tracking.
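DocAnalyser's actual extraction method is not reproduced here; the following is only a simple frequency-based stand-in for automatic query formulation, picking characteristic terms from the current document and joining them into a search query. The stopword list and document are invented.

```python
# A simple stand-in for automatic query formulation: pick the most frequent
# non-stopword terms of a document and join them into a query.
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "on", "that", "by"}

def formulate_query(document: str, k: int = 4) -> str:
    words = re.findall(r"[a-z]+", document.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return " ".join(term for term, _ in counts.most_common(k))

doc = "Semantic search engines rank documents by semantic relatedness of terms."
print(formulate_query(doc))   # top terms of the document, e.g. starting with "semantic"
```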
70 VISUAL JESS: An Expandable Visual Generator of Object-Oriented Expert Systems
Authors: Amel Grissa-Touzi, Habib Ounally, Aissa Boulila
Abstract:
The utility of expert system generators has been widely recognized in many applications, and several generators based on the object paradigm have recently been proposed. Generators of object-oriented expert systems (GSEOO) often offer languages that are complex and difficult to use. In this paper we propose an extension of the expert system generator JESS that permits a friendlier use of this expert system. The new tool, called VISUAL JESS, brings two main improvements to JESS. The first concerns ease of use, while keeping the syntactic and semantic aspects of the JESS programming language transparent. The second permits easy access to, and modification of, the JESS knowledge base. VISUAL JESS is implemented so as to be extensible and portable.
Keywords: Expert system generator, object-oriented programming, classes, objects, inheritance, polymorphism.
69 On Formalizing Predefined OCL Properties
Authors: Meryem Lamrani, Younès El Amrani, Aziz Ettouhami
Abstract:
The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de facto language for design. Although its rich graphical notation, naturally oriented towards object-oriented concepts, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, which creates many obstacles to building tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing the OCL predefined functions.
Keywords: Formal methods, Z, OCL, predefined properties, metamodel types.
68 Modeling Peer-to-Peer Networks with Interest-Based Clusters
Authors: Bertalan Forstner, Dr. Hassan Charaf
Abstract:
In the world of Peer-to-Peer (P2P) networking, different protocols have been developed to make resource sharing and information retrieval more efficient. The SemPeer protocol is a new layer on top of Gnutella that transforms the connections of the nodes based on semantic information in order to make information retrieval more efficient. However, this transformation causes high clustering in the network, which decreases the number of nodes reached and therefore also the probability of finding a document. In this paper we describe a mathematical model for the Gnutella and SemPeer protocols that captures these clustering-related issues, followed by a proposal to modify the SemPeer protocol to achieve moderate clustering. The modification is a form of link management for the individual nodes that makes the SemPeer protocol more efficient, because the probability of a successful query in the P2P network is considerably increased. To validate the models, we ran a series of simulations that supported our results.
Keywords: Peer-to-Peer, model, performance, network management.
67 Specialized Translation Teaching Strategies: A Corpus-Based Approach
Authors: Yingying Ding
Abstract:
This study presents a methodology for specialized translation with the objective of helping teachers improve their strategies in teaching translation. In order to acquire the skills needed to translate specialized texts, students must become familiar with the semantic and syntactic features of source texts and target texts. The aim of our study is to use a corpus-based approach in the teaching of specialized translation between Chinese and Italian. We propose to construct a specialized Chinese-Italian comparable corpus consisting of 50 economic contracts from the food domain and, with the help of AntConc, to compile this comparable corpus for translation teaching purposes. The paper attempts to provide insight into how teachers could benefit from a comparable corpus in teaching specialized translation from Italian into Chinese and, through examples of passive sentences, how students could learn to apply different strategies for translating voice appropriately.
Keywords: Corpus-based approach, translation teaching, specialized translation.
66 Injunctions, Disjunctions, Remnants: The Reverse of Unity
Authors: Igor Guatelli
Abstract:
The universe of aesthetic perception entails impasses about the sensitive divergences to which each text or visual object may be subjected. If approached through an intertextuality that is not based on the misleading notion of kinships or similarities admissible a priori, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In the negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has already been given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming, as an opening to an unforeseen other, a transformation lasting until the moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains and understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which apparently presents itself with irreducible clarity. Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate it, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitial offers a hyphenated relationship: a simultaneous union and separation, a spacing between the entity's identity and its otherness, or alterity. One and the other may no longer be seen without the crack or fissure that now separates them, uniting them across a space-time lapse. Ontological and semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of the architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notions of unity, coherence, affinity, and complementarity in the process of constructing thought from those ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought considered coherent, cohesive, and formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, and that allow us to begin another story.
Keywords: Clearing, interstice, negative, remnant, spectrum.
65 Adaptation of State/Transition-Based Methods for Embedded System Testing
Authors: Abdelaziz Guerrouat, Harald Richter
Abstract:
In this paper, test generation methods and appropriate fault models for testing and analysing embedded systems described as (extended) finite state machines ((E)FSMs) are presented. Compared to simple FSMs, EFSMs specify not only the control flow but also the data flow; we therefore define a two-level fault model to cover both aspects. The goal of this paper is to reuse well-known FSM-based test generation methods to automate embedded system testing. These methods have been widely used in the testing and validation of protocols and communicating systems. (E)FSM-based specification and testing is particularly advantageous because (E)FSMs support the formal semantics of already standardised formal description techniques (FDTs), which are popular in the design of hardware and software systems.
Keywords: Formal methods, testing and validation, finite state machines, formal description techniques.
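To make the reuse of FSM-based test generation concrete, the sketch below derives one input sequence per transition (shortest path to the transition's source state plus the transition's input), giving transition coverage. The machine is invented, and the paper's two-level (control plus data flow) fault model is not modelled here.

```python
# A minimal sketch of FSM-based test generation by transition coverage.
from collections import deque

fsm = {  # state -> {input: next_state}; an invented example machine
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}

def shortest_inputs(start, goal):
    """BFS over the FSM graph, returning an input sequence from start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for inp, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None

tests = [shortest_inputs("idle", src) + [inp]
         for src, trans in fsm.items() for inp in trans]
for t in tests:
    print(t)   # each test exercises one transition of the machine
```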
64 Computational Networks for Knowledge Representation
Authors: Nhon Van Do
Abstract:
In the field of artificial intelligence, knowledge representation and reasoning are important areas for intelligent systems, especially knowledge-based systems and expert systems, and knowledge representation methods play an important role in designing them. There have been many models for knowledge, such as semantic networks, conceptual graphs, and neural networks. These models are useful tools for designing intelligent systems, but they are not well suited to representing knowledge in the domains of real-world applications. In this paper, new models for knowledge representation, called computational networks, are presented. They have been used in designing knowledge-based systems in education for problem solving, such as a system that supports studying knowledge and solving analytic geometry problems, a program for studying and solving problems in plane geometry, and a program for solving problems about alternating current in physics.
Keywords: Artificial intelligence, artificial intelligence and education, knowledge engineering, knowledge representation.
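The abstract does not spell out the model's internals, so the following is only a minimal sketch of the general computational-network idea: variables plus computational relations, with a solver that fires any relation whose inputs are known until the goal variable is determined. The geometry relations are illustrative assumptions.

```python
# A minimal sketch of a computational network: known values are propagated through
# relations until the goal is derived. The right-triangle relations are invented.
import math

relations = [   # (inputs, output, function)
    (("a", "b"), "c", lambda a, b: math.hypot(a, b)),        # hypotenuse
    (("a", "b"), "area", lambda a, b: a * b / 2),            # right-triangle area
    (("c", "a"), "b", lambda c, a: math.sqrt(c * c - a * a)),
]

def solve(known: dict, goal: str):
    known = dict(known)
    changed = True
    while goal not in known and changed:
        changed = False
        for inputs, output, fn in relations:
            if output not in known and all(i in known for i in inputs):
                known[output] = fn(*(known[i] for i in inputs))
                changed = True
    return known.get(goal)

print(solve({"a": 3.0, "c": 5.0}, "area"))   # derives b = 4.0, then area = 6.0
```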
63 Performance Evaluation of an Ontology-Based Arabic Sentiment Analysis
Authors: Salima Behdenna, Fatiha Barigou, Ghalem Belalem
Abstract:
Due to the rapid increase in the volume of Arabic opinions posted on various social media, Arabic sentiment analysis has become one of the most important areas of research. Compared to English, there is very little work on Arabic sentiment analysis, in particular aspect-based sentiment analysis (ABSA). In ABSA, aspect extraction is the most important task. In this paper, we propose a semantic ABSA approach for standard Arabic reviews that extracts explicit aspect terms and identifies the polarity of the extracted aspects. The proposed approach was evaluated using the HAAD datasets. Experiments showed that the proposed approach achieved a good level of performance compared with baseline results: the F-measure was improved by 19% for the aspect term extraction task and by 55% for the aspect term polarity task.
Keywords: Sentiment analysis, opinion mining, Arabic, aspect level, opinion, polarity.
62 An Experiment on Personal Archiving and Retrieving Image System (PARIS)
Authors: Pei-Jeng Kuo, Terumasa Aoki, Hiroshi Yasuda
Abstract:
PARIS (Personal Archiving and Retrieving Image System) is an experimental personal photograph library that includes more than 80,000 consumer photographs accumulated over approximately five years, metadata based on our proposed MPEG-7 annotation architecture, Dozen Dimensional Digital Content (DDDC), and a relational database structure. The DDDC architecture is specially designed to facilitate the managing, browsing, and retrieving of personal digital photograph collections. In the annotation process, we also utilize a proposed Spatial and Temporal Ontology (STO) designed around the general characteristics of personal photograph collections. This paper describes the PARIS system.
Keywords: Ontology, databases and information retrieval, MPEG-7, spatial-temporal, digital library design, metadata, Semantic Web, semi-automatic annotation.
61 A Distributed Approach to Extract High Utility Itemsets from XML Data
Authors: S. Kannimuthu, K. Premalatha
Abstract:
This paper investigates a new data mining capability: mining High Utility Itemsets (HUIs) in a distributed environment. Existing research in data mining deals only with the presence or absence of items and does not consider semantic measures such as the weight or cost of the items; HUI mining algorithms have evolved to address this. HUI mining is a kind of utility mining that aims to identify itemsets whose utility satisfies a given threshold. However, mining HUIs in a distributed environment, and mining them from XML data, have not been explored yet. In this work, a novel approach is proposed to mine HUIs from XML-based data in a distributed environment. The work utilizes the Service Oriented Computing (SOC) paradigm, which provides Knowledge as a Service (KaaS): the interesting patterns are provided via web services, with the help of a knowledge server, to answer the queries of consumers. The performance of the approach is evaluated on various databases in terms of execution time and memory consumption.
Keywords: Data mining, Knowledge as a Service, service oriented computing, utility mining.
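The sketch below illustrates only the HUI definition used above: an itemset is high-utility if its total utility (quantity times unit profit, summed over the transactions that contain it) meets a threshold. It is a brute-force, single-node illustration with invented transactions; the distributed, XML, and Knowledge-as-a-Service machinery is not shown.

```python
# A brute-force sketch of the high-utility itemset definition; data are invented.
from itertools import combinations

profit = {"bread": 1, "milk": 2, "cheese": 5}      # unit profit per item
transactions = [                                   # item -> purchased quantity
    {"bread": 2, "milk": 1},
    {"milk": 3, "cheese": 1},
    {"bread": 1, "milk": 2, "cheese": 2},
]

def utility(itemset):
    return sum(sum(t[i] * profit[i] for i in itemset)
               for t in transactions if set(itemset) <= t.keys())

def high_utility_itemsets(min_utility):
    items = sorted(profit)
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            u = utility(itemset)
            if u >= min_utility:
                yield itemset, u

for s, u in high_utility_itemsets(min_utility=12):
    print(s, u)
```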
60 Visualisation and Navigation in Large Scale P2P Service Networks
Authors: H. Unger, H. Coltzau
Abstract:
In Peer-to-Peer service networks, where peers offer any kind of publicly available services or applications, intuitive navigation through all services in the network becomes more difficult as the number of services increases. In this article, a concept is discussed that enables users to intuitively browse and use large scale P2P service networks. The concept extends the idea of creating virtual 3D-environments solely based on Peer-to-Peer technologies. Aside from browsing, users shall have the possibility to emphasize services of interest using their own semantic criteria. The appearance of the virtual world shall intuitively reflect network properties that may be of interest for the user. Additionally, the concept comprises options for load- and traffic-balancing. In this article, the requirements concerning the underlying infrastructure and the graphical user interface are defined. First impressions of the appearance of future systems are presented and the next steps towards a prototypical implementation are discussed.
Keywords: Internet Operating System, Peer-To-Peer, Service Exploration
59 An Ontology for Investment in Chinese Steel Company
Authors: Liming Chen, Baoxin Xiu, Zhaoyun Ding, Bin Liu, Xianqiang Zhu
Abstract:
In the era of big data, public investors face more complicated information related to investment decisions than ever before. To survive the fierce competition, it has become increasingly urgent for investors to combine multi-source knowledge and evaluate a company's true value efficiently. To this end, a rule-based ontology reasoning method is proposed to support the value assessment of steel companies. Considering the delay in financial disclosure, and based on cost-benefit analysis, this paper introduces financial analysis of supply chain enterprises and constructs an ontology model used to assess the value of a steel company. In addition, domain knowledge is formally expressed using the Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL) rules. Finally, a case study of a steel company in China demonstrated the effectiveness of the proposed method.
Keywords: Financial ontology, steel company, supply chain, ontology reasoning.
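Since the paper's actual ontology and rules are not reproduced in the abstract, here is only a toy rendering of the rule-based reasoning step: facts about a company plus an SWRL-style rule that classifies it. The property names and thresholds are invented, and a real model would be expressed in OWL/SWRL rather than Python.

```python
# A toy rendering of a rule-based valuation step; names and thresholds are invented.
facts = {
    "company": "SteelCo",          # hypothetical company
    "debt_to_asset": 0.45,
    "gross_margin": 0.18,
    "supply_chain_risk": "low",
}

# Rule: lowDebt(c) and healthyMargin(c) and lowSupplyChainRisk(c) -> InvestmentGrade(c)
def investment_grade(f):
    return (f["debt_to_asset"] < 0.6
            and f["gross_margin"] > 0.15
            and f["supply_chain_risk"] == "low")

print(facts["company"], "investment grade:", investment_grade(facts))
```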
58 Hybrid Machine Learning Approach for Text Categorization
Authors: Nerijus Remeikis, Ignas Skucas, Vida Melninkaite
Abstract:
Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component of many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of multilayer neural network initialization with a decision tree classifier to improve text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node to a final leaf, is used to initialize a multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with that of a multilayer neural network initialized with the traditional random method and with decision tree classifiers.
Keywords: Text categorization, decision trees, neural networks, machine learning.
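One known way to realize this kind of initialization (a sketch only, not necessarily the paper's exact mapping) is to let each internal split of a trained decision tree become one hidden unit of the network, with weight 1 on the split feature and bias equal to minus the split threshold. Only the first layer is initialized below, on a stand-in dataset; deeper layers would be trained as usual.

```python
# A sketch of decision-tree-based initialization of a network's first layer.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)          # stand-in data, not Reuters-21578
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

t = tree.tree_
internal = [i for i in range(t.node_count) if t.children_left[i] != -1]

n_features, n_hidden = X.shape[1], len(internal)
W1 = np.zeros((n_hidden, n_features))
b1 = np.zeros(n_hidden)
for row, node in enumerate(internal):
    W1[row, t.feature[node]] = 1.0          # weight on the feature the node splits on
    b1[row] = -t.threshold[node]            # unit activates around the split threshold

print("hidden units initialized from", n_hidden, "tree splits")
```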
57 The Spiral_OWL Model – Towards Spiral Knowledge Engineering
Authors: Hafizullah A. Hashim, Aniza. A
Abstract:
The spiral development model has been used successfully in many commercial systems and in a good number of defense systems, owing to its cost-effective, incremental commitment of funds (via an analogy of the spiral model to stud poker) and to the fact that it can be used to develop hardware or to integrate software, hardware, and systems. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new knowledge engineering process, called Spiral_OWL, is proposed. The model is based on the idea of iterative refinement, annotation, and structuring of the knowledge base, and is derived from the spiral model and knowledge engineering methodology. A central paradigm of the Spiral_OWL model is its concentration on risk-driven determination of the knowledge engineering process. The collaboration aspect comes into play during the knowledge acquisition and knowledge validation phases. The design rationale for the Spiral_OWL model is an easy-to-implement, well-organized, iterative development cycle shaped as an expanding spiral.
Keywords: Domain expert, knowledge base, ontology, software process.
56 Modeling "Web of Trust" with Web 2.0
Authors: Omer Mahmood, Selvakennedy Selvadurai
Abstract:
"Web of Trust" is one of the recognized goals of Web 2.0. It aims to make it possible for people, including organizations, businesses, and individual users, to take responsibility for what they publish on the web. These objectives, among others, drive most of the technologies and protocols recently standardized by the governing bodies. One of the great advantages of the Web infrastructure is the decentralization of publication. The primary motivation behind Web 2.0 is to help people add content for Collective Intelligence (CI) while providing mechanisms to link content with people for the evaluation and accountability of information. Such a content structure interconnects users and content so that users can use content to find participants, and vice versa. This paper proposes a conceptual information storage and linking model, based on a decentralized information structure, that links content and people together. The model uses FOAF, Atom, RDF, and RDFS and can be used as a blueprint for developing Web 2.0 applications in any e-domain; however, the primary target of this paper is the online trust evaluation domain. The proposed model aims to assist individuals in establishing a "Web of Trust" in the online trust domain.
Keywords: Web of Trust, Semantic Web, electronic social networks, information management.
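As a small illustration of the linking idea, the rdflib sketch below describes a piece of content in RDF and ties it to its publisher through FOAF, so that trust statements can later attach to both. The URIs and names are invented, and the paper's full FOAF/Atom/RDFS model is richer than this.

```python
# A small sketch of linking content to a person with RDF and FOAF; URIs invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF, DCTERMS

g = Graph()
EX = Namespace("http://example.org/")          # hypothetical namespace

alice = EX["people/alice"]
post = EX["posts/42"]

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((post, DCTERMS.creator, alice))          # content linked to the person
g.add((post, DCTERMS.title, Literal("Why decentralized publication matters")))

print(g.serialize(format="turtle"))
```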
55 A Context-Sensitive Algorithm for Media Similarity Search
Authors: Guang-Ho Cha
Abstract:
This paper presents a context-sensitive media similarity search algorithm. One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them: the notion of similarity is usually based on high-level abstraction, but the low-level features do not always reflect human perception. Many media search algorithms have used the Minkowski metric to measure similarity between image pairs. However, such functions cannot adequately capture the characteristics of the human visual system, nor the nonlinear relationships in the contextual information given by the images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and the contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results.
Keywords: Context-sensitive search, image search, media search, similarity ranking, similarity search.
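The paper's specific measure is not given in the abstract, so the sketch below shows one common way to make ranking context-sensitive: rescale each raw Minkowski distance by the average distance within the two items' neighborhoods, so the ranking depends on local context in the collection rather than on raw distance alone. The feature vectors are randomly generated placeholders.

```python
# A sketch of context-sensitive reranking via neighborhood-normalized distances.
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((50, 8))                 # placeholder low-level image features

D = np.array([[np.linalg.norm(x - y) for y in features] for x in features])
k = 5
# Mean distance to each item's k nearest neighbours acts as its local context.
neigh_mean = np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)

def contextual_rank(query_idx, top=5):
    scores = D[query_idx] / np.sqrt(neigh_mean[query_idx] * neigh_mean)
    order = np.argsort(scores)
    return [i for i in order if i != query_idx][:top]

print(contextual_rank(0))                      # indices of the top contextual matches
```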
54 Minimizing Mutant Sets by Equivalence and Subsumption
Authors: Samia Alblwi, Amani Ayad
Abstract:
Mutation testing is the art of generating syntactic variations of a base program and checking whether a candidate test suite can identify all the mutants that are not semantically equivalent to the base; the technique can be used to assess the quality of a test suite. One of the main obstacles to the widespread use of mutation testing is cost, as even small programs (a few dozen lines of code) can give rise to a large number of mutants (up to hundreds); this has created an incentive to reduce the number of mutants while preserving their collective effectiveness. Two criteria have been used to reduce the size of mutant sets: equivalence, which partitions the set of mutants into equivalence classes modulo semantic equivalence and selects one representative per class; and subsumption, which defines a partial ordering among mutants that ranks them by effectiveness and seeks to select the maximal elements in this ordering. In this paper, we analyze these two policies using analytical and empirical criteria.
Keywords: Mutation testing, mutant sets, mutant equivalence, mutant subsumption, mutant set minimization.
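The sketch below illustrates the two reduction policies over a kill matrix (mutant to the set of tests that kill it, against a fixed test pool): mutants with identical kill sets are treated as equivalent and collapsed to one representative, and a mutant is dropped as subsumed when another mutant's killers form a strict subset of its own. The kill data are invented, and this is an illustration of the definitions rather than the paper's analysis.

```python
# A sketch of mutant set minimization by equivalence and subsumption; data invented.
kills = {
    "m1": {"t1", "t2"},
    "m2": {"t1", "t2"},        # same kill set as m1 -> same equivalence class
    "m3": {"t1"},              # killed only by t1 -> harder to kill than m1/m2
    "m4": {"t3"},
    "m5": set(),               # never killed by this pool
}

# 1. Equivalence: group by kill set, keep one representative per (killable) class.
classes = {}
for m, ks in kills.items():
    classes.setdefault(frozenset(ks), m)
representatives = {m for ks, m in classes.items() if ks}

# 2. Subsumption: drop a mutant when another representative's killers are a strict
#    subset of its own (killing the other one guarantees killing it too).
def subsumed(m):
    return any(kills[o] and kills[o] < kills[m] for o in representatives if o != m)

minimal = {m for m in representatives if not subsumed(m)}
print(sorted(minimal))         # the reduced mutant set, e.g. ['m3', 'm4']
```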
53 Use of Bayesian Network in Information Extraction from Unstructured Data Sources
Authors: Quratulain N. Rajput, Sajjad Haider
Abstract:
This paper applies Bayesian networks to support information extraction from unstructured, ungrammatical, and incoherent data sources for semantic annotation. A tool has been developed that combines ontologies, machine learning, information extraction, and probabilistic reasoning techniques to support the extraction process. Data acquisition is performed with the aid of knowledge specified in the form of an ontology. Due to the variable amount of information available on different data sources, the extracted data often contain missing values for certain variables of interest, and it is desirable in such situations to predict the missing values. The methodology presented in this paper first learns a Bayesian network from the training data and then uses it to predict missing data and to resolve conflicts. Experiments have been conducted to analyze the performance of the methodology. The results look promising, as the methodology achieves a high degree of precision and recall for information extraction and reasonably good accuracy in predicting missing values.
Keywords: Information extraction, Bayesian network, ontology, machine learning.
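The following is a tiny, hand-rolled sketch of the missing-value step only: estimate a conditional distribution from training records and predict the most probable value for a missing field. A real system would learn a full Bayesian network over the ontology's attributes; the fields and records here are invented.

```python
# A tiny sketch of predicting a missing value from a learned conditional
# distribution P(fuel | category); records are invented.
from collections import Counter, defaultdict

training = [
    {"category": "car",  "fuel": "petrol"},
    {"category": "car",  "fuel": "diesel"},
    {"category": "car",  "fuel": "petrol"},
    {"category": "boat", "fuel": "diesel"},
]

cond = defaultdict(Counter)
for rec in training:
    cond[rec["category"]][rec["fuel"]] += 1

def predict_missing(observed_category):
    counts = cond[observed_category]
    total = sum(counts.values())
    return {value: c / total for value, c in counts.items()}

extracted = {"category": "car", "fuel": None}       # value missing after extraction
print(predict_missing(extracted["category"]))       # 'petrol' is the most probable value
```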
52 A Materialized Approach to the Integration of XML Documents: the OSIX System
Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet
Abstract:
The data exchanged on the Web are of a different nature from those handled by classical database management systems; they are called semi-structured data because they do not have the regular, static structure of data found in a relational database, and their schema is dynamic and may contain missing data or types. This raises the need for techniques and algorithms to exploit and integrate such data and to extract information relevant to the user. In this paper we present OSIX (Osiris-based System for Integration of XML Sources), a system with a data warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that query processing on an XML source is optimized by the indexing approach proposed by Osiris.
Keywords: Data integration, semi-structured data, views, XML.
51 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes
Authors: O. F. Elkommos
Abstract:
"Humour studies" is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, the workplace, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, and some of them are of interest to the present study. Although humour courses are now taught in universities around the world, they are not included in the Egyptian context. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. In linguistics, humour was seen as a fifth component of the communicative competence of the second language learner; in literature it was studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Verbal Humour (SSTH), the General Theory of Verbal Humour (GTVH), the Audience-Based Theory of Humour (ABTH), their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts, and the perception of its function in the work, also brings its positive associations into class for educational purposes. Humour is by default a provoking, laughter-generating device. Recognizing incongruity, perceiving it, and resolving it is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is therefore recommended that the theories be taught and applied to literary texts for a better understanding of the language, so that students develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them and interpreting texts, and in the process will also use humour, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.
Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resources, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL.
50 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition
Authors: Yalong Jiang, Zheru Chi
Abstract:
In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model so that it best matches a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to better fit it to a task. Secondly, the number of independent functional units in a capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GANs is proposed to enrich the variance in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor in determining the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.
Keywords: CNN, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation.
49 Talent in Autism: Cognitive Style based on Weak Central Coherence and Special Sensory Characteristics in State of Kuwait: Case Study
Authors: Mariam Abdulaziz Y.Esmaeel
Abstract:
The study aimed to identify the nature of autistic talent, the manifestations of weak central coherence, and the sensory characteristics of talented autistic individuals. The case study consisted of four talented autistic males: two talented in drawing, one in clay formation, and one in jigsaw puzzles. The data collection tools were the Group Embedded Figures Test, the Block Design Test, the Sensory Profile Checklist Revised, interview forms, and direct observation. Results indicated that talent among autistics emerges in a limited domain and is extraordinary in each case, with overlapping constructional properties. Indeed, the cases show three perceptual aspects of weak central coherence: weakness in visual-spatial constructional coherence, weakness in perceptual coherence, and weakness in verbal-semantic coherence. Moreover, the majority of the study cases used the three strategies associated with weak central coherence (segmentation, obliqueness, and rotation). As for the sensory characteristics, all study cases have a number of such characteristics, emerging especially in the visual system.
Keywords: Autism, central coherence, savant, sensory characteristics, talent.
48 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents
Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei
Abstract:
With recent advances in deep neural networks, we observe new applications of natural language processing (NLP) and computer vision (CV) powered by deep neural networks for processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by the various tasks and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build systems for diverse business scenarios, such as contract monitoring and review.
Keywords: Document processing, framework, formal definition, machine learning.
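In the spirit of the unified representation described above, here is a minimal dataclass sketch in which each document element carries both layout information (page, bounding box) and semantics (element type, text, labels), so NLP and CV stages can write into the same structure. The field names are illustrative assumptions, not the framework's actual schema.

```python
# A minimal sketch of a unified layout-plus-semantics representation; field names
# are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Element:
    kind: str                                  # "paragraph", "table", "signature", ...
    page: int
    bbox: Tuple[float, float, float, float]    # x0, y0, x1, y1 from the CV stage
    text: str = ""                             # filled by the OCR / NLP stages
    labels: List[str] = field(default_factory=list)   # e.g. ["clause:termination"]

@dataclass
class Document:
    source: str
    elements: List[Element] = field(default_factory=list)

doc = Document(source="contract_0001.pdf")
doc.elements.append(Element(kind="paragraph", page=3, bbox=(72, 540, 520, 610),
                            text="Either party may terminate...",
                            labels=["clause:termination"]))
print(len(doc.elements), doc.elements[0].kind)
```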
47 Security Architecture for At-Home Medical Care Using Sensor Network
Authors: S.S.Mohanavalli, Sheila Anand
Abstract:
This paper proposes a novel architecture for at-home medical care that enables senior citizens, patients with chronic ailments, and patients requiring post-operative care to be remotely monitored in the comfort of their homes. The architecture is implemented using sensors and wireless networking to transmit patient data to hospitals and health-care centers for monitoring by medical professionals. Patients are equipped with sensors to measure physiological parameters such as blood pressure and pulse rate, and a Wearable Data Acquisition Unit is used to transmit the sensor data. Medical professionals can be alerted to any abnormal variations in these values for diagnosis and suitable treatment. Security threats and challenges inherent to wireless communication and sensor networks are discussed, and a security mechanism to ensure data confidentiality and source authentication is proposed. The symmetric key algorithm AES is used for encrypting the data, and a patent-free, two-pass block cipher mode, CCFB, is used to provide semantic security.
Keywords: Data confidentiality, integrity, remote monitoring, source authentication.
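CCFB mode is not exposed by common Python crypto libraries, so the sketch below uses AES-GCM from the `cryptography` package as a stand-in: like the scheme described, it provides confidentiality plus authentication of each sensor reading. Key distribution, nonce handling, and the payload format are simplified and invented for illustration; this is not the paper's protocol.

```python
# A stand-in sketch: encrypt and authenticate a sensor reading with AES-GCM.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)      # assumed shared between unit and server
aead = AESGCM(key)

reading = {"patient_id": "P-017", "bp": "128/82", "pulse": 74}   # invented payload
nonce = os.urandom(12)                         # must never repeat for the same key
ciphertext = aead.encrypt(nonce, json.dumps(reading).encode(), b"ward-7")

# The hospital side decrypts and verifies authenticity in one step.
plain = aead.decrypt(nonce, ciphertext, b"ward-7")
print(json.loads(plain))
```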