Search results for: Semantic Web Languages
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 455

95 Distillation Monitoring and Control using LabVIEW and SIMULINK Tools

Authors: J. Fernandez de Canete, P. Del Saz Orozco, S. Gonzalez-Perez

Abstract:

LabVIEW and SIMULINK are two of the most widely used graphical programming environments for designing digital signal processing and control systems. Unlike conventional text-based programming languages such as C, Cµ and MATLAB, graphical programming involves block-based code development, providing a more efficient mechanism for building and analyzing control systems. In this paper, a LabVIEW environment has been employed as a graphical user interface for monitoring the operation of a controlled distillation column, visualizing both the closed-loop performance and the user-selected control conditions, while the column dynamics have been modeled in the SIMULINK environment. This tool has been applied to the PID-based decoupled control of a binary distillation column. By means of such integrated environments, the control designer is able to monitor and control the plant behavior and optimize the response when both the quality improvement of the distillation products and the efficiency of operation are considered.
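
As a rough illustration of the kind of closed-loop behavior such a monitoring interface exposes, the Python sketch below simulates a single discrete PID loop acting on a toy first-order column response; the gains and plant constants are hypothetical placeholders, not the authors' LabVIEW/SIMULINK model.

```python
# Minimal sketch (not the authors' model): one discrete PID loop regulating a
# distillate composition; plant gain and time constant are invented placeholders.
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=1.0):
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    state.update(integral=integral, prev_error=error)
    return kp * error + ki * integral + kd * derivative

setpoint, y = 0.95, 0.80                     # desired / current top composition
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(50):                          # simple closed-loop simulation
    u = pid_step(setpoint - y, state)
    y += (0.05 * u - (y - 0.80)) * 0.1       # toy first-order column response
print(f"final composition: {y:.3f}")
```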

Keywords: Distillation control, software tools, SIMULINK-LabVIEW interface.

94 Sentiment Analysis: Comparative Analysis of Multilingual Sentiment and Opinion Classification Techniques

Authors: Sannikumar Patel, Brian Nolan, Markus Hofmann, Philip Owende, Kunjan Patel

Abstract:

Sentiment analysis and opinion mining have become emerging research topics in recent years, but most of the work is focused on data in the English language. Comprehensive research and analysis that consider multiple languages, machine translation techniques, and different classifiers are essential. This paper presents a comparative analysis of different approaches for multilingual sentiment analysis. These approaches are divided into two parts: the first uses classification of text without language translation, and the second translates the test data into a target language, such as English, before classification. The presented research and results are useful for understanding whether machine translation should be used for multilingual sentiment analysis or whether building language-specific sentiment classification systems is the better approach. The effects of language translation techniques, features, and the accuracy of various classifiers for multilingual sentiment analysis are also discussed in this study.
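
A minimal sketch of the second strategy (translate, then classify) is shown below; `translate_to_english` is a hypothetical stand-in for any machine translation service, and the training sentences are invented examples.

```python
# Translate-then-classify sketch: an English-trained classifier applied to a
# translated query. The translation step is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great product, loved it", "terrible service, very disappointed"]
train_labels = ["positive", "negative"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def translate_to_english(text):          # placeholder: plug in an MT system here
    return text

query = translate_to_english("producto excelente, me encantó")
print(clf.predict([query]))              # classify the translated query
```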

Keywords: Cross-language analysis, machine learning, machine translation, sentiment analysis.

93 An Automatic Tool for Checking Consistency between Data Flow Diagrams (DFDs)

Authors: Rosziati Ibrahim, Siow Yen Yen

Abstract:

The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important for a smooth system development process. However, manually checking the consistency between the context diagram and the lower-level data flow diagrams using a checklist is a time-consuming process. At the same time, the limits of human ability to spot errors are one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to check the correctness of the diagrams drawn. The consistency check from the context diagram to the lower-level data flow diagrams is embedded inside the tool to overcome the manual checking problem.
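
A toy illustration of the balancing rule behind such a check, assuming a simple set-based representation of data flows (this is not the authors' tool, and the flow names are invented):

```python
# Balancing check: external inputs/outputs of the context-diagram process must
# equal the flows crossing the boundary of its level-1 decomposition.
context = {"inputs": {"order"}, "outputs": {"invoice", "receipt"}}
level1 = [
    {"name": "Validate Order", "inputs": {"order"}, "outputs": {"valid order"}},
    {"name": "Bill Customer", "inputs": {"valid order"},
     "outputs": {"invoice", "receipt"}},
]

def check_balance(context, decomposition):
    all_in = {f for p in decomposition for f in p["inputs"]}
    all_out = {f for p in decomposition for f in p["outputs"]}
    internal = all_in & all_out                       # flows between sub-processes
    return (all_in - internal == context["inputs"] and
            all_out - internal == context["outputs"])

print("Diagrams are balanced:", check_balance(context, level1))
```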

Keywords: Data Flow Diagram, Context Diagram, Consistency Check, Syntax and Semantic Rules

92 A Collaborative Platform for Multilingual Ontology Development

Authors: Ahmed Tawfik, Fausto Giunchiglia, Vincenzo Maltese

Abstract:

Ontologies provide a common understanding of a specific domain of interest that can be communicated between people and used as background knowledge for automated reasoning in a wide range of applications. In this paper, we address the design of multilingual ontologies following well-defined knowledge engineering methodologies with the support of novel collaborative development approaches. In particular, we present a collaborative platform which allows ontologies to be developed incrementally in multiple languages. This is made possible via an appropriate mapping between language-independent concepts and one lexicalization per language (or a lexical gap in case such a lexicalization does not exist). The collaborative platform has been designed to support the development of the Universal Knowledge Core, a multilingual ontology currently in English, Italian, Chinese, Mongolian, Hindi and Bangladeshi. Its design follows a workflow-based development methodology that models resources as a set of collaborative objects and assigns customizable workflows to build and maintain each collaborative object in a community-driven manner, with extensive support for modern Web 2.0 social and collaborative features.
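
A minimal illustration of the concept-to-lexicalization mapping described above, with invented concept IDs and labels and an explicit marker for lexical gaps:

```python
# Each language-independent concept carries at most one lexicalization per
# language; a missing language is recorded as an explicit lexical gap.
LEXICAL_GAP = None

concepts = {
    "c100": {"en": "river", "it": "fiume", "hi": "नदी"},
    "c200": {"en": "stream", "it": "ruscello", "mn": LEXICAL_GAP},  # gap in Mongolian
}

def lexicalize(concept_id, lang):
    label = concepts.get(concept_id, {}).get(lang, LEXICAL_GAP)
    return label if label is not None else f"<lexical gap in '{lang}'>"

print(lexicalize("c200", "mn"))   # -> <lexical gap in 'mn'>
print(lexicalize("c100", "it"))   # -> fiume
```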

Keywords: Knowledge Diversity, Knowledge Representation, Ontology Development.

91 A Medical Images Based Retrieval System using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases containing documents of differing sorts and varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. Several articles have proposed content-based access to medical images to support clinical decision making, which would ease the management of clinical data, and scenarios have been created for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS). This paper gives an overview of soft computing techniques. New research directions are being defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It should also be stated that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

90 Sounds Alike Name Matching for Myanmar Language

Authors: Yuzana, Khin Marlar Tun

Abstract:

Personal name matching is at the core of essential tasks in national citizen databases, text and web mining, information retrieval, online library systems, e-commerce and record linkage systems. This has necessitated extensive research in the area of name matching. Traditional name matching methods are suitable for English and other Latin-based languages. Asian languages without word boundaries, such as Myanmar, still require a sounds-alike matching system for Unicode-based applications. Hence, we propose a matching algorithm that derives analogous sounds-alike (phonetic) patterns suited to Myanmar character spelling. Owing to the nature of Myanmar characters, we take into account word boundary fragmentation and character collation. Thus, we use a pattern conversion algorithm which builds word patterns from the fragmented and collated characters. We create Myanmar sounds-alike phonetic groups to assist in the phonetic matching. The experimental results show a fragmentation accuracy of 99.32% and a processing time of 1.72 ms.
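
The mechanism of phonetic-group matching can be sketched as follows; the groups below are made-up Latin-alphabet stand-ins, since the actual Myanmar sounds-alike groups are not reproduced here.

```python
# Letters that sound alike share a group code, so two differently spelled names
# map to the same phonetic pattern and are considered a match.
PHONETIC_GROUP = {"b": "1", "p": "1", "f": "1",
                  "d": "2", "t": "2",
                  "k": "3", "g": "3", "q": "3"}

def phonetic_pattern(name):
    # fragment the name into characters, then map each to its sounds-alike group
    return "".join(PHONETIC_GROUP.get(ch, ch) for ch in name.lower())

def sounds_alike(name_a, name_b):
    return phonetic_pattern(name_a) == phonetic_pattern(name_b)

print(sounds_alike("Badar", "Patar"))   # True: b/p and d/t fall in the same groups
```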

Keywords: natural language processing, name matching, phonetic matching

89 The Effect of Culture on User Interface Design of Social Media - A Case Study on Preferences of Saudi Arabians on the Arabic User Interface of Facebook

Authors: Hana Almakky, Reza Sahandi, Jacqui Taylor

Abstract:

Social media continues to grow, and user interfaces may become more appealing if cultural characteristics are incorporated into their design. Facebook was designed in the west, and the original language was English. Subsequently, the words in the user interface were translated to other languages, including Arabic. Arabic words are written from right to left, and English is written from left to right. The translated version may misrepresent the original design and users’ preferences may be influenced by their culture, which should be considered in the user interface design. Previous research indicates that users are more comfortable when interacting with a user interface, which relates to their own culture. Therefore, this paper, using a survey, investigates the preferences of Saudi Arabians on the Arabic version of the user interface of Facebook.

Keywords: Culture, Facebook, Saudi Arabia, Social media, User Interface Design.

88 Studies on Determination of the Optimum Distance Between the Tmotes for Optimum Data Transfer in a Network with WLL Capability

Authors: N. C. Santhosh Kumar, N. K. Kishore

Abstract:

Using mini modules of Tmotes, it is possible to automate a small personal area network. This idea can be extended to large networks too by implementing multi-hop routing. By linking the various Tmotes using programming languages such as NesC and Java, and providing transmitter and receiver sections, a network can be monitored. It is foreseen that, depending on the application, a long range at a low data transfer rate or average throughput may be an acceptable trade-off. To reduce the overall costs involved, the optimum number of Tmotes to be used under various conditions (indoor/outdoor) is to be deduced. By analyzing the data rates or throughputs at various locations of the Tmotes, it is possible to deduce an optimal number of Tmotes for a specific network. This paper deals with the determination of optimum distances to reduce the cost and increase the reliability of the entire sensor network with Wireless Local Loop (WLL) capability.

Keywords: Average throughput, data rate, multi-hop routing, optimum data transfer, throughput, Tmotes, wireless local loop.

87 Fuzzy Sequential Algorithm for Discrimination and Decision Maker in Sporting Events

Authors: Mourad Moussa, Ali Douik, Hassani Messaoud

Abstract:

Event discrimination and decision making in the sports field are the subject of many interesting studies in computer vision and artificial intelligence. A large volume of research has been conducted on automatic semantic event detection and summarization of sports videos. The results of this research make a very significant contribution, both to television broadcasts and to football teams, since the result of a sporting event can have an impact on the economic field. In this paper, we propose a novel fuzzy sequential technique to discriminate events and specify the technico-tactics during the game. Neither a fuzzy system nor a sequential one alone can answer this question: the fuzzy process is not sufficient, as it does not respect the chronological order of the various events, while the sequential process lacks flexibility with respect to the parameters used in this study. The combined technique assigns a membership degree to each parameter on the one hand and respects the sequencing of events for each frame on the other. This technique describes special events such as dribbling, headings, short sprints, rapid acceleration or deceleration, turning, jumping, kicking, ball occupation, and tackling according to the velocity vectors of the two players and the ball direction.

Keywords: Sequential process, Event detection, Soccer videos analysis, Fuzzy process, Spatio-temporal parameters.

86 Multi Language Text Editor for Burushaski and Urdu through Unicode

Authors: Irfan Qadir Baig, Muhammad Sharif, Aman Ullah Khan

Abstract:

This paper introduces an isolated and unique ancient language, Burushaski, spoken in Hunza, Nagar, Yasin and parts of Gilgit in the Northern Areas of Pakistan. It explains the working mechanism of a multi-language text editor for Urdu and Burushaski, developed using the ISO/IEC 10646 Unicode standard with OpenType fonts for Urdu and Burushaski. The editor gives this ancient regional language an ample opportunity to benefit from modern information technology for its promotion and preservation. The main objective of this research is to help preserve the heritage of such rare languages and to provide a smart way of automating them. It also facilitates those who are interested in undertaking research on Burushaski or keen to trace the phonetic relationship between the national Urdu language and Burushaski. Since this editor covers both Burushaski and Urdu, it can play an important role in introducing Burusho linguistic culture to the world at large. As a result of this work, Burushaski publication through IT means becomes possible.

Keywords: Burushaski, Bri Naqsh, Unicode, Burusho, Hunza, Meshaski.

85 A Novel Framework for User-Friendly Ontology-Mediated Access to Relational Databases

Authors: Efthymios Chondrogiannis, Vassiliki Andronikou, Efstathios Karanastasis, Theodora Varvarigou

Abstract:

A large amount of data is typically stored in relational databases (DBs). The latter can efficiently handle user queries which intend to elicit the appropriate information from data sources. However, direct access to and use of this data requires the end users to have an adequate technical background, while they should also cope with the internal data structure and the values presented. Consequently, information retrieval is quite a difficult process even for IT or DB experts, taking into account the limited contribution of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and relations among them, and hence they can be used for unambiguously specifying the information captured by the relational database. However, accessing information residing in a database using ontologies is feasible only if the users are versed in semantic web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a Graphical User Interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is entirely based on the ontological representation of the underlying database schema, while providing a user-friendly environment through which users can graphically form and execute their queries.
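
The kind of query such a tool would build and execute on the user's behalf can be sketched with the SPARQLWrapper library; the endpoint URL and vocabulary below are placeholders, not the authors' system.

```python
# Sketch of an ontology-mediated query: the GUI assembles SPARQL over the
# ontological view of the database and sends it to a (hypothetical) endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # placeholder endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?patient ?age WHERE {
        ?patient a ex:Patient ;
                 ex:hasAge ?age .
        FILTER (?age > 60)
    }
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["patient"]["value"], row["age"]["value"])
```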

Keywords: Ontologies, Relational Databases, SPARQL, Web Interface.

84 Machine Learning for Music Aesthetic Annotation Using MIDI Format: A Harmony-Based Classification Approach

Authors: Lin Yang, Zhian Mi, Jiacheng Xiao, Rong Li

Abstract:

Swimming with the tide of deep learning, the field of music information retrieval (MIR) has experienced parallel development, and a sheer variety of feature-learning models has been applied to music classification and tagging tasks. Among those learning techniques, deep convolutional neural networks (CNNs) have been widely used, with better performance than traditional approaches, especially in music genre classification and prediction. However, regarding music recommendation, there is a large semantic gap between the corresponding audio genres and the various aspects of a song that influence user preference. In our study, aiming to bridge this gap, we strive to construct an automatic music aesthetic annotation model based on the MIDI format for better comparison and measurement of the similarity between music pieces by way of harmonic analysis. We use the matrix of qualification converted from MIDI files as input to train two different classifiers, a support vector machine (SVM) and a decision tree (DT). Experimental results on a tag prediction task have shown that both learning algorithms are capable of extracting high-level properties in an end-to-end manner from music information. The proposed model is helpful for learning audience taste, and the resulting recommendations are likely to appeal to a niche consumer.
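
The classification stage alone can be sketched as follows with scikit-learn, using a random placeholder matrix in place of the harmony features extracted from MIDI.

```python
# SVM and decision tree trained on placeholder feature vectors standing in for
# MIDI-derived harmony features; the feature extraction itself is not shown.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 24))                  # placeholder harmony feature vectors
y = rng.integers(0, 2, 200)                # placeholder aesthetic tags (binary)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (SVC(kernel="rbf"), DecisionTreeClassifier(max_depth=5)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", model.score(X_te, y_te))
```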

Keywords: Harmonic analysis, machine learning, music classification and tagging, MIDI.

83 Emotional Analysis for Text Search Queries on Internet

Authors: Gemma García López

Abstract:

The goal of this study is to analyze whether search queries carried out in search engines such as Google can offer emotional information about the user who performs them. Knowing the emotional state of an Internet user can be key to achieving maximum personalization of content and detecting worrying behaviors. For this, two studies were carried out using tools with advanced natural language processing techniques. The first study determines whether a query can be classified as positive, negative or neutral, while the second study extracts emotional content from words and applies the categorical and dimensional models for the representation of emotions. In addition, we use search queries in Spanish and English to establish similarities and differences between the two languages. The results revealed that text search queries performed by users on the Internet can be classified emotionally. This allows us to better understand the emotional state of the user at the time of the search, which could involve adapting the technology and personalizing the responses to different emotional states.
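
A toy sketch of the dimensional model mentioned above, with an invented (valence, arousal) lexicon standing in for the affective resources a real system would use:

```python
# Words are mapped to (valence, arousal) scores and averaged over the query;
# the tiny lexicon below is purely illustrative.
LEXICON = {"happy": (0.9, 0.6), "sad": (0.1, 0.3),
           "angry": (0.2, 0.9), "calm": (0.7, 0.1)}

def query_emotion(query):
    scores = [LEXICON[w] for w in query.lower().split() if w in LEXICON]
    if not scores:
        return None                          # no emotional content detected
    valence = sum(v for v, _ in scores) / len(scores)
    arousal = sum(a for _, a in scores) / len(scores)
    return {"valence": valence, "arousal": arousal}

print(query_emotion("why am I so sad and angry"))
```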

Keywords: Emotion classification, text search queries, emotional analysis, sentiment analysis in text, natural language processing.

82 Extending the Aspect Oriented Programming Joinpoint Model for Memory and Type Safety

Authors: Amjad Nusayr

Abstract:

Software security is a general term applied to any software architecture or model in which security aspects are incorporated into the architecture. These aspects are not part of the main logic of the underlying program. Software security can be achieved using a combination of approaches including, but not limited to, secure software design, third-party component validation, and secure coding practices. Memory safety is one feature of software security, where we ensure that any object in memory has a valid pointer or reference with a valid type. Aspect Oriented Programming (AOP) is a paradigm concerned with capturing cross-cutting concerns in code development. AOP is generally used for common cross-cutting concerns like logging and database transaction management. In this paper, we introduce the concepts that enable AOP to be used for the purpose of memory and type safety. We also present ideas for extending AOP in software security practices.
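
As a loose Python analogy (not the AspectJ-style joinpoint extension the paper targets), a decorator can play the role of type-safety advice woven at function-call joinpoints:

```python
# Decorator-based "advice": at every call joinpoint of the decorated function,
# check that argument runtime types match the annotations before proceeding.
import inspect

def type_safe(func):
    sig = inspect.signature(func)
    def advice(*args, **kwargs):                      # advice run at the joinpoint
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if expected and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}")
        return func(*args, **kwargs)
    return advice

@type_safe
def allocate(size: int):
    return bytearray(size)

print(len(allocate(16)))      # ok
# allocate("16")              # would raise TypeError: size must be int
```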

Keywords: Aspect oriented programming, programming languages, software security, memory and type safety.

81 Bottom Up Text Mining through Hierarchical Document Representation

Authors: Y. Djouadi, F. Souam

Abstract:

Most existing text mining approaches are proposed with the transaction database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", whereas the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking into account the structural properties of hierarchically structured information (e.g. textual documents) in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used in order to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed. The main idea is that each node is mined with its parent node.
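
The bottom-up idea, mining each node together with its parent, can be sketched as follows on an invented document DAG:

```python
# Each node of the document hierarchy is mined with its parent, so cooccurrence
# counts respect the document structure. The tree and term sets are toy examples.
from collections import Counter
from itertools import combinations

# node -> (parent, terms found in that node)
dag = {"doc":   (None,  {"text", "mining"}),
       "sec1":  ("doc", {"mining", "rules"}),
       "sec2":  ("doc", {"graph", "rules"}),
       "par11": ("sec1", {"rules", "support"})}

pair_counts = Counter()
for node, (parent, terms) in dag.items():
    if parent is None:
        continue
    merged = terms | dag[parent][1]                    # mine the node with its parent
    pair_counts.update(combinations(sorted(merged), 2))

print(pair_counts.most_common(3))                      # candidate association pairs
```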

Keywords: Graph based association rules mining, Hierarchical document structure, Text mining.

80 Development of Innovative Islamic Web Applications

Authors: Farrukh Shahzad

Abstract:

The rich Islamic resources related to religious texts, Islamic sciences, and history are widely available in print and in electronic format online. However, most of these works are only available in the Arabic language. In this research, an attempt is made to utilize these resources to create interactive web applications in Arabic, English and other languages. The system utilizes pattern recognition, knowledge management, data mining, information retrieval and management, indexing, storage and data-analysis techniques to parse, store, convert and manage the information from authentic Arabic resources. These interactive web apps provide smart multilingual search, tree-based search, and on-demand information matching and linking. In this paper, we provide details of the application architecture, design, implementation and technologies employed. We also present a summary of the web applications already developed and include some screenshots from the corresponding websites. These web applications provide innovative online learning systems (e-learning and computer-based education).

Keywords: Islamic resources, Muslim scholars, hadith, narrators, history, fiqh.

79 Conceptual Model for Massive Open Online Blended Courses Based on Disciplines’ Concepts Capitalization and Obstacles’ Detection

Authors: N. Hammid, F. Bouarab-Dahmani, T. Berkane

Abstract:

Since its appearance, the MOOC (massive open online course) has been gaining more and more attention from educational communities all over the world. Beyond current MOOC designs and purposes, the creators of MOOCs have focused on the importance of connection and knowledge exchange between individuals in learning. In this paper, we present a conceptual model for massive open online blended courses in which teachers across the world can collaborate and exchange their experience to produce a common, efficient body of content designed as a MOOC open to their students, who thus enjoy a better learning experience. This model is based on the capitalization of disciplines' concepts and the detection of the obstacles met by students when faced with problem situations (exercises, projects, case studies, etc.). This detection is made possible by analyzing the frequency of semantic errors committed by the students. The participation of teachers in the design of the course and the attendance of their students can guarantee efficient and extensive participation (an important number of participants) in the course, sustain learner motivation, and address evaluation issues, in that the teachers designing the course assess their own students. Thus, the teachers' reviews, together with their knowledge, offer better assessment and efficient connections to their students.

Keywords: MOOC, Massive Open Online Courses, Online learning, E-learning, Blended learning.

78 Enhancing Operational Effectiveness in the Norwegian Army through Simulation-Based Training

Authors: B. Bakken, O. Boe

Abstract:

The Norwegian Military Academy (Army) has initiated a project with the main ambition to explore possible avenues to enhancing operational effectiveness through an increased use of simulation-based training and exercises. Within a cost/benefit framework, we discuss opportunities and limitations of vertical and horizontal integration of the existing tactical training system. Vertical integration implies expanding the existing training system to span the full range of training from tactical level (platoon, company) to command and staff level (battalion, brigade). Horizontal integration means including other domains than army tactics and staff procedures in the training, such as military ethics, foreign languages, leadership and decision making. We discuss each of the integration options with respect to purpose and content of training, "best practice" for organising and conducting simulation-based training, and suggest how to evaluate training procedures and measure learning outcomes. We conclude by giving guidelines towards further explorative work and possible implementation.

Keywords: Effectiveness, integration, simulation, training.

77 Ultra High Speed Approach for Document Skew Detection and Correction Based On Centre of Gravity

Authors: Seyyed Yasser Hashemi

Abstract:

Skew detection and correction (SDC) has a direct effect on the efficiency and accuracy of document segmentation and analysis, and is thus considered a very important step in the document analysis field. Skew is a major problem in document analysis for every language. For Arabic/Persian document scripts this problem is more severe because of the special features of these languages. In this paper, an efficient and fast algorithm for Document Skew Detection (DSD) based on the concepts of segmentation and Centre of Gravity (COG) is proposed. The algorithm has been tested on 150 Arabic/Persian and English documents, and the SDC process is completed successfully for 93 percent of the documents with an error rate of less than 1°. The algorithm shows better results for English documents than for Arabic/Persian documents. The proposed method also yields favorable results for handwritten, printed and complicated documents such as newspapers and journals, even with very low quality and resolution.
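
A rough sketch of a centre-of-gravity approach to skew estimation (not necessarily the authors' exact algorithm) is shown below: centres of gravity of ink pixels are computed per vertical segment and a line is fitted through them.

```python
# Split the binarised page into vertical segments, compute the COG of the ink
# pixels in each segment, and fit a line through those centres to estimate skew.
import numpy as np

def estimate_skew(binary_img, segments=16):
    h, w = binary_img.shape
    xs, ys = [], []
    for i in range(segments):
        strip = binary_img[:, i * w // segments:(i + 1) * w // segments]
        rows, cols = np.nonzero(strip)                 # ink pixels in this segment
        if rows.size:
            xs.append(i * w // segments + cols.mean()) # COG x (global coordinates)
            ys.append(rows.mean())                     # COG y
    slope = np.polyfit(xs, ys, 1)[0]                   # line through the centres
    return np.degrees(np.arctan(slope))                # skew angle in degrees

page = np.zeros((200, 400), dtype=np.uint8)
rows = (100 + np.arange(400) * np.tan(np.radians(2))).astype(int)
page[rows, np.arange(400)] = 1                         # synthetic line skewed by ~2°
print(f"estimated skew: {estimate_skew(page):.2f}°")
```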

Keywords: Arabic/Persian document, Baseline, Centre of gravity, Document segmentation, Skew detection and correction.

76 Computer Proven Correctness of the Rabin Public-Key Scheme

Authors: Johannes Buchmann, Markus Kaiser

Abstract:

We describe a formal specification and verification of the Rabin public-key scheme in the formal proof system Isabelle/HOL. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. The analysis presented uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness as well as security properties is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. This yields the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Consequently, we get reliable proofs with a minimal error rate, augmenting the database used. This provides a formal basis for more computer proof constructions in this area.
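
For readers unfamiliar with the scheme being verified, a bare-bones Rabin encryption/decryption (toy key size, no padding or redundancy, and unrelated to the Isabelle/HOL formalisation itself) looks like this in Python:

```python
# Rabin scheme: encrypt by squaring modulo n = p*q; decrypt by taking square
# roots mod p and q (p, q ≡ 3 mod 4) and combining them with the CRT.
p, q = 7907, 7919            # toy primes with p ≡ q ≡ 3 (mod 4)
n = p * q                    # public key

def encrypt(m):
    return pow(m, 2, n)      # c = m^2 mod n

def decrypt(c):
    mp, mq = pow(c, (p + 1) // 4, p), pow(c, (q + 1) // 4, q)
    yp = pow(p, -1, q)       # p^{-1} mod q
    yq = pow(q, -1, p)       # q^{-1} mod p
    r = (yq * q * mp + yp * p * mq) % n
    s = (yq * q * mp - yp * p * mq) % n
    return sorted({r, n - r, s, (n - s) % n})   # the four candidate plaintexts

message = 123456
print(message in decrypt(encrypt(message)))     # True: plaintext is among the roots
```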

Keywords: public-key encryption, Rabin public-key scheme, formal proof system, higher-order logic, formal verification.

75 Bibliometric Analysis of the Research Progress on Graphene Inks from 2008 to 2018

Authors: Jean C. A. Sousa, Julio Cesar Maciel Santos, Andressa J. Rubio, Edneia A. S. Paccola, Natália U. Yamaguchi

Abstract:

A bibliometric analysis of the Web of Science database was used to identify the overall scientific output on graphene inks to date (2008 to 2018). The objective of this study was to evaluate the evolutionary tendency of graphene ink research and to identify its aspects, aiming to provide data that can guide future work. The contributions of different research efforts, languages, thematic categories, periodicals, places of publication, institutes, funding agencies, cited articles and applications were analyzed. The results revealed a growing number of annual publications; of the 258 papers found, 107 were included because they met the inclusion criteria. Three main applications were identified: synthesis and characterization, electronics, and surfaces. The most relevant research on graphene inks is summarized in this article, and graphene inks for electronic devices were the most prominent theme according to the research trends during the studied period. It is expected that this theme will remain in evidence and will contribute to the direction of future research in this area.

Keywords: Bibliometric, coating, nanomaterials, scientometrics.

74 OCR for Script Identification of Hindi (Devnagari) Numerals using Feature Sub Selection by Means of End-Point with Neuro-Memetic Model

Authors: Banashree N. P., R. Vasanta

Abstract:

Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), a character or symbol to be recognized can be a machine-printed or handwritten character/numeral. There are several approaches that deal with the problem of recognizing numerals/characters, depending on the type of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, among the most widely used in the Indian subcontinent. Our work focuses on a feature extraction technique, namely a global approach using end-point information extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model [18] that has been trained to recognize Hindi numerals. A prototype of the system has been tested on a variety of numeral images. In the proposed scheme, the data sets are fed to the neuro-memetic algorithm, which identifies the rule with the highest fitness value (nearly 100%); the template associated with this rule gives the identified numeral. Experimental results show a recognition rate of 92-97% compared to other models.

Keywords: OCR, Global Feature, End-Points, Neuro-Memetic model.

73 The Traditional Malay Textile (TMT) Knowledge Model: Transformation towards Automated Mapping

Authors: Syerina Azlin Md Nasir, Nor Laila Md Noor, Suriyati Razali

Abstract:

The growing interest in national heritage preservation has led to intensive efforts on the digital documentation of cultural heritage knowledge. Encapsulated within this effort is a focus on ontology development that helps facilitate the organization and retrieval of the knowledge. Ontologies in the cultural heritage domain relate to archive, museum and library information such as archaeology, artifacts, paintings, etc. The growth in the number and size of ontologies indicates the wide acceptance of their semantic enrichment in many emerging applications. Nowadays, many heritage information systems are available for access. Among others is the community-based e-museum designed to support digital cultural heritage preservation. This work extends the previous effort of developing the Traditional Malay Textile (TMT) Knowledge Model, where the model was designed with the intention of auxiliary mapping with CIDOC CRM. Due to its internal constraints, the model needs to be transformed in advance. This paper addresses the issue by reviewing previous harmonization works with CIDOC CRM as exemplars in refining the facets of the model, particularly those involving the TMT-Artifact class. The result is an extensible model which could lead to a common view for automated mapping with CIDOC CRM. Hence, it promotes the integration and exchange of textile information, especially batik-related information, between communities in e-museum applications.

Keywords: automated mapping, cultural heritage, knowledge model, textile practice

72 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular shopping platforms in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance in order to access their account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many cases to extract semantic relationships between documents (here representing consumers) and words (here representing products) in the field of document classification. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
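
A hedged sketch of the idea using gensim's Doc2Vec: each purchase history is treated as a document whose words are product IDs and whose tags are both the user ID and the shop ID, so users, shops and items share one feature space. The transactions below are invented examples.

```python
# Users and shops become document tags, products become words; Doc2Vec then
# embeds all of them in one vector space.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

transactions = [
    (["item_1", "item_2", "item_3"], ["user_A", "shop_X"]),
    (["item_2", "item_4"],           ["user_B", "shop_X"]),
    (["item_5", "item_1"],           ["user_A", "shop_Y"]),
]
corpus = [TaggedDocument(words=w, tags=t) for w, t in transactions]

model = Doc2Vec(vector_size=16, min_count=1, epochs=50)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

print(model.dv.most_similar("user_A", topn=2))   # shops/users closest to user_A
print(model.wv.most_similar("item_1", topn=2))   # items closest to item_1
```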

Keywords: Doc2Vec, marketing, online marketplace, recommendation system.

71 Defining a Semantic Web-based Framework for Enabling Automatic Reasoning on CIM-based Management Platforms

Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano

Abstract:

CIM is the standard formalism for modeling management information developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal, designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a mapping from the CIM metamodel level to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping provides CIM diagrams with precise semantics and can be used for automatic reasoning about the management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is not only useful at design time, but also at run time through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.

Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.

70 Change Management in Business Process Modeling Based on Object Oriented Petri Net

Authors: Bassam Atieh Rajabi, Sai Peck Lee

Abstract:

Business Process Modeling (BPM) is the first and most important step in the business process management lifecycle. Graph-based and rule-based formalisms are the two predominant formalisms on which process modeling languages are built. BPM technology continues to face challenges in coping with dynamic business environments where requirements and goals are constantly changing at execution time. Graph-based formalisms have difficulty reacting to dynamic changes in a Business Process (BP) in its runtime instances. In this research, an adaptive and flexible framework based on the integration of an object-oriented diagramming technique with the Petri Net modeling language is proposed, in order to support change management techniques for BPM and to increase the capability of object-oriented modeling to represent dynamic changes in runtime instances. The proposed framework is applied in a higher education environment to achieve flexible, updatable and dynamic BPs.
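
A minimal illustration of the Petri net side of such a framework (the object-oriented and change-management layers are not shown), with a toy approval process as the example:

```python
# Places hold tokens, transitions consume and produce them; firing a transition
# moves the process from one state to the next.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)                  # place -> token count
        self.transitions = {}                         # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if any(self.marking.get(p, 0) < 1 for p in inputs):
            raise RuntimeError(f"transition '{name}' is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"draft ready": 1})
net.add_transition("review", ["draft ready"], ["under review"])
net.add_transition("approve", ["under review"], ["approved"])
net.fire("review")
net.fire("approve")
print(net.marking)    # {'draft ready': 0, 'under review': 0, 'approved': 1}
```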

Keywords: Business Process Modeling, Change Management, Graph Based Modeling, Rule Based Modeling, Object Oriented Petri Net.

69 Powerful Tool to Expand Business Intelligence: Text Mining

Authors: Li Gao, Elizabeth Chang, Song Han

Abstract:

With the extensive inclusion of documents, especially text, in business systems, data mining does not cover the full scope of Business Intelligence. Data mining cannot deliver its full impact when extracting useful details from the large collection of unstructured and semi-structured written materials based on natural languages. The most pressing issue is to draw the potential business intelligence from text. In order to gain competitive advantages for the business, it is necessary to develop a new powerful tool, text mining, to expand the scope of business intelligence. In this paper, we work out the strong points of text mining in extracting business intelligence from the huge amount of textual information sources within business systems. We apply text mining to each stage of Business Intelligence systems to show that text mining is a powerful tool for expanding the scope of BI. After reviewing basic definitions and some related technologies, we discuss their relationship to, and the benefits they bring to, text mining. Some examples and applications of text mining are also given. The underlying motivation is to develop a new approach to effective and efficient textual information analysis. Thus, we can expand the scope of Business Intelligence using the powerful tool of text mining.

Keywords: Business intelligence, document warehouse, text mining.

68 Skew Detection Technique for Binary Document Images based on Hough Transform

Authors: Manjunath Aradhya V N, Hemantha Kumar G, Shivakumara P

Abstract:

Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since the algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method considers selected characters of the text, which are subjected to thinning and the Hough transform to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, such as English documents, journals, textbooks, documents in different languages, documents with different fonts and documents with different resolutions, to reveal the robustness of the proposed method. The experimental results show that the proposed method is accurate compared to the results of well-known existing methods.
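
A hedged sketch of Hough-based skew estimation with OpenCV, omitting the thinning and character-selection steps of the paper; the input file name is a placeholder.

```python
# Detect dominant line angles in the page with the Hough transform and report
# the deviation from the horizontal as the skew estimate.
import cv2
import numpy as np

def hough_skew(gray_page):
    edges = cv2.Canny(gray_page, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
    if lines is None:
        return 0.0
    # theta is the angle of the line normal; text lines give theta near 90 degrees
    angles = [np.degrees(theta) - 90.0 for _, theta in lines[:, 0]]
    return float(np.median(angles))                    # robust dominant skew angle

page = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
if page is not None:
    print(f"estimated skew: {hough_skew(page):.2f} degrees")
```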

Keywords: Optical Character Recognition, Skew angle, Thinning, Hough transform, Document processing

67 A New Model for Question Answering Systems

Authors: Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour

Abstract:

Most Question Answering (QA) systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems: if this module does not work properly, it will create problems for the other sections. Answer processing, moreover, is an emerging topic in Question Answering, where systems are often required to rank and validate candidate answers. These techniques, aimed at finding short and precise answers, are often based on semantic classification. This paper discusses a new model for question answering which improves the two main modules, question processing and answer processing. Two important components form the basis of question processing. The first is question classification, which specifies the types of the question and the answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The answer processing module consists of candidate answer filtering and candidate answer ordering components, and also has a validation section for interacting with the user. This module makes the system more suitable for finding the exact answer. In this paper, we describe the question and answer processing modules through modeling, implementing and evaluating the system. The system was implemented in two versions. Results show that 'Version No.1' gave correct answers to 70% of questions (30 correct answers out of 50 asked questions) and 'Version No.2' gave correct answers to 94% of questions (47 correct answers out of 50 asked questions).

Keywords: Answer Processing, Classification, Question Answering and Query Reformulation.

66 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These entities can be drawn from a whole dictionary or from a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations, a cooccurrence graph. Indeed, each cooccurrence indicates some degree of semantic correlation between the words, because it is more common to have related words close to each other than on opposite sides of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is counted with a weight that depends on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment to support this intuition, applying the technique to a data set consisting of the text of the Bible, split into verses.
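
A direct sketch of the weighted-distance window described above; the 1/distance weight is one plausible choice among many, and the verse is an invented example.

```python
# Slide a window over the token stream; every pair of named items inside it adds
# an edge weight that decreases with the distance between the items.
from collections import defaultdict

def weighted_cooccurrence_graph(tokens, entities, window=10):
    graph = defaultdict(float)                     # (entity_a, entity_b) -> weight
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    for idx, (i, a) in enumerate(positions):
        for j, b in positions[idx + 1:]:
            if j - i >= window:
                break                              # outside the sliding window
            if a != b:
                graph[tuple(sorted((a, b)))] += 1.0 / (j - i)
    return graph

verse = "Moses spoke to Aaron and Aaron spoke to Pharaoh about Moses".split()
print(dict(weighted_cooccurrence_graph(verse, {"Moses", "Aaron", "Pharaoh"})))
```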

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.
