Search results for: graph search
907 A New Effective Local Search Heuristic for the Maximum Clique Problem
Authors: S. Balaji
Abstract:
An edge-based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A parameter 'support' of vertices defined in ELS greatly reduces the number of random selections among vertices as well as the number of iterations and the running time. Computational results on BHOSLIB and DIMACS benchmark graphs indicate that ELS is capable of achieving state-of-the-art performance for the maximum clique problem with reasonable average running times.
Keywords: Maximum clique, local search, heuristic, NP-complete.
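The abstract describes a local search guided by a vertex 'support' measure. The following is only a minimal sketch of a restart-based greedy local search for the MCP, not the authors' ELS; the support heuristic used below (how many remaining candidates stay eligible after adding a vertex) is an assumption made for illustration.

```python
import random

def local_search_clique(adj, restarts=200, seed=0):
    """Toy restart-based local search for the maximum clique problem.

    adj: dict mapping each vertex to the set of its neighbours.
    The 'support' used here (how many remaining candidates a vertex is
    adjacent to) is an illustrative stand-in, not the ELS definition.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    best = set()
    for _ in range(restarts):
        start = rng.choice(vertices)
        clique = {start}
        candidates = set(adj[start])
        while candidates:
            # support: how many of the remaining candidates stay eligible
            # if we add this vertex -- a greedy tie-breaking heuristic
            v = max(candidates, key=lambda u: len(adj[u] & candidates))
            clique.add(v)
            candidates &= adj[v]
        if len(clique) > len(best):
            best = clique
    return best

if __name__ == "__main__":
    # tiny example graph: {1, 2, 3, 4} form a clique of size 4
    edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6)]
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    print(local_search_clique(adj))  # expected: {1, 2, 3, 4}
```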
906 Interactive Concept-based Search using MOEA: The Hierarchical Preferences Case
Authors: Gideon Avigad, Amiram Moshaiov, Neima Brauner
Abstract:
An IEC technique is described for a multi-objective search of conceptual solutions. The survivability of solutions is influenced by both model-based fitness and subjective human preferences. The concept preferences are articulated via a hierarchy of sub-concepts. The suggested method produces an objective-subjective front. An academic example is employed to demonstrate the proposed approach.
Keywords: Conceptual solution, engineering design, hierarchical planning, multi-objective search, problem reduction.
905 An Analysis of Dynamic Economic Dispatch Using Search Space Reduction Based Gravitational Search Algorithm
Authors: K. C. Meher, R. K. Swain, C. K. Chanda
Abstract:
This paper presents a performance analysis of a dynamic search space reduction (DSR) based gravitational search algorithm (GSA) for solving the dynamic economic dispatch of thermal generating units with valve-point effects. Dynamic economic dispatch determines the best settings of the generating units for the anticipated load demand over a definite period of time. The presented technique incorporates an inequality-constraint treatment mechanism, the DSR strategy, to accelerate the optimization process. The method is demonstrated on five-unit test systems to verify its effectiveness and robustness. The simulation results are compared with other existing evolutionary methods reported in the literature. The comparison shows that the fuel cost and other performance measures of the presented approach are favourable, at only a marginal simulation time.
Keywords: Dynamic economic dispatch, dynamic search space reduction strategy, gravitational search algorithm, ramp rate limits, valve-point effects.
904 Organization Model of Semantic Document Repository and Search Techniques for Studying Information Technology
Authors: Nhon Do, Thuong Huynh, An Pham
Abstract:
Nowadays, organizing a repository of documents and resources for learning in a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or document content, is an urgent need in the practice of teaching, learning and research. There have been several works on methods of organization and search by content; however, the results are still limited and insufficient to meet users' demand for semantic document retrieval. This paper presents a solution for organizing a repository that supports semantic representation and processing in search. The proposed solution is a model which integrates components such as an ontology describing domain knowledge, a database for the document repository, semantic representations for documents and a file system, together with semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution is applied to build an IT learning materials management system for a university, with a semantic search function serving students, teachers, and managers. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, and has achieved good results.
Keywords: Document retrieval system, knowledge representation, document representation, semantic search, ontology.
903 Detecting Community Structure in Amino Acid Interaction Networks
Authors: Omar GACI, Stefan BALEV, Antoine DUTOT
Abstract:
In this paper we introduce the notion of a protein interaction network. This is a graph whose vertices are the protein's amino acids and whose edges are the interactions between them. Using a graph theory approach, we observe that, according to their structural roles, the nodes interact differently. By performing community structure detection, we confirm this specific behavior and describe the communities' composition, to finally propose a new approach to fold a protein interaction network.
Keywords: interaction network, protein structure, community structure detection.
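The paper builds a graph of amino acids and runs community structure detection on it. A small sketch of that general workflow follows, using an invented toy network and NetworkX's greedy modularity maximisation as a stand-in for whichever detection algorithm the authors actually use.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a toy "amino acid interaction network": vertices are residue
# positions, edges are assumed spatial interactions (illustrative data only).
G = nx.Graph()
G.add_edges_from([
    (1, 2), (2, 3), (1, 3), (3, 4),   # a tightly knit group
    (5, 6), (6, 7), (5, 7), (4, 5),   # a second group, bridged by edge 4-5
])

# Modularity-based community detection; the paper does not state which
# algorithm it uses, so greedy modularity maximisation is a stand-in.
communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```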
902 Evolutionary Dynamics on Small-World Networks
Authors: Jan Rychtar, Brian Stadler
Abstract:
We study how the outcome of evolutionary dynamics on graphs depends on the randomness of the graph structure. We gradually change the underlying graph from completely regular (e.g. a square lattice) to completely random. We find that the fixation probability increases as the randomness increases; nevertheless, the increase is not significant, and thus the fixation probability can be estimated by the known formulas for the underlying regular graphs.
Keywords: Evolutionary dynamics, small-world networks.
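A minimal simulation of the kind of experiment described above: estimate the fixation probability of a single mutant under a Moran birth-death process on a Watts-Strogatz graph whose rewiring probability interpolates from regular to random. The update rule, fitness value and graph sizes are assumptions, not the paper's settings.

```python
import random
import networkx as nx

def fixation_probability(G, fitness=1.1, trials=2000, seed=0):
    """Estimate the fixation probability of a single mutant under a
    Moran birth-death process on graph G (an illustrative sketch)."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}            # one initial mutant
        while 0 < len(mutants) < len(nodes):
            # pick a reproducing node proportionally to fitness
            weights = [fitness if v in mutants else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights=weights, k=1)[0]
            # its offspring replaces a random neighbour
            child = rng.choice(list(G.neighbors(parent)))
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        fixations += (len(mutants) == len(nodes))
    return fixations / trials

# interpolate from a regular ring lattice (p=0) to a random graph (p=1)
for p in (0.0, 0.1, 1.0):
    G = nx.watts_strogatz_graph(n=30, k=4, p=p, seed=1)
    print(p, fixation_probability(G))
```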
901 Compression of Semistructured Documents
Authors: Leo Galambos, Jan Lansky, Katsiaryna Chernik
Abstract:
EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, together with a snippet that briefly shows a match. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page). This implies that the search engine is required to store the full text of the documents as part of the index. Such a requirement leads us to pick an appropriate compression algorithm that reduces the space demand. One solution could be to use common compression methods, for instance gzip or bzip2, but it may be preferable to develop a new method which takes advantage of the document structure, or rather, the textual character of the documents. Special text compression algorithms and methods for compressing XML documents already exist. The aim of this paper is an integration of the two approaches to achieve an optimal compression ratio.
Keywords: Compression, search engine, HTML, XML.
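The baseline mentioned in the abstract, general-purpose compressors such as gzip and bzip2, can be compared in a few lines; the HTML sample below is a made-up placeholder, not an EGOTHOR document.

```python
import bz2
import zlib

# Placeholder document standing in for a stored HTML page.
html = ("<html><head><title>Sample</title></head><body>"
        + "<p>Some repetitive body text. </p>" * 200
        + "</body></html>").encode("utf-8")

gz = zlib.compress(html, 9)   # DEFLATE, the algorithm behind gzip
bz = bz2.compress(html, 9)    # bzip2

for name, data in (("original", html), ("deflate/gzip", gz), ("bzip2", bz)):
    print(f"{name:>12}: {len(data):6d} bytes")
```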
900 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs
Authors: Verónica Díaz
Abstract:
A Cartesian graph, as a mathematical object, becomes a tool for the configuration of change. It is best understood through everyday problem-solving associated with its representation. Despite this, the current educational framework favors general graphs, without consideration of their argumentation. Students are required to find the mathematical function without associating it with the development of graphical language. This research describes the use students make of configurations drawn prior to Cartesian graphs when faced with an everyday problem related to a time and distance variation phenomenon. The theoretical framework describes the conditions of study of the function and its modeling. This is a qualitative, descriptive study involving six undergraduate case studies carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second level describes the iconicity and referentiality degree of an image. According to the results, students drew no figures prior to the Cartesian graph, highlighting the need for students to represent the context and the movement which causes the phenomenon of change. From this, they managed Cartesian graphs representing changes in position and therefore achieved an overall view of the graph. The local view, however, only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not enable us to identify what happens on the graph when the movement characteristics change, based on possible paths in the person's walking speed.
Keywords: Cartesian graphs, higher education, movement modeling, problem solving.
899 Impact of Similarity Ratings on Human Judgement
Authors: Ian A. McCulloh, Madelaine Zinser, Jesse Patsolic, Michael Ramos
Abstract:
Recommender systems are a common artificial intelligence (AI) application. For any given input, a search system will return a rank-ordered list of similar items. As users review returned items, they must decide when to halt the search and either revise search terms or conclude their requirement is novel with no similar items in the database. We present a statistically designed experiment that investigates the impact of similarity ratings on human judgement to conclude a search item is novel and halt the search. In the study, 450 participants were recruited from Amazon Mechanical Turk to render judgement across 12 decision tasks. We find the inclusion of ratings increases the human perception that items are novel. Percent similarity increases novelty discernment when compared with star-rated similarity or the absence of a rating. Ratings reduce the time to decide and improve decision confidence. This suggests that the inclusion of similarity ratings can aid human decision-makers in knowledge search tasks.
Keywords: Ratings, rankings, crowdsourcing, empirical studies, user studies, similarity measures, human-centered computing, novelty in information retrieval.
898 Decision Maturity Framework: Introducing Maturity In Heuristic Search
Authors: Ayed Salman, Fawaz Al-Anzi, Aseel Al-Minayes
Abstract:
Heuristics-based search methodologies normally work by searching a problem space of possible solutions toward a "satisfactory" solution, guided by "hints" estimated from problem-specific knowledge. Research communities use different types of such methodologies. Unfortunately, most of the time these hints are immature and can hinder these methodologies through premature convergence. This is due to a decrease of diversity in the search space that leads to a total implosion and, ultimately, fitness stagnation of the population. In this paper, a novel Decision Maturity Framework (DMF) is introduced as a solution to this problem. The framework improves the decision on the direction of the search by letting hints mature sufficiently before using them. Ideas from this framework are injected into the particle swarm optimization methodology. Results were obtained under both static and dynamic environments. The results show that decision maturity prevents premature convergence to a high degree.
Keywords: Heuristic search, hints, Particle Swarm Optimization, Decision Maturity Framework.
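For reference, a bare-bones particle swarm optimization loop on a toy objective is sketched below; the Decision Maturity mechanism that the paper injects into PSO is not detailed in the abstract and is therefore not reproduced here, and all parameter values are assumptions.

```python
import random

def pso(objective, dim=2, swarm=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Bare-bones PSO (without the Decision Maturity mechanism)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pbest_f = [objective(x) for x in X]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))   # should approach the optimum at the origin
```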
897 Holomorphic Prioritization of Sets within Decagram of Strategic Decision Making of POSM Using Operational Research (OR): Analytic Hierarchy Process (AHP) Analysis
Authors: Elias O. Tembe, Hussain A. Al-Salamin
Abstract:
There is a decagram of strategic decisions of operations and production/service management (POSM) within operational research (OR) which must be collated, namely: design, inventory, quality, location, process and capacity, layout, scheduling, maintenance, and supply chain. This paper presents an architectural configuration conceptual framework of this decagram of decision sets in the form of a mathematical complete graph and abelian graph. Mathematically, a complete graph, undirected (UDG) or directed (DG), is one in which every pair of vertices is connected, collated, confluent, and holomorphic. However, no study has so far prioritized the holomorphic sets of POSM within the OR field of study. This study utilizes the OR structured technique known as the Analytic Hierarchy Process (AHP) for organizing, sorting and prioritizing (ranking) the sets within the decagram of POSM according to their attribution (propensity), and analyses how the prioritization has real-world application within the 21st century.
Keywords: AHP analysis, Decagram, Decagon, Holomorphic.
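The AHP step referred to above derives priority weights from a pairwise comparison matrix, conventionally via its principal eigenvector, and checks judgement consistency. A sketch with invented judgements over four of the POSM decision areas (not the paper's data):

```python
import numpy as np

# Illustrative pairwise comparison matrix for four of the POSM decision
# areas (the judgements are invented, not taken from the paper).
# A[i, j] says how much more important area i is than area j (Saaty scale).
areas = ["design", "quality", "capacity", "supply chain"]
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

# Principal eigenvector gives the priority weights.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = len(A)
ci = (vals.real[k] - n) / (n - 1)
cr = ci / 0.90                      # random index RI = 0.90 for n = 4
for area, weight in sorted(zip(areas, w), key=lambda t: -t[1]):
    print(f"{area:12s} {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```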
896 Bottom Up Text Mining through Hierarchical Document Representation
Authors: Y. Djouadi, F. Souam
Abstract:
Most existing text mining approaches are proposed with the transactional database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", whereas the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking the structural properties of hierarchically structured information (e.g. textual documents) into account in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favour of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed. The main idea is that each node is mined together with its parent node.
Keywords: Graph based association rules mining, hierarchical document structure, text mining.
895 Incorporating Semantic Similarity Measure in Genetic Algorithm: An Approach for Searching the Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias, Hany T. Alashwal, Rohayanti Hassan, Farhan Mohamed
Abstract:
The most important property of the Gene Ontology is its terms. These controlled vocabularies are defined to provide consistent descriptions of gene products that are shareable and computationally accessible by humans, software agents, or other machine-readable metadata. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, searching the terms is still carried out using a traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends heavily on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the similarity strength between two terms. The genetic algorithm is then employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. Computational results are presented to show the effectiveness of the proposed algorithm.
Keywords: Gene Ontology, semantic similarity measure, genetic algorithm, ontology search.
894 A Hybridization of Constructive Beam Search with Local Search for Far From Most Strings Problem
Authors: Sayyed R Mousavi
Abstract:
The Far From Most Strings Problem (FFMSP) is to obtain a string which is far from as many as possible of a given set of strings. All the input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard; it does not admit a constant-ratio approximation either, unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. On the other hand, hybrid algorithms have recently been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm, called Constructive Beam and Local Search (CBLS), is investigated for the problem; it is a hybridization of constructive beam search and local search algorithms. More specifically, the proposed algorithm consists of two phases: the first phase obtains several candidate solutions via the constructive beam search, and the second phase applies local search to the candidate solutions obtained in the first phase. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recently published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
Keywords: Bioinformatics, Far From Most Strings Problem, Hybrid metaheuristics, Matheuristics, Sequences consensus problems.
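For concreteness, the FFMSP objective (how many input strings the candidate is far from) and a plain first-improvement local search over single-character flips can be sketched as below; the constructive beam-search phase of CBLS is not reproduced, and the strings and threshold are toy values.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def ffmsp_objective(candidate, strings, t):
    """Number of input strings the candidate is 'far' from,
    i.e. at Hamming distance >= t (the quantity FFMSP maximises)."""
    return sum(hamming(candidate, s) >= t for s in strings)

def local_search(candidate, strings, t, alphabet="ACGT", sweeps=50, seed=0):
    """Plain first-improvement local search over single-character flips;
    only a sketch of the local-search phase, not the full CBLS."""
    rng = random.Random(seed)
    cand = list(candidate)
    best = ffmsp_objective(cand, strings, t)
    for _ in range(sweeps):
        improved = False
        for i in rng.sample(range(len(cand)), len(cand)):
            for ch in alphabet:
                if ch == cand[i]:
                    continue
                old, cand[i] = cand[i], ch
                score = ffmsp_objective(cand, strings, t)
                if score > best:
                    best, improved = score, True
                else:
                    cand[i] = old       # undo non-improving flip
        if not improved:
            break
    return "".join(cand), best

strings = ["ACGTACGT", "ACGTTCGA", "TTGTACGA", "ACGAACGT"]
print(local_search("AAAAAAAA", strings, t=5))
```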
893 Optimum Design of Trusses by Cuckoo Search
Authors: M. Saravanan, J. Raja Murugadoss, V. Jayanthi
Abstract:
Optimal design of structures plays a major role in reducing material usage, which leads to a reduction in the final cost of construction projects. Evolutionary approaches are found to be more successful techniques for solving size and shape structural optimization problems since they use a stochastic random search instead of a gradient search. A review of the recent literature identifies weight optimization as the central problem. A new meta-heuristic algorithm, the Cuckoo Search (CS) algorithm, is used for the optimization of the total weight of truss structures. This paper uses a set of 10-bar and 25-bar trusses for testing purposes. The main objective of this work is to reduce the number of iterations, the weight and the total time consumption. In order to demonstrate the effectiveness of the present method, minimum weight design of truss structures is performed and the results of CS are compared with those of other algorithms.
Keywords: Cuckoo search algorithm, Lévy flight, meta-heuristic, optimal weight.
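A minimal sketch of the cuckoo search mechanics mentioned above, with Lévy-flight steps generated by Mantegna's algorithm; the truss-weight objective and member-sizing constraints of the paper are replaced by a toy function, and all parameter values are assumptions.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One-dimensional Lévy flight step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=15, iters=200, pa=0.25,
                  bounds=(0.1, 35.0), seed=0):
    """Minimal cuckoo search sketch; the truss model is not reproduced."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda x: [min(hi, max(lo, v)) for v in x]
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [objective(x) for x in nests]
    best = min(range(n_nests), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n_nests):
            # new solution by a Lévy flight biased toward the best nest
            step = 0.01 * levy_step(rng=rng)
            new = clip([x + step * (x - nests[best][d])
                        for d, x in enumerate(nests[i])])
            f = objective(new)
            j = rng.randrange(n_nests)
            if f < fit[j]:              # replace a random nest if better
                nests[j], fit[j] = new, f
        # abandon a fraction pa of the worst nests
        order = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)
        for i in order[:int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = objective(nests[i])
        best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]

# stand-in objective: sum of squared "cross-section areas", not a real truss model
print(cuckoo_search(lambda x: sum(v * v for v in x), dim=10))
```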
892 Generating Qualitative Causal Graph using Modeling Constructs of Qualitative Process Theory for Explaining Organic Chemistry Reactions
Authors: Alicia Y. C. Tang, Rukaini Abdullah, Sharifuddin M. Zain, Noorsaadah A. Rahman
Abstract:
This paper discusses the causal explanation capability of QRIOM, a tool aimed at supporting the learning of organic chemistry reactions. The development of the tool is based on the hybrid use of the Qualitative Reasoning (QR) technique and the Qualitative Process Theory (QPT) ontology. Our simulation combines symbolic, qualitative descriptions of relations with quantity analysis to generate causal graphs. The pedagogy embedded in the simulator is to both simulate and explain organic reactions. Qualitative reasoning through a causal chain is presented to explain the overall changes made to the substrate, from the initial substrate until the production of the final outputs. Several uses of the QPT modeling constructs in supporting behavioral and causal explanation at run-time are also demonstrated. Explaining organic reactions through a causal graph trace can help improve the reasoning ability of learners in that their conceptual understanding of the subject is nurtured.
Keywords: Qualitative reasoning, causal graph, organic reactions, explanation, QPT, modeling constructs.
891 Application of Pattern Search Method to Power System Security Constrained Economic Dispatch
Authors: A. K. Al-Othman, K. M. EL-Nagger
Abstract:
Direct search (DS) methods are evolutionary algorithms used to solve optimization problems. DS methods do not require any information about the gradient of the objective function while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve the security constrained power system economic dispatch problem (SCED). The operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time paying attention to the economic aspects. A pattern recognition technique is first used to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested on one system. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that pattern search (PS) is very applicable for solving the security constrained power system economic dispatch problem (SCED).
Keywords: Security Constrained Economic Dispatch, Direct Search method, optimization.
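The pattern search idea itself is simple to illustrate: poll points around the current iterate along coordinate directions and shrink the step when no poll point improves. The sketch below uses a toy quadratic in place of the SCED cost model and ignores the security constraints.

```python
def pattern_search(objective, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Basic coordinate (compass) pattern search: poll +/- step along each
    axis, move to the first improving point, otherwise shrink the step."""
    x = list(x0)
    fx = objective(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x[:]
                trial[i] += d
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink      # no poll point improved: refine the mesh
    return x, fx

# stand-in "dispatch cost" with its minimum at (3, -1, 2); not the SCED model
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2 + (p[2] - 2) ** 2
print(pattern_search(cost, [0.0, 0.0, 0.0]))
```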
890 Urban Search and Rescue and Rapid Field Assessment of Damaged and Collapsed Building Structures
Authors: Abid I. Abu-Tair, Gavin M. Wilde, John M. Kinuthia
Abstract:
Urban Search and Rescue (USAR) is a functional capability that has been developed to allow the United Kingdom Fire and Rescue Service to deal with ‘major incidents’ primarily involving structural collapse. The nature of the work undertaken by USAR means that staying out of a damaged or collapsed building structure is not usually an option for search and rescue personnel. As a result, there is always a risk that they themselves could become victims. For this paper, a systematic and investigative review using desk research was undertaken to explore the role which structural engineering can play in assisting search and rescue personnel to conduct structural assessments in the field. The focus is on how search and rescue personnel can assess damaged and collapsed building structures, not just in terms of the structural damage that may be encountered, but also in relation to structural stability. Natural disasters, accidental emergencies, acts of terrorism and other extreme events can vary significantly in nature and ferocity, and can cause a wide variety of damage to building structures. It is not possible, or even realistic, to provide search and rescue personnel with definitive guidelines and procedures to assess damaged and collapsed building structures, as there are too many variables to consider. However, understanding what implications damage may have upon the structural stability of a building will enable search and rescue personnel to better judge and quantify risk from a life-safety standpoint. It is intended that this will allow search and rescue personnel to make informed decisions and to ensure every effort is made to mitigate risk, so that they themselves do not become victims.
Keywords: Damaged and collapsed building structures, life safety, quantifying risk, search and rescue personnel, structural assessments in the field.
889 Optimal Management of Internal Capital of Company
Authors: S. Sadallah
Abstract:
In this paper, dynamic programming is used to determine the optimal management of financial resources in a company. The solution is constructed by decomposing the problem into simpler substructures. The optimal management of the company's internal capital is simulated. The tools applied in this development are based on graph theory. The software for the given problems is built using a greedy algorithm. The obtained model and program enable us to define the optimal version of the management of the proper financial flows by using a visual diagram at each level of investment.
Keywords: Management, software, optimal, greedy algorithm, graph-diagram.
888 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, many challenges for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions which are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the less likely they are to interact. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the negative pair selection method of that work.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
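The negative-example idea described above, selecting pairs whose shortest path in the interaction graph is long, can be sketched as follows; the protein names, edge list and distance threshold are illustrative assumptions, not the paper's data.

```python
import itertools
import networkx as nx

def negative_pairs(interactions, min_distance=4):
    """Select candidate negative examples for PPI prediction: protein pairs
    whose shortest path in the interaction graph is long, or that are
    disconnected. The threshold of 4 is an illustrative assumption."""
    G = nx.Graph(interactions)
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    negatives = []
    for a, b in itertools.combinations(G.nodes, 2):
        d = lengths[a].get(b)            # None means no path at all
        if d is None or d >= min_distance:
            negatives.append((a, b, d))
    return negatives

# toy interaction list (invented protein names)
interactions = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                ("P4", "P5"), ("P6", "P7")]
for a, b, d in negative_pairs(interactions):
    print(a, b, "distance:", d)
```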
887 AJcFgraph - AspectJ Control Flow Graph Builder for Aspect-Oriented Software
Authors: Reza Meimandi Parizi, Abdul Azim Abdul Ghani
Abstract:
The ever-growing usage of the aspect-oriented development methodology in the field of software engineering requires tool support for both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed to automate and facilitate development. For instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs, in any paradigm, pursued in static analysis is best served by a high-level program representation, such as the Control Flow Graph (CFG). Such analysis can then more easily locate common programming idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be more closely maintained. However, although current research defines, to some extent, good concepts and foundations for control flow analysis of aspect-oriented programs, it does not provide a concrete tool that can construct the CFG of these programs on its own. Furthermore, most of these works focus on addressing other issues regarding Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The given tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.
Keywords: Aspect-Oriented Software Development, AspectJ, Control Flow Graph, Data Flow Analysis.
886 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text
Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni
Abstract:
The problem of entity relation discovery in unstructured data, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These can be a whole dictionary, or a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because it is more common to have related words close to each other than at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment to support this intuition, by applying the technique to a data set consisting of the text of the Bible, split into verses.
Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.
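A small sketch of the weighted-distance sliding window described above: each pair of named items occurring within the window contributes a weight inversely proportional to their distance. The 1/distance weighting and the toy verse-like text are assumptions for illustration; the paper's exact weighting function may differ.

```python
from collections import defaultdict

def cooccurrence_graph(tokens, entities, window=10):
    """Build a weighted cooccurrence graph: every pair of named items that
    appears within `window` tokens contributes 1/distance to the edge
    weight, so closer pairs give stronger evidence of a relationship."""
    entities = set(entities)
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    weights = defaultdict(float)
    for k, (i, a) in enumerate(positions):
        for j, b in positions[k + 1:]:
            dist = j - i
            if dist > window:
                break                    # positions are sorted; stop early
            if a != b:
                weights[tuple(sorted((a, b)))] += 1.0 / dist
    return dict(weights)

text = ("moses spoke to aaron and aaron answered moses before the people "
        "and the people listened to moses").split()
print(cooccurrence_graph(text, entities={"moses", "aaron", "people"}, window=8))
```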
885 Non-Population Search Algorithms for Capacitated Material Requirement Planning in Multi-Stage Assembly Flow Shop with Alternative Machines
Authors: Watcharapan Sukkerd, Teeradej Wuttipornpun
Abstract:
This paper presents non-population search algorithms, namely tabu search (TS), simulated annealing (SA) and variable neighborhood search (VNS), to minimize the total cost of the capacitated MRP problem in a multi-stage assembly flow shop with two alternative machines. The algorithm has three main steps. Firstly, an initial sequence of orders is constructed by a simple due-date-based dispatching rule. Secondly, the sequence of orders is repeatedly improved to reduce the total cost by applying TS, SA and VNS separately. Finally, the total cost is further reduced by optimizing the start time of each operation using a linear programming (LP) model. The parameters of the algorithm are tuned using real data from automotive companies. The results show that VNS significantly outperforms TS, SA and the existing algorithm.
Keywords: Capacitated MRP, non-population search algorithms, linear programming, assembly flow shop.
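As an illustration of the second step, a generic simulated annealing loop over a sequence of orders using swap moves is sketched below; the capacitated MRP cost model and the LP start-time optimization are not reproduced, and a simple total-tardiness measure stands in for the total cost.

```python
import math
import random

def simulated_annealing(cost, order, t0=100.0, cooling=0.95, iters=2000, seed=0):
    """Generic SA over a sequence of orders using swap moves."""
    rng = random.Random(seed)
    cur, cur_c = order[:], cost(order)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]      # swap two orders
        c = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        t *= cooling
    return best, best_c

# toy data: (processing time, due date) per order; cost = total tardiness
orders = [(4, 10), (2, 4), (6, 25), (3, 8), (5, 16)]
def total_tardiness(seq):
    t, tard = 0, 0
    for p, due in seq:
        t += p
        tard += max(0, t - due)
    return tard

print(simulated_annealing(total_tardiness, orders))
```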
884 Cross-Search Technique and its Visualization of Peer-to-Peer Distributed Clinical Documents
Authors: Yong Jun Choi, Juman Byun, Simon Berkovich
Abstract:
One of the ubiquitous routines in medical practice is searching through voluminous piles of clinical documents. In this paper we introduce a distributed system to search and exchange clinical documents. Clinical documents are distributed peer-to-peer. Relevant information is found in multiple iterations of cross-searches between the clinical text and its domain encyclopedia.
Keywords: Clinical documents, cross-search, document exchange, information retrieval, peer-to-peer.
883 Selection of Material for Gear Used in Fuel Pump Using Graph Theory and Matrix Approach
Authors: Sahil, Rajeev Saha, Sanjeev Kumar
Abstract:
Material selection is one of the key issues for the production of reliable and high-quality products in industry. A number of materials are available for a single product, which makes material selection a difficult task. The aim of this paper is to select an appropriate material for a gear used in a fuel pump by using the Graph Theory and Matrix Approach (GTMA). GTMA is a logical and systematic approach that can be used to model and analyze various engineering systems. In the present work, four alternative materials and their seven attributes are used to identify the best material for the given product.
Keywords: Material, GTMA, MADM, digraph, decision making.
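In GTMA, each candidate material is represented by a matrix whose diagonal holds the material's (normalised) attribute values and whose off-diagonal entries hold the relative importance of attribute pairs; the permanent of this matrix serves as the selection index. A sketch with invented attribute and importance values (not the paper's gear-material data):

```python
from itertools import permutations

def permanent(M):
    """Matrix permanent by brute force (fine for the small matrices used
    in GTMA, e.g. 7 attributes -> 7! = 5040 permutations)."""
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= M[i][p[i]]
        total += prod
    return total

def gtma_index(attribute_values, relative_importance):
    """Build the GTMA decision matrix for one material (diagonal: attribute
    values; off-diagonal: relative importance a_ij of attribute i over j)
    and return its permanent as the selection index."""
    n = len(attribute_values)
    M = [[relative_importance[i][j] if i != j else attribute_values[i]
          for j in range(n)] for i in range(n)]
    return permanent(M)

# three invented attributes (e.g. strength, wear resistance, cost score),
# already normalised to [0, 1]; importance values satisfy a_ij + a_ji = 1
rel = [[0.0, 0.6, 0.7],
       [0.4, 0.0, 0.6],
       [0.3, 0.4, 0.0]]
materials = {"steel": [0.8, 0.7, 0.5], "bronze": [0.6, 0.9, 0.6]}
ranking = sorted(materials, key=lambda m: gtma_index(materials[m], rel), reverse=True)
print(ranking)   # higher permanent ranks first
```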
882 Measuring the Structural Similarity of Web-based Documents: A Novel Approach
Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian
Abstract:
Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics and tree measures in terms of their DOM-Trees. Other methods measure the similarity within the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM-Trees, which represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of linear integers, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignments to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.
Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.
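A toy version of the property-string idea can be sketched as follows: derive a string of integers describing each graph (here simply node degrees in breadth-first order, a stand-in for the paper's richer properties of generalized trees) and score similarity by aligning the two strings; difflib's ratio stands in for an optimal sequence alignment.

```python
import difflib
import networkx as nx

def property_string(G, root):
    """A simple structural 'property string' for a rooted graph: the degree
    of each node in breadth-first order from the root (illustrative only)."""
    order = [root] + [v for _, v in nx.bfs_edges(G, root)]
    return [G.degree(v) for v in order]

def structural_similarity(G1, r1, G2, r2):
    """Similarity of two graphs as the alignment score of their property
    strings."""
    s1, s2 = property_string(G1, r1), property_string(G2, r2)
    return difflib.SequenceMatcher(None, s1, s2).ratio()

# two small rooted structures standing in for web document trees
T1 = nx.balanced_tree(r=2, h=3)          # root is node 0
T2 = nx.balanced_tree(r=2, h=2)
T2.add_edge(5, 100)                      # a small structural perturbation
print(structural_similarity(T1, 0, T2, 0))
```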
881 Web Server with Multi-Agent Support for Medical Practitioners by JADE Technology
Authors: O. Saravanan, A. Nagappan, P. Gnanasekar, S. Sharavanan, D. Vinodkumar, T. Elayabharathi, G. Karthik
Abstract:
A multi-agent system for processing bio-signals will help medical practitioners to have a standard examination procedure stored in a web server. Web servers supporting any standard search engine follow all possible combinations of the search keywords entered by the user. As a result, a huge number of web pages are shown in the web browser. The system also helps the medical practitioner to interact with an expert in the field of his need in order to make a proper judgement in the diagnosis phase [3]. The web server uses a plug-in so that the medical practitioner can establish and maintain a session and make a fast analysis. Using the web server client, the user can obtain data related to their search. DB and EEG/ECG/EMG agents are placed in the web server to handle the difficult aspects of updating medical information.
Keywords: DB agent, EEG, ECG, EMG, web server agent, JADE.
880 A Comparative Study of the Effectiveness of Trained Inspectors in Different Workloads between Feed Forward and Feedback Training
Authors: Sittichai K., Anucha W., Phonsak L.
Abstract:
The objective of this study was to compare the effectiveness of inspectors with different workloads under feed-forward and feedback training. A visual search task was simulated in which subjects searched for specified letters, called defects. The defects comprised four letters in Thai and English, namely ภ, ถ, X, and V, presented on different backgrounds. The defects were embedded among the specified letters and were given three different backgrounds, i.e., Thai, English, and mixed English and Thai letters. Sixty students were chosen as the sample of this study and tested for final subject selection; finally, five subjects were taken into the testing process. They were asked to search for defects after being provided with basic information. The experiment used a factorial design, and subjects were given feed-forward and feedback training. The results show that both types of training affected mean search time. It was also found that feedback training increases the effectiveness of visual inspectors more than feed-forward training, the difference being significant at the .05 level.
Keywords: visual search, feed forward, feedback training.
879 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches
Authors: Shilpy Sharma
Abstract:
As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so, to include information on a specific domain, domain-specific search engines were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords showing the topic of interest) [2]. Better support is needed from web search tools for expressing one's information need and returning high-quality search results. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is "an information access system that allows access to all the information on the web that is relevant to a particular domain". The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum of queries fired by the user. It requires a small amount of labeled data and a pool of unlabelled data to which the learning algorithm is applied to extract the required data.
Keywords: Search engines, machine learning, information retrieval, active logic.
878 Change Management in Business Process Modeling Based on Object Oriented Petri Net
Authors: Bassam Atieh Rajabi, Sai Peck Lee
Abstract:
Business Process Modeling (BPM) is the first and most important step in the business process management lifecycle. Graph-based formalisms and rule-based formalisms are the two most predominant formalisms on which process modeling languages are developed. BPM technology continues to face challenges in coping with dynamic business environments where requirements and goals are constantly changing at execution time. Graph-based formalisms have problems reacting to dynamic changes in a Business Process (BP) in its runtime instances. In this research, an adaptive and flexible framework based on the integration of an Object-Oriented diagramming technique and the Petri Net modeling language is proposed in order to support change management techniques for BPM and to increase the representation capability of Object-Oriented modeling for dynamic changes in runtime instances. The proposed framework is applied in a higher education environment to achieve a flexible, updatable and dynamic BP.
Keywords: Business Process Modeling, Change Management, Graph Based Modeling, Rule Based Modeling, Object Oriented Petri Net.