Search results for: retrieval algorithms
1481 Optimal Document Archiving and Fast Information Retrieval
Authors: Hazem M. El-Bakry, Ahmed A. Mohammed
Abstract:
In this paper, an intelligent algorithm for optimal document archiving is presented. It is known that electronic archives are very important for information system management. Minimizing the size of the stored data in an electronic archive is a main issue in reducing the physical storage area. Here, the effect of different types of Arabic fonts on electronic archive size is discussed. Simulation results show that PDF is the best file format for storage of Arabic documents in an electronic archive. Furthermore, fast information detection in a given PDF file is introduced. This approach uses fast neural networks (FNNs) implemented in the frequency domain. The operation of these networks relies on performing cross correlation in the frequency domain rather than in the spatial domain. It is proved mathematically and practically that the number of computation steps required by the presented FNNs is less than that needed by conventional neural networks (CNNs). Simulation results using MATLAB confirm the theoretical computations.
Keywords: Information Storage and Retrieval, Electronic Archiving, Fast Information Detection, Cross Correlation, Frequency Domain.
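The abstract does not give the implementation, but the core idea of replacing spatial cross correlation with a frequency-domain product can be sketched with NumPy; the signal and template below are hypothetical placeholders, not the authors' document data.

```python
import numpy as np

def xcorr_frequency_domain(signal, template):
    """Cross-correlate a 1-D signal with a template via the FFT.

    Multiplying the spectrum of the signal by the conjugate spectrum of the
    template is equivalent to spatial cross correlation, but costs
    O(n log n) instead of O(n * m).
    """
    n = len(signal) + len(template) - 1
    S = np.fft.rfft(signal, n)
    T = np.fft.rfft(template, n)
    return np.fft.irfft(S * np.conj(T), n)

# Hypothetical example: locate a short pattern inside a longer sequence.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
template = signal[300:340]          # plant a known occurrence
scores = xcorr_frequency_domain(signal, template)
print("best match near index", int(np.argmax(scores)))
```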
1480 A Study of Touching Characters in Degraded Gurmukhi Text
Authors: M. K. Jindal, G. S. Lehal, R. K. Sharma
Abstract:
Character segmentation is an important preprocessing step for text recognition. In degraded documents, the existence of touching characters drastically decreases the recognition rate of any optical character recognition (OCR) system. In this paper a study of touching Gurmukhi characters is carried out, and these characters have been divided into various categories after a careful analysis. Structural properties of the Gurmukhi characters are used for defining the categories. New algorithms have been proposed to segment the touching characters in the middle zone. These algorithms have shown a reasonable improvement in segmenting the touching characters in degraded Gurmukhi script. The algorithms proposed in this paper are applicable only to machine printed text.
Keywords: Character Segmentation, Middle Zone, Touching Characters.
1479 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated. The dimension of the feature vector in PCA-based algorithms is of importance, especially in real-world applications: a fixed feature-vector dimension has to be set to reduce the dimension of the input and output data and extract the features from them. A classifier is then applied to classify the data and make the final decision. We analyze KPCA (Polynomial, Gaussian, and Laplacian) in detail in this paper and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).
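The pipeline described (KPCA feature extraction followed by a classifier, swept over the output dimension) can be approximated with scikit-learn; the synthetic data, kernel choices, and nearest-neighbour classifier below are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for finger vein feature vectors.
X, y = make_classification(n_samples=300, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)

for kernel in ("poly", "rbf"):            # polynomial and Gaussian kernels
    for n_components in (5, 10, 20, 40):  # candidate feature dimensions
        model = make_pipeline(
            KernelPCA(n_components=n_components, kernel=kernel),
            KNeighborsClassifier(n_neighbors=3),
        )
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"kernel={kernel:4s} dim={n_components:2d} accuracy={acc:.3f}")
```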
1478 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
This paper aims to provide an interpretation of artificial neural networks (ANNs) and explore some of its implications. The interpretation views ANNs as a memory which encodes instances of experience. An experiment explores the behavior of encoding and retrieval of instances from memory. A localised representation ANN is created that allows control over encoding and retrieved memory sample size, and is experimented with using the MNIST digits dataset. The relationship between input familiarity, conflict within retrieved samples, and error rates is described and demonstrated to be an effective driver for memory encoding. Results indicate that selective encoding and retrieval samples that allow detection of memory conflicts produce optimal performance, and that error rates are normally distributed with input familiarity and conflict. By using input familiarity and sample consistency to guide memory encoding, the number of encoding trials on the dataset was reduced to 18.33% of the training data while maintaining good recognition performance on the test data.
Keywords: Artificial Neural Networks, ANNs, representation, memory, conflict monitoring, confidence.
1477 Speech Recognition Using Scaly Neural Networks
Authors: Akram M. Othman, May H. Riadh
Abstract:
This research work is aimed at speech recognition using scaly neural networks. A small vocabulary of 11 words was established first; these words are "word, file, open, print, exit, edit, cut, copy, paste, doc1, doc2". The chosen words are involved in executing computer functions such as opening a file, printing a text document, cutting, copying, pasting, editing, and exiting. The speech is introduced to the computer and then subjected to a feature extraction process using LPC (linear prediction coefficients). These features are used as input to an artificial neural network in speaker dependent mode. Half of the words are used for training the artificial neural network and the other half are used for testing the system; those are used for information retrieval. The system consists of three parts: speech processing and feature extraction, training and testing using neural networks, and information retrieval. The retrieval process proved to be 79.5-88% successful, which is quite acceptable considering the variation in surroundings, the state of the speaker, and the microphone type.
Keywords: Feature extraction, Linear prediction coefficients, neural network, Speech Recognition, Scaly ANN.
1476 Comparative Study of Ant Colony and Genetic Algorithms for VLSI Circuit Partitioning
Authors: Sandeep Singh Gill, Rajeevan Chandel, Ashwani Chandel
Abstract:
This paper presents a comparative study of Ant Colony and Genetic Algorithms for VLSI circuit bi-partitioning. Ant colony optimization is an optimization method based on the behaviour of social insects [27], whereas the Genetic algorithm is an evolutionary optimization technique based on the Darwinian theory of natural evolution and its concept of survival of the fittest [19]. Both methods are stochastic in nature and have been successfully applied to solve many NP-hard problems. Results obtained show that Genetic algorithms outperform the Ant Colony optimization technique when tested on the VLSI circuit bi-partitioning problem.
Keywords: Partitioning, genetic algorithm, ant colony optimization, non-polynomial hard, netlist, mutation.
1475 Implementation of Security Algorithms for u-Health Monitoring System
Authors: Jiho Park, Yong-Gyu Lee, Gilwon Yoon
Abstract:
Data security in a u-Health system can be an important issue because wireless networks are vulnerable to hacking. However, it is not easy to implement a proper security algorithm in an embedded u-Health monitoring system because of hardware constraints such as low performance, power consumption, and limited memory size. To secure data that contain personal and biosignal information, we implemented several security algorithms such as Blowfish, data encryption standard (DES), advanced encryption standard (AES), and Rivest Cipher 4 (RC4) for our u-Health monitoring system, and the results were successful. Under the same experimental conditions, we compared these algorithms. RC4 had the fastest execution time. Memory usage was the most efficient for DES. However, considering performance and safety capability, we concluded that AES was the most appropriate algorithm for a personal u-Health monitoring system.
Keywords: biosignal, data encryption, security measures, u-health.
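A rough way to reproduce this kind of execution-time comparison on a desktop machine (not the embedded board used in the paper) is sketched below with the PyCryptodome package; the key sizes, cipher modes, and payload size are assumptions.

```python
import time
from Crypto.Cipher import AES, DES, ARC4, Blowfish
from Crypto.Random import get_random_bytes

payload = get_random_bytes(64 * 1024)  # hypothetical 64 KB biosignal block

def timed(name, make_cipher):
    start = time.perf_counter()
    make_cipher().encrypt(payload)
    print(f"{name:9s} {time.perf_counter() - start:.4f} s")

# Block ciphers run in ECB mode here purely for a like-for-like speed test;
# a real deployment should use an authenticated mode instead.
timed("AES-128",  lambda: AES.new(get_random_bytes(16), AES.MODE_ECB))
timed("DES",      lambda: DES.new(get_random_bytes(8), DES.MODE_ECB))
timed("Blowfish", lambda: Blowfish.new(get_random_bytes(16), Blowfish.MODE_ECB))
timed("RC4",      lambda: ARC4.new(get_random_bytes(16)))
```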
1474 Controlled Vocabularies and Information Retrieval: 1918 Pandemic’s Scientific Literature as an Example
Authors: M. Garcia-Alsina, J. Cobarsí
Abstract:
The role of controlled vocabularies in information retrieval is broadly recognized as relevant. Besides, there is a standing demand that editors and databases should consider the effective introduction of controlled vocabularies in their procedures to index scientific literature. That is especially important because information retrieval is pointed out as a significant step in driving systematic literature reviews. Hence, a first question emerges: are controlled vocabularies currently considered? On the other hand, subject searching in catalogs is complex, mainly due to the dichotomy between keywords from authors and keywords based on controlled vocabularies. Finally, there is some demand to unify health-related terminology to make medical history exploitation and research easier. Considering these features, this paper focuses on controlled vocabularies related to the health field and their role in storing, classifying, and retrieving relevant literature. The objective is to know which role controlled vocabularies related to the health field play in indexing and retrieving research literature in databases such as Web of Science (WoS) and Scopus. This exploratory research is grounded on two research questions: 1) Which terms are considered in specific controlled vocabularies of the health field? and 2) How are papers indexed in relevant databases so that they can be easily retrieved, considering authors' keywords versus specific health controlled vocabularies? This research takes as fieldwork the controlled vocabularies related to health and the scientific interest in the 1918 flu pandemic, also known equivocally as the ‘Spanish flu’. This interest has been fostered by the emergence in the early 21st century of epidemics of pneumonic diseases caused by viruses. Searches about and with controlled vocabularies are conducted on the WoS and Scopus databases. First results of this work in progress are surprising. There are different controlled vocabularies for the health field, in which the collected and preferred terms related to the ‘1918 pandemic’ are identified. To summarize, ‘Spanish influenza epidemic’ and ‘Spanish flu’ are collected as non-preferred terms. The preferred terms are ‘influenza’ or ‘influenza pandemic, 1918-1919’. Although the controlled vocabularies are clear in their choice, most of the literature about the ‘1918 pandemic’ is retrievable by either ‘Spanish’ or ‘1918’, and the dominant word for retrieving literature is ‘Spanish’ rather than ‘1918’. This is surprising considering the existence of suitable controlled vocabularies related to health topics, and the modern guidelines of the World Health Organization concerning the naming of diseases, which point to other preferred terms. A first conclusion is the failure of the use of controlled vocabularies for a field such as health, and in consequence for WoS and Scopus. This research opens further research questions about the role controlled vocabularies play in the instructions to authors that journals deliver to documents' authors.
Keywords: Controlled vocabularies, indexing, 1918 influenza, information retrieval, keywords, 1918 pandemic, scientific databases.
1473 Prediction of MicroRNA-Target Gene by Machine Learning Algorithms in Lung Cancer Study
Authors: Nilubon Kurubanjerdjit, Nattakarn Iam-On, Ka-Lok Ng
Abstract:
MicroRNAs are small non-coding RNAs found in many different species. They play crucial roles in cancer, such as in the biological processes of apoptosis and proliferation. The identification of microRNA-target genes can be an essential first step towards revealing the role of microRNAs in various cancer types. In this paper, we predict miRNA-target genes for lung cancer by integrating prediction scores from the miRanda and PITA algorithms, used as a feature vector of miRNA-target interaction. Then, machine-learning algorithms were implemented to make the final prediction. The approach developed in this study should be of value for future studies into understanding the role of miRNAs in the molecular mechanisms enabling lung cancer formation.
Keywords: MicroRNA, miRNAs, lung cancer, machine learning, Naïve Bayes, SVM.
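The abstract describes feeding the miRanda and PITA scores of each candidate pair to a classifier; a minimal sketch of that idea with scikit-learn is shown below, on synthetic scores rather than the real miRNA-target data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Hypothetical feature vectors: [miRanda score, PITA score] per miRNA-gene pair.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([rng.normal(-15, 5, n), rng.normal(-8, 4, n)])
# Hypothetical labels: 1 = validated target, 0 = non-target.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 4, n) < -25).astype(int)

for name, clf in [("Naive Bayes", GaussianNB()), ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:11s} cross-validated accuracy: {acc:.3f}")
```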
1472 Semi-Automatic Analyzer to Detect Authorial Intentions in Scientific Documents
Authors: Kanso Hassan, Elhore Ali, Soule-dupuy Chantal, Tazi Said
Abstract:
Information Retrieval has the objective of studying models and building systems that allow a user to find the relevant documents adapted to his need for information. Information search remains a difficult problem because of the difficulty of representing and processing natural language, for example polysemy. Intentional structures promise to be a new paradigm to extend existing document structures and to enhance the different phases of document processing such as creation, editing, search, and retrieval. Recognizing the intentions of the authors of texts can reduce the scale of this problem. In this article, we present an intention recognition system based on a semi-automatic method for extracting intentional information from a corpus of texts. This system is also able to update the ontology of intentions to enrich the knowledge base containing all possible intentions of a domain. This approach uses the construction of a semi-formal ontology, considered as the conceptualization of the intentional information contained in a text. An experiment on scientific publications in the field of computer science was carried out to validate this approach.
Keywords: Information research, text analysis, intentional structure, segmentation, ontology, natural language processing.
1471 The Negative Effect of Traditional Loops Style on the Performance of Algorithms
Authors: Mahmoud Moh'd Mhashi
Abstract:
A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character-comparison and number-comparison into character-access, and 2) the starting point of checking. An experiment is performed using both English text and DNA text with different sizes. The results are compared with five algorithms, namely, Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the clock time required by the other algorithms is improved by between 22.13% and 42.33% with the new CCCA algorithm.
Keywords: Pattern matching, string searching, character-comparison, character-access, text type, checking.
1470 Hexagonal Honeycomb Sandwich Plate Optimization Using Gravitational Search Algorithm
Authors: A. Boudjemai, A. Zafrane, R. Hocine
Abstract:
Honeycomb sandwich panels are increasingly used in the construction of space vehicles because of their outstanding strength, stiffness, and light weight. However, the use of honeycomb sandwich plates comes with difficulties in the design process as a result of the large number of design variables involved, including composite material design, shape, and geometry. Hence, this work presents an optimal design of hexagonal honeycomb sandwich structures subjected to the space environment. The optimization process is performed using a set of algorithms including the gravitational search algorithm (GSA). Numerical results are obtained and presented for the set of algorithms. The results obtained by the GSA algorithm are much better compared to the other algorithms used in this study.
Keywords: Optimization, Gravitational search algorithm, Genetic algorithm, Honeycomb plate.
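The paper does not list its GSA settings, but the basic update rules of the gravitational search algorithm (mass assignment from fitness, a decaying gravitational constant, force-driven velocity updates) can be sketched as below, here minimizing a simple sphere function instead of the honeycomb panel model.

```python
import numpy as np

def gsa(fitness, dim=5, agents=20, iters=200, g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm for minimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (agents, dim))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([fitness(xi) for xi in x])
        if f.min() < best_f:
            best_f, best_x = f.min(), x[f.argmin()].copy()
        # Masses: better (lower) fitness -> larger mass.
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)     # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(agents):
            for j in range(agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * g * m[j] * diff / dist
        v = rng.random((agents, dim)) * v + acc
        x = x + v
    return best_x, best_f

best_x, best_f = gsa(lambda z: np.sum(z ** 2))
print("best objective:", best_f)
```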
1469 Genetic Combined with a Simplex Algorithm as an Efficient Method for the Detection of a Depressed Ellipsoidal Flaw using the Boundary Element Method
Authors: Clio G. Vossou, Ioannis N. Koukoulis, Christopher G. Provatidis
Abstract:
The present work addresses the solution of the defect identification problem with the use of an evolutionary algorithm combined with a simplex method. In more detail, a Matlab implementation of Genetic Algorithms is combined with a Simplex method in order to lead to the successful identification of the defect. The influence of the location and the orientation of the depressed ellipsoidal flaw was investigated, as well as the use of different amounts of static data in the cost function. The results were evaluated according to the ability of the simplex method to locate the global optimum in each test case. In this way, a clear impression was obtained regarding the performance of the novel combination of the optimization algorithms and the influence of the geometrical parameters of the flaw in defect identification problems.
Keywords: Defect identification, genetic algorithms, optimization.
1468 Stochastic Learning Algorithms for Modeling Human Category Learning
Authors: Toshihiko Matsuka, James E. Corter
Abstract:
Most neural network (NN) models of human category learning use a gradient-based learning method, which assumes that locally-optimal changes are made to model parameters on each learning trial. This method tends to underpredict variability in individual-level cognitive processes. In addition, many recent models of human category learning have been criticized for not being able to replicate the rapid changes in categorization accuracy and attention processes observed in empirical studies. In this paper we introduce stochastic learning algorithms for NN models of human category learning and show that use of the algorithms can result in (a) rapid changes in accuracy and attention allocation, and (b) different learning trajectories and more realistic variability at the individual level.
Keywords: category learning, cognitive modeling, radial basis function, stochastic optimization.
1467 Comparison of Back-Projection with Non-Uniform Fast Fourier Transform for Real-Time Photoacoustic Tomography
Authors: Moung Young Lee, Chul Gyu Song
Abstract:
Photoacoustic imaging is an imaging technology that combines optical imaging and ultrasound, providing high contrast and high resolution from the optical and acoustic components, respectively. We developed a real-time photoacoustic tomography (PAT) system using a linear ultrasound transducer and a digital acquisition (DAQ) board. There are two types of algorithm for reconstructing the photoacoustic signal: one is the back-projection algorithm, the other is the FFT algorithm; in particular, we used the non-uniform FFT algorithm. To evaluate the performance of our system and algorithms, we imaged two wires placed at intervals of 2.89 mm and 0.87 mm and compared the images reconstructed by the two algorithms. Finally, we imaged two crossed hairs and again compared the algorithms.
Keywords: Back-projection, image comparison, non-uniform FFT, photoacoustic tomography.
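The back-projection reconstruction used as the baseline in this comparison is essentially delay-and-sum beamforming over a linear array; a simplified sketch (with assumed element spacing, sampling rate, and sound speed, not the authors' hardware settings) is given below.

```python
import numpy as np

def backproject(rf, pitch=0.3e-3, fs=40e6, c=1540.0,
                nx=128, nz=128, dx=0.1e-3, dz=0.1e-3):
    """Delay-and-sum back-projection of photoacoustic RF data.

    rf: array of shape (n_elements, n_samples) recorded by a linear array.
    Returns an (nz, nx) image; each pixel sums the samples whose one-way
    time of flight matches the pixel-to-element distance.
    """
    n_elem, n_samp = rf.shape
    elem_x = (np.arange(n_elem) - n_elem / 2) * pitch
    xs = (np.arange(nx) - nx / 2) * dx
    zs = np.arange(1, nz + 1) * dz
    img = np.zeros((nz, nx))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            dist = np.sqrt((x - elem_x) ** 2 + z ** 2)   # one-way path per element
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samp
            img[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return img

# Hypothetical noise-only data, just to show the call signature.
rf = np.random.default_rng(2).standard_normal((64, 2048))
print(backproject(rf).shape)
```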
1466 Synthesis of Digital Circuits with Genetic Algorithms: A Fractional-Order Approach
Authors: Cecília Reis, J. A. Tenreiro Machado, J. Boaventura Cunha
Abstract:
This paper analyses the performance of a genetic algorithm using a new concept, namely a fractional-order dynamic fitness function, for the synthesis of combinational logic circuits. The experiments reveal superior results in terms of speed and convergence to achieve a solution.
Keywords: Circuit design, fractional-order systems, genetic algorithms, logic circuits.
1465 A Copyright Protection Scheme for Color Images using Secret Sharing and Wavelet Transform
Authors: Shang-Lin Hsieh, Lung-Yao Hsu, I-Ju Tsai
Abstract:
This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
Keywords: Color image, copyright protection, discrete wavelet transform, secret sharing, watermarking.
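The generation phase described here (sample a plane from the YCbCr image, take DWT features, combine them with the watermark into a share) can be outlined with PyWavelets and NumPy; the binarisation rule and XOR-style share combination below are simplifications for illustration, not the exact published construction.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
image_y = rng.integers(0, 256, (256, 256)).astype(float)    # stand-in Y plane
watermark = rng.integers(0, 2, (128, 128)).astype(np.uint8)  # binary watermark

# Feature extraction: approximation sub-band of a one-level 2-D DWT,
# binarised against its median (an assumed, simplified feature rule).
cA, (cH, cV, cD) = pywt.dwt2(image_y, "haar")
features = (cA > np.median(cA)).astype(np.uint8)

# Principal share: combine features with the watermark so that the
# watermark can later be reconstructed from the suspect image's features.
share = np.bitwise_xor(features, watermark)

# Retrieval on the (here unmodified) image recovers the watermark exactly.
recovered = np.bitwise_xor((cA > np.median(cA)).astype(np.uint8), share)
print("watermark recovered:", np.array_equal(recovered, watermark))
```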
1464 Application and Limitation of Parallel Modeling in Multidimensional Sequential Pattern
Authors: Mahdi Esmaeili, Mansour Tarafdar
Abstract:
The goal of data mining algorithms is to discover useful information embedded in large databases. One of the most important data mining problems is the discovery of frequently occurring patterns in sequential data. In a multidimensional sequence, each event depends on more than one dimension. The search space is quite large and serial algorithms are not scalable for very large datasets. To address this, it is necessary to study scalable parallel implementations of sequence mining algorithms. In this paper, we present a model for multidimensional sequences and describe a parallel algorithm based on data parallelism. Simulation experiments show good load balancing and scalable, acceptable speedup over different processors and problem sizes, and demonstrate that our approach can work efficiently in a real parallel computing environment.
Keywords: Sequential Patterns, Data Mining, Parallel Algorithm, Multidimensional Sequence Data.
1463 Choosing Search Algorithms in Bayesian Optimization Algorithm
Authors: Hao Wu, Jonathan L. Shapiro
Abstract:
The Bayesian Optimization Algorithm (BOA) is an algorithm based on the estimation of distributions. It uses techniques for modeling data with Bayesian networks to estimate the joint distribution of promising solutions. To obtain the structure of the Bayesian network, different search algorithms can be used. The key point that BOA addresses is whether the constructed Bayesian network can generate new and useful solutions (strings), which can lead the algorithm in the right direction to solve the problem. Undoubtedly, this ability is a crucial factor in the efficiency of BOA. Various search algorithms can be used in BOA, but their performance differs. To choose between them, a suitable method to present their ability difference is needed. In this paper, a greedy search algorithm and a stochastic search algorithm are used in BOA to solve a certain optimization problem. A method using Kullback-Leibler (KL) divergence to reflect their difference is described.
Keywords: Bayesian optimization algorithm, greedy search, KL divergence, stochastic search.
1462 Predicting Groundwater Areas Using Data Mining Techniques: Groundwater in Jordan as Case Study
Authors: Faisal Aburub, Wael Hadi
Abstract:
Data mining is the process of extracting useful or hidden information from a large database. Extracted information can be used to discover relationships among features, where data objects are grouped according to logical relationships, or to assign unseen objects to one of the predefined groups. In this paper, we investigate four well-known data mining algorithms in order to predict groundwater areas in Jordan. These algorithms are Support Vector Machines (SVMs), Naïve Bayes (NB), K-Nearest Neighbor (kNN), and Classification Based on Association Rule (CBA). The experimental results indicate that the SVMs algorithm outperformed the other algorithms in terms of classification accuracy, precision, and F1 evaluation measures, using datasets of groundwater areas collected from the Jordanian Ministry of Water and Irrigation.
Keywords: Classification, data mining, evaluation measures, groundwater.
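Three of the four classifiers compared (SVM, Naïve Bayes, kNN) and the accuracy/precision/F1 measures are standard scikit-learn components; a sketch of such a comparison on synthetic data follows (CBA has no scikit-learn implementation and is omitted).

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hypothetical stand-in for the groundwater-area dataset.
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "NB": GaussianNB(), "kNN": KNeighborsClassifier()}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:3s} accuracy={accuracy_score(y_te, pred):.3f} "
          f"precision={precision_score(y_te, pred):.3f} "
          f"F1={f1_score(y_te, pred):.3f}")
```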
1461 Pruning Algorithm for the Minimum Rule Reduct Generation
Authors: Şahin Emrah Amrahov, Fatih Aybar, Serhat Doğan
Abstract:
In this paper we consider the rule reduct generation problem. The Rule Reduct Generation (RG) and Modified Rule Generation (MRG) algorithms, which are used to solve this problem, are well known. As an alternative to these algorithms, we develop the Pruning Rule Generation (PRG) algorithm. We compare the PRG algorithm with RG and MRG.
Keywords: Rough sets, Decision rules, Rule induction, Classification.
1460 Proposing a Pareto-based Multi-Objective Evolutionary Algorithm to Flexible Job Shop Scheduling Problem
Authors: Seyed Habib A. Rahmati
Abstract:
During the last decades, developing multi-objective evolutionary algorithms for optimization problems has received considerable attention. The flexible job shop scheduling problem, as an important scheduling optimization problem, has received this attention too. However, most of the multi-objective algorithms developed for this problem take a simplistic approach: they combine their objectives and then solve the multi-objective problem through single-objective methods, with the exception of a few studies that use Pareto-based algorithms. Therefore, in this paper, a new Pareto-based algorithm called the controlled elitism non-dominated sorting genetic algorithm (CENSGA) is proposed for the multi-objective FJSP (MOFJSP). The considered objectives are makespan, critical machine workload, and total workload of machines. The proposed algorithm is also compared statistically with one of the best Pareto-based algorithms of the literature on several multi-objective criteria.
Keywords: Scheduling, Flexible job shop scheduling problem, controlled elitism non-dominated sorting genetic algorithm.
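The non-dominated sorting step at the heart of NSGA-style algorithms such as the proposed CENSGA can be illustrated in a few lines of Python; the three-objective vectors below (makespan, critical machine workload, total workload) are made-up values, and this is only the ranking step, not the full algorithm.

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Partition objective vectors into successive Pareto fronts."""
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (makespan, critical machine workload, total workload) tuples.
population = [(40, 18, 95), (38, 22, 99), (45, 15, 90), (50, 25, 120), (39, 19, 94)]
print(non_dominated_sort(population))   # first front, then dominated solutions
```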
1459 Two Wheels Balancing Robot with Line Following Capability
Authors: Nor Maniha Abdul Ghani, Faradila Naim, Tan Piow Yon
Abstract:
This project focuses on the development of a line follower algorithm for a two wheels balancing robot. In this project, the ATMEGA32 is chosen as the brain board controller to react to the data received from the Balance Processor Chip on the balance board, and to monitor changes in the environment through two infra-red distance sensors in order to solve the inclination angle problem. Hence, the system will immediately restore to the set point (balance position) through the implementation of internal PID algorithms on the balance board. Application of infra-red light sensors with PID control is vital in order to develop a smooth line follower robot. As a result of the combination of the line follower program and the internal self-balancing algorithms, we are able to develop a dynamically stabilized balancing robot with a line follower function.
Keywords: infra-red sensor, PID algorithms, line follower, balancing robot.
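A discrete PID loop of the kind referred to here is short enough to sketch; the gains, sampling period, and the toy inclination-error response are invented for illustration and bear no relation to the robot's actual tuning.

```python
class PID:
    """Minimal discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive an inclination angle back to the balance position (0 deg).
pid = PID(kp=4.0, ki=0.5, kd=0.1, dt=0.01)
angle = 5.0                       # initial tilt in degrees
for _ in range(5):
    command = pid.update(0.0, angle)
    angle -= 0.02 * command       # crude plant response, illustration only
    print(f"angle={angle:.3f} command={command:.2f}")
```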
1458 Business Domain Modelling Using an Integrated Framework
Authors: Mohammed Salahat, Steve Wade
Abstract:
This paper presents an application of a “Systematic Soft Domain Driven Design Framework” as a soft systems approach to domain-driven design of information systems development. The framework uses SSM as a guiding methodology within which we have embedded a sequence of design tasks based on the UML, leading to the implementation of a software system using the Naked Objects framework. This framework has been used in action research projects that have involved the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous works, and a real case study, “Information Retrieval System for academic research”, is used in this paper to show further practice and evaluation of the framework in a different business domain. We argue that there are advantages to combining and using techniques from different methodologies in this way for business domain modelling. The framework is overviewed and justified as a multimethodology using Mingers' multimethodology ideas.
Keywords: SSM, UML, domain-driven design, soft domain-driven design, naked objects, soft language, information retrieval, multimethodology.
1457 A Hybrid Nature Inspired Algorithm for Generating Optimal Query Plan
Authors: R. Gomathi, D. Sharmila
Abstract:
The use of Semantic Web technology is increasing day by day due to the rapid growth in the number of web pages. Many standard formats are available to store semantic web data; the most popular format is the Resource Description Framework (RDF). Querying large RDF graphs becomes a tedious procedure as the amount of data increases, so query optimization becomes an issue in querying large RDF graphs. Choosing the best query plan reduces the query execution time. To address this problem, nature inspired algorithms can be used as an alternative to the traditional query optimization techniques. In this research, the optimal query plan is generated by the proposed SAPSO algorithm, which is a hybrid of the Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The proposed SAPSO algorithm has the ability to exploit good local results while avoiding the problem of local minima. Experiments were performed on different datasets by changing the number of predicates and the amount of data. The proposed algorithm gives improved results compared to existing algorithms in terms of query execution time.
Keywords: Semantic web, RDF, Query optimization, Nature inspired algorithms, PSO, SA.
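The abstract names the SA/PSO hybridization but not its exact update rules; a plausible minimal sketch of a PSO whose guiding global best is replaced using a simulated annealing acceptance test is given below, applied to a toy continuous function rather than RDF query plans.

```python
import math
import numpy as np

def sapso(objective, dim=4, swarm=20, iters=200, temp=1.0, cooling=0.98, seed=0):
    """Toy SA/PSO hybrid: standard PSO velocity updates, but the guiding global
    best may be replaced by a worse candidate with a simulated annealing
    probability, which helps the swarm escape local minima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (swarm, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    guide = pbest[pbest_f.argmin()].copy()       # guiding "global best"
    guide_f = pbest_f.min()
    best, best_f = guide.copy(), guide_f          # best solution ever seen
    for _ in range(iters):
        for i in range(swarm):
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = 0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) + 1.5 * r2 * (guide - x[i])
            x[i] += v[i]
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f
            if f < best_f:
                best, best_f = x[i].copy(), f
            # SA acceptance test for the guide: always accept improvements,
            # sometimes accept a worse candidate while the temperature is high.
            if f < guide_f or rng.random() < math.exp(-(f - guide_f) / max(temp, 1e-9)):
                guide, guide_f = x[i].copy(), f
        temp *= cooling
    return best, best_f

best, best_f = sapso(lambda z: float(np.sum(z ** 2)))
print("best objective found:", best_f)
```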
1456 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm: k-means for clustering numeric datasets and k-modes for categorical datasets. A major problem encountered in data mining applications is the clustering of categorical datasets. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of k-modes. In this paper, we propose an approach that transforms the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
Keywords: Clustering, k-means, categorical datasets, pattern recognition, unsupervised learning, knowledge discovery.
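The central transformation described here, replacing each categorical value with the relative frequency of that modality in its attribute before running k-means, is easy to sketch with pandas and scikit-learn; the tiny example table is invented.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical categorical dataset.
df = pd.DataFrame({
    "color": ["red", "red", "blue", "green", "blue", "red"],
    "shape": ["round", "square", "round", "round", "square", "round"],
})

# Replace each value by the relative frequency of that modality in its column.
encoded = df.apply(lambda col: col.map(col.value_counts(normalize=True)))
print(encoded)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(encoded)
print("cluster assignments:", labels)
```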
1455 A Hybridization of Constructive Beam Search with Local Search for Far From Most Strings Problem
Authors: Sayyed R Mousavi
Abstract:
The Far From Most Strings Problem (FFMSP) is to obtain a string which is far from as many as possible of a given set of strings. All the input and output strings are of the same length, and two strings are said to be far if their Hamming distance is greater than or equal to a given positive integer. FFMSP belongs to the class of sequence consensus problems, which have applications in molecular biology. The problem is NP-hard; it does not admit a constant-ratio approximation either, unless P = NP. Therefore, in addition to exact and approximate algorithms, (meta)heuristic algorithms have been proposed for the problem in recent years. On the other hand, in recent years, hybrid algorithms have been proposed and successfully used for many hard problems in a variety of domains. In this paper, a new metaheuristic algorithm called Constructive Beam and Local Search (CBLS), which is a hybridization of constructive beam search and local search algorithms, is investigated for the problem. More specifically, the proposed algorithm consists of two phases: the first phase obtains several candidate solutions via constructive beam search, and the second phase applies local search to the candidate solutions obtained in the first phase. The best solution found is returned as the final solution to the problem. The proposed algorithm is also similar to memetic algorithms in the sense that both use local search to further improve individual solutions. The CBLS algorithm is compared with the most recently published algorithm for the problem, GRASP, with significantly positive results; the improvement is by orders of magnitude in most cases.
Keywords: Bioinformatics, Far From Most Strings Problem, Hybrid metaheuristics, Matheuristics, Sequences consensus problems.
1454 Slime Mould Optimization Algorithms for Optimal Distributed Generation Integration in Distribution Electrical Network
Authors: F. Fissou Amigue, S. Ndjakomo Essiane, S. Pérabi Ngoffé, G. Abessolo Ondoa, G. Mengata Mengounou, T. P. Nna Nna
Abstract:
This document proposes a method for determining the optimal point of integration of distributed generation (DG) in a distribution grid. Slime mould optimization is applied to determine the best node in the cases of one and two injection points. The problem has been modeled as an optimization problem where the objective is to minimize Joule losses and the main constraint is to regulate the voltage at each node. The proposed method has been implemented in MATLAB and applied to the IEEE 33-node and 69-node networks. Comparing the results obtained with other algorithms showed that the slime mould optimization algorithm (SMOA) gives the best reduction of power losses and a good improvement of the voltage profile.
Keywords: Optimization, distributed generation, integration, slime mould algorithm.
1453 Simulated Annealing and Genetic Algorithm in Telecommunications Network Planning
Authors: Aleksandar Tsenov
Abstract:
The main goal of this work is to propose a way for the combined use of two nontraditional algorithms for solving topological problems on telecommunications concentrator networks. The algorithms suggested are the Simulated Annealing algorithm and the Genetic Algorithm. The Simulated Annealing algorithm generalizes the well-known local search algorithms. In addition, Simulated Annealing allows the acceptance of moves in the search space which lead to decisions with higher cost, in order to attempt to overcome any local minima obtained. The Genetic Algorithm is a heuristic approach which is used in a wide range of optimization work. In recent years this approach has also been widely applied in telecommunications network planning. In order to solve a more or less complex planning problem it is important to find the most appropriate parameters for initializing the function of the algorithm.
Keywords: Concentrator network, genetic algorithm, simulated annealing, UCPL.
1452 A New Routing Algorithm: MIRAD
Authors: Amir Gholami Pastaki, Ali Reza Sahab, Seyed Mehdi Sadeghi
Abstract:
LSP routing is among the prominent issues in MPLS network traffic engineering. The objective of this routing is to increase the number of accepted requests while guaranteeing the quality of service (QoS). Requested bandwidth is the most important QoS criterion considered in the literature, and a number of heuristic algorithms have been presented in that regard. Many of these algorithms prevent flows from passing through bottlenecks of the network in order to perform load balancing, which impedes optimum operation of the network. Here, a new routing algorithm, MIRAD, is proposed: with only limited information about the network topology, the links' residual bandwidth, and the prospective requests, it provides every request with maximum bandwidth as well as minimum end-to-end delay via uniform load distribution across the network. Simulation results of the proposed algorithm show better efficiency in comparison with similar algorithms.
Keywords: new generation networks, QoS, traffic engineering, MPLS, QoS based routing, LSP.
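MIRAD's internals are not given in the abstract, but the general kind of QoS-based LSP selection discussed, finding a path that satisfies the requested bandwidth while keeping end-to-end delay low, can be sketched with NetworkX; the toy topology, the residual-bandwidth pruning step, and the delay weights are assumptions rather than the MIRAD algorithm itself.

```python
import networkx as nx

# Hypothetical topology: edges carry residual bandwidth (Mb/s) and delay (ms).
G = nx.Graph()
G.add_edge("A", "B", bw=100, delay=2)
G.add_edge("B", "D", bw=40,  delay=2)
G.add_edge("A", "C", bw=60,  delay=5)
G.add_edge("C", "D", bw=60,  delay=5)

def route_lsp(graph, src, dst, demand):
    """Prune links with insufficient residual bandwidth, then pick the
    minimum-delay path among the survivors and reserve the bandwidth."""
    feasible = nx.Graph()
    feasible.add_edges_from((u, v, d) for u, v, d in graph.edges(data=True)
                            if d["bw"] >= demand)
    path = nx.shortest_path(feasible, src, dst, weight="delay")
    for u, v in zip(path, path[1:]):
        graph[u][v]["bw"] -= demand          # update residual bandwidth
    return path

print(route_lsp(G, "A", "D", demand=50))  # -> ['A', 'C', 'D'] (B-D lacks bandwidth)
print(route_lsp(G, "A", "D", demand=30))  # -> ['A', 'B', 'D'] after the first reservation
```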