Search results for: Scheduling algorithms.
1326 A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing
Authors: P. S. Prakash, S. Selvan
Abstract:
QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding least-cost delay-constrained routes is known to be NP-hard, and several algorithms have been proposed to find near-optimal solutions. However, these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm takes an approach to delay-constrained routing that captures the trade-off between cost minimization and the risk level associated with the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
Keywords: QoS, Delay, Routing, Optimization.
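As a generic point of reference for delay-constrained least-cost routing (a baseline label-setting search, not the CDCR or ODCR algorithms themselves, whose details are given in the paper), the sketch below finds the cheapest path that respects a delay bound; the toy graph, costs, and delays are illustrative assumptions.

```python
import heapq

def delay_constrained_least_cost(graph, src, dst, max_delay):
    """Cheapest src->dst path whose total delay stays within max_delay.

    graph: {node: [(neighbor, cost, delay), ...]} -- an illustrative format.
    Returns (cost, path) or None if no feasible path exists.
    """
    # Priority queue ordered by cost; dominated (cost, delay) labels are pruned.
    pq = [(0, 0, src, [src])]
    labels = {}  # node -> list of non-dominated (cost, delay) labels seen so far
    while pq:
        cost, delay, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if any(c <= cost and d <= delay for c, d in labels.get(node, [])):
            continue  # an existing label dominates this one
        labels.setdefault(node, []).append((cost, delay))
        for nxt, c, d in graph.get(node, []):
            if delay + d <= max_delay and nxt not in path:
                heapq.heappush(pq, (cost + c, delay + d, nxt, path + [nxt]))
    return None

# Toy example: the direct link is fast but costly; the detour is cheap but slow.
g = {"s": [("a", 1, 4), ("d", 10, 1)], "a": [("d", 1, 4)]}
print(delay_constrained_least_cost(g, "s", "d", 5))   # (10, ['s', 'd'])
print(delay_constrained_least_cost(g, "s", "d", 10))  # (2, ['s', 'a', 'd'])
```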
1325 Open Source Algorithms for 3D Geo-Representation of Subsurface Formations Properties in the Oil and Gas Industry
Authors: Gabriel Quintero
Abstract:
This paper presents the results of implementing a series of algorithms intended for representing subsurface formation properties in most 3D geographic software, including Google Earth, by combining 2D charts or 3D plots over a 3D background, so that anyone can use them regardless of the economic size of the company they work for. Although complex and expensive specialized software exists for modeling subsurface formations from the same input information, this open-source development shows better and easier usability and good results, while limiting the rendered properties and polygons to a basic set of charts and tubes.
Keywords: Chart, earth, formations, subsurface, visualization.
1324 The Projection Methods for Computing the Pseudospectra of Large Scale Matrices
Authors: Zhengsheng Wang, Xiangyong Ji, Yong Du
Abstract:
The projection methods, usually viewed as methods for computing eigenvalues, can also be used to estimate pseudospectra. This paper proposes a class of projection methods for computing the pseudospectra of large scale matrices, including an orthogonal projection method and an oblique projection method. This possibility may be of practical importance in applications involving large scale, highly nonnormal matrices. Numerical algorithms are given, and numerical experiments illustrate the efficiency of the new algorithms.
Keywords: Pseudospectra, eigenvalue, projection method, Arnoldi, IOM(q).
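As a rough illustration of the projection idea only (a generic Arnoldi-based sketch, not the orthogonal or oblique methods proposed in the paper), the code below projects a large matrix onto a Krylov subspace and estimates the pseudospectrum from the smallest singular value of zI - H_m over a grid of points z; the test matrix, subspace dimension, and grid are arbitrary assumptions.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi iteration: returns the orthonormal basis Q and Hessenberg matrix H."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # breakdown: exact invariant subspace
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

def projected_pseudospectrum(A, m, grid):
    """sigma_min(zI - H_m) on a grid, as a cheap pseudospectrum estimate."""
    rng = np.random.default_rng(0)
    _, H = arnoldi(A, rng.standard_normal(A.shape[0]), m)
    Hm = H[:-1, :]                            # square m x m projected matrix
    eye = np.eye(Hm.shape[0])
    return np.array([[np.linalg.svd(z * eye - Hm, compute_uv=False)[-1]
                      for z in row] for row in grid])

# Example: a mildly nonnormal test matrix and a small grid in the complex plane.
n = 200
A = np.diag(np.linspace(-1.0, 1.0, n)) + np.triu(0.3 * np.ones((n, n)), 1)
xs = np.linspace(-1.5, 1.5, 30)
grid = xs[None, :] + 1j * xs[:, None]
sigmas = projected_pseudospectrum(A, m=30, grid=grid)
print(sigmas.min())   # small values mark grid points "close" to the spectrum
```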
1323 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics of opinion mining research, has merged with behavior analysis for determining affiliation in texts, which constitutes the subject of this paper. This study aims to classify news and blog texts as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which were Linguistic Inquiry and Word Count (LIWC) features, were tested against 14 benchmark classification algorithms. In the later experiments, the dimensionality of the feature vector was reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction, and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, reaching up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic feature sets showed similar results. The feature "Function", an aggregate feature of the linguistic category, was found to be the most discriminating feature among the 68, classifying articles as Republican or Democrat with 81% accuracy.
Keywords: Politics, machine learning, feature selection, LIWC.
1322 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from and housed in multiple databases, and bioassay predictions are computed from them to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data with the trade-off between accuracy and precision in mind. The third step is normalization, in which data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, in which key chemical properties and attributes are generated. The streamlined results are then analyzed for prediction effectiveness using various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
Keywords: Bioassay, machine learning, preprocessing, virtual screen.
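A minimal sketch of the four preprocessing steps listed above, written with scikit-learn on synthetic stand-in data (the descriptor count, bin count, and number of selected features are assumptions, not values from the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                    # stand-in chemical descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # stand-in activity label

# Step 1: instance selection -- split into training, testing and validation sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Step 2: discretization -- the bin count trades accuracy against precision.
disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile").fit(X_train)

# Step 3: normalization -- rescale everything into [0, 1].
scaler = MinMaxScaler().fit(disc.transform(X_train))

# Step 4: feature selection -- keep the most informative descriptors.
selector = SelectKBest(f_classif, k=10).fit(
    scaler.transform(disc.transform(X_train)), y_train)

def preprocess(X_part):
    return selector.transform(scaler.transform(disc.transform(X_part)))

print(preprocess(X_train).shape, preprocess(X_test).shape, preprocess(X_val).shape)
```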
1321 Sliding Mode Control of Pitch-Rate of an F-16 Aircraft
Authors: Ekprasit Promtun, Sridhar Seshagiri
Abstract:
This paper considers the control of the longitudinal flight dynamics of an F-16 aircraft. The primary design objective is model-following of the pitch rate q, which is the preferred system for aircraft approach and landing. Regulation of the aircraft velocity V (or the Mach-hold autopilot) is also considered, but as a secondary objective. The problem is challenging because the system is nonlinear and also non-affine in the input. A sliding mode controller is designed for the pitch rate that exploits the modal decomposition of the linearized dynamics into its short-period and phugoid approximations. The inherent robustness of the SMC design provides a convenient way to design controllers without gain scheduling, with a steady-state response comparable to that of a conventional polynomial-based gain-scheduled approach with integral control, but with improved transient performance. Integral action is introduced in the sliding mode design using the recently developed technique of "conditional integrators", and it is shown that robust regulation is achieved for asymptotically constant exogenous signals without degrading the transient response. Through extensive simulation on the nonlinear multiple-input multiple-output (MIMO) longitudinal model of the F-16 aircraft, it is shown that the conditional integrator design outperforms the one based on conventional linear control, without requiring any scheduling.
Keywords: Sliding-mode Control, Integral Control, Model Following, F-16 Longitudinal Dynamics, Pitch-Rate Control.
1320 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focus on pharmaceutical production processes requiring the culture of a microorganism population (e.g. bacteria or yeasts, as in antibiotics production). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little if at all. In this research work we formulate a constraint programming optimization (COP) model for the robust planning of antibiotics production. We develop a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we model the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if at most a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of backup culture tasks) and to at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results show that the determined super-solutions are near-optimal.
Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.
1319 Optimal Algorithm for Constructing the Delaunay Triangulation in E^d
Authors: V. Tereshchenko, D. Taran
Abstract:
In this paper we propose a new approach to constructing the Delaunay triangulation and an optimal algorithm for the case of multidimensional spaces (d ≥ 2). Analysing the state of the art, one can conclude that the ideas behind the existing efficient algorithms developed for the two-dimensional case are not simple to generalize to the multidimensional case without loss of efficiency. To solve this problem we offer an effective algorithm that satisfies all the given requirements. The theoretical complexity of the problem, however, cannot be improved, since worst-case optimality has been proved for algorithms solving this problem.
Keywords: Delaunay triangulation, multidimensional space, Voronoi Diagram, optimal algorithm.
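For readers who only need a working d-dimensional Delaunay triangulation rather than the optimal algorithm proposed here, SciPy already wraps the Qhull library; a short usage sketch with arbitrary random point sets is:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
for d in (3, 4):                       # works for any dimension d >= 2
    pts = rng.random((50, d))          # 50 random points in the unit d-cube
    tri = Delaunay(pts)
    # Each simplex is a (d+1)-tuple of point indices.
    print(f"d={d}: {tri.simplices.shape[0]} simplices, "
          f"each with {tri.simplices.shape[1]} vertices")
```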
1318 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in the recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
1317 Research of Data Cleaning Methods Based on Dependency Rules
Authors: Yang Bao, Shi Wei Deng, Wang Qun Lin
Abstract:
This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process. It puts forward a scalable and versatile data cleaning framework and, for data with attribute dependency relations, designs several violation-data discovery algorithms expressed by formal formulas, which can find data inconsistent with the target columns under conditional attribute dependencies, regardless of whether the data is structured (SQL) or unstructured (NoSQL). Six data cleaning methods based on these algorithms are then given.
Keywords: Data cleaning, dependency rules, violation data discovery, data repair.
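As a tiny illustration of violation-data discovery under an attribute dependency (a plain functional-dependency check in pandas, not the formal algorithms of the paper; the table, column names, and the dependency zip_code -> city are made up):

```python
import pandas as pd

def fd_violations(df, determinant, dependent):
    """Rows violating the functional dependency determinant -> dependent."""
    # A determinant value is inconsistent if it maps to more than one dependent value.
    bad_keys = (df.groupby(determinant)[dependent]
                  .nunique()
                  .loc[lambda s: s > 1]
                  .index)
    return df[df[determinant].isin(bad_keys)]

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "94105", "94105"],
    "city":     ["New York", "Newark", "San Francisco", "San Francisco"],
})
print(fd_violations(df, "zip_code", "city"))   # the two conflicting 10001 rows
```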
1316 A Review on Image Segmentation Techniques and Performance Measures
Authors: David Libouga Li Gwet, Marius Otesteanu, Ideal Oscar Libouga, Laurent Bitjoka, Gheorghe D. Popa
Abstract:
Image segmentation is a method to extract regions of interest from an image. It remains a fundamental problem in computer vision. The increasing diversity and complexity of segmentation algorithms have led us, firstly, to review and classify segmentation techniques, secondly, to identify the most used measures of segmentation performance, and thirdly, to discuss segmentation philosophy in depth in order to help in the choice of adequate segmentation techniques for given applications. To justify the relevance of our analysis, recent segmentation algorithms are presented through the proposed classification.
Keywords: Classification, image segmentation, measures of performance.
1315 Implicit Force Control of a Position Controlled Robot – A Comparison with Explicit Algorithms
Authors: Alexander Winkler, Jozef Suchý
Abstract:
This paper investigates simple implicit force control algorithms realizable with industrial robots. Many approaches already published are difficult to implement in commercial robot controllers, because access to the robot joint torques is necessary or the complete dynamic model of the manipulator is used. In the past we have already dealt with explicit force control of a position controlled robot. Well known schemes of implicit force control are stiffness control, damping control and impedance control. Using such algorithms the contact force cannot be set directly; it is rather the result of the controller impedance, the environment impedance and the commanded robot motion/position. The relationships between these properties are worked out in detail in this paper for the chosen implicit approaches, which have been adapted to be implementable on a position controlled robot. The behaviors of stiffness control and damping control are verified by practical experiments, for which a suitable test bed was configured. Using the full mechanical impedance within the controller structure will not be practical when the robot is in physical contact with the environment; this fact is verified by simulation.
Keywords: Damping control, impedance control, robot force control, stability, stiffness control.
1314 Smart Power Scheduling to Reduce Peak Demand and Cost of Energy in Smart Grid
Authors: Hemant I. Joshi, Vivek J. Pandya
Abstract:
This paper discusses the simulation and experimental work on a small Smart Grid containing ten consumers. A Smart Grid is characterized by a two-way flow of real-time information and energy. An RTP (Real Time Pricing) based tariff is implemented in this work to reduce peak demand, PAR (peak to average ratio) and the cost of energy consumed. In the experimental work described here, the working of the Smart Plug, HEC (Home Energy Controller), HAN (Home Area Network) and the communication link between consumers and the utility server are explained. Algorithms for the Smart Plug, HEC, and utility server are presented and explained in this work. After receiving the real-time price for the different time slots of the day, the HEC reacts automatically by running an algorithm based on the Linear Programming Problem (LPP) method to find the optimal energy consumption schedule. The algorithm made for the utility server can handle more than one off-peak time period during the day. Simulation and experimental work are carried out for different cases. At the end of this work, a comparison between simulation results and experimental results is presented to show the effectiveness of the minimization method adopted.
Keywords: Smart Grid, Real Time Pricing, Peak to Average Ratio, Home Area Network, Home Energy Controller, Smart Plug, Utility Server, Linear Programming Problem.
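A minimal sketch of the LPP idea described above, using scipy.optimize.linprog to spread a fixed daily energy requirement over hourly slots so that cost under a real-time tariff is minimized; the prices, total energy, and per-slot power limit are hypothetical, and this is not the authors' HEC algorithm:

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([3, 3, 3, 4, 6, 8, 9, 9, 7, 5, 4, 3,
                  3, 4, 5, 7, 9, 10, 10, 8, 6, 5, 4, 3], dtype=float)  # RTP per kWh
total_energy = 12.0          # kWh the household must consume today
max_per_slot = 1.5           # kWh limit enforced per slot by the smart plug / HEC

# minimize  price . x   subject to  sum(x) = total_energy,  0 <= x <= max_per_slot
res = linprog(c=price,
              A_eq=np.ones((1, 24)), b_eq=[total_energy],
              bounds=[(0, max_per_slot)] * 24,
              method="highs")

schedule = res.x
print("cost:", round(price @ schedule, 2))
print("peak-to-average ratio:", round(schedule.max() / schedule.mean(), 2))
```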
1313 Distribution Voltage Regulation Under Three-Phase Fault by Using D-STATCOM
Authors: Chaiyut Sumpavakup, Thanatchai Kulworawanichpong
Abstract:
This paper presents a voltage regulation scheme for the D-STATCOM under three-phase faults. It consists of voltage detection and voltage regulation schemes in the 0dq reference frame. The proposed control strategy uses a proportional controller in which the proportional gain, kp, is appropriately adjusted by using genetic algorithms. To verify its use, a simplified 4-bus test system is simulated by assuming a three-phase fault at bus 4. As a result, the D-STATCOM can restore the load voltage to the desired level within 1.8 ms. This confirms that the proposed voltage regulation scheme performs well under three-phase fault events.
Keywords: D-STATCOM, proportional controller, genetic algorithms.
1312 Classic and Heuristic Approaches in Robot Motion Planning – A Chronological Review
Authors: Ellips Masehian, Davoud Sedighizadeh
Abstract:
This paper reviews the major contributions to the Motion Planning (MP) field throughout a 35-year period, from classic approaches to heuristic algorithms. Due to the NP-hardness of the MP problem, heuristic methods have outperformed the classic approaches and have gained wide popularity. After surveying around 1400 papers in the field, the number of existing works for each method is identified and classified. In particular, the history and applications of numerous heuristic methods in MP are investigated. The paper concludes with comparative tables and graphs demonstrating the frequency of each MP method's application, and so can be used as a guideline for MP researchers.
Keywords: Robot motion planning, Heuristic algorithms.
1311 Segmentation Problems and Solutions in Printed Degraded Gurmukhi Script
Authors: M. K. Jindal, G. S. Lehal, R. K. Sharma
Abstract:
Character segmentation is an important preprocessing step for text recognition. In degraded documents, the existence of touching characters drastically decreases the recognition rate of any optical character recognition (OCR) system. In this paper we propose a complete solution for segmenting touching characters in all three zones of printed Gurmukhi script. A study of touching Gurmukhi characters is carried out, and these characters have been divided into various categories after careful analysis, with the structural properties of the Gurmukhi characters used to define the categories. New algorithms have been proposed to segment the touching characters in the middle zone, upper zone and lower zone. These algorithms show a reasonable improvement in segmenting touching characters in degraded printed Gurmukhi script. The algorithms proposed in this paper are applicable only to machine-printed text. We also discuss a new and useful technique to segment horizontally overlapping lines.
Keywords: Character Segmentation, Middle Zone, Upper Zone, Lower Zone, Touching Characters, Horizontally Overlapping Lines.
1310 Analysis of Modified Heap Sort Algorithm on Different Environment
Authors: Vandana Sharma, Parvinder S. Sandhu, Satwinder Singh, Baljit Saini
Abstract:
In the fields of computer science and mathematics, a sorting algorithm is an algorithm that puts the elements of a list in a certain order, i.e. ascending or descending. Sorting is perhaps the most widely studied problem in computer science and is frequently used as a benchmark of a system's performance. This paper presents a comparative performance study of four sorting algorithms on different platforms. For each machine, it is found that performance depends upon the number of elements to be sorted. In addition, as expected, the results show that the relative performance of the algorithms differed on the various machines. So, algorithm performance is dependent on data size, and hardware also has an impact.
Keywords: Algorithm, Analysis, Complexity, Sorting.
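As a small illustration of the kind of measurement described above (a plain heapsort timed against Python's built-in sort on random data; the list sizes are arbitrary and this is not the paper's modified heap sort), one might run:

```python
import heapq
import random
import time

def heap_sort(items):
    """Plain heap sort: build a heap, then pop elements in ascending order."""
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

for n in (10_000, 100_000, 500_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    heap_sort(data)
    t1 = time.perf_counter()
    sorted(data)
    t2 = time.perf_counter()
    print(f"n={n:>7}: heap sort {t1 - t0:.3f}s, built-in sort {t2 - t1:.3f}s")
```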
1309 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for finding routes in FPGAs. Using DNA computing, however, routes can be found efficiently and quickly: the run-time complexity of DNA algorithms is much lower than that of the other classical algorithms used for solving FPGA routing. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing: a satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve the routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to solving the FPGA routing problem.
Keywords: FPGA, Routing, DNA Computing.
1308 Sample-Weighted Fuzzy Clustering with Regularizations
Authors: Miin-Shen Yang, Yee-Shan Pan
Abstract:
Although there has been much research in cluster analysis on feature weights, little effort has been made on sample weights. Recently, Yu et al. (2011) considered a probability distribution over a data set to represent its sample weights and then proposed sample-weighted clustering algorithms. In this paper, we give a sample-weighted version of the generalized fuzzy clustering regularization (GFCR), called the sample-weighted GFCR (SW-GFCR). Several experiments are considered. The experimental results and comparisons demonstrate that the proposed SW-GFCR is more effective than most clustering algorithms.
Keywords: Clustering, fuzzy c-means, fuzzy clustering, sample weights, regularization.
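The SW-GFCR updates themselves are given in the paper; as a simpler cousin for intuition, the sketch below implements a sample-weighted fuzzy c-means in which each data point carries a weight w_k in the centroid update (the data, weights, and fuzzifier m = 2 are arbitrary assumptions):

```python
import numpy as np

def sample_weighted_fcm(X, w, c, m=2.0, n_iter=100, eps=1e-6, seed=0):
    """Fuzzy c-means where each sample k carries a weight w[k]."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = (U ** m) * w                               # weighted fuzzified memberships
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # cluster centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = D ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return V, U

# Two blobs; points in the second blob are given larger sample weights.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
w = np.r_[np.ones(50), 3 * np.ones(50)]
V, U = sample_weighted_fcm(X, w, c=2)
print(np.round(V, 2))        # centers near (0, 0) and (3, 3)
```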
1307 Data Preprocessing for Supervised Learning
Authors: S. B. Kotsiantis, D. Kanellopoulos, P. E. Pintelas
Abstract:
Many factors affect the success of Machine Learning (ML) on a given task. The representation and quality of the instance data is first and foremost. If there is much irrelevant and redundant information present, or noisy and unreliable data, then knowledge discovery during the training phase is more difficult. It is well known that data preparation and filtering steps take a considerable amount of processing time in ML problems. Data pre-processing includes data cleaning, normalization, transformation, feature extraction and selection, etc. The product of data pre-processing is the final training set. It would be convenient if a single sequence of data pre-processing algorithms had the best performance for every data set, but this is not the case. Thus, we present the most well-known algorithms for each step of data pre-processing so that one can achieve the best performance for one's data set.
Keywords: Data mining, feature selection, data cleaning.
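As a minimal illustration of chaining such pre-processing steps so that the same sequence is applied consistently to training and test folds (the particular steps and estimators below, applied to synthetic data, are one possible choice, not a recommendation from the paper):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

pipe = Pipeline([
    ("clean", SimpleImputer(strategy="mean")),            # data cleaning (no-op here)
    ("normalize", StandardScaler()),                       # normalization
    ("select", SelectKBest(mutual_info_classif, k=8)),     # feature selection
    ("learn", GaussianNB()),
])
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```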
1306 Data Mining in Medicine Domain Using Decision Trees and Support Vector Machine
Authors: Djamila Benhaddouche, Abdelkader Benyettou
Abstract:
In this paper, we use data mining to extract biomedical knowledge. In general, the complex biomedical data collected in population studies are treated with statistical methods; although these are robust, they are not sufficient in themselves to harness the potential wealth of the data. For this purpose, two learning algorithms are used in a second step: Decision Trees and the Support Vector Machine (SVM). These supervised classification methods are used to make the diagnosis of thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
Keywords: Classifier, decision tree algorithms, knowledge extraction, Support Vector Machine.
1305 On Reversal and Transposition Medians
Authors: Martin Bader
Abstract:
During the last years, the genomes of more and more species have been sequenced, providing data for phylogenetic reconstruction based on genome rearrangement measures. A main task in all phylogenetic reconstruction algorithms is to solve the median of three problem. Although this problem is NP-hard even for the simplest distance measures, there are exact algorithms for the breakpoint median and the reversal median that are fast enough for practical use. In this paper, this approach is extended to the transposition median as well as to the weighted reversal and transposition median. Although there is no exact polynomial algorithm known even for the pairwise distances, we will show that it is in most cases possible to solve these problems exactly within reasonable time by using a branch and bound algorithm.
Keywords: Comparative genomics, genome rearrangements, median, reversals, transpositions.
1304 Speed Regulation of a Small BLDC Motor Using Genetic-Based Proportional Control
Authors: S. Poonsawat, T. Kulworawanichpong
Abstract:
This paper presents a speed regulation scheme for a small brushless DC motor (BLDC motor), taking its trapezoidal back-emf into consideration. The proposed control strategy uses a proportional controller in which the proportional gain, kp, is appropriately adjusted by using genetic algorithms. As a result, the proportional control performs well in compensating the BLDC motor for load disturbances. This confirms that the proposed speed regulation scheme gives satisfactory results.
Keywords: BLDC motor, proportional controller, genetic algorithms.
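As a toy illustration of the tuning idea only (a first-order stand-in for the speed loop and a bare-bones genetic algorithm of selection plus Gaussian mutation; the actual BLDC model, cost function, and GA settings of the paper are not reproduced), the sketch below evolves a proportional gain kp that minimizes a tracking-error-plus-effort cost:

```python
import numpy as np

def speed_loop_cost(kp, tau=0.05, k_motor=1.0, w_ref=100.0, dt=1e-3, t_end=1.0):
    """Cost (tracking error plus control effort) of a toy first-order speed loop."""
    w, cost = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (w_ref - w)                 # proportional control law
        w += dt * (-w + k_motor * u) / tau   # toy first-order motor model
        cost += dt * ((w_ref - w) ** 2 + 0.2 * u ** 2)
    return cost

rng = np.random.default_rng(0)
pop = rng.uniform(0.01, 50.0, size=40)                 # initial population of kp values
for gen in range(30):
    cost = np.array([speed_loop_cost(kp) for kp in pop])
    parents = pop[np.argsort(cost)[:10]]               # selection: keep the 10 fittest
    children = rng.choice(parents, 40) + rng.normal(0, 1.0, 40)   # Gaussian mutation
    pop = np.clip(children, 0.01, 50.0)

best = min(pop, key=speed_loop_cost)
print("best kp found:", round(float(best), 2))
```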
1303 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy
Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie
Abstract:
In recent years, there has been an explosion in the use of technology that helps in discovering diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell. They have great potential to provide accurate medical diagnoses and to help in finding the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time of eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces better results and takes the lowest time to build a model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best due to its high accuracy and the lowest time needed to build a model.
Keywords: Data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data.
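The Weka classifiers named above (Random Tree, NNge) and the microarray data are not reproduced here; as a generic sketch of comparing accuracy and model-building time, rough scikit-learn analogues can be timed on a synthetic wide dataset:

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=500, n_informative=20,
                           random_state=0)           # microarray-like: wide, few samples
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "randomized tree": ExtraTreeClassifier(random_state=0),   # loose analogue of Random Tree
    "nearest neighbor": KNeighborsClassifier(n_neighbors=3),  # loose analogue of NNge
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    build_time = time.perf_counter() - t0
    acc = model.score(X_te, y_te)
    print(f"{name:>16}: accuracy {acc:.3f}, build time {build_time * 1000:.1f} ms")
```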
1302 Optimum Cascaded Design for Speech Enhancement Using Kalman Filter
Authors: T. Kishore Kumar
Abstract:
Speech enhancement is the process of eliminating noise and increasing the quality of a speech signal that is contaminated with other kinds of distortions. This paper is about developing an optimum cascaded system for speech enhancement. This aim is attained without diminishing any relevant speech information and without much computational or time complexity. The LMS algorithm, spectral subtraction and the Kalman filter have been deployed as the main de-noising algorithms in this work. Since these algorithms suffer from their respective shortcomings, this work designs cascaded systems in different combinations and evaluates such cascades by qualitative (listening) and quantitative (SNR) tests.
Keywords: LMS, Kalman filter, Speech Enhancement and Spectral Subtraction.
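Of the three de-noising blocks, spectral subtraction is the simplest to sketch; the snippet below applies a basic magnitude spectral subtraction (noise estimated from an assumed leading noise-only segment) to a synthetic noisy tone and reports the SNR before and after. All signal parameters are made up, and a real cascade would feed this output into LMS or Kalman stages as the paper investigates.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
clean = 0.6 * np.sin(2 * np.pi * 440 * t)            # stand-in "speech"
clean[: fs // 2] = 0.0                               # leading noise-only segment
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)

f, tt, Z = stft(noisy, fs=fs, nperseg=256)
noise_mag = np.abs(Z[:, tt < 0.5]).mean(axis=1, keepdims=True)   # noise estimate
mag = np.maximum(np.abs(Z) - noise_mag, 0.05 * np.abs(Z))        # subtract, with a floor
Z_hat = mag * np.exp(1j * np.angle(Z))                           # keep the noisy phase
_, enhanced = istft(Z_hat, fs=fs, nperseg=256)

def snr(ref, sig):
    n = min(ref.size, sig.size)
    residual = sig[:n] - ref[:n]
    return 10 * np.log10(np.sum(ref[:n] ** 2) / np.sum(residual ** 2))

print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, enhanced):.1f} dB")
```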
1301 Harris Extraction and SIFT Matching for Correlation of Two Tablets
Authors: Ali Alzaabi, Georges Alquié, Hussain Tassadaq, Ali Seba
Abstract:
This article presents the development of efficient algorithms for comparing tablet copies. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense, communication, etc. Comparison between two images that look indistinguishable is a formidable task. Two images taken from different sources might look identical, but due to different digitizing properties they are not, and small variations in image information, such as cropping, rotation, and slight photometric alteration, make simple correlation-based matching techniques unsuitable. In this paper we introduce different matching algorithms designed to help art centers identify real painting images from fake ones. Different vision algorithms for local image features are implemented using MATLAB. In this framework a Tablet Comparison Computer Tool (TCCT) is designed to facilitate our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects. The parameters of the vision system are fully accessible to the user through this graphical user interface. For matching, it then applies different description techniques that can identify the exact figures of objects.
Keywords: Harris Extraction, SIFT Matching.
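As a rough OpenCV sketch of a Harris-plus-SIFT pipeline of the kind the tool builds on (the image paths are placeholders, the thresholds are arbitrary, and the TCCT's own parameters are not known from the abstract; SIFT requires OpenCV 4.4+ or the contrib package in older versions):

```python
import cv2
import numpy as np

img1 = cv2.imread("tablet_original.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("tablet_copy.png", cv2.IMREAD_GRAYSCALE)

# Harris corner response, thresholded to keep only strong corners.
harris = cv2.cornerHarris(np.float32(img1), 2, 3, 0.04)
corners = np.argwhere(harris > 0.01 * harris.max())
print(f"{len(corners)} Harris corners in the first image")

# SIFT descriptors and ratio-test matching between the two images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
print(f"{len(good)} good SIFT matches out of {len(matches)}")
```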
1300 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm
Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid
Abstract:
With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three categories: length-based index algorithms, transformation-based algorithms and mixed-technique-based algorithms. In this research, we focus on the transformation-based methods. We embedded the N-gram method into the transformation-based method to build an inverted index table. We then applied parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the use of the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the size of the N-gram is 5 or 6. The parallel N-gram transformation algorithm's results indicate that the use of parallel programming with large datasets is promising and can be improved further.
Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.
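A serial sketch of the N-gram inverted-index idea (the parallel construction and the real protein databases used in the study are not shown; the sequences below are toy strings):

```python
from collections import defaultdict

def build_ngram_index(sequences, n=5):
    """Inverted index: n-gram -> set of sequence ids containing it."""
    index = defaultdict(set)
    for seq_id, seq in sequences.items():
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].add(seq_id)
    return index

def query(index, fragment, n=5):
    """Candidate sequences sharing every n-gram of the query fragment."""
    grams = [fragment[i:i + n] for i in range(len(fragment) - n + 1)]
    return set.intersection(*(index.get(g, set()) for g in grams))

proteins = {                      # toy stand-ins for real protein records
    "P1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "P2": "MKTAYIAKQRQWWWWSHFSRQLEERLGLIEVQ",
    "P3": "GSHMRAGEAGLELLKAVLEANPE",
}
idx = build_ngram_index(proteins, n=5)
print(query(idx, "AKQRQISFVK", n=5))      # -> {'P1'}
```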
1299 Using A Hybrid Algorithm to Improve the Quality of Services in Multicast Routing Problem
Authors: Mohammad Reza Karami Nejad
Abstract:
A hybrid learning automata-genetic algorithm (HLGA) is proposed to solve the QoS routing optimization problem of next generation networks. The algorithm combines the advantages of the Learning Automata algorithm (LA) and the Genetic Algorithm (GA). It first uses the good global search capability of LA to generate the initial population needed by GA; it then uses GA to improve the Quality of Service (QoS) and to acquire the optimized tree through new crossover and mutation operators, the underlying tree-construction problem being NP-complete. In the proposed algorithm, the connectivity matrix of edges is used for genotype representation. Some novel heuristics are also proposed for mutation, crossover, and the creation of random individuals. We evaluate the performance and efficiency of the proposed HLGA-based algorithm in comparison with other existing heuristic and GA-based algorithms by simulation. Simulation results demonstrate that the proposed algorithm not only has fast calculation speed and high accuracy but also improves the efficiency of QoS routing in next generation networks. The proposed algorithm outperforms all of the previous algorithms in the literature.
Keywords: Routing, Quality of Service, Multicast, Learning Automata, Genetic, Next Generation Networks.
1298 Parameter Tuning of Complex Systems Modeled in Agent Based Modeling and Simulation
Authors: Rabia Korkmaz Tan, Şebnem Bora
Abstract:
The major problem encountered when modeling complex systems with agent-based modeling and simulation techniques is the existence of large parameter spaces. A complex system model cannot be expected to reflect the whole of the real system, but by specifying the most appropriate parameters, the actual system can be represented by the model under certain conditions. A review of studies conducted in recent years shows that there are few studies on the parameter tuning problem in agent-based simulations, and that these studies have focused on tuning the parameters of a single model. In this study, an approach to parameter tuning is proposed using metaheuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) and Firefly Algorithm (FA). With this hybrid-structured study, the parameter tuning problems of models in different fields were solved. The new approach was tested on two different models, and its achievements on different problems were compared. The simulations and results reveal that the proposed approach performs better than existing parameter tuning studies.
Keywords: Parameter tuning, agent based modeling and simulation, metaheuristic algorithms, complex systems.
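A bare-bones PSO sketch of the general idea, tuning two parameters of a toy stand-in model so that its output matches a reference statistic (the actual agent-based models, parameter ranges, and the GA/PSO/ABC/FA hybrid structure of the study are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_output(params):
    """Toy stand-in for a model run: returns a summary statistic of a simulation."""
    birth_rate, move_prob = params
    pop = 100.0
    for _ in range(50):                        # 50 simulated ticks
        pop += birth_rate * pop * 0.01 - move_prob * 5.0
    return pop

target = 160.0                                 # observed statistic of the "real" system
def cost(params):
    return (model_output(params) - target) ** 2

# Particle swarm over the 2-D parameter space [0, 1] x [0, 1].
n_particles, dim = 30, 2
x = rng.random((n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()
print("tuned parameters:", np.round(gbest, 3), "cost:", round(cost(gbest), 4))
```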
1297 An Innovational Intermittent Algorithm in Networks-On-Chip (NOC)
Authors: Ahmad M. Shafiee, Mehrdad Montazeri, Mahdi Nikdast
Abstract:
Every day, human life involves new equipment that is more automatic and more capable, so the need for faster processors does not seem to end. Despite new architectures and higher frequencies, a single processor is not adequate for many applications. Parallel processing and networks are earlier solutions to this problem. The newer solution of putting a network of resources on a chip is called a NOC (network on a chip). The most common topology for NOCs is the mesh. There are several routing algorithms suitable for this topology, such as XY, fully adaptive, etc. In this paper we suggest a new algorithm named Intermittent X, Y (IX/Y). We have developed the new algorithm in a simulation environment to compare its delay and power consumption with those of earlier algorithms.
Keywords: Computer architecture, parallel computing, NOC, routing algorithm.
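The IX/Y algorithm itself is defined in the paper; for context, the baseline XY routing it departs from can be sketched in a few lines (deterministic dimension-ordered routing on a mesh, with router coordinates as (x, y) tuples):

```python
def xy_route(src, dst):
    """Deterministic XY routing on a mesh NOC: travel along X first, then along Y."""
    x, y = src
    dx, dy = dst
    hops = [src]
    while x != dx:                 # X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                 # then the Y dimension
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```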