Search results for: 5-star benchmarks.
38. FHOJ: A New Java Benchmark Framework
Authors: Vinh Quang La, Dirk Jansen
Abstract:
Several Java benchmarks already exist, ranging from application benchmarks to micro benchmarks and mixtures of the two, such as Java Grande, Spec98, CaffeMark, and HBech. None of them, however, deals with the behavior of multitasking operating systems, so the results they produce are unsatisfactory for performance evaluation engineers. The behavior of a multitasking operating system is determined by its scheduler: different processes may hold different priorities while sharing the same resources. Measuring time as the interval from an application's start to its finish therefore does not reflect the time the system actually needs to run the program, and a new approach to this problem is required. In this paper we present a new Java benchmark, named the FHOJ benchmark, which directly addresses the multitasking behavior of a system. Our study shows that in some cases, results from the FHOJ benchmark are far more reliable than those from some existing Java benchmarks.
Keywords: Java Virtual Machine, Java benchmark, FHOJ framework.
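The distinction the abstract draws, between elapsed wall-clock time and the CPU time the scheduler actually grants a process, can be shown with a short sketch. FHOJ itself is a Java framework; the example below uses Python's standard `time` module purely for brevity, and the workload is an invented stand-in.

```python
import time

def busy_work(n=2_000_000):
    total = 0
    for i in range(n):
        total += i * i
    return total

wall_start = time.perf_counter()   # elapsed (wall-clock) time
cpu_start = time.process_time()    # CPU time consumed by this process only

busy_work()
time.sleep(1.0)  # stands in for intervals the scheduler gives to other tasks

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall-clock: {wall:.3f} s, CPU time: {cpu:.3f} s")
# On a loaded multitasking system the two diverge: wall-clock time includes
# periods in which other processes held the CPU.
```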
37. Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have produced a significant amount of publicly available data, giving researchers a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using public reporting data. Data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building-archetype dynamic building loads and energy benchmarks, filling a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.
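As an illustration of the outlier screening the first finding calls for, here is a minimal interquartile-range filter. The energy-use intensities, the threshold k = 1.5, and the variable names are illustrative assumptions, not the study's data or method.

```python
import statistics

def iqr_screen(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] as outliers."""
    q = statistics.quantiles(values, n=4)  # Q1, median, Q3
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [v for v in values if lo <= v <= hi]
    outliers = [v for v in values if v < lo or v > hi]
    return kept, outliers

# Hypothetical energy-use intensities (kWh/m2/yr) for residence buildings
eui = [180, 195, 210, 172, 188, 820, 205, 6, 199, 215]
kept, out = iqr_screen(eui)
print("kept:", kept)
print("flagged for review:", out)  # the mis-reported 820 and the near-zero 6
```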
36. An Architecture for High Performance File System I/O
Authors: Mikulas Patocka
Abstract:
This paper presents the architecture of current filesystem implementations as well as our new filesystem SpadFS and the operating system Spad, with a rewritten VFS layer, targeted at high-performance I/O applications. The paper presents microbenchmarks and real-world benchmarks of different filesystems on the same kernel, as well as benchmarks of the same filesystem on different kernels, enabling the reader to judge how much the performance of various tasks is affected by the operating system and how much by the physical layout of data on disk. The paper describes our novel features, most notably continuous allocation of directories and cross-file readahead, and shows their impact on performance.
Keywords: Filesystem, operating system, VFS, performance, readahead.
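Cross-file readahead as implemented in SpadFS lives in the kernel, but the underlying idea, fetching data an application will need before it asks, can be sketched from user space. The snippet below uses the POSIX `posix_fadvise` hint (a real, Unix-only call in Python's `os` module); treating it as a stand-in for kernel-side cross-file readahead is this sketch's assumption.

```python
import os

def prefetch(paths):
    """Hint the kernel to start reading these files into the page cache."""
    for path in paths:
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.fstat(fd).st_size
            # POSIX_FADV_WILLNEED asks the kernel to begin background
            # readahead of the file's pages before the first read() call.
            os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)
        finally:
            os.close(fd)

# e.g. prefetch(["a.dat", "b.dat"]) before sequentially processing both files
```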
35. Comparative Study of Universities’ Web Structure Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
This paper analyzes the ranking of the University of Malaysia Terengganu (UMT) website in the World Wide Web. Few studies have compared the rankings of university websites, so this research determines whether the existing UMT website is serving its purpose, which is to introduce UMT to the world. The ranking is based on hub and authority values, which reflect the structure of the website. These values are computed using two web-search algorithms, HITS and SALSA. Three other universities' websites are used as benchmarks: UM, Harvard, and Stanford. The results clearly show that more work has to be done on the existing UMT website, where important pages, according to the benchmarks, are missing from UMT's pages. The ranking of the UMT website will act as a guideline for the web developer to build a more efficient website.
Keywords: Algorithm, ranking, website, web structure mining.
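HITS, one of the two algorithms used, computes the hub and authority values mentioned above by mutually reinforcing power iteration over the link graph. A minimal sketch follows; the toy adjacency matrix is invented.

```python
import numpy as np

def hits(adjacency, iterations=50):
    """Basic HITS: hub/authority scores by power iteration on a link graph."""
    A = np.asarray(adjacency, dtype=float)   # A[i, j] = 1 if page i links to j
    n = A.shape[0]
    hubs = np.ones(n)
    auth = np.ones(n)
    for _ in range(iterations):
        auth = A.T @ hubs          # good authorities are linked from good hubs
        auth /= np.linalg.norm(auth)
        hubs = A @ auth            # good hubs link to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auth

# Toy 4-page site: page 0 links to pages 1 and 2, page 3 links to 2, etc.
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
hubs, auth = hits(A)
print("hubs:", hubs.round(3), "authorities:", auth.round(3))
```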
34. Performance Analysis of Digital Signal Processors Using SMV Benchmark
Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang
Abstract:
Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs of diverse applications, a wide variety of DSP processors based on different architectures, ranging from the traditional to VLIW, have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their designs. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seems to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from recent third-generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. The weighted arithmetic mean of clock cycles and the arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, processor architecture features, and compiler optimization. The extensive experimental data gathered, analyzed, and presented in this paper should be helpful for DSP processor and compiler designers to meet their specific design goals.
Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.
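The two summary statistics named in the abstract are simple to compute. In the sketch below the kernel names, cycle counts, code sizes, and weights are all hypothetical; in practice the weights would reflect how often each SMV kernel executes in a real voice call.

```python
# Hypothetical per-kernel measurements for one DSP processor.
cycles = {"lpc_analysis": 120_000, "pitch_search": 340_000, "codebook": 510_000}
code_size = {"lpc_analysis": 4_200, "pitch_search": 7_800, "codebook": 9_100}
weights = {"lpc_analysis": 0.2, "pitch_search": 0.5, "codebook": 0.3}

# Weighted arithmetic mean of clock cycles (weights sum to 1).
weighted_mean_cycles = sum(weights[k] * cycles[k] for k in cycles)
# Plain arithmetic mean of code size.
mean_code_size = sum(code_size.values()) / len(code_size)

print(f"weighted mean cycles: {weighted_mean_cycles:,.0f}")
print(f"mean code size: {mean_code_size:,.0f} bytes")
```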
33. A Patricia-Tree Approach for Frequent Closed Itemsets
Authors: Moez Ben Hadj Hamida, Yahya Slimani
Abstract:
In this paper, we propose an adaptation of the Patricia-Tree for sparse datasets to generate non-redundant association rules. Using this adaptation, we can generate frequent closed itemsets, which are more compact than the frequent itemsets used in the Apriori approach. This adaptation has been evaluated on a set of benchmark datasets.
Keywords: Data mining, Frequent itemsets, Frequent closed itemsets, Sparse datasets.
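The compactness claim rests on a simple definition: a frequent itemset is closed when no proper superset has the same support. A naive sketch makes this concrete; the paper's Patricia-Tree machinery is not reproduced, and the tiny transaction set is invented.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Naive enumeration of frequent itemsets (fine for tiny examples)."""
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= min_support:
                freq[frozenset(cand)] = s
    return freq

def closed_only(freq):
    """Keep itemsets with no proper superset of identical support."""
    return {x: s for x, s in freq.items()
            if not any(x < y and s == sy for y, sy in freq.items())}

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(tx, min_support=2)
print(closed_only(freq))  # {a}:4, {a,b}:3, {a,c}:3, {a,b,c}:2
```

Here seven itemsets are frequent, but only four are closed; the closed set loses no support information while being smaller.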
32. Scalable Deployment and Configuration of High-Performance Virtual Clusters
Authors: Kyrre M Begnum, Matthew Disney
Abstract:
Virtualization and high performance computing have been discussed from a performance perspective in recent publications. We present and discuss a flexible and efficient approach to the management of virtual clusters. A virtual machine management tool is extended to function as a fabric for cluster deployment and management. We show how features such as saving the state of a running cluster can be used to avoid disruption. We also compare our approach to the traditional methods of cluster deployment and present benchmarks which illustrate the efficiency of our approach.
Keywords: Cluster management, clusters, high-performance, virtual machines, Xen
31. Selective Minterms Based Tabular Method for BDD Manipulations
Authors: P. W. C. Prasad, A. Assi, M. Raseen, A. Harb
Abstract:
The goal of this work is to describe a new algorithm for finding the optimal variable order, the number of nodes for any order, and other ROBDD parameters, based on a tabular method. The tabular method makes use of a pre-built backend database table that stores the ROBDD size for selected combinations of minterms. The user uses the backend table and the proposed algorithm to find the necessary ROBDD parameters, such as the best variable order and the number of nodes. Experimental results on benchmarks are given for this technique.
Keywords: Tabular Method, Binary Decision Diagram, BDD Manipulation, Boolean Function.
30. Learning and Evaluating Possibilistic Decision Trees Using Information Affinity
Authors: Ilyes Jenhani, Salem Benferhat, Zied Elouedi
Abstract:
This paper investigates the issue of building decision trees from data with imprecise class values, where imprecision is encoded in the form of possibility distributions. The Information Affinity similarity measure is introduced into the well-known gain ratio criterion in order to assess the homogeneity of a set of possibility distributions representing the classes of the instances belonging to a given training partition. For the experimental study, we propose an information-affinity-based performance criterion, which we use to show the performance of the approach on well-known benchmarks.
Keywords: Data mining from uncertain data, Decision Trees, Possibility Theory.
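A sketch of an affinity-style similarity between two possibility distributions may help. One form reported in the possibilistic-classification literature combines a normalized Manhattan distance with an inconsistency term; the equal weighting of the two terms below is an assumption, not necessarily the paper's exact measure.

```python
def information_affinity(pi1, pi2):
    """Affinity between two possibility distributions on the same domain.

    Combines a normalized Manhattan distance with an inconsistency term;
    the equal weighting used here is an assumption.
    """
    n = len(pi1)
    d = sum(abs(a - b) for a, b in zip(pi1, pi2)) / n     # distance in [0, 1]
    inc = 1.0 - max(min(a, b) for a, b in zip(pi1, pi2))  # inconsistency degree
    return 1.0 - (d + inc) / 2.0

# Identical distributions -> affinity 1; fully conflicting ones -> low affinity.
print(information_affinity([1.0, 0.3, 0.0], [1.0, 0.3, 0.0]))  # 1.0
print(information_affinity([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # ~0.17
```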
29. A Comparative Study of Main Memory Databases and Disk-Resident Databases
Authors: F. Raja, M. Rahgozar, N. Razavi, M. Siadaty
Abstract:
Main Memory Database systems (MMDB) store their data in main physical memory and provide very high-speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms. Memory-resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable. This paper provides a brief overview of MMDBs and of one memory-resident system named FastDB, and compares the processing time of this system with that of a typical disk-resident database, based on the results of implementing the TPC benchmark environment on both.
Keywords: Disk-Resident Database, FastDB, Main Memory Database.
28. Data Mining Using Learning Automata
Authors: M. R. Aghaebrahimi, S. H. Zahiri, M. Amiri
Abstract:
In this paper a data miner based on learning automata, called LA-miner, is proposed. The LA-miner extracts classification rules from data sets automatically. The proposed algorithm is built on function optimization using learning automata. The experimental results on three benchmarks indicate that the performance of the proposed LA-miner is comparable with (sometimes better than) that of Ant-miner (a data mining algorithm based on the Ant Colony optimization algorithm) and CN2 (a well-known data mining algorithm for classification).
Keywords: Data mining, Learning automata, Classification rules, Knowledge discovery.
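At the core of a learning automaton is a probability vector over actions that is nudged after each environmental reward. LA-miner's full rule-construction loop is not reproduced here; the sketch below shows only the classic linear reward-inaction (L_RI) update on a made-up two-action environment.

```python
import random

def l_ri_update(probs, chosen, rewarded, a=0.1):
    """Linear reward-inaction (L_RI) update for a learning automaton.

    On reward, shift probability mass toward the chosen action; on
    penalty, leave the vector unchanged. `a` is the learning rate.
    """
    if rewarded:
        for i in range(len(probs)):
            if i == chosen:
                probs[i] += a * (1.0 - probs[i])
            else:
                probs[i] *= (1.0 - a)
    return probs

# Toy environment: action 1 is rewarded 80% of the time, action 0 only 20%.
probs = [0.5, 0.5]
for _ in range(2000):
    action = random.choices([0, 1], weights=probs)[0]
    reward = random.random() < (0.8 if action == 1 else 0.2)
    l_ri_update(probs, action, reward)
print(probs)  # converges toward [~0, ~1]: the automaton learns action 1
```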
27. A Study of Water Consumption in Two Malaysian Resorts
Authors: Fu E. Tang
Abstract:
In the effort to reduce water consumption at resorts, more water conservation practices need to be implemented, and water audits must be performed to obtain a baseline of water consumption before planning conservation measures. In this study, a water audit framework specifically for resorts was created, and the audit was performed on two resorts: Resort A in Langkawi, Malaysia, and Resort B in Miri, Malaysia. From the audit, the total daily water consumption for Resorts A and B was estimated at 180 m3 and 330 m3 respectively, while the actual water consumption (based on water meter readings) was 175 m3 and 325 m3. This suggests that the audit framework is reasonably accurate and may be used to account for most of the water consumption sources in a resort. The daily water consumption per guest is about 500 litres. The water consumption of both resorts is poorly rated compared with established benchmarks. Water conservation measures are suggested for both resorts.
Keywords: water consumption patterns, water conservation practices, water audit, water audit framework.
26. Kernel’s Parameter Selection for Support Vector Domain Description
Authors: Mohamed EL Boujnouni, Mohamed Jedra, Noureddine Zahid
Abstract:
Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods. It uses balls defined in feature space to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach that selects the kernel parameter by maximizing the distance between the gravity centers of the normal and abnormal classes while minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks, and the experimental results demonstrate the feasibility and effectiveness of the presented method.
Keywords: Gravity centers, Kernel’s parameter, Support Vector Domain Description, Variance.
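Both quantities in the selection criterion can be evaluated in feature space using only Gram matrices, via the kernel identities ||m_A - m_B||^2 = mean(K_AA) + mean(K_BB) - 2 mean(K_AB) and var_A = mean(diag(K_AA)) - mean(K_AA). How the paper weights the two terms against each other is an assumption in this sketch, as are the synthetic data and candidate gamma grid.

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def criterion(X_norm, X_abn, gamma):
    """Between-center distance minus within-class variances, in feature space."""
    Kaa = rbf_gram(X_norm, X_norm, gamma)
    Kbb = rbf_gram(X_abn, X_abn, gamma)
    Kab = rbf_gram(X_norm, X_abn, gamma)
    dist2 = Kaa.mean() + Kbb.mean() - 2 * Kab.mean()   # ||m_A - m_B||^2
    var_a = np.diag(Kaa).mean() - Kaa.mean()           # within-class variance A
    var_b = np.diag(Kbb).mean() - Kbb.mean()           # within-class variance B
    return dist2 - (var_a + var_b)                     # weighting is assumed

rng = np.random.default_rng(0)
X_norm = rng.normal(0, 1, (50, 2))   # synthetic "normal" class
X_abn = rng.normal(4, 1, (20, 2))    # synthetic "abnormal" class
gammas = [0.01, 0.1, 1.0, 10.0]
best = max(gammas, key=lambda g: criterion(X_norm, X_abn, g))
print("selected gamma:", best)
```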
25. Improving the Quality of e-learning Courses in Higher Education through Student Satisfaction
Authors: Susana Lemos, Neuza Pedro
Abstract:
The purpose of this research is to characterize the levels of satisfaction of students in e-learning post-graduate courses, taking into account specific dimensions of the course that were considered benchmarks for the quality of this type of online learning initiative, as well as the levels of satisfaction with each specific indicator identified in each dimension. A further aim of this study was to understand how these dimensions relate to one another. Using a quantitative research approach for data collection and analysis, the study involved the students who attended one e-learning course in 2010/2011. The conclusions of this study suggest that online students present relatively high levels of satisfaction, which points towards a positive experience during the course. There is a correlation between the different dimensions studied, consequently leading to different improvement strategies. Ultimately, this investigation aims to contribute to the promotion of quality and the success of e-learning initiatives in Higher Education.
Keywords: e-learning, higher education, quality, student satisfaction.
24. Analysis of Dropped Call Rate for Long Term Evolution Networks in Bayelsa State, Nigeria
Authors: Chibuzo Emeruwa, Nnamdi N. Omehe
Abstract:
This work compares the Dropped Call Rate (DCR) against industry benchmarks and competitor networks to identify areas for improvement and set performance targets. Four mobile telecommunication networks operating in Bayelsa State, Nigeria were considered. The results show that MTN and Airtel performed well within the regulator's benchmark of ≤ 1% in all cases, while Globacom and 9mobile had instances where their performance fell outside the benchmark. The maximum value recorded within the period under review was 18.52%, in March 2016 for Globacom, while the lowest value recorded was 0.00%, in September 2018 for 9mobile. Over the seven-year period under review, MTN and Airtel performed within the Nigerian Communications Commission's (NCC) benchmark throughout. This affirms that it is possible for the networks to perform optimally if adequate measures are put in place for improved Quality of Service (QoS).
Keywords: Attempted calls, data, dropped call rate, handover failure rate, key performance indicator, mobile network.
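DCR is conventionally the share of established calls that end abnormally; the exact counter definitions the NCC uses are not given in the abstract, so the sketch below is generic and the monthly figures are made up.

```python
def dropped_call_rate(dropped, established):
    """DCR as a percentage of established calls (generic definition)."""
    return 100.0 * dropped / established

benchmark = 1.0  # regulator's target: DCR <= 1%
monthly = {"Jan": (412, 50_300), "Feb": (980, 48_100), "Mar": (9_314, 50_290)}
for month, (dropped, established) in monthly.items():
    dcr = dropped_call_rate(dropped, established)
    status = "within" if dcr <= benchmark else "outside"
    print(f"{month}: DCR = {dcr:.2f}% ({status} the {benchmark}% benchmark)")
```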
23. Hybrid Machine Learning Approach for Text Categorization
Authors: Nerijus Remeikis, Ignas Skucas, Vida Melninkaite
Abstract:
Text categorization, the assignment of natural language documents to one or more predefined categories based on their semantic content, is an important component of many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of decision-tree classifiers to initialize multilayer neural networks for improving text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node to a final leaf, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
Keywords: Text categorization, decision trees, neural networks, machine learning.
22. An Evaluation Model for Semantic Enablement of Virtual Research Environments
Authors: Tristan O'Neill, Trina Myers, Jarrod Trevathan
Abstract:
The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to the requirements of the TDH.
Keywords: Virtual research environment, Semantic Web, performance analysis, tropical data hub.
21. A Decision Matrix for the Evaluation of Triplestores for Use in a Virtual Research Environment
Authors: Tristan O’Neill, Trina Myers, Jarrod Trevathan
Abstract:
The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer to achieve a more intelligent method of search to support the synthesis requirements by automating latent linkages in the data and metadata. Presently, the benchmarks to aid the decision of which triplestore is best suited for use in an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix to evaluate the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores to rank them according to requirements of the TDH.
Keywords: Virtual research environment, Semantic Web, performance analysis, tropical data hub.
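The mechanics of a weighted decision matrix like the one both of the preceding entries describe are compact enough to sketch. All criteria weights and raw scores below are invented placeholders, not the papers' evaluation data.

```python
# Criteria and their relative importance (weights sum to 1; hypothetical).
criteria = {"interoperability": 0.25, "functionality": 0.25,
            "performance": 0.35, "support": 0.15}

# Raw scores per candidate triplestore on a 1-5 scale (hypothetical).
scores = {
    "store_A": {"interoperability": 4, "functionality": 5,
                "performance": 3, "support": 4},
    "store_B": {"interoperability": 3, "functionality": 4,
                "performance": 5, "support": 3},
    "store_C": {"interoperability": 5, "functionality": 3,
                "performance": 4, "support": 5},
}

# Weighted total per candidate, highest first.
ranking = sorted(((sum(criteria[c] * s[c] for c in criteria), name)
                  for name, s in scores.items()), reverse=True)
for total, name in ranking:
    print(f"{name}: weighted score {total:.2f}")
```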
20. A Tabu Search Heuristic for Scratch-Pad Memory Management
Authors: Maha Idrissi Aouad, Rene Schott, Olivier Zendra
Abstract:
Reducing the energy consumption of embedded systems requires careful memory management. It has been shown that Scratch-Pad Memories (SPMs) are small, low-cost, energy-efficient memories managed directly at the software level. In this paper, the focus is on heuristic methods for SPM management. A method is efficient if the number of accesses to the SPM is as large as possible and if all available space (i.e. bits) is used. A Tabu Search (TS) approach to memory management is proposed which is, to the best of our knowledge, a new and original alternative to the best known existing heuristic (BEH). Experiments performed on benchmarks show that the Tabu Search method is as efficient as BEH in terms of energy consumption, but BEH requires a sort, which can be computationally expensive for large amounts of data. TS is easy to implement, and since no sorting is necessary, the corresponding sorting time is saved. Moreover, from a dynamic perspective, where the maximum capacity of the SPM is not known in advance, the TS heuristic performs better than BEH.
Keywords: Energy consumption, memory allocation management, optimization, tabu search heuristic.
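The paper's exact move set and cost model are not given in the abstract. The sketch below frames SPM allocation as a knapsack-style problem (maximize the accesses served from the SPM within its capacity) and applies a standard single-flip tabu search with a short tenure; sizes, access counts, and the tenure are invented.

```python
import random

def tabu_spm(sizes, accesses, capacity, iters=500, tenure=7):
    """Tabu search for SPM allocation (knapsack-style sketch).

    Choose a subset of variables to place in the SPM so that their total
    size fits in `capacity` and the number of SPM accesses is maximized.
    """
    n = len(sizes)
    sol = [False] * n                      # start with an empty SPM
    tabu = {}                              # variable index -> release iteration

    def value(s):
        if sum(sz for sz, b in zip(sizes, s) if b) > capacity:
            return -1                      # infeasible placement
        return sum(ac for ac, b in zip(accesses, s) if b)

    best, best_val = sol[:], value(sol)
    for it in range(iters):
        cand = None
        for i in range(n):                 # best non-tabu single-bit flip,
            s = sol[:]                     # with aspiration if it beats best
            s[i] = not s[i]
            v = value(s)
            if (tabu.get(i, 0) <= it or v > best_val) and \
               (cand is None or v > cand[0]):
                cand = (v, i, s)
        if cand is None:
            break
        v, i, sol = cand
        tabu[i] = it + tenure              # forbid undoing this move for a while
        if v > best_val:
            best, best_val = sol[:], v
    return best, best_val

random.seed(1)
sizes = [random.randint(1, 16) for _ in range(12)]
accesses = [random.randint(10, 500) for _ in range(12)]
placed, val = tabu_spm(sizes, accesses, capacity=32)
print("accesses served from SPM:", val)
```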
19. An Application-Based Indoor Environmental Quality (IEQ) Calculator for Residential Buildings
Authors: Kwok W. Mui, Ling T. Wong, Chin T. Cheung, Ho C. Yu
Abstract:
Based on an indoor environmental quality (IEQ) index established in previous work, which indicates overall IEQ acceptance from the perspective of an occupant of a residential building in terms of four IEQ factors (thermal comfort, indoor air quality, visual comfort, and aural comfort), this study develops a user-friendly IEQ calculator for iOS and Android that computes occupant acceptance and compares the relative IEQ performance of apartments. The "IEQ Calculator" is easy to use and preliminarily illustrates the overall indoor environmental quality on the spot. Users simply input indoor parameters, such as temperature, the number of people, and whether windows are open or closed, and the application calculates scores in four areas: the comfort of temperature, brightness, noise, and indoor air quality. The calculator predicts the best IEQ scenario on a quantitative scale; any indoor environment under specific IEQ conditions can be benchmarked against the predicted IEQ acceptance range. The calculator can also suggest how to achieve the best IEQ acceptance among a group of residents.
Keywords: Calculator, indoor environmental quality (IEQ), residential buildings, 5-star benchmarks.
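The aggregation step can be sketched, but loudly hedged: the published IEQ index is not reproduced here, and the linear weighting and equal-bin star mapping below are placeholders, not the paper's model.

```python
def overall_ieq(thermal, iaq, visual, aural,
                weights=(0.30, 0.30, 0.20, 0.20)):
    """Combine four factor acceptances in [0, 1] into an overall acceptance.

    The linear form and these weights are placeholder assumptions.
    """
    factors = (thermal, iaq, visual, aural)
    return sum(w * f for w, f in zip(weights, factors))

def to_stars(acceptance):
    """Map acceptance onto a 5-star benchmark scale (equal bins, assumed)."""
    return min(5, int(acceptance * 5) + 1)

score = overall_ieq(thermal=0.82, iaq=0.75, visual=0.90, aural=0.60)
print(f"overall acceptance: {score:.2f} -> {to_stars(score)} stars")
```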
18. An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples
Authors: Wullapa Wongsinlatam
Abstract:
The back propagation algorithm (BP) is a widely used technique in artificial neural networks and has been used as a tool for solving time series problems, with attention to decreasing training time, reducing the tendency to fall into local minima, and optimizing the sensitivity to the initial weights and bias. This paper proposes an improvement of the BP technique called the IM-COH algorithm (IM-COH). By combining the IM-COH algorithm with the cuckoo search algorithm (CS), the result is the cuckoo search improved control output hidden layer algorithm (CS-IM-COH). This new algorithm has a better ability to optimize the sensitivity to the initial weights and bias than the original BP algorithm. In this research, the CS-IM-COH algorithm is compared with the original BP, the IM-COH, and the original BP with CS (CS-BP). Furthermore, four selected benchmark time series samples are used for illustration. The research shows that the CS-IM-COH algorithm gives the best forecasting results on the selected samples.
Keywords: Artificial neural networks, back propagation algorithm, time series, local minima problem, metaheuristic optimization.
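The cuckoo search half of the hybrid is a standard metaheuristic and can be sketched on its own; here it minimizes a toy sphere function rather than a network's training loss, and all parameter values are conventional defaults, not the paper's settings.

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed (Levy) step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, iters=1000, pa=0.25, alpha=0.05):
    """Minimize f over [-5, 5]^dim with a basic cuckoo search loop."""
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        i = random.randrange(n)                      # one egg via a Levy flight
        new = [x + alpha * levy_step() for x in nests[i]]
        j = random.randrange(n)                      # drop it in a random nest
        if f(new) < fit[j]:
            nests[j], fit[j] = new, f(new)
        order = sorted(range(n), key=lambda k: fit[k])
        for k in order[int(n * (1 - pa)):]:          # abandon the worst nests
            nests[k] = [random.uniform(-5, 5) for _ in range(dim)]
            fit[k] = f(nests[k])
    b = min(range(n), key=lambda k: fit[k])
    return nests[b], fit[b]

random.seed(7)
best, val = cuckoo_search(lambda x: sum(v * v for v in x), dim=3)
print("best objective value found:", round(val, 4))
```

In CS-IM-COH the vector being searched would be the network's initial weights and bias, with the training error as the objective `f`.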
17. Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem
Authors: Taisir Eldos, Aws Kanan, Waleed Nazih, Ahmad Khatatbih
Abstract:
Chemical Reaction Optimization (CRO) is an optimization metaheuristic inspired by chemical reactions as a natural process of transforming substances from unstable to stable states. Starting with some unstable molecules with excessive energy, a sequence of interactions takes the set to a state of minimum energy. Researchers have reported successful applications of the algorithm to engineering problems, such as the quadratic assignment problem, with superior performance compared with other optimization algorithms. We adapted this optimization algorithm to the Printed Circuit Board Drilling Problem (PCBDP) to reduce drilling time and hence improve PCB manufacturing throughput. Although the PCBDP can be viewed as an instance of the popular Traveling Salesman Problem (TSP), it has some characteristics that require special attention in the transactions that explore the solution landscape. Experimental results using the standard CROToolBox are not promising for practically sized problems, although it could find optimal solutions for artificial problems and small benchmarks as a proof of concept.
Keywords: Evolutionary Algorithms, Chemical Reaction Optimization, Traveling Salesman, Board Drilling.
16. Competitiveness and Value Creation of Tourism Sector: In the Case of 10 ASEAN Economies
Authors: Apirada Chinprateep
Abstract:
The ASEAN Economic Community (AEC) is the goal of regional economic integration by 2015. In the region, tourism is an important activity, especially as a source of foreign currency, employment, and income. Given the complexity of the issues entailed in the concept of sustainable tourism, this paper tries to assess tourism sustainability within ASEAN, based on a number of quantitative indicators for all ten economies: Thailand, Myanmar, Laos, Vietnam, Malaysia, Singapore, Indonesia, the Philippines, Cambodia, and Brunei. The methodological framework provides a number of benchmarks of tourism activity in these countries. They include identification of the dimensions (for example, economic, socio-ecological, and infrastructure), the indicators, the method of scaling, chart representation, and evaluation of the ASEAN countries. This specification shows that a similar level of tourism activity might take different forms in practice and might have different consequences for the socio-ecological environment and sustainability. The heterogeneity of the developing countries briefly exposed here would be useful for detecting and preparing to cope with the main problems of each country in its tourism activities, as well as for the competitiveness and value creation of tourism in the ASEAN economic community, and for comparison with other parts of the world.
Keywords: AEC, ASEAN, sustainable, tourism, competitiveness.
15. Extraction of Symbolic Rules from Artificial Neural Networks
Authors: S. M. Kamruzzaman, Md. Monirul Islam
Abstract:
Although backpropagation ANNs generally predict better than decision trees for pattern classification problems, they are often regarded as black boxes; that is, their predictions cannot be explained the way those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as the breast cancer, iris, diabetes, and season classification problems, demonstrate the effectiveness of the proposed approach and its good generalization ability.
Keywords: Backpropagation, clustering algorithm, constructive algorithm, continuous activation function, pruning algorithm, rule extraction algorithm, symbolic rules.
14. Generational PipeLined Genetic Algorithm (PLGA) Using Stochastic Selection
Authors: Malay K. Pakhira, Rajat K. De
Abstract:
In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection. This helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. The SA-selection scheme gives comparable performance to the classical roulette-wheel selection scheme for all instances when quality of solutions and rate of convergence are considered. The speedups obtained by PLGA for different benchmarks are found to be significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible. In this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform parallel GA (PGA).
Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.
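The abstract does not spell out the SA-selection rule, so the sketch below is an assumption: a Boltzmann (simulated-annealing-style) acceptance test that admits individuals one at a time. The key property for pipelining is visible either way: unlike roulette-wheel selection, no global sort or population-wide sum is needed before an individual can pass to the next stage.

```python
import math
import random

def sa_select(fitness, best_fitness, temperature):
    """Accept an individual immediately if it is the best seen so far;
    otherwise accept with probability exp((fitness - best) / T).
    (Maximization; this Boltzmann form is an assumed stand-in for
    the paper's SA-selection.)"""
    if fitness >= best_fitness:
        return True
    return random.random() < math.exp((fitness - best_fitness) / temperature)

random.seed(3)
best = 0.9
for fit in (0.95, 0.70, 0.50, 0.88):
    verdict = "accepted" if sa_select(fit, best, temperature=0.2) else "rejected"
    print(f"fitness {fit}: {verdict}")
```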
13. Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks
Authors: Konstantinos Perifanos, Eirini Florou, Dionysis Goutsos
Abstract:
This paper presents and benchmarks a number of end-to-end Deep Learning based models for metaphor detection in Greek. We combine Convolutional Neural Networks and Recurrent Neural Networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving the previous state-of-the-art results, which had already reached an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering, or linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences, and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks.
Keywords: Metaphor detection, deep learning, representation learning, embeddings.
12. A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding
Authors: C. Kalamani, K. Paramasivam
Abstract:
Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers in three different ways: test data compression, Built-In Self-Test (BIST), and test set compaction. The latter two methods can enhance fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that they reduce test storage and test power, but when test data contain redundant run lengths, no additional compression is applied. This paper presents a modified Run Length Coding (RLC) technique with Multilevel Selective Huffman Coding (MLSHC) to reduce test data volume, test pattern delivery time, and power dissipation in scan test applications: where a redundant run length is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves the test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS-98 benchmarks show that our method outperforms most known techniques.
Keywords: Modified run length coding, multilevel selective Huffman coding, built-in self-test, modified selective Huffman coding, automatic test equipment.
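The core move, replacing frequent runs in a scan-test bit stream with tiny codewords, can be sketched directly. The codeword table below is illustrative; the paper's multilevel Huffman codes and exact symbol alphabet are not reproduced.

```python
def run_lengths(bits):
    """Turn a bit string into (symbol, run length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def encode(bits, codewords):
    """Replace each run with a short codeword; unlisted runs stay verbatim."""
    out = []
    for sym, length in run_lengths(bits):
        out.append(codewords.get((sym, length), sym * length))
    return "|".join(out)

# Frequent runs get the shortest codes (the selective-Huffman idea).
codewords = {("0", 8): "00", ("0", 4): "01", ("1", 1): "10"}
stream = "00000000" + "1" + "0000" + "1" + "00000000"
print(encode(stream, codewords))  # 00|10|01|10|00
```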
11. Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact
Authors: Emna Benmohamed, Hela Ltifi, Mounir Ben Ayed
Abstract:
The Bayesian Network (BN) is one of the most efficient classification methods and is widely used in several fields (e.g., medical diagnostics, risk analysis, and bioinformatics research). A BN is a probabilistic graphical model that provides a formalism for reasoning under uncertainty, and this classification method has a high performance rate in extracting new knowledge from data. The construction of this model consists of two phases: structure learning and parameter learning. For structure learning, the K2 algorithm is one of the representative data-driven algorithms, based on a score-and-search approach. In addition, integrating expert knowledge into the structure learning process allows the highest accuracy to be obtained. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called the K2 algorithm for Parents and Children search (K2PC), with the expert-driven method for learning the structure of a BN. The evaluation of the experimental results, using well-known benchmarks, proves that our K2PC algorithm has better performance in terms of correct structure detection. A real application of our model shows its efficiency in analyzing the impact of phosphate laundry effluents on the watershed in the Gafsa area (southwestern Tunisia).
Keywords: Classification, Bayesian network, structure learning, K2 algorithm, expert knowledge, surface water analysis.
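The K2PC variant itself is not specified in the abstract, but the score-and-search idea it builds on can be sketched with the classic K2 greedy parent search under the Cooper-Herskovits score. The toy data and the two-parent cap are invented.

```python
import math
import random
from collections import Counter
from itertools import product

def k2_log_score(data, child, parents, arity):
    """Log Cooper-Herskovits (K2) score of `child` given a parent set."""
    r = arity[child]
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    score = 0.0
    for pa_cfg in product(*(range(arity[p]) for p in parents)):
        n_ij = sum(counts[(pa_cfg, k)] for k in range(r))
        score += math.lgamma(r) - math.lgamma(n_ij + r)
        score += sum(math.lgamma(counts[(pa_cfg, k)] + 1) for k in range(r))
    return score

def k2_parents(data, child, candidates, arity, max_parents=2):
    """Greedy K2 search: repeatedly add the parent that most improves the score."""
    parents = []
    best = k2_log_score(data, child, parents, arity)
    while len(parents) < max_parents:
        scored = [(k2_log_score(data, child, parents + [c], arity), c)
                  for c in candidates if c not in parents]
        if not scored:
            break
        top, c = max(scored)
        if top <= best:
            break
        parents.append(c)
        best = top
    return parents

# Toy data: X1 copies X0 with 10% noise; X2 is an independent coin flip.
random.seed(0)
data = []
for _ in range(300):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    data.append({0: x0, 1: x1, 2: random.randint(0, 1)})
print(k2_parents(data, child=1, candidates=[0, 2], arity={0: 2, 1: 2, 2: 2}))
# -> [0]: the search recovers X0 as the parent of X1
```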
10. An Advanced Nelder Mead Simplex Method for Clustering of Gene Expression Data
Authors: M. Pandi, K. Premalatha
Abstract:
The DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, clustering of gene expression data is done through an Advanced Nelder Mead (ANM) algorithm. The Nelder Mead (NM) method is designed for optimization: the vertices of a triangle are considered as the candidate solutions, and a series of operations is performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better optimization result. It produces three points, and the best among these three is used to replace the worst point. The experimental results are analyzed on optimization benchmark test functions and benchmark gene expression datasets. The results show that ANM outperforms NM on both benchmarks.
Keywords: Spread out, simplex, multi-minima, fitness function, optimization, search area, monocyte, solution, genomes.
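A hedged sketch of the spread-out idea as described in the abstract: no reflection or expansion; instead, generate three trial points around the centroid and let the best of them replace the worst vertex. How ANM actually samples the three points, and the contraction schedule used here, are assumptions.

```python
import random

def anm_minimize(f, simplex, iters=500, spread=2.0):
    """Nelder-Mead-style loop with the reflection/expansion steps replaced
    by a 'spread-out' move: three random trials in a widened neighbourhood
    of the centroid; the best trial replaces the worst vertex if better."""
    for _ in range(iters):
        simplex.sort(key=f)                     # best vertex first, worst last
        dim = len(simplex[0])
        centroid = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                    for i in range(dim)]
        trials = [[c + random.uniform(-spread, spread) for c in centroid]
                  for _ in range(3)]            # the three spread-out points
        best_trial = min(trials, key=f)
        if f(best_trial) < f(simplex[-1]):      # replace worst if improved
            simplex[-1] = best_trial
        spread *= 0.995                         # contraction schedule (assumed)
    return min(simplex, key=f)

sphere = lambda x: sum(v * v for v in x)
random.seed(5)
start = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(3)]
best = anm_minimize(sphere, start)
print([round(v, 3) for v in best], "->", round(sphere(best), 6))
```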
9. Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from observation data when using the remove-compute-restore technique. A comparison study between candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison over Northern Algeria: the Earth Gravitational Model EGM2008 and the Global Gravity Model GECO, which combines EGM2008 with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the study area have been used to compute residual data using both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results described in detail in this paper point to a slight overall advantage of the GECO global model, through both error degree variance comparison and ground-truth evaluation.
Keywords: Quasigeoid, gravity anomalies, covariance, GGM.
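The ground-truth step can be sketched: compare model height anomalies with GPS/levelling values (zeta = h - H), remove a fitted bias-and-tilt surface by least squares, and compute statistics of the residuals. All numbers below are synthetic, and the paper's adjustment model may use more parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 25
lon = rng.uniform(0, 8, n)           # degrees, a Northern-Algeria-like extent
lat = rng.uniform(32, 37, n)
# Synthetic GPS/levelling height anomalies and model values (metres).
zeta_gps = 47.0 + 0.05 * lon - 0.03 * lat + rng.normal(0, 0.08, n)
zeta_model = zeta_gps - (0.30 + 0.02 * lon) + rng.normal(0, 0.05, n)

diff = zeta_gps - zeta_model
# Fit bias + north/east tilt to the differences by least squares.
A = np.column_stack([np.ones(n), lon - lon.mean(), lat - lat.mean()])
coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)
residuals = diff - A @ coeffs

print(f"raw diff:  mean {diff.mean():.3f} m, std {diff.std():.3f} m")
print(f"after fit: mean {residuals.mean():.3f} m, std {residuals.std():.3f} m")
```

The post-fit standard deviation is the figure of merit when ranking two geopotential models against the same GPS/levelling set.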