Publications | Computer and Information Engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4322

World Academy of Science, Engineering and Technology

[Computer and Information Engineering]

Online ISSN: 1307-6892

872 MIBiClus: Mutual Information based Biclustering Algorithm

Authors: Neelima Gupta, Seema Aggarwal

Abstract:

Most biclustering/projected clustering algorithms are based either on the Euclidean distance or on the correlation coefficient, which capture only linear relationships. However, in many applications, such as gene expression data and word-document data, nonlinear relationships may exist between the objects. Mutual information between two variables provides a more general criterion for investigating dependencies amongst variables. In this paper, we improve upon our previous algorithm that uses mutual information for biclustering, in terms of both computation time and the type of clusters identified. The algorithm is able to find biclusters with mixed relationships and is faster than the previous one. To the best of our knowledge, none of the other existing biclustering algorithms have used mutual information as a similarity measure. We present experimental results on synthetic data as well as on yeast expression data. Biclusters on the yeast data were found to be biologically and statistically significant using GO Tool Box and FuncAssociate.
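
As a rough illustration of the similarity measure this work builds on (not the biclustering algorithm itself), the following sketch estimates the mutual information between two expression vectors from a 2-D histogram; the bin count and function names are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based estimate of I(X;Y) in nats for two 1-D vectors."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Toy check: a nonlinear (quadratic) dependence still yields high MI,
# even though its linear correlation is near zero.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x ** 2 + 0.1 * rng.normal(size=1000)
print(mutual_information(x, y), np.corrcoef(x, y)[0, 1])
```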

Keywords: Biclustering, mutual information.

871 Efficient System for Speech Recognition using General Regression Neural Network

Authors: Abderrahmane Amrouche, Jean Michel Rouvaen

Abstract:

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the general regression neural network (GRNN). The relative performance of the proposed model is compared to similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known discrete Hidden Markov Model (HMM-VQ), which we also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is thus a successful alternative to the other neural networks and the DHMM.
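
A minimal sketch of the nonparametric estimator underlying a GRNN (not the authors' full recognition system; feature extraction and normalization are omitted, and all names and the spread value are assumptions):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, spread=0.5):
    """General regression neural network: kernel-weighted average of the
    training targets, with a Gaussian kernel of width `spread`."""
    d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances to x
    w = np.exp(-d2 / (2.0 * spread ** 2))            # pattern-layer activations
    return (w @ Y_train) / (np.sum(w) + 1e-12)       # summation / division layers

# Toy usage: ten 3-dimensional feature vectors with one-hot class targets;
# the predicted vector can be argmax-ed to obtain a class label.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
Y = np.eye(2)[rng.integers(0, 2, size=10)]           # one-hot targets
print(grnn_predict(X, Y, X[0], spread=0.5).argmax())
```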

Keywords: Speech Recognition, General Regression Neural Network, Hidden Markov Model, Recurrent Neural Network, Arabic Digits.

870 Recognition Machine (RM) for On-line and Isolated Flight Deck Officer (FDO) Gestures

Authors: Deniz T. Sodiri, Venkat V S S Sastry

Abstract:

The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits the temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measure (Score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of RM is evaluated using four datasets: artificial (W TTest), hand motion (Yang) and FDO (tracker, vision-based). RM achieves comparable results which are in agreement with other on-line and off-line algorithms such as hidden Markov models (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.

Keywords: On-line Recognition Algorithm, Isolated Dynamic/Static Gesture Recognition, On-line Markovian/Dynamic Programming, Training in Virtual Environments.

869 Customer Knowledge and Service Development, the Web 2.0 Role in Co-production

Authors: Roberto Boselli, Mirko Cesarini, Mario Mezzanzanica

Abstract:

The paper is concerned with the relationships between SSME and ICTs and focuses on the role of Web 2.0 tools in the service development process. The research presented aims at exploring how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation and of collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of this analysis is the identification and description of a pattern characterising specific services in which users collaborate, by means of web tools, with value co-producers during the service process. The pattern of collaborative co-production concerning several categories of services, including knowledge-based services, is then discussed.

Keywords: Service Interaction Patterns, Services Science, Web 2.0 tools, Service Development Process.

868 A New IT-Convergence Service Design Framework

Authors: Hwa-Jong Kim

Abstract:

In many countries, digital city or ubiquitous city (u-City) projects have been initiated to provide digitalized economic environments to cities. Recently in Korea, Kangwon Province has started the u-Kangwon project to boost the local economy with digitalized tourism services. We analyze the limitations of the ubiquitous IT approach through the u-Kangwon case. We found that travelers are more interested in the quality of information access than in its speed. For improved service quality, we are developing an IT-convergence service design framework (ISDF). The ISDF is based on service engineering techniques and is composed of three parts: Service Design, Service Simulation, and Service Platform.

Keywords: Service design, service simulation, service platform, service design framework.

867 An Approach to Adaptive Load Balancing for RFID Middlewares

Authors: Heung Seok Chae, Jaegeol Park

Abstract:

Recently, there has been increasing interest in RFID systems, and they have been applied to various applications. Load balancing is a fundamental technique for providing scalability by moving workload from overloaded nodes to under-loaded nodes. This paper presents an approach to adaptive load balancing for RFID middlewares. The workload of an RFID middleware can vary considerably according to the location of the connected RFID readers and can change abruptly at a particular instant. The proposed approach takes these characteristics of RFID middlewares into account to provide efficient load balancing.

Keywords: RFID middleware, Adaptive load balancing.

866 A Text Clustering System based on k-means Type Subspace Clustering and Ontology

Authors: Liping Jing, Michael K. Ng, Xinhua Yang, Joshua Zhexue Huang

Abstract:

This paper presents a text clustering system developed on the basis of a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weights of keywords in each cluster, so that the important words of a cluster can be identified by their weight values. For understanding and interpreting the clustering results, a few keywords that best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to the weights calculated by our new algorithm. Then, the candidates are fed to WordNet to identify the set of noun words and to consolidate synonym and hyponym words. Experimental results show that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to k-means type algorithms, e.g., Bisecting-KMeans. Furthermore, the word extraction method is effective in selecting words that represent the topics of the clusters.
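
The paper's exact weighting step is not reproduced here; as a hedged sketch of the general idea, giving higher weight within each cluster to keywords with low within-cluster dispersion, one k-means-style weight update might look as follows (the exponential normalization and the parameter gamma are assumptions).

```python
import numpy as np

def cluster_keyword_weights(X, labels, centers, gamma=1.0):
    """For each cluster k, weight keyword j by exp(-D_kj / gamma), where D_kj is
    the within-cluster dispersion of feature j; weights are normalized per cluster."""
    k, m = centers.shape
    W = np.zeros((k, m))
    for c in range(k):
        members = X[labels == c]
        disp = np.sum((members - centers[c]) ** 2, axis=0)   # dispersion per keyword
        w = np.exp(-disp / gamma)
        W[c] = w / w.sum()
    return W

# Toy usage on a tiny term-frequency matrix (5 documents x 4 keywords).
X = np.array([[3, 0, 0, 1], [4, 1, 0, 0], [0, 0, 5, 2], [1, 0, 4, 3], [3, 0, 1, 1]], float)
labels = np.array([0, 0, 1, 1, 0])
centers = np.array([X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)])
print(cluster_keyword_weights(X, labels, centers).round(2))
```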

Keywords: Subspace Clustering, Text Mining, Feature Weighting, Cluster Interpretation, Ontology

865 Shift Invariant Support Vector Machines Face Recognition System

Authors: J. Ruiz-Pinales, J. J. Acosta-Reyes, A. Salazar-Garibay, R. Jaime-Rivas

Abstract:

In this paper, we present a new method for incorporating global shift invariance into support vector machines. Unlike other approaches, which incorporate a feature extraction stage, we first scale the image and then classify it using the modified support vector machine classifier. Shift invariance is achieved by replacing the dot products between patterns used by the SVM classifier with the maximum cross-correlation value between them. Unlike the usual approach, in which the patterns are treated as vectors, in our approach the patterns are treated as matrices (or images). Cross-correlation is computed using computationally efficient techniques such as the fast Fourier transform. The method has been tested on the ORL face database. The tests indicate that this method can improve the recognition rate of an SVM classifier.
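
A minimal sketch of the kernel modification described above, replacing the dot product by the maximum cross-correlation value computed via the FFT (the circular correlation and function names are simplifying assumptions):

```python
import numpy as np

def max_cross_correlation(a, b):
    """Kernel value used in place of the dot product: the maximum of the
    circular cross-correlation between two image patterns, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return corr.max()

# Toy check: a shifted copy of an image scores as high as the image itself.
rng = np.random.default_rng(2)
img = rng.normal(size=(32, 32))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(np.allclose(max_cross_correlation(img, shifted), max_cross_correlation(img, img)))
```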

Keywords: Face recognition, support vector machines, shift invariance, image registration.

864 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach for specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduce new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we design a Fisher discriminant based classifier for well-known steganographic algorithms: Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression step, its efficiency is independent of the quantity of hidden information.

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

863 Design Techniques and Implementation of Low Power High-Throughput Discrete Wavelet Transform Filters for JPEG 2000 Standard

Authors: Grigorios D. Dimitroulakos, N. D. Zervas, N. Sklavos, Costas E. Goutis

Abstract:

In this paper, the implementation of low-power, high-throughput convolutional filters for the one-dimensional Discrete Wavelet Transform and its inverse is presented. The analysis filters have already been used for the implementation of a high performance DWT encoder [15] with minimum memory requirements for the JPEG 2000 standard. This paper presents the design techniques and the implementation of the convolutional filters included in the JPEG 2000 standard for the forward and inverse DWT, achieving low-power operation, high performance and reduced memory accesses. Moreover, they are able to perform progressive computations so as to minimize the buffering between the decomposition and reconstruction phases. The experimental results illustrate the filters' low-power, high-throughput characteristics as well as their memory-efficient operation.

Keywords: Discrete Wavelet Transform, JPEG2000 standard, VLSI design, low-power/throughput-optimized filters.

862 A New Implementation of PCA for Fast Face Detection

Authors: Hazem M. El-Bakry

Abstract:

Principal Component Analysis (PCA) has many important applications, especially in pattern detection such as face detection and recognition. For real-time applications, the response time is therefore required to be as small as possible. In this paper, a new implementation of PCA for fast face detection is presented. The new implementation is based on cross-correlation in the frequency domain between the input image and the eigenvectors (weights). Simulation results show that the proposed implementation of PCA is faster than the conventional one.

Keywords: Fast Face Detection, PCA, Cross Correlation, Frequency Domain

861 A Symbol by Symbol Clustering Based Blind Equalizer

Authors: Kristina Georgoulakis

Abstract:

A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data. The symmetry properties of the cluster labels are subsequently utilized in order to label the clusters. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis, by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated both in linear and nonlinear channels. The performance of the proposed equalizer is compared to that of a CMA-based blind equalizer.
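
The label-assignment step based on constellation symmetry is not reproduced; the final symbol-by-symbol decision, assigning each received sample the label of the nearest cluster centre, can be sketched as follows (cluster centres and labels are assumed already estimated, and all names are assumptions):

```python
import numpy as np

def decide_symbols(received, centers, labels):
    """Symbol-by-symbol decision: each received complex sample gets the label
    of the nearest cluster centre found by the unsupervised clustering step."""
    received = np.asarray(received)[:, None]                 # shape (N, 1)
    dists = np.abs(received - np.asarray(centers)[None, :])  # (N, n_clusters)
    return [labels[i] for i in dists.argmin(axis=1)]

# Toy usage with a BPSK-like 2-cluster constellation distorted by the channel.
centers = np.array([-0.9 + 0.1j, 1.1 - 0.05j])
labels = [-1, +1]
rx = np.array([-1.0 + 0.05j, 0.95, -0.8 - 0.1j])
print(decide_symbols(rx, centers, labels))   # [-1, 1, -1]
```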

Keywords: Blind equalization, channel equalization, cluster-based equalisers.

860 Representation of Coloured Petri Net in Abductive Logic Programming (CPN-LP) and Its Application in Modeling an Intelligent Agent

Authors: T. H. Fung

Abstract:

Coloured Petri nets (CPN) have been widely adopted in various areas of Computer Science, including protocol specification, performance evaluation, distributed systems and coordination in multi-agent systems. A CPN provides a graphical representation of a system and has a strong mathematical foundation for proving various properties. This paper proposes a novel representation of a coloured Petri net using an extension of logic programming called abductive logic programming (ALP), which is purely based on classical logic. Under such a representation, an implementation of a CPN can be directly obtained, in which every inference step can be treated as a kind of equivalence-preserving transformation. We describe how to implement a CPN under such a representation using common meta-programming techniques in Prolog. We call our framework CPN-LP and illustrate its applications in modeling an intelligent agent.

Keywords: Abduction, coloured petri net, intelligent agent, logic programming.

859 Bio-Inspired Generalized Global Shape Approach for Writer Identification

Authors: Azah Kamilah Muda, Siti Mariyam Shamsuddin, Maslina Darus

Abstract:

Writer identification is one of the areas of pattern recognition that attracts many researchers, particularly in forensic and biometric applications, where writing style can be used as a biometric feature for authenticating an identity. The challenging task in writer identification is the extraction of unique features, in which the individuality of handwriting styles can be captured by a bio-inspired generalized global shape for writer identification. In this paper, the feasibility of the generalized global shape concept of complementary binding in Artificial Immune Systems (AIS) for writer identification is explored. An experiment based on the proposed framework has been conducted to prove the validity and feasibility of the proposed approach for off-line writer identification.

Keywords: Writer identification, generalized global shape, individualistic, pattern recognition.

858 An Efficient Algorithm for Computing all Program Forward Static Slices

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program backward slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. The existing algorithms for computing program slices are designed to compute a slice at a single program point. In these algorithms, the program, or the model that represents the program, is traversed completely or partially once. To compute more than one slice, the same algorithm is applied for every point of interest in the program; thus, the same program, or program representation, is traversed several times. In this paper, an algorithm is introduced to compute all forward static slices of a computer program by traversing the program representation graph once. The introduced algorithm is therefore useful for software engineering applications that require computing program slices at different points of a program. The program representation graph used in this paper is the Program Dependence Graph (PDG).
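
As a hedged sketch of the underlying notion (not the paper's single-traversal algorithm), a forward static slice for one slicing criterion can be read off a PDG by following dependence edges forward; the adjacency-list encoding and names are assumptions.

```python
from collections import deque

def forward_slice(pdg, start):
    """Forward static slice: all statements reachable from `start` along
    dependence edges of the program dependence graph (adjacency-list dict)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for succ in pdg.get(node, ()):     # statements that depend on `node`
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Toy PDG: statement 1 influences 2 and 3; 3 influences 4.
pdg = {1: [2, 3], 3: [4]}
print(sorted(forward_slice(pdg, 1)))   # -> [1, 2, 3, 4]
```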

Keywords: Program slicing, static slicing, forward slicing, program dependence graph (PDG).

857 MJPEG Real-Time Transmission in Industrial Environments Using a CBR Channel

Authors: J. Silvestre, L. Almeida, R. Marau, P. Pedreiras

Abstract:

Currently, there are many local area industrial networks that can give guaranteed bandwidth to synchronous traffic, in particular by providing CBR (Constant Bit Rate) channels, which allow improved bandwidth management. Some such networks operate over Ethernet, delivering channels with enough capacity, especially when compression is used, to integrate multimedia traffic into industrial monitoring and image processing applications with many sources. In these industrial environments, where low latency is an essential requirement, JPEG is an adequate compression technique, but it generates VBR (Variable Bit Rate) traffic. Transmitting VBR traffic over CBR channels is inefficient, and current solutions to this problem significantly increase the latency or further degrade the quality. In this paper an R(q) model is used which allows on-line calculation of the JPEG quantization factor. We obtain increased quality and a lower CBR channel requirement, with a reduced number of discarded frames and better use of the channel bandwidth.

Keywords: Industrial Networks, Multimedia.

856 Bridging the Communication Gap at NASA - A Case Study in Communities of Practice

Authors: Daria Topousis, Keri Murphy, Jeanne Holm

Abstract:

Following the loss of NASA's Space Shuttle Columbia in 2003, it was determined that problems in the agency's organization created an environment that led to the accident. One component of the proposed solution resulted in the formation of the NASA Engineering Network (NEN), a suite of information retrieval and knowledge-sharing tools. This paper describes the implementation of communities of practice, which are formed along engineering disciplines. Communities of practice enable engineers to leverage their knowledge and best practices to collaborate, to take what they learn back to their jobs and to embed it into the procedures of the agency. This case study offers insight into using traditional engineering disciplines for virtual collaboration, including lessons learned during the creation and establishment of NASA's communities.

Keywords: Collaboration, communities of practice, knowledge management, virtual teams.

855 Image Mapping with Cumulative Distribution Function for Quick Convergence of Counter Propagation Neural Networks in Image Compression

Authors: S. Anna Durai, E. Anna Saro

Abstract:

In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a Counter Propagation Neural Network, it takes a long time to converge. The reason is that a given image may contain a number of distinct gray levels with only narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and of their neighbors are mapped in such a way that the difference in gray level between a pixel and its neighbors is minimal, then the compression ratio as well as the convergence of the network can be improved. To achieve this, a Cumulative Distribution Function is estimated for the image and is used to map the image pixels. When the mapped image pixels are used, the Counter Propagation Neural Network yields a high compression ratio and converges quickly.
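
A minimal sketch of a CDF-based gray-level mapping, essentially histogram equalization; whether the paper uses exactly this mapping is not stated in the abstract, and the names are assumptions.

```python
import numpy as np

def cdf_map(image):
    """Map 8-bit gray levels through the image's empirical CDF so that the
    mapped levels are spread more uniformly before feeding the network."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist) / image.size           # empirical CDF per gray level
    lut = np.round(255 * cdf).astype(np.uint8)   # look-up table 0..255
    return lut[image]

# Toy usage on a small synthetic "dark image" concentrated in low gray levels.
rng = np.random.default_rng(3)
dark = rng.integers(0, 60, size=(64, 64), dtype=np.uint8)
print(dark.max(), cdf_map(dark).max())           # mapped levels use the full range
```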

Keywords: Correlation, Counter Propagation Neural Networks, Cumulative Distribution Function, Image compression.

854 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms known as load balancing algorithms help to achieve these goals. These algorithms fall into two basic categories: static and dynamic. Whereas static load balancing algorithms (SLB) make decisions regarding the assignment of tasks to processors at compile time, based on the average estimated values of process execution times and communication delays, dynamic load balancing algorithms (DLB) adapt to changing situations and make decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of these algorithms. In future, this work can be extended to develop an experimental environment in which to study these load balancing algorithms quantitatively on the basis of the comparative parameters.

Keywords: SLB, DLB, Host, Algorithm and Load.

853 An Intelligent System Framework for Generating Activity List of a Project Using WBS Mind map and Semantic Network

Authors: H. Iranmanesh, M. Madadi

Abstract:

The Work Breakdown Structure (WBS) is one of the most vital planning processes of project management, since it is considered the foundation of other processes such as scheduling, controlling and assigning responsibilities. In fact, the WBS or activity list is the heart of a project, and omission of a single task can lead to an irrecoverable result. There are several tools for generating a project WBS. One of the most powerful is mind mapping, which is the basis of this article. A mind map is a method for thinking together and helps a project manager stimulate the minds of project team members to generate the project WBS. Here we generate the WBS of a sample building construction project with the aid of a mind map and an artificial intelligence (AI) programming language. Since the mind map structure cannot represent data in a computerized way, we convert it to a semantic network that can be used by the computer, and then extract the final WBS from the semantic network using the Prolog programming language. This method results in a comprehensive WBS and decreases the probability of omitting project tasks.

Keywords: Expert System, Mind map, Semantic network, Work breakdown structure.

852 Web Usability: A Fuzzy Approach to the Navigation Structure Enhancement in a Website System, Case of Iranian Civil Aviation Organization Website

Authors: Hamed Qahri Saremi, Gholam Ali Montazer

Abstract:

With the proliferation of the World Wide Web, the development of web-based technologies and the growth in web content, the structure of a website becomes more complex, and web navigation becomes a critical issue for both web designers and users. In this paper we define the content and the web pages as two important and influential factors in website navigation, and we frame the enhancement of website navigation as making useful changes in the link structure of the website based on these factors. We then suggest a new method for proposing the changes, using a fuzzy approach to optimize the website architecture. Applying the proposed method to the real case of the Iranian Civil Aviation Organization (CAO) website, we discuss the results of the novel approach in the final section.

Keywords: Web content, Web navigation, Website system, Web usage mining.

851 QoS Management in the Future Internet

Authors: S. Rao, S. Khavtasi, C. Chassot, N. Van Wambeke, F. Armando, S. P. Romano, T. Castaldi

Abstract:

Talk of technological convergence has been around for almost twenty years; today, the Internet has made it possible. And this is not only a technical evolution: the way it has changed our lives is reflected in the variety of applications, services and technologies used in day-to-day life. Such benefits have imposed even more requirements on heterogeneous and unreliable IP networks. The current paper outlines the QoS management system developed in the NetQoS [1] project. It describes an overall architecture of the management system for heterogeneous networks and proposes automated multi-layer QoS management. The paper focuses on the structure of the most crucial modules of the system, which enable autonomous, multi-layer provisioning and dynamic adaptation.

Keywords: Automated QoS management, multi-layer provisioning and adaptation, QoS, QoE.

850 Construction of Intersection of Nondeterministic Finite Automata using Z Notation

Authors: Nazir Ahmad Zafar, Nabeel Sabir, Amir Ali

Abstract:

Functionalities and control behavior are both primary requirements in the design of a complex system. Automata theory plays an important role in modeling the behavior of a system. Z is an ideal notation for describing the state space of a system and then defining operations over it. Consequently, an integration of automata and Z will be an effective tool for increasing the modeling power for a complex system. Further, nondeterministic finite automata (NFA) may have different implementations, and it is therefore necessary to verify the transformation from diagrams to code. If we describe the formal specification of an NFA before implementing it, then confidence in the transformation can be increased. In this paper, we give a procedure for integrating NFA and Z. The complement of a special type of NFA is defined. Then the union of two NFAs is formalized after defining their complements. Finally, the formal construction of the intersection of NFAs is described. The specification of this relationship is analyzed and validated using the Z/EVES tool.
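
The Z/EVES formalization is not reproduced here; as a plain sketch of the construction being specified, the standard product automaton whose states are pairs of states, consider the following (the tuple encoding and names are assumptions):

```python
def nfa_intersection(nfa1, nfa2):
    """Product construction: the intersection NFA runs both machines in parallel.
    Each NFA is (states, alphabet, delta, start_set, accepting), with
    delta[(q, a)] a set of successor states."""
    q1, sigma, d1, s1, f1 = nfa1
    q2, _, d2, s2, f2 = nfa2
    states = {(a, b) for a in q1 for b in q2}
    delta = {((a, b), c): {(x, y) for x in d1.get((a, c), set())
                                  for y in d2.get((b, c), set())}
             for (a, b) in states for c in sigma}
    start = {(a, b) for a in s1 for b in s2}
    accept = {(a, b) for a in f1 for b in f2}
    return states, sigma, delta, start, accept

# Toy usage: machines accepting strings containing 'a' and containing 'b'.
A = ({0, 1}, {'a', 'b'}, {(0, 'a'): {1}, (0, 'b'): {0}, (1, 'a'): {1}, (1, 'b'): {1}}, {0}, {1})
B = ({0, 1}, {'a', 'b'}, {(0, 'b'): {1}, (0, 'a'): {0}, (1, 'a'): {1}, (1, 'b'): {1}}, {0}, {1})
print(len(nfa_intersection(A, B)[0]))   # 4 product states
```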

Keywords: Modeling, Nondeterministic finite automata, Z notation, Integration of approaches, Validation.

849 Harnessing Replication in Object Allocation

Authors: H. T. Barney, G. C. Low

Abstract:

The design of distributed systems involves partitioning the system into components or partitions and allocating these components to physical nodes. Techniques have been proposed for both the partitioning and the allocation process. However, these techniques suffer from a number of limitations. For instance, object replication has the potential to greatly improve the performance of an object-oriented distributed system, but it can be difficult to use effectively, and there are few techniques that support the developer in harnessing object replication. This paper presents a methodological technique that helps developers decide how objects should be allocated in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system.

Keywords: Allocation, Distributed Systems, Replication.

848 Evaluating some Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine (called SVM_FS) and Genetic Algorithm with SVM (GA_FS). We show that the best results are obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which involves longer vectors for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
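
As a hedged illustration of one of the evaluated criteria, Information Gain (the SVM_FS and GA_FS selectors are specific to the paper and are not sketched), the gain of a binary term feature with respect to the class labels can be computed as follows; the helper names are assumptions.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(term_present, labels):
    """IG(term) = H(class) - sum_v P(term=v) * H(class | term=v)."""
    ig = entropy(labels)
    for v in (True, False):
        mask = term_present == v
        if mask.any():
            ig -= mask.mean() * entropy(labels[mask])
    return ig

# Toy usage: a term that appears only in class-1 documents has maximal gain.
labels = np.array([0, 0, 0, 1, 1, 1])
term = np.array([False, False, False, True, True, True])
print(information_gain(term, labels))   # 1.0 bit
```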

Keywords: Feature selection, learning with kernels, support vector machine, genetic algorithms and classification.

847 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. The neural network requires input features that represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate them. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from the EEG data. First, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm is used; second, a feed-forward neural network classifier trained by a standard back-propagation algorithm is used. The results show that JADE-FNN performs better than JADE-PNN.
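
As a hedged sketch of the feature-extraction step, AR coefficients estimated for each independent component; the model order and the least-squares estimator are assumptions and may differ from the paper's method.

```python
import numpy as np

def ar_coefficients(signal, order=6):
    """Fit an autoregressive model x[n] = sum_k a_k * x[n-k] + e[n] by least
    squares and return the coefficients a_1..a_order as a feature vector."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Toy usage: features extracted from one (synthetic) independent component.
rng = np.random.default_rng(5)
component = np.cumsum(rng.normal(size=500))   # a smooth, correlated toy signal
print(ar_coefficients(component, order=6).round(3))
```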

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

846 The Application of Non-quantitative Modelling in the Analysis of a Network Warfare Environment

Authors: N. Veerasamy, JPH Eloff

Abstract:

Network warfare is an emerging concept that focuses on the network and computer based forms through which information is attacked and defended. Various computer and network security concepts thus play a role in network warfare. Due to the intricacy of the various interacting components, a model to better understand the complexity of a network warfare environment would be beneficial. Non-quantitative modeling is a useful method to better characterize the field, due to the rich ideas that can be generated based on the use of secular associations, chronological origins, linked concepts, categorizations and context specifications. This paper proposes the use of non-quantitative methods, through a morphological analysis, to better explore and define the influential conditions in a network warfare environment.

Keywords: Morphological, non-quantitative, network warfare.

845 A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems

Authors: Ghalem Belalem, Yahya Slimani

Abstract:

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in the Data Grid is characterized by a strong decentralization of data over several sites, with the objective of ensuring the availability and reliability of the data and of providing fault tolerance and scalability, which is only possible with the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because it is necessary to maintain consistency between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers and has a twofold objective: first, it makes it possible to reduce response times compared to a completely pessimistic approach; second, it improves the quality of service compared to an optimistic approach.

Keywords: Data Grid, replication, consistency, optimistic approach, pessimistic approach.

844 Performance Enhancement of Motion Estimation Using SSE2 Technology

Authors: Trung Hieu Tran, Hyo-Moon Cho, Sang-Bock Cho

Abstract:

Motion estimation is the most computationally intensive part of video processing. Many fast motion estimation algorithms have been proposed to decrease the computational complexity by reducing the number of candidate motion vectors. However, these studies address the fast search algorithms themselves, while most image and video compression is implemented in software. Therefore, the timing constraints of running these motion estimation algorithms not only challenge the video codec but can also overwhelm some processors. In this paper, the performance of motion estimation is enhanced by using Intel's Streaming SIMD Extension 2 (SSE2) technology on an Intel Pentium 4 processor.
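
The SSE2 intrinsics themselves are not shown; a scalar sketch of the full-search block matching that such instructions accelerate (sum of absolute differences as the cost) might look as follows, with all parameter names assumed. In practice the inner SAD computation maps to SIMD instructions such as PSADBW, which is where the speed-up comes from.

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, search_range=7):
    """Full-search motion estimation: return the (dy, dx) displacement of the
    reference block minimising the sum of absolute differences (SAD)."""
    n = cur_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            sad = np.abs(cur_block.astype(int) - ref_frame[y:y+n, x:x+n].astype(int)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Toy usage: the current block is a copy of a region shifted by (2, 5) in the
# reference frame, so the search finds displacement (2, 5) with SAD 0.
rng = np.random.default_rng(4)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = ref[18:34, 21:37]                       # 16x16 block located at (18, 21)
print(full_search(cur, ref, top=16, left=16))
```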

Keywords: Motion Estimation, Full Search, Three Step Search, MMX/SSE/SSE2 Technologies, SIMD.

843 Hardware Implementation of Stack-Based Replacement Algorithms

Authors: Hassan Ghasemzadeh, Sepideh Mazrouee, Hassan Goldani Moghaddam, Hamid Shojaei, Mohammad Reza Kakoee

Abstract:

Block replacement algorithms to increase the hit ratio have been used extensively in cache memory management. Among basic replacement schemes, LRU and FIFO have been shown to be effective replacement algorithms in terms of hit rates. In this paper, we introduce a flexible stack-based circuit which can be employed in the hardware implementation of both the LRU and FIFO policies. We propose a simple and efficient architecture such that stack-based replacement algorithms can be implemented without the drawbacks of the traditional architectures. The stack is modular and hence a set of stack rows can be cascaded depending on the number of blocks in each cache set. Our circuit can be implemented in conjunction with the cache controller and static/dynamic memories to form a cache system. Experimental results show that our proposed circuit provides an average improvement of 26% in storage bits, and its maximum operating frequency is increased by a factor of two.
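
The hardware stack itself is not reproduced; a behavioral sketch of the stack-based LRU policy such a circuit implements (move a hit block to the top, evict from the bottom on a miss in a full set) is shown below; the names and the 4-way set are assumptions. FIFO differs only in not reordering on hits.

```python
def lru_access(stack, block, ways=4):
    """Stack-based LRU: most recently used block at index 0, victim at the bottom.
    Returns (new_stack, hit, victim)."""
    if block in stack:                       # hit: move block to the top
        stack = [block] + [b for b in stack if b != block]
        return stack, True, None
    victim = stack[-1] if len(stack) == ways else None
    stack = [block] + stack[:ways - 1]       # miss: push on top, drop the bottom
    return stack, False, victim

# Toy usage: accesses to a 4-way set.
stack = []
for blk in ["A", "B", "C", "A", "D", "E"]:
    stack, hit, victim = lru_access(stack, blk)
    print(blk, "hit" if hit else "miss", "evicts", victim, "->", stack)
```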

Keywords: Cache Memory, Replacement Algorithms, Least Recently Used Algorithm, First In First Out Algorithm.
