Search results for: Cloud computing systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4928


4718 Iterative Methods for Computing the Weighted Minkowski Inverses of Matrices in Minkowski Space

Authors: Xiaoji Liu, Yonghui Qin

Abstract:

In this note, we consider a family of iterative formulas for computing the weighted Minkowski inverses A_{M,N} in Minkowski space. We give two kinds of iterations and the necessary and sufficient conditions for their convergence.
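
The abstract does not reproduce the iterations themselves, so, as a generic point of reference only, the following Python sketch computes a weighted Moore-Penrose inverse by the classical square-root formula and by a Newton-Schulz-type iteration started from a scaled weighted adjoint; this is the ordinary Euclidean setting with assumed positive definite weights M and N, not the paper's Minkowski-space formulas.

```python
import numpy as np

def sqrtm_spd(W):
    # Symmetric square root of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(w)) @ V.T

def weighted_pinv(A, M, N):
    # Direct reference formula: A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}.
    Ms, Ns_inv = sqrtm_spd(M), np.linalg.inv(sqrtm_spd(N))
    return Ns_inv @ np.linalg.pinv(Ms @ A @ Ns_inv) @ Ms

def newton_schulz_weighted(A, M, N, iters=60):
    # One classical iterative family: X_{k+1} = X_k (2I - A X_k),
    # started from a scaled weighted adjoint A# = N^{-1} A^T M.
    A_sharp = np.linalg.inv(N) @ A.T @ M
    X = A_sharp / np.linalg.norm(A @ A_sharp, 2)
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                      # full column rank (almost surely)
M = np.eye(5) + 0.1 * np.ones((5, 5))            # positive definite weights
N = np.eye(3)
print(np.allclose(weighted_pinv(A, M, N), newton_schulz_weighted(A, M, N), atol=1e-8))
```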

Keywords: iterative method, the Minkowski inverse, A

4717 Quantum Computing: A New Era of Computing

Authors: Jyoti Chaturvedi Gursaran

Abstract:

Nature conducts its actions in a very private manner, and classical science has made great efforts to reveal them. However, classical science can experiment only with things that can be observed directly; beyond its scope, quantum science works very well. Quantum computing is based on postulates such as the qubit, the superposition of two states, entanglement, measurement, and the evolution of states, which are briefly described in this paper. One application of quantum computing, the implementation of a novel quantum evolutionary algorithm (QEA) to automate the timetabling problem of Dayalbagh Educational Institute (Deemed University), is also presented. Constructing a good timetable is a scheduling problem: it is NP-hard, multi-constrained, complex, and a combinatorial optimization problem whose solution cannot be obtained in polynomial time. The QEA applies genetic operators on the Q-bit representation and uses a quantum-gate updating operator, introduced as a variation operator, to converge toward better solutions.
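
The abstract names the ingredients of the QEA (Q-bit individuals, observation of superpositions, and a rotation-gate update as the variation operator) without giving details, so the following is only a rough, self-contained Python sketch of that loop; the toy fitness function stands in for the timetabling objective, and all parameter values are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(theta):
    # Collapse each Q-bit: P(bit = 1) = sin^2(theta).
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def qea(fitness, n_bits, pop=20, gens=200, delta=0.05 * np.pi):
    # Q-bit individuals start in an equal superposition (theta = pi/4).
    theta = np.full((pop, n_bits), np.pi / 4)
    best_x, best_f = None, -np.inf
    for _ in range(gens):
        for i in range(pop):
            x = observe(theta[i])
            f = fitness(x)
            if f > best_f:
                best_x, best_f = x, f
            # Rotation-gate update: pull each Q-bit toward the best bitstring found so far.
            if best_x is not None:
                direction = np.where(best_x == 1, 1.0, -1.0)
                theta[i] = np.clip(theta[i] + delta * direction, 0.0, np.pi / 2)
    return best_x, best_f

# Toy usage: maximize the number of ones (a stand-in for a timetable score).
print(qea(lambda x: x.sum(), n_bits=32))
```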

Keywords: Quantum computing, qubit, superposition, entanglement, measurement of states, evolution of states, Scheduling problem, hard and soft constraints, evolutionary algorithm, quantum evolutionary algorithm.

4716 Towards Assessment of Indicators Influence on Innovativeness of Countries' Economies: Selected Soft Computing Approaches

Authors: Marta Czyżewska, Krzysztof Pancerz, Jarosław Szkoła

Abstract:

The aim of this paper is to assess the influence of several indicators determining the innovativeness of countries' economies by applying selected soft computing methods. Such methods enable us to identify correlations between indicators for the period 2006-2010. The main attention in the paper is focused on selecting proper computer tools for solving this problem. As tools supporting identification, the X-means clustering algorithm, the Apriori rule generation algorithm, as well as Self-Organizing Feature Maps (SOMs), have been selected. The paper is rather preliminary in character: we briefly describe the usefulness of the selected approaches and indicate some challenges for further research.
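
The abstract names the tools but not their configuration; as a loose illustration of the clustering step only, the sketch below runs a silhouette-scored KMeans sweep as a stand-in for X-means (which scikit-learn does not provide) on a placeholder indicator matrix. The column meanings, the number of countries, and k_max are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_indicators(X, k_max=6):
    # X: rows = countries, columns = innovativeness indicators (e.g. 2006-2010 averages).
    # Model selection over k approximates X-means behaviour via the silhouette criterion.
    best = None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best

X = np.random.default_rng(1).normal(size=(30, 8))   # placeholder indicator matrix
score, k, labels = cluster_indicators(X)
print(k, round(score, 3))
```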

Keywords: Assessment of indicators, innovativeness, soft computing.

4715 Automatic Visualization Pipeline Formation for Medical Datasets on Grid Computing Environment

Authors: Aboamama Atahar Ahmed, Muhammad Shafie Abd Latiff, Kamalrulnizam Abu Bakar, Zainul Ahmad Rajion

Abstract:

Distance visualization of large datasets often takes the direction of remote viewing and zooming of stored static images. However, the continuous increase in the size of datasets and visualization operations leads to insufficient performance on traditional desktop computers. Additionally, visualization techniques such as isosurface extraction depend on the available resources of the running machine and on the size of the datasets. Moreover, the continuous demand for computing power and the continuous growth of datasets result in an urgent need for a grid computing infrastructure. However, some issues arise in current grids, such as the resources available at client machines not being sufficient to process large datasets. On top of that, different output devices and different network bandwidths between the visualization pipeline components often produce output suitable for one machine but not for another. In this paper we investigate how grid services can be used to support remote visualization of large datasets and to break the constraint of physical co-location of resources by applying grid computing technologies. We present our grid-enabled architecture for visualizing large medical datasets (circa 5 million polygons) for remote interactive visualization on clients with modest resources.
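
The keywords mention VTK; as a hedged illustration of the kind of server-side isosurface-plus-decimation pipeline such an architecture might run before shipping geometry to a thin client (not the authors' Globus-based implementation), here is a minimal Python/VTK sketch. The synthetic wavelet source, the isovalue, and the reduction factor are assumptions standing in for a medical-volume reader and its parameters.

```python
# pip install vtk
import vtk

# Synthetic volume standing in for a medical dataset (a file reader would go here).
source = vtk.vtkRTAnalyticSource()

# Isosurface extraction -- the resource-hungry stage that gets offloaded to the grid.
contour = vtk.vtkContourFilter()
contour.SetInputConnection(source.GetOutputPort())
contour.SetValue(0, 150.0)

# Decimate the mesh so a thin client can render it interactively.
decimate = vtk.vtkDecimatePro()
decimate.SetInputConnection(contour.GetOutputPort())
decimate.SetTargetReduction(0.9)   # drop roughly 90% of the triangles
decimate.Update()

print("triangles after decimation:", decimate.GetOutput().GetNumberOfPolys())
```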

Keywords: Visualization, Grid computing, Medical datasets, visualization techniques, thin clients, Globus toolkit, VTK.

4714 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6

Authors: M. Moslehpour, S. Khorsandi

Abstract:

Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Besides its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed to automatically protect the auto-configuration process; it secures the neighbor discovery and address resolution process. To defend against threats to NDP's integrity and identity, SEND uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography. Besides the advantages of SEND, its disadvantages, such as the cost of the CGA computation and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time in self-computing and distributed-computing processes, and we focus on the impact of malicious nodes on the CGA generation time in the network. According to the results, even when malicious nodes participate in the generation process, the CGA generation time is lower than when it is computed on a single node. With a Trust Management System, detecting and isolating malicious nodes becomes easier.
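
The costly, parallelizable part of CGA generation is the brute-force search for a modifier whose Hash2 has 16*Sec leading zero bits (RFC 3972). The sketch below distributes that search over local worker processes as a stand-in for the paper's distribution across network nodes; the simplified Hash2 construction, the placeholder public key, and the chunk sizes are assumptions for illustration only.

```python
import hashlib, os
from functools import partial
from multiprocessing import Pool

SEC = 1                               # require 16*SEC leading zero bits in Hash2

def hash2_ok(pubkey, modifier):
    # Simplified RFC 3972 Hash2 test: SHA-1 over modifier || 9 zero octets || public key.
    digest = hashlib.sha1(modifier + b"\x00" * 9 + pubkey).digest()
    return int.from_bytes(digest, "big") >> (160 - 16 * SEC) == 0

def search_chunk(pubkey, start, span=1_000_000):
    # Each worker scans its own modifier range -- the part the paper distributes
    # across network nodes instead of computing on a single host.
    for m in range(start, start + span):
        modifier = m.to_bytes(16, "big")
        if hash2_ok(pubkey, modifier):
            return modifier
    return None

if __name__ == "__main__":
    pubkey = os.urandom(64)           # placeholder for the encoded public key
    starts = range(0, 16_000_000, 1_000_000)
    with Pool() as pool:
        for found in pool.imap_unordered(partial(search_chunk, pubkey), starts):
            if found is not None:
                print("modifier found:", found.hex())
                pool.terminate()
                break
```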

Keywords: NDP, IPsec, SEND, CGA, Modifier, Malicious node, Self-Computing, Distributed-Computing.

4713 The Effect of Increment in Simulation Samples on a Combined Selection Procedure

Authors: Mohammad H. Almomani, Rosmanjawati Abdul Rahman

Abstract:

Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure is a combination of Ranking and Selection and Ordinal Optimization procedures. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also examine the effect of increasing the number of simulation samples on the combined procedure. The results of a numerical illustration clearly show this effect on the proposed combined selection procedure.
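
For readers unfamiliar with the OCBA step, the following sketch implements the standard Optimal Computing Budget Allocation ratios (Chen et al.) for deciding how many additional replications each simulated system should receive; the sample means, standard deviations, and budget are toy values, and this is not the paper's combined procedure itself.

```python
import numpy as np

def ocba_allocation(means, stds, total_budget):
    # OCBA ratios for selecting the best (here: smallest-mean) design.
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                      # current best design
    delta = means - means[b]                       # differences from the best
    others = [i for i in range(len(means)) if i != b]
    ref = others[0]
    ratio = np.ones_like(means)
    for i in others:
        # N_i / N_ref = (sigma_i / delta_i)^2 / (sigma_ref / delta_ref)^2
        ratio[i] = (stds[i] * delta[ref]) ** 2 / (stds[ref] * delta[i]) ** 2
    # N_b = sigma_b * sqrt(sum_i (N_i / sigma_i)^2) over the non-best designs.
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    return np.round(total_budget * ratio / ratio.sum()).astype(int)

# Toy usage: five candidate systems, 1000 extra replications to distribute.
print(ocba_allocation([1.0, 1.2, 1.5, 2.0, 2.5], [0.3, 0.4, 0.3, 0.5, 0.4], 1000))
```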

Keywords: Indifference-Zone, Optimal Computing Budget Allocation, Ordinal Optimization, Ranking and Selection, Subset Selection.

4712 Inspection of Geometrical Integrity of Work Piece and Measurement of Tool Wear by the Use of Photo Digitizing Method

Authors: R. Alipour, F. Nadjarian, A. Alinaghizade

Abstract:

Considering the complexity of products and the new geometrical designs and tolerances that are necessary, measurement and dimensional control require modern and more precise methods. The photo digitizing method, which uses two cameras to record pictures, together with the conventional method named "cloud points" and data analysis using the ATOUS software, is known to be modern and efficient in this context. In this paper, the benefits of the photo digitizing method in evaluating samples from machining processes are put forward; for example, the assessment of surface geometrical integrity in a 5-axis milling process and the measurement of carbide tool wear in a turning process can be brought forward. Advantages of this method compared to conventional methods are also expressed.

Keywords: photo digitizing, tool wear, geometrical integrity, cloud points

4711 Trust Management for Pervasive Computing Environments

Authors: Denis Trcek

Abstract:

Trust is essential for the further and wider acceptance of contemporary e-services. It was first addressed almost thirty years ago in the Trusted Computer System Evaluation Criteria standard by the US DoD, but this and other approaches proposed in that period were actually addressing security. Roughly ten years ago, methodologies followed that addressed the trust phenomenon at its core; they were based on Bayesian statistics and its derivatives, while some approaches were based on game theory. However, trust is a manifestation of judgment and reasoning processes. It has to be dealt with in accordance with this fact and adequately supported in cyber environments. On the basis of results from the field of psychology and our own findings, a methodology called qualitative algebra has been developed, which deals with so far overlooked elements of the trust phenomenon. It complements existing methodologies and provides a basis for a practical technical solution that supports the management of trust in contemporary computing environments. Such a solution is also presented at the end of this paper.

Keywords: internet security, trust management, multi-agent systems, reasoning and judgment, modeling and simulation, qualitative algebra

4710 A Type-2 Fuzzy Model for Link Prediction in Social Network

Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi

Abstract:

Predicting links that may occur in the future, as well as missing links, in social networks is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and Type-2 fuzzy logic to predict links with respect to nodes' activity and the relationship between two nodes. Our model is tested on collaboration networks. It is found that the accuracy of prediction is significantly higher than that of the Type-1 fuzzy and crisp approaches.

Keywords: Social Network, link prediction, granular computing, Type-2 fuzzy sets.

4709 A Comparison of Different Soft Computing Models for Credit Scoring

Authors: Nnamdi I. Nwulu, Shola G. Oroja

Abstract:

It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence and soft computing based techniques are becoming increasingly preferred due to their proficiency, precision, and relative simplicity. This work presents a comparison between Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for comparison, accuracy, computational complexity and processing time are the criteria selected to evaluate both models. Furthermore, the German credit scoring dataset, a real world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
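
A minimal way to reproduce a comparison of this kind is sketched below with scikit-learn, assuming the German credit data is the copy published on OpenML under the alias "credit-g"; the preprocessing, hyperparameters, and evaluation protocol are simple defaults, not the authors' setup, and timing/complexity measurements are omitted.

```python
from sklearn.compose import make_column_transformer
from sklearn.datasets import fetch_openml
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC

# German credit data (1000 applicants, good/bad label), fetched from OpenML.
X, y = fetch_openml("credit-g", version=1, as_frame=True, return_X_y=True)
cat_cols = X.select_dtypes("category").columns
num_cols = X.select_dtypes("number").columns
prep = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), cat_cols),
    (StandardScaler(), num_cols),
)

for name, model in [("SVM", SVC()),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000))]:
    acc = cross_val_score(make_pipeline(prep, model), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} mean accuracy")
```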

Keywords: Artificial Neural Networks, Credit Scoring, Soft Computing Models, Support Vector Machines.

4708 Semantic Mobility Channel (SMC): Ubiquitous and Mobile Computing Meets the Semantic Web

Authors: José M. Cantera, Miguel Jiménez, Genoveva López, Javier Soriano

Abstract:

With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is exploited for either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Services Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates repurposing of Web contents (enhanced browsing, Web Services location and access, etc).

Keywords: Semantic web, ubiquitous and mobile computing, web content transcoding, semantic mark-up, mobile computing, middleware and services.

4707 An Efficient MIPv6 Return Routability Scheme Based on Geometric Computing

Authors: Yen-Cheng Chen, Fu-Chen Yang

Abstract:

IETF defines mobility support in IPv6, i.e. MIPv6, to allow nodes to remain reachable while moving around in the IPv6 internet. When a node moves and visits a foreign network, it is still reachable through indirect packet forwarding from its home network. This triangular routing feature provides node mobility but increases the communication latency between nodes. This deficiency can be overcome by using a Binding Update (BU) scheme, which lets nodes keep up-to-date IP addresses and communicate with each other through direct IP routing. To further protect the security of BU, a Return Routability (RR) procedure was developed. However, it has been found that the RR procedure is vulnerable to many attacks. In this paper, we propose a lightweight RR procedure based on geometric computing. In consideration of the inherent limitation of computing resources in mobile nodes, the proposed scheme is designed to minimize the cost of computation and to eliminate the overhead of state maintenance during binding updates. Compared with other CGA-based BU schemes, our scheme is more efficient and does not need nonce tables in nodes.

Keywords: Mobile IPv6, Binding update, Geometric computing.

4706 Using High Performance Computing for Online Flood Monitoring and Prediction

Authors: Stepan Kuchar, Martin Golasowski, Radim Vavrik, Michal Podhoranyi, Boris Sir, Jan Martinovic

Abstract:

The main goal of this article is to describe the online flood monitoring and prediction system Floreon+, primarily developed for the Moravian-Silesian region in the Czech Republic, and the basic process it uses for running automatic rainfall-runoff and hydrodynamic simulations along with their calibration and uncertainty modeling. It takes a long time to execute such a process sequentially, which is not acceptable in the online scenario, so the use of a high performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice River catchment is presented that shows actual durations and the gain from the parallel implementation.

Keywords: Flood prediction process, High performance computing, Online flood prediction system, Parallelization.

4705 Automatic Light Control in Domotics using Artificial Neural Networks

Authors: Carlos Machado, José A. Mendes

Abstract:

Home Automation is a field that, among other subjects, is concerned with the comfort, security and energy requirements of private homes. The configuration of automatic functions in this type of house is not always simple for its inhabitants, requiring an initial setup and regular adjustments. In this work, the ubiquitous computing vision is used: the users' action patterns are captured, recorded and used to create the context-awareness that allows the self-configuration of the home automation system. The system tries to free the users from setup adjustments as the home adapts to its inhabitants' real habits. This paper describes a completely automated process to determine the light state and act on it, taking into account the users' daily habits. An Artificial Neural Network (ANN) is used as a pattern recognition method, classifying the light state for each moment. The work presented uses data from a real house where a family is actually living.
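
As a concrete, hedged sketch of the idea (not the authors' dataset or network topology), the code below trains an MLP on synthetic (hour, ambient light, occupancy) tuples whose labels follow an invented habit rule, then queries the learned model for a new situation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for logged household data: hour of day, ambient light level,
# room-occupancy flag -> observed light state (on/off).
hours = rng.integers(0, 24, 5000)
ambient = rng.random(5000)
occupied = rng.integers(0, 2, 5000)
light_on = ((occupied == 1) & ((hours >= 19) | (hours <= 6)) & (ambient < 0.4)).astype(int)

X = np.column_stack([hours, ambient, occupied])
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, light_on)

# Query the learned habit model for "8 pm, dark room, someone present".
print(clf.predict([[20, 0.1, 1]]))
```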

Keywords: ANN, Home Automation, Neural Systems, Pattern Recognition, Ubiquitous Computing.

4704 Architecture of Large-Scale Systems

Authors: Arne Koschel, Irina Astrova, Elena Deutschkämer, Jacob Ester, Johannes Feldmann

Abstract:

In this paper various techniques related to large-scale systems are presented. First, an explanation of large-scale systems and their differences from traditional systems is given. Next, possible specifications and requirements on hardware and software are listed. Finally, examples of large-scale systems are presented.

Keywords: Distributed file systems, caching, large scale systems, MapReduce algorithm, NoSQL databases.
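
The keywords list the MapReduce algorithm; as a minimal illustration of the map/shuffle/reduce idea, here is a toy in-process word count in Python (not a distributed implementation).

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit (word, 1) pairs for one input split.
    return [(w.lower(), 1) for w in document.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

splits = ["big data on large scale systems", "large scale systems use MapReduce"]
print(reduce_phase(chain.from_iterable(map_phase(s) for s in splits)))
```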

4703 Efficient and Extensible Data Processing Framework in Ubiquitous Sensor Networks

Authors: Junghoon Lee, Gyung-Leen Park, Ho-Young Kwak, Cheol Min Kim

Abstract:

This paper presents the design and prototype implementation of an intelligent data processing framework for ubiquitous sensor networks. Much focus is put on how to handle the sensor data stream as well as the interoperability between low-level sensor data and application clients. Our framework first provides systematic middleware that mediates the interaction between the application layer and low-level sensors, in order to analyze a great volume of sensor data by filtering and integrating it to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, efficiently forwarding a specific event to the appropriate application registered in the directory service via an open interface. The prototype implementation demonstrates that our framework can host a sophisticated application on a ubiquitous sensor network and can autonomously evolve into new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.

Keywords: sensor network, intelligent farm, middleware, event detection

4702 Process-based Business Transformation through Services Computing

Authors: Sinnakkrishnan Perumal, Nitish Pandey

Abstract:

Business transformation initiatives are required by any organization to jump from its normal mode of operation to one that is suitable for changes in the environment, whether external, such as competitive pressures, regulatory requirements, or changes in the labor market, or internal, such as changes in strategy or vision, changes in capability, or changes in management. Recent advances in information technology for automating business processes have the potential to transform an organization and provide it with a sustained competitive advantage. Processes constitute the skeleton of a business; thus, for a business to exist and compete well, it is essential for that skeleton to be robust and agile. This paper details "transformation" from a business perspective, methodologies to bring about an effective transformation, process-based transformation, and the role of services computing in it. Further, it details the benefits that could be achieved through services computing.

Keywords: Business Transformation, Services Oriented Architecture, Business Processes, Process-based Transformation.

4701 Designing and Implementing a Novel Scheduler for Multiprocessor System using Genetic Algorithm

Authors: Iman Zangeneh, Mostafa Moradi, Mazyar Baranpouyan

Abstract:

Systems that use multiple processors for computing and information processing are increasing rapidly. The operating speed of these systems, compared with single-processor systems, has a very significant impact on overall performance. An important difference between single-processor and multiprocessor systems lies in the scheduling policies used to reduce the completion time of all processes. Well-known algorithms such as SPT, LPT, LSPT and RLPT exist for this scheduling problem, but none of them leads to an optimal answer. In this paper, we perform scheduling using genetic algorithms in an innovative way to finish the whole set of processes faster, and we compare the result with the algorithms mentioned above.
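
Since the abstract does not describe the encoding or operators, the following is a hedged sketch of one plausible GA for this problem: task-to-processor assignments as chromosomes, makespan as fitness, one-point crossover, a task-move mutation, and an LPT baseline for comparison. The task times, population size, generation count, and mutation rate are arbitrary assumptions.

```python
import random
random.seed(0)

TASKS = [random.randint(1, 20) for _ in range(30)]   # task execution times
M = 4                                                 # number of processors

def makespan(assignment):
    load = [0] * M
    for task, proc in zip(TASKS, assignment):
        load[proc] += task
    return max(load)

def lpt():
    # Longest Processing Time heuristic, used here only as a baseline.
    load, assign = [0] * M, [0] * len(TASKS)
    for i in sorted(range(len(TASKS)), key=lambda i: -TASKS[i]):
        p = load.index(min(load))
        assign[i], load[p] = p, load[p] + TASKS[i]
    return assign

def ga(pop_size=60, gens=200, mut=0.1):
    pop = [[random.randrange(M) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                          # elitist survival of the fittest half
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(TASKS))
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < mut:                   # mutation: move one task
                child[random.randrange(len(TASKS))] = random.randrange(M)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

print("LPT makespan:", makespan(lpt()), " GA makespan:", makespan(ga()))
```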

Keywords: Multiprocessor system, genetic algorithms, process implementation time.

4700 A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems

Authors: Ghalem Belalem, Yahya Slimani

Abstract:

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in the Data Grid is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and reliability of the data in order to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because it is necessary to maintain consistency between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers and has a dual objective: first, to reduce response times compared to a completely pessimistic approach, and second, to improve the quality of service compared to an optimistic approach.

Keywords: Data Grid, replication, consistency, optimistic approach, pessimistic approach.

4699 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis

Authors: Hyun-Ho Lee, Kee-Won Kim

Abstract:

Arithmetic operations over GF(2^m) have been extensively used in error correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit; the other operations are much more complex. Multiplication is the most important operation for cryptosystems such as the elliptic curve cryptosystem, since exponentiation, division, and multiplicative inversion can all be performed by computing multiplications iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over the finite field using a redundant basis. Based on this multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity than related multipliers: compared to the corresponding existing structures, it saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over the finite field, such as inversion and division.
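
To make the "inversion by iterated multiplication" remark concrete, here is a small software sketch of GF(2^m) arithmetic in polynomial basis with m = 8 and the AES reduction polynomial; it is only an illustration of the field operations, not the paper's redundant-basis systolic Montgomery multiplier.

```python
M = 8                      # field GF(2^8)
POLY = 0b100011011         # x^8 + x^4 + x^3 + x + 1 (the AES field, used as an example)

def gf_mul(a, b):
    # Bit-serial "multiply then reduce" over GF(2^m).
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> M:              # reduce whenever the degree reaches m
            a ^= POLY
    return result

def gf_inv(a):
    # Inversion via a^(2^m - 2), computed here by repeated multiplication.
    r = 1
    for _ in range(2 ** M - 2):
        r = gf_mul(r, a)
    return r

print(hex(gf_mul(0x57, 0x83)))        # 0xc1 in the AES field
print(gf_mul(0x53, gf_inv(0x53)))     # 1
```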

Keywords: Finite field, Montgomery multiplication, systolic array, cryptography.

4698 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technologies, both in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge is the motivation for this study, which examines the factors affecting computing students' awareness of the latest Information and Communication Technologies (ICTs). The aim of this study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model to support them, and the empirical testing of the designed model. The first objective is achieved by a review of the existing literature on technology adoption theories and models. The second objective is achieved using a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey are analyzed with the Statistical Package for the Social Sciences (SPSS) using descriptive statistics, ANOVA and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and prior conditions of the computing students and their awareness of general ICT trends and of the Digital Switch Over (DSO), a new technology which involves the change from analog to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions of the computing students considered in this study are students' perceived exposure to career guidance and students' perceived curriculum currency. The results of this study confirm that gender, ethnicity, and a high school computing course affect students' perceived curriculum currency, while high school location affects students' awareness of DSO. The results also confirm that there is a relationship between students' prior conditions and their awareness of general ICT trends and of DSO in particular.

Keywords: Education, Information Technologies, IDT, awareness.

4697 Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment

Authors: M. Bohlouli, M. Analoui

Abstract:

For complete support of Quality of Service, it is better that the environment itself predicts the resource requirements of a job by using special methods in Grid computing. Exact and correct prediction results in an exact matching of required resources with available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar executed job is retrieved from "History", and the resource requirements are predicted using statistical techniques such as linear regression or averaging. The new idea in this research is based on an active database and centralized history maintenance. Implementation and testing of the proposed architecture result in an accuracy of 96.68% in predicting the CPU usage of jobs, 91.29% for memory usage, and 89.80% for bandwidth usage.
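
The prediction step described above (attribute similarity against "History", then a statistical fit) can be sketched as follows; the job attributes, the Euclidean similarity measure, and the tiny history table are illustrative assumptions, not the Grid-HPA implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy "History" table: one row per finished job.
# Columns: [input_size_MB, n_tasks, priority]  ->  observed CPU usage (%).
history_X = np.array([[100, 4, 1], [250, 8, 2], [400, 8, 1], [800, 16, 3], [120, 4, 2]])
history_cpu = np.array([22.0, 41.0, 55.0, 88.0, 25.0])

def predict_cpu(job, k=3):
    # 1) Similarity step: pick the k most similar executed jobs (Euclidean distance
    #    over job attributes plays the role of the paper's similarity algorithm).
    dist = np.linalg.norm(history_X - job, axis=1)
    nearest = np.argsort(dist)[:k]
    # 2) Statistical step: fit a linear regression on the similar jobs only.
    model = LinearRegression().fit(history_X[nearest], history_cpu[nearest])
    return model.predict([job])[0]

print(round(predict_cpu(np.array([300, 8, 2])), 1))
```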

Keywords: Active Database, Grid Computing, Resource Requirement Prediction, Scheduling.

4696 An Algorithm for Computing the Analytic Singular Value Decomposition

Authors: Drahoslava Janovska, Vladimir Janovsky, Kunio Tanabe

Abstract:

A proof of convergence of a new continuation algorithm for computing the Analytic SVD of a large sparse parameter-dependent matrix is given. The algorithm itself was developed and numerically tested in [5].

Keywords: Analytic Singular Value Decomposition, large sparse parameter-dependent matrices, continuation algorithm of a predictor-corrector type.

4695 Evolutionary Cobreeding of Cooperative and Competitive Subcultures

Authors: Emilia Nercissians

Abstract:

Neoclassical and functionalist explanations of self organization in multiagent systems have been criticized on several accounts including unrealistic explication of overadapted agents and failure to resolve problems of externality. The paper outlines a more elaborate and dynamic model that is capable of resolving these dilemmas. An illustrative example where behavioral diversity is cobred in a repeated nonzero sum task via evolutionary computing is presented.

Keywords: evolutionary stability, externalities, neofunctionalism, prisoners' dilemma.

4694 Increasing the System Availability of Data Centers by Using Virtualization Technologies

Authors: Chris Ewe, Naoum Jamous, Holger Schrödl

Abstract:

Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company's reputation, or simply continued existence on the market. Part of these aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement, a central part of which is the non-functional properties of an IT service. System availability is one of the most important of these properties, as will be shown in this paper. To comply with availability requirements, data center operators can use virtualization technologies. A clear model for assessing the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper aims to introduce a basic model that shows these connections and considers whether the identified effects are positive or negative. Thus, this work also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.
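
As a reminder of the arithmetic any such availability model builds on (the standard steady-state availability and parallel-redundancy formulas, not the paper's model), here is a short worked sketch; the MTBF/MTTR figures and replica count are invented.

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability A = MTBF / (MTBF + MTTR).
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant(a, n):
    # n independent replicas in parallel: the service is up if at least one is up.
    return 1 - (1 - a) ** n

host = availability(mtbf_hours=2000, mttr_hours=8)        # a single physical host
print(f"single host:        {host:.5f}")
print(f"2 VM replicas:      {redundant(host, 2):.7f}")
print(f"downtime/year (h):  {(1 - redundant(host, 2)) * 8760:.3f}")
```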

Keywords: Availability, cloud computing, IT service, quality of service, service level agreement, virtualization.

4693 A Knowledge Acquisition Model Using Multi-Agents for KaaS

Authors: Dhanashree Nansaheb Kharde, Justus Selwyn

Abstract:

These days, customer satisfaction plays a vital role in any business. When a customer searches for a product, a significant amount of irrelevant information is often returned, leading to customer dissatisfaction. To provide exactly the relevant information on the searched product, we propose a model of KaaS (Knowledge as a Service), which pre-processes the information using a decision-making paradigm based on multi-agents. Information obtained from various sources is used to derive knowledge, which is linked to the cloud to capture new ideas. The main focus of this work is to acquire relevant information (knowledge) related to a product, convert this knowledge into a service for customer satisfaction, and deploy it on the cloud. To achieve these objectives we have opted to use multi-agents, which communicate and interact with each other, manipulate information, provide knowledge, and make decisions. The paper discusses KaaS as an intelligent approach to knowledge acquisition.

Keywords: Knowledge acquisition, multi-agents, intelligent user interface, ontology, intelligent agent.

4692 Exploring the Combinatorics of Motif Alignments for Accurately Computing E-values from P-values

Authors: T. Kjosmoen, T. Ryen, T. Eftestøl

Abstract:

In biological and biomedical research, motif finding tools are important for locating regulatory elements in DNA sequences. There are many such motif finding tools available, which often yield position weight matrices and significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, making them not directly comparable. One approach for comparing motifs from different tools is computing the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for computing the number of possible combinations accurately. We also show that using the wrong alignment model can give E-values that significantly diverge from their true values.
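
As a point of reference for the E-value-from-p-value conversion described above, here is a naive sketch of the simplest alignment counts for OOPS and ZOOPS (one placement per sequence, overlaps and the ANR model ignored); the paper's contribution is precisely a more careful, generic count, so these formulas should be read only as the baseline it improves on.

```python
from math import prod

def n_alignments_oops(seq_lengths, w):
    # OOPS: exactly one motif occurrence of width w per sequence.
    return prod(L - w + 1 for L in seq_lengths)

def n_alignments_zoops(seq_lengths, w):
    # ZOOPS: zero or one occurrence per sequence (the extra "+1" is the empty
    # placement).  ANR requires a more careful count and is omitted here.
    return prod(L - w + 2 for L in seq_lengths)

def e_value(p_value, n_alignments):
    # E-value as the p-value scaled by the number of possible alignments.
    return p_value * n_alignments

lengths, w = [200, 350, 180], 8
print(e_value(1e-9, n_alignments_oops(lengths, w)))
print(e_value(1e-9, n_alignments_zoops(lengths, w)))
```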

Keywords: Motif alignment, combinatorics, p-value, E-value, OOPS, ZOOPS, ANR.

4691 Implementation of Watch Dog Timer for Fault Tolerant Computing on Cluster Server

Authors: Meenakshi Bheevgade, Rajendra M. Patrikar

Abstract:

In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) for computation. Although computation speeds up after parallelization, the time required by many applications can still be large. Thus, the reliability of the cluster becomes a very important issue, and the implementation of a fault tolerance mechanism becomes essential. The difficulty of designing a fault tolerant cluster system increases with the variety of possible failures. The most important requirement is that an algorithm which handles a simple failure in a system must also tolerate more severe failures. In this paper, we implement the watchdog timer concept in a parallel environment to take care of failures. The implementation of this simple algorithm in our project helps us to take care of different types of failures; consequently, we found that the reliability of the cluster improves.
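
To illustrate the general watchdog mechanism (not the authors' cluster implementation), here is a hedged single-machine Python sketch in which a missed heartbeat triggers a recovery handler; the timeout value and the recovery action are placeholders.

```python
import threading, time

class Watchdog:
    # Minimal software watchdog: if the monitored worker stops "kicking" the
    # timer within the timeout, the recovery handler is invoked.
    def __init__(self, timeout, on_failure):
        self.timeout, self.on_failure = timeout, on_failure
        self._timer = None
        self.kick()

    def kick(self):
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_failure)
        self._timer.daemon = True
        self._timer.start()

def restart_node():
    print("node missed its heartbeat -- triggering failover/restart")

dog = Watchdog(timeout=2.0, on_failure=restart_node)
for step in range(3):
    time.sleep(1.0)      # simulated healthy work, faster than the timeout
    dog.kick()
time.sleep(3.0)          # simulated hang: no kick, so the watchdog fires
```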

Keywords: Cluster, Fault tolerant, Grid, Grid Computing System, Meta-computing.

4690 Computing Visibility Subsets in an Orthogonal Polyhedron

Authors: Jefri Marzal, Hong Xie, Chun Che Fung

Abstract:

Visibility problems are central to many computational geometry applications. One of the typical visibility problems is computing the view from a given point. In this paper, a linear time procedure is proposed to compute the visibility subsets from a corner of a rectangular prism in an orthogonal polyhedron. The proposed algorithm could be useful to solve classic 3D problems.

Keywords: Visibility, rectangular prism, orthogonal polyhedron.

4689 Dynamic Load Balancing Strategy for Grid Computing

Authors: Belabbas Yagoubi, Yahya Slimani

Abstract:

Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources. To realize this goal, several load balancing strategies and algorithms have been proposed. Most strategies were developed with a homogeneous set of sites linked by homogeneous and fast networks in mind. However, for computational grids we must address important new issues, namely heterogeneity, scalability and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical architecture of a grid.

Keywords: Grid computing, load balancing, workload, tree based model.
