Search results for: Replicas.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9

9 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy, called Enhanced Latest Access Largest Weight (ELALW), is proposed. This strategy is an enhanced version of the Latest Access Largest Weight strategy. Because the storage capacity of each Grid site is limited, replication should be used wisely, so it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when several sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on a response time estimated from the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results obtained with OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.

Keywords: Data grid, data replication, simulation, replica selection, replica placement.
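
To illustrate the selection rule described in the abstract, the following minimal Python sketch ranks candidate replica sites by an estimated response time built from transfer time, storage access latency, queued requests, and inter-node distance. The field names, units, and weighting are illustrative assumptions, not the authors' exact ELALW formulation.

```python
from dataclasses import dataclass

@dataclass
class ReplicaSite:
    name: str
    file_size_mb: float        # size of the requested replica (MB) - assumed unit
    bandwidth_mbps: float      # available bandwidth to the requesting node
    storage_latency_s: float   # storage access latency
    queued_requests: int       # replica requests waiting in the storage queue
    avg_service_time_s: float  # mean time to serve one queued request
    hops: int                  # network distance to the requesting node
    per_hop_delay_s: float = 0.01

    def estimated_response_time(self) -> float:
        """Combine the factors named in the abstract into one response-time estimate."""
        transfer = self.file_size_mb * 8.0 / self.bandwidth_mbps   # data transfer time
        queue_wait = self.queued_requests * self.avg_service_time_s
        distance = self.hops * self.per_hop_delay_s
        return transfer + self.storage_latency_s + queue_wait + distance

def select_best_replica(sites: list[ReplicaSite]) -> ReplicaSite:
    """Pick the replica location with the smallest estimated response time."""
    return min(sites, key=lambda s: s.estimated_response_time())

if __name__ == "__main__":
    candidates = [
        ReplicaSite("site-A", 500, 100, 0.02, 4, 0.5, 3),
        ReplicaSite("site-B", 500, 250, 0.05, 10, 0.5, 6),
    ]
    print(select_best_replica(candidates).name)   # site-B: faster link outweighs its queue
```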

8 Optimizing Hadoop Block Placement Policy and Cluster Blocks Distribution

Authors: Nchimbi Edward Pius, Liu Qin, Fion Yang, Zhu Hong Ming

Abstract:

The current Hadoop block placement policy does not fairly and evenly distribute replicas of blocks written to datanodes in a Hadoop cluster.

This paper presents a new solution that helps to keep the cluster in a balanced state while an HDFS client is writing data to a file in the Hadoop cluster. The solution has been implemented, and tests have been conducted to evaluate its contribution to the Hadoop Distributed File System.

It has been found that the solution lowered the global execution time taken by the Hadoop balancer to 22 percent. It has also been found that the Hadoop balancer over-replicates 1.75 and 3.3 percent of all redistributed blocks in the modified and original Hadoop clusters, respectively.

The feature that keeps the cluster in a balanced state works as a core part of the Hadoop system rather than as a utility like the traditional balancer. This is one of the significant achievements and unique aspects of the solution developed during the course of this research work.

Keywords: Balancer, Datanode, Distributed file system, Hadoop, Replicas.
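
As a rough illustration of the balanced-placement idea, the sketch below picks target datanodes for a new block's replicas by preferring the least-utilized nodes that still have room, so writes keep the cluster close to a balanced state. This is not HDFS's actual BlockPlacementPolicy (which is Java and rack-aware); the names, units, and threshold-free heuristic are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataNode:
    name: str
    used_gb: int
    capacity_gb: int

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def choose_targets(nodes: list[DataNode], block_gb: int, replication: int = 3) -> list[DataNode]:
    """Select `replication` datanodes for a new block, favoring the least-utilized
    nodes that can still hold it, so replica writes do not unbalance the cluster."""
    eligible = [n for n in nodes if n.capacity_gb - n.used_gb >= block_gb]
    targets = sorted(eligible, key=lambda n: n.utilization)[:replication]
    for n in targets:                 # account for the pending write
        n.used_gb += block_gb
    return targets

if __name__ == "__main__":
    cluster = [DataNode("dn1", 900, 1000), DataNode("dn2", 200, 1000),
               DataNode("dn3", 500, 1000), DataNode("dn4", 100, 1000)]
    print([n.name for n in choose_targets(cluster, block_gb=1)])  # ['dn4', 'dn2', 'dn3']
```

The real HDFS policy also applies rack awareness (first replica on the writer's node, the others on a remote rack); the sketch ignores topology to keep the balancing idea visible.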

7 A Web Oriented Watermarking Protocol

Authors: Franco Frattolillo, Salvatore D'Onofrio

Abstract:

This paper presents a watermarking protocol able to solve the well-known "customer's right problem" and "unbinding problem". In particular, the protocol has been purposely designed to be adopted in a web context, where users wanting to buy digital content are usually neither provided with digital certificates issued by certification authorities (CAs) nor able to autonomously perform specific security actions. Furthermore, the protocol enables users to keep their identities unexposed during web transactions and allows guilty buyers, i.e. those responsible for distributing illegal replicas, to be unambiguously identified. Finally, the protocol has been designed so that web content providers (CPs) can exploit copyright protection services supplied by web service providers (SPs) in a secure context. Thus, CPs can take advantage of complex services without having to implement them directly.

Keywords: Copyright protection, digital rights management, watermarking protocols.
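
The core traceability idea (a buyer-specific mark bound to a transaction, so an illegal replica can be traced back to a guilty buyer) can be sketched generically as below. This is not the authors' protocol: real watermark embedding and extraction, the CP/SP interaction, and the anonymity handling are omitted, and every name here is illustrative.

```python
import hashlib
import secrets

class TransactionRegistry:
    """Maps transaction fingerprints back to (pseudonymous) buyer identities."""
    def __init__(self):
        self._by_fingerprint = {}

    def new_purchase(self, buyer_pseudonym: str, content_id: str) -> str:
        """Issue a unique fingerprint for one purchase; a real protocol would
        embed it into the sold copy as a watermark."""
        nonce = secrets.token_hex(16)
        fingerprint = hashlib.sha256(
            f"{buyer_pseudonym}:{content_id}:{nonce}".encode()).hexdigest()
        self._by_fingerprint[fingerprint] = (buyer_pseudonym, content_id)
        return fingerprint

    def identify(self, extracted_fingerprint: str):
        """Given a fingerprint extracted from an illegal replica, identify the buyer."""
        return self._by_fingerprint.get(extracted_fingerprint)

if __name__ == "__main__":
    registry = TransactionRegistry()
    fp = registry.new_purchase("pseudonym-42", "content-123")
    print(registry.identify(fp))   # ('pseudonym-42', 'content-123')
```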

6 A Simple Constellation Precoding Technique over MIMO-OFDM Systems

Authors: Fuh-Hsin Hwang, Tsui-Tsai Lin, Chih-Wen Chan, Cheng-Yuan Chang

Abstract:

This paper studies the design of a simple constellation precoding scheme for a multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) system over Rayleigh fading channels, where OFDM is used to keep the diversity replicas orthogonal and to reduce ISI effects. A multi-user environment with K synchronous co-channel users is considered. The proposed scheme provides bandwidth-efficient transmission for individual users by increasing the system throughput. In comparison with existing coded MIMO-OFDM schemes, the precoding technique is designed for low implementation complexity while providing error performance comparable to the existing schemes. Analytical and simulation results are presented to demonstrate the achieved error performance.

Keywords: Coded modulation, diversity technique, OFDM, MIMO, constellation precoding.
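
As a toy illustration of constellation precoding (not the authors' specific design), the sketch below spreads each block of QPSK symbols with a unitary rotation matrix before mapping to OFDM subcarriers, so every symbol is carried over several independently fading subcarriers. The block length and rotation angle are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk(bits: np.ndarray) -> np.ndarray:
    """Map bit pairs to unit-energy QPSK symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def unitary_rotation(L: int, theta: float = np.pi / 8) -> np.ndarray:
    """A simple unitary precoder: normalized DFT matrix with per-row phase rotation."""
    dft = np.fft.fft(np.eye(L)) / np.sqrt(L)
    phases = np.diag(np.exp(1j * theta * np.arange(L)))
    return phases @ dft

def precode(symbols: np.ndarray, L: int = 4) -> np.ndarray:
    """Precode length-L blocks of constellation symbols before subcarrier mapping."""
    P = unitary_rotation(L)
    blocks = symbols.reshape(-1, L).T        # one block per column
    return (P @ blocks).T.reshape(-1)        # flattened precoded symbols

if __name__ == "__main__":
    bits = rng.integers(0, 2, size=64)
    x = precode(qpsk(bits))
    # Precoded symbols would now be assigned to OFDM subcarriers (IFFT, CP, etc.).
    print(x.shape, np.allclose(np.linalg.norm(x), np.linalg.norm(qpsk(bits))))
```

Because the precoder is unitary, the transmit power is unchanged; the diversity gain comes from spreading each symbol across the L subcarriers of its block.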

5 A General Framework for Modeling Replicated Real-Time Database

Authors: Hala Abdel hameed, Hazem M. El-Bakry, Torky Sultan

Abstract:

There are many issues that affect the modeling and design of real-time databases. One of these issues is maintaining consistency between the actual state of a real-time object in the external environment and its images as reflected by all its replicas distributed over multiple nodes. The need to improve scalability is another important issue. In this paper, we present a general framework for designing a replicated real-time database for small to medium scale systems while maintaining all timing constraints. To extend the idea to modeling a large scale database, we present a general outline that improves scalability by applying an existing static segmentation algorithm to the whole database with the intent of lowering the degree of replication; segments can then have individual degrees of replication so as to avoid excessive resource usage, and together these measures contribute to solving the scalability problem for distributed real-time database systems (DRTDBS).

Keywords: Database modeling, Distributed database, Real-time databases, Replication.
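
A minimal sketch of the scaling idea described above: the database is statically partitioned into segments, and each segment is given its own replication degree and assigned to that many nodes, instead of fully replicating the whole database. The placement heuristic and all names are assumptions, not the paper's algorithm.

```python
import itertools

def assign_segments(segments: dict[str, int], nodes: list[str]) -> dict[str, list[str]]:
    """Place each segment on `degree` distinct nodes (round-robin), where
    `segments` maps segment name -> desired replication degree."""
    node_cycle = itertools.cycle(nodes)
    placement = {}
    for name, degree in segments.items():
        degree = min(degree, len(nodes))          # cannot exceed the node count
        chosen = []
        while len(chosen) < degree:
            n = next(node_cycle)
            if n not in chosen:
                chosen.append(n)
        placement[name] = chosen
    return placement

if __name__ == "__main__":
    # Hot segments get a higher degree of replication than rarely accessed ones.
    segments = {"sensor_state": 3, "alarm_log": 2, "config": 1}
    print(assign_segments(segments, ["node1", "node2", "node3", "node4"]))
```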

4 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches

Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani

Abstract:

This paper deals with an adaptive multiuser detector for direct-sequence code-division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered under a time-varying channel environment. Detector updating is performed with two criteria: mean-square error (MSE) estimation and the minimum output energy (MOE) optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile of the multipath replicas of the user of interest. The performance of both schemes is studied under practical SNR conditions. Results show that at low SNR the MSE-based precombining LMMSE detector is better than the blind precombining LMMSE detector, whereas at higher SNR the MOE scheme gives better results.

Keywords: LMMSE, MOE, MUD.
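
The MSE-driven update mentioned in the abstract is essentially a stochastic-gradient (LMS-style) adaptation of the detector weights toward the minimum mean-square error solution. The sketch below shows that generic update only; it is not the paper's precombining LMMSE receiver or its blind MOE counterpart, and the step size and signal model are placeholders.

```python
import numpy as np

def lms_detector(received: np.ndarray, training: np.ndarray, mu: float = 0.01) -> np.ndarray:
    """Adapt filter weights w so that w^H r_k tracks the training symbol b_k,
    minimizing the mean-square error E|b_k - w^H r_k|^2.

    received : (num_symbols, N) complex received data vectors r_k
    training : (num_symbols,)   known training symbols b_k
    """
    num_symbols, N = received.shape
    w = np.zeros(N, dtype=complex)
    for r, b in zip(received, training):
        err = b - np.vdot(w, r)       # estimation error for this symbol
        w += mu * np.conj(err) * r    # stochastic-gradient (LMS) weight update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, K = 8, 2000
    h = rng.normal(size=N) + 1j * rng.normal(size=N)   # static channel signature
    b = rng.choice([-1.0, 1.0], size=K)                # BPSK training symbols
    r = np.outer(b, h) + 0.1 * (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N)))
    w = lms_detector(r, b)
    print(np.sign(np.real(np.vdot(w, r[0]))) == b[0])  # detected symbol matches
```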

3 A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems

Authors: Ghalem Belalem, Yahya Slimani

Abstract:

Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data over several sites, with the objective of ensuring the availability and reliability of the data in order to provide fault tolerance and scalability, which is only possible through replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and has a dual objective: it first makes it possible to reduce response times compared to a completely pessimistic approach, and it then improves the quality of service compared to an optimistic approach.

Keywords: Data Grid, replication, consistency, optimistic approach, pessimistic approach.
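
To make the hybrid idea concrete, here is a minimal sketch (an assumed structure, not the authors' protocol) of a three-layer replica tree in which updates are applied pessimistically (synchronously) down to a given layer and only queued for lazy, optimistic reconciliation below it. Layer numbering and the last-writer-wins rule are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    level: int                      # 0 = root layer, 2 = leaf layer
    value: int | None = None
    children: list["Node"] = field(default_factory=list)
    pending: list[int] = field(default_factory=list)   # optimistic update queue

def propagate(node: Node, value: int, pessimistic_levels: int = 2) -> None:
    """Apply an update synchronously down to `pessimistic_levels` layers,
    then only enqueue it at the lower (optimistic) layers."""
    node.value = value
    for child in node.children:
        if child.level < pessimistic_levels:
            propagate(child, value, pessimistic_levels)   # strong consistency
        else:
            child.pending.append(value)                   # reconciled later

def reconcile(node: Node) -> None:
    """Lazily apply queued updates (last writer wins) at optimistic nodes."""
    if node.pending:
        node.value = node.pending[-1]
        node.pending.clear()
    for child in node.children:
        reconcile(child)

if __name__ == "__main__":
    leaf = Node("replica-leaf", level=2)
    head = Node("cluster-head", level=1, children=[leaf])
    root = Node("root", level=0, children=[head])
    propagate(root, 42)
    print(leaf.value, leaf.pending)   # None [42] before reconciliation
    reconcile(root)
    print(leaf.value)                 # 42
```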

2 Cost-Effective Private Grid Using Object-based Grid Architecture

Authors: M. Victor Jose, V. Seenivasagam

Abstract:

This paper proposes a cost-effective private grid using an Object-based Grid Architecture (OGA). In OGA, data processing privacy and inter-communication are increased through an object-oriented concept. The limitation of the existing grid is that users can enter or leave the grid at any time without a schedule or dedicated resources. To overcome these limitations, a cost-effective private grid and appropriate algorithms are proposed. Here, each system contains two platforms: a grid platform and a local platform. The grid manager service running on a local personal computer can act as a grid resource. When a system comes online, it notifies the Monitoring and Information System (MIS), and its details are maintained in the Resource Object Table (ROT). The MIS is responsible for selecting the resource where the file or its replica should be stored. Resource storage is done within virtual single private grid nodes using random object addressing to prevent theft attacks. If any grid resource goes down, its resource ID is removed from the ROT, and resource recovery is efficiently managed by the replicas. This random addressing technique makes the grid storage appear as a single storage, and the user views the entire grid network as a single system.

Keywords: Object Grid Architecture, Grid Manager Service, Resource Object Table, Random object addressing, Object storage, Dynamic Object Update.
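
The following sketch (assumed data structures, not the paper's implementation) captures the flavor of a Resource Object Table with random object addressing: the MIS picks live resources at random for each stored object, and when a resource goes down its entry is dropped while the object remains reachable through its other replicas.

```python
import random

class ResourceObjectTable:
    """Very small model of an MIS with a Resource Object Table (ROT)."""
    def __init__(self):
        self.resources = {}     # resource_id -> set of object ids it holds
        self.locations = {}     # object_id   -> list of resource ids

    def register(self, resource_id: str) -> None:
        self.resources.setdefault(resource_id, set())

    def store(self, object_id: str, replicas: int = 2) -> list[str]:
        """Place an object on `replicas` randomly chosen live resources."""
        targets = random.sample(sorted(self.resources),
                                k=min(replicas, len(self.resources)))
        for r in targets:
            self.resources[r].add(object_id)
        self.locations[object_id] = targets
        return targets

    def resource_down(self, resource_id: str) -> None:
        """Remove a failed resource; its objects survive on their remaining replicas."""
        for obj in self.resources.pop(resource_id, set()):
            self.locations[obj].remove(resource_id)

if __name__ == "__main__":
    rot = ResourceObjectTable()
    for r in ("pc1", "pc2", "pc3"):
        rot.register(r)
    rot.store("fileA")
    rot.resource_down("pc1")
    print(rot.locations["fileA"])   # replicas still reachable on surviving nodes
```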

1 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the fashion industry's hard work and investment. While the copycats are eventually found and stopped, the damage has already been done: sales are missed, and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to discover them, and the lack of tracking technologies that can help the consumer distinguish them. Blockchain is an emerging technology that provides a distributed, encrypted, immutable, and fault-resistant ledger, and it presents a ripe technology for resolving the infringement epidemic facing the fashion industry. The significance of this study is that a new approach leveraging state-of-the-art blockchain technology coupled with artificial intelligence is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, into consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main finding of this study is that implementing this approach can delay fake-product penetration of the original product's market, giving the original product time to take advantage of the market. The shift in fake adoption results in reduced returns, which impedes the copycat market and moves the emphasis to original product innovation.

Keywords: Fashion, infringement, Blockchain, artificial intelligence, textiles supply.
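
A minimal sketch of the ledger idea (hash-chained, append-only records of product assets that a consumer can query in near real time), under the assumption of a single in-memory chain. It is not the Crypto CopyCat framework itself, and the record fields are illustrative.

```python
import hashlib
import json
import time

class ProductLedger:
    """Append-only, hash-chained registry of authentic product assets."""
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "asset": None,
                       "timestamp": 0.0}]                       # genesis block

    @staticmethod
    def _hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, asset: dict) -> str:
        """Record a product asset (e.g. design id, producer, batch) and return its tag."""
        block = {"index": len(self.chain),
                 "prev_hash": self._hash(self.chain[-1]),
                 "asset": asset,
                 "timestamp": time.time()}
        self.chain.append(block)
        return self._hash(block)

    def verify(self, asset_hash: str):
        """Consumer-side query: return the registered asset if the tag is known."""
        for block in self.chain[1:]:
            if self._hash(block) == asset_hash:
                return block["asset"]
        return None

if __name__ == "__main__":
    ledger = ProductLedger()
    tag = ledger.register({"design": "jacket-007", "producer": "BrandX"})
    print(ledger.verify(tag))          # authentic product record
    print(ledger.verify("deadbeef"))   # None -> possible fake
```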
