Search results for: cloud computing privacy.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 893

623 Agent Decision using Granular Computing in Traffic System

Authors: Yasser F. Hassan, Marwa Abdeen, Mustafa Fahmy

Abstract:

In recent years, multi-agent systems have emerged as one of the most interesting architectures facilitating distributed collaboration and distributed problem solving. Each node (agent) of the network might pursue its own agenda, exploit its environment, develop its own problem-solving strategy, and establish the required communication strategies. Within each node of the network, one could encounter a diversity of problem-solving approaches. Quite commonly, the agents realize their processing at the level of information granules that is most suitable from their local points of view. Information granules can come at various levels of granularity. Each agent could exploit a certain formalism of information granulation engaging the machinery of fuzzy sets, interval analysis, or rough sets, to name a few dominant technologies of granular computing. With this in mind, a fundamental issue arises: forming effective interaction linkages between the agents so that they can fully broadcast their findings and benefit from interacting with others.

Keywords: Granular computing, rough sets, agents, traffic system.

622 Organizational Data Security in Perspective of Ownership of Mobile Devices Used by Employees for Works

Authors: B. Ferdousi, J. Bari

Abstract:

With the advancement of mobile computing, employees increasingly do their job-related work using personally owned mobile devices or organization-owned devices. The Bring Your Own Device (BYOD) model allows employees to use their own mobile devices for job-related work, while the Corporate Owned, Personally Enabled (COPE) model allows both organizations and employees to install applications onto organization-owned mobile devices used for job-related work. While there are many benefits of using mobile computing for job-related work, there are also serious concerns about different levels of threat to organizational data security. Consequently, it is crucial to know the level of threat to organizational data security in the BYOD and COPE models. It is also important to ensure that employees comply with the organizational data security policy. This paper discusses organizational data security issues from the perspective of ownership of the mobile devices used by employees, especially in the BYOD and COPE models. It appears that while the BYOD model has many benefits, there are relatively more data security risks in this model than in the COPE model. The findings also show that in both BYOD and COPE environments, a more practical approach towards achieving secure mobile computing in an organizational setting is the development of comprehensive cybersecurity policies that balance employees' need for convenience with organizational data security. The study helps to assess compliance and the risk of security breaches in the BYOD and COPE models.

Keywords: Data security, mobile computing, BYOD, COPE, cybersecurity policy, cybersecurity compliance.

621 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures

Authors: Silvina Caíno-Lores, Jesús Carretero

Abstract:

Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from the access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped in four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.

Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.

620 VLSI Design of 2-D Discrete Wavelet Transform for Area-Efficient and High-Speed Image Computing

Authors: Mountassar Maamoun, Mehdi Neggazi, Abdelhamid Meraghni, Daoud Berkani

Abstract:

This paper presents a VLSI design approach for high-speed, real-time 2-D discrete wavelet transform computing. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity and shortens the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle. As a result, very high speed is attained. The system is verified, using the JPEG2000 coefficient filters, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 M samples/s and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256×256 image size. In this way, the developed design requires reduced memory and provides very high-speed processing as well as high PSNR quality.
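For readers unfamiliar with the row/column structure that such architectures pipeline, the following minimal sketch computes one 2-D DWT level in software. It uses the Haar filter pair purely for brevity (an assumption; the paper's hardware uses the JPEG2000 (9,7) filters), but the separable filter-and-decimate pattern is the same.

```python
import numpy as np

def haar_dwt2_level(img):
    """One level of a separable 2-D Haar DWT (an illustrative stand-in for the
    (9,7) filters used in the paper). Rows are transformed first, then columns."""
    img = img.astype(float)

    def analyze(x):                      # filter + decimate along axis 1
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # low-pass branch
        hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # high-pass branch
        return lo, hi

    L, H = analyze(img)                  # row transform
    LL, LH = analyze(L.T)                # column transform of the low band
    HL, HH = analyze(H.T)                # column transform of the high band
    return LL.T, LH.T, HL.T, HH.T        # four sub-bands, each N/2 x N/2

# Example: a 256x256 image yields four 128x128 sub-bands.
bands = haar_dwt2_level(np.random.rand(256, 256))
print([b.shape for b in bands])
```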

Keywords: Discrete Wavelet Transform (DWT), Fast Convolution, FPGA, VLSI.

619 Fuzzy Set Approach to Study Appositives and Its Impact Due to Positional Alterations

Authors: E. Mike Dison, T. Pathinathan

Abstract:

Computing with Words (CWW) and Possibilistic Relational Universal Fuzzy (PRUF) are two concepts widely used to represent and measure vaguely defined natural-language phenomena. In this paper, we study the positional alteration of phrases, by which the impact of a natural-language proposition is affected and/or modified. We observe the gradations due to the sensitivity/feeling of a statement towards positional alterations. We derive the classification and the modification of the meaning of words due to positional alteration. We present the results with reference to set-theoretic interpretations.

Keywords: Appositive, computing with words, PRUF, semantic sentiment analysis, set theoretic interpretations.

618 Explorative Data Mining of Constructivist Learning Experiences and Activities with Multiple Dimensions

Authors: Patrick Wessa, Bart Baesens

Abstract:

This paper discusses the use of explorative data mining tools that allow the educator to explore new relationships between reported learning experiences and actual activities, even if there are multiple dimensions with a large number of measured items. The underlying technology is based on the so-called Compendium Platform for Reproducible Computing (http://www.freestatistics.org), which was built on top of the computational R Framework (http://www.wessa.net).

Keywords: Reproducible computing, data mining, explorative data analysis, compendium technology, computer assisted education

617 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was introduced to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
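As a rough illustration of how such an empirical formula can drive procurement, the sketch below fits a log-linear regression of measured query time against hypothetical cluster factors and then picks the configuration within budget that minimizes the predicted time. All feature names, numbers, and the log-linear form are assumptions for illustration, not the paper's actual formula or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training records measured on candidate clusters:
# [cpu_benchmark, memory_gb, n_hosts] -> observed query time (s)
X = np.array([[120, 64, 4], [120, 128, 4], [150, 64, 8],
              [150, 128, 8], [180, 256, 12], [200, 256, 16]], dtype=float)
y = np.array([310.0, 270.0, 190.0, 165.0, 110.0, 95.0])

model = LinearRegression().fit(np.log(X), np.log(y))   # log-linear empirical formula

def predict_seconds(cfg):
    return float(np.exp(model.predict(np.log(np.array([cfg], dtype=float)))[0]))

# Candidate purchase plans: (configuration, total price in the same currency unit)
candidates = [([150, 128, 8], 96_000), ([120, 64, 12], 90_000), ([200, 256, 6], 99_000)]
budget = 100_000
best = min((c for c in candidates if c[1] <= budget), key=lambda c: predict_seconds(c[0]))
print("suggested configuration:", best[0],
      "predicted time:", round(predict_seconds(best[0]), 1), "s")
```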

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance empirical formula, typical SQL query tasks.

616 Data Security in a DApp Twitter Alike on Web 3.0 With Blockchain Based Technology

Authors: Vishal Awasthi, Tanya Soni, Vigya Awasthi, Swati Singh, Shivali Verma

Abstract:

There is a growing demand for a network that grants a high level of data security and confidentiality. For this reason, the semantic web was introduced, which allows data to be shared and reused across applications while safeguarding users' privacy, and users will take back control of their data. The earlier Web 1.0 and Web 2.0 versions were built on a client-server architecture, in which there was a risk of data theft and the unconsented sale of user data. A decentralized version, known as Web 3.0 and mostly built on blockchain technology, was introduced to resolve these issues. Recent research focuses on blockchain technology and deals with the privacy, security, transparency, and innovation of decentralized applications (DApps), e.g., a Twitter clone or a WhatsApp clone. In this paper, a Twitter-alike DApp built on the Ethereum blockchain replaces traditional techniques with improved latency, throughput, and data ownership. The central principle of this DApp is a smart contract implemented using Solidity, an object-oriented, high-level language. Consequently, this will provide better quality of service, high data security, and integrity for both present and future internet technologies.

Keywords: Blockchain, DApps, Ethereum, Semantic Web, Smart Contract, Solidity.

615 Architecture Based on Dynamic Graphs for the Dynamic Reconfiguration of Farms of Computers

Authors: Carmen Navarrete, Eloy Anguiano

Abstract:

In recent years, computers have increased their computing capacity, and the networks that interconnect these machines have improved until reaching today's high data-transfer rates. The programs that nowadays try to take advantage of these new technologies cannot be written using traditional programming techniques, since most algorithms were designed to be executed on a single processor in a non-concurrent form, instead of being executed concurrently on a set of processors working and communicating through a network. This paper presents the ongoing development of a new system for the reconfiguration of groupings of computers that takes these new technologies into account.

Keywords: Dynamic network topology, resource and task allocation, parallel computing, heterogeneous computing, dynamic reconfiguration.

614 Laplace Decomposition Approximation Solution for a System of Multi-Pantograph Equations

Authors: M. A. Koroma, C. Zhan, A. F. Kamara, A. B. Sesay

Abstract:

In this work we adopt a combination of the Laplace transform and the decomposition method to find numerical solutions of a system of multi-pantograph equations. The procedure leads to a rapid convergence of the series to the exact solution after computing a few terms. The effectiveness of the method is demonstrated in some examples by obtaining the exact solution, and in others by computing the absolute error, which decreases as the number of terms of the series increases.
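The abstract does not reproduce the equations themselves; the LaTeX fragment below records one standard form of a scalar multi-pantograph equation and the Laplace decomposition recursion, as a sketch of the general setup under assumed notation rather than the paper's exact formulation.

```latex
% A standard multi-pantograph problem (assumed notation):
\[
  u'(t) = a\,u(t) + \sum_{j=1}^{m} b_j\, u(q_j t), \qquad u(0)=u_0, \quad 0 < q_j < 1 .
\]
% Laplace decomposition: write u(t) = \sum_{n \ge 0} u_n(t), apply the Laplace
% transform \mathcal{L} (so that \mathcal{L}[u'] = s\,\mathcal{L}[u] - u_0),
% and generate the series terms recursively:
\[
  \mathcal{L}[u_0] = \frac{u_0}{s}, \qquad
  \mathcal{L}[u_{n+1}] = \frac{1}{s}\Bigl( a\,\mathcal{L}[u_n]
      + \sum_{j=1}^{m} b_j\, \mathcal{L}\!\left[u_n(q_j t)\right] \Bigr).
\]
% Each term is inverted back to the time domain; the partial sums
% \sum_{n=0}^{N} u_n(t) converge rapidly to the exact solution in the examples.
```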

Keywords: Laplace decomposition, pantograph equations, exact solution, numerical solution, approximate solution.

613 Enhancing Security in Resource Sharing Using Key Holding Mechanism

Authors: M. Victor Jose, V. Seenivasagam

Abstract:

This paper describes a logical method to enhance security in grid computing and restrict the misuse of grid resources. The method is an economical and efficient one that avoids the use of special devices. The security issues, techniques, and solutions needed to provide a secure grid computing environment are described. A well-defined process for security management among resource accesses and a key holding algorithm are also proposed. In this method, identity management, access control, authorization, and authentication are handled effectively.

Keywords: Grid security, Irregular binary series, Key holding mechanism, Resource identity, Secure resource access.

612 Computing the Similarity and the Diversity in the Species Based on Cronobacter Genome

Authors: E. Al Daoud

Abstract:

The purpose of computing the similarity and the diversity among species is to trace the process of evolution, to find the relationships between the species, and to discover the unique, the special, the common, and the universal proteins. The proteins of the whole genomes of 40 species are compared with the Cronobacter genome, which is used as the reference genome. More than 3 billion pairwise alignments are performed using blastp. Several findings are introduced in this study; for example, we found 172 proteins in the Cronobacter genome that have insignificant hits in the other species, 116 proteins that are significant in all tested species with very high score values, and 129 proteins that are common in plants but have insignificant hits in mammals, birds, fishes, and insects.
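The classification itself reduces to thresholding the best blastp hit per species group, along the lines of the toy sketch below; the e-values, species names, and cut-off are hypothetical placeholders, since the paper's actual thresholds are not stated in the abstract.

```python
# Classify reference-genome proteins from their best blastp hit per species,
# using a hypothetical best-hit e-value table and significance cut-off.
E_CUTOFF = 1e-5

# protein -> {species: best e-value against that species' proteome}
best_hits = {
    "prot_0001": {"Arabidopsis thaliana": 1e-40, "Mus musculus": 0.8, "Danio rerio": 0.5},
    "prot_0002": {"Arabidopsis thaliana": 1e-60, "Mus musculus": 1e-55, "Danio rerio": 1e-50},
    "prot_0003": {},   # no significant hit anywhere -> unique to the reference genome
}
plants = {"Arabidopsis thaliana"}
animals = {"Mus musculus", "Danio rerio"}

def classify(hits):
    significant = {sp for sp, e in hits.items() if e <= E_CUTOFF}
    if not significant:
        return "unique"                      # insignificant hits in all other species
    if significant >= plants | animals:
        return "universal"                   # significant in every tested group
    if significant & plants and not (significant & animals):
        return "plant-specific"              # common in plants, absent elsewhere
    return "other"

for prot, hits in best_hits.items():
    print(prot, classify(hits))
```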

Keywords: Genome, species, blastp, conserved genes, cronobacter.

611 An Efficient Algorithm for Computing all Program Forward Static Slices

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program backward slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. The existing algorithms for computing program slices are introduced to compute a slice at a program point. In these algorithms, the program, or the model that represents the program, is traversed completely or partially once. To compute more than one slice, the same algorithm is applied for every point of interest in the program. Thus, the same program, or program representation, is traversed several times. In this paper, an algorithm is introduced to compute all forward static slices of a computer program by traversing the program representation graph once. Therefore, the introduced algorithm is useful for software engineering applications that require computing program slices at different points of a program. The program representation graph used in this paper is called Program Dependence Graph (PDG).
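Since the abstract's key point is that one graph traversal suffices for all slices, the sketch below computes every forward slice of a small, hypothetical and (for simplicity) acyclic PDG with a single memoised depth-first pass; the example statements and the acyclicity assumption are illustrative simplifications, not the paper's algorithm.

```python
# Hypothetical PDG for:  1: read(x)  2: y=x+1  3: z=x*2  4: print(y,z)  5: w=z-1
# An edge u -> v means "v depends on u", so the forward slice of u is the set of
# nodes reachable from u.
pdg = {1: {2, 3}, 2: {4}, 3: {4, 5}, 4: set(), 5: set()}

slices = {}                     # node -> its forward static slice

def forward_slice(u):
    """Memoised DFS: every node and edge is visited once overall, so all slices
    are obtained in a single traversal of the (assumed acyclic) graph."""
    if u not in slices:
        s = {u}
        for v in pdg[u]:
            s |= forward_slice(v)
        slices[u] = s
    return slices[u]

for node in pdg:
    forward_slice(node)
print(slices)                   # e.g. the forward slice of 3 is {3, 4, 5}
```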

Keywords: Program slicing, static slicing, forward slicing, program dependence graph (PDG).

610 Computing Continuous Skyline Queries without Discriminating between Static and Dynamic Attributes

Authors: Ibrahim Gomaa, Hoda M. O. Mokhtar

Abstract:

Most of the existing skyline query algorithms have focused on querying static points in static databases; however, with the expanding number of sensors, wireless communications, and mobile applications, the demand for continuous skyline queries has increased. Unlike traditional skyline queries, which only consider static attributes, continuous skyline queries include dynamic attributes as well as static ones. However, as skyline query computation is based on checking the domination of skyline points over all dimensions, both the static and the dynamic attributes have to be considered without separation. In this paper, we present an efficient algorithm for computing continuous skyline queries without discriminating between static and dynamic attributes. In brief, our algorithm proceeds as follows: First, it excludes the points that will not be in the initial skyline result; this pruning phase reduces the required number of comparisons. Second, the association between the spatial positions of data points is examined; this phase gives an idea of where changes in the result might occur and consequently enables us to efficiently update the skyline result (continuous update) rather than computing the skyline from scratch. Finally, an experimental evaluation is provided which demonstrates the accuracy, performance, and efficiency of our algorithm over other existing approaches.
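The dominance test at the core of any such algorithm treats every dimension uniformly, which is exactly what lets static and dynamic attributes be mixed. The sketch below shows a naive version: the dynamic attribute (distance to a moving user) is recomputed per timestamp and fed into the same dominance check as the static attributes. Object values and the trajectory are hypothetical, and the naive recomputation stands in for the paper's incremental update.

```python
import math

def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in one
    (smaller values are assumed to be better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical objects: (price, rating_penalty) are static; distance to a moving
# user is the dynamic attribute, recomputed per timestamp and treated uniformly.
objects = [{"id": 1, "static": (100, 2), "pos": (0, 0)},
           {"id": 2, "static": (80, 3),  "pos": (5, 5)},
           {"id": 3, "static": (120, 1), "pos": (1, 1)}]

for t, user in enumerate([(0, 1), (4, 4)]):            # user trajectory
    pts = {o["id"]: o["static"] + (math.dist(user, o["pos"]),) for o in objects}
    sky = skyline(list(pts.values()))
    print(f"t={t}: skyline ids", [oid for oid, p in pts.items() if p in sky])
```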

Keywords: Continuous query processing, dynamic database, moving object, skyline queries.

609 A Parallel Quadtree Approach for Image Compression using Wavelets

Authors: Hamed Vahdat Nejad, Hossein Deldari

Abstract:

Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. Image compression is one of the major applications of wavelet transforms in image processing. It is considered one of the most powerful methods providing a high compression ratio. However, its implementation is very time-consuming. On the other hand, parallel computing technologies are an efficient means of accelerating image compression using wavelets. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.

Keywords: Image compression, MPI, Parallel computing, Wavelets.

608 Implementation of an Improved Secure System Detection for E-passport by using EPC RFID Tags

Authors: A. Baith Mohamed, Ayman Abdel-Hamid, Kareem Youssri Mohamed

Abstract:

Current proposals for the e-passport or ID card are similar to a regular passport with the addition of a tiny contactless integrated circuit (computer chip) inserted in the back cover, which acts as a secure storage device for the same data visually displayed on the photo page of the passport. In addition, it includes a digital photograph that enables biometric comparison through the use of facial recognition technology at international borders. Moreover, the e-passport has a new interface, incorporating additional antifraud and security features. However, its problems are reliability, security, and privacy. Privacy is a serious issue since there is no encryption between the readers and the e-passport. Moreover, security issues such as authentication, data protection, and control techniques cannot be embedded in one process. In this paper, the design and prototype implementation of an improved e-passport reader is presented. The passport holder is authenticated online by using the GSM network. The GSM network is the main interface between the identification center and the e-passport reader. The communication data between the server and the e-passport reader is protected by using AES to encrypt the data while it is transferred through the GSM network. Performance measurements indicate a 19% improvement in encryption cycles versus previously reported results.
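For context, a minimal sketch of AES-protecting a record in transit is shown below, using the Python `cryptography` package and AES-GCM authenticated encryption. The mode, key handling, and field names are assumptions for illustration; the abstract only states that AES is used, not the exact scheme.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # assumed shared between server and reader
aead = AESGCM(key)

def protect(record: bytes, context: bytes) -> bytes:
    nonce = os.urandom(12)                     # 96-bit nonce, never reused with this key
    return nonce + aead.encrypt(nonce, record, context)

def unprotect(blob: bytes, context: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return aead.decrypt(nonce, ct, context)    # raises InvalidTag if tampered with

msg = protect(b"passport-holder-record", b"reader-42")
print(unprotect(msg, b"reader-42"))
```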

Keywords: RFID (Radio Frequency Identification), EPC (Electronic Product Code), ICAO (International Civil Aviation Organization), IFF (Identify Friend or Foe).

607 Classification Algorithms in Human Activity Recognition using Smartphones

Authors: Mohd Fikri Azli bin Abdullah, Ali Fahmi Perwira Negara, Md. Shohel Sayeed, Deok-Jai Choi, Kalaiarasi Sonai Muthu

Abstract:

Rapid advancement in computing technology will bring computers and humans to be seamlessly integrated in the future. The emergence of the smartphone has driven the computing era towards ubiquitous and pervasive computing. Recognizing human activity has garnered a lot of interest and has raised significant research concerns in identifying contextual information useful for human activity recognition. Not only are smartphones unobtrusive to users in daily life, they also embed built-in sensors capable of sensing contextual information about their users, supported by a wide range of network connections. In this paper, we discuss the classification algorithms used in smartphone-based human activity recognition. Existing technologies pertaining to smartphone-based research in human activity recognition are highlighted and discussed. Our paper also presents our findings and opinions to formulate ideas for improvement in current research trends. Understanding research trends will enable researchers to have a clearer research direction and a common vision of the latest smartphone-based human activity recognition area.
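A minimal end-to-end example of the classification step is sketched below: hand-crafted features are extracted from fixed-length accelerometer windows and fed to a random forest from scikit-learn. The synthetic data, the two activity labels, and the particular classifier are assumptions for illustration rather than any specific method surveyed in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def window_features(acc):
    """Simple per-window features from a (n_samples, 3) accelerometer window."""
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0),
                           [np.linalg.norm(acc, axis=1).mean()]])

# Synthetic stand-in for labelled sensor windows (0 = walking, 1 = sitting).
X = np.array([window_features(rng.normal(0, 1 + y, (128, 3)))
              for y in (0, 1) for _ in range(100)])
y = np.repeat([0, 1], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```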

Keywords: Classification algorithms, Human Activity Recognition (HAR), Smartphones

606 Reversible Binary Arithmetic for Integrated Circuit Design

Authors: D. Krishnaveni, M. Geetha Priya

Abstract:

The application of reversible logic in integrated circuits results in improved optimization of power consumption. This technology can be put to use in a variety of low-power applications such as quantum computing, optical computing, nano-technology, and Complementary Metal Oxide Semiconductor (CMOS) Very Large Scale Integrated (VLSI) design. Logic gates are the basic building blocks in the design of any logic network and thus of integrated circuits. In this paper, the reversible Dual Key Gate (DKG) and Dual Key Gate Pair (DKGP) gates, which work singly as a full adder/full subtractor, are used to realize the basic building blocks of logic circuits. A reversible full adder/subtractor and a parallel adder/subtractor are designed using other reversible gates available in the literature and compared with those based on the DKG and DKGP gates. Efficient performance of reversible logic circuits relies on the optimization of the key parameters, viz. the number of constant inputs, garbage outputs, and reversible gates. The full adder/subtractor and parallel adder/subtractor designs with the reversible DKGP and DKG gates result in the smallest number of constant inputs, garbage outputs, and reversible gates compared to the other designs. Thus, this paper provides a threshold to build more complex arithmetic systems using these reversible logic gates, leading to the enhanced performance of computing systems.
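The defining property being optimized here is that a reversible gate's truth table is a bijection, so no input information is lost. The abstract does not give the DKG/DKGP truth tables, so the small check below uses the well-known Toffoli gate and a conventional AND gate as stand-ins.

```python
from itertools import product

def is_reversible(gate, n_bits):
    """A gate is reversible iff its truth table is a bijection on n-bit vectors,
    i.e. it produces 2**n distinct outputs."""
    outputs = {gate(bits) for bits in product((0, 1), repeat=n_bits)}
    return len(outputs) == 2 ** n_bits

def toffoli(bits):            # well-known reversible gate, used as a stand-in
    a, b, c = bits
    return (a, b, c ^ (a & b))

def and_gate(bits):           # conventional AND is not reversible (loses information)
    a, b = bits
    return (a & b,)

print(is_reversible(toffoli, 3))   # True
print(is_reversible(and_gate, 2))  # False
```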

Keywords: Low power CMOS, quantum computing, reversible logic gates, full adder, full subtractor, parallel adder/subtractor, basic gates, universal gates.

605 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment

Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang

Abstract:

2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most of the world's well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications which expand the field of AI. At the present time, deep learning frameworks have been widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks might differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify the overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
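A common first step in such an evaluation is measuring per-device training throughput before scaling out. The sketch below times ResNet-50 training steps on synthetic data with PyTorch/torchvision on a single device; it is a simplified stand-in (the evaluated frameworks, the distributed launch, and real ImageNet data are deliberately omitted, and the batch size is an arbitrary choice).

```python
import time
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(num_classes=1000).to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

batch = 32
images = torch.randn(batch, 3, 224, 224, device=device)    # synthetic data
labels = torch.randint(0, 1000, (batch,), device=device)

def step():
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

for _ in range(3):                          # warm-up iterations
    step()
if device == "cuda":
    torch.cuda.synchronize()

iters = 20
t0 = time.perf_counter()
for _ in range(iters):
    step()
if device == "cuda":
    torch.cuda.synchronize()
print("images/sec:", batch * iters / (time.perf_counter() - t0))
```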

Keywords: Artificial Intelligence, machine learning, deep learning, convolutional neural networks.

604 Double Layer Polarization and Non-Linear Electroosmosis in and around a Charged Permeable Aggregate

Authors: Partha P. Gopmandal, S. Bhattacharyya

Abstract:

We have studied the migration of a charged permeable aggregate in an electrolyte under the influence of an axial electric field and a pressure gradient. The migration of the positively charged aggregate leads to a deformation of the anionic cloud around it. The hydrodynamics of the aggregate is governed by the interaction of the electroosmotic flow in and around the particle, the hydrodynamic friction, and the electric force experienced by the aggregate. We have computed the non-linear Nernst-Planck equations coupled with the Darcy-Brinkman extended Navier-Stokes equations and the Poisson equation for the electric field through a finite volume method. The permeability of the aggregate enables counterion penetration. The penetration of counterions depends on the volume charge density of the aggregate and the ionic concentration of the electrolyte at a fixed field strength. The retardation effect due to the double layer polarization increases the drag force compared to an uncharged aggregate. An increase in migration speed from the electrophoretic velocity of the aggregate produces further asymmetry in the charge cloud and reduces the electric body force exerted on the particle. The permeability of the particle has relatively little influence on the electric body force when the double layer is relatively thin. The impact of the key parameters of electrokinetics on the hydrodynamics of the aggregate is analyzed.
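For orientation, the block below writes out one standard form of the governing equations the abstract refers to (steady Nernst-Planck transport, the Poisson equation, and Darcy-Brinkman flow with an electric body force). The notation and non-dimensionalization are assumptions; the paper's formulation may differ in detail.

```latex
% Ion transport (steady Nernst--Planck) for each ionic species i:
\[
  \nabla\!\cdot\!\Big( c_i\,\mathbf{u} \;-\; D_i \nabla c_i \;-\;
      \frac{z_i e D_i}{k_B T}\, c_i \nabla \phi \Big) = 0 ,
\]
% Electric potential (Poisson equation), with \rho_{\mathrm{fix}} the fixed
% volume charge of the aggregate (nonzero only inside it):
\[
  -\,\varepsilon \nabla^2 \phi \;=\; \rho_e \;=\; \sum_i z_i e\, c_i \;+\; \rho_{\mathrm{fix}} ,
\]
% Flow in and around the permeable aggregate (Darcy--Brinkman extended momentum
% balance with the electric body force -\rho_e \nabla\phi):
\[
  \mu \nabla^2 \mathbf{u} \;-\; \frac{\mu}{K}\,\mathbf{u} \;-\; \nabla p
      \;-\; \rho_e \nabla \phi = 0, \qquad \nabla\!\cdot\!\mathbf{u} = 0 ,
\]
% where the Darcy drag term \mu\mathbf{u}/K is active only inside the porous aggregate.
```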

Keywords: Electrophoresis, Advective flow, Polarization effect, Numerical solution.

603 Building a Hierarchical, Granular Knowledge Cube

Authors: Alexander Denzler, Marcel Wehrle, Andreas Meier

Abstract:

A knowledge base stores facts and rules about the world that applications can use for the purpose of reasoning. By applying the concept of granular computing to a knowledge base, several advantages emerge. These can be harnessed by applications to improve their capabilities and performance. In this paper, the concept behind such a construct, called a granular knowledge cube, is defined, and its intended use as an instrument that manages to cope with different data types and detect knowledge domains is elaborated. Furthermore, the underlying architecture, consisting of the three layers of the storing, representing, and structuring of knowledge, is described. Finally, benefits as well as challenges of deploying it are listed alongside application types that could profit from having such an enhanced knowledge base.

Keywords: Granular computing, granular knowledge, hierarchical structuring, knowledge bases.

602 Grid Learning; Computer Grid Joins to e-Learning

Authors: A. Nassiry, A. Kardan

Abstract:

With the development of communications and web-based technologies in recent years, e-Learning has become very important for everyone and is seen as one of the most dynamic teaching methods. Grid computing is a pattern for increasing the computing power and storage capacity of a system and is based on hardware and software resources in a network with a common purpose. In this article we study grid architecture and describe its different layers; in this way, we analyze the grid layered architecture. Then we introduce a new architecture suitable for e-Learning that is based on a grid network, and for this reason we call it the Grid Learning Architecture. Various sections and layers of the suggested architecture are analyzed, especially the grid middleware layer, which has a key role. This layer is the heart of the grid learning architecture and, in fact, without this layer, e-Learning based on the grid architecture would not be feasible.

Keywords: Distributed learning, Grid Learning, Grid network, SCORM standard.

601 Evaluation of State of the Art IDS Message Exchange Protocols

Authors: Robert Koch, Mario Golling, Gabi Dreo

Abstract:

During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smart phones), wireless sensor networks and embedded devices (Internet of Things) are only some examples of the dependency of modern societies on cyber space. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on the Illinois water system prove impressively. Control systems that were once isolated are nowadays often publicly accessible - a fact that was never intended by their developers. Threats to IT systems do not care about areas of responsibility. Especially with regard to Cyber Warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and to (ii) counter the attacker in a coordinated way. Therefore, this publication performs an evaluation of state of the art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.

Keywords: Cyber Defence, Cyber Warfare, Intrusion Detection Information Exchange, Early Warning Systems, Joint Intrusion Detection, Cyber Conflict

600 The Contribution of Sulfate and Oxidized Organics in Climatically Important Ultrafine Particles at a Coral Reef Environment

Authors: P. Vaattovaara, H. B. Swan, G. B. Jones, E. Deschaseaux, B. Miljevic, A. Laaksonen, Z. D. Ristovski

Abstract:

In order to investigate the properties of coral-reef-origin secondary aerosol, and especially the contribution of secondary organic aerosol, the ethanol affinity of atmospheric nucleation mode particles (diameter < 15 nm) was measured in the Heron Reef marine environment in the South Pacific Ocean during the first coral reef aerosol characterization experiment in May-June 2011, using an ultrafine organic tandem differential mobility analyzer.

Our campaign study at Heron Reef showed that the composition of nucleation-mode-size particles (diameter = 10 nm) contains internally mixed sulfate and oxidized organic components in approximately equal proportions in sunny and still conditions around low-tide time, indicating local biogenic sources. The produced secondary compounds and aerosols have the potential to contribute to cloud condensation nuclei formation and properties that may affect local low-level cloud formation over the Great Barrier Reef (GBR). Additionally, primary marine sea-salt and organic material during windy conditions and anthropogenic/biogenic sources during continental air masses can affect the properties of these particles.

Keywords: Coral reef, DMS, particle composition, secondary organics.

599 Granulation using Clustering and Rough Set Theory and its Tree Representation

Authors: Girish Kumar Singh, Sonajharia Minz

Abstract:

Granular computing deals with the representation of information in the form of aggregates and the related methods for their transformation and analysis for problem solving. A granulation scheme based on clustering and rough set theory, with a focus on the structured conceptualization of information, is presented in this paper. Experiments with the proposed method on four labeled datasets exhibit good results with reference to the classification problem. The proposed granulation technique is semi-supervised, imbibing global as well as local information in the granulation. To represent the results of the attribute-oriented granulation, a tree structure is proposed in this paper.
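To make the rough-set side of such a scheme concrete, the sketch below forms information granules as equivalence classes over already discretized (e.g., clustered) attribute values and computes the lower and upper approximations of a target concept. The objects, attributes, and target set are toy placeholders, not the paper's data.

```python
from collections import defaultdict

# Toy granulation: objects described by discretised attribute values (e.g. cluster
# labels); the target concept X is approximated by the induced equivalence classes.
objects = {"o1": ("c1", "low"), "o2": ("c1", "low"), "o3": ("c2", "high"),
           "o4": ("c2", "high"), "o5": ("c1", "high")}
X = {"o1", "o2", "o3"}                        # concept to approximate

granules = defaultdict(set)                   # equivalence classes = information granules
for obj, description in objects.items():
    granules[description].add(obj)

lower = set().union(*(g for g in granules.values() if g <= X))   # certainly in X
upper = set().union(*(g for g in granules.values() if g & X))    # possibly in X

print("granules:", list(granules.values()))
print("lower approximation:", lower)
print("upper approximation:", upper)          # boundary region = upper - lower
```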

Keywords: Granular computing, clustering, rough sets, data mining.

598 Enabling Integration across Heterogeneous Care Networks

Authors: Federico Cabitza, Marco P. Locatelli, Marcello Sarini, Carla Simone

Abstract:

The paper shows how the CASMAS modeling language and its associated pervasive computing architecture can be used to facilitate continuity of care by providing members of patient-centered communities of care with support for cooperation and knowledge sharing through the usage of electronic documents and digital devices. We consider a scenario of clearly fragmented care to show how proper mechanisms can be defined to facilitate a better integration of practices and information across heterogeneous care networks. The scenario is described in terms of architectural components and cooperation-oriented mechanisms that make the support reactive to the evolution of the context in which these communities operate.

Keywords: Pervasive Computing, Communities of Care, Heterogeneous Care Networks, Multi-Agent System.

597 Parallel-computing Approach for FFT Implementation on Digital Signal Processor (DSP)

Authors: Yi-Pin Hsu, Shin-Yu Lin

Abstract:

An efficient parallel formulation on a digital signal processor can improve algorithm performance. The butterfly structure plays an important role in the fast Fourier transform (FFT), because its symmetric form is suitable for hardware implementation. Although it has a symmetric structure, performance is reduced by the data-dependent flow characteristic. Even though recent research on so-called novel memory reference reduction methods (NMRRM) for the FFT focuses on reducing memory references to twiddle factors, the data-dependent property still exists. In this paper, we propose a parallel-computing approach for FFT implementation on a digital signal processor (DSP) which is based on a data-independent property and still retains the property of low memory reference. The proposed method combines the final two steps of the NMRRM FFT to form a novel data-independent structure; moreover, it is well suited to multi-operation-unit digital signal processors and dual-core systems. We have applied the proposed low-memory-reference radix-2 FFT method on a TI TMS320C64x DSP. Experimental results show the method reduces clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
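For reference, the butterfly loop that such work restructures looks, in textbook scalar form, like the short radix-2 decimation-in-time FFT below; the bit-reversal and twiddle handling follow the standard algorithm rather than the proposed data-independent scheme.

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 decimation-in-time FFT (textbook form); the inner loop
    is the butterfly whose data-dependent memory accesses the paper restructures."""
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # bit-reversal permutation of the input
    a = [x[int(format(i, f"0{n.bit_length() - 1}b")[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)        # twiddle factor base
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                u = a[start + k]
                t = w * a[start + k + size // 2]
                a[start + k] = u + t                     # butterfly: sum
                a[start + k + size // 2] = u - t         # butterfly: difference
                w *= w_step
        size *= 2
    return a

print([round(abs(v), 3) for v in fft_radix2([1, 0, 0, 0, 1, 0, 0, 0])])
```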

Keywords: Parallel-computing, FFT, low-memory reference, TI DSP.

596 An Evolutionary Statistical Learning Theory

Authors: Sung-Hae Jun, Kyung-Whan Oh

Abstract:

Statistical learning theory was developed by Vapnik. It is a learning theory based on the Vapnik-Chervonenkis dimension, and it has also been used in learning models as a good analytical tool. In general, learning theories have several problems, among them local optima and over-fitting. Statistical learning theory has the same problems because the kernel type, kernel parameters, and regularization constant C are determined subjectively by the art of the researcher. So, we propose an evolutionary statistical learning theory to settle the problems of the original statistical learning theory. Our theory is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
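The core idea of replacing hand-tuned choices with an evolutionary search can be sketched with a small (mu + lambda)-style loop over an RBF SVM's C and gamma, using cross-validated accuracy as the fitness. The dataset, population sizes, and mutation scale below are arbitrary illustrative choices, and scikit-learn's SVC stands in for the paper's learning model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(log_C, log_gamma):
    """Cross-validated accuracy of an RBF SVM with the given hyperparameters."""
    clf = SVC(C=10 ** log_C, gamma=10 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()

# Evolution over (log10 C, log10 gamma): select the best parents, mutate them.
population = [rng.uniform(-2, 2, size=2) for _ in range(8)]
for generation in range(15):
    scored = sorted(population, key=lambda p: fitness(*p), reverse=True)
    parents = scored[:4]                                           # selection
    children = [p + rng.normal(0, 0.3, size=2) for p in parents]   # Gaussian mutation
    population = parents + children

best = max(population, key=lambda p: fitness(*p))
print("C=%.3g gamma=%.3g CV accuracy=%.3f"
      % (10 ** best[0], 10 ** best[1], fitness(*best)))
```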

Keywords: Evolutionary computing, Local optima, Over-fitting, Statistical learning theory

595 Benchmarking: Performance on ALPS and Formosa Clusters

Authors: Chih-Wei Hsieh, Chau-Yi Chou, Sheng-Hsiu Kuo, Tsung-Che Tsai, I-Chen Wu

Abstract:

This paper presents the benchmarking results and performance evaluation of different clusters built at the National Center for High-Performance Computing in Taiwan. The performance of the processor, memory subsystem and interconnect is a critical factor in the overall performance of high performance computing platforms. The evaluation compares different system architectures and software platforms. Most supercomputers use HPL to benchmark their system performance, in accordance with the requirements of the TOP500 list. In this paper we consider the system memory access factors that affect benchmark performance, such as processor and memory performance. We hope this work will provide useful information for the future development and construction of cluster systems.
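HPL throughput is dominated by dense matrix-matrix products, so a quick sanity check of a node's floating-point capability can be obtained with the small GEMM probe below. It is a rough numpy-based sketch using the 2*n^3 flop count for an n-by-n multiply, not a substitute for a proper HPL run.

```python
import time
import numpy as np

def time_matmul(a, b):
    t0 = time.perf_counter()
    a @ b
    return time.perf_counter() - t0

def gemm_gflops(n=2048, repeats=3):
    """Rough double-precision GEMM throughput probe: 2*n^3 flops per multiply,
    best of several runs to reduce timing noise."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    a @ b                                     # warm-up (BLAS thread spin-up, caches)
    best = min(time_matmul(a, b) for _ in range(repeats))
    return 2 * n ** 3 / best / 1e9

print(f"~{gemm_gflops():.1f} GFLOP/s sustained by this node's BLAS")
```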

Keywords: Performance Evaluation, Benchmarking and High-Performance Computing

594 RFU Based Computational Unit Design For Reconfigurable Processors

Authors: M. Aqeel Iqbal

Abstract:

Fully customized hardware-based technology provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility since any kind of change requires re-design and re-fabrication. Software-based solutions operate with software instructions, through which great flexibility is achieved from the easy development and maintenance of the software code. But this execution of instructions introduces a high overhead in performance and area consumption. In the past few decades the reconfigurable computing domain has been introduced, which overcomes the traditional trade-off between flexibility and performance and is able to achieve high performance while maintaining good flexibility. The dramatic gains in terms of chip performance and design flexibility achieved through reconfigurable computing systems are greatly dependent on the design of their computational units integrated with reconfigurable logic resources. The computational unit of any reconfigurable system plays a vital role in defining its strength. In this research paper, an RFU-based computational unit design is presented using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.

Keywords: Configuration Stream, Configuration overhead, Configuration Controller, Reconfigurable devices.
