Search results for: On-demand Computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 692


512 Description of Unsteady Flows in the Cuboid Container

Authors: K. Horáková, K. Fraňa, V. Honzejk

Abstract:

This part of the study deals with the description of unsteady isothermal melt flow in a container of cuboid shape. The melt flow is driven by a rotating magnetic field. Input data (instantaneous velocities, grid coordinates and Lorentz forces) were obtained from an in-house CFD code (called NS-FEM3D), which uses the DDES method of computing. The flow is described by contours of the Lorentz forces and the resulting velocity field. Taylor magnetic numbers of 1·10^6, 5·10^6 and 1·10^7 were used; the flow was in the 3D turbulent regime.
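
As a point of orientation only, the sketch below evaluates the Lorentz force density f = J × B on a hypothetical grid with NumPy; it is not taken from NS-FEM3D, and the field values are random placeholders.

```python
import numpy as np

# hypothetical 10x10x10 grid with a 3-component vector at every node
J = np.random.rand(10, 10, 10, 3)   # current density [A/m^2] (placeholder values)
B = np.random.rand(10, 10, 10, 3)   # magnetic flux density [T] (placeholder values)

f = np.cross(J, B)                  # Lorentz force density f = J x B [N/m^3]
print(f.shape)                      # (10, 10, 10, 3), ready for contour plotting
```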

Keywords: In-house computing code, Lorentz forces, magnetohydrodynamics, rotating magnetic field.

511 Evaluating New Service Development Performance Based on Multigranular Linguistic Assessment

Authors: Wen-Pai Wang, Mei-Ching Tang

Abstract:

The service sector continues to grow, and the percentage of GDP accounted for by service industries keeps increasing. The growth and importance of services to an economy is not just a phenomenon of advanced economies; services now make up the majority of world gross domestic product. However, the performance evaluation process for new service development problems generally involves uncertain and imprecise data. This paper presents a 2-tuple fuzzy linguistic computing approach to deal with heterogeneous information and information loss during the integration of subjective evaluations. The proposed method, based on a group decision-making scenario, assists business managers in measuring the performance of new service development; it handles the integration of heterogeneous information and effectively avoids information loss.
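
For readers unfamiliar with the representation, the sketch below shows the standard 2-tuple linguistic model (a label index plus a symbolic translation in [-0.5, 0.5)) and a simple mean aggregation; the label set and expert ratings are hypothetical, and the paper's multigranular transformation functions are not reproduced.

```python
def to_two_tuple(beta):
    """Delta: map a value beta in [0, g] to (label index, symbolic translation)."""
    i = int(round(beta))
    return i, beta - i                      # alpha in [-0.5, 0.5)

def from_two_tuple(i, alpha):
    """Inverse Delta: recover the numeric value."""
    return i + alpha

labels = ["none", "low", "medium", "high", "perfect"]   # hypothetical scale, g = 4
ratings = [3, 2, 4, 3]                                  # hypothetical expert ratings

mean = sum(ratings) / len(ratings)                      # aggregate in [0, g]
idx, alpha = to_two_tuple(mean)
print(f"aggregated assessment: ({labels[idx]}, {alpha:+.2f})")   # (high, +0.00)
```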

Keywords: Heterogeneity, Multigranular linguistic computing, New service development, Performance evaluation.

510 An Improved Scheduling Strategy in Cloud Using Trust Based Mechanism

Authors: D. Sumathi, P. Poongodi

Abstract:

Cloud computing refers to applications delivered as services over the Internet, and to the datacenters that provide those services with hardware and systems software; the applications themselves were earlier referred to as Software as a Service (SaaS). Scheduling must cope with jobs composed of components (called tasks) and with a lack of information. In fact, for a large fraction of jobs from the machine learning, bio-computing, and image processing domains, it is possible to estimate the maximum time required by a task in the job. This study focuses on trust-based scheduling to improve cloud security by modifying the Heterogeneous Earliest Finish Time (HEFT) algorithm. It proposes TR-HEFT (Trust Reputation HEFT), which is then compared to Dynamic Load Scheduling.
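
As context for the modification, the sketch below implements the baseline HEFT heuristic (upward-rank ordering plus greedy earliest-finish-time assignment) on a hypothetical task graph; trust and reputation values, i.e. the TR-HEFT extension itself, are not modeled here.

```python
def heft(tasks, succ, comp, comm, procs):
    """Baseline HEFT: comp[t][p] is the cost of task t on processor p,
    comm[(t, s)] the communication cost between dependent tasks t and s."""
    avg = {t: sum(comp[t]) / len(comp[t]) for t in tasks}

    rank = {}
    def upward_rank(t):          # rank_u(t) = w(t) + max over successors (c + rank_u)
        if t not in rank:
            rank[t] = avg[t] + max((comm[(t, s)] + upward_rank(s) for s in succ[t]),
                                   default=0.0)
        return rank[t]

    preds = {t: [u for u in tasks if t in succ[u]] for t in tasks}
    finish, where = {}, {}
    proc_ready = {p: 0.0 for p in procs}

    for t in sorted(tasks, key=upward_rank, reverse=True):
        best = None
        for p in procs:          # pick the processor giving the earliest finish time
            ready = max((finish[u] + (0 if where[u] == p else comm[(u, t)])
                         for u in preds[t]), default=0.0)
            eft = max(ready, proc_ready[p]) + comp[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], where[t] = best
        proc_ready[where[t]] = finish[t]
    return finish, where

# hypothetical DAG: t0 feeds t1 and t2, two processors
tasks = ["t0", "t1", "t2"]
succ = {"t0": ["t1", "t2"], "t1": [], "t2": []}
comp = {"t0": [2, 3], "t1": [4, 2], "t2": [3, 3]}
comm = {("t0", "t1"): 1, ("t0", "t2"): 2}
print(heft(tasks, succ, comp, comm, procs=[0, 1]))
```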

Keywords: Software as a Service (SaaS), Trust, Heterogeneous Earliest Finish Time (HEFT) algorithm, Dynamic Load Scheduling.

509 Cognitive Radio Networks (CRN): Resource Allocation Techniques Based On DNA-inspired Computing

Authors: Santosh Kumar Singh, Krishna Chandra Roy, Vibhakar Pathak

Abstract:

Spectrum is a scarce commodity, and the spectrum scarcity faced by wireless-based service providers has led to high congestion levels. Technical inefficiencies arise from pooling: since all networks share a common pool of channels, exhausting the available channels forces networks to block services. Researchers have found that cognitive radio (CR) technology may resolve the spectrum scarcity. A CR is a self-configuring entity in a wireless network that senses its environment, tracks changes, and frequently exchanges information with its network. However, CRNs face challenges, and conditions worsen while tracking changes, i.e., when reallocating to another under-utilized channel as a primary network user arrives. In this paper, a channel (resource) reallocation technique for CRNs based on a DNA-inspired computing algorithm is proposed.
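
The DNA primitive named in the keywords is local sequence alignment; the sketch below is a plain Smith-Waterman scorer for two strings, with hypothetical scoring parameters. How the paper encodes channels and users as sequences is not reproduced here.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between strings a and b."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))   # best local similarity score
```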

Keywords: Ad hoc networks, channels reallocation, cognitive radio, DNA local sequence alignment.

508 A Multi-Criteria Evaluation Incorporating Linguistic Computing for Service Innovation Performance

Authors: Wen-Pai Wang

Abstract:

The growing influence of service industries has prompted greater attention to service operations management. However, service managers often have difficulty articulating the true effects of their service innovation. In particular, the performance evaluation process for service innovation problems generally involves uncertain and imprecise data. This paper presents a 2-tuple fuzzy linguistic computing approach to deal with heterogeneous information and information loss during the integration of subjective evaluations. The proposed method, based on a group decision-making scenario, assists business managers in measuring the performance of service innovation; it handles the integration of heterogeneous information and effectively avoids information loss.

Keywords: Group decision-making, Heterogeneity, Linguistic computing, Multi-criteria, Service innovation.

507 Dissertation by Portfolio - A Break from Traditional Approaches

Authors: Paul Crowther, Richard Hill

Abstract:

Much has been written about the difficulties students have with producing traditional dissertations, including both native English speakers (L1) and students with English as a second language (L2). The main emphasis of these papers has been on the structure of the dissertation, but in all cases, even when electronic versions are discussed, the dissertation is still in what most would regard as a traditional written form. Master of Science degrees in computing disciplines require students to gain technical proficiency and apply their knowledge to a range of scenarios. The premise of this paper is that if a dissertation is a means of showing that such a student has met the criteria for a pass, which should be based on the learning outcomes of the dissertation module, does meeting those outcomes require a student to demonstrate their skills in a solely text-based form, particularly in a highly technical research project? Could a student instead produce a series of related artifacts that form a cohesive package meeting the learning outcomes of the dissertation?

Keywords: Computing, Masters dissertation, thesis, portfolio

506 A Review on Soft Computing Technique in Intrusion Detection System

Authors: Noor Suhana Sulaiman, Rohani Abu Bakar, Norrozila Sulaiman

Abstract:

An Intrusion Detection System (IDS) is significant in network security. It detects and identifies intrusive behavior or intrusion attempts in a computer system by monitoring and analyzing network packets in real time. In recent years, intelligent algorithms applied in intrusion detection systems have attracted increasing attention with the rapid growth of network security concerns. IDS data involve a huge amount of data containing irrelevant and redundant features, which causes slow training and testing, higher resource consumption, and a poor detection rate. Since the amount of audit data that an IDS needs to examine is very large even for a small network, classification by hand is impossible. Hence, the primary objective of this review is to survey the techniques applied prior to the classification process that are suited to IDS data.
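
As one illustration of a pre-classification step of the kind surveyed, the sketch below reduces a feature set by mutual information before any classifier is trained; the synthetic data stands in for IDS audit records and is not from the review.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((200, 20))                    # 200 hypothetical connections, 20 features
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)    # label depends on only two features

selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```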

Keywords: Intrusion Detection System, security, soft computing, classification.

505 Key Concepts of 5th Generation Mobile Technology

Authors: H. Magri, N. Abghour, M. Ouzzif

Abstract:

The 5th generation of mobile networks (5G) is a term used in various research papers and projects to identify the next major phase of mobile telecommunications standards. 5G wireless networks will support higher peak data rates and lower latency, and will provide better connections with QoS guarantees. In this article, we discuss various promising technologies for 5G wireless communication systems, such as IPv6 support, the World Wide Wireless Web (WWWW), Dynamic Adhoc Wireless Networks (DAWN), Beam Division Multiple Access (BDMA), cloud computing, cognitive radio technology, and FBMC/OQAM. The paper is organized as follows: first, we give an introduction to 5G systems and present some goals and requirements of 5G; next, the basic differences between 4G and 5G are given; we then discuss the key technology innovations of 5G systems and conclude in the last section.

Keywords: WWWW, BDMA, DAWN, 5G, 4G, IPv6, Cloud Computing, cognitive radio, FBMC/OQAM.

504 Combine a Population-based Incremental Learning with Artificial Immune System for Intrusion Detection System

Authors: Jheng-Long Wu, Pei-Chann Chang, Hsuan-Ming Chen

Abstract:

This research focuses on the development of an intrusion detection system (IDS) using an artificial immune system (AIS) with population-based incremental learning (PBIL). An AIS has a powerful capability to distinguish and eliminate antigens when they intrude into the human body, while PBIL adjusts new learning based on past learning experience. Therefore, we propose an intrusion detection system called PBIL-AIS, which combines the two evolutionary computing approaches of PBIL and AIS. In the AIS part we design three mechanisms, namely clonal selection, negative selection and antibody levels, to intensify AIS performance. Experimental results show that our PBIL-AIS IDS achieves high accuracy when an intrusion connection attacks.
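
For reference, the sketch below shows the core PBIL probability-vector update that the approach builds on, with a hypothetical one-max fitness; the AIS mechanisms (clonal selection, negative selection, antibody levels) are not modeled here.

```python
import random

def pbil(n_bits=10, pop_size=20, generations=50, lr=0.1):
    p = [0.5] * n_bits                                   # probability vector
    for _ in range(generations):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop, key=sum)                         # hypothetical fitness: count of ones
        p = [(1 - lr) * p[i] + lr * best[i] for i in range(n_bits)]  # shift toward best
    return p

print([round(x, 2) for x in pbil()])                     # probabilities drift toward 1.0
```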

Keywords: Artificial immune system, intrusion detection, population-based incremental learning, evolution computing.

503 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As there are many different education curriculums in Abu Dhabi to meet expats' needs, there are different requirements for registration and success, as well as different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer curriculums are not placed in the right year group, because the start and end dates of each academic year and the date-of-birth cutoff for each year group differ between curriculums; as a result, some students are either younger or older than the rest of their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout their academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology, integrated with Artificial Intelligence techniques from machine learning, to aid in a smooth transition when assigning students to their year groups, to provide leveling and differentiation information for students who relocate from one education curriculum to another, and to store and access student data from anywhere throughout their academic journey.
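
Purely to illustrate the placement step such a framework automates, the sketch below derives a year group from a date of birth and a curriculum-specific cutoff date; the cutoff dates and the age-to-year mapping are hypothetical, not taken from the paper.

```python
from datetime import date

CUTOFFS = {                       # hypothetical (month, day) cutoffs per curriculum
    "British": (9, 1),
    "American": (9, 30),
    "Indian": (4, 1),
}

def year_group(dob, curriculum, academic_year_start):
    month, day = CUTOFFS[curriculum]
    cutoff = date(academic_year_start, month, day)
    age_at_cutoff = (cutoff - dob).days // 365
    return age_at_cutoff - 4      # hypothetical mapping: Year 1 starts at age 5

print(year_group(date(2015, 6, 20), "British", 2023))    # -> 4
```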

Keywords: Admissions, algorithms, cloud computing, differentiation, fog computing, leveling, machine learning.

502 MONARC: A Case Study on Simulation Analysis for LHC Activities

Authors: Ciprian Dobre

Abstract:

The scale, complexity and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing this data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework as a design and modeling tool for large-scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capability of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis within the CMS experiment.

Keywords: Modeling and simulation, evaluation, large scale distributed systems, LHC experiments, CMS.

501 Formosa3: A Cloud-Enabled HPC Cluster in NCHC

Authors: Chin-Hung Li, Te-Ming Chen, Ying-Chuan Chen, Shuen-Tai Wang

Abstract:

This paper proposes a new approach to offering a private cloud service in HPC clusters. In particular, our approach relies on automatically scheduling users' customized environment requests as normal jobs in the batch system. After the virtualization request jobs finish, the guest operating systems are dismissed so that the compute nodes are released again for computing. We present initial work on the innovative integration of an HPC batch system and virtualization tools, aiming at coexistence with the minimal interference required by a traditional HPC cluster. Given the design of the initial infrastructure, the proposed effort has the potential to positively impact the synergy model. The experimental results show that the goal of provisioning a customized cluster environment can indeed be fulfilled using virtual machines, and that efficiency can be improved with proper setup and arrangement.

Keywords: Cloud Computing, HPC Cluster, Private Cloud, Virtualization

500 Optimal All-to-All Personalized Communication in All-Port Tori

Authors: Liu Gang, Gu Nai-jie, Bi Kun, Tu Kun, Dong Wan-li

Abstract:

All-to-all personalized communication, also known as complete exchange, is one of the densest communication patterns in parallel computing. In this paper, we propose new indirect algorithms for complete exchange on all-port rings and tori. The new algorithms fully utilize all communication links and transmit messages along shortest paths, completely achieving the theoretical lower bounds on message transmission, which have not been achieved by other existing indirect algorithms. For a 2D r × c (r % c) all-port torus, the algorithm has optimal transmission cost and O(c) message startup cost. In addition, the proposed algorithms accommodate non-power-of-two tori, where the number of nodes in each dimension need not be a power of two or a square. Finally, the algorithms are conceptually simple and symmetrical for every message and every node, so that they can be easily implemented and achieve the optimum in practice.
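
In practice the complete-exchange pattern is exposed by MPI as an all-to-all collective; the sketch below (assuming mpi4py and an MPI runtime are available) shows the pattern itself, not the paper's torus-specific indirect algorithm.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# every rank prepares one personalized integer for every destination rank
sendbuf = np.array([rank * 100 + dst for dst in range(size)], dtype='i')
recvbuf = np.empty(size, dtype='i')
comm.Alltoall(sendbuf, recvbuf)          # afterwards recvbuf[src] == src * 100 + rank

print(f"rank {rank} received {recvbuf.tolist()}")
# run with: mpiexec -n 4 python alltoall_sketch.py
```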

Keywords: Complete exchange, collective communication, all-to-all personalized communication, parallel computing, wormhole routing, torus.

499 RANFIS: Rough Adaptive Neuro-Fuzzy Inference System

Authors: Sandeep Chandana, Rene V. Mayorga

Abstract:

The paper presents a new hybridization methodology involving neural, fuzzy and rough computing. A rough-sets-based approximation technique is proposed, built on a certain neuro-fuzzy architecture. A new rough neuron composition, consisting of a combination of a lower-bound neuron and a boundary neuron, is also described. The conventional error convergence of back propagation is replaced by a new framework based on an 'output excitation factor' and an inverse input transfer function. The paper also presents a brief comparison of the performance of existing rough neural networks and the ANFIS architecture against the proposed methodology. It can be observed that the rough-approximation-based neuro-fuzzy architecture is superior to its counterparts.
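
As background for the lower-bound and boundary neurons, the sketch below computes rough-set lower and upper approximations and the boundary region for a hypothetical universe partitioned into equivalence classes; the RANFIS architecture itself is not reproduced.

```python
def approximations(partition, target):
    """partition: equivalence classes of the universe; target: the set X."""
    lower = {x for c in partition if c <= target for x in c}    # classes inside X
    upper = {x for c in partition if c & target for x in c}     # classes meeting X
    return lower, upper, upper - lower                          # boundary region

classes = [{1, 2}, {3, 4}, {5}]          # hypothetical indiscernibility classes
X = {1, 2, 3}
lower, upper, boundary = approximations(classes, X)
print(lower, upper, boundary)            # {1, 2}  {1, 2, 3, 4}  {3, 4}
```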

Keywords: Boundary neuron, neuro-fuzzy, output excitation factor, RANFIS, rough approximation, rough neural computing.

498 Software Effort Estimation Using Soft Computing Techniques

Authors: Parvinder S. Sandhu, Porush Bassi, Amanpreet Singh Brar

Abstract:

Various models have been derived by studying a large number of completed software projects from various organizations and applications, to explore how project size maps onto project effort. However, there is still a need to improve the prediction accuracy of these models. Since a neuro-fuzzy based system is able to approximate non-linear functions with greater precision, a neuro-fuzzy system is used as a soft computing approach to generate a model by formulating the relationship based on its training. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models mentioned in the literature.
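
The four baseline models are closed-form size-to-effort equations; the sketch below uses their commonly cited forms (effort in person-months, size in KLOC). The coefficients are as usually quoted in the effort-estimation literature and should be checked against the original sources.

```python
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # variant quoted for KLOC > 9

for model in (halstead, walston_felix, bailey_basili, doty):
    print(f"{model.__name__:14s} {model(20.0):7.1f} person-months for 20 KLOC")
```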

Keywords: Effort Estimation, Neural-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model.

497 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent and ubiquitous, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performance of actual (physical) and virtual (cloud) multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, was evaluated. T-tests were run on the data collected in order to determine whether the differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems during different runs of the algorithms were statistically significant. A few pseudo-superlinear speedup results computed from the raw data are not true superlinear speedup values; these pseudo-superlinear values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups observed in the experiments performed.
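
As a reminder of the metrics and the significance test involved, the sketch below computes speedup and efficiency from hypothetical timings and runs an independent-samples t-test with SciPy; none of the numbers are from the paper.

```python
from scipy import stats

t_serial, t_parallel, cores = 120.0, 35.0, 4        # hypothetical timings in seconds
speedup = t_serial / t_parallel
efficiency = speedup / cores
print(f"speedup = {speedup:.2f}, efficiency = {efficiency:.2f}")

actual_runs  = [34.8, 35.2, 35.1, 34.9, 35.0]       # hypothetical repeated measurements
virtual_runs = [70.1, 69.8, 70.4, 70.0, 69.9]
t_stat, p_value = stats.ttest_ind(actual_runs, virtual_runs)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")       # small p: difference is significant
```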

Keywords: Cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation.

496 A Consideration of the Achievement of Productive Level Parallel Programming Skills

Authors: Tadayoshi Horita, Masakazu Akiba, Mina Terauchi, Tsuneo Kanno

Abstract:

This paper considers the achievement of productive-level parallel programming skills, based on data from the graduation studies at the Polytechnic University of Japan. The data show that most students can achieve parallel programming skills during the graduation study (about 600 to 700 hours) only if the programming environment is limited to GPGPUs. However, the data also show that achieving productive-level parallel programming skills within the graduation study alone is a very demanding task. In addition, they suggest that parallel programming environments for GPGPUs, such as CUDA and OpenCL, may be more suitable for parallel computing education than other environments such as MPI on a cluster system and the Cell.B.E. These results should be useful not only for software development, but also for hardware product development using computer technologies.

Keywords: Parallel computing, programming education, GPU, GPGPU, CUDA, OpenCL, MPI, Cell.B.E.

495 Cloud Computing Security for Multi-Cloud Service Providers: Controls and Techniques in our Modern Threat Landscape

Authors: Sandesh Achar

Abstract:

Cloud computing security is a broad term that covers a variety of security concerns for organizations that use cloud services. Multi-cloud service providers must consider several factors when addressing security for their customers, including identity and access management, data at rest and in transit, egress and ingress traffic control, vulnerability and threat management, and auditing. This paper explores each of these aspects of cloud security in detail and provides recommendations for best practices for multi-cloud service providers. It also discusses the challenges inherent in securing a multi-cloud environment and offers solutions for overcoming these challenges. By the end of this paper, readers should have a good understanding of the various security concerns associated with multi-cloud environments in the context of today’s modern cyber threats and how to address them.

Keywords: Multi-cloud service, SOC, System and Organization Controls, data loss prevention, DLP, identity and access management, IAM.

494 The Effect of Increment in Simulation Samples on a Combined Selection Procedure

Authors: Mohammad H. Almomani, Rosmanjawati Abdul Rahman

Abstract:

Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure consists of a combination of Ranking and Selection and Ordinal Optimization procedures. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also examine the effect of increasing the number of simulation samples in the combined procedure. The results of a numerical illustration clearly show the effect of increasing the simulation samples on the proposed combined selection procedure.
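
For orientation, the sketch below applies the standard OCBA allocation rule (replications for non-best designs proportional to (sigma_i / delta_i)^2) to hypothetical sample statistics; the combined Ranking-and-Selection / Ordinal Optimization procedure itself is not reproduced, and the formula should be checked against the OCBA literature.

```python
import math

def ocba_ratios(means, stds, best):
    """Relative replication ratios per design; design `best` has the largest mean."""
    ratios = {}
    for i, (m, s) in enumerate(zip(means, stds)):
        if i == best:
            continue
        delta = means[best] - m
        ratios[i] = (s / delta) ** 2                 # N_i proportional to (sigma/delta)^2
    ref = min(ratios)                                # normalize against one non-best design
    scale = ratios[ref]
    for i in ratios:
        ratios[i] /= scale
    ratios[best] = stds[best] * math.sqrt(sum((ratios[i] / stds[i]) ** 2
                                              for i in ratios if i != best))
    return ratios

means = [9.0, 8.2, 7.5, 6.9]        # hypothetical sample means (larger is better)
stds  = [1.0, 1.2, 0.9, 1.1]
print(ocba_ratios(means, stds, best=0))
```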

Keywords: Indifference-Zone, Optimal Computing Budget Allocation, Ordinal Optimization, Ranking and Selection, Subset Selection.

493 Parallel Double Splicing on Iso-Arrays

Authors: V. Masilamani, D.K. Sheena Christy, D.G. Thomas

Abstract:

Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images and, to study the generating power of the system, we define the class of languages it generates. In this paper an image is referred to as an array, and a primitive called an iso-array is used to synthesize the image/array. The operation is double splicing on iso-arrays; the double splicing operation is used in DNA computing, and we use it here to synthesize images. A comparison is made between the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays and the existing family of local iso-picture languages. Certain closure properties, such as union, concatenation and rotation, are studied for the family of languages generated by the proposed model.

Keywords: DNA computing, splicing system, iso-picture languages, iso-array double splicing system, iso-array self splicing.

492 Trust Management for Pervasive Computing Environments

Authors: Denis Trcek

Abstract:

Trust is essential for the further and wider acceptance of contemporary e-services. It was first addressed almost thirty years ago in the Trusted Computer System Evaluation Criteria standard by the US DoD, but this and other proposed approaches of that period were actually addressing security. Roughly ten years ago, methodologies followed that addressed the trust phenomenon at its core; they were based on Bayesian statistics and its derivatives, while some approaches were based on game theory. However, trust is a manifestation of judgment and reasoning processes; it has to be dealt with in accordance with this fact and adequately supported in cyber environments. On the basis of results in the field of psychology and our own findings, a methodology called qualitative algebra has been developed, which deals with so far overlooked elements of the trust phenomenon. It complements existing methodologies and provides a basis for a practical technical solution that supports the management of trust in contemporary computing environments. Such a solution is also presented at the end of this paper.

Keywords: internet security, trust management, multi-agent systems, reasoning and judgment, modeling and simulation, qualitative algebra

491 A Distributed Approach to Extract High Utility Itemsets from XML Data

Authors: S. Kannimuthu, K. Premalatha

Abstract:

This paper investigates a new data mining capability that entails mining High Utility Itemsets (HUIs) in a distributed environment. Existing research in data mining deals only with the presence or absence of items and does not consider semantic measures such as the weight or cost of items; thus, HUI mining algorithms have evolved. HUI mining is one kind of utility mining, which aims to identify itemsets whose utility satisfies a given threshold. However, mining HUIs in a distributed environment, and mining them from XML data, have not been explored yet. In this work, a novel approach is proposed to mine HUIs from XML-based data in a distributed environment. The work utilizes the Service Oriented Computing (SOC) paradigm, which provides Knowledge as a Service (KaaS); the interesting patterns are provided via web services, with the help of a knowledge server, to answer the queries of consumers. The performance of the approach is evaluated on various databases using execution time and memory consumption.
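
To make the notion concrete, the sketch below computes itemset utility (quantity times unit profit, summed over supporting transactions) against a hypothetical threshold; the distributed, XML and KaaS machinery described in the abstract is not reproduced.

```python
profits = {"a": 5, "b": 2, "c": 1}            # external utility (unit profit)
transactions = [                              # item -> purchased quantity
    {"a": 1, "b": 3},
    {"a": 2, "c": 4},
    {"b": 1, "c": 2},
]

def utility(itemset, db, profit):
    total = 0
    for t in db:
        if all(i in t for i in itemset):      # transaction supports the itemset
            total += sum(t[i] * profit[i] for i in itemset)
    return total

threshold = 10                                # hypothetical minimum utility
for candidate in ({"a"}, {"a", "b"}, {"b", "c"}):
    u = utility(candidate, transactions, profits)
    print(sorted(candidate), u, "HUI" if u >= threshold else "below threshold")
```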

Keywords: Data mining, Knowledge as a Service, service oriented computing, utility mining.

490 Design and Implementation of Shared Memory based Parallel File System Logging Method for High Performance Computing

Authors: Hyeyoung Cho, Sungho Kim, SangDong Lee

Abstract:

I/O workload is a critical and important factor for analyzing I/O patterns and file system performance. However, tracing I/O operations on the fly in a distributed parallel file system is non-trivial due to the collection overhead and the large volume of data. In this paper, we design and implement a parallel file system logging method for high performance computing using a shared-memory-based multi-layer scheme. It minimizes overhead with reduced logging-operation response time and provides an efficient post-processing scheme through shared memory. A separate logging server can collect sequential logs from multiple clients in a cluster through packet communication. The implementation and evaluation results show the low overhead and high scalability of this architecture for high performance parallel logging analysis.

Keywords: I/O workload, PVFS, I/O Trace.

489 A Timed and Colored Petri Nets for Modeling and Verifying Cloud System Elasticity

Authors: W. Louhichi, M. Berrima, N. Ben Rajeb Robbana

Abstract:

Elasticity is an essential property of cloud computing. As the name suggests, it constitutes the ability of a cloud system to adjust resource provisioning in relation to fluctuating workloads. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, we adopt scaling in when resources are over-supplied and scaling out when they are under-supplied. In this paper, we propose a formal model, based on timed and colored Petri nets (TdCPNs), for modeling the duplication and removal of a virtual machine from a server. The model is based on the formal Petri net (PN) modeling language. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed PNs.
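
Outside the formal model, the horizontal-elasticity decision being verified can be summarized as a simple threshold controller; the sketch below is such a controller with hypothetical thresholds and bounds, and it does not reproduce the TdCPN model.

```python
def elasticity_controller(load, vms, high=0.8, low=0.3, vm_min=1, vm_max=10):
    """load: average utilization of the current VMs, in [0, 1]."""
    if load > high and vms < vm_max:
        return vms + 1            # scaling out: duplicate a virtual machine
    if load < low and vms > vm_min:
        return vms - 1            # scaling in: remove a virtual machine
    return vms                    # otherwise keep the current pool size

print(elasticity_controller(load=0.9, vms=3))   # -> 4 (scale out)
print(elasticity_controller(load=0.2, vms=3))   # -> 2 (scale in)
```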

Keywords: Cloud computing, elasticity, elasticity controller, Petri nets, scaling in, scaling out.

488 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Moses Noel Dogonyaro

Abstract:

This paper describes the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it therefore addresses the aspiration for a computational encryption model that could enhance the security of big data with respect to users' privacy, confidentiality and availability. The cryptographic model applied for the computational processing of the encrypted data is a fully homomorphic encryption scheme. We contribute a theoretical presentation of the high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, together with detailed theoretical mathematical concepts for the fully homomorphic encryption models. This contribution supports the full implementation of a cryptographic security algorithm for big data analytics.
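
The property being exploited can be illustrated with textbook RSA, which is homomorphic for multiplication only (and thus weaker than the fully homomorphic scheme discussed): multiplying ciphertexts and then decrypting yields the product of the plaintexts. Toy key sizes, for illustration only.

```python
n, e, d = 3233, 17, 413        # toy RSA key: p = 61, q = 53, d = e^-1 mod lcm(60, 52)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 9
c = (enc(m1) * enc(m2)) % n    # computed on ciphertexts, without decrypting
print(dec(c), m1 * m2)         # both print 63
```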

Keywords: Data Analytics, Security, Privacy, Bootstrapping, and Fully Homomorphic Encryption Scheme.

487 Automatic Light Control in Domotics using Artificial Neural Networks

Authors: Carlos Machado, José A. Mendes

Abstract:

Home automation is a field that, among other subjects, is concerned with the comfort, security and energy requirements of private homes. The configuration of automatic functions in this type of house is not always simple for its inhabitants, requiring an initial setup and regular adjustments. In this work, the ubiquitous computing system vision is used: the users' action patterns are captured, recorded and used to create the context-awareness that allows the self-configuration of the home automation system. The system tries to free the users from setup adjustments as the home adapts to its inhabitants' real habits. This paper describes a completely automated process to determine the light states and act on them, taking into account the users' daily habits. An Artificial Neural Network (ANN) is used as a pattern recognition method, classifying the light state at each moment. The work presented uses data from a real house where a family is actually living.
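
As a stand-in for the described pattern-recognition step, the sketch below trains a small neural network to predict a light's on/off state from two hypothetical context features (hour of day and occupancy); the real dataset and feature set from the instrumented house are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
hours = rng.integers(0, 24, 500)
occupied = rng.integers(0, 2, 500)
X = np.column_stack([hours, occupied])
y = ((occupied == 1) & ((hours >= 18) | (hours <= 6))).astype(int)   # synthetic evening habit

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[20, 1], [14, 1], [20, 0]]))   # expected roughly [1, 0, 0]
```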

Keywords: ANN, Home Automation, Neural Systems, Pattern Recognition, Ubiquitous Computing.

486 Cloud Computing Support for Diagnosing Researches

Authors: A. Amirov, O. Gerget, V. Kochegurov

Abstract:

One of the main biomedical problems lies in detecting dependencies in semi-structured data. The solution includes a biomedical portal and algorithms (integral rating health criteria and multidimensional data visualization methods). The biomedical portal allows diagnostic and research data to be processed in parallel using Microsoft System Center 2012 and Windows HPC Server cloud technologies. The service does not allow the user to see the internal calculations; instead, it provides a practical interface. When data is sent for processing, the user may track the status of the task and will receive the results as soon as the computation is completed. The service includes its own algorithms and allows diagnosing and predicting medical cases. The approved methods are based on complex-system entropy methods, algorithms for determining the energy patterns of development, trajectory models of biological systems, and a logical-probabilistic approach with the blurring of images.

Keywords: Biomedical portal, cloud computing, diagnostic and prognostic research, mathematical data analysis.

485 Design and Implementation of Security Middleware for Data Warehouse Signature Framework

Authors: Mayada AlMeghari

Abstract:

Recently, grid middleware has provided large-scale integrated use of network resources, such as shared data and CPUs, to form a virtual supercomputer. In this work, we present the design and implementation of the middleware for the Data Warehouse Signature (DWS) framework. The aim of using the middleware in the proposed DWS framework is to achieve high performance through parallel computing. The middleware is developed on the Alchemi.Net framework and increases security among the network nodes through an authentication and group-key distribution model. This model achieves key security and prevents intermediate attacks on the middleware. The paper presents the flow process structures of the middleware design. In addition, it covers the implementation of security for the DWS middleware, enhanced with the authentication and group-key distribution model. Finally, from an analysis of other middleware approaches, the developed DWS middleware is shown to provide the most complete coverage of the security issues considered.

Keywords: Middleware, parallel computing, data warehouse, security, group-key, high performance.

484 Parallel and Distributed Mining of Association Rule on Knowledge Grid

Authors: U. Sakthi, R. Hemalatha, R. S. Bhuvaneswaran

Abstract:

In a virtual organization, a Knowledge Discovery (KD) service comprises distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. The case study in this paper describes the design and implementation of local and global mining of frequent itemsets. Experiments were conducted on different configurations of the grid network, and the computation time was recorded for each operation. We analyzed the results for various grid configurations, and they show that the speedup in computation time is almost superlinear.
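
For reference, the sketch below is the sequential Apriori core (candidate generation plus support counting) that such a system distributes across grid nodes; the toy transaction database is hypothetical and the MPICH-G2 parallelization is not shown.

```python
def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    frequent = []
    k_sets = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    while k_sets:
        frequent.extend(k_sets)
        candidates = {a | b for a in k_sets for b in k_sets if len(a | b) == len(a) + 1}
        k_sets = [c for c in candidates if support(c) >= min_support]
    return frequent

db = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
for itemset in apriori(db, min_support=0.5):
    print(sorted(itemset))
```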

Keywords: Association rule, Grid computing, Knowledge grid, Mobility prediction.

483 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan

Abstract:

High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from EM simulations is often cumbersome, leading to large storage requirements. In this paper, we propose a spherical harmonic based anisotropic point scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least squares problem with a special sparsity constraint, which we solve using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
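
For context, the sketch below is plain Orthogonal Matching Pursuit on a random sparse recovery problem; the paper's modified OMP, its special sparsity constraint and the spherical-harmonic dictionary are not reproduced.

```python
import numpy as np

def omp(A, y, n_nonzero):
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))        # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef               # project out chosen atoms
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.flatnonzero(omp(A, y, 3)))    # typically recovers the support [5, 42, 77]
```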

Keywords: RADAR, RCS, high performance computing, point scatterer model
