Search results for: pervasive computing.
602 Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment
Authors: M. Bohlouli, M. Analoui
Abstract:
To fully support Quality of Service, it is preferable that the Grid computing environment itself predicts the resource requirements of a job using dedicated methods. Exact and correct prediction enables accurate matching of required resources with available resources. After the execution of each job, the resources it used are saved in an active database named "History". First, selected attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History" and the resource requirements are predicted using statistical methods such as linear regression or averaging. The novelty of this research lies in the use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture yield prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.
Keywords: Active Database, Grid Computing, Resource Requirement Prediction, Scheduling.
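A minimal sketch of the history-based prediction idea described in the abstract above, assuming hypothetical job attributes, a nearest-neighbour similarity measure, and simple averaging over the most similar executed jobs; the paper's actual similarity algorithm and active-database integration are not reproduced here.

import numpy as np

# Hypothetical "History" records: job attributes plus the resources actually used.
# The attribute fields are illustrative, not taken from the Grid-HPA paper.
history = [
    {"attrs": np.array([2.0, 512.0, 1.0]), "cpu": 0.82, "mem": 430.0, "bw": 12.5},
    {"attrs": np.array([4.0, 1024.0, 2.0]), "cpu": 1.65, "mem": 900.0, "bw": 20.1},
    {"attrs": np.array([2.0, 600.0, 1.0]), "cpu": 0.90, "mem": 470.0, "bw": 13.0},
]

def predict_requirements(job_attrs, k=2):
    """Find the k most similar executed jobs and average their resource usage."""
    ranked = sorted(history, key=lambda rec: np.linalg.norm(rec["attrs"] - job_attrs))
    nearest = ranked[:k]
    return {key: float(np.mean([rec[key] for rec in nearest]))
            for key in ("cpu", "mem", "bw")}

print(predict_requirements(np.array([3.0, 700.0, 1.0])))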
601 An Algorithm for Computing the Analytic Singular Value Decomposition
Authors: Drahoslava Janovska, Vladimir Janovsky, Kunio Tanabe
Abstract:
A proof of convergence of a new continuation algorithm for computing the Analytic SVD of a large sparse parameter-dependent matrix is given. The algorithm itself was developed and numerically tested in [5].
Keywords: Analytic Singular Value Decomposition, large sparse parameter-dependent matrices, continuation algorithm of a predictor-corrector type.
600 Fast 3D Collision Detection Algorithm using 2D Intersection Area
Authors: Taehyun Yoon, Keechul Jung
Abstract:
There is considerable research on detecting collisions between real and virtual objects in 3D space. In general, these techniques require substantial computing power, so many of them are built on cloud computing, network computing, or distributed computing. For this reason, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses four cameras and a coarse-to-fine method to improve both the accuracy and the speed of collision detection. In the coarse step, the system examines the intersection area between the silhouettes of the real and virtual objects in every camera view. The result of this step is the set of indices of virtual sensors that may be involved in a collision in 3D space. To decide on collisions accurately, the fine step performs collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can benefit many other research and application fields such as HCI, augmented reality, and intelligent spaces.
Keywords: Collision Detection, Computer Vision, Human Computer Interaction, Visual Hull
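A minimal sketch of the coarse step described in the abstract above, assuming binary silhouette masks are already available for each camera view (silhouette extraction is not specified in the abstract); a virtual sensor is kept as a collision candidate only if its silhouette overlaps the real object's silhouette in every view, and the fine visual-hull step is not shown.

import numpy as np

def intersection_area(real_mask, virtual_mask):
    """Number of overlapping silhouette pixels in one camera view."""
    return int(np.logical_and(real_mask, virtual_mask).sum())

def coarse_collision_candidates(real_views, virtual_sensor_views):
    """real_views: list of binary masks, one per camera.
    virtual_sensor_views: {sensor_index: list of binary masks, one per camera}."""
    candidates = []
    for idx, sensor_views in virtual_sensor_views.items():
        if all(intersection_area(r, v) > 0 for r, v in zip(real_views, sensor_views)):
            candidates.append(idx)
    return candidates  # indices to be checked in the fine (visual hull) step

# Tiny example with two "cameras" and 8x8 masks.
blank = lambda: np.zeros((8, 8), dtype=bool)
real = [blank(), blank()]
real[0][2:5, 2:5] = True
real[1][3:6, 3:6] = True
sensor0 = [blank(), blank()]
sensor0[0][4:6, 4:6] = True
sensor0[1][4:6, 4:6] = True
print(coarse_collision_candidates(real, {0: sensor0}))  # -> [0]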
599 Exploring the Combinatorics of Motif Alignments for Accurately Computing E-values from P-values
Authors: T. Kjosmoen, T. Ryen, T. Eftestøl
Abstract:
In biological and biomedical research, motif finding tools are important for locating regulatory elements in DNA sequences. There are many such motif finding tools available, and they often yield position weight matrices and significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process, and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, making them not directly comparable. One approach for comparing motifs from different tools is to compute the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for computing the number of possible combinations accurately. We also show that using the wrong alignment model can give E-values that diverge significantly from their true values.
Keywords: Motif alignment, combinatorics, p-value, E-value, OOPS, ZOOPS, ANR.
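A minimal sketch of the E-value computation mentioned in the abstract above, assuming the usual alignment counts for the OOPS model (exactly one occurrence per sequence) and the ZOOPS model (zero or one occurrence per sequence); the paper's exact combinatorial treatment, in particular for the ANR model, is not reproduced here.

def num_alignments_oops(seq_lengths, motif_width):
    """OOPS: each sequence of length L contributes (L - w + 1) possible positions."""
    count = 1
    for length in seq_lengths:
        count *= max(length - motif_width + 1, 0)
    return count

def num_alignments_zoops(seq_lengths, motif_width):
    """ZOOPS: as OOPS, plus one 'motif absent' option per sequence."""
    count = 1
    for length in seq_lengths:
        count *= max(length - motif_width + 1, 0) + 1
    return count

def e_value(p_value, num_alignments):
    """E-value as the product of the p-value and the number of possible alignments."""
    return p_value * num_alignments

lengths = [200, 180, 220]
print(e_value(1e-9, num_alignments_oops(lengths, 12)))
print(e_value(1e-9, num_alignments_zoops(lengths, 12)))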
598 The Challenges and Solutions for Developing Mobile Apps in a Small University
Authors: Greg Turner, Bin Lu, Cheer-Sun Yang
Abstract:
As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, using a mobile app for the PA Common Trees, Pests, and Pathogens in the field as a reference tool allows middle school students to learn about trees and associated pests and pathogens without carrying a textbook. While working on the development of three heterogeneous mobile apps, we ran into numerous challenges. Both the traditional waterfall model and the more modern agile methodologies failed in practice. The waterfall model emphasizes planning the duration of each phase; when those durations are not consistent with the availability of developers, the waterfall model cannot be employed. When applying agile methodologies, we could not maintain the high frequency of the iterative development review process known as ‘sprints’. In this paper, we discuss these challenges and our solutions. We propose a hybrid model, the Relay Race Methodology (RRM), to reflect the concept of racing and relaying during the software development process in practice. Based on the development project, we observe that the relay-race transition between any two phases arises naturally. Thus, we claim that the RRM can provide a de facto rather than a de jure basis for the core concept of the software development model. In this paper, the background of the project is introduced first. Then, the challenges are pointed out, followed by our solutions. Finally, the lessons learned and future work are presented.
Keywords: Agile methods, mobile apps, software process model, waterfall model.
597 Implementation of Watch Dog Timer for Fault Tolerant Computing on Cluster Server
Authors: Meenakshi Bheevgade, Rajendra M. Patrikar
Abstract:
In today's technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) to compute. Although parallelization speeds up computation, the time required by many applications can still be considerable. Thus, the reliability of the cluster becomes a very important issue, and implementing a fault-tolerance mechanism becomes essential. The difficulty of designing a fault-tolerant cluster system increases with the variety of possible failures. The most important consideration is that an algorithm which handles a simple failure in a system must also tolerate more severe failures. In this paper, we implement the watchdog timer concept in a parallel environment to take care of failures. Implementing this simple algorithm in our project helps us handle different types of failures; consequently, we found that the reliability of the cluster improves.
Keywords: Cluster, Fault tolerant, Grid, Grid Computing System, Meta-computing.
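A minimal sketch of a software watchdog timer of the kind described in the abstract above, assuming a hypothetical heartbeat interval and a placeholder recovery callback; the paper's cluster-level integration is not reproduced here.

import threading
import time

class WatchdogTimer:
    """Calls on_failure if kick() is not received within `timeout` seconds."""
    def __init__(self, timeout, on_failure):
        self.timeout = timeout
        self.on_failure = on_failure
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self.timeout, self.on_failure)
        self._timer.daemon = True
        self._timer.start()

    def kick(self):
        """Reset the countdown; the monitored node calls this as a heartbeat."""
        self._timer.cancel()
        self.start()

    def stop(self):
        self._timer.cancel()

# Hypothetical usage: a worker sends heartbeats; a missed heartbeat triggers recovery.
wd = WatchdogTimer(timeout=1.0,
                   on_failure=lambda: print("node unresponsive: start recovery"))
wd.start()
for _ in range(3):
    time.sleep(0.5)
    wd.kick()      # heartbeat arrives in time
time.sleep(1.5)    # heartbeat missed, so on_failure fires
wd.stop()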
596 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics
Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood
Abstract:
We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
Keywords: Cloud computing, cybersecurity, digital forensics, Kafka, Kubernetes, Spark.
595 Computing Visibility Subsets in an Orthogonal Polyhedron
Authors: Jefri Marzal, Hong Xie, Chun Che Fung
Abstract:
Visibility problems are central to many computational geometry applications. One of the typical visibility problems is computing the view from a given point. In this paper, a linear time procedure is proposed to compute the visibility subsets from a corner of a rectangular prism in an orthogonal polyhedron. The proposed algorithm could be useful to solve classic 3D problems.
Keywords: Visibility, rectangular prism, orthogonal polyhedron.
594 Dynamic Load Balancing Strategy for Grid Computing
Authors: Belabbas Yagoubi, Yahya Slimani
Abstract:
Workload and resource management are two essential functions provided at the service level of the grid software infrastructure. To improve the global throughput of these software environments, workloads have to be evenly scheduled among the available resources. To realize this goal, several load balancing strategies and algorithms have been proposed. Most strategies were developed with a homogeneous set of sites in mind, linked by homogeneous and fast networks. However, for computational grids we must address new key issues, namely heterogeneity, scalability, and adaptability. In this paper, we propose a layered algorithm which achieves dynamic load balancing in grid computing. Based on a tree model, our algorithm presents the following main features: (i) it is layered; (ii) it supports heterogeneity and scalability; and (iii) it is totally independent of any physical grid architecture.
Keywords: Grid computing, load balancing, workload, tree based model.
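A minimal sketch of the layered, tree-based balancing idea described in the abstract above, assuming a simple two-level tree in which a parent node evens out work units among its direct children; the paper's actual thresholds, grid model and multi-level propagation are not reproduced here.

class Node:
    def __init__(self, name, load=0, children=None):
        self.name = name
        self.load = load                  # current number of work units
        self.children = children or []

def balance_children(parent, imbalance_threshold=2):
    """Layered step: a parent evens out load among its direct children only."""
    kids = parent.children
    if not kids:
        return
    target = sum(c.load for c in kids) // len(kids)
    overloaded = [c for c in kids if c.load > target + imbalance_threshold]
    underloaded = [c for c in kids if c.load < target]
    for src in overloaded:
        for dst in underloaded:
            while src.load > target and dst.load < target:
                src.load -= 1             # move one work unit at a time
                dst.load += 1

site = Node("site", children=[Node("n1", 12), Node("n2", 1), Node("n3", 2)])
balance_children(site)
print([(c.name, c.load) for c in site.children])  # roughly even loads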
593 Performance Evaluation of Data Transfer Protocol GridFTP for Grid Computing
Authors: Hiroyuki Ohsaki, Makoto Imase
Abstract:
In Grid computing, a data transfer protocol called GridFTP has been widely used for efficiently transferring large volumes of data. Currently, two versions of the GridFTP protocol, GridFTP version 1 (GridFTP v1) and GridFTP version 2 (GridFTP v2), have been proposed in the GGF. GridFTP v2 supports several advanced features, such as data streaming, dynamic resource allocation, and checksum transfer, by defining a transfer mode called X-block mode. However, in the literature, the effectiveness of GridFTP v2 has not been fully investigated. In this paper, we therefore quantitatively evaluate the performance of GridFTP v1 and GridFTP v2 using mathematical analysis and simulation experiments. We reveal the performance limitation of GridFTP v1 and quantitatively show the effectiveness of GridFTP v2. Through several numerical examples, we show that, by utilizing the data streaming feature, the average file transfer time of GridFTP v2 is significantly smaller than that of GridFTP v1.
Keywords: Grid Computing, GridFTP, Performance Evaluation, Queuing Theory.
592 Accelerating the Uptake of Smart City Applications through Cloud Computing
Authors: Panagiotis Tsarchopoulos, Nicos Komninos, Christina Kakderi
Abstract:
Smart cities are high on the political agenda around the globe. However, planning smart cities and deploying applications that deal with the complex problems of the urban environment is a very challenging task that is difficult for cities to undertake on their own. We argue that the uptake of smart city strategies is facilitated, first, through the development of smart city application repositories allowing re-use of already developed and tested software, and, second, through cloud computing, which frees city authorities from resource constraints, technical or financial, and has a higher impact and greater effect at the city level. The combination of these two solutions allows city governments and municipalities to select and deploy a large number of applications dedicated to different city functions, which collectively could create a multiplier effect with a greater impact on the urban environment.
Keywords: Smart cities, applications, cloud computing, migration to the cloud, application repositories.
591 Investigation of the Physical Computing in Computational Thinking Practices, Computer Programming Concepts and Self-Efficacy for Crosscutting Ideas in STEM Content Environments
Authors: Sarantos Psycharis
Abstract:
Physical Computing, as an instructional model, is applied in the framework of Engineering Pedagogy to teach “transversal/cross-cutting ideas” in a STEM content approach. Labview and Arduino were used to connect the physical world with real data in the framework of the so-called Computational Experiment. Tertiary prospective engineering educators were engaged during their course, and Computational Thinking (CT) concepts were registered before and after the intervention across didactic activities, using validated questionnaires on the relationship between self-efficacy, computer programming, and CT concepts when STEM content epistemology is implemented in alignment with the Computational Pedagogy model. Results show a significant change in students’ responses for self-efficacy for CT before and after the instruction. Results also indicate a significant relation between the responses to the different CT concepts/practices. According to the findings, STEM content epistemology combined with Physical Computing should be a good candidate as a learning and teaching approach in university settings that enhances students’ engagement in CT concepts/practices.
Keywords: STEM, computational thinking, physical computing, Arduino, Labview, self-efficacy.
590 Memristor: A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems
Authors: Juhi Faridi, Mohd. Ajmal Kafeel
Abstract:
The advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, with computational power and learning ability similar to the brain's, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but the conductance behavior of the synaptic or neuronal memristor needs to be studied thoroughly in order to connect the neuroscience and computer science perspectives. Furthermore, more simulation work is needed to make use of existing device properties and to guide the development of future devices for different performance requirements. This work aims to provide insight into building neuronal circuits using memristors to achieve a memristor-based NCS. Here we shed light on the research conducted in the field of memristors for building analog and digital circuits, in order to motivate research on NCS by building memristor-based neural circuits for advanced AI applications. This literature review is a step in that direction: we describe the key findings about memristors and the analog and digital circuits implemented with them over the years, which can be further utilized in implementing the neuronal circuits of an NCS. This work aims to help electronic circuit designers understand how research on memristors has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in NCS.
Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.
589 Reducing Power Consumption in Cloud Platforms using an Effective Mechanism
Authors: Shuen-Tai Wang, Chin-Hung Li, Ying-Chuan Chen
Abstract:
In recent years there has been a renewal of interest in the relation between Green IT and cloud computing. The growing use of computers in cloud platforms has caused marked energy consumption, putting pressure on the electricity costs of cloud data centers. This paper proposes an effective mechanism to reduce energy utilization in cloud computing environments. We present initial work on the integration of resource and power management aimed at reducing power consumption. Our mechanism relies on recalling virtualization services dynamically according to users' virtualization requests and temporarily shutting down physical machines once they finish, in order to conserve energy. Given the estimated energy consumption, this proposed effort has the potential to positively impact power consumption. The experimental results show that energy can indeed be saved by powering off idle physical machines in cloud platforms.
Keywords: Green IT, Cloud Computing, virtualization, power consumption.
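A minimal sketch of the idea above of powering off physical machines left without virtual machines, assuming a hypothetical host inventory and a placeholder power-off hook; the paper's actual resource and power management integration is not shown.

# Hypothetical inventory: physical machine -> virtual machines currently hosted on it.
hosts = {
    "pm-01": ["vm-a", "vm-b"],
    "pm-02": [],
    "pm-03": ["vm-c"],
}

def power_off(host):
    """Placeholder for the real power-management call; just logs here."""
    print(f"powering off idle host {host}")

def release_vm(vm_name):
    """When a user's virtualization request finishes, remove the VM and
    shut down any host left without VMs in order to conserve energy."""
    for host, vms in hosts.items():
        if vm_name in vms:
            vms.remove(vm_name)
            if not vms:
                power_off(host)
            return

release_vm("vm-c")   # pm-03 becomes idle and is powered off
print(hosts)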
588 Social Anthropology of Convergence and Nomadic Computing
Authors: Emilia Nercissians
Abstract:
The paper attempts to contribute, on the one hand, to the largely neglected social and anthropological discussion of technology development and, on the other, to redirecting the emphasis in anthropology from primitive and exotic societies to problems of high relevance in the contemporary era and to how technology is used in everyday life. It draws upon multidimensional models of intelligence and ideal type formation. It is argued that the predominance of computational and cognitive cosmovisions has led to technology alienation. The injection of communicative competence into artificially intelligent systems and identity technologies in the coming information society is analyzed.
Keywords: convergence, nomadic computing, solidarity, status.
587 MICOSim: A Simulator for Modelling Economic Scheduling in Grid Computing
Authors: Mohammad Bsoul, Iain Phillips, Chris Hinde
Abstract:
This paper is concerned with the design and implementation of MICOSim, an event-driven simulator written in Java for evaluating the performance of Grid entities (users, brokers and resources) under different scenarios such as varying the numbers of users, resources and brokers and varying their specifications and employed strategies.
Keywords: Grid computing, Economic Scheduling, Simulation, Event-Driven, Java.
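A minimal sketch of the event-driven simulation core that a tool such as MICOSim is built around, assuming a priority queue of timestamped events; MICOSim itself is written in Java, and its actual user, broker and resource models are not reproduced here.

import heapq

class Simulator:
    """Tiny discrete-event engine: events are (time, sequence, callback) tuples."""
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0   # tie-breaker so simultaneous events keep insertion order

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Hypothetical scenario: a user submits a job to a broker, which assigns a resource.
sim = Simulator()

def job_done():
    print(f"{sim.now:5.1f}: resource finished the job")

def broker_dispatch():
    print(f"{sim.now:5.1f}: broker assigned the job to a resource")
    sim.schedule(10.0, job_done)

def user_submit():
    print(f"{sim.now:5.1f}: user submitted a job to the broker")
    sim.schedule(2.0, broker_dispatch)

sim.schedule(0.0, user_submit)
sim.run()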
586 Cloud Monitoring and Performance Optimization Ensuring High Availability and Security
Authors: Inayat Ur Rehman, Georgia Sakellari
Abstract:
Cloud computing has evolved into a vital technology for businesses, offering scalability, flexibility, and cost-effectiveness. However, maintaining high availability and optimal performance in the cloud is crucial for reliable services. This paper explores the significance of cloud monitoring and performance optimization in sustaining the high availability of cloud-based systems. It discusses diverse monitoring tools, techniques, and best practices for continually assessing the health and performance of cloud resources. The paper also delves into performance optimization strategies, including resource allocation, load balancing, and auto-scaling, to ensure efficient resource utilization and responsiveness. Addressing potential challenges in cloud monitoring and optimization, the paper offers insights into data security and privacy considerations. Through this thorough analysis, the paper aims to underscore the importance of cloud monitoring and performance optimization for ensuring a seamless and highly available cloud computing environment.
Keywords: Cloud computing, cloud monitoring, performance optimization, high availability.
585 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: Digitalization, digital transformation, lean production, Industrie 4.0, value chain.
584 New Hybrid Algorithm for Task Scheduling in Grid Computing to Decrease Missed Tasks
Authors: Z. Pooranian, A. Harounabadi, M. Shojafar, N. Hedayat
Abstract:
The purpose of Grid computing is to utilize the computational power of idle resources distributed across different areas. Given the grid's dynamism and its decentralized resources, an efficient scheduler is needed for scheduling applications. Since task scheduling is an NP-hard problem, much research has focused on heuristic algorithms, especially genetic ones. However, the genetic algorithm searches the problem space globally and does not have the efficiency required for local search, so combining it with local search algorithms can compensate for this shortcoming. The aim of this paper is to combine the genetic algorithm with Gravitational Emulation Local Search (GAGELS) as a method to solve the scheduling problem while simultaneously paying attention to two factors: completion time and the number of missed tasks. Results show that the proposed algorithm can decrease the makespan while minimizing the number of missed tasks compared with traditional methods.
Keywords: Grid Computing, Genetic Algorithm, Gravitational Emulation Local Search (GELS), missed task.
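A minimal sketch of the two-objective view taken in the abstract above, in which candidate schedules are judged on both makespan and the number of missed tasks; the weighted fitness, the random search standing in for the genetic and GELS operators, and the task data are illustrative assumptions, not the paper's actual GAGELS algorithm.

import random

tasks = [(5, 20), (3, 15), (8, 30), (2, 10)]   # (duration, deadline) pairs
resource_speeds = [1.0, 0.5]                   # relative speed of each resource

def evaluate(assignment, w_makespan=1.0, w_missed=10.0):
    """Lower is better: weighted sum of makespan and number of missed deadlines."""
    finish = [0.0] * len(resource_speeds)
    missed = 0
    for (duration, deadline), r in zip(tasks, assignment):
        finish[r] += duration / resource_speeds[r]
        if finish[r] > deadline:
            missed += 1
    return w_makespan * max(finish) + w_missed * missed

def random_schedule():
    return [random.randrange(len(resource_speeds)) for _ in tasks]

# Crude random search standing in for the GA combined with local search.
best = min((random_schedule() for _ in range(200)), key=evaluate)
print(best, evaluate(best))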
583 Accrual Based Scheduling for Cloud in Single and Multi Resource System: Study of Three Techniques
Authors: R. Santhosh, T. Ravichandran
Abstract:
This paper evaluates accrual-based scheduling for the cloud in single and multi-resource systems. Numerous organizations benefit from cloud computing by hosting their applications. The cloud model provides on-demand access to computing with potentially unlimited resources. Scheduling is the mapping of tasks to resources according to a certain optimality principle; it schedules tasks to virtual machines in accordance with adaptable time, in sequence, under transaction logic constraints. A good scheduling algorithm improves CPU utilization, turnaround time, and throughput. In this paper, three real-time cloud service scheduling algorithms for single and multiple resources are investigated. Experimental results show the performance of the resource matching algorithm to be superior for both single- and multi-resource scheduling when compared to the benefit-first, migration, and checkpoint algorithms.
Keywords: Cloud computing, Scheduling, Migration, Checkpoint, Resource Matching.
582 Development of Web-Based Remote Desktop to Provide Adaptive User Interfaces in Cloud Platform
Authors: Shuen-Tai Wang, Hsi-Ya Chang
Abstract:
As cloud virtualization technologies become more and more prevalent, cloud users often encounter the problem of how to access virtualized remote desktops easily over the web without requiring the installation of special clients. To resolve this issue, we took advantage of HTML5 technology and developed a web-based remote desktop. It permits users to access a terminal running in our cloud platform from anywhere. We implemented a sketch of the web interface following the cloud computing concept, which seeks to enable collaboration and communication among users for high performance computing. Given the development of remote desktop virtualization, the user's desktop can be shifted from the traditional PC environment to the cloud platform, where it is stored on a remote virtual machine rather than locally. This proposed effort has the potential to positively provide an efficient, resilient and elastic environment for online cloud services. This is also made possible by the low administrative costs as well as relatively inexpensive end-user terminals and reduced energy expenses.
Keywords: Virtualization, Remote Desktop, HTML5, Cloud Computing.
581 Optimum Performance Measures of Interdependent Queuing System with Controllable Arrival Rates
Authors: S. S. Mishra
Abstract:
In this paper, an attempt is made to compute the total optimal cost of an interdependent queuing system with controllable arrival rates as an important performance measure of the system. An example application is also presented to exhibit the use of the model. Finally, a numerical demonstration based on a computing algorithm, together with the variational effects of the model illustrated with the help of graphs, is presented.
Keywords: Computing, Controllable arrival rate, Optimum performance measure.
580 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia
Authors: Tim Nedyalkov
Abstract:
A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments make information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment Notifiable Data Breaches (NDB) Act 2017 in Australia as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success. The factors influence the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers to meet the rapidly changing regulatory and compliance requirements.
Keywords: Cloud compliance, cloud security, cloud security governance, data governance, privacy protection.
579 Agent Decision using Granular Computing in Traffic System
Authors: Yasser F. Hassan, Marwa Abdeen, Mustafa Fahmy
Abstract:
In recent years, multi-agent systems have emerged as one of the interesting architectures facilitating distributed collaboration and distributed problem solving. Each node (agent) of the network might pursue its own agenda, exploit its environment, develop its own problem solving strategy and establish required communication strategies. Within each node of the network, one could encounter a diversity of problem-solving approaches. Quite commonly, the agents can realize their processing at the level of information granules that is most suitable from their local points of view. Information granules can come at various levels of granularity. Each agent could exploit a certain formalism of information granulation, engaging a machinery of fuzzy sets, interval analysis, or rough sets, to name just a few dominant technologies of granular computing. With this in mind, a fundamental issue arises of forming effective interaction linkages between the agents so that they fully broadcast their findings and benefit from interacting with others.
Keywords: Granular computing, rough sets, agents, traffic system.
578 Enabling Remote Desktop in a Virtualized Environment for Cloud Services
Authors: Shuen-Tai Wang, Yu-Ching Lin, Hsi-Ya Chang
Abstract:
Cloud computing is the innovative and leading information technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. This paper presents our development on enabling an individual user's desktop in a virtualized environment, which is stored on a remote virtual machine rather than locally. We present the initial work on the integration of virtual desktop and application sharing with virtualization technology. Given the development of remote desktop virtualization, this proposed effort has the potential to positively provide an efficient, resilience and elastic environment for online cloud service. Users no longer need to burden the cost of software licenses and platform maintenances. Moreover, this development also helps boost user productivity by promoting a flexible model that lets users access their desktop environments from virtually anywhere.
Keywords: Cloud Computing, Virtualization, Virtual Desktop, Elastic Environment.
577 Organizational Data Security in Perspective of Ownership of Mobile Devices Used by Employees for Works
Authors: B. Ferdousi, J. Bari
Abstract:
With the advancement of mobile computing, employees increasingly do their job-related work using personally owned mobile devices or organization-owned devices. The Bring Your Own Device (BYOD) model allows employees to use their own mobile devices for job-related work, while the Corporate Owned, Personally Enabled (COPE) model allows both organizations and employees to install applications onto organization-owned mobile devices used for job-related work. While there are many benefits of using mobile computing for job-related work, there are also serious concerns about different levels of threat to organizational data security. Consequently, it is crucial to know the level of threat to organizational data security in the BYOD and COPE models. It is also important to ensure that employees comply with the organizational data security policy. This paper discusses organizational data security issues from the perspective of the ownership of mobile devices used by employees, especially in the BYOD and COPE models. It appears that while the BYOD model has many benefits, there are relatively more data security risks in this model than in the COPE model. The findings also show that in both BYOD and COPE environments, a more practical approach towards achieving secure mobile computing in an organizational setting is the development of comprehensive cybersecurity policies balancing employees' need for convenience with organizational data security. The study helps to figure out the compliance requirements and the risks of security breach in the BYOD and COPE models.
Keywords: Data security, mobile computing, BYOD, COPE, cybersecurity policy, cybersecurity compliance.
576 C@sa: Intelligent Home Control and Simulation
Authors: Berardina De Carolis, Giovanni Cozzolongo
Abstract:
In this paper, we present C@sa, a multiagent system aimed at modeling, controlling and simulating the behavior of an intelligent house. The developed system aims at providing architects, designers and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily lives. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents, interacting to support the user's goals and tasks.
Keywords: Ambient intelligence, agent-based systems, influence diagrams.
575 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped in four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
574 The Contemporary Visual Spectacle — Critical Visual Literacy
Authors: Lai-Fen Yang
Abstract:
In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial, image-saturated visual cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. This paper addresses themes regarding our experience of a visually pervasive, mediated culture, here termed the visual spectacle.
Keywords: Visual culture, contemporary, visual spectacle.
573 VLSI Design of 2-D Discrete Wavelet Transform for Area-Efficient and High-Speed Image Computing
Authors: Mountassar Maamoun, Mehdi Neggazi, Abdelhamid Meraghni, Daoud Berkani
Abstract:
This paper presents a VLSI design approach for high-speed, real-time 2-D Discrete Wavelet Transform computing. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity in addition to reducing the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle. As a result, a very high speed is attained. The system is verified, using JPEG2000 coefficient filters, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 M samples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) with a 256×256 image size. In this way, the developed design requires less memory and provides very high-speed processing as well as high PSNR quality.
Keywords: Discrete Wavelet Transform (DWT), Fast Convolution, FPGA, VLSI.
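A minimal software sketch of one level of the separable 2-D DWT that the hardware above computes, using Haar filters for brevity rather than the (9,7) JPEG2000 filters mentioned in the abstract; rows are filtered and downsampled, then columns, giving the usual LL/LH/HL/HH subbands.

import numpy as np

# Haar analysis filters, used here as an illustrative stand-in for the (9,7) pair.
LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)
HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)

def analyze_rows(x, filt):
    """Convolve each row with filt and downsample by two."""
    return np.apply_along_axis(lambda r: np.convolve(r, filt, mode="full")[1::2], 1, x)

def dwt2_level(image):
    low = analyze_rows(image, LOW)
    high = analyze_rows(image, HIGH)
    ll = analyze_rows(low.T, LOW).T
    lh = analyze_rows(low.T, HIGH).T
    hl = analyze_rows(high.T, LOW).T
    hh = analyze_rows(high.T, HIGH).T
    return ll, lh, hl, hh

image = np.random.rand(256, 256)
print([band.shape for band in dwt2_level(image)])   # four 128x128 subbands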