Search results for: computing time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18531

18381 Data Stream Association Rule Mining with Cloud Computing

Authors: B. Suraj Aravind, M. H. M. Krishna Prasad

Abstract:

There exist emerging applications of data streams that require association rule mining, such as network traffic monitoring, web click stream analysis, sensor data, and data from satellites. Data streams typically arrive continuously, at high speed, in huge volumes, and with changing data distributions. This raises new issues that need to be considered when developing association rule mining techniques for stream data. This paper introduces an improved data stream association rule mining algorithm that removes resource limitations by drawing on cloud computing. This inclusion may introduce further, as yet unknown problems that need additional research.
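
To make the windowed counting such an algorithm builds on concrete, below is a minimal Python sketch of frequent-pair mining over a fixed-size sliding window of transactions. It illustrates the general technique only, not the paper's algorithm; the window size, minimum support, and function names are assumptions.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs_in_window(window, min_support):
    """Count item pairs over a sliding window of stream transactions.

    window      : the most recent transactions kept in memory
    min_support : minimum number of windowed transactions a pair
                  must occur in to be reported as frequent
    """
    counts = Counter()
    for transaction in window:
        for pair in combinations(sorted(set(transaction)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Newest transactions stay in the window; expired ones are dropped upstream.
window = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "d"}]
print(frequent_pairs_in_window(window, min_support=3))  # {('a', 'b'): 3}
```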

Keywords: data stream, association rule mining, cloud computing, frequent itemsets

Procedia PDF Downloads 492
18380 Going Horizontal: Confronting the Challenges When Transitioning to Cloud

Authors: Harvey Hyman, Thomas Hull

Abstract:

As one of the largest cancer treatment centers in the United States, we continuously confront the challenge of how to leverage the best possible technological solutions in order to provide the highest quality of service to our customers – the doctors, nurses and patients at Moffitt who are fighting every day for the prevention and cure of cancer. This paper reports on the transition from a vertical to a horizontal IT infrastructure. We discuss how new frameworks and methods, such as public, private and hybrid clouds and the brokering of cloud services, are replacing the traditional vertical paradigm for computing. We also report on the impact of containers, microservices, and the shift to continuous integration/continuous delivery. These impacts and changes in delivery methodology for computing are driving how we accomplish our strategic IT goals across the enterprise.

Keywords: cloud computing, IT infrastructure, IT architecture, healthcare

Procedia PDF Downloads 374
18379 A Study on How to Link BIM Services to Cloud Computing Architecture

Authors: Kim Young-Jin, Kim Byung-Kon

Abstract:

Although more effort has been made in recent years than ever to expand the application of BIM (Building Information Modeling) technologies, various challenges remain, including a lack or absence of relevant institutions, the high cost of building BIM-related infrastructure, and incompatible processes. This, in turn, has delayed the expansion of their application for longer than expected at an early stage. In particular, attempts to reduce the cost of building BIM-related infrastructure and to provide various BIM services compatible with domestic processes include studies linking BIM and cloud computing technologies. In this study, the author develops a cloud BIM service operation model by analyzing the level of BIM application in the construction sector and deriving the relevant service areas, and examines how to link BIM services to the cloud operation model, archiving BIM data and creating a revenue structure so that BIM services may grow organically while taking the demand for cloud resources into account.

Keywords: construction IT, BIM (building information modeling), cloud computing, BIM service based cloud computing

Procedia PDF Downloads 481
18378 DNA PLA: A Nano-Biotechnological Programmable Device

Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Md. Istiak Jaman Ami, Rahat Hossain Faisal

Abstract:

Computing in biomolecular programming is performed through different types of reactions. Proteins and nucleic acids are used to store the information generated by biomolecular programming. DNA (Deoxyribonucleic Acid) can be used to build a molecular computing system and operating system owing to its predictable molecular behavior. DNA devices have clear advantages over conventional devices when applied to problems that can be divided into separate, non-sequential tasks, because DNA strands can hold so much data in memory and conduct multiple operations at once, thus solving decomposable problems much faster. A Programmable Logic Array (PLA) is a programmable device with a programmable AND plane feeding a programmable OR plane. In this paper, a DNA PLA is designed by different molecular operations using DNA molecules with the proposed algorithms. The molecular PLA could take advantage of DNA's physical properties to store information and perform calculations, including extremely dense information storage, enormous parallelism, and extraordinary energy efficiency.
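
As a point of reference for the AND/OR structure being realized in DNA, here is a minimal sketch of two-level PLA evaluation in ordinary Boolean logic; the plane programming shown is illustrative and is not taken from the paper.

```python
def pla_evaluate(inputs, and_plane, or_plane):
    """Evaluate a two-level PLA: a programmable AND plane feeding an OR plane.

    inputs    : dict of input variable -> 0/1
    and_plane : list of product terms, each a dict var -> required value
                (a variable absent from a term is a don't-care)
    or_plane  : list of output rows, each a set of product-term indices
    """
    products = [all(inputs[v] == val for v, val in term.items())
                for term in and_plane]
    return [int(any(products[i] for i in row)) for row in or_plane]

# f0 = a·b + (not a)·c, f1 = a·b  (illustrative programming of the planes)
and_plane = [{"a": 1, "b": 1}, {"a": 0, "c": 1}]
or_plane = [{0, 1}, {0}]
print(pla_evaluate({"a": 1, "b": 0, "c": 1}, and_plane, or_plane))  # [0, 0]
```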

Keywords: biological systems, DNA computing, parallel computing, programmable logic array, PLA, DNA

Procedia PDF Downloads 116
18377 Navigating Cyber Attacks with Quantum Computing: Leveraging Vulnerabilities and Forensics for Advanced Penetration Testing in Cybersecurity

Authors: Sayor Ajfar Aaron, Ashif Newaz, Sajjat Hossain Abir, Mushfiqur Rahman

Abstract:

This paper examines the transformative potential of quantum computing in the field of cybersecurity, with a focus on advanced penetration testing and forensics. It explores how quantum technologies can be leveraged to identify and exploit vulnerabilities more efficiently than traditional methods and how they can enhance the forensic analysis of cyber-attacks. Through theoretical analysis and practical simulations, this study highlights the enhanced capabilities of quantum algorithms in detecting and responding to sophisticated cyber threats, providing a pathway for developing more resilient cybersecurity infrastructures.
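
One concrete, widely cited source of the claimed efficiency gain is Grover's search, which finds a marked item among N candidates in on the order of the square root of N oracle queries rather than N. The back-of-the-envelope comparison below for brute-force key search is illustrative only and is not drawn from the paper's simulations.

```python
import math

def brute_force_queries(key_bits):
    # Classical exhaustive search: about N/2 guesses on average, N = 2**bits.
    return 2 ** key_bits / 2

def grover_queries(key_bits):
    # Grover's algorithm: about (pi/4) * sqrt(N) oracle calls.
    return (math.pi / 4) * math.sqrt(2 ** key_bits)

for bits in (40, 64, 128):
    print(f"{bits}-bit key: classical ~{brute_force_queries(bits):.2e}, "
          f"Grover ~{grover_queries(bits):.2e}")
```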

Keywords: cybersecurity, cyber forensics, penetration testing, quantum computing

Procedia PDF Downloads 42
18376 Method and Apparatus for Optimized Job Scheduling in the High-Performance Computing Cloud Environment

Authors: Subodh Kumar, Amit Varde

Abstract:

Typical on-premises high-performance computing (HPC) environments consist of a fixed number and a fixed set of computing hardware. During the design of the HPC environment, the hardware components, including but not limited to CPU, memory, GPU, and networking, are carefully chosen from select vendors for optimal performance. The high capital cost of building the environment is a prime factor influencing its design. A class of software called “job schedulers” is critical to maximizing these resources and running multiple workloads to extract the maximum value from the high capital cost. In principle, schedulers work by queuing jobs, preventing workloads and users from monopolizing the finite hardware resources. A cloud-based HPC environment does not have the limitations of fixed (type and quantity of) hardware resources: in theory, users and workloads could spin up any number and type of hardware resource. This paper discusses the limitations of using traditional scheduling algorithms for cloud-based HPC workloads. It proposes a new set of features, called “HPC optimizers,” for maximizing the benefits of the elasticity and scalability of the cloud, with the goal of cost-performance optimization of the workload.
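
As a toy illustration of the cost-performance decision such an "HPC optimizer" might make once fixed capacity is no longer a constraint, here is a sketch that picks the cheapest instance type satisfying a job's requirements; the catalog, names, and prices are invented.

```python
def pick_instance(job_cpus, job_mem_gb, catalog):
    """Pick the cheapest instance type that satisfies a job's requirements.

    catalog: list of (name, cpus, mem_gb, dollars_per_hour) tuples.
    Returns the cheapest fitting row, or None if nothing fits.
    """
    fitting = [row for row in catalog
               if row[1] >= job_cpus and row[2] >= job_mem_gb]
    return min(fitting, key=lambda row: row[3], default=None)

# Invented catalog; a real optimizer would also weigh spot pricing,
# queue depth, data locality, and per-job performance profiles.
catalog = [
    ("small",  4,   16, 0.20),
    ("medium", 16,  64, 0.80),
    ("large",  64, 256, 3.10),
]
print(pick_instance(job_cpus=8, job_mem_gb=32, catalog=catalog))
```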

Keywords: high performance computing, HPC, cloud computing, optimization, schedulers

Procedia PDF Downloads 81
18375 WhatsApp as Part of a Blended Learning Model to Help Programming Novices

Authors: Tlou J. Ramabu

Abstract:

Programming is one of the challenging subjects in the field of computing. In the higher education sphere, some programming novices’ performance, retention rate, and success rate are not improving. Most of the time, the problem is caused by the slow pace of learning, difficulty in grasping the syntax of the programming language, and poor logical skills. More importantly, programming forms part of major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss a WhatsApp group as part of a blended learning model for a group of programming novices. We discuss possible administrative activities for productive utilisation of the WhatsApp group within the blended learning model. The aim is to take advantage of the popularity of WhatsApp, and the time students spend on it, for educational purposes. We believe that blended learning featuring a WhatsApp group may ease novices’ cognitive load and strengthen their foundational programming knowledge and skills. This is a work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.

Keywords: blended learning, higher education, WhatsApp, programming, novices, lecturers

Procedia PDF Downloads 164
18374 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Currently, in analyzing large-scale recurrent event data, there are many challenges, such as memory limitations and computing time that does not scale. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method, and the approach is applied to a large real dataset of repeated heart failure hospitalizations.
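
A minimal numeric sketch of the combine step, assuming each subset yields a point estimate and an estimated variance; the inverse-variance weighting shown here is one natural choice of weights and is not necessarily the paper's exact weighting.

```python
import numpy as np

def combine_estimates(estimates, variances):
    """Combine per-subset MLEs by inverse-variance weighting.

    Returns the pooled estimate and its approximate variance; the pooled
    estimate plays the role of the full-data estimator it approximates.
    """
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return pooled, 1.0 / np.sum(w)

# Example: five subsets of a large dataset, each fitted independently.
est = np.array([1.02, 0.97, 1.05, 0.99, 1.01])
var = np.array([0.04, 0.05, 0.03, 0.04, 0.05])
print(combine_estimates(est, var))
```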

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 154
18373 Loan Repayment Prediction Using Machine Learning: Model Development, Django Web Integration and Cloud Deployment

Authors: Seun Mayowa Sunday

Abstract:

Loan prediction is one of the most significant and recognised fields of research in the banking, insurance, and financial security industries. Some prediction systems on the market are built as static software; because static software operates only within strictly regulated rules, it cannot aid customers beyond those limitations. Loan prediction therefore calls for machine learning (ML) techniques. Four separate machine learning models, random forest (RF), decision tree (DT), k-nearest neighbour (KNN), and logistic regression, are used to create the loan prediction model. Using the Anaconda Navigator and the required ML libraries, the models are created and evaluated with the appropriate measuring metrics. From the findings, the random forest performs with the highest accuracy, 80.17%, and was then integrated into the Django framework. For real-time testing, the web application is deployed on Alibaba Cloud, one of the four largest cloud computing providers. Hence, to the best of our knowledge, this research will serve as the first academic paper that combines the model development and the Django framework with deployment to the Alibaba cloud computing platform.
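
A minimal sketch of the model-comparison step using scikit-learn; the synthetic feature matrix stands in for a real loan dataset, and the hyperparameters are library defaults rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder standing in for engineered loan-application features.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbour": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {accuracy:.3f}")
```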

Keywords: k-nearest neighbor, random forest, logistic regression, decision tree, django, cloud computing, alibaba cloud

Procedia PDF Downloads 119
18372 Parallel Computing: Offloading Matrix Multiplication to GPU

Authors: Bharath R., Tharun Sai N., Bhuvan G.

Abstract:

This project focuses on developing a parallel computing method aimed at optimizing matrix multiplication through GPU acceleration. Addressing algorithmic challenges, GPU programming intricacies, and integration issues, the project aims to enhance efficiency and scalability. The methodology involves algorithm design, GPU programming, and optimization techniques. Future plans include advanced optimizations, extended functionality, and integration with high-level frameworks. User engagement is emphasized through user-friendly interfaces, open-source collaboration, and continuous refinement based on feedback. The project's impact extends to significantly improving matrix multiplication performance in scientific computing and machine learning applications.
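
A minimal sketch of offloading a matrix product to the GPU from Python using CuPy; this is one convenient route (a hand-written CUDA kernel is the lower-level alternative the project alludes to), and the matrix size here is arbitrary.

```python
import numpy as np
import cupy as cp  # requires a CUDA-capable GPU and the cupy package

n = 2048
a_host = np.random.rand(n, n).astype(np.float32)
b_host = np.random.rand(n, n).astype(np.float32)

# Copy operands to device memory, multiply on the GPU, copy the result back.
a_dev, b_dev = cp.asarray(a_host), cp.asarray(b_host)
c_dev = a_dev @ b_dev                      # dispatched to a cuBLAS GEMM kernel
cp.cuda.Stream.null.synchronize()          # wait for the kernel to finish
c_host = cp.asnumpy(c_dev)

print(np.allclose(c_host, a_host @ b_host, atol=1e-2))
```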

Keywords: matrix multiplication, parallel processing, cuda, performance boost, neural networks

Procedia PDF Downloads 41
18371 A Study on How to Develop the Usage Metering Functions of BIM (Building Information Modeling) Software under Cloud Computing Environment

Authors: Kim Byung-Kon, Kim Young-Jin

Abstract:

As project opportunities for the Architecture, Engineering and Construction (AEC) industry have grown more complex and larger, the utilization of BIM (Building Information Modeling) technologies for 3D design and simulation practices has been increasing significantly; typical applications of BIM technologies include clash detection and design alternatives based on 3D planning, and they have since been extended to construction management in the AEC industry for virtual design and construction. Until now, commercial BIM software has been operated in a single-user environment, which is why the initial costs of its introduction are very high. Cloud computing, one of the most promising next-generation Internet technologies, enables simple Internet devices to use the services and resources provided with BIM software. Recently in Korea, studies linking BIM and cloud computing technologies have been directed toward saving the costs of building BIM-related infrastructure and providing various BIM services for small- and medium-sized enterprises (SMEs). This study addresses how to develop the usage metering functions of BIM software under a cloud computing architecture in order to archive and use BIM data and create an optimal revenue structure, so that BIM services may grow organically in line with the demand for cloud resources. To this end, the author surveyed relevant cases and then analyzed needs and requirements from the AEC industry. Based on the results and findings of the foregoing survey and analysis, the author proposes how to optimally develop the usage metering functions of cloud BIM software.
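
To make the metering idea concrete, here is a sketch that wraps each billable BIM service call so per-user usage is accumulated; the service name and the accounting rule (wall-clock seconds) are invented for illustration.

```python
import time
from collections import defaultdict

usage_log = defaultdict(float)  # (user, service) -> accumulated seconds

def metered(service_name):
    """Decorator recording per-user wall-clock usage of a BIM service call."""
    def wrap(func):
        def inner(user, *args, **kwargs):
            start = time.perf_counter()
            try:
                return func(user, *args, **kwargs)
            finally:
                usage_log[(user, service_name)] += time.perf_counter() - start
        return inner
    return wrap

@metered("clash_detection")
def run_clash_detection(user, model_id):
    time.sleep(0.1)  # stand-in for the real BIM computation
    return f"clash report for {model_id}"

run_clash_detection("alice", "bridge-A")
print(dict(usage_log))  # {('alice', 'clash_detection'): ~0.1}
```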

Keywords: construction IT, BIM (Building Information Modeling), cloud computing, BIM-based cloud computing, 3D design, cloud BIM

Procedia PDF Downloads 497
18370 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling

Authors: Fahad Y. Al-dawish

Abstract:

A large number of organizations and individuals are currently moving to cloud computing, which many consider a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and uptake of cloud computing infrastructure, diverse computing processes can be executed in cloud environments, and many organizations and individuals around the world depend on them to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve and optimize it. A good task scheduler should adapt its scheduling technique to changing environments and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, which we call ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a stochastic optimization search method used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. After analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
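
A compact sketch of the ACO idea applied to task-to-VM assignment, with pheromone trails over (task, VM) pairs and a finish-time heuristic; all parameter values are illustrative, and the update rule is a simplification of what a full ACO-TS implementation would use.

```python
import random

def aco_schedule(task_lengths, vm_speeds, ants=20, iters=50,
                 alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Tiny ACO sketch: assign each task to a VM so makespan is minimized."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    tau = [[1.0] * n_vms for _ in range(n_tasks)]        # pheromone trails
    best, best_makespan = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load, assign = [0.0] * n_vms, []
            for t, length in enumerate(task_lengths):
                # Heuristic: prefer the VM that would finish this task soonest.
                eta = [1.0 / (load[v] + length / vm_speeds[v])
                       for v in range(n_vms)]
                weights = [tau[t][v] ** alpha * eta[v] ** beta
                           for v in range(n_vms)]
                v = rng.choices(range(n_vms), weights=weights)[0]
                load[v] += length / vm_speeds[v]
                assign.append(v)
            if max(load) < best_makespan:
                best, best_makespan = assign, max(load)
        for t in range(n_tasks):                          # evaporate, then
            for v in range(n_vms):                        # reinforce the best
                tau[t][v] *= 1 - rho
            tau[t][best[t]] += 1.0 / best_makespan
    return best, best_makespan

print(aco_schedule([8, 4, 6, 2, 9, 3], [1.0, 2.0]))
```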

Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing

Procedia PDF Downloads 412
18369 A Type-2 Fuzzy Model for Link Prediction in Social Network

Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi

Abstract:

Predicting links that may occur in the future, and missing links, in social networks is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and Type-2 fuzzy logic to predict links with regard to nodes’ activity and the relationship between two nodes. Our model is tested on collaboration networks. It is found that the accuracy of prediction is significantly higher than that of Type-1 fuzzy and crisp approaches.

Keywords: social network, link prediction, granular computing, type-2 fuzzy sets

Procedia PDF Downloads 316
18368 Improved Multi-Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation

Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta

Abstract:

Recently, nature-inspired algorithms have come into widespread use for tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One major application of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results show that the proposed optimization algorithm has superior performance compared to existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth, and computation time.
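
The defining constraint the search must satisfy is easy to state: a Golomb ruler is a set of marks whose pairwise differences are all distinct. Below is a minimal checker of that property, usable as the feasibility test inside any such metaheuristic; the rulers shown are illustrative.

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A Golomb ruler has all pairwise mark differences distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler([0, 1, 4, 9, 11]))  # True: a known optimal 5-mark ruler
print(is_golomb_ruler([0, 1, 2, 4]))      # False: differences 1 and 2 repeat
```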

Keywords: channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal

Procedia PDF Downloads 310
18367 The On-Board Critical Message Transmission Design for Navigation Satellite Delay/Disruption Tolerant Network

Authors: Ji-yang Yu, Dan Huang, Guo-ping Feng, Xin Li, Lu-yuan Wang

Abstract:

The navigation satellite network, especially the Beidou MEO constellation, can relay data effectively with wide coverage and is widely applied in navigation, detection, and positioning. But the constellation has not been completed, and the number of satellites in orbit is not yet enough to cover the earth, which leaves data relay disrupted or delayed during transmission. The data-relay function needs to tolerate delay or disruption to some extent, which makes the Beidou MEO constellation a delay/disruption-tolerant network (DTN). Traditional DTN designs mainly employ the relay table as the basis of data-path schedule computation. But in practical application, especially in critical conditions such as wartime or heavy losses inflicted on the constellation, some of the nodes may become invalid, and the traditional DTN design could then be useless. Furthermore, when transmitting a critical message in the navigation system, the maximum-priority strategy is used, but the nodes still query the relay table to construct the path, which pushes the delay beyond minutes. Under these circumstances, a function is needed that can compute the optimal data path on board in real time according to the constellation state. An on-board critical message transmission design for the navigation satellite delay/disruption-tolerant network is therefore proposed, according to the characteristics of the navigation satellite network. With real-time computation of the parameters of the network links, the least-delay transmission path is deduced to retransmit the critical message in urgent conditions. First, the DTN model for the constellation is established based on the time-varying matrix (TVM) instead of the time-varying graph (TVG); then, the least-delay transmission path is deduced with the parameters of the current node; at last, the critical message transits to the next best node. Because the computation is performed on board in real time, the time delay and misjudgments of constellation states at ground stations are eliminated, and the residual information channel of each node can be used flexibly. Compared with the minutes of delay of a traditional DTN, the proposed design transmits the critical message in seconds, which improves retransmission efficiency. The hardware is implemented in an FPGA based on the proposed model, and tests prove its validity.
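
The core on-board computation amounts to a shortest-path search in which a link's delay depends on when it is entered. A minimal time-dependent Dijkstra sketch follows; it assumes FIFO links and illustrates the idea rather than the paper's TVM formulation.

```python
import heapq

def least_delay_path(links, src, dst, t0=0.0):
    """Time-dependent Dijkstra. links[u] is a list of (v, delay_fn) pairs,
    where delay_fn(t) is the link delay if the message enters it at time t.
    Assumes FIFO links (entering later never means arriving earlier)."""
    earliest = {src: t0}
    prev = {}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            break
        if t > earliest.get(u, float("inf")):
            continue
        for v, delay_fn in links.get(u, ()):
            arrival = t + delay_fn(t)
            if arrival < earliest.get(v, float("inf")):
                earliest[v], prev[v] = arrival, u
                heapq.heappush(pq, (arrival, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], earliest[dst]

links = {
    "A": [("B", lambda t: 1.0), ("C", lambda t: 5.0)],
    "B": [("C", lambda t: 0.5 if t < 2.0 else 4.0)],  # link degrades later
}
print(least_delay_path(links, "A", "C"))  # (['A', 'B', 'C'], 1.5)
```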

Keywords: critical message, DTN, navigation satellite, on-board, real-time

Procedia PDF Downloads 336
18366 Recent Developments in the Application of Deep Learning to Stock Market Prediction

Authors: Shraddha Jain Sharma, Ratnalata Gupta

Abstract:

Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area and its prominent features, and the significant problems that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.

Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume

Procedia PDF Downloads 81
18365 Hierarchical Queue-Based Task Scheduling with CloudSim

Authors: Wanqing You, Kai Qian, Ying Qian

Abstract:

The concepts of cloud computing provide users with infrastructure, platform, and software as a service, making those services more accessible to people via the Internet. To better analyze the performance of cloud computing provisioning policies and resource allocation strategies, a toolkit named CloudSim has been proposed. With CloudSim, a cloud computing environment can be easily constructed by modelling and simulating cloud computing components such as datacenters, hosts, and virtual machines. A good scheduling strategy is the key to achieving load balancing among different machines and to improving the utilization of basic resources. Existing scheduling algorithms may work well in some presumed cases on a single machine; however, they are unable to make the best decision for the unforeseen future. In a real-world scenario, there are large numbers of tasks as well as several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm to schedule tasks with CloudSim by taking into account several parameters: the machines’ capacity, the priority of tasks, and the history log.
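
A minimal sketch of the multi-queue idea: tasks land in one of several priority queues, and dispatch serves the highest non-empty queue, placing the task on the machine with the most remaining capacity. The dispatch rule and class names are illustrative and are not CloudSim's API (CloudSim itself is a Java toolkit).

```python
import heapq

class HierarchicalScheduler:
    """Sketch of hierarchical queue-based scheduling: tasks enter one of
    several priority queues; dispatch pops from the highest non-empty
    queue and picks the machine with the most remaining capacity."""

    def __init__(self, levels=3):
        self.queues = [[] for _ in range(levels)]

    def submit(self, task, priority, length):
        heapq.heappush(self.queues[priority], (length, task))

    def dispatch(self, machines):
        # machines: dict of machine name -> remaining capacity
        for q in self.queues:            # highest priority first
            if q:
                length, task = heapq.heappop(q)
                best = max(machines, key=machines.get)
                machines[best] -= length
                return task, best
        return None

sched = HierarchicalScheduler()
sched.submit("t1", priority=0, length=5)
sched.submit("t2", priority=1, length=2)
print(sched.dispatch({"vm1": 10, "vm2": 7}))  # ('t1', 'vm1')
```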

Keywords: hierarchical queue, load balancing, CloudSim, information technology

Procedia PDF Downloads 413
18364 Performance Analysis of Elliptic Curve Cryptography Using Onion Routing to Enhance the Privacy and Anonymity in Grid Computing

Authors: H. Parveen Begam, M. A. Maluk Mohamed

Abstract:

Grid computing is an environment that allows sharing and coordinated use of diverse resources in dynamic, heterogeneous, and distributed environments using Virtual Organizations (VOs). Security is a critical issue due to the open nature of the wireless channels in grid computing, which requires three fundamental services: authentication, authorization, and encryption. Privacy and anonymity are important factors when communicating over a publicly spanned network like the web. To ensure a high level of security, we explore an extension of onion routing, used with dynamic token exchange, that protects the privacy and anonymity of individual identities. To improve the performance of encrypting the layers, elliptic curve cryptography is used. Compared to traditional cryptosystems like RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography) offers equivalent security with smaller key sizes, which results in faster computation, lower power consumption, and memory and bandwidth savings. This paper presents an estimation of the performance improvements of onion routing using ECC, as well as a comparison of the performance of RSA and ECC.
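
The layering itself can be shown in miniature. The sketch below uses symmetric Fernet keys from the Python cryptography package as stand-ins for the per-hop keys that would, in the paper's setting, be established via ECC; it demonstrates onion wrapping and unwrapping only.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# One key per relay hop, listed in path order (entry, middle, exit). In the
# paper's setting these would be derived from per-hop ECC key exchanges.
hop_keys = [Fernet.generate_key() for _ in range(3)]

def onion_wrap(message, keys):
    # Encrypt the innermost layer first, so the entry relay peels the outermost.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def onion_unwrap(blob, keys):
    for key in keys:                 # each relay removes exactly one layer
        blob = Fernet(key).decrypt(blob)
    return blob

packet = onion_wrap(b"query to grid service", hop_keys)
print(onion_unwrap(packet, hop_keys))  # b'query to grid service'
```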

Keywords: grid computing, privacy, anonymity, onion routing, ECC, RSA

Procedia PDF Downloads 388
18363 Automatic Queuing Model Applications

Authors: Fahad Suleiman

Abstract:

Queuing, in a medical system, is the process of moving patients in a specific sequence to a specific service according to the nature of the patients’ illness. The term scheduling stands for the process of computing a schedule; this may be done by a queuing-based scheduler. This paper focuses on the medical consultancy system, the different queuing algorithms that are used in healthcare systems to serve patients, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing the medical queue, one that can analyze the queue status and decide which patient to serve. The new queuing architecture model can switch between different scheduling algorithms according to the test results and the average waiting time. The main innovation of this work is that the average waiting time is taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
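
A toy sketch of the switching idea: compute the average waiting time under each candidate ordering and adopt whichever is lower. It assumes a single server and patients already waiting, which is a simplification of the paper's setting.

```python
def average_wait(service_times):
    """Average wait with one server and all patients already present:
    each patient waits for the total service time of those ahead."""
    wait = elapsed = 0.0
    for s in service_times:
        wait += elapsed
        elapsed += s
    return wait / len(service_times)

def pick_policy(service_times):
    """Return whichever ordering gives the lower average waiting time."""
    policies = {
        "FCFS": list(service_times),              # arrival order as given
        "shortest-first": sorted(service_times),  # shortest consultation first
    }
    return min(policies.items(), key=lambda kv: average_wait(kv[1]))

consult_minutes = [30, 5, 20, 10]
name, order = pick_policy(consult_minutes)
print(name, average_wait(order))  # shortest-first 13.75
```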

Keywords: queuing systems, queuing system models, scheduling algorithms, patients

Procedia PDF Downloads 340
18362 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect

Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy

Abstract:

A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible both to improve the quality of the solution and to reduce the computing expense. By contrast, any carefully designed GA can only balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as the base-level search, making a quick decision to direct the search towards the optimal region, and a local search method (a pattern search technique) is then employed to do the fine-tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generating units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
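
A minimal sketch of the hybrid scheme: a plain GA locates the promising region, then a coordinate pattern search fine-tunes the best individual. The toy non-smooth cost below stands in for a valve-point fuel-cost curve; the operators and parameters are illustrative, not the paper's.

```python
import math
import random

def pattern_search(f, x, step=0.5, tol=1e-4):
    """Coordinate pattern search: probe +/- step on each coordinate,
    halve the step whenever no probe improves the cost."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2
    return x, fx

def hybrid_ga(f, dim, bounds, pop=30, gens=40, seed=1):
    """A plain GA directs the search; pattern search does the fine-tuning."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[:pop // 2]
        P = elite + [
            [(a + b) / 2 + rng.gauss(0, 0.1)   # blend crossover + mutation
             for a, b in zip(rng.choice(elite), rng.choice(elite))]
            for _ in range(pop - len(elite))
        ]
    return pattern_search(f, min(P, key=f))

# Toy non-smooth cost standing in for a valve-point fuel-cost curve.
cost = lambda x: sum(xi * xi + 0.3 * abs(math.sin(5 * xi)) for xi in x)
print(hybrid_ga(cost, dim=2, bounds=(-3.0, 3.0)))
```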

Keywords: genetic algorithms, economic dispatch, pattern search

Procedia PDF Downloads 432
18361 Distributed Actor System for Traffic Simulation

Authors: Han Wang, Zhuoxian Dai, Zhe Zhu, Hui Zhang, Zhenyu Zeng

Abstract:

In traditional microscopic traffic simulation, various approaches have been suggested to implement single-agent behaviors such as lane changing and the intelligent driver model. However, when it comes to very large metropolitan areas, microscopic traffic simulation requires more resources and becomes time-consuming, so macroscopic traffic simulation aggregates trends of interest rather than individual vehicle traces. In this paper, we describe the architecture and implementation of an actor system for microscopic traffic simulation, which exploits the distributed architecture of modern-day cloud computing. The results demonstrate that our architecture achieves high performance and outperforms traditional microscopic software in all tasks. To the best of our knowledge, this is the first system that enables single-agent behavior in macroscopic traffic simulation. We thus believe it contributes a new type of system for traffic simulation, one that could provide individual vehicle behaviors at macroscopic scale.
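
A minimal single-process sketch of the actor idea: each actor owns a mailbox and a thread that processes one message at a time, so per-vehicle state needs no locks. A distributed deployment would place such actors across cluster nodes; the message format here is invented.

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox plus one thread that handles a
    single message at a time, so actor-local state needs no locks."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)  # sentinel: finish queued work, then exit
        self.thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                return
            self.handler(msg)

# A vehicle actor advances its own position on 'tick' messages.
position = {"car-1": 0.0}

def vehicle_handler(msg):
    vid, dt, speed = msg           # (vehicle id, seconds, metres/second)
    position[vid] += speed * dt

car = Actor(vehicle_handler)
for _ in range(10):
    car.send(("car-1", 0.1, 12.0))
car.stop()
print(position)  # {'car-1': ~12.0}
```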

Keywords: actor system, cloud computing, distributed system, traffic simulation

Procedia PDF Downloads 182
18360 Adaptive Certificate-Based Mutual Authentication Protocol for Mobile Grid Infrastructure

Authors: H. Parveen Begam, M. A. Maluk Mohamed

Abstract:

Mobile grid computing is an environment that allows sharing and coordinated use of diverse resources in a dynamic, heterogeneous, and distributed environment using different types of portable electronic devices. In a grid environment, security issues such as authentication, authorization, message protection, and delegation are handled by the GSI (Grid Security Infrastructure). Providing strong security between mobile devices and the grid infrastructure is a major issue because of the open nature of wireless networks and of heterogeneous, distributed environments. In a mobile grid environment, the individual computing devices may be resource-limited in isolation, but as an aggregated sum they have the potential to play a vital role. Some adaptive methodology or solution is needed to address issues such as the authentication of a base station, the security of information flowing between a mobile user and a base station, the prevention of attacks within a base station, the hand-over of authentication information, the communication cost of establishing a session key between a mobile user and a base station, and the computational complexity of achieving authenticity and security. The sharing of the devices’ resources can be achieved only through trusted relationships between the mobile hosts (MHs), so before accessing a grid service, a mobile device must be proven authentic. This paper proposes a dynamic certificate-based mutual authentication protocol between two mobile hosts in a mobile grid environment. Certificate generation is done by a CA (Certificate Authority) for all authenticated MHs. Security (through the validity period of the certificate) and dynamicity (transmission time) can be achieved through the secure service certificates. The authentication protocol is built on communication services to provide cryptographically secured mechanisms for verifying the identity of users and resources.
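
One direction of the challenge-response at the heart of such a protocol can be sketched with ECDSA from the Python cryptography package; in the paper's protocol, the public keys would be carried in CA-issued service certificates rather than generated in place as they are here.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec  # pip install cryptography

# Each mobile host holds an EC key pair; in the protocol described above,
# the public keys would be bound to host identities by CA-issued certificates.
host_a = ec.generate_private_key(ec.SECP256R1())
host_b = ec.generate_private_key(ec.SECP256R1())

# Host A challenges host B: B proves possession of its certified private key.
nonce = os.urandom(32)                                  # A's fresh challenge
signature = host_b.sign(nonce, ec.ECDSA(hashes.SHA256()))
host_b.public_key().verify(signature, nonce,
                           ec.ECDSA(hashes.SHA256()))   # raises if forged
print("B authenticated to A; run the same exchange in reverse for mutual auth")
```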

Keywords: mobile grid computing, certificate authority (CA), SSL/TLS protocol, secured service certificates

Procedia PDF Downloads 297
18359 Classification of Attacks Over Cloud Environment

Authors: Karim Abouelmehdi, Loubna Dali, Elmoutaoukkil Abdelmajid, Hoda Elsayed, Eladnani Fatiha, Benihssane Abderahim

Abstract:

The security of cloud services is the concern of cloud service providers. In this paper, we review the different classifications of cloud attacks put forward by specialized organizations; each agency has its own classification with well-defined properties. The purpose is to present a high-level classification of current research in cloud computing security, organized around attack strategies and their corresponding defenses.

Keywords: cloud computing, classification, risk, security

Procedia PDF Downloads 535
18358 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technology, both in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge is the motivation for this study, which examines the factors affecting computing students’ awareness of the latest Information and Communication Technologies (ICTs). The aim of this study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model to support the study, and the empirical testing of the designed model. The first objective is achieved by a review of the existing literature on technology adoption theories and models. The second objective is achieved using a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey are analyzed with the Statistical Package for the Social Sciences (SPSS) using descriptive statistics, ANOVA, and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and prior conditions of computing students and their awareness of general ICT trends and of the Digital Switch Over (DSO), a new technology involving the change from analog to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions considered in this study are students’ perceived exposure to career guidance and students’ perceived curriculum currency. The results confirm that gender, ethnicity, and having taken a high school computing course affect students’ perceived curriculum currency, while high school location affects students’ awareness of the DSO. The results also confirm that there is a relationship between students’ prior conditions and their awareness of general ICT trends and of the DSO in particular.

Keywords: education, information technologies, IDT, awareness

Procedia PDF Downloads 350
18357 Proposed Solutions Based on Affective Computing

Authors: Diego Adrian Cardenas Jorge, Gerardo Mirando Guisado, Alfredo Barrientos Padilla

Abstract:

A system based on affective computing can detect and interpret human information such as voice, facial expressions, and body movement to detect emotions and execute a corresponding response. This information is important because a person can communicate more effectively with emotions than with words alone, and it can be processed through technological components such as facial recognition, gait recognition, or gesture recognition. To date, solutions proposed using this technology consider only one component at a given moment. This research investigation proposes two solutions based on affective computing that take more than one component into account for emotion detection. The proposals reflect the levels of dependency between hardware devices and software, as well as the interaction process between the system and the user, which implies the development of scenarios where both proposals will be put to the test in a live environment. Both solutions are to be developed in code by software engineers to prove their feasibility. To validate the impact on society and business interest, interviews with stakeholders are conducted with an investment mindset, where each solution is rated on a scale of 1 through 5, with 1 being the minimum possible investment and 5 the maximum.

Keywords: affective computing, emotions, emotion detection, face recognition, gait recognition

Procedia PDF Downloads 356
18356 Arithmetic Operations in Deterministic P Systems Based on the Weak Rule Priority

Authors: Chinedu Peter, Dashrath Singh

Abstract:

Membrane computing is a computability model which abstracts its structures and functions from the biological cell. The main ingredient of membrane computing is the notion of a membrane structure, which consists of several cell-like membranes recurrently placed inside a unique skin membrane. The emergence of several variants of membrane computing gives rise to the notion of a P system. The paper presents a variant of P systems for arithmetic operations on non-negative integers based on weak priorities for rule application; consequently, we obtain deterministic P systems. Two membranes suffice, and there are at most four objects for multiplication and five objects for division throughout the computation. The model is simple and has the potential for extension to negative integers and real numbers in general.

Keywords: P system, binary operation, determinism, weak rule priority

Procedia PDF Downloads 440
18355 An Effective Route to Control of the Safety of Accessing and Storing Data in the Cloud-Based Database

Authors: Omid Khodabakhshi, Amir Rozdel

Abstract:

Cloud computing security research presents a number of challenges because data centers hold complex private information and constantly face risks of information disclosure through hacker attacks or internal adversaries. Accordingly, the security of virtual machines in the cloud computing infrastructure layer is very important. So far there are many software solutions for strengthening security in virtual machines, but software alone is not enough to solve security problems. The purpose of this article is to examine the challenges and security requirements for accessing and storing data in an insecure cloud environment. In other words, this article proposes a structure for running highly isolated, security-sensitive code using secure computing hardware in virtual environments. It also allows remote validation of code together with its inputs and outputs. These security features are provided even in situations where the BIOS, the operating system, and even the hypervisor are infected. To achieve these goals, we use the hardware support provided by recent Intel and AMD processors, as well as the TPM security chip. In conclusion, the use of these technologies creates a dynamic root of trust and reduces the TCB to the security-sensitive code.

Keywords: code, cloud computing, security, virtual machines

Procedia PDF Downloads 182
18354 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of mobile devices grows exponentially, it is estimated that about 50 billion devices will be connected to the Internet by the year 2020, with an average of eight connected devices per person worldwide expected by the end of this decade. These 50 billion devices are not only mobile phones and data-browsing gadgets, but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) has recently become one of the key emerging technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IEDs), and Distributed Energy Resources (DERs) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility company’s control and data centers cannot handle such a large number of devices, such high-speed processing, and such massive data storage. Building large data center infrastructure takes a long time and requires widespread communication networks and huge capital investment; maintaining and upgrading control and data center infrastructure and communication networks, as well as updating and renewing software licenses, collectively requires additional cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler replacing the legacy utility data centers. The talk will highlight the role of IoT and cloud computing services, and their deployment models, within smart grid technologies.

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 313
18353 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2) to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
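
In the same spirit as DFORC2's parallel file processing (though not its actual pipeline), the sketch below uses PySpark to hash a batch of files across whatever workers are available; the stand-in evidence files are generated locally so the example is self-contained.

```python
import hashlib
import tempfile
from pathlib import Path
from pyspark.sql import SparkSession  # pip install pyspark

def sha256_of(path):
    """Stream a file through SHA-256 and return (path, digest)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

# Stand-in "evidence" files; a real deployment would point at carved
# files from a disk image and run on a multi-node cluster.
tmp = Path(tempfile.mkdtemp())
paths = []
for i in range(4):
    p = tmp / f"file{i:04d}.bin"
    p.write_bytes(bytes([i]) * 1024)
    paths.append(str(p))

spark = SparkSession.builder.appName("evidence-hashing").getOrCreate()
for path, digest in spark.sparkContext.parallelize(paths).map(sha256_of).collect():
    print(path, digest)
```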

Keywords: digital forensics, cloud computing, cyber security, spark, Kubernetes, Kafka

Procedia PDF Downloads 384
18352 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal

Abstract:

Surfing is an increasingly popular sport whose performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides: computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the smartphone’s Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times, and their durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing; in particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
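
The first approach reduces to a threshold rule on GPS speed. A minimal sketch follows; the threshold values are illustrative, not the calibrated ones from the study.

```python
def detect_wave_rides(timestamps, speeds, start_speed=2.5, end_speed=1.0):
    """Velocity-threshold detection (the paper's first approach, sketched):
    a ride starts when GPS speed rises above start_speed (m/s) and ends
    when it falls below end_speed. Returns (start_time, duration) pairs."""
    rides, riding, t_start = [], False, None
    for t, v in zip(timestamps, speeds):
        if not riding and v >= start_speed:
            riding, t_start = True, t
        elif riding and v <= end_speed:
            rides.append((t_start, t - t_start))
            riding = False
    return rides

speeds = [0.5, 0.8, 3.1, 4.0, 3.6, 0.7, 0.4, 2.9, 3.3, 0.6]
print(detect_wave_rides(list(range(len(speeds))), speeds))  # [(2, 3), (7, 2)]
```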

Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance

Procedia PDF Downloads 394