Search results for: hyperdimensional computing
729 Design and Implementation of Security Middleware for Data Warehouse Signature Framework
Authors: Mayada Al Meghari
Abstract:
Recently, grid middleware has enabled the large-scale integrated use of network resources, such as shared data and CPUs, to form a virtual supercomputer. In this work, we present the design and implementation of the middleware for the Data Warehouse Signature (DWS) Framework. The aim of using middleware in our DWS framework is to achieve high performance through parallel computing. This middleware is developed on the Alchemi.Net framework to increase security among the network nodes through an authentication and group-key distribution model. This model secures the keys and prevents man-in-the-middle attacks on the middleware. This paper presents the flow-process structures of the middleware design. In addition, the paper describes the implementation of security for the DWS middleware, enhanced with the authentication and group-key distribution model. Finally, an analysis against other middleware approaches shows that the developed DWS framework middleware provides the most complete coverage of security issues.
Keywords: middleware, parallel computing, data warehouse, security, group-key, high performance
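The abstract names an authentication and group-key distribution model but gives no construction. As a minimal sketch of the general idea (not the paper's actual protocol; names such as enroll_node are hypothetical), a coordinator can wrap a fresh group key under each node's pre-shared pairwise key, here using the cryptography package's Fernet recipe:

```python
# Minimal group-key distribution sketch (illustrative only, not the DWS protocol).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

class KeyServer:
    def __init__(self):
        self.pairwise = {}            # node_id -> pre-shared pairwise key
        self.group_key = Fernet.generate_key()

    def enroll_node(self, node_id):
        # Authentication would happen here; we only set up the pairwise key.
        key = Fernet.generate_key()
        self.pairwise[node_id] = key
        return key                    # delivered out-of-band to the node

    def wrapped_group_key(self, node_id):
        # Encrypt (wrap) the current group key under the node's pairwise key,
        # so an eavesdropper between server and node learns nothing.
        return Fernet(self.pairwise[node_id]).encrypt(self.group_key)

server = KeyServer()
node_key = server.enroll_node("worker-1")
token = server.wrapped_group_key("worker-1")
group_key = Fernet(node_key).decrypt(token)   # node recovers the group key
assert group_key == server.group_key
```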
Procedia PDF Downloads 120
728 Pervasive Computing: Model to Increase Arable Crop Yield through Intrusion Detection System (IDS)
Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo
Abstract:
Presently, there is much discussion of food security and of increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production in agriculture, with an interest in pervasive computing. The approach goes beyond the use of sensor networks for environmental monitoring by emphasizing the development of a system architecture that detects intruders (the intrusion process) which reduce the farmer's yield over the planting/harvesting period. The objective of the work is to define a model for a handheld or portable device that increases the quality and quantity of arable crops. The process incorporates an infrared motion-image sensor with a security alarm system that can send a noise signal to an intruder on the farm. This model of a portable image-sensing device for monitoring or scaring off humans, rodents, birds, and even pests will reduce post-harvest loss and thereby increase farm yield. Nano-intelligence technology is proposed to combat and minimize the intrusion processes that usually lead to low quality and quantity of farm produce. An intranet will be in place with a wireless LAN (WLAN), router, server, and client computer system or handheld device, e.g., a PDA or mobile phone. This approach enables the development of hybrid systems that will be effective as a security measure on the farm, since precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems. In the intelligent plant-production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modeling in a pervasive computing environment to solve problems in agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry.
Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop
Procedia PDF Downloads 406
727 Communication of Sensors in Clustering for Wireless Sensor Networks
Authors: Kashish Sareen, Jatinder Singh Bal
Abstract:
The use of wireless sensor networks (WSNs) has grown vastly in recent years, pointing to the crucial need for scalable and energy-efficient routing, data-gathering, and aggregation protocols in large-scale environments. WSNs have recently emerged as an important computing platform and continue to grow into diverse areas, providing new opportunities for networking and services. However, the energy-constrained and limited computing resources of the sensor nodes present major challenges in gathering data. The sensors collect data about their surroundings and forward it to a command centre through a base station. The past few years have witnessed increased interest in the potential use of WSNs, as they are very useful in target detection and other applications. Hierarchical clustering protocols have mostly been used to improve overall system lifetime, scalability, and energy efficiency. In this paper, the state of the art in hierarchical clustering approaches for large-scale WSN environments is surveyed.
Keywords: clustering, DLCC, MLCC, wireless sensor networks
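The survey does not reproduce any single protocol, but the classic LEACH threshold gives a feel for how hierarchical clustering rotates the energy load among nodes. The sketch below is plain LEACH, not the DLCC/MLCC schemes named in the keywords:

```python
# LEACH-style cluster-head election (illustrative baseline, not DLCC/MLCC).
import random

P = 0.05            # desired fraction of cluster heads per round
NODES = 100
been_head = set()   # nodes that already served as head in the current epoch

def threshold(rnd):
    # T(n) = P / (1 - P * (r mod 1/P)) for nodes not yet heads this epoch
    return P / (1 - P * (rnd % int(1 / P)))

def elect_heads(rnd):
    global been_head
    if rnd % int(1 / P) == 0:       # new epoch: every node eligible again
        been_head = set()
    heads = [n for n in range(NODES)
             if n not in been_head and random.random() < threshold(rnd)]
    been_head.update(heads)
    return heads

for rnd in range(5):
    print(f"round {rnd}: heads = {elect_heads(rnd)}")
```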
Procedia PDF Downloads 483
726 Global Healthcare Village Based on Mobile Cloud Computing
Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar
Abstract:
Cloud computing, the use of hardware and software delivered as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centres prompt the need for an efficient scheme that makes health data available with low response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models (country, continent, and global health clouds) to help solve the problem mentioned above. In the continent model, two data centres are created, one local and one global. The local centre replies to requests from residents within the continent, whereas the global centre replies to the requests of others. With the methods adopted, relevant medical data are assured to be available to patients, specialists, and emergency staff regardless of location and time. From our intensive simulation experiments, it was observed that the service broker policy optimized for response time yields very good performance in terms of reduced response time. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines); the proportional increase in response time stays within 9%. The results of our simulation experiments show that utilizing MGHV reduces healthcare expenditures and helps solve the problem of unqualified medical staff faced by both developed and developing countries.
Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy
Procedia PDF Downloads 378
725 Applications of AI, Machine Learning, and Deep Learning in Cyber Security
Authors: Hailyie Tekleselase
Abstract:
Deep learning is increasingly used as a building block of security systems. However, neural networks are hard to interpret and are typically opaque to the practitioner. This paper presents a detailed survey of computing methods in cyber security and analyzes the prospects of enhancing cyber-security capabilities by increasing the intelligence of security systems. There are many AI-based applications used in industrial scenarios such as the Internet of Things (IoT), smart grids, and edge computing. Machine learning technologies require a training process, which introduces protection problems in the training data and algorithms. We present machine learning techniques currently applied to the detection of intrusion, malware, and spam. Our conclusions are based on an extensive review of the literature as well as on experiments performed on real enterprise systems and network traffic. We conclude that these problems can be solved successfully only when methods of artificial intelligence are used alongside human experts or operators.
Keywords: artificial intelligence, machine learning, deep learning, cyber security, big data
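As a concrete instance of the surveyed techniques, a hedged scikit-learn sketch of an intrusion-style classifier follows; the synthetic feature set and labels are invented for illustration and stand in for real flow features:

```python
# Toy intrusion classifier in the spirit of the surveyed ML techniques.
# Synthetic data stands in for real flow features (duration, bytes, ...).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, weights=[0.9, 0.1],
                           random_state=0)   # 10% "attack" traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "attack"]))
```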
Procedia PDF Downloads 127
724 Decision-Making Strategies on Smart Dairy Farms: A Review
Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh
Abstract:
Farm management and operations will change drastically due to access to real-time data, real-time forecasting, and the tracking of physical items, in combination with Internet of Things developments that further automate farm operations. Dairy farms have embraced technological innovations and procured vast streams of permanent data during the past decade; however, the integration of this information to improve whole-farm management and decision-making does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyse on-farm and off-farm data in real time for practical and relevant environmental and economic action. The systems developed, based on machine learning and artificial intelligence, need to be connected for useful output, a better understanding of the whole farming picture, and environmental impact. Evolutionary computing can be very effective in finding the optimal combination of sets of objects and, finally, in strategy determination. The system of the future should be able to manage a dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming, as well as improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide an insight into the state of the art of big-data applications and evolutionary computing in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.
Keywords: big data, evolutionary computing, cloud, precision technologies
Procedia PDF Downloads 190
723 Digital Homeostasis: Tangible Computing as a Multi-Sensory Installation
Authors: Andrea Macruz
Abstract:
This paper explores computation as a process for design by examining how computers can become more than an operative strategy in a designer's toolkit. It documents this by building upon concepts from neuroscience and Antonio Damasio's homeostasis theory, which is the control of bodily states through feedback, intended to keep conditions favorable for life. To do this, it follows a methodology of algorithmic drawing and discusses the outcomes of three multi-sensory design installations, which grew out of a course in an academic setting. It explains both the studio process that took place to create the installations and the computational process that was developed, related to the fields of algorithmic design and tangible computing. It discusses how designers can use computational range to achieve homeostasis related to sensory data in a multi-sensory installation. The outcomes show clearly how people and computers interact with different sensory modalities and affordances. They propose using computers as metaphysical stabilizers rather than tools.
Keywords: algorithmic drawing, Antonio Damasio, emotion, homeostasis, multi-sensory installation, neuroscience
Procedia PDF Downloads 109
722 Design Off-Campus Interactive Cloud-Based Learning Model
Authors: Osamah Al Qadoori
Abstract:
The use of cloud computing in the educational sector is growing rapidly in the UAE. Within a cloud-learning environment, students can join the online classroom remotely, whenever and wherever; on the other hand, cloud-based learning greatly decreases infrastructure and maintenance costs. Nowadays, cloud-based teaching and learning environments are gaining higher demand and concern in many schools (K-12), institutes, colleges, and universities in the UAE. Many students do not use the available online educational resources effectively. The challenging question is: to what extent are the educational resources installed in the cloud environment valuable and constructive? In this paper, the researcher seeks to design an expert-agent prototype in which the huge amount of information accommodated in the cloud environment goes through expert filtration before being utilized by other clients (students). To achieve this goal, the present research focuses on two different directions: human educational expertise and automated educational expert systems.
Keywords: cloud computing, cloud-learning environment, online classroom, educational human expertise, automated educational expert systems
Procedia PDF Downloads 542
721 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme
Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara
Abstract:
This paper describes the problem of building secure computational services for encrypted information in the Cloud: computing without decrypting the encrypted data. It thereby meets the aspiration of a computational-encryption algorithmic model that could enhance the security of big data with respect to privacy or confidentiality, availability, and integrity of the data and the user's security. The cryptographic model applied for the computational processing of the encrypted data is a fully homomorphic encryption scheme. We contribute a theoretical presentation of high-level computational processes based on number theory, derivable from abstract algebra, which can easily be integrated and leveraged in the cloud computing interface, with detailed theoretic mathematical concepts for fully homomorphic encryption models. This contribution advances the full implementation of big data analytics based on cryptographically secure algorithms.
Keywords: big data analytics, security, privacy, bootstrapping, fully homomorphic encryption scheme
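The abstract stays at the level of number theory; the sketch below deliberately uses the simpler Paillier scheme (additively homomorphic only, not fully homomorphic, and with toy-sized primes) just to show "computing without decrypting" concretely:

```python
# Toy Paillier encryption: additively homomorphic, NOT fully homomorphic
# and NOT secure at this key size -- an illustration of computing on
# ciphertexts, not the paper's FHE scheme.
import math, random

p, q = 1117, 1151                 # toy primes; real keys use ~1024-bit primes
n, nsq = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, nsq)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, nsq) * pow(r, n, nsq)) % nsq

def decrypt(c):
    return (L(pow(c, lam, nsq)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % nsq           # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == 42       # the server never saw 12, 30, or 42
```

A fully homomorphic scheme additionally supports multiplication of plaintexts under encryption, which is what the bootstrapping keyword refers to.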
Procedia PDF Downloads 484
720 Variants of Mathematical Induction as Strong Proof Techniques in Theory of Computing
Authors: Ahmed Tarek, Ahmed Alveed
Abstract:
In the theory of computing, there is a wide variety of direct and indirect proof techniques. However, mathematical induction (MI) stands out as one of the most powerful proof techniques for proving hypotheses, theorems, and new results. There are variations of mathematical-induction-based proof techniques, broadly classified into three categories: structural induction, weak induction (WI), and strong induction (SI). In this expository paper, several different variants of mathematical induction are explored, and specific scenarios are discussed in which a given induction technique is more advantageous than other induction strategies. The essential differences among the variants of mathematical induction are also explored. The points of separation among mathematical induction, recursion, and logical deduction are precisely analyzed, and the relationship between variations of recurrence relations and mathematical induction is explored. In this context, the application of recurrence relations and mathematical induction is considered in a single framework for codewords over a given alphabet.
Keywords: alphabet, codeword, deduction, mathematical induction, recurrence relation, strong induction, structural induction, weak induction
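To make the codeword/recurrence connection concrete, here is a small example of our own (not from the paper): codewords over the alphabet {0,1} with no two adjacent 1s satisfy a(n) = a(n-1) + a(n-2), a claim provable by strong induction (condition on whether the word ends in 0 or in 01); the script checks the recurrence against brute-force enumeration:

```python
# Codewords over {0,1} with no "11" substring: a(n) = a(n-1) + a(n-2),
# with a(1) = 2 and a(2) = 3 (a Fibonacci-type recurrence provable by
# strong induction on the word's last one or two symbols).
from itertools import product

def brute_count(n):
    return sum("11" not in "".join(w) for w in product("01", repeat=n))

def recurrence_count(n):
    a, b = 2, 3                      # a(1), a(2)
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

for n in range(1, 15):
    assert brute_count(n) == recurrence_count(n)
print("recurrence verified for n = 1..14")
```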
Procedia PDF Downloads 165
719 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. A systematic approach should therefore take advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and should facilitate design-space exploration. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors; it supports task partitioning, communication, and synchronization. At first run, the intermediate language, represented by a data-flow diagram, can generate executable code for the target processor or can be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed across implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation and approximately equal energy efficiency while using less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
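Contrast stretching is named as one of the benchmarked kernels; as a point of reference, a plain NumPy version (the scalar baseline, not the paper's FPGA/GPU mapping) is sketched below:

```python
# Plain NumPy contrast stretching -- the scalar reference for the kernel
# the paper benchmarks across FPGA/CPU/GPU (this is not their mapping).
import numpy as np

def contrast_stretch(img, lo=0, hi=255):
    """Linearly map [img.min(), img.max()] onto [lo, hi]."""
    img = img.astype(np.float32)
    mn, mx = img.min(), img.max()
    if mx == mn:                       # flat image: nothing to stretch
        return np.full_like(img, lo, dtype=np.uint8)
    out = (img - mn) * (hi - lo) / (mx - mn) + lo
    return out.astype(np.uint8)

frame = np.random.randint(60, 120, size=(480, 640), dtype=np.uint8)
stretched = contrast_stretch(frame)
print(frame.min(), frame.max(), "->", stretched.min(), stretched.max())
```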
Procedia PDF Downloads 119
718 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications
Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan
Abstract:
High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the radar cross section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is often cumbersome, leading to large storage requirements. This paper proposes a spherical-harmonic-based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least-squares problem with a special sparsity constraint. This paper solves the problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical-harmonic-based scatterer model can effectively represent the RCS data of complex targets.
Keywords: RADAR, RCS, high performance computing, point scatterer model
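The paper's modified OMP is not spelled out; for orientation, here is standard Orthogonal Matching Pursuit in NumPy, recovering a sparse coefficient vector from a random dictionary (the sparsity structure of the actual scatterer problem is more specialized than this):

```python
# Standard Orthogonal Matching Pursuit (the paper uses a modified variant
# with a special sparsity constraint; this is the plain baseline).
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~= A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the current support, then update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, k=5)
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```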
Procedia PDF Downloads 192
717 Using M-Learning to Support Learning of the Concept of the Derivative
Authors: Elena F. Ruiz, Marina Vicario, Chadwick Carreto, Rubén Peredo
Abstract:
One of the main obstacles in Mexico's engineering programs is math comprehension, especially of the concept of the derivative. For this reason, we present a case study that relates mobile computing and classroom learning at the Escuela Superior de Cómputo, based on the educational model of the Instituto Politécnico Nacional (competence-based work and problem solving), in which we propose apps and activities to teach the concept of the derivative. M-learning is emphasized as one of its lines, as the objective is the use of mobile devices running an app that uses components such as sensors, screen, camera, and processing power in classroom work. In this paper, we employed augmented reality (ARRoC), based on the good results this technology has had in the field of learning. The proposal was developed using a qualitative research methodology supported by quantitative research. The methodological instruments used were observation, questionnaires, interviews, and evaluations. We obtained positive results: a 40% increase using m-learning, compared with a 20% increase using traditional means.
Keywords: augmented reality, classroom learning, educational research, mobile computing
Procedia PDF Downloads 362
716 Enhanced Dynamic Car Detection Based on Optimized HOG Descriptor
Authors: Mansouri Nabila, Ben Jemaa Yousra, Motamed Cina, Watelain Eric
Abstract:
Research and development efforts in intelligent advanced driver assistance systems (ADAS) seek to save lives and reduce the number of on-road fatalities. For traffic and emergency monitoring, the essential but challenging task is vehicle detection and tracking in reasonably short time. This purpose needs, first of all, a powerful dynamic car-detector model. This paper presents an optimized HOG process based on the fusion of shape and motion parameters. Our proposed approach computes block-wise HOG features from foreground blobs using a configurable search window and pathway, in order to overcome the shortcoming of the HOG descriptor in terms of computing time and to improve its performance in dynamic applications. Indeed, we show in this paper that the block-wise HOG descriptor combined with motion parameters is a very suitable car detector, which reaches in record time a satisfactory recognition rate in dynamic outdoor areas and surpasses several popular works without using sophisticated and expensive architectures such as GPUs and FPGAs.
Keywords: car detector, HOG, motion, computing time
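For contrast with the optimized block-wise variant, the dense baseline HOG that the paper improves on can be computed with scikit-image; the parameters below are the common defaults, not the paper's configuration:

```python
# Dense baseline HOG with scikit-image -- the descriptor whose computing
# time the paper's block-wise, foreground-blob variant tries to cut down.
from skimage import data
from skimage.feature import hog
from skimage.transform import resize

img = resize(data.camera(), (128, 64))       # canonical detection window
features, hog_image = hog(img,
                          orientations=9,
                          pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2),
                          visualize=True)
print("HOG feature vector length:", features.shape[0])   # 3780 for 128x64
```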
Procedia PDF Downloads 324
715 Some Conjectures and Programs about Computing the Detour Index of Molecular Graphs of Nanotubes
Authors: Shokofeh Ebrtahimi
Abstract:
Let G be the chemical graph of a molecule. The matrix D = [d_ij] is called the detour matrix of G if d_ij is the length of the longest path between atoms i and j. The sum of all entries above the main diagonal of D is called the detour index of G. Chemical graph theory is the topology branch of mathematical chemistry which applies graph theory to the mathematical modelling of chemical phenomena [1]. The pioneers of chemical graph theory are Alexandru Balaban, Ante Graovac, Ivan Gutman, Haruo Hosoya, Milan Randić, and Nenad Trinajstić. In this paper, a new program for computing the detour index of molecular graphs of nanotubes built from heptagons is determined. Some conjectures about the detour index of molecular graphs of nanotubes are included.
Keywords: chemical graph, detour matrix, detour index, carbon nanotube
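Since the longest-path problem is NP-hard in general, a brute-force detour index is feasible only for small molecular graphs; a hedged networkx sketch (not the paper's specialized heptagon program) follows, using the six-cycle benzene-ring skeleton as the example:

```python
# Brute-force detour index: d_ij = length of the LONGEST simple path
# between atoms i and j. Longest path is NP-hard, so this only suits
# small molecular graphs (the paper's nanotube program is specialized).
import networkx as nx
from itertools import combinations

def detour_index(G):
    total = 0
    for i, j in combinations(G.nodes, 2):
        longest = max(len(p) - 1 for p in nx.all_simple_paths(G, i, j))
        total += longest                 # sum of entries above the diagonal
    return total

C6 = nx.cycle_graph(6)                   # benzene ring skeleton
print(detour_index(C6))                  # prints 63: each pair's detour
                                         # takes the long way around the ring
```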
Procedia PDF Downloads 293
714 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study
Authors: Almudena Konrad, Tomás Galguera
Abstract:
Lack of motivation and interest is a serious obstacle to students' learning of computing skills. A need exists for a knowledge base on effective pedagogy and curricula to teach computer programming. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students in continuing to develop their computational thinking and related coding skills individually. Utilizing a quasi-experimental, mixed-methods design, the pedagogical approaches and methods were assessed in two contrasting groups of students with different socioeconomic status, gender, and age compositions. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project's success at both teaching and inspiring students.
Keywords: computational thinking, computing education, computer programming curriculum, logic, teaching methods
Procedia PDF Downloads 316
713 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and of the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% of designs with high probability. Then, in the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and of the increment in simulation samples, to explore the impact on the performance of the approach. The results show that the choice of initial sample size and of the increment in simulation samples does affect the performance of the selection approach.
Keywords: large-scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
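To make the two-stage setup concrete, here is a toy simulation of our own (not the paper's generic example): stage one screens all designs with n0 samples and keeps the observed best k%; stage two spends an increment of delta samples on the survivors and returns the top m. Equal allocation stands in here for the optimal computing budget allocation rule:

```python
# Toy two-stage sequential selection: ordinal screening, then extra budget.
# n0 and delta are the knobs the paper studies (initial size, increment).
import numpy as np

rng = np.random.default_rng(1)
true_means = rng.normal(0, 1, size=1000)        # 1000 alternative designs

def simulate(design, n):                        # noisy performance samples
    return true_means[design] + rng.normal(0, 2, size=n)

def select_top_m(n0=10, delta=50, k=0.05, m=5):
    # Stage 1: ordinal optimization -- keep the observed best k%.
    stage1 = np.array([simulate(d, n0).mean() for d in range(1000)])
    subset = np.argsort(stage1)[-int(1000 * k):]
    # Stage 2: spend the increment on the subset, then pick the top m.
    stage2 = {d: simulate(d, n0 + delta).mean() for d in subset}
    return sorted(stage2, key=stage2.get)[-m:]

picked = select_top_m()
actual_best = set(np.argsort(true_means)[-5:])
print("correctly selected:", len(actual_best & set(picked)), "of 5")
```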
Procedia PDF Downloads 356
712 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been used advantageously to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which in turn is based on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the use of computers with high memory usage.
Keywords: neural network computing, continuous functions, input-output mapping, training time, machines with big memories
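The paper's output-layer coding approach is specific to it; as a generic point of comparison, a conventional one-dimensional input-output mapping fitted by a small MLP is sketched below (scikit-learn is our choice of tooling, not the authors'):

```python
# Conventional 1-D input-output mapping with a small neural network
# (the baseline the paper's output-layer coding approach is compared to).
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()                      # the continuous function to model

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)
print("max abs error:", np.max(np.abs(net.predict(X) - y)))
```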
Procedia PDF Downloads 284
711 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the staggering amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical big data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and big data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC to evaluate or solve limited but meaningful problem instances. The paper also indicates solutions to optimization problems, discusses the benefits of big data for computational biology, and illustrates the current state of the art and the future generation of HPC computing with big data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
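The all-to-all comparison pattern mentioned above is easy to state precisely; here is a minimal sketch of parallelizing pairwise sequence comparisons across cores, with Hamming distance standing in for a real alignment score:

```python
# All-to-all pairwise comparison, the polynomial-time-but-huge pattern the
# text mentions; Hamming distance stands in for a real alignment score.
from itertools import combinations
from multiprocessing import Pool
import random

random.seed(0)
seqs = ["".join(random.choices("ACGT", k=100)) for _ in range(200)]

def compare(pair):
    i, j = pair
    dist = sum(a != b for a, b in zip(seqs[i], seqs[j]))
    return i, j, dist

if __name__ == "__main__":
    with Pool() as pool:                   # spread O(n^2) pairs over cores
        results = pool.map(compare, combinations(range(len(seqs)), 2))
    print(len(results), "pairwise distances computed")   # 19900 pairs
```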
Procedia PDF Downloads 366
710 POD and Wavelets Application for Aerodynamic Design Optimization
Authors: Bonchan Koo, Junhee Han, Dohyung Lee
Abstract:
This research attempts to evaluate the accuracy and efficiency of a design optimization procedure which combines a wavelets-based solution algorithm and a proper orthogonal decomposition (POD) database management technique. The aerodynamic design procedure calls for high-fidelity computational fluid dynamics (CFD) simulations and the consideration of a large number of flow conditions and design constraints. Even with significant advancement in computing power, the current level of integrated design processes requires substantial computing time and resources. POD reduces the degrees of freedom of the full system by conducting a singular value decomposition over various field simulations. For additional efficiency improvement of the procedure, an adaptive wavelet technique is also employed during the POD training period. The proposed design procedure was applied to the optimization of wing aerodynamic performance. Throughout the research, it was confirmed that the POD/wavelets design procedure can significantly reduce the total design turnaround time and is also able to capture all the detailed, complex flow features, as in a full-order analysis.
Keywords: proper orthogonal decomposition (POD), wavelets, CFD, design optimization, reduced-order model (ROM)
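POD's reduction step is standard enough to sketch: collect snapshots as columns, take the SVD, and keep the leading modes. The synthetic field below stands in for the paper's CFD flow solutions:

```python
# POD by snapshot SVD: keep the leading modes of a snapshot matrix.
# Synthetic snapshots stand in for the paper's CFD flow solutions.
import numpy as np

x = np.linspace(0, 1, 400)
params = np.linspace(0.5, 2.0, 60)
# each column is one "flow solution" at a different design condition
snapshots = np.stack([np.sin(k * np.pi * x) + 0.1 * np.sin(7 * k * x)
                      for k in params], axis=1)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                     # truncate to r POD modes
basis = U[:, :r]

# project the snapshots onto the reduced basis and measure the error
recon = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"rank-{r} POD basis, relative reconstruction error: {rel_err:.2e}")
```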
Procedia PDF Downloads 470
709 R Data Science for Technology Management
Authors: Sunghae Jun
Abstract:
Technology management (TM) is an important issue for a company improving its competitiveness. Among the many activities of TM, technology analysis (TA) is an important factor, because most decisions in the management of technology are made on the basis of TA results. TA analyzes the development of a target technology using statistics or the Delphi method. TA based on Delphi depends on experts' domain knowledge; in comparison, TA by statistics and machine learning algorithms uses objective data such as patents or papers instead of expert knowledge. Many quantitative TA methods based on statistics and machine learning have been studied and used for technology forecasting, technological innovation, and the management of technology. They apply diverse computing tools and many analytical methods case by case, and it is not easy to select the suitable software and statistical method for a given TA task. So, in this paper, we propose a methodology for quantitative TA using the statistical computing software R and data science, to construct a general framework for TA. With a case study, we also show how our methodology is applied in a real field. This research contributes to R&D planning and technology valuation in TM areas.
Keywords: technology management, R system, R data science, statistics, machine learning
Procedia PDF Downloads 458
708 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud
Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani
Abstract:
In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when it is stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable 'sticky' policies on the Graphical User Interface (GUI) data-bound components during the application development phase, to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes for the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers that process the data in the Cloud. This not only enhances the privacy-awareness of the developed Cloud services, but also results in major savings in performance and energy efficiency, because the privacy mechanisms are applied solely to sensitive data units and not to all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.
Keywords: privacy enforcement, platform-as-a-service, privacy awareness, cloud computing privacy
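As a hypothetical rendering of the idea (the names, classes, and API below are invented for illustration, not the paper's), a sticky policy can travel with the data unit, and the runtime applies costly protection only where the policy demands it:

```python
# Hypothetical sticky-policy sketch: the policy travels with the data unit,
# and only fields classified as sensitive get the costly protection.
from dataclasses import dataclass, field

PUBLIC, SENSITIVE = "public", "sensitive"      # privacy classification classes

@dataclass
class DataUnit:
    name: str
    value: str
    policy: str = field(default=PUBLIC)        # the "sticky" policy label

def store(unit, encrypt):
    # Enforcement point: apply privacy mechanisms per the attached policy,
    # not to all user content -- the source of the claimed savings.
    payload = encrypt(unit.value) if unit.policy == SENSITIVE else unit.value
    print(f"storing {unit.name!r} ({unit.policy}): {payload}")

fake_encrypt = lambda s: "".join(reversed(s))  # stand-in for real encryption
store(DataUnit("display_name", "alice"), fake_encrypt)
store(DataUnit("diagnosis", "confidential", policy=SENSITIVE), fake_encrypt)
```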
Procedia PDF Downloads 228
707 Objectives of the Standardization of Technical Terminology Nowadays in Albanian
Authors: Gani Pllana
Abstract:
Under the conditions of the rapid development of technics and technology in recent years, the cooperation of the scientific-technical language with the standard Albanian language continues with higher intensity than before. We notice a vigorous enrichment of the vocabulary of technical terminology, due to the birth and formation of new fields and subfields of technics and technology, such as computing, mechatronics, and telemetry, and a multitude of concepts, many of which, on the one hand, are marked with names from the languages they come from, mainly English, but, on the other hand, meet their needs from the lexical composition of the mother tongue (by raising common words to the status of terms) and from the activation of other layers, such as compound-word terms. Thus, for example, in the field of computing, we notice the inclusion of ordinary vocabulary for reproductive reasons, like mi, dritare, flamur, adresë, skedar (Engl.: mouse, window, flag, address, file), and, along with them, compound-word terms serving to differentiate the relevant concepts, like adresë e hiperlidhjes, adresë e uebit, adresë relative, adresë virtuale (Engl.: hyperlink address, web address, relative address, virtual address), etc.
Keywords: common words, Albanian language, technical terminology, standardization
Procedia PDF Downloads 291
706 Improving System Performance through User's Resource Access Patterns
Authors: K. C. Wong
Abstract:
This paper demonstrates a number of examples in the hope of shedding some light on the possibility of designing future operating systems in a more adaptation-based manner. A modern operating system, we conceive, should possess the capability of 'learning' in such a way that it can dynamically adjust its services and behavior according to the current status of the environment in which it operates. In other words, a modern operating system should play a more proactive role when providing system services to users. As such, a modern operating system is expected to create a computing environment in which its users are provided with system services that better match their dynamically changing needs. The examples demonstrated in this paper show that a user's resource access patterns, 'learned' and determined during a session, can be utilized to improve system performance and hence provide users with a better and more effective computing environment. The paper also discusses how to use the frequency, continuity, and duration of resource accesses in a session to quantitatively measure and determine a user's resource access patterns for the examples shown in the paper.
Keywords: adaptation-based systems, operating systems, resource access patterns, system performance
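The three measures named (frequency, continuity, duration) can be pinned down on a session log; the exact definitions below, with continuity taken as the longest run of accesses whose gaps stay under a threshold, are our assumption, since the abstract does not fix them:

```python
# Frequency, continuity, and duration of resource accesses in one session.
# Definitions are an illustrative assumption: continuity = longest streak
# of accesses whose inter-access gap stays under GAP_LIMIT seconds.
from collections import defaultdict

GAP_LIMIT = 30.0
# (resource, start_time_s, duration_s) events from one session
log = [("fileA", 0.0, 2.0), ("fileA", 10.0, 1.5), ("fileB", 15.0, 4.0),
       ("fileA", 25.0, 0.5), ("fileA", 120.0, 3.0)]

per_resource = defaultdict(list)
for res, start, dur in log:
    per_resource[res].append((start, dur))

for res, events in per_resource.items():
    events.sort()
    frequency = len(events)                       # how often it was accessed
    duration = sum(d for _, d in events)          # total time spent on it
    streak = best = 1
    for (s0, _), (s1, _) in zip(events, events[1:]):
        streak = streak + 1 if s1 - s0 <= GAP_LIMIT else 1
        best = max(best, streak)
    print(f"{res}: frequency={frequency}, continuity={best}, "
          f"duration={duration:.1f}s")
```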
Procedia PDF Downloads 146
705 A Genetic Algorithm for the Load Balance of Parallel Computational Fluid Dynamics Computation with Multi-Block Structured Mesh
Authors: Chunye Gong, Ming Tie, Jie Liu, Weimin Bao, Xinbiao Gan, Shengguo Li, Bo Yang, Xuguang Chen, Tiaojie Xiao, Yang Sun
Abstract:
Large-scale CFD simulation relies on high-performance parallel computing, and load balance is the key factor affecting parallel efficiency. This paper focuses on the load-balancing problem of parallel CFD simulation with structured meshes. A mathematical model for this load-balancing problem is presented. A genetic algorithm, its fitness computation, and a two-level code are designed, along with an optimal selector, a robust operator, and a local optimization operator. The properties of the presented genetic algorithm are discussed in depth, and the effects of the optimal selector, robust operator, and local optimization operator are demonstrated by experiments. The experimental results on different test sets, DLR-F4, and aircraft design applications show that the presented load-balancing algorithm is robust, converges quickly, and is useful in real engineering problems.
Keywords: genetic algorithm, load-balancing algorithm, optimal variation, local optimization
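The paper's operators (optimal selector, robust operator, local optimization) are specific to it; the skeleton below is a generic GA for the same objective, assigning mesh blocks to processors so that the maximum per-processor load is minimized:

```python
# Generic GA for block-to-processor assignment (minimize the maximum
# processor load). The paper's optimal selector / robust / local-
# optimization operators are specific to it; this is the common skeleton.
import random

random.seed(0)
BLOCK_LOADS = [random.randint(50, 500) for _ in range(40)]   # cells per block
NPROCS, POP, GENS = 8, 60, 300

def makespan(assign):                 # fitness: max load over processors
    loads = [0] * NPROCS
    for blk, proc in enumerate(assign):
        loads[proc] += BLOCK_LOADS[blk]
    return max(loads)

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(a, rate=0.05):
    return [random.randrange(NPROCS) if random.random() < rate else g
            for g in a]

pop = [[random.randrange(NPROCS) for _ in BLOCK_LOADS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)
    elite = pop[:POP // 4]            # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=makespan)
ideal = sum(BLOCK_LOADS) / NPROCS
print(f"best makespan: {makespan(best)}, ideal balance: {ideal:.0f}")
```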
Procedia PDF Downloads 186
704 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model
Authors: Abdellahi Cheikh
Abstract:
With the emergence of quantum computers with very powerful capabilities, and with the rapid development of technologies such as computing power and computing speed, the security of shared-key exchange between two interlocutors poses a serious problem. The Diffie-Hellman (DH) algorithm is therefore more vulnerable than ever: no mechanism guarantees the security of the key exchange, so if an intermediary manages to intercept it, interception is easy. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. Our Diffie-Hellman-RSA-AES (DRA) model encrypts the information exchanged between two users using the three encryption algorithms DH, RSA, and AES; the modification made to it uses steganographic photos to hide the contents of the p, g, and AES-key values, which in the original DRA model are sent unencrypted when computing each user's public key. This work includes a comparative study between the DRA model and all existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. A simulation demonstrates the effectiveness of the modification made to the DRA model. The results obtained show that our model has a security advantage over the existing solutions, so we made these changes to reinforce the security of the DRA model.
Keywords: Diffie-Hellman, DRA, RSA, Advanced Encryption Standard
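To ground the DRA discussion, here is a toy Diffie-Hellman exchange with the shared secret hashed into an AES-ready key; the parameters are toy-sized, and the steganographic hiding of p and g from the modified model is not shown:

```python
# Toy Diffie-Hellman exchange; the shared secret is hashed into a 256-bit
# key ready for AES. Toy-sized parameters -- real DH uses ~2048-bit groups,
# and the paper's steganographic hiding of p, g is not modeled here.
import hashlib, secrets

p = 2**127 - 1          # a Mersenne prime -- toy-sized, not a real DH group
g = 5

a = secrets.randbelow(p - 2) + 2        # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2        # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)       # public values exchanged in the clear

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob       # both ends agree without sending it

# Derive a 256-bit AES key from the shared secret (the AES leg of DRA
# would encrypt the session under this key; RSA would sign/authenticate).
aes_key = hashlib.sha256(shared_alice.to_bytes(16, "big")).digest()
print("derived AES-256 key:", aes_key.hex())
```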
Procedia PDF Downloads 94
703 Sharing Experience in Authentic Learning for Mobile Security
Abstract:
Mobile devices such as smartphones are becoming more and more popular in our daily lives. Security vulnerabilities and threat attacks have become a very emerging and important research and education topic in the computing security discipline. There is a need for an innovative mobile-security hands-on laboratory that provides students with real-world-relevant experience in mobile threat analysis and protection. This paper presents an authentic teaching and learning approach to mobile security with smartphone devices which covers the most important mobile threats across most aspects of mobile security. Each lab focuses on one type of mobile threat, such as the mobile messaging threat, and conveys the threat analysis and protection in multiple ways, including lectures and tutorials, multimedia or app-based demonstrations for threat analysis, and mobile app development for threat protection. This authentic learning approach is affordable and easily adoptable, and it immerses students in a real-world-relevant learning environment with real devices. The approach can also be applied to many other mobile-related courses, such as mobile Java programming, databases, and networks, and to any security-relevant course, so that students can learn concepts and principles better with hands-on authentic learning experience.
Keywords: mobile computing, Android, network, security, labware
Procedia PDF Downloads 409
702 A Novel Way to Create Qudit Quantum Error Correction Codes
Authors: Arun Moorthy
Abstract:
Quantum computing promises algorithmic speedups for a number of tasks; however, as in classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built using qubits (2-level systems), qudits (systems with more than 2 levels) are appealing, since they can provide an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Qudit quantum error-correction codes are currently available for systems of various levels; however, these codes sometimes have overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes to satisfy their requirements. This project addresses two methods to increase the number of quantum error-correcting codes available to researchers. The first method is generating new codes for a given set of parameters. The second method is generating new error-correction codes by using existing codes as a starting point to generate codes for another level (e.g., deriving a code for a 5-level system from one for a 2-level system). The project therefore builds a website that researchers can use to generate new error-correction codes or codes based on existing ones.
Keywords: qudit, error correction, quantum, qubit
Procedia PDF Downloads 162
701 Presenting Internals of Networks Using Bare Machine Technology
Authors: Joel Weymouth, Ramesh K. Karne, Alexander L. Wijesinha
Abstract:
The Bare Machine Internet is part of the Bare Machine Computing (BMC) paradigm. It is used in programming applications that run directly on a device: software that runs directly against the hardware using the CPU, memory, and I/O, without an operating system or resident mass storage. An important part of the BMC paradigm is the Bare Machine Internet. It utilizes an application development model in which software interfaces directly with the hardware on a network server and file server. Because it is 'bare,' it is a powerful teaching and research tool that can readily display the internals of the network protocols, software, and hardware of the applications running on the bare server. It was also demonstrated that the bare server is accessible from a laptop and from a smartphone or Android device. The purpose was to show the further practicality of the Bare Internet in computer engineering and computer science education and research, and to show that an undergraduate student can take advantage of a bare server with any device and any browser, at any release version, connected to the internet. This paper presents the Bare Web Server as an educational tool and discusses possible applications of the paradigm.
Keywords: bare machine computing, online research, network technology, visualizing network internals
Procedia PDF Downloads 173
700 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems
Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia
Abstract:
The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software-as-a-service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach for offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations may then differ according to specific requirements, met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners, and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access-control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to adhere strictly to the tenant's known business processes. This paper argues that security in cloud databases should not be considered an isolated issue; rather, it should be included in the initial phases of database design and monitored continuously throughout the whole development process. The paper aims to identify a number of the most common security risks and threats, specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed, and some techniques which might be utilised to overcome them are listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenancy-based SaaS. This would then assist cloud service providers in defining, implementing, and managing security policies as per tenant customisation requirements, whilst assuring the security of the customers' data.
Keywords: cloud computing, data management, multi-tenancy, requirements, security
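One of the requirements argued for here, row-level tenant isolation in shared tables, can be made concrete with a guard that scopes every query by tenant; the schema and helper below are illustrative, not from the paper:

```python
# Row-level tenant isolation on shared tables: every query is forced
# through a guard that appends the tenant scope. Illustrative schema only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
db.executemany("INSERT INTO records VALUES (?, ?)",
               [("acme", "acme-secret"), ("globex", "globex-secret")])

class TenantSession:
    def __init__(self, db, tenant_id):
        self.db, self.tenant_id = db, tenant_id

    def fetch(self, where="1=1", params=()):
        # tenant_id is always bound server-side; a tenant cannot widen scope
        sql = f"SELECT payload FROM records WHERE tenant_id = ? AND ({where})"
        return self.db.execute(sql, (self.tenant_id, *params)).fetchall()

acme = TenantSession(db, "acme")
print(acme.fetch())                                   # only acme rows
print(acme.fetch("payload LIKE ?", ("globex%",)))     # cross-tenant: empty
```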
Procedia PDF Downloads 157