Search results for: statistical computing
4788 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals are currently moving to cloud computing, which many consider a significant shift in the field of computing. Clouds are distributed, parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and adoption of cloud infrastructure, diverse computing processes can be executed in cloud environments, and many organizations and individuals around the world depend on them to host their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines, known as cloud task scheduling. Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to a changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random; First Come First Serve, FCFS; and Fastest Processor to the Largest Task First, FPLTF). ACO is a randomized optimization search method used here to assign incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among the virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. After analyzing and evaluating the experimental results, we find that ACO-TS performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
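The abstract does not spell out the ACO update rules, so the following is only a minimal sketch of the general idea: each task picks a VM by roulette-wheel selection weighted by pheromone and a load-based heuristic, and pheromone is reinforced when the makespan stays low. All names, the 1/(1+load) heuristic, and the 0.9 evaporation factor are assumptions, not ACO-TS's actual design.

```java
import java.util.Random;

public class AcoTsSketch {
    static final Random RNG = new Random(42);

    // pheromone[v]: learned desirability of VM v; load[v]: work already assigned to it.
    static int pickVm(double[] pheromone, double[] load) {
        double[] weight = new double[pheromone.length];
        double total = 0;
        for (int v = 0; v < pheromone.length; v++) {
            double heuristic = 1.0 / (1.0 + load[v]);     // prefer lightly loaded VMs (assumed heuristic)
            weight[v] = pheromone[v] * heuristic;
            total += weight[v];
        }
        double r = RNG.nextDouble() * total;              // roulette-wheel selection
        for (int v = 0; v < weight.length; v++) {
            r -= weight[v];
            if (r <= 0) return v;
        }
        return weight.length - 1;
    }

    public static void main(String[] args) {
        double[] taskLengths = {400, 250, 900, 120, 600}; // invented task sizes
        double[] vmSpeeds = {1.0, 2.0};                   // invented relative VM speeds
        double[] pheromone = {1.0, 1.0};
        double[] load = new double[vmSpeeds.length];

        for (double len : taskLengths) {
            int v = pickVm(pheromone, load);
            load[v] += len / vmSpeeds[v];
            // Reinforce choices that keep the makespan low, then evaporate (assumed rule).
            double makespan = Math.max(load[0], load[1]);
            pheromone[v] = 0.9 * pheromone[v] + 1.0 / makespan;
        }
        System.out.printf("loads: %.0f, %.0f -> makespan %.0f%n",
                load[0], load[1], Math.max(load[0], load[1]));
    }
}
```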
Procedia PDF Downloads 421
4787 A Type-2 Fuzzy Model for Link Prediction in Social Network
Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi
Abstract:
Predicting links that may occur in the future, as well as missing links, in social networks is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and Type-2 fuzzy logic to predict links with regard to nodes' activity and the relationship between two nodes. Our model is tested on collaboration networks. It is found that the accuracy of prediction is significantly higher than that of Type-1 fuzzy and crisp approaches.
Keywords: social network, link prediction, granular computing, type-2 fuzzy sets
Procedia PDF Downloads 326
4786 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016
Authors: Dimitra Alexiou
Abstract:
During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore their preferences with regard to the month of travel, the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and the forecasting with exponential smoothing, useful conclusions are reached that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes for the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises that are investing in Greece. With regard to the statistical methods, the method of simple exponential smoothing of time series data is employed. The search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. For all statistical computations and graphics, Microsoft Excel is used.
Keywords: tourism, statistical methods, exponential smoothing, land spatial planning, economy
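As a minimal sketch of simple exponential smoothing, the forecast is updated as S_t = alpha * x_t + (1 - alpha) * S_{t-1}. The arrivals series and the coefficient alpha = 0.4 below are invented placeholders, not the paper's data (the paper selects the coefficient by searching for the best forecast).

```java
public class SesForecast {
    public static void main(String[] args) {
        // Hypothetical annual tourist arrivals (millions); not the paper's data.
        double[] arrivals = {14.9, 15.5, 16.4, 17.8, 21.5, 22.0, 23.6, 24.8, 26.1, 27.2, 28.1, 30.0};
        double alpha = 0.4;            // smoothing coefficient (assumed; normally tuned to minimize error)
        double s = arrivals[0];        // initialize the smoothed level with the first observation
        for (int t = 1; t < arrivals.length; t++) {
            s = alpha * arrivals[t] + (1 - alpha) * s;   // S_t = a*x_t + (1-a)*S_{t-1}
        }
        System.out.printf("One-step-ahead forecast: %.2f million arrivals%n", s);
    }
}
```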
Procedia PDF Downloads 265
4785 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, by using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows, and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these built-in functions. The program has many benefits over similar programs: it is free and, as an open-source system, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that there is no need for technical details, and it can be applied to any part for which we need to know the time to failure, in order to schedule appropriate maintenance but also to maximize usage and minimize costs. In this case, the calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status. Hours show the running time of each fan, and status shows the event: 1 = failed, 0 = censored. Censored data represent cases where we could not follow the specific fan to the end, so it may have failed or survived after the study. Obtaining the result using R was easy and quick, and the program takes censored data into consideration and includes them in the results, which is not so easy in hand calculation. For the purpose of the paper, results from the R program have been compared to hand calculations in two different cases: censored data treated as failures and censored data treated as successes. In all three cases, the results are significantly different. If the user decides to use R for further calculations, its handling of censored data will give more precise results than hand calculation.
Keywords: censored data, R statistical software, reliability analysis, time to failure
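In R, this censoring-aware analysis is typically done with the survival package (for example, survfit(Surv(hours, status) ~ 1)). Purely to illustrate the computation such a call performs, here is a minimal Kaplan-Meier estimator in Java; the six (hours, status) pairs are invented, not the 70-fan dataset.

```java
import java.util.Arrays;
import java.util.Comparator;

public class KaplanMeierSketch {
    public static void main(String[] args) {
        // Invented (hours, status) pairs; status 1 = fan failed, 0 = censored.
        double[][] data = {{450, 1}, {460, 0}, {1150, 1}, {1600, 0}, {1850, 1}, {2100, 0}};
        Arrays.sort(data, Comparator.comparingDouble(r -> r[0]));

        double survival = 1.0;
        int atRisk = data.length;
        for (double[] obs : data) {
            if (obs[1] == 1) {                       // each failure shrinks the survival curve
                survival *= (atRisk - 1.0) / atRisk; // S(t) *= (n_i - d_i) / n_i
                System.out.printf("t=%.0f h  S(t)=%.3f%n", obs[0], survival);
            }
            atRisk--;                                // censored units simply leave the risk set
        }
    }
}
```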
Procedia PDF Downloads 401
4784 Using High Performance Computing for Online Flood Monitoring and Prediction
Authors: Stepan Kuchar, Martin Golasowski, Radim Vavrik, Michal Podhoranyi, Boris Sir, Jan Martinovic
Abstract:
The main goal of this article is to describe the online flood monitoring and prediction system Floreon+, primarily developed for the Moravian-Silesian region in the Czech Republic, and the basic process it uses for running automatic rainfall-runoff and hydrodynamic simulations along with their calibration and uncertainty modeling. Executing such a process sequentially takes a long time, which is not acceptable in the online scenario, so the use of a high-performance computing environment is proposed for all parts of the process to shorten their duration. Finally, a case study on the Ostravice river catchment is presented that shows actual durations and the gain from the parallel implementation.
Keywords: flood prediction process, high performance computing, online flood prediction system, parallelization
Procedia PDF Downloads 492
4783 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid
Abstract:
Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (deoxyribonucleic acid) are used as active components. DNA computing has the capability of performing parallel processing and a large storage capacity, which makes it distinct from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, cost-effective DNA multipliers are designed using algorithms of molecular DNA operations and compared with conventional ones. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
Keywords: biological systems, DNA multiplier, large storage, parallel processing
Procedia PDF Downloads 214
4782 Hierarchical Queue-Based Task Scheduling with CloudSim
Authors: Wanqing You, Kai Qian, Ying Qian
Abstract:
The concepts of cloud computing provide users with infrastructure, platform, and software as a service, which makes those services more accessible to people via the Internet. To better analyze the performance of cloud computing provisioning policies as well as resource allocation strategies, a toolkit named CloudSim was proposed. With CloudSim, the cloud computing environment can be easily constructed by modelling and simulating cloud computing components, such as datacenters, hosts, and virtual machines. A good scheduling strategy is the key to achieving load balancing among different machines as well as to improving the utilization of basic resources. Existing scheduling algorithms may work well in some presumed cases on a single machine; however, they are unable to make the best decision for the unforeseen future. In a real-world scenario, there would be numerous tasks as well as several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm to schedule tasks with CloudSim by taking into account several parameters: the machines' capacity, the priority of tasks, and the history log.
Keywords: hierarchical queue, load balancing, CloudSim, information technology
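As an illustration of the multi-queue idea only (not the paper's actual algorithm), the following sketch drains per-priority queues in order and sends each task to the least-loaded VM; all class and field names are invented.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class MultiQueueSketch {
    record Task(String name, int priority, int length) {}

    public static void main(String[] args) {
        // One queue per priority level: index 0 = high, 1 = low (hypothetical setup).
        List<Queue<Task>> queues = List.of(new ArrayDeque<>(), new ArrayDeque<>());
        queues.get(1).add(new Task("backup", 1, 300));
        queues.get(0).add(new Task("query", 0, 50));
        queues.get(0).add(new Task("render", 0, 120));

        double[] vmLoad = new double[3];             // three VMs, work currently assigned
        for (Queue<Task> q : queues) {               // higher-priority queues drained first
            while (!q.isEmpty()) {
                Task t = q.poll();
                int best = 0;                        // pick the least-loaded VM
                for (int v = 1; v < vmLoad.length; v++)
                    if (vmLoad[v] < vmLoad[best]) best = v;
                vmLoad[best] += t.length();
                System.out.println(t.name() + " -> VM" + best);
            }
        }
    }
}
```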
Procedia PDF Downloads 422
4781 e-Learning Security: A Distributed Incident Response Generator
Authors: Bel G Raggad
Abstract:
An e-Learning setting is a distributed computing environment where information resources can be connected to any public network. Public networks are very insecure, which can compromise the reliability of an e-Learning environment. This study is only concerned with the intrusion detection aspect of e-Learning security and how incident responses are planned. The literature reports great advances in intrusion detection systems (IDS) but neglects to study an important IDS weakness: suspected events are detected, but an intrusion is not determined because it is not defined in the IDS databases. We propose a distributed incident response generator (DIRG) that produces incident responses when the working IDS suspects an event that does not correspond to a known intrusion. Data involved in intrusion detection when ample uncertainty is present are often not suitable for formal statistical models, including Bayesian ones. We instead adopt Dempster-Shafer theory to process intrusion data for the unknown event. The DIRG engine transforms data into a belief structure using incident scenarios deduced by the security administrator. Belief values associated with various incident scenarios are then derived and evaluated to choose the most appropriate scenario, for which an automatic incident response is generated. This article provides a numerical example demonstrating the working of the DIRG system.
Keywords: decision support system, distributed computing, e-Learning security, incident response, intrusion detection, security risk, stateful inspection
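To make the belief-combination step concrete, here is a minimal sketch of Dempster's rule of combination over a two-element frame {intrusion, benign}; the mass values are invented, and the paper's actual belief structures come from administrator-defined incident scenarios.

```java
public class DempsterSketch {
    // Subsets of the frame {intrusion, benign} encoded as bitmasks:
    // 1 = {intrusion}, 2 = {benign}, 3 = {intrusion, benign} (ignorance).
    static double[] combine(double[] m1, double[] m2) {
        double[] out = new double[4];
        double conflict = 0;
        for (int a = 1; a <= 3; a++)
            for (int b = 1; b <= 3; b++) {
                int meet = a & b;                    // set intersection via bitwise AND
                if (meet == 0) conflict += m1[a] * m2[b];  // contradictory evidence
                else out[meet] += m1[a] * m2[b];
            }
        for (int s = 1; s <= 3; s++) out[s] /= (1 - conflict);  // renormalize
        return out;
    }

    public static void main(String[] args) {
        // Invented evidence: a sensor leaning toward intrusion, a log leaning benign.
        double[] sensor = {0, 0.6, 0.1, 0.3};
        double[] log    = {0, 0.2, 0.5, 0.3};
        double[] m = combine(sensor, log);
        System.out.printf("m(intrusion)=%.3f  m(benign)=%.3f  m(unknown)=%.3f%n",
                m[1], m[2], m[3]);
    }
}
```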
Procedia PDF Downloads 437
4780 Performance Analysis of Elliptic Curve Cryptography Using Onion Routing to Enhance the Privacy and Anonymity in Grid Computing
Authors: H. Parveen Begam, M. A. Maluk Mohamed
Abstract:
Grid computing is an environment that allows the sharing and coordinated use of diverse resources in dynamic, heterogeneous, and distributed environments using a Virtual Organization (VO). Security is a critical issue due to the open nature of the wireless channels in grid computing, which requires three fundamental services: authentication, authorization, and encryption. Privacy and anonymity are considered important factors when communicating over a publicly spanned network like the web. To ensure a high level of security, we explored an extension of onion routing, used with dynamic token exchange, along with protection of the privacy and anonymity of individual identities. To improve the performance of encrypting the layers, elliptic curve cryptography is used. Compared to traditional cryptosystems like RSA (Rivest-Shamir-Adleman), the Elliptic Curve Cryptosystem (ECC) offers equivalent security with smaller key sizes, which results in faster computation, lower power consumption, and memory and bandwidth savings. This paper presents an estimation of the performance improvements of onion routing using ECC, as well as a comparison of the performance of RSA and ECC.
Keywords: grid computing, privacy, anonymity, onion routing, ECC, RSA
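For a concrete sense of the key-size difference, standard Java (JCA) can generate both key types; RSA at 3072 bits and EC P-256 are commonly cited as offering comparable (~128-bit) security. This is a generic illustration, not the paper's benchmark.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class KeySizeDemo {
    public static void main(String[] args) throws Exception {
        // RSA-3072 and EC-256 are commonly treated as comparable security levels.
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(3072);
        KeyPair rsaPair = rsa.generateKeyPair();

        KeyPairGenerator ec = KeyPairGenerator.getInstance("EC");
        ec.initialize(256);
        KeyPair ecPair = ec.generateKeyPair();

        // The encoded EC public key is far smaller, which is what saves bandwidth and memory.
        System.out.println("RSA public key encoding: " + rsaPair.getPublic().getEncoded().length + " bytes");
        System.out.println("EC  public key encoding: " + ecPair.getPublic().getEncoded().length + " bytes");
    }
}
```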
Procedia PDF Downloads 398
4779 A Two Level Load Balancing Approach for Cloud Environment
Authors: Anurag Jain, Rajneesh Kumar
Abstract:
Cloud computing is an outcome of the rapid growth of the internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation, and task migration. Various parameters for analyzing the performance of a load balancing approach are response time, cost, data processing time, and throughput. This paper demonstrates a two-level load balancer approach that combines the join-idle-queue and join-shortest-queue approaches. The authors have used the CloudAnalyst simulator to test the proposed two-level load balancer approach. The results are analyzed and compared with existing algorithms and, as observed, the proposed work is one step ahead of existing techniques.
Keywords: cloud analyst, cloud computing, join idle queue, join shortest queue, load balancing, task scheduling
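A minimal sketch of the combined policy as described: dispatch to an idle server when one exists (join idle queue), otherwise to the server with the shortest queue (join shortest queue). The class and method names are invented; this is not the authors' implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TwoLevelBalancer {
    int[] queueLen;                                // pending tasks per VM
    Deque<Integer> idle = new ArrayDeque<>();      // level 1: ids of currently idle VMs

    TwoLevelBalancer(int vms) {
        queueLen = new int[vms];
        for (int v = 0; v < vms; v++) idle.add(v);
    }

    int dispatch() {
        Integer v = idle.poll();                   // join idle queue: use an idle VM if any
        if (v == null) {                           // join shortest queue: else least backlog
            v = 0;
            for (int i = 1; i < queueLen.length; i++)
                if (queueLen[i] < queueLen[v]) v = i;
        }
        queueLen[v]++;
        return v;
    }

    void complete(int v) {                         // a VM reports a finished task
        if (--queueLen[v] == 0) idle.add(v);
    }

    public static void main(String[] args) {
        TwoLevelBalancer lb = new TwoLevelBalancer(3);
        for (int t = 0; t < 6; t++)
            System.out.println("task " + t + " -> VM" + lb.dispatch());
    }
}
```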
Procedia PDF Downloads 431
4778 Secure Hashing Algorithm and Advance Encryption Algorithm in Cloud Computing
Authors: Jaimin Patel
Abstract:
Cloud computing is one of the most significant movements in computing technology. It provides flexibility to users, cost effectiveness, location independence, easy maintenance, multitenancy, drastic performance improvements, and increased productivity. On the other hand, there are also major issues, such as security. Because a cloud is a shared server environment, security is a major concern; it is important to protect users' private data, especially in e-commerce and social networks. In this paper, encryption algorithms such as the Advanced Encryption Standard, their vulnerabilities, risks of attack, optimal time and complexity management, and a comparison with other algorithms based on software implementation are presented. Encryption techniques to improve the performance of AES algorithms and to reduce risk are given. Secure hash algorithms, their vulnerabilities, software implementations, and risks of attack are also discussed, with a comparison with other hashing algorithms as well as the advantages and disadvantages of hashing techniques versus encryption.
Keywords: cloud computing, encryption algorithm, secure hashing algorithm, brute force attack, birthday attack, plaintext attack, man-in-the-middle attack
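As a usage illustration only (not the paper's implementation), standard Java JCA calls can compute a SHA-256 digest and perform authenticated AES-256-GCM encryption:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class HashAndEncryptDemo {
    public static void main(String[] args) throws Exception {
        byte[] data = "user record".getBytes(StandardCharsets.UTF_8);

        // SHA-256 digest for integrity checking.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        System.out.println("SHA-256: " + Base64.getEncoder().encodeToString(digest));

        // AES-256 in GCM mode for confidentiality plus authentication.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);            // GCM needs a unique 96-bit nonce

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(data);    // includes the 128-bit auth tag
        System.out.println("ciphertext bytes: " + ciphertext.length);
    }
}
```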
Procedia PDF Downloads 280
4777 MLProxy: SLA-Aware Reverse Proxy for Machine Learning Inference Serving on Serverless Computing Platforms
Authors: Nima Mahmoudi, Hamzeh Khazaei
Abstract:
Serving machine learning inference workloads on the cloud is still a challenging task at the production level. The optimal configuration of an inference workload to meet SLA requirements while optimizing infrastructure costs is highly complicated due to the complex interaction between batch configuration, resource configuration, and the variable arrival process. Serverless computing has emerged in recent years to automate most infrastructure management tasks. Workload batching has revealed the potential to improve the response time and cost-effectiveness of machine learning serving workloads. However, it has not yet been supported out of the box by serverless computing platforms. Our experiments have shown that for various machine learning workloads, batching can hugely improve the system's efficiency by reducing the processing overhead per request. In this work, we present MLProxy, an adaptive reverse proxy to support efficient machine learning serving workloads on serverless computing systems. MLProxy supports adaptive batching to ensure SLA compliance while optimizing serverless costs. We performed rigorous experiments on Knative to demonstrate the effectiveness of MLProxy. We showed that MLProxy can reduce the cost of serverless deployment by up to 92% while reducing SLA violations by up to 99%, and that this generalizes across state-of-the-art model serving frameworks.
Keywords: serverless computing, machine learning, inference serving, Knative, Google Cloud Run, optimization
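A minimal sketch of the adaptive-batching idea: queue requests and flush a batch when it is full or when the oldest request would risk missing its SLA. The thresholds and names below are invented and are not MLProxy's actual policy.

```java
import java.util.ArrayList;
import java.util.List;

public class AdaptiveBatcher {
    // Illustrative thresholds, not MLProxy's actual configuration.
    static final long SLA_MS = 200;          // response-time target
    static final int MAX_BATCH = 8;

    List<Long> pending = new ArrayList<>();  // arrival timestamps of queued requests

    // Called on each arrival: flush when the batch is full or the oldest
    // request plus the expected service time would violate the SLA.
    List<Long> maybeFlush(long nowMs, long estServiceMs) {
        boolean full = pending.size() >= MAX_BATCH;
        boolean slaRisk = !pending.isEmpty()
                && nowMs - pending.get(0) + estServiceMs >= SLA_MS;
        if (!full && !slaRisk) return List.of();
        List<Long> batch = new ArrayList<>(pending);
        pending.clear();
        return batch;                        // hand off to the model server as one batch
    }

    public static void main(String[] args) {
        AdaptiveBatcher b = new AdaptiveBatcher();
        long[] arrivals = {0, 30, 60, 90, 180};
        for (long t : arrivals) {
            b.pending.add(t);
            List<Long> batch = b.maybeFlush(t, 50);
            if (!batch.isEmpty())
                System.out.println("flush " + batch.size() + " requests at t=" + t + "ms");
        }
    }
}
```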
Procedia PDF Downloads 179
4776 Economic Design of a Quality Control Chart for the Proportion of Defective Items
Authors: Encarnación Álvarez-Verdejo, Raúl Amor-Pulido, Pablo J. Moya-Fernández, Juan F. Muñoz-Rosas, Francisco J. Blanco-Encomienda
Abstract:
Many companies use the statistical tool known as statistical quality control, which can be costly for the companies interested in these statistical tools. The evaluation of the quality of products and services is an important topic, but reducing the cost of implementing statistical quality control also has important benefits for companies. For this reason, it is important to implement an economic design for the various steps included in statistical quality control. In this paper, we describe some relevant aspects related to the economic design of a quality control chart for the proportion of defective items. They are very important because the suggested issues can reduce the cost of implementing a quality control chart for the proportion of defective items. Note that the main purpose of this chart is to evaluate and control the proportion of defective items in a production process.
Keywords: proportion, type I error, economic plan, distribution function
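The abstract does not reproduce the chart's formulas; for reference, a standard p-chart places control limits at p̄ ± 3√(p̄(1−p̄)/n). A minimal sketch with invented sample data:

```java
public class PChartSketch {
    public static void main(String[] args) {
        int n = 50;                                   // items inspected per sample (assumed)
        int[] defective = {2, 4, 3, 6, 1, 5, 2, 3};   // invented defective counts per sample

        double sum = 0;
        for (int d : defective) sum += d;
        double pBar = sum / (n * defective.length);   // average proportion defective

        double sigma = Math.sqrt(pBar * (1 - pBar) / n);
        double ucl = pBar + 3 * sigma;
        double lcl = Math.max(0, pBar - 3 * sigma);   // a proportion cannot be negative
        System.out.printf("center=%.4f  LCL=%.4f  UCL=%.4f%n", pBar, lcl, ucl);

        for (int i = 0; i < defective.length; i++) {
            double p = (double) defective[i] / n;
            if (p > ucl || p < lcl)
                System.out.println("sample " + (i + 1) + " out of control");
        }
    }
}
```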
Procedia PDF Downloads 443
4775 Implementation of Statistical Parameters to Form an Entropic Mathematical Models
Authors: Gurcharan Singh Buttar
Abstract:
Although statistics and information theory are independent in nature, it has been discovered that they can be combined to create applications in multidisciplinary mathematics. This is because, in the field of statistics, statistical parameters (measures) play an essential role with reference to the population (distribution) under investigation, while information measures are crucial in the study of the ambiguity, assortment, and unpredictability present in an array of phenomena. The following communication is a link between the two, and it demonstrates that the well-known conventional statistical measures can be used as measures of information.
Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency
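As a small worked example of the link between the two fields, the sketch below computes both an information measure (Shannon entropy, H = -Σ p log2 p) and conventional statistical measures (mean and variance) for one invented discrete distribution:

```java
public class EntropyAndVariance {
    public static void main(String[] args) {
        // Invented discrete distribution over outcomes x = 0..3.
        double[] p = {0.1, 0.2, 0.3, 0.4};

        double entropy = 0, mean = 0, var = 0;
        for (int x = 0; x < p.length; x++) {
            entropy -= p[x] * (Math.log(p[x]) / Math.log(2)); // H = -sum p log2 p
            mean += x * p[x];
        }
        for (int x = 0; x < p.length; x++)
            var += p[x] * (x - mean) * (x - mean);            // Var = E[(X - mu)^2]

        System.out.printf("H = %.4f bits, mean = %.2f, variance = %.4f%n",
                entropy, mean, var);
    }
}
```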
Procedia PDF Downloads 156
4774 Quantum Statistical Mechanical Formulations of Three-Body Problems via Non-Local Potentials
Authors: A. Maghari, V. M. Maleki
Abstract:
In this paper, we present a quantum statistical mechanical formulation based on our recent analytical expressions for the partial-wave transition matrix of a three-particle system. We report the quantum reactive cross sections for three-body scattering processes 1 + (2,3) -> 1 + (2,3) as well as recombination 1 + (2,3) -> 2 + (3,1) between one atom and a weakly bound dimer. The analytical expressions for the three-particle transition matrices and their corresponding cross sections were obtained from the three-dimensional Faddeev equations subjected to rank-two non-local separable potentials of the generalized Yamaguchi form. The equilibrium quantum statistical mechanical properties, such as the partition function and the equation of state, as well as non-equilibrium properties, such as transport cross sections and their corresponding transport collision integrals, were formulated analytically. This leads to the transport properties, such as the viscosity and diffusion coefficient, of a moderately dense gas.
Keywords: statistical mechanics, non-local separable potential, three-body interaction, Faddeev equations
Procedia PDF Downloads 401
4773 Classification of Attacks Over Cloud Environment
Authors: Karim Abouelmehdi, Loubna Dali, Elmoutaoukkil Abdelmajid, Hoda Elsayed, Eladnani Fatiha, Benihssane Abderahim
Abstract:
The security of cloud services is the concern of cloud service providers. In this paper, we present different classifications of cloud attacks as given by specialized organizations; each agency has its own classification with well-defined properties. The purpose is to present a high-level classification of current research in cloud computing security. This classification is organized around attack strategies and corresponding defenses.
Keywords: cloud computing, classification, risk, security
Procedia PDF Downloads 548
4772 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis
Authors: Hyun-Ho Lee, Kee-Won Kim
Abstract:
Arithmetic operations over GF(2^m) have been extensively used in error-correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division, and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit; the other operations are much more complex. Multiplication is the most important operation for cryptosystems, such as the elliptic curve cryptosystem, since exponentiation, division, and multiplicative inversion can be performed by computing multiplication iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over a finite field using a redundant basis. Based on the multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity than related multipliers. Compared to the corresponding existing structures, the multiplier saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over a finite field, such as inversion and division.
Keywords: finite field, Montgomery multiplication, systolic array, cryptography
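The paper's redundant-basis Montgomery design cannot be reconstructed from the abstract; as a much simpler illustration of the kind of arithmetic involved, here is a plain polynomial-basis multiplication in GF(2^8) using the AES reduction polynomial (a textbook construction, not the proposed multiplier):

```java
public class Gf2mMultiply {
    // Multiply a(x) * b(x) mod f(x) over GF(2^8), with the AES polynomial
    // f(x) = x^8 + x^4 + x^3 + x + 1 (0x11B). XOR is addition in GF(2).
    static int mul(int a, int b) {
        int result = 0;
        while (b != 0) {
            if ((b & 1) != 0) result ^= a;    // add a when the low bit of b is set
            b >>= 1;
            a <<= 1;                          // multiply a by x
            if ((a & 0x100) != 0) a ^= 0x11B; // reduce when the degree reaches 8
        }
        return result;
    }

    public static void main(String[] args) {
        // 0x53 * 0xCA = 0x01 in GF(2^8): these two elements are multiplicative inverses.
        System.out.printf("0x53 * 0xCA = 0x%02X%n", mul(0x53, 0xCA));
    }
}
```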
Procedia PDF Downloads 294
4771 Proposed Solutions Based on Affective Computing
Authors: Diego Adrian Cardenas Jorge, Gerardo Mirando Guisado, Alfredo Barrientos Padilla
Abstract:
A system based on affective computing can detect and interpret human information like voice, facial expressions, and body movement to detect emotions and execute a corresponding response. This data is important because a person can communicate more effectively with emotions than is possible with words. This information can be processed through technological components like facial recognition, gait recognition, or gesture recognition. To date, solutions proposed using this technology consider only one component at a given moment. This research investigation proposes two solutions based on affective computing that take into account more than one component for emotion detection. The proposals reflect the levels of dependency between hardware devices and software, as well as the interaction process between the system and the user, which implies the development of scenarios where both proposals will be put to the test in a live environment. Both solutions are to be developed in code by software engineers to prove their feasibility. To validate the impact on society and business interest, interviews with stakeholders are conducted with an investment mindset, where each solution is rated on a scale of 1 through 5, 1 being the minimum possible investment and 5 the maximum.
Keywords: affective computing, emotions, emotion detection, face recognition, gait recognition
Procedia PDF Downloads 369
4770 Arithmetic Operations in Deterministic P Systems Based on the Weak Rule Priority
Authors: Chinedu Peter, Dashrath Singh
Abstract:
Membrane computing is a computability model that abstracts its structures and functions from the biological cell. The main ingredient of membrane computing is the notion of a membrane structure, which consists of several cell-like membranes recurrently placed inside a unique skin membrane. The emergence of several variants of membrane computing gives rise to the notion of a P system. The paper presents a variant of P systems for arithmetic operations on non-negative integers based on weak priorities for rule application. Consequently, we obtain deterministic P systems. Two membranes suffice. There are at most four objects for multiplication and five objects for division throughout the computation processes. The model is simple and has the potential for extension to negative integers and real numbers in general.
Keywords: P system, binary operation, determinism, weak rule priority
Procedia PDF Downloads 445
4769 Statistical Investigation Projects: A Way for Pre-Service Mathematics Teachers to Actively Solve a Campus Problem
Authors: Muhammet Şahal, Oğuz Köklü
Abstract:
As statistical thinking and problem-solving processes have become increasingly important, teachers need to be more rigorously prepared with statistical knowledge to teach their students effectively. This study examined pre-service mathematics teachers' development of statistical investigation projects using data and exploratory data analysis tools, following a design-based research perspective and the statistical investigation cycle. A total of 26 pre-service senior mathematics teachers from a public university in Turkiye participated in the study. They voluntarily formed groups of 3-4 members and worked on their statistical investigation projects for six weeks. The data sources were audio recordings of the pre-service teachers' group discussions while working on their projects in class, whole-class video recordings, and each group's weekly and final reports. As part of the study, we reviewed the weekly reports, provided timely feedback specific to each group, and revised the following week's class work based on the groups' needs and progress on their projects. We used content analysis to analyze the groups' audio and classroom video recordings. The participants encountered several difficulties, which included formulating a meaningful statistical question in the early phase of the investigation, securing the most suitable data collection strategy, and deciding on the data analysis method appropriate for their statistical questions. The data collection and organization processes were challenging for some groups and revealed the importance of comprehensive planning. Overall, the pre-service senior mathematics teachers were able to work holistically on a statistical project that contained the formulation of a statistical question, planning, data collection, analysis, and reaching a conclusion, even though they faced challenges because of their lack of experience. The study suggests that pre-service senior mathematics teachers have the potential to apply statistical knowledge and techniques in a real-world context and can carry a project through with the support of the researchers. We provide implications for the statistical education of teachers and future research.
Keywords: design-based study, pre-service mathematics teachers, statistical investigation projects, statistical model
Procedia PDF Downloads 84
4768 The Development of Statistical Analysis in Agriculture Experimental Design Using R
Authors: Somruay Apichatibutarapong, Chookiat Pudprommart
Abstract:
The purpose of this study was to develop statistical analysis using R programming via the internet, applied to agricultural experimental design. Data were collected from 65 items covering the completely randomized design, randomized block design, Latin square design, split-plot design, factorial design, and nested design. A quantitative approach was used to investigate the quality of the learning media on statistical analysis using R programming via the internet, drawing on six experts and the opinions of 100 students interested in experimental design and applied statistics. It was revealed that the experts' opinions were good on all contents except the usage of the web board, and the students' opinions were good overall and on all items.
Keywords: experimental design, R programming, applied statistics, statistical analysis
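In R, such an analysis is typically a call like summary(aov(response ~ treatment)). To show the computation behind it for the completely randomized design, here is a minimal one-way ANOVA F statistic with invented yields, kept in Java for consistency with the other sketches in this listing:

```java
public class OneWayAnova {
    public static void main(String[] args) {
        // Invented yields for a completely randomized design with 3 treatments.
        double[][] y = {{20, 22, 19, 21}, {25, 27, 26, 24}, {30, 29, 31, 28}};

        int k = y.length, n = 0;
        double grand = 0;
        for (double[] g : y) for (double v : g) { grand += v; n++; }
        grand /= n;

        double ssb = 0, ssw = 0;
        for (double[] g : y) {
            double m = 0;
            for (double v : g) m += v;
            m /= g.length;
            ssb += g.length * (m - grand) * (m - grand);  // between-treatment sum of squares
            for (double v : g) ssw += (v - m) * (v - m);  // within-treatment sum of squares
        }
        double f = (ssb / (k - 1)) / (ssw / (n - k));     // F = MSB / MSW
        System.out.printf("F(%d, %d) = %.2f%n", k - 1, n - k, f);
    }
}
```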
Procedia PDF Downloads 368
4767 An Effective Route to Control of the Safety of Accessing and Storing Data in the Cloud-Based Data Base
Authors: Omid Khodabakhshi, Amir Rozdel
Abstract:
Cloud computing security research faces a number of challenges because the data center holds complex private information and is always exposed to various risks of information disclosure through hacker attacks or internal enemies. Accordingly, the security of virtual machines in the cloud computing infrastructure layer is very important. So far, there are many software solutions for improving security in virtual machines, but using software alone is not enough to solve security problems. The purpose of this article is to examine the challenges and security requirements for accessing and storing data in an insecure cloud environment. In other words, this article proposes a structure for the execution of highly isolated, security-sensitive code using secure computing hardware in virtual environments. It also allows remote code validation with inputs and outputs. We provide these security features even in situations where the BIOS, the operating system, and even the hypervisor are infected. To achieve these goals, we use the hardware support provided by the new Intel and AMD processors, as well as the TPM security chip. In conclusion, the use of these technologies ultimately creates a dynamic root of trust and reduces the TCB to security-sensitive code.
Keywords: code, cloud computing, security, virtual machines
Procedia PDF Downloads 191
4766 Design and Implementation of Remote Application Virtualization in Cloud Environments
Authors: Shuen-Tai Wang, Ying-Chuan Chen, Hsi-Ya Chang
Abstract:
Cloud computing is a paradigm that shifts the way computing has been done in the past. Users can use cloud resources, such as application software or storage space, from the cloud without needing to own them. This paper focuses on solutions that are anticipated to introduce the IaaS idea to build cloud-based services and enable individual remote users' applications in cloud environments, which appear as if they were running on the end user's local computer. The available features of the application delivery solution have been developed based on our previous research on virtualization technology, to offer applications independent of location so that users can work online, offline, anywhere, with an appropriate device and at any time. This proposed effort has the potential to provide an efficient, resilient, and elastic environment for cloud services. Users no longer need to burden the system managers, and it drastically reduces the overall cost of hardware and software licenses. Moreover, this flexible remote application virtualization service represents the next significant step toward the mobile workplace, letting users access their applications remotely through cloud services anywhere. This is also made possible by low administrative costs as well as relatively inexpensive end-user terminals and reduced energy expenses.
Keywords: cloud computing, IaaS, virtualization, application delivery
Procedia PDF Downloads 281
4765 The Impact of Artificial Intelligence on Quality Control and Quality
Authors: Mary Moner Botros Fanawel
Abstract:
Many companies use the statistical tool known as statistical quality control, which can be costly for the companies interested in these statistical tools. The evaluation of the quality of products and services is an important topic, but reducing the cost of implementing statistical quality control also has important benefits for companies. For this reason, it is important to implement an economic design for the various steps included in statistical quality control. In this paper, we describe some relevant aspects related to the economic design of a quality control chart for the proportion of defective items. They are very important because the suggested issues can reduce the cost of implementing a quality control chart for the proportion of defective items. Note that the main purpose of this chart is to evaluate and control the proportion of defective items in a production process.
Keywords: model predictive control, hierarchical control structure, genetic algorithm, water quality with DBPs objectives, proportion, type I error, economic plan, distribution function, bootstrap control limit, p-value method, out-of-control signals, p-value, quality characteristics
Procedia PDF Downloads 62
4764 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics
Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood
Abstract:
We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
Keywords: digital forensics, cloud computing, cyber security, Spark, Kubernetes, Kafka
Procedia PDF Downloads 393
4763 A Performance Analysis Study for Cloud Based ERP Systems
Authors: Burak Erkayman
Abstract:
Manufacturing and service organizations need to use ERP systems to integrate many functions, from purchasing to storage, and from production planning to cost calculation. By integrating at the level of information, ERP systems provide companies with remarkable advantages in terms of profitability, productivity, and process efficiency. Cloud computing is one of the most significant changes in information and communication technology, and its developments attract the business world to take advantage of this field. Cloud computing means much more storage area, more cost saving, and faster data transfer rates. In addition, it presents new business models, new fields of study, and practicable solutions for anyone's use. These developments make the implementation of ERP systems in the cloud environment inevitable. In this study, the performance of ERP systems in a cloud environment is analyzed through various performance criteria, and a comparison between traditional and cloud ERP systems is presented. At the end of the study, the transformation and the future of ERP systems are discussed.
Keywords: cloud-ERP, ERP system performance, information system transformation
Procedia PDF Downloads 529
4762 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programs were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing-completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and the comparison of these methods by their characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in the models, which improved the architecture and implementation of the parallel computing systems. The results obtained emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
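The abstract's two examples (parallel sorting of random numbers and asynchronous task completion) map directly onto standard Java APIs; the following is a minimal sketch, not the authors' code:

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.CompletableFuture;

public class ParallelDemo {
    public static void main(String[] args) {
        // Parallel sorting of random numbers, as in the abstract's first example.
        double[] data = new Random(7).doubles(1_000_000).toArray();
        long t0 = System.nanoTime();
        Arrays.parallelSort(data);                  // fork/join-based parallel merge sort
        System.out.printf("sorted %d values in %.1f ms%n",
                data.length, (System.nanoTime() - t0) / 1e6);

        // Asynchronous request handling: each task completes independently.
        CompletableFuture<?>[] tasks = new CompletableFuture<?>[3];
        for (int i = 0; i < tasks.length; i++) {
            final int id = i;
            tasks[i] = CompletableFuture.runAsync(() ->
                    System.out.println("task " + id + " done on "
                            + Thread.currentThread().getName()));
        }
        CompletableFuture.allOf(tasks).join();      // wait for all completion messages
    }
}
```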
Procedia PDF Downloads 12
4761 Reconfigurable Ubiquitous Computing Infrastructure for Load Balancing
Authors: Khaled Sellami, Lynda Sellami, Pierre F. Tiako
Abstract:
Ubiquitous computing helps make data and services available to users anytime and anywhere, which makes the cooperation of devices a crucial need. In return, such cooperation causes an overload of the devices and/or networks, resulting in network malfunction and suspension of their activities. Our goal in this paper is to propose an approach to device reconfiguration in order to help reduce energy consumption in ubiquitous environments. The idea is that when high energy consumption is detected, we proceed to a change in component distribution on the devices to reduce and/or balance the energy consumption. We also investigate the possibility of detecting the high energy consumption of devices/networks based on device abilities. As a result, our idea realizes a reconfiguration of devices aimed at reducing energy consumption and/or balancing load in ubiquitous environments.
Keywords: ubiquitous computing, load balancing, device energy consumption, reconfiguration
Procedia PDF Downloads 275
4760 Cloud Computing: Deciding Whether It Is Easier or Harder to Defend Against Cyber Attacks
Authors: Emhemed Shaklawoon, Ibrahim Althomali
Abstract:
We identify different defense mechanisms that were used before the introduction of the cloud and compare whether their protection mechanisms are still valuable, and to what degree. Note that in order to defend against a vulnerability, we must know how this vulnerability is abused in an attack. Only then will we be able to recognize whether it is easier or harder to defend against cyber attacks.
Keywords: cloud computing, privacy, cyber attacks, defend the cloud
Procedia PDF Downloads 422
4759 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering
Authors: Yunus Doğan, Ahmet Durap
Abstract:
Coastal regions are among the places most heavily used by both natural processes and the growing population. In coastal engineering, the most valuable data concern wave behavior, and the amount of this data becomes very big because of observations that take place over periods of hours, days, and months. In this study, statistical methods such as wave spectrum analysis methods and standard statistical methods have been used. The goal of this study is to discover profiles of different coastal areas by using these statistical methods and thus to obtain an instance-based data set from the big data, for analysis with data mining algorithms. In the experimental studies, six sample data sets about wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, while different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other branches that collect big data, such as medicine.
Keywords: clustering algorithms, coastal engineering, data mining, data summarization, statistical methods
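As an illustration of the clustering step (the abstract does not name a specific algorithm), here is a minimal k-means sketch over invented per-site wave features; the feature choice and k = 2 are assumptions:

```java
import java.util.Arrays;

public class WaveKMeans {
    public static void main(String[] args) {
        // Invented per-site features: {significant wave height (m), mean period (s)}.
        double[][] sites = {{0.5, 4.1}, {0.6, 4.3}, {2.1, 7.8}, {2.3, 8.2}, {0.4, 3.9}, {2.0, 7.5}};
        double[][] centers = {sites[0].clone(), sites[2].clone()};  // k = 2, seeded from the data
        int[] label = new int[sites.length];

        for (int iter = 0; iter < 10; iter++) {
            // Assignment step: nearest center by squared Euclidean distance.
            for (int i = 0; i < sites.length; i++)
                label[i] = dist(sites[i], centers[0]) <= dist(sites[i], centers[1]) ? 0 : 1;
            // Update step: move each center to the mean of its members.
            for (int c = 0; c < 2; c++) {
                double[] sum = new double[2];
                int count = 0;
                for (int i = 0; i < sites.length; i++)
                    if (label[i] == c) { sum[0] += sites[i][0]; sum[1] += sites[i][1]; count++; }
                if (count > 0) centers[c] = new double[]{sum[0] / count, sum[1] / count};
            }
        }
        System.out.println("labels: " + Arrays.toString(label));
        System.out.println("calm-sea center:  " + Arrays.toString(centers[0]));
        System.out.println("rough-sea center: " + Arrays.toString(centers[1]));
    }

    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }
}
```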
Procedia PDF Downloads 361