Search results for: hyperdimensional computing

759 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Analyzing large-scale recurrent event data presents many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. This approach is applied to a large real dataset of repeated heart failure hospitalizations.
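
A minimal sketch of the divide-and-conquer idea described above, assuming a plain exponential rate MLE as a stand-in for the parametric frailty model: the data are split into subsets, an estimate and its variance are obtained on each, and the subset estimates are combined with inverse-variance weights. The function names and the subset count are illustrative.

```python
# Divide-and-conquer estimation sketch: per-subset MLE, then a weighted combination.
import numpy as np

def fit_subset(times):
    """MLE of an exponential rate and its estimated variance on one subset."""
    rate = 1.0 / np.mean(times)            # MLE for exponential inter-event times
    var = rate ** 2 / len(times)           # asymptotic variance of the MLE
    return rate, var

def divide_and_conquer(times, n_subsets=10, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(times)      # random division into subsets
    estimates, variances = zip(*(fit_subset(chunk)
                                 for chunk in np.array_split(shuffled, n_subsets)))
    weights = 1.0 / np.asarray(variances)  # inverse-variance weighting
    return float(np.sum(weights * np.asarray(estimates)) / np.sum(weights))

if __name__ == "__main__":
    data = np.random.default_rng(1).exponential(scale=2.0, size=100_000)
    print(divide_and_conquer(data))        # close to the full-data MLE (0.5)
```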

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 169
758 Towards a Resources Provisioning for Dynamic Workflows in the Cloud

Authors: Fairouz Fakhfakh, Hatem Hadj Kacem, Ahmed Hadj Kacem

Abstract:

Cloud computing offers a new model of service provisioning for workflow applications, thanks to its elasticity and its pricing model. However, it presents various challenges that need to be addressed in order to be utilized efficiently. The resources provisioning problem for workflow applications has been widely studied. Nevertheless, existing works do not consider changes to workflow instances while they are being executed, a capability that has become a major requirement for dealing with unusual situations and evolution. This paper presents a first step towards resources provisioning for dynamic workflows. We propose a provisioning algorithm which minimizes the overall workflow execution cost while meeting a deadline constraint, and then extend it to support the dynamic addition of tasks. Experimental results show that the proposed heuristic achieves a significant reduction in resource cost by using a consolidation process.

Keywords: cloud computing, resources provisioning, dynamic workflow, workflow applications

Procedia PDF Downloads 296
757 Inclusion and Changes of a Research Criterion in the Institute for Quality and Accreditation of Computing, Engineering and Technology Accreditation Model

Authors: J. Daniel Sanchez Ruiz

Abstract:

The paper explains why and how a research criterion was included within an accreditation system for undergraduate engineering programs, despite this not being a common practice of accreditation agencies at a global level. The paper is divided into three parts. The first presents the context and the motivations that led the Institute for Quality and Accreditation of Computing, Engineering and Technology Programs (ICACIT) to add a research criterion. The second describes the criterion adopted and the feedback received during the 2017 accreditation cycle. In the third, the author proposes changes to the accreditation criteria that respond in a pertinent way to the outcome-based accreditation model and the national context. The author seeks to reconcile an outcome-based accreditation model, aligned with what is established by the International Engineering Alliance, with the particular context of higher education in Peru.

Keywords: accreditation, engineering education, quality assurance, research

Procedia PDF Downloads 283
756 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing

Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng

Abstract:

Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications to mitigate such interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing communication hop cost, a defined inter-job interference metric. The window-based approach for scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed; it extends simulated annealing (SA) by learning a repair operator and employing it in a guided heuristic search. The efficacy of NSA is demonstrated in a computational study against SA and SCIP. The results of numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
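
As a rough illustration of the search that NSA extends, the sketch below runs plain simulated annealing over a job-to-node assignment with a swap neighbourhood and a pairwise hop-cost objective. The hop matrix, job node requirements, and cooling schedule are illustrative assumptions; the learned repair operator of NSA is not modelled here.

```python
# Plain simulated annealing over a window of queued jobs mapped to free nodes.
import math
import random

def total_hop_cost(assignment, hop):
    """Sum pairwise hop distances between nodes allocated to the same job."""
    cost = 0
    for nodes in assignment.values():
        cost += sum(hop[a][b] for i, a in enumerate(nodes) for b in nodes[i + 1:])
    return cost

def simulated_annealing(jobs, free_nodes, hop, steps=20_000, t0=10.0, alpha=0.9995):
    """jobs: {job: nodes needed}; free_nodes: list of node ids; hop: hop-distance matrix."""
    nodes = random.sample(free_nodes, sum(jobs.values()))
    def build(order):
        out, i = {}, 0
        for job, need in jobs.items():
            out[job] = order[i:i + need]
            i += need
        return out
    best = build(nodes)
    best_cost = cur_cost = total_hop_cost(best, hop)
    t = t0
    for _ in range(steps):
        a, b = random.sample(range(len(nodes)), 2)
        nodes[a], nodes[b] = nodes[b], nodes[a]          # swap two node slots
        cand_cost = total_hop_cost(build(nodes), hop)
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            cur_cost = cand_cost                         # accept the move
            if cur_cost < best_cost:
                best, best_cost = build(nodes), cur_cost
        else:
            nodes[a], nodes[b] = nodes[b], nodes[a]      # undo the rejected swap
        t *= alpha                                       # geometric cooling
    return best, best_cost
```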

Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware

Procedia PDF Downloads 121
755 Enabling UDP Multicast in Cloud IaaS: An Enterprise Use Case

Authors: Patrick J. Kerpan, Ryan C. Koop, Margaret M. Walker, Chris P. Swan

Abstract:

User Datagram Protocol (UDP) multicast is a vital part of data center networking that is being left out of major cloud computing providers' network infrastructure. Enterprise users rely on multicast, and particularly UDP multicast, to create and connect vital business operations. For example, UDP makes a variety of business functions possible, from simultaneous content media updates to High-Performance Computing (HPC) grids and video call routing for massive open online courses (MOOCs). Essentially, this technological slight to UDP multicast has a huge effect on whether companies choose to use (or not to use) public cloud infrastructure as a service (IaaS). Allowing the ‘chatty’ UDP multicast protocol inside a cloud network could have a serious impact on the performance of the cloud as a whole, so cloud IaaS providers solve the issue by disallowing all UDP multicast. But what about enterprise use cases for multicast applications in organizations that want to move to the cloud? To re-allow multicast traffic, enterprises can build a layer 3-7 network over the top of a data center, private cloud, or public cloud. An overlay network simply creates a private, sealed network on top of the existing network. Overlays give complete control of the network back to enterprise cloud users, with the freedom to manage their network beyond the cloud provider’s firewall restrictions. The same logic applies to users who wish to run IPsec or BGP network protocols inside of, or connected into, an overlay network in cloud IaaS.
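
For concreteness, a minimal sketch of the plain UDP multicast traffic that such overlays re-enable, using the standard Python socket API; the group address, port, and payload below are illustrative.

```python
# Basic UDP multicast sender and receiver; run them in separate processes on the
# same LAN or overlay segment.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007     # administratively scoped multicast group

def sender(message=b"hello multicast"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on-link
    sock.sendto(message, (GROUP, PORT))

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Ask the kernel to join the multicast group on the default interface.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(1024)
    print(f"received {data!r} from {addr}")
```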

Keywords: cloud computing, protocols, UDP multicast, virtualization

Procedia PDF Downloads 592
754 A Review of Fractal Dimension Computing Methods Applied to Wear Particles

Authors: Manish Kumar Thakur, Subrata Kumar Ghosh

Abstract:

Various types of particles found in lubricant may be characterized by their fractal dimension. Some of the available methods are the yard-stick (structured walk) method and the box-counting method. This paper presents a review of the developments and progress in fractal dimension computing methods as applied to characterizing the surfaces of wear particles. An overview of these methods, their implementation, their advantages, and their limits is also presented. It is well accepted that wear particles carry major information about the wear and friction of materials, and morphological analysis of wear particles from a lubricant is a very effective way of monitoring machine condition. Fractal dimension methods are used to characterize the morphology of the found particles and are very useful in analyzing the complexity of irregular shapes. The aim of this review is to bring together the fractal methods applicable to wear particles.
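
A minimal box-counting sketch, assuming the wear-particle boundary is available as a binary image: occupied boxes are counted at several box sizes, and the slope of log(count) against log(1/size) estimates the fractal dimension. The box sizes are illustrative.

```python
# Box-counting fractal dimension estimate for a binary (foreground/background) image.
import numpy as np

def box_count(binary_image, box_size):
    """Number of box_size x box_size boxes containing at least one foreground pixel."""
    h, w = binary_image.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if binary_image[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(binary_image, s) for s in box_sizes]
    # Slope of log(N) versus log(1/s) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```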

Keywords: fractal dimension, morphological analysis, wear, wear particles

Procedia PDF Downloads 491
753 Cyber Attacks Management in IoT Networks Using Deep Learning and Edge Computing

Authors: Asmaa El Harat, Toumi Hicham, Youssef Baddi

Abstract:

This survey delves into the complex realm of Internet of Things (IoT) security, highlighting the urgent need for effective cybersecurity measures as IoT devices become increasingly common. It explores a wide array of cyber threats targeting IoT devices and focuses on mitigating these attacks through the combined use of deep learning and machine learning algorithms, as well as edge and cloud computing paradigms. The survey starts with an overview of the IoT landscape and the various types of attacks that IoT devices face. It then reviews key machine learning and deep learning algorithms employed in IoT cybersecurity, providing a detailed comparison to assist in selecting the most suitable algorithms. Finally, the survey provides valuable insights for cybersecurity professionals and researchers aiming to enhance security in the intricate world of IoT.

Keywords: internet of things (IoT), cybersecurity, machine learning, deep learning

Procedia PDF Downloads 35
752 Web Service Architectural Style Selection in Multi-Criteria Requirements

Authors: Ahmad Mohsin, Syda Fatima, Falak Nawaz, Aman Ullah Khan

Abstract:

Selection of an appropriate architectural style is vital to the success of the target web service under development. The nature of architecture design and selection for service-oriented computing applications is quite different from that of traditional software, and web services have complex and rigorous architectural styles to choose from. Because of this, selecting an appropriate architectural style for web service development has become a complex decision for architects. Architectural style selection is a multi-criteria decision and demands considerable experience in service-oriented computing. Decision support systems (DSS) are good solutions to simplify the selection process of a particular architectural style. Our research suggests a new approach using a DSS for the selection of architectural styles while developing a web service, catering to functional requirements (FRs) and non-functional requirements (NFRs). The proposed DSS helps architects select the right web service architectural pattern according to the domain and the non-functional requirements. In this paper, a rule-based DSS has been developed using CLIPS (C Language Integrated Production System) to support decisions based on multi-criteria requirements. The DSS takes architectural characteristics, domain requirements, and software architect preferences for NFRs as input for the different architectural styles in use today in service-oriented computing. A weighted sum model has been applied to prioritize quality attributes and domain requirements, and scores are calculated over multiple criteria to choose the final architectural style.
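
A minimal sketch of the weighted-sum scoring step, assuming normalized per-attribute scores and priority weights; the styles, attributes, and numbers below are illustrative and are not the rule base of the CLIPS DSS.

```python
# Weighted sum model: rank candidate architectural styles by weighted attribute scores.
def weighted_sum_rank(scores, weights):
    """scores: {style: {attribute: value in [0, 1]}}, weights: {attribute: weight}."""
    total_weight = sum(weights.values())
    ranked = {
        style: sum(weights[attr] * value for attr, value in attrs.items()) / total_weight
        for style, attrs in scores.items()
    }
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

example_scores = {
    "REST":      {"scalability": 0.9, "interoperability": 0.8, "security": 0.6},
    "SOAP/WS-*": {"scalability": 0.6, "interoperability": 0.7, "security": 0.9},
}
example_weights = {"scalability": 0.5, "interoperability": 0.2, "security": 0.3}
print(weighted_sum_rank(example_scores, example_weights))
```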

Keywords: software architecture, web-service, rule-based, DSS, multi-criteria requirements, quality attributes

Procedia PDF Downloads 366
751 Artificial Intelligence-Based Approaches for Task Offloading, Resource Allocation and Service Placement of Internet of Things Applications: State of the Art

Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib

Abstract:

In order to support the continued growth and critical latency requirements of IoT applications, and to overcome various obstacles of traditional data centers, mobile edge computing (MEC) has emerged as a promising solution that extends cloud data processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with conflicting optimization criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, and bandwidth) of mobile edge devices while keeping performance high (reducing response time, increasing throughput and service availability). Achieving one goal may affect the other, making task offloading (TO), resource allocation (RA), and service placement (SP) complex processes, and studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. The paper provides a survey of recent multi-objective optimization (MOO) approaches to TO, SP, and RA in edge computing environments, particularly artificial intelligence (AI) based ones, that satisfy the various objectives, constraints, and dynamic conditions of IoT applications.

Keywords: mobile edge computing, multi-objective optimization, artificial intelligence approaches, task offloading, resource allocation, service placement

Procedia PDF Downloads 117
750 Parallel Random Number Generation for the Modern Supercomputer Architectures

Authors: Roman Snytsar

Abstract:

Pseudo-random numbers are often used in scientific computing, for example in Monte Carlo simulations or quantum-inspired optimization. The requirements for a parallel random number generator running in a modern multi-core vector environment are more stringent than those for sequential random number generators. As well as passing the usual quality tests, the output of the parallel random number generator must be verifiable and reproducible throughout the concurrent execution. We propose a family of vectorized permuted congruential generators. Implementations are available for multiple modern vector computer architectures. Besides demonstrating good single-core performance, the generators scale easily across many processor cores and multiple distributed nodes. We provide performance and parallel speedup analysis and comparisons between the implementations.
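
For reference, a minimal scalar sketch of a permuted congruential generator in the PCG32 style: a 64-bit linear congruential state transition followed by an xorshift-and-rotate output permutation. The constants follow the public PCG reference; the vectorized, multi-stream variants proposed in the paper go beyond this sketch.

```python
# Scalar PCG32-style generator: LCG state step plus output permutation.
MASK64 = (1 << 64) - 1
MASK32 = (1 << 32) - 1
MULT = 6364136223846793005

class PCG32:
    def __init__(self, seed, stream=1):
        self.inc = ((stream << 1) | 1) & MASK64      # odd increment selects the stream
        self.state = 0
        self.next_u32()
        self.state = (self.state + seed) & MASK64
        self.next_u32()

    def next_u32(self):
        old = self.state
        self.state = (old * MULT + self.inc) & MASK64            # LCG state transition
        xorshifted = (((old >> 18) ^ old) >> 27) & MASK32        # xorshift permutation
        rot = old >> 59                                          # data-dependent rotation
        return ((xorshifted >> rot) | (xorshifted << ((32 - rot) & 31))) & MASK32

rng = PCG32(seed=42)
print([rng.next_u32() for _ in range(3)])
```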

Keywords: pseudo-random numbers, quantum optimization, SIMD, parallel computing

Procedia PDF Downloads 121
749 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi, there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As these curriculums exist to meet expats’ needs, each has different requirements for registration and success, different age groups for starting education, a different number of years, and different assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not being placed in the right year group, because the start and end dates of each academic year and the date-of-birth cutoff for each year group differ between curriculums. As a result, we find students who are either too young or too old for their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology, integrated with artificial intelligence techniques of machine learning, to aid a smooth transition when assigning students to their year groups, to provide levelling and differentiation information for students who relocate from one education curriculum to another, and to store and access student data from anywhere throughout their academic journey.

Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning

Procedia PDF Downloads 144
748 A Knowledge-As-A-Service Support Framework for Ambient Learning in Kenya

Authors: Lucy W. Mburu, Richard Karanja, Simon N. Mwendia

Abstract:

Over recent years, learners have experienced a constant need to access on-demand knowledge, a need fully aligned with the paradigm of cloud computing. Motivated by the global sustainable development goal of ensuring inclusive and equitable learning opportunities, this research has developed a framework hinged on the knowledge-as-a-service architecture that utilizes knowledge from ambient learning systems. Through statistical analysis and decision tree modeling, the study discovers influential variables for ambient learning among university students. The main aim is to generate a platform for disseminating and exploiting the available knowledge to aid the learning process and, thus, to improve educational support on the ambient learning system. The research further explores how collaborative effort can be used to form a knowledge network that allows access to heterogeneous sources of knowledge, which benefits knowledge consumers, such as the developers of ambient learning systems.

Keywords: actionable knowledge, ambient learning, cloud computing, decision trees, knowledge as a service

Procedia PDF Downloads 162
747 Computing Customer Lifetime Value in E-Commerce Websites with Regard to Returned Orders and Payment Method

Authors: Morteza Giti

Abstract:

As online shopping becomes increasingly popular, computing customer lifetime value to know customers better is also gaining importance. Two distinct factors that can affect the value of a customer in the context of online shopping are the number of returned orders and the payment method. Returned orders are those which have been shipped but not collected by the customer and are returned to the store. Payment method refers to the way customers choose to pay for the order, of which there are usually two: pre-pay and cash-on-delivery. In this paper, a novel model called RFMSP is presented to calculate customer lifetime value, taking these two parameters into account. The RFMSP model is based on the common RFM model while adding two extra parameters: the S represents the order status and the P indicates the payment method. As a case study for this model, the purchase history of customers in an online shop is used to compute customer lifetime value over a period of twenty months.
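
A minimal sketch of the RFMSP idea under stated assumptions: the classic recency, frequency, and monetary features are extended with S and P columns, encoded here as the fraction of returned orders and of pre-paid orders (an assumption, not the paper's exact definition), and customers are then segmented with k-means.

```python
# RFMSP feature construction per customer, followed by k-means segmentation.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def rfmsp_features(orders: pd.DataFrame, as_of) -> pd.DataFrame:
    """orders columns: customer_id, order_date, amount, returned (bool), prepaid (bool)."""
    grouped = orders.groupby("customer_id")
    return pd.DataFrame({
        "recency":   (as_of - grouped["order_date"].max()).dt.days,
        "frequency": grouped.size(),
        "monetary":  grouped["amount"].sum(),
        "returned":  grouped["returned"].mean(),   # S: fraction of returned orders
        "prepaid":   grouped["prepaid"].mean(),    # P: fraction of pre-paid orders
    })

def segment_customers(features: pd.DataFrame, k=4, seed=0):
    scaled = StandardScaler().fit_transform(features)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(scaled)
```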

Keywords: RFMSP model, AHP, customer lifetime value, k-means clustering, e-commerce

Procedia PDF Downloads 323
746 Big Data Analysis with Rhipe

Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim

Abstract:

Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages (on bigmemory) showed that Rhipe was faster than the other packages, owing to parallel processing that increases the number of map tasks as the data size grows. We also compared the computing speeds of pseudo-distributed and fully-distributed modes for configuring the Hadoop cluster. The results showed that fully-distributed mode was faster than pseudo-distributed mode, and that fully-distributed mode became faster as the number of data nodes increased.

Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe

Procedia PDF Downloads 499
745 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty

Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih

Abstract:

In order to overcome uncertainty and the resulting inability to meet customers' requests, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid increased holding cost due to an excessive SSL or shortage cost due to a too-low SSL. This paper uses soft-computing fuzzy logic to identify the optimal SSL; the fuzzy model uses a dynamic concept to cope with a highly complex environment. The proposed model deals with three input variables, i.e., demand stability level, raw material availability level, and on-hand inventory level, and uses dynamic fuzzy logic to obtain the best SSL as output. In this model, demand stability, raw material availability, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best level of safety stock. The aim of this research is to provide a dynamic approach for identifying the safety stock level that can be implemented in different industries. A numerical case study in the dairy industry, with a 200 g yogurt cup product, is presented to validate the proposed model. The obtained results are compared with the current safety stock level, which is calculated using the traditional approach. The importance of the proposed model is demonstrated by the significant reduction in safety stock level.
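
A minimal fuzzy-inference sketch of the safety-stock idea: the three inputs are fuzzified with triangular membership functions, a couple of rules are fired, and a Sugeno-style weighted mean produces the safety stock level. The membership shapes, rules, and output levels are illustrative assumptions, not the paper's rule base.

```python
# Fuzzify three inputs, fire two illustrative rules, defuzzify with a weighted mean.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def safety_stock_level(demand_stability, material_availability, on_hand):
    # Fuzzify each input (all assumed normalized to [0, 1]).
    low_stability  = tri(demand_stability, -0.01, 0.0, 0.5)
    low_material   = tri(material_availability, -0.01, 0.0, 0.5)
    low_inventory  = tri(on_hand, -0.01, 0.0, 0.5)
    high_stability = tri(demand_stability, 0.5, 1.0, 1.01)

    # Illustrative rules: unstable demand, scarce material, or low inventory push
    # the SSL up; stable demand pulls it down. Output levels are crisp (Sugeno-style)
    # so defuzzification reduces to a weighted mean.
    rules = [
        (max(low_stability, low_material, low_inventory), 0.9),  # -> high SSL
        (high_stability, 0.2),                                   # -> low SSL
    ]
    strength = sum(w for w, _ in rules)
    return sum(w * level for w, level in rules) / strength if strength else 0.5

print(safety_stock_level(0.3, 0.6, 0.4))
```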

Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization

Procedia PDF Downloads 127
744 Distributed Actor System for Traffic Simulation

Authors: Han Wang, Zhuoxian Dai, Zhe Zhu, Hui Zhang, Zhenyu Zeng

Abstract:

In traditional microscopic traffic simulation, various approaches have been suggested to implement single-agent behaviors such as lane changing and the intelligent driver model. However, for very large metropolitan areas, microscopic traffic simulation requires more resources and becomes time-consuming, so macroscopic traffic simulation aggregates trends of interest rather than individual vehicle traces. In this paper, we describe the architecture and implementation of the actor system for microscopic traffic simulation, which exploits the distributed architecture of modern-day cloud computing. The results demonstrate that our architecture achieves high performance and outperforms the other traditional microscopic software in all tasks. To the best of our knowledge, this is the first system that enables single-agent behavior in macroscopic traffic simulation. We thus believe it contributes a new type of system for traffic simulation, which could provide the individual vehicle behaviors of microscopic traffic simulation.

Keywords: actor system, cloud computing, distributed system, traffic simulation

Procedia PDF Downloads 193
743 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current, and future trends suggest that multicore and cloud computing systems are increasingly prevalent and ubiquitous, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual (physical) and virtual (cloud) multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the collected data to determine whether differences in various performance metrics (including execution time, speedup, and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. The results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.
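
A minimal sketch of the performance metrics and the significance test used in such an evaluation: speedup and efficiency from measured run times, and a two-sample t-test comparing the physical and virtual machines. The timing numbers below are illustrative placeholders, not the paper's measurements.

```python
# Speedup/efficiency metrics plus a Welch t-test on repeated run times.
import numpy as np
from scipy import stats

def speedup_and_efficiency(t_serial, t_parallel, n_cores):
    speedup = t_serial / t_parallel
    efficiency = speedup / n_cores
    return speedup, efficiency

# Repeated run times (seconds) of the same parallel program on both systems.
actual_runs = np.array([12.1, 11.8, 12.4, 12.0, 11.9])
virtual_runs = np.array([24.6, 25.1, 23.9, 24.8, 25.3])

print(speedup_and_efficiency(t_serial=90.0, t_parallel=actual_runs.mean(), n_cores=8))
t_stat, p_value = stats.ttest_ind(actual_runs, virtual_runs, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # small p -> difference is significant
```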

Keywords: cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation

Procedia PDF Downloads 208
742 Cloud Computing Security for Multi-Cloud Service Providers: Controls and Techniques in Our Modern Threat Landscape

Authors: Sandesh Achar

Abstract:

Cloud computing security is a broad term that covers a variety of security concerns for organizations that use cloud services. Multi-cloud service providers must consider several factors when addressing security for their customers, including identity and access management, data at rest and in transit, egress and ingress traffic control, vulnerability and threat management, and auditing. This paper explores each of these aspects of cloud security in detail and provides recommendations for best practices for multi-cloud service providers. It also discusses the challenges inherent in securing a multi-cloud environment and offers solutions for overcoming these challenges. By the end of this paper, readers should have a good understanding of the various security concerns associated with multi-cloud environments in the context of today’s modern cyber threats and how to address them.

Keywords: multi-cloud service, system organization control, data loss prevention, identity and access management

Procedia PDF Downloads 99
741 A Case Study in Using Gamification in the Mobile Computing Course

Authors: Rula Al Azawi, Abobaker Shafi

Abstract:

The purpose of this paper is to use gamification technology in the mobile computing course to increase students' motivation and engagement. The games were designed by the students themselves, with a focus on designing an educational game for six-year-old children; such a game teaches the students how to learn in a fun way. Our case study was implemented at Gulf College, which is affiliated with Staffordshire University, UK. Our game design approach was applied to teach students Android Studio software through designing an educational game. Our goal with gamification is to improve student attendance and to increase student engagement, problem solving, and user satisfaction. Finally, we describe the findings and results of our case study. The data analysis and evaluation are based on student feedback, staff feedback, and the students' final grades.

Keywords: gamification, educational game, android studio software, students motivation and engagement

Procedia PDF Downloads 456
740 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet-style services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented: it provides a mechanism for network functions to negotiate a new capability when network communication starts up, and 5G application data are then reduced according to the information negotiated with the peer network function. Without losing the advantages of the 5G text-based payload, this method demonstrates an excellent result in application payload size reduction and does not increase computing resource usage. Implementation of this method does not impact any standards or specifications, nor does it change any encoding or decoding functionality. In a real 5G network, this method will contribute to network efficiency and eventually save considerable computing resources.

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 187
739 Integration of Internet-Accessible Resources in the Field of Mobile Robots

Authors: B. Madhevan, R. Sakkaravarthi, R. Diya

Abstract:

The number and variety of mobile robot applications are increasing day by day, both in industry and in our daily lives. First developed as tools, mobile robots can nowadays be integrated as entities into Internet-accessible resources. The present work is organized around four potential resources: cloud computing, the Internet of Things, big data analysis, and co-simulation. The focus is on analyzing and discussing the need for integrating Internet-accessible resources, the challenges deriving from such integration, and how these issues have been tackled. Hence, the research work investigates the concepts of Internet-accessible resources from the perspective of autonomous mobile robots, with an overview of the performance of currently available database systems. IaR, a worldwide network of interconnected objects, can be considered an evolutionary step for mobile robots and constitutes an integral part of the future Internet, with data analysis covering both physical and virtual things.

Keywords: internet-accessible resources, cloud computing, big data analysis, internet of things, mobile robot

Procedia PDF Downloads 391
738 Timed and Colored Petri Nets for Modeling and Verifying Cloud System Elasticity

Authors: Walid Louhichi, Mouhebeddine Berrima, Narjes Ben Rajed

Abstract:

Elasticity is an essential property of cloud computing. As the name suggests, it is the ability of a cloud system to adjust resource provisioning in relation to a fluctuating workload. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, scaling in is adopted in the event of over-supply and scaling out in the event of under-supply. In this paper, we propose a formal model, based on colored and timed Petri nets, for modeling the duplication and removal of a virtual machine on a server. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed Petri nets.

Keywords: cloud computing, elasticity, elasticity controller, petri nets, scaling in, scaling out

Procedia PDF Downloads 156
737 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption

Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses

Abstract:

This paper addresses the problem of building secure computational services over encrypted information in cloud computing without decrypting the data, thereby meeting the aspiration for a computational encryption model that can enhance the security of big data with respect to users' privacy, confidentiality, and availability. The cryptographic model applied for computing on the encrypted data is a fully homomorphic encryption scheme. We contribute theoretical presentations of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, together with detailed theoretical mathematical concepts for fully homomorphic encryption models. This contribution supports the full implementation of a big data analytics based cryptographic security algorithm.
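
As a concrete, much simpler illustration of computing on ciphertexts, the toy sketch below uses the Paillier scheme, which is only additively homomorphic rather than fully homomorphic; the tiny hard-coded primes make it completely insecure and are for illustration only.

```python
# Toy Paillier example: multiplying ciphertexts modulo n^2 adds the plaintexts.
import math
import random

p, q = 293, 433                      # toy primes; real keys use large primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                            # standard choice that simplifies decryption
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible modulo n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))       # 42: the sum, computed without decrypting c1, c2
```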

Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme

Procedia PDF Downloads 382
736 Optimizing Availability of Marine Knowledge Repository with Cloud-Based Framework

Authors: Ahmad S. Mohd Noor, Emma A. Sirajudin, Nur F. Mat Zain

Abstract:

Reliability is an important property of a knowledge repository system. The National Marine Bioinformatics System (NABTICS) is a marine knowledge repository portal that aims to provide a baseline for marine biodiversity and a tool for researchers and developers. It is intended to be a large and growing online database and also a metadata system for inputs to research analysis. The trend in present large distributed systems, such as cloud computing, is the delivery of computing as a service rather than a product. The goal of this research is to give NABTICS greater availability by integrating it with cloud-based Neighbor Replication and Failure Recovery (NRFR). This can be achieved by deploying NABTICS in a distributed environment. As a result, the user experiences minimal downtime should a server fail, and consequently, the online database application is highly available.

Keywords: cloud, availability, distributed system, marine repository, database replication

Procedia PDF Downloads 472
735 A Parallel Algorithm for Solving the PFSP on the Grid

Authors: Samia Kouki

Abstract:

Solving NP-hard combinatorial optimization problems by exact search methods, such as branch-and-bound, may degenerate into complete enumeration. For that reason, exact approaches are limited to small or moderate-size problem instances, due to the exponential increase in CPU time as problem size grows. One of the most promising ways to significantly reduce the computational burden of sequential branch-and-bound is to design parallel versions of these algorithms that employ several processors. This paper describes a parallel branch-and-bound algorithm called GALB for solving the classical permutation flowshop scheduling problem, as well as its implementation on a grid computing infrastructure. The experimental study of our distributed parallel algorithm gives promising results and clearly shows the benefit of the parallel paradigm for solving large-scale instances in moderate CPU time.
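
For orientation, a minimal sequential branch-and-bound sketch for the permutation flowshop (makespan) problem, the kind of search tree a parallel algorithm such as GALB distributes across grid workers; the simple last-machine lower bound and the tiny instance are illustrative, not GALB's actual bounding scheme.

```python
# Sequential branch-and-bound for permutation flowshop makespan minimization.
def makespan(perm, proc):
    """proc[j][m] = processing time of job j on machine m, machines in order."""
    machines = len(proc[0])
    finish = [0] * machines
    for j in perm:
        for m in range(machines):
            start = max(finish[m], finish[m - 1]) if m else finish[m]
            finish[m] = start + proc[j][m]
    return finish[-1]

def branch_and_bound(proc):
    jobs = range(len(proc))
    best_perm = list(jobs)
    best = makespan(best_perm, proc)                 # initial upper bound

    def lower_bound(partial, remaining):
        # Partial-schedule completion plus the remaining work on the last machine.
        return makespan(partial, proc) + sum(proc[j][-1] for j in remaining)

    def search(partial, remaining):
        nonlocal best, best_perm
        if not remaining:
            cost = makespan(partial, proc)
            if cost < best:
                best, best_perm = cost, list(partial)
            return
        for j in list(remaining):
            if lower_bound(partial + [j], remaining - {j}) < best:   # prune otherwise
                search(partial + [j], remaining - {j})

    search([], set(jobs))
    return best_perm, best

proc_times = [[3, 6, 3], [11, 2, 4], [7, 7, 5], [10, 5, 8]]   # 4 jobs x 3 machines
print(branch_and_bound(proc_times))
```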

Keywords: grid computing, permutation flow shop problem, branch and bound, load balancing

Procedia PDF Downloads 283
734 Creation of a Realistic Railway Simulator Developed on a 3D Graphic Game Engine Using a Numerical Computing Programming Environment

Authors: Kshitij Ansingkar, Yohei Hoshino, Liangliang Yang

Abstract:

Advances in algorithms for autonomous systems have made it possible to research improving the accuracy of a train’s location, which has the potential to increase the throughput of a railway network without the need to create additional infrastructure. To develop such a system, the railway industry requires data to test sensor fusion theories or to implement simultaneous localization and mapping (SLAM) algorithms. Though such simulation data and ground-truth datasets are available for testing vehicle automation algorithms, there is a dearth of such datasets in the railway industry due to regulations and economic considerations. Thus, there is a need for a simulation environment that can generate realistic synthetic datasets. This paper proposes (1) leveraging the capabilities of open-source 3D graphic rendering software to create a visualization of the environment, (2) utilizing open-source 3D geospatial data for accurate visualization, and (3) integrating the graphic rendering software with a programming language and numerical computing platform. To develop such an integrated platform, this paper utilizes the computing platform’s advanced sensor models, such as LIDAR, camera, IMU, or GPS, and merges them with the 3D rendering of the game engine to generate high-quality synthetic data. These datasets can then be used to train railway models and improve the accuracy of a train’s location.

Keywords: 3D game engine, 3D geospatial data, dataset generation, railway simulator, sensor fusion, SLAM

Procedia PDF Downloads 14
733 A TiO₂-Based Memristor Reliable for Neuromorphic Computing

Authors: X. S. Wu, H. Jia, P. H. Qian, Z. Zhang, H. L. Cai, F. M. Zhang

Abstract:

A bipolar resistance switching behaviour is detected for a Ti/TiO₂₋ₓ/Au memristor device fabricated by mask-designed magnetron sputtering. The current-voltage dependence indicates that the curve changes slowly and continuously. When voltage pulses are applied to the device, the set and reset processes maintain linearity, which is used to simulate synapses. We argue that the conduction mechanism of the device follows the oxygen vacancy channel model, and that the resistance of the device changes slowly due to the reaction between the titanium electrode and the intermediate layer and the presence of a large number of oxygen vacancies in that layer. A Hopfield neural network is then constructed to simulate the behaviour of a neural network in image processing, and the accuracy rate is more than 98%. This shows that the titanium dioxide memristor has broad application prospects in high-performance neural network simulation.
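
A minimal Hopfield-network sketch of the associative recall such a memristive device can emulate: Hebbian storage of binary patterns followed by asynchronous updates that settle onto a stored pattern. The patterns are illustrative, not the paper's image-processing data.

```python
# Hopfield network: Hebbian training of +1/-1 patterns, asynchronous recall.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for an array of +1/-1 pattern rows, zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    state = probe.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):            # asynchronous updates
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]], dtype=float)
w = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1], dtype=float)      # corrupted first pattern
print(recall(w, noisy))                                   # recovers [1,-1,1,-1,1,-1]
```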

Keywords: memristor fabrication, neuromorphic computing, bionic synaptic application, TiO₂-based

Procedia PDF Downloads 91
732 Virtual Computing Lab for Phonics Development among Deaf Students

Authors: Ankita R. Bansal, Naren S. Burade

Abstract:

The idea is to create a cloud-based virtual lab for deaf students, "a language acquisition program using Visual Phonics and Cued Speech", using VMware Virtual Lab. This lab will demonstrate to students the sounds of letters associated with the language, building letter blocks, making words, etc. Virtual labs are used for demos, training, and the lingual development of children in their vernacular language. The main potential benefits are reduced labour and hardware costs and faster response times for users. Virtual computing labs allow any of the software-as-a-service, virtualization, and terminal services solutions available today to be offered as a service on demand, where a single instance of the software runs in the cloud and serves multiple end users. VMware, XEN, MS Virtual Server, Virtuoso, and Citrix are typical examples.

Keywords: visual phonics, language acquisition, vernacular language, cued speech, virtual lab

Procedia PDF Downloads 599
731 Knowledge Based Automated Software Engineering Platform Used for the Development of Bulgarian E-Customs

Authors: Ivan Stanev, Maria Koleva

Abstract:

Described are challenges to the Bulgarian e-Customs (BeC) related to a low level of interoperability and standardization, inefficient use of the available infrastructure, a lack of centralized identification and authorization, an extremely low level of software process automation, and insufficient quality of the data stored in official registers. The technical requirements for BeC are prepared with a focus on a domain-independent common platform, specialized customs and excise components, high scalability, flexibility, and reusability. The Knowledge Based Automated Software Engineering (KBASE) Common Platform for Automated Programming (CPAP) is selected as an instrument covering BeC requirements for standardization, programming automation, knowledge interpretation, and cloud computing. BeC stage 3 results are presented and analyzed, and BeC.S3 development trends are identified.

Keywords: service oriented architecture, cloud computing, knowledge based automated software engineering, common platform for automated programming, e-customs

Procedia PDF Downloads 374
730 Model and Algorithm for Dynamic Wireless Electric Vehicle Charging Network Design

Authors: Trung Hieu Tran, Jesse O'Hanley, Russell Fowler

Abstract:

When in-wheel wireless charging technology for electric vehicles becomes mature, the development of such an integrated charging station network will be essential. In this paper, we therefore investigate the optimisation problem of in-wheel wireless electric vehicle charging network design. A mixed-integer linear programming model is formulated to solve the problem to optimality. In addition, a meta-heuristic algorithm is proposed for efficiently solving large-sized instances within a reasonable computation time, with a parallel computing strategy integrated into the algorithm to speed up its computation. Experiments carried out on benchmark instances show that our model and algorithm find the optimal solutions and demonstrate their potential for practical applications.
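
A minimal sketch, under stated assumptions, of what a mixed-integer model for charging-network design can look like: a coverage-style formulation that opens candidate sites at minimum cost so that every road segment is covered by at least one site. The formulation, the data, and the use of the PuLP modelling library are illustrative; the paper's actual MILP is richer.

```python
# Coverage-style site-selection MILP solved with PuLP's default CBC solver.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sites = {"s1": 10, "s2": 7, "s3": 12}                 # candidate site -> build cost
covers = {                                            # road segment -> sites that cover it
    "seg_a": ["s1", "s2"],
    "seg_b": ["s2", "s3"],
    "seg_c": ["s1", "s3"],
}

prob = LpProblem("wireless_charging_network", LpMinimize)
open_site = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}

prob += lpSum(cost * open_site[s] for s, cost in sites.items())        # total build cost
for seg, candidates in covers.items():
    prob += lpSum(open_site[s] for s in candidates) >= 1, f"cover_{seg}"

prob.solve()
print([s for s in sites if value(open_site[s]) > 0.5], value(prob.objective))
```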

Keywords: electric vehicle, wireless charging station, mathematical programming, meta-heuristic algorithm, parallel computing

Procedia PDF Downloads 83