Search results for: Cluster nodes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 807


387 Increasing Replica Consistency Performances with Load Balancing Strategy in Data Grid Systems

Authors: Sarra Senhadji, Amar Kateb, Hafida Belbachir

Abstract:

Data replication in data grid systems is one of the important solutions for improving availability, scalability, and fault tolerance. However, this technique also raises issues such as maintaining replica consistency. Moreover, as grid environments are highly dynamic, some nodes can become more loaded than others and eventually turn into bottlenecks. The main idea of our work is to propose a complementary solution that combines replica consistency maintenance with a dynamic load balancing strategy to improve access performance in a simulated grid environment.

Keywords: Consistency, replication, data grid, load balancing.

386 Detection of Black Holes in MANET Using Collaborative Watchdog with Fuzzy Logic

Authors: Y. Harold Robinson, M. Rajaram, E. Golden Julie, S. Balaji

Abstract:

A mobile ad hoc network (MANET) is a self-configuring network of mobile nodes connected without wires. A fuzzy logic based collaborative watchdog approach is used to reduce the detection time of misbehaving nodes and to increase overall truthfulness. This methodology improves secure and efficient routing by detecting black hole attacks. The simulation results show that this method improves energy consumption, reduces delay, and improves the overall performance of black hole attack detection in MANETs.

Keywords: MANET, collaborative watchdog, fuzzy logic, AODV.

385 Peer-to-Peer Epidemic Algorithms for Reliable Multicasting in Ad Hoc Networks

Authors: Zülküf Genç, Öznur Özkasap

Abstract:

Characteristics of ad hoc networks, and even their existence, depend on the nodes forming them. Thus, services and applications designed for ad hoc networks should adapt to this dynamic and distributed environment. In particular, multicast algorithms with reliability and scalability requirements should abstain from centralized approaches. We aspire to define a reliable and scalable multicast protocol for ad hoc networks, and our target is to utilize epidemic techniques for this purpose. In this paper, we present a brief survey of epidemic algorithms for reliable multicasting in ad hoc networks, and describe formulations and analytical results for simple epidemics. Then, the P2P anti-entropy algorithm for content distribution and our prototype simulation model are described together with our initial results demonstrating the behavior of the algorithm.
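
To make the anti-entropy idea concrete, the sketch below shows a generic push-pull anti-entropy round in Python; it illustrates the general technique only, and the Node class, message store, and round structure are assumptions rather than the authors' exact protocol.

```python
# Generic push-pull anti-entropy round: a node picks a random peer and the
# two exchange whatever multicast messages the other is missing.
import random

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.messages = {}  # msg_id -> payload

    def anti_entropy_round(self, peers):
        if not peers:
            return
        peer = random.choice(peers)                     # random gossip partner
        for msg_id, payload in list(self.messages.items()):
            peer.messages.setdefault(msg_id, payload)   # push missing messages
        for msg_id, payload in list(peer.messages.items()):
            self.messages.setdefault(msg_id, payload)   # pull missing messages

# Example: a message injected at node 0 spreads epidemically to all nodes.
nodes = [Node(i) for i in range(10)]
nodes[0].messages["m1"] = "multicast payload"
for _ in range(5):
    for n in nodes:
        n.anti_entropy_round([p for p in nodes if p is not n])
print(sum("m1" in n.messages for n in nodes), "of", len(nodes), "nodes reached")
```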

Keywords: Ad hoc networks, epidemic, peer-to-peer, reliable multicast.

384 Microclimate Variations in Rio de Janeiro Related to Massive Public Transportation

Authors: Marco E. O. Jardim, Frederico A. M. Souza, Valeria M. Bastos, Myrian C. A. Costa, Nelson F. F. Ebecken

Abstract:

Urban public transportation in Rio de Janeiro is based on diesel-powered bus lines and four limited metro lines that serve only some neighborhoods. This work presents an infrastructure built to better understand microclimate variations related to massive urban transportation in some specific areas of the city. The use of sensor nodes with small analytics capacity provides environmental information to the population and to public services. The analysis of data collected from a few small sensors positioned near some heavy-traffic streets shows the harmful impact of poor bus route planning.

Keywords: Big data, IoT, public transportation, public health system.

383 Selection of Initial Modes for the Belief K-modes Method

Authors: Sarra Ben Hariz, Zied Elouedi, Khaled Mellouli

Abstract:

The belief K-modes method (BKM) is a clustering technique that handles uncertainty in the attribute values of objects in both the cluster construction task and the classification task. As in the standard version of this method, the BKM results depend on the chosen initial modes. In this paper, a selection method for the initial modes is developed, aiming at improving the performance of the BKM approach. Experiments with several real data sets show that, using the developed initial modes selection method, the clustering algorithm produces more accurate results.
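
As a point of reference for what an initial-mode selection procedure can look like, the sketch below shows a simple greedy selector for categorical data that favours frequent and mutually dissimilar objects; it is a generic illustration under those assumptions, not the belief-function-based method developed in the paper.

```python
# Greedy initial-mode selection for categorical data: pick the most frequent
# object first, then objects maximally dissimilar to the modes already chosen,
# so that the initial modes spread over dense, distinct regions.
from collections import Counter

def simple_matching_dissimilarity(a, b):
    return sum(x != y for x, y in zip(a, b))

def select_initial_modes(objects, k):
    counts = Counter(objects)                     # frequency of each object
    modes = [max(counts, key=counts.get)]         # densest object first
    while len(modes) < k:
        best = max(
            counts,
            key=lambda o: min(simple_matching_dissimilarity(o, m) for m in modes),
        )
        modes.append(best)
    return modes

data = [("red", "small"), ("red", "small"), ("blue", "large"),
        ("blue", "large"), ("green", "small")]
print(select_initial_modes(data, k=2))
```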

Keywords: Clustering, Uncertainty, Belief function theory, Belief K-modes Method, Initial modes selection.

382 DJess: A Knowledge-Sharing Middleware to Deploy Distributed Inference Systems

Authors: Federico Cabitza, Bernardo Dal Seno

Abstract:

In this paper we present DJess, a novel distributed production system that provides an infrastructure for factual and procedural knowledge sharing. DJess is a Java package that provides programmers with a lightweight middleware by which inference systems implemented in Jess and running on different nodes of a network can communicate. Communication and coordination among inference systems (agents) are achieved through the ability of each agent to transparently and asynchronously reason on inferred knowledge (facts) that might be collected and asserted by other agents, on the basis of inference code (rules) that might be either local or transmitted by any node to any other node.

Keywords: Knowledge-Based Systems, Expert Systems, Distributed Inference Systems, Parallel Production Systems, Ambient Intelligence, Mobile Agents.

381 Generalized Chebyshev Collocation Method

Authors: Junghan Kim, Wonkyu Chung, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, we introduce a generalized Chebyshev collocation method (GCCM) based on the generalized Chebyshev polynomials for solving stiff systems. To employ a technique analogous to the embedded Runge-Kutta methods used in explicit schemes, we exploit a property of the generalized Chebyshev polynomials, namely that the nodes for the higher degree polynomial overlap with those for the lower degree polynomial. The constructed algorithm controls both the error and the time step size simultaneously; furthermore, the errors at each integration step are embedded in the algorithm itself, which keeps the computational cost low. To assess its effectiveness, numerical results obtained by the proposed method and by the Radau IIA method are presented and compared.
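
The node-nesting property that makes such an embedded error estimate cheap can be illustrated with the standard Chebyshev-Gauss-Lobatto nodes (the paper's generalized nodes may differ in detail):

x_j^{(n)} = \cos\!\left(\frac{j\pi}{n}\right), \quad j = 0, 1, \dots, n,
\qquad
x_{2j}^{(2n)} = \cos\!\left(\frac{2j\pi}{2n}\right) = x_j^{(n)},

so every node of the degree-n rule is reused by the degree-2n rule, and the difference between the two collocation solutions yields an error estimate without extra evaluations at the shared nodes.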

Keywords: Generalized Chebyshev Collocation method, Generalized Chebyshev Polynomial, Initial value problem.

380 Modelling Peer Group Dieting Behaviour

Authors: M. J. Cunha

Abstract:

The aim of this paper is to understand how peers can influence adolescent girls' dieting behaviour and their body image. Drawing on imitation and social learning theories, we study whether adolescent girls tend to model their peer group's dieting behaviours, thus influencing their body image construction. Our study was conducted through a survey applied to a cluster sample of 466 adolescent high school girls in Lisbon city public schools. Our main findings point to an association between girls' and peers' dieting behaviours, thus reinforcing the modelling hypothesis.

Keywords: Modelling, Diet, Body image, Adolescent girls, Peer group.

379 An Improved Resource Discovery Approach Using P2P Model for Condor: A Grid Middleware

Authors: Anju Sharma, Seema Bawa

Abstract:

Resource discovery in grids is critical for efficient resource allocation and management. The heterogeneous nature and dynamic availability of resources make resource discovery a challenging task. As the number of nodes increases from tens to thousands, scalability is essential. Peer-to-peer (P2P) techniques, on the other hand, provide effective implementations of scalable services and applications. In this paper we propose a model for resource discovery in the Condor middleware by using the four-axis framework defined in the P2P approach. The proposed model enhances Condor to incorporate the functionality of a P2P system, thus aiming to make Condor more scalable, flexible, reliable and robust.

Keywords: Condor Middleware, Grid Computing, P2P, Resource Discovery.

378 Performance Comparison for AODV, DSR and DSDV W.R.T. CBR and TCP in Large Networks

Authors: Ibrahim M. Buamod, Muattaz Elaneizi

Abstract:

A mobile ad hoc network (MANET) is a wireless, self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology because of the random mobility of the nodes. In this paper, an attempt has been made to compare the three protocols DSDV, AODV and DSR on a performance basis under different traffic types, namely CBR and TCP, in a large network. The simulation tool is NS2, and the scenarios are designed to show the effect of pause times. The results presented in this paper clearly indicate that the different protocols behave differently under different pause times. The results also show the main characteristics of different traffic types operating on MANETs and thus help select the best protocol for each scenario.

Keywords: Awk, CBR, Random waypoint model, TCP.

377 The Relevance of Intellectual Capital: An Analysis of Spanish Universities

Authors: Yolanda Ramírez, Ángel Tejada, Agustín Baidez

Abstract:

In recent years, intellectual capital reporting in higher education institutions has been acquiring progressive importance worldwide. Intellectual capital approaches become critical at universities, mainly because knowledge is the main output as well as input in these institutions. Universities produce knowledge, either through scientific and technical research (the results of investigation, publications, etc.) or through teaching (students trained and productive relationships with their stakeholders). The purpose of the present paper is to identify the intangible elements about which university stakeholders demand most information. The results of a study carried out at Spanish universities are used to see which groups of universities have stakeholders who are more proactive towards the disclosure of intellectual capital.

Keywords: Intellectual capital, universities, Spain, cluster analysis.

376 Performance of Total Vector Error of an Estimated Phasor within Local Area Networks

Authors: Ahmed Abdolkhalig, Rastko Zivanovic

Abstract:

This paper evaluates the Total Vector Error (TVE) of an estimated phasor, as defined in the IEEE C37.118 standard, under different medium access methods in Local Area Networks (LANs). Three different LAN models (CSMA/CD, CSMA/AMP and Switched Ethernet) are evaluated. The Total Vector Error of the estimated phasor has been evaluated for the effect of the number of nodes under the standardized network bandwidth values defined in the IEC 61850-9-2 communication standard (i.e., 0.1, 1 and 10 Gbps).
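
For reference, the Total Vector Error of IEEE C37.118 is commonly expressed as the magnitude of the complex estimation error normalized by the true phasor magnitude:

\mathrm{TVE} = \sqrt{ \frac{ (\hat{X}_r - X_r)^2 + (\hat{X}_i - X_i)^2 }{ X_r^2 + X_i^2 } },

where \hat{X}_r and \hat{X}_i are the real and imaginary parts of the estimated phasor and X_r and X_i those of the reference phasor.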

Keywords: Phasor, Local Area Network, Total Vector Error, IEEE C37.118, IEC 61850.

375 Efficient Web Usage Mining Based on K-Medoids Clustering Technique

Authors: P. Sengottuvelan, T. Gopalakrishnan

Abstract:

Web usage mining is the application of data mining techniques to find usage patterns from web log data, so as to grasp the required patterns and serve the requirements of web-based applications. The user's experience on the internet may be improved by minimizing web access latency. This may be done by predicting the next requested page in advance so that it can be prefetched and cached. Therefore, to enhance the quality of web services, it is necessary to study user web navigation behavior. Analysis of user web navigation behavior is achieved through modeling web navigation history. We propose a technique which clusters user sessions based on the K-medoids algorithm.
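
The sketch below shows a minimal K-medoids clustering of user sessions in Python; the session representation (sets of visited pages) and the Jaccard distance are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal K-medoids over user sessions represented as sets of visited pages.
import random

def jaccard_distance(a, b):
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def k_medoids(sessions, k, iters=20, seed=0):
    random.seed(seed)
    medoids = random.sample(range(len(sessions)), k)
    clusters = {}
    for _ in range(iters):
        # assignment step: attach every session to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, s in enumerate(sessions):
            nearest = min(medoids, key=lambda m: jaccard_distance(s, sessions[m]))
            clusters[nearest].append(i)
        # update step: the new medoid minimises total in-cluster distance
        new_medoids = []
        for m, members in clusters.items():
            if not members:
                new_medoids.append(m)
                continue
            best = min(members, key=lambda c: sum(
                jaccard_distance(sessions[c], sessions[j]) for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

sessions = [{"home", "sport"}, {"home", "news"}, {"sport", "scores"},
            {"news", "world"}, {"scores", "sport", "home"}]
print(k_medoids(sessions, k=2))
```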

Keywords: Clustering, K-medoids, Recommendation, User Session, Web Usage Mining.

374 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. The algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, and two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing us to observe the birth, evolution and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
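
A short worked form of the complexity claim: if the n elements are split into k chronological subsets and each subset is embedded against the previous embedding as support,

\text{cost per subset} = O\!\big((n/k)^2\big), \qquad
\text{total cost} = k \cdot O\!\big((n/k)^2\big) = O\!\big(n^2/k\big), \qquad
\text{memory} = (n/k)^2 + (n/k)^2 = 2(n/k)^2,

since only the current embedding and its support need to be held in memory at any one time.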

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

373 Hierarchical Checkpoint Protocol in Data Grids

Authors: Rahma Souli-Jbali, Minyar Sassi Hidri, Rahma Ben Ayed

Abstract:

A grid of computing nodes has emerged as a representative means of connecting distributed computers or resources scattered all over the world for the purpose of computing and distributed storage. Since fault tolerance becomes complex due to the availability of resources in a decentralized grid environment, it can be used in connection with replication in data grids. The objective of our work is to present fault tolerance in data grids with a data replication-driven model based on clustering. The performance of the protocol is evaluated with the Omnet++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.

Keywords: Data grids, fault tolerance, Chandy-Lamport, clustering.

372 Distributed Frequency Synchronization for Global Synchronization in Wireless Mesh Networks

Authors: Jung-Hyun Kim, Jihyung Kim, Kwangjae Lim, Dong Seung Kwon

Abstract:

In this paper, our focus is to ensure global frequency synchronization in OFDMA-based wireless mesh networks using only local information. To achieve global synchronization in a distributed manner, we propose a novel distributed frequency synchronization (DFS) method. DFS is a method in which the carrier frequencies of distributed nodes converge to a common value through repeated estimation-and-averaging and sharing steps. Experimental results show that DFS achieves a notably better synchronization success probability than existing schemes in OFDMA-based mesh networks in the presence of estimation error.
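
A minimal sketch of the convergence mechanism, assuming a simple neighbour-averaging rule and a ring topology as stand-ins for the actual DFS estimation and sharing steps:

```python
# Consensus-style averaging: each node repeatedly averages its carrier
# frequency estimate with those of its neighbours, so all estimates
# converge to a common value.
import random

random.seed(1)
num_nodes = 8
freq = [2.40e9 + random.uniform(-5e3, 5e3) for _ in range(num_nodes)]  # Hz
# ring topology as a stand-in for the mesh neighbourhood
neighbours = {i: [(i - 1) % num_nodes, (i + 1) % num_nodes]
              for i in range(num_nodes)}

for step in range(50):                        # repeated averaging/sharing steps
    new_freq = []
    for i in range(num_nodes):
        local = [freq[i]] + [freq[j] for j in neighbours[i]]
        new_freq.append(sum(local) / len(local))
    freq = new_freq

print("frequency spread after averaging: %.3f Hz" % (max(freq) - min(freq)))
```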

Keywords: OFDMA systems, Frequency synchronization, Distributed networks, Multiple groups.

371 IntelligentLogger: A Heavy-Duty Vehicles Fleet Management System Based on IoT and Smart Prediction Techniques

Authors: D. Goustouridis, A. Sideris, I. Sdrolias, G. Loizos, N.-Alexander Tatlas, S. M. Potirakis

Abstract:

Both daily and long-term management of a heavy-duty vehicle and construction machinery fleet is an extremely complicated and hard-to-solve issue. This is mainly due to the diversity of the fleet vehicles and machinery, which concerns not only the vehicle types, but also their age and efficiency, as well as the fleet volume, which is often of the order of hundreds or even thousands of vehicles and machines. In the present paper we present "IntelligentLogger", a holistic heavy-duty fleet management system covering a wide range of diverse fleet vehicles. It is based on specifically designed hardware and software for automated vehicle health status and operational cost monitoring, for smart maintenance. IntelligentLogger is characterized by high adaptability that permits it to be tailored to practically any heavy-duty vehicle or machine (of different technologies, modern or legacy, and of dissimilar uses). Contrary to conventional logistic systems, which are characterized by raised operational costs and frequent errors, IntelligentLogger provides a cost-effective and reliable integrated solution for the e-management and e-maintenance of the fleet members. The IntelligentLogger system offers the following unique features that guarantee successful heavy-duty vehicle/machinery fleet management: (a) recording and storage of operating data of motorized construction machinery, in a reliable way and in real time, using specifically designed Internet of Things (IoT) sensor nodes that communicate through the available network infrastructures, e.g., 3G/LTE; (b) use on any machine, regardless of its age, in a universal way; (c) flexibility and complete customization both in terms of data collection and integration with third-party systems, as well as in terms of processing and drawing conclusions; (d) validation, error reporting and correction, as well as updating of the system's database; (e) artificial intelligence (AI) software for processing information in real time, identifying out-of-normal behavior and generating alerts; (f) a MicroStrategy-based enterprise BI for modeling information and producing reports, dashboards, and alerts focusing on optimal vehicle and machinery usage, as well as maintenance and scrapping policies; (g) a modular structure that allows low implementation costs in the basic fully functional version, but offers scalability without requiring a complete system upgrade.

Keywords: E-maintenance, predictive maintenance, IoT sensor nodes, cost optimization, artificial intelligence, heavy-duty vehicles.

370 Design for Reliability and Manufacturing Yield (Study and Modeling of Defects in Integrated Circuits for their Reliability Analysis)

Authors: G. Ait Abdelmalek, R. Ziani

Abstract:

In this paper, we propose a robust design strategy in order to improve robustness against manufacturing defects and thus the reliability of logic CMOS circuits. To enable the use of future CMOS technology nodes, this strategy combines various types of design: DFR (Design for Reliability), fault tolerance techniques such as hardware redundancy with TMR (Triple Modular Redundancy) for hard error tolerance, and DFT (Design for Testability). Results on the largest ISCAS and ITC benchmark circuits show that our approach considerably improves reliability while reducing the key factors, the area costs and the fault probability.
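
As a reminder of the TMR idea referred to above, the following minimal Python sketch shows a bitwise two-out-of-three majority voter masking a fault in one replica; the replica functions are hypothetical stand-ins for a triplicated logic module.

```python
# Bitwise 2-out-of-3 majority voter: a single faulty replica is masked.
def majority_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

# Three replicas of the same (hypothetical) logic function; replica 0 is hit
# by a hard error modelled as a bit flip on its output.
replicas = [lambda x: x ^ 0b0100, lambda x: x, lambda x: x]
outputs = [f(0b1010) for f in replicas]
print(bin(majority_vote(*outputs)))   # prints 0b1010: the error is masked
```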

Keywords: Design for reliability, design for testability, fault tolerance, manufacturing yield.

369 Optimization of Communication Protocols by Stochastic Delay Mechanisms

Authors: J. Levendovszky, I. Koncz, P. Boros

Abstract:

The paper is concerned with developing stochastic delay mechanisms for efficient multicast protocols and for smooth mobile handover processes which are capable of preserving a given Quality of Service (QoS). In both applications the participating entities (receiver nodes or subscribers) sample a stochastic timer and generate load after a random delay. In this way, the load on the networking resources is evenly distributed, which helps to maintain QoS communication. The optimal timer distributions have been sought in different p.d.f. families (e.g. exponential, power law and radial basis function) and the optimal parameters have been found in a recursive manner. Detailed simulations have demonstrated the improvement in performance both in the case of multicast and mobile handover applications.
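
A minimal sketch of the stochastic timer idea, assuming an exponential p.d.f. and an arbitrary example rate (neither taken from the paper):

```python
# Each receiver samples a random back-off from an exponential distribution
# before sending its feedback, so responses are spread out in time instead
# of arriving at the network simultaneously.
import random

random.seed(42)
RATE = 5.0                       # mean back-off = 1/RATE seconds (assumed value)

def schedule_feedback(num_receivers):
    delays = [random.expovariate(RATE) for _ in range(num_receivers)]
    return sorted(delays)

for t in schedule_feedback(5):
    print("feedback sent at %.3f s" % t)
```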

Keywords: Multicast communication, stochastic delay mechanisms.

368 Morphological Description of Cervical Cell Images for the Pathological Recognition

Authors: N. Lassouaoui, L. Hamami, N. Nouali

Abstract:

Screening makes it possible to detect the tumor lesions of cervical cancer, but it is particularly complex and time-consuming because it consists in seeking a few abnormal cells among a cluster of normal cells. In this paper, we present our proposed computer system for helping doctors in screening for cervical cancer. Knowing that the diagnosis of malignancy is based on the set of atypical morphological details of the cells, we present an unsupervised genetic algorithm for the separation of cell components, since the diagnosis is done by analysis of the nucleus and the cytoplasm. We also give the various algorithms used for computing the morphological characteristics of cells (nucleus/cytoplasm ratio, cellular deformity, etc.) necessary for recognizing the illness.

Keywords: Cervical cell, morphological analysis, recognition, segmentation.

367 Effective Implementation of Burst Segmentation Techniques in OBS Networks

Authors: A. Abid, F. M. Abbou, H. T. Ewe

Abstract:

Optical Burst Switching (OBS) is a relatively new optical switching paradigm. Contention and burst loss in OBS networks are major concerns. To resolve contentions, an interesting alternative to discarding the entire data burst is to partially drop the burst. Partial burst dropping is based on the burst segmentation concept, whose implementation is constrained by some technical challenges, besides the complexity added to the algorithms and protocols on both edge and core nodes. In this paper, the burst segmentation concept is investigated, and an implementation scheme is proposed and evaluated. An appropriate dropping policy that effectively manages the size of the segmented data bursts is presented. The dropping policy is further supported by a new control packet format that provides constant transmission overhead.

Keywords: Burst length, Burst Segmentation, Optical Burst Switching.

366 Quality of Service Evaluation using a Combination of Fuzzy C-Means and Regression Model

Authors: Aboagela Dogman, Reza Saatchi, Samir Al-Khayatt

Abstract:

In this study, a network quality of service (QoS) evaluation system was proposed. The system used a combination of fuzzy C-means (FCM) and a regression model to analyse and assess the QoS in a simulated network. Network QoS parameters of multimedia applications were intelligently analysed by the FCM clustering algorithm. The QoS parameters for each FCM cluster centre were then input to a regression model in order to quantify the overall QoS. The proposed QoS evaluation system provided valuable information about the network's QoS patterns and, based on this information, the overall network's QoS was effectively quantified.

Keywords: Fuzzy C-means, regression model, network quality of service.

365 Harnessing Replication in Object Allocation

Authors: H. T. Barney, G. C. Low

Abstract:

The design of distributed systems involves the partitioning of the system into components or partitions and the allocation of these components to physical nodes. Techniques have been proposed for both the partitioning and the allocation process. However, these techniques suffer from a number of limitations. For instance, object replication has the potential to greatly improve the performance of an object-oriented distributed system, but it can be difficult to use effectively, and there are few techniques that support the developer in harnessing object replication. This paper presents a methodological technique that helps developers decide how objects should be allocated in order to improve performance in a distributed system that supports replication. The performance of the proposed technique is demonstrated and tested on an example system.

Keywords: Allocation, Distributed Systems, Replication.

364 Multiple Regression based Graphical Modeling for Images

Authors: Pavan S., Sridhar G., Sridhar V.

Abstract:

Super resolution is one of the commonly referred inference problems in computer vision. In the case of images, this problem is generally addressed using a graphical model framework wherein each node represents a portion of the image and the edges between the nodes represent the statistical dependencies. However, the large dimensionality of images along with the large number of possible states for a node makes the inference problem computationally intractable. In this paper, we propose a representation wherein each node can be represented as a combination of multiple regression functions. The proposed approach achieves a tradeoff between the computational complexity and inference accuracy by varying the number of regression functions for a node.

Keywords: Belief propagation, Graphical model, Regression, Super resolution.

363 An Agent-Based Approach to Immune Modelling: Priming Individual Response

Authors: Dimitri Perrin, Heather J. Ruskin, Martin Crane

Abstract:

This study focuses on examining why the range of experience with respect to HIV infection is so diverse, especially in regard to the latency period. An agent-based approach in modelling the infection is used to extract high-level behaviour which cannot be obtained analytically from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties, contributing to the individual disease experience, and is included in a network which mimics the chain of lymph nodes. The model also accounts for stochastic events such as viral mutations. The size and complexity of the model require major computational effort and parallelisation methods are used.

Keywords: HIV, Immune modelling, Agent-based system, individual response.

362 Overview of Adaptive Spline Interpolation

Authors: Rongli Gai, Zhiyuan Chang, Xiaohong Wang, Jingyu Liu

Abstract:

In view of the various situations in the interpolation process, most researchers use self-adaptation to adjust the interpolation process, which is also one of the current and future research hotspots in the field of CNC (Computerized Numerical Control) machining. Based on an overview of spline curve interpolation algorithms, the adaptive analysis is carried out with respect to the factors affecting the interpolation process. Adaptive operation is reflected in various aspects, such as speed, parameters, errors, nodes, feed rates, random period, sensitive points, step size, curvature, adaptive segmentation, adaptive optimization, etc. This paper analyzes and summarizes the research on adaptive interpolation in terms of the above factors.

Keywords: Adaptive algorithm, CNC machining, interpolation constraints, spline curve interpolation.

361 Study the Effect of Soft Errors on FlexRay-Based Automotive Systems

Authors: Yung-Yuan Chen, Kuen-Long Leu

Abstract:

FlexRay, a communication protocol for automotive control systems, was developed to fulfill the increasing demand on electronic control units for implementing systems with higher safety and more comfort. In this work, we study the impact of radiation-induced soft errors on a FlexRay-based steer-by-wire system. We injected soft errors into the general-purpose register set of FlexRay nodes to identify the most critical registers and the failure modes of the steer-by-wire system, and to measure the probability distribution of failure modes when an error occurs in the register file.
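
A soft error of this kind is typically modelled as a single bit flip in a randomly chosen register; the following Python sketch illustrates that generic fault model (the register file layout and width are assumptions, not the authors' FlexRay test bench):

```python
# Single-event upset model: flip one random bit of one random register.
import random

def inject_soft_error(register_file, reg_width=32, rng=random):
    reg = rng.randrange(len(register_file))      # which register is hit
    bit = rng.randrange(reg_width)               # which bit flips
    register_file[reg] ^= (1 << bit)             # the soft error itself
    return reg, bit

regs = [0x0000_00FF] * 16                        # toy 16x32-bit register file
hit_reg, hit_bit = inject_soft_error(regs)
print(f"flipped bit {hit_bit} of R{hit_reg}: {regs[hit_reg]:#010x}")
```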

Keywords: Soft errors, FlexRay, fault injection, steer-by-wire.

360 Local Error Control in the RK5GL3 Method

Authors: J.S.C. Prentice

Abstract:

The RK5GL3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on a combination of a fifth-order Runge-Kutta method and 3-point Gauss-Legendre quadrature. In this paper we describe an effective local error control algorithm for RK5GL3, which uses local extrapolation with an eighth-order Runge-Kutta method in tandem with RK5GL3, and a Hermite interpolating polynomial for solution estimation at the Gauss-Legendre quadrature nodes.
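
A generic form of the local extrapolation control described above (the exact safety factor and exponent used with RK5GL3 may differ) is

\delta_n = \bigl\| \hat{y}_{n+1} - y_{n+1} \bigr\|, \qquad
\text{accept the step if } \delta_n \le \tau, \qquad
h_{\text{new}} = 0.9\, h \left( \frac{\tau}{\delta_n} \right)^{1/(p+1)},

where y_{n+1} is the RK5GL3 solution, \hat{y}_{n+1} the eighth-order Runge-Kutta reference, \tau the tolerance, h the current step size, and p the order of the less accurate solution.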

Keywords: RK5GL3, RKrGLm, Runge-Kutta, Gauss-Legendre, Hermite interpolating polynomial, initial value problem, local error.

359 Benchmarking: Performance on ALPS and Formosa Clusters

Authors: Chih-Wei Hsieh, Chau-Yi Chou, Sheng-Hsiu Kuo, Tsung-Che Tsai, I-Chen Wu

Abstract:

This paper presents the benchmarking results and performance evaluation of different clusters built at the National Center for High-Performance Computing in Taiwan. The performance of the processor, memory subsystem and interconnect is a critical factor in the overall performance of high performance computing platforms. The evaluation compares different system architectures and software platforms. Most supercomputers use HPL to benchmark their system performance, in accordance with the requirements of the TOP500 list. In this paper we consider system memory access factors that affect benchmark performance, such as processor and memory performance. We hope this work will provide useful information for the future development and construction of cluster systems.

Keywords: Performance evaluation, benchmarking, high-performance computing.

358 Joint Design of MIMO Relay Networks Based on MMSE Criterion

Authors: Seungwon Choi, Seungri Jin, Ayoung Heo, Jung-Hyun Park, Dong-Jo Park

Abstract:

This paper deals with wireless relay communication systems in which multiple sources transmit information to the destination node with the help of multiple relays. We consider a signal forwarding technique based on the minimum mean-square error (MMSE) approach with multiple antennas at each relay. A source-relay-destination joint design strategy is proposed with power constraints at the destination and the source nodes. Simulation results confirm that the proposed joint design method improves the average MSE performance compared with that of conventional MMSE relaying schemes.
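
For context, the linear MMSE filter underlying such designs is the textbook Wiener solution (the paper's joint source-relay-destination optimization is more involved than this single-hop form):

\mathbf{W}_{\mathrm{MMSE}} = \arg\min_{\mathbf{W}} \; \mathbb{E}\!\left[ \left\| \mathbf{x} - \mathbf{W}\mathbf{y} \right\|^2 \right] = \mathbf{R}_{xy}\,\mathbf{R}_{yy}^{-1},

and, for the observation model \mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n} with unit-power symbols and white noise of variance \sigma^2,

\mathbf{W}_{\mathrm{MMSE}} = \mathbf{H}^{H}\left( \mathbf{H}\mathbf{H}^{H} + \sigma^{2}\mathbf{I} \right)^{-1}.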

Keywords: minimum mean square error (MMSE), multiple relay, MIMO.
