Search results for: Fault diagnostics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 416


56 Health Monitoring and Failure Detection of Electronic and Structural Components in Small Unmanned Aerial Vehicles

Authors: Gopi Kandaswamy, P. Balamuralidhar

Abstract:

Fully autonomous small Unmanned Aerial Vehicles (UAVs) are increasingly being used in many commercial applications. Although a lot of research has been done to develop safe, reliable and durable UAVs, accidents due to electronic and structural failures are not uncommon and pose a huge safety risk to UAV operators and the public. Hence, there is a strong need for an automated health monitoring system for UAVs with a view to minimizing mission failures and thereby increasing safety. This paper describes our approach to monitoring the electronic and structural components in a small UAV without the need for additional sensors to do the monitoring. Our system monitors data from four sources: sensors, navigation algorithms, control inputs from the operator, and flight controller outputs. It then performs statistical analysis on the data and applies a rule-based engine to detect failures. This information can then be fed back into the UAV, and a decision to continue or abort the mission can be taken automatically by the UAV, independent of the operator. Our system has been verified using data obtained from real flights over the past year from UAVs of various sizes that we have designed and deployed for various applications.

Keywords: Fault detection, health monitoring, unmanned aerial vehicles, vibration analysis.
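
Illustrative sketch (not from the paper): one simple statistical rule such a rule-based health monitor might apply is a rolling z-score check on a single monitored stream. The window size, threshold, signal name and injected anomaly below are hypothetical.

```python
import numpy as np

def rolling_zscore_fault(signal, window=50, threshold=4.0):
    """Flag samples whose rolling z-score exceeds a threshold.

    A minimal stand-in for one rule in a rule-based health monitor:
    compare each new sample against the recent mean/std of the stream.
    `window` and `threshold` are illustrative values, not from the paper.
    """
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        hist = signal[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Hypothetical usage: vibration amplitude from an IMU channel.
rng = np.random.default_rng(0)
vibration = rng.normal(0.0, 1.0, 500)
vibration[400:] += 8.0            # injected structural anomaly
print(np.where(rolling_zscore_fault(vibration))[0][:5])
```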

55 A Grey-Fuzzy Controller for Optimization Technique in Wireless Networks

Authors: Yao-Tien Wang, Hsiang-Fu Yu, Dung Chen Chiou

Abstract:

Progress in wireless and mobile communications provides opportunities for introducing new standards and improving existing services, and supporting multimedia traffic over wireless networks requires quality of service (QoS) provision. In this paper, a grey-fuzzy controller for radio resource management (GF-RRM) is presented to maximize the number of served calls and the QoS provision in wireless networks. In a wireless network, the call arrival rate, the call duration and the communication overhead between the base stations and the control center are vague and uncertain. We develop a method to predict the cell load and to solve the RRM problem based on the GF-RRM; the presented facility has been built at the application level of the wireless network. The GF-RRM exhibits better adaptability, fault-tolerance capability and performance than other algorithms. Through simulations, we evaluate the blocking rate, update overhead, and channel acquisition delay time of the proposed method. The results demonstrate that our algorithm has a lower blocking rate, less update overhead, and a shorter channel acquisition delay.

Keywords: radio resource management, grey prediction, fuzzy logic control, wireless networks, quality of service.
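
Illustrative sketch (not from the paper): the grey-prediction half of such a controller is commonly the GM(1,1) model, which forecasts the next values of a short, positive series such as a per-cell load history. The load figures below are hypothetical, and the fuzzy-logic adjustment of the forecast is not reproduced.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey prediction of the next `steps` values of a short
    positive-valued series (e.g. a per-cell load history)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]        # developing / grey input coefficients
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # restored accumulated series
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])
    x0_hat[0] = x0[0]
    return x0_hat[n:]

# Hypothetical usage: forecast the next two periods of cell load from five observations.
print(gm11_forecast([42.0, 45.0, 47.5, 51.0, 54.0], steps=2))
```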

54 Research on Transformer Condition-based Maintenance System using the Method of Fuzzy Comprehensive Evaluation

Authors: Po-Chun Lin, Jyh-Cherng Gu

Abstract:

This study adopted previous fault patterns, results of detection analysis, historical records and data, and experts' experiences to establish fuzzy principles and estimate the failure probability index of the components of a power transformer. Considering that actual parameters and limiting conditions of parameters may differ, this study used the standard data of IEC, IEEE, and CIGRE as condition parameters. According to the characteristics of each condition parameter, relative degradation was introduced to reflect the degree of influence of the factors on the transformer condition. The method of fuzzy mathematics was adopted to determine the membership function of the transformer condition. The calculation used the MATLAB Fuzzy Toolbox to select the condition parameters of the coil winding, iron core, bushing, OLTC, insulating oil and other auxiliary components, together with factors such as load records, performance history, and maintenance records, to establish the fuzzy principles. Examples were presented to support the rationality and effectiveness of the evaluation method of power transformer performance conditions, as based on fuzzy comprehensive evaluation.

Keywords: Fuzzy, relative degradation degree, condition-based maintenance, power transformer.
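
Illustrative sketch (not from the paper): the idea of relative degradation followed by weighted fuzzy aggregation can be shown in a few lines. The grade centres, membership width, weights and parameter limits below are placeholders, not the IEC/IEEE/CIGRE values or the paper's calibrated rules.

```python
import numpy as np

def relative_degradation(value, good, limit):
    """Map a condition parameter onto [0, 1]: 0 = as-new, 1 = at its limit.
    `good` and `limit` stand in for standard reference values."""
    return float(np.clip((value - good) / (limit - good), 0.0, 1.0))

def fuzzy_comprehensive_evaluation(degradations, weights,
                                   grade_centres=(0.1, 0.4, 0.7, 0.9), width=0.3):
    """Toy fuzzy comprehensive evaluation: triangular memberships of each
    parameter's degradation to four condition grades (good/fair/poor/bad),
    aggregated with parameter weights."""
    d = np.asarray(degradations)
    w = np.asarray(weights) / np.sum(weights)
    centres = np.asarray(grade_centres)
    # membership matrix: parameters x grades
    member = np.clip(1.0 - np.abs(d[:, None] - centres[None, :]) / width, 0.0, 1.0)
    grade_scores = w @ member                 # weighted aggregation
    return grade_scores / grade_scores.sum()  # normalised grade vector

# Hypothetical usage: winding temperature, oil breakdown voltage, bushing tan-delta.
degs = [relative_degradation(v, g, l) for v, g, l in [(68, 60, 105), (28, 30, 15), (0.6, 0.3, 1.5)]]
print(fuzzy_comprehensive_evaluation(degs, weights=[0.5, 0.3, 0.2]))
```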

53 Clustering Mixed Data Using Non-normal Regression Tree for Process Monitoring

Authors: Youngji Yoo, Cheong-Sool Park, Jun Seok Kim, Young-Hak Lee, Sung-Shick Kim, Jun-Geol Baek

Abstract:

In the semiconductor manufacturing process, large amounts of data are collected from various sensors at multiple facilities. The data collected from the sensors have several different characteristics due to variables such as product type, preceding processes and recipes. In general, Statistical Quality Control (SQC) methods assume normality of the data to detect out-of-control states of processes. Although the collected data have different characteristics, using them directly as SQC inputs will increase the variation of the data, require wide control limits, and decrease the ability to detect out-of-control states. Therefore, it is necessary to separate similar data groups from the mixed data for more accurate process control. In this paper, we propose a regression tree using a split algorithm based on the Pearson distribution system to handle non-normal distributions in a parametric way. The regression tree finds similar properties of data across different variables. Experiments using real semiconductor manufacturing process data show improved fault detection performance.

Keywords: Semiconductor, non-normal mixed process data, clustering, Statistical Quality Control (SQC), regression tree, Pearson distribution system.

52 Genetic Algorithm Based Design of Fuzzy Logic Power System Stabilizers in Multimachine Power System

Authors: Manisha Dubey, Aalok Dubey

Abstract:

This paper presents an approach for the design of fuzzy logic power system stabilizers using genetic algorithms. In the proposed fuzzy expert system, speed deviation and its derivative have been selected as fuzzy inputs. In this approach, the parameters of the fuzzy logic controllers have been tuned using a genetic algorithm. Incorporating a GA in the design of the fuzzy logic power system stabilizer adds an intelligent dimension to the stabilizer and significantly reduces computational time in the design process. It is shown in this paper that the system dynamic performance can be improved significantly by incorporating a genetic-based searching mechanism. To demonstrate the robustness of the genetic-based fuzzy logic power system stabilizer (GFLPSS), simulation studies on a multimachine system subjected to small perturbations and a three-phase fault have been carried out. Simulation results show the superiority and robustness of the GA-based power system stabilizer as compared to a conventionally tuned controller in enhancing system dynamic performance over a wide range of operating conditions.

Keywords: Dynamic stability, Fuzzy logic power system stabilizer, Genetic Algorithms, Genetic based power system stabilizer.

51 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code

Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang

Abstract:

This research studies the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an augmented AR image card. The new visual tag has the feature of reading both the revealed data of the QR code and the instant data from the AR image card. Furthermore, a handheld communicating device is used to read and decode the new visual tag, and the concealed data of the new visual tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the internet. The data revealed from the QR code are represented as text. Normally, the AR image card is designed to store the corresponding data in 3-dimensional or animation/video form. By exploiting the QR code's high fault tolerance, the new visual tag can access these two different types of data using a handheld communicating device. The new visual tag has the advantage of carrying much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient area for the designed AR card augmenting the QR code is 9% coverage of the total new visual tag's area, and 2) the best location for the augmented AR image card on the QR code is the bottom-right corner of the new visual tag.

Keywords: Augmented reality, QR code, Visual tag, Handheld communicating device.

50 Predicting the Impact of the Defect on the Overall Environment in Function Based Systems

Authors: Parvinder S. Sandhu, Urvashi Malhotra, E. Ardil

Abstract:

A lot of work has been done on predicting the fault proneness of software systems. However, the severity of the faults is more important than the number of faults existing in the developed system, as the major faults matter most for a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; hence, CFS is used for selecting the metrics that are most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy based model provides relatively better prediction accuracy compared to the other models and hence can be used for modeling the level of impact of faults in function based systems.

Keywords: Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.
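
Illustrative sketch (not from the paper): the three reported figures of merit can be computed as below. The severity labels are made-up placeholders purely to show the calculation.

```python
import numpy as np
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error

# Hypothetical predicted vs. actual fault-severity levels (e.g. 1 = low ... 4 = major).
actual    = np.array([1, 3, 4, 2, 4, 1, 2, 3])
predicted = np.array([1, 3, 3, 2, 4, 2, 2, 3])

accuracy = accuracy_score(actual, predicted)               # fraction predicted exactly
mae      = mean_absolute_error(actual, predicted)          # mean |error| in severity levels
rmse     = np.sqrt(mean_squared_error(actual, predicted))  # penalises large misses more
print(f"Accuracy={accuracy:.2f}, MAE={mae:.2f}, RMSE={rmse:.2f}")
```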

49 Health Monitoring of Power Transformers by Dissolved Gas Analysis using Regression Method and Study the Effect of Filtration on Oil

Authors: Anjali Chatterjee, Nirmal Kumar Roy

Abstract:

Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it has a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and is a universal practice today, which started sometime in the 1960s. Failure can occur in a transformer for different reasons, and some failures can be limited or prevented by maintenance. Oil filtration is one of the methods to remove the dissolved gases and prevent deterioration of the oil. In this paper, we analyse the DGA data by regression methods and predict the future gas concentrations in the oil. We present a comparative study of different traditional regression methods and the errors generated by their predictions. With the help of these data, we can deduce the health of the transformer by finding the type of fault, if one has occurred or will occur in the future. In addition, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.

Keywords: Power Transformers, Dissolved Gas Analysis, Regression method, Filtration, oil.
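
Illustrative sketch (not from the paper): a simple least-squares trend fit is one of the traditional regression methods that could be compared for forecasting a dissolved-gas concentration. The gas values, sampling interval and choice of a linear model below are illustrative only.

```python
import numpy as np

# Hypothetical DGA history: acetylene concentration (ppm) sampled every 6 months.
t_months = np.array([0, 6, 12, 18, 24, 30], dtype=float)
c2h2_ppm = np.array([2.0, 2.4, 3.1, 3.5, 4.4, 5.1])

slope, intercept = np.polyfit(t_months, c2h2_ppm, deg=1)   # fit c = a*t + b
future = np.array([36.0, 42.0])
forecast = slope * future + intercept                       # extrapolate the trend
residual_rmse = np.sqrt(np.mean((slope * t_months + intercept - c2h2_ppm) ** 2))

print(f"forecast at 36/42 months: {forecast.round(2)} ppm, fit RMSE: {residual_rmse:.2f} ppm")
```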

48 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study

Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple

Abstract:

There is a dramatic surge in the adoption of Machine Learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, malware detection, to name a few. Artificial Intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been developing underpinning ML algorithms that can improve accuracy, while factors such as resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and two defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt ML techniques without rigorous testing, since they may be vulnerable to adversarial attacks, especially in security-critical areas such as the nuclear industry. We observed that while the adopted defence methods can effectively defend against different attacks, none of them could protect against all five adversarial attacks entirely.

Keywords: Resilient Machine Learning, attacks, defences, nuclear industry, crack detection.
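
Illustrative sketch (not from the paper): the abstract does not name its five attacks, so the block below only demonstrates the general attack family with the Fast Gradient Sign Method (FGSM) on a tiny stand-in classifier. The model, image sizes and epsilon are hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """FGSM: perturb the input in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)          # keep a valid image range
    return x_adv.detach()

# Hypothetical usage with a tiny stand-in "crack detector" (2 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))
images = torch.rand(4, 1, 32, 32)
labels = torch.tensor([0, 1, 1, 0])
adv_images = fgsm_attack(model, images, labels)
print((adv_images - images).abs().max())       # perturbation bounded by eps
```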

47 Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test

Authors: Matthias Kirmse, Uwe Petersohn, Elief Paffrath

Abstract:

As test costs can make up to 50 percent of the total production costs in today's semiconductor industry, efficient test error detection becomes more and more important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after finishing these tests. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we demonstrated its practical applicability.

Keywords: Ensemble methods, fault detection, machine learning, semiconductor test.
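
Illustrative sketch (not from the paper): a voting and a stacking ensemble over a decision tree, an SVM and a naive Bayes learner (a simple stand-in for the Bayesian network) can be set up as below. The synthetic, imbalanced data and all settings are placeholders, not the paper's wafer test data or tuning.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: rows = wafer test results, label = "test error pattern present".
X, y = make_classification(n_samples=400, n_features=20, weights=[0.85], random_state=0)

base = [("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("nb", GaussianNB())]

voting = VotingClassifier(estimators=base, voting="soft")     # (weighted) majority vote
stacking = StackingClassifier(estimators=base)                # meta-learner on base outputs

for name, clf in [("voting", voting), ("stacking", stacking)]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="recall")  # recall of the error class
    print(name, scores.mean().round(3))
```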

46 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data have some uncertainties due to errors by human beings/machines or other sources. These uncertainty factors limit the understanding of system component failure because of incomplete data. In such situations, we need to generalize classical methods to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS) each element is associated with a point value selected from the unit interval [0, 1], which is termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS. Instead of the point-based membership used in FS, interval-based membership is used in VS; the interval-based membership in VS is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because it allows efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

45 A Modified Run Length Coding Technique for Test Data Compression Based on Multi-Level Selective Huffman Coding

Authors: C. Kalamani, K. Paramasivam

Abstract:

Test data compression is an efficient method for reducing the test application cost. The problem of reducing test data has been addressed by researchers in three different ways: test data compression, Built-In Self-Test (BIST) and test set compaction. The latter two methods are capable of enhancing fault coverage at the cost of hardware overhead. The drawback of the conventional methods is that they reduce test storage and test power, but when the test data contain redundant runs, no additional compression is applied. This paper presents a modified Run Length Coding (RLC) technique combined with a Multilevel Selective Huffman Coding (MLSHC) technique to reduce test data volume, test pattern delivery time and power dissipation in scan test applications; where a redundant run length is encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS'89 benchmarks show that our method outperforms most known techniques.

Keywords: Modified run length coding, multilevel selective Huffman coding, built-in-self-test, modified selective Huffman coding, automatic test equipment.
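
Illustrative sketch (not from the paper): plain run-length encoding of a scan vector is shown below to ground the terminology. The paper's actual refinements (replacing redundant runs with a tiny codeword and applying multi-level selective Huffman coding) are not reproduced, and the bit string is hypothetical.

```python
def run_length_encode(bits):
    """Encode a test vector as (symbol, run length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def run_length_decode(runs):
    return "".join(symbol * length for symbol, length in runs)

# Hypothetical scan-test slice dominated by long runs of don't-care fills.
vector = "0000000011000000000000111100000000"
encoded = run_length_encode(vector)
print(encoded)
assert run_length_decode(encoded) == vector
```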

44 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.

Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.

43 Exploring the Physical Environment and Building Features in Earthquake Disaster Areas

Authors: Chang Hsueh-Sheng, Chen Tzu-Ling

Abstract:

Earthquakes are unpredictable natural disasters, and intensive earthquakes have caused serious impacts on socio-economic systems and on environmental and social resilience. Conventional ways to mitigate earthquake disasters are to enhance building codes and advance structural engineering measures. However, earthquake-induced ground damage such as liquefaction, land subsidence and landslides occurs in places near earthquake-prone areas or areas with poor soil conditions. Therefore, this study uses spatial statistical analysis to explore the spatial pattern of damaged buildings. Afterwards, principal components analysis (PCA) is applied to categorize the similar features in different kinds of clustered patterns. The results show that serious landslide-prone areas, proximity to faults, vegetated ground surfaces and mudslide-prone areas are common among the highly damaged buildings. In addition, the oldest buildings are not necessarily the most vulnerable ones; in fact, buildings built between 1974 and 1989 appear to have been more fragile during the earthquake. The incorporation of both spatial statistical analyses and PCA can provide more accurate information to support retrofit programs to enhance earthquake resistance in particular areas.

Keywords: Earthquake disaster, spatial statistical analysis, principal components analysis, clustered patterns.

42 Spread Spectrum Code Estimation by Particle Swarm Algorithm

Authors: Vahid R. Asghari, Mehrdad Ardebilipour

Abstract:

In the context of spectrum surveillance, a new method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. In our previous paper, we used a Genetic Algorithm (GA) to recover the spreading code. Although genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems, increasing the length of the code often leads to an unacceptably slow convergence speed. To solve this problem, we introduce Particle Swarm Optimization (PSO) into code estimation in spread spectrum communication systems. In the search process for code estimation, the PSO algorithm has the merits of rapid convergence to the global optimum without being trapped in local suboptima, and good robustness to noise. In this paper we describe how to implement PSO as a component of a searching algorithm in code estimation. Swarm intelligence boasts a number of advantages due to the use of mobile agents, among them scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. These properties make swarm intelligence very attractive for spread spectrum code estimation, and they also make it suitable for a variety of other kinds of channels. Our results compare swarm-based algorithms with genetic algorithms and also show the PSO algorithm's performance in the code estimation process.

Keywords: Code estimation, Particle Swarm Optimization (PSO), Spread spectrum.
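
Illustrative sketch (not from the paper): a minimal global-best PSO loop is shown below. The cost function is a toy surrogate that simply rewards correlation with a known ±1 chip sequence; the real problem would score candidates against the intercepted signal, and all swarm parameters are illustrative.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Plain global-best PSO over a continuous search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return np.sign(gbest), pbest_cost.min()

# Hypothetical target: recover a 31-chip +/-1 sequence by maximising correlation.
rng = np.random.default_rng(1)
true_code = rng.choice([-1.0, 1.0], 31)
cost = lambda cand: -np.dot(np.sign(cand), true_code) / 31
estimate, best = pso_minimize(cost, dim=31, iters=200)
print("chips recovered:", int(np.sum(estimate == true_code)), "/ 31")
```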

41 Modeling and Analysis of DFIG Based Wind Power System Using Instantaneous Power Components

Authors: Jaimala Gambhir, Tilak Thakur, Puneet Chawla

Abstract:

As per statistical data, the Doubly-Fed Induction Generator (DFIG) based wind turbine with variable speed and variable pitch control is the most common wind turbine in the growing wind market. This machine is usually used in grid-connected wind energy conversion systems to satisfy grid code requirements such as grid stability, Fault Ride Through (FRT), power quality improvement, grid synchronization and power control. Since these requirements are not fulfilled directly by the machine, a control strategy is used on both the stator and rotor sides, along with power electronic converters, to fulfil the requirements stated above. The grid-side converter usually plays a major role in satisfying the grid code requirements of the wind turbine, so in order to improve the operating capacity of the wind turbine under critical situations, an intensive study of both machine-side converter control and grid-side converter control is necessary. In this paper, the DFIG is modeled using instantaneous power components as variables, and the performance of the DFIG system is analysed under grid voltage fluctuations. The voltage fluctuations are created by intentionally lowering and raising the voltage values in the utility grid for simulation purposes, keeping different grid disturbances in view.

Keywords: DFIG, dynamic modeling, DPC, sag, swell, voltage fluctuations, FRT.

40 PSS with Multiple FACTS Controllers Coordinated Design and Real-Time Implementation Using Advanced Adaptive PSO

Authors: Rajendraprasad Narne, P. C. Panda

Abstract:

In this article, coordinated tuning of a power system stabilizer (PSS) with a static var compensator (SVC) and a thyristor controlled series capacitor (TCSC) in a multi-machine power system is proposed. The design of the proposed coordinated damping controller is formulated as an optimization problem, and the controller gains are optimized using advanced adaptive particle swarm optimization (AAPSO). The objective function is framed with the inter-area speed deviations of the generators and is minimized using AAPSO to improve the dynamic stability of the power system under severe disturbance. The proposed coordinated controller performance is evaluated under a wide range of system operating conditions with a three-phase fault disturbance. Using time-domain simulations, the damping characteristics of the proposed controller are compared with those of individually tuned PSS, SVC and TCSC controllers. Finally, real-time simulations are carried out in an Opal-RT hardware simulator to verify the proposed controller performance in the real world.

Keywords: Advanced adaptive particle swarm optimization, Coordinated design, Power system stabilizer, Real-time implementation, static var compensator, Thyristor controlled series capacitor.

39 The Transient Reactive Power Regulation Capability of SVC for Large Scale WECS Connected to Distribution Networks

Authors: Y. Ates, A. R. Boynuegri, M. Uzunoglu, A. Karakas

Abstract:

The recent interest in alternative and renewable energy systems has resulted in an increased share of such systems in the total energy production of the world. Specifically, Wind Energy Conversion Systems (WECS) have recently drawn significant attention among the possible alternative energy options. Despite the positive aspects of WECS penetration all over the world in terms of environmental protection, energy independence of countries, etc., there are significant problems to be solved for the grid connection of large-scale WECS. Reactive power regulation, voltage variation suppression, etc. are major issues to be considered in this regard. Thus, this paper evaluates the application of a Static VAr Compensator (SVC) unit for reactive power regulation and operational continuity of WECS during a fault condition. The system is modeled employing the IEEE 13 node test system, which makes it possible to evaluate the system performance with an overall grid simulation model close to real grid systems. The overall simulation model is developed in the MATLAB/Simulink/SimPowerSystems® environment, and the obtained results effectively match the target of the study.

Keywords: IEEE 13 bus distribution system, reactive power regulation, static VAr compensator, wind energy conversion system.

38 Autonomic Sonar Sensor Fault Manager for Mobile Robots

Authors: Martin Doran, Roy Sterritt, George Wilkie

Abstract:

NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will also become of vital importance in the future if rovers become less dependent on commands received from earth-based control and more dependent on self-configuration and self-decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in missions from the past. In this paper, we focus on using autonomic principles where self-healing, self-optimization, and self-adaption are explored using the MAPE-K model, expanding this model to encapsulate the attributes of Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover. The sonar sensors on the P3-DX robot are used to simulate the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). Experiments using the P3-DX robot focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager system is responsible for deciding how to make use of the remaining ‘enabled’ sonar sensors to compensate for those that are ‘disabled’. The key to this research is that the robot can still detect objects even with reduced sonar sensor capability.

Keywords: Autonomic, self-adaption, self-healing, self-optimization.

37 Efficient Dimensionality Reduction of Directional Overcurrent Relays Optimal Coordination Problem

Authors: Fouad Salha, X. Guillaud

Abstract:

Directional overcurrent relays (DOCR) are commonly used in power system protection as a primary protection in distribution and sub-transmission electrical systems and as a secondary protection in transmission systems. Coordination of protective relays is necessary to obtain selective tripping. In this paper, an approach for efficient dimensionality reduction of the DOCR nonlinear optimum coordination (OC) problem is proposed. This was achieved by modifying the objective function and relaxing several constraints according to a four-way classification of constraints: non-valid, redundant, pre-obtained and valid constraints. According to this classification, the far-end fault effect on the objective function and constraints, and consequently on the relay operating time, was studied. The study was carried out, firstly, by taking into account both near-end and far-end faults in the DOCR coordination problem formulation, and then only faults very close to the primary relays (near-end faults). The optimal coordination (OC) was achieved by simultaneously optimizing all variables (TDS and Ip) in a nonlinear environment using genetic algorithm nonlinear programming techniques. The results of applying the above two approaches to 6-bus and 26-bus systems verify that the consideration of far-end faults in the OC problem formulation does not lose optimality.

Keywords: Backup/Primary relay, Coordination time interval (CTI), directional overcurrent relays, Genetic algorithm, time dial setting (TDS), pickup current setting (Ip), nonlinear programming.
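
Illustrative sketch (not from the paper): relay coordination constraints are typically built from an inverse-time operating characteristic; the block below uses the IEC standard-inverse curve and checks one primary/backup coordination time interval (CTI). The relay settings, fault current and CTI value are hypothetical, and the GA-based joint optimization of all TDS/Ip settings is not reproduced.

```python
def relay_time(i_fault, tds, i_pickup, a=0.14, b=0.02):
    """IEC standard-inverse operating time: t = TDS * A / ((I/Ip)^B - 1)."""
    return tds * a / ((i_fault / i_pickup) ** b - 1.0)

def cti_satisfied(i_fault, primary, backup, cti=0.3):
    """Selectivity constraint: backup must lag the primary by at least CTI seconds."""
    t_p = relay_time(i_fault, *primary)
    t_b = relay_time(i_fault, *backup)
    return t_b - t_p >= cti, t_p, t_b

# Hypothetical near-end fault seen by a primary/backup relay pair: (TDS, Ip in amps).
ok, t_p, t_b = cti_satisfied(i_fault=4000.0, primary=(0.10, 400.0), backup=(0.25, 500.0))
print(f"primary {t_p:.2f}s, backup {t_b:.2f}s, coordinated: {ok}")
```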

36 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

A Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW) is proposed; this strategy is an enhanced version of the Latest Access Largest Weight strategy. Replication should be used wisely because the storage capacity of each Grid site is limited, so it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection picks the best replica location from among the many replicas based on response time, which can be determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage and storage resource usage.

Keywords: Data grid, data replication, simulation, replica selection, replica placement.
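
Illustrative sketch (not from the paper): a response-time style of replica selection can be scored roughly as below. The cost terms and all numbers are illustrative placeholders, not ELALW's exact formula or OptorSim output.

```python
def response_time(size_mb, bandwidth_mbps, storage_latency_s, queued_requests,
                  per_request_delay_s, hop_count, per_hop_delay_s):
    """Estimated time to fetch a replica from one site: transfer time plus
    storage access latency, queueing at the storage element, and network distance."""
    transfer = size_mb * 8.0 / bandwidth_mbps
    queueing = queued_requests * per_request_delay_s
    distance = hop_count * per_hop_delay_s
    return transfer + storage_latency_s + queueing + distance

def select_best_replica(sites):
    """Pick the site with the smallest estimated response time."""
    return min(sites, key=lambda s: response_time(**s["metrics"]))

# Hypothetical candidate sites holding the requested 500 MB file.
sites = [
    {"name": "siteA", "metrics": dict(size_mb=500, bandwidth_mbps=100, storage_latency_s=0.5,
                                      queued_requests=12, per_request_delay_s=0.4,
                                      hop_count=3, per_hop_delay_s=0.2)},
    {"name": "siteB", "metrics": dict(size_mb=500, bandwidth_mbps=60, storage_latency_s=0.2,
                                      queued_requests=1, per_request_delay_s=0.4,
                                      hop_count=6, per_hop_delay_s=0.2)},
]
print(select_best_replica(sites)["name"])
```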

35 Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes the Karnaugh map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required for the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogenous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.

Keywords: Boolean functions, simplification, Karnaugh map, implementation of logic functions, modular neural networks.
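
Illustrative sketch (not from the paper): the frequency-domain cross-correlation that the speed-up rests on can be written in a few lines. The truth-table row and pattern below are hypothetical and far shorter than the cases where the O(n log n) advantage matters.

```python
import numpy as np

def xcorr_fft(a, b):
    """Circular cross-correlation via the frequency domain:
    corr = IFFT( FFT(a) * conj(FFT(b)) ).
    Needs O(n log n) operations instead of the O(n^2) of direct sliding correlation."""
    n = len(a)
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n))))

# Hypothetical search: find where a group of three ones occurs in a truth-table row.
row     = np.array([0, 0, 1, 1, 1, 0, 0, 1], dtype=float)
pattern = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
scores = xcorr_fft(row, pattern)
print(scores.round(1), "-> best alignment at shift", int(scores.argmax()))
```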

34 Rock Slope Stabilization and Protection for Roads and Multi-Storey Structures in Jabal Omar, Saudi Arabia

Authors: Ibrahim Abdel Gadir Malik, Dafalla Siddig Dafalla, Abdelazim Ibrahim

Abstract:

Jabal Omar is located on the western side of Makkah city in Saudi Arabia. The proposed Jabal Omar Development project includes several multi-storey buildings, roads, bridges and below-ground structures founded at various depths. In this study, geological mapping and site inspection covering pre-selected areas were carried out within the easily accessed parts. Geological features, including rock types, structures, degree of weathering, and geotechnical hazards, were observed, analyzed with the specified software, and documented in the form of photographs. The presence of joints and fractures in the area makes the rock blocks small and weak. The site is heavily jointed: it was observed that the northern side consists of 3 to 4 jointing systems with 2 random fractures associated with dykes, while the southern part is affected by 2 to 3 jointing systems with minor fault and shear zones. From the field measurements and observations, it was concluded that Jabal Omar is intruded by andesitic and basaltic dykes of different thicknesses and orientations. These dykes made the outcrop weak and highly deformed and made the rock masses sensitive to weathering.

Keywords: Rock, slope, stabilization, protection, Makkah.

32 Hazard Identification and Sensitivity of Potential Resource of Emergency Water Supply

Authors: A. Bumbová, M. Čáslavský, F. Božek, J. Dvořák, E. Bakoš

Abstract:

The paper presents a case study of hazard identification and sensitivity analysis of a potential resource of emergency water supply, as part of the application of a methodology for classifying drinking water resources for the emergency supply of the population. The case study has been carried out on a selected resource of emergency water supply in one region of the Czech Republic. The hazard identification and sensitivity analysis of the potential resource of emergency water supply are based on a unique procedure and on developed general registers of selected types of hazards and sensitivities. The registers have been developed with the help of the “Fault Tree Analysis” method in combination with the “What-if” method. The identified hazards for the assessed resource include hailstorms and torrential rains, drought, soil erosion, accidents of farm machinery, and agricultural production. The developed registers of hazards and vulnerabilities, together with a semi-quantitative assessment of hazards for individual parts of the hydrological structure and technological elements of the presented drilled wells, form the basis for a semi-quantitative risk assessment of the potential resource of emergency supply of the population and the subsequent classification of such a resource within the system of crisis planning.

Keywords: Hazard identification, register of hazards, sensitivity identification, register of sensitivity, emergency water supply, state of crisis, resource of emergency water supply, ground water.

31 Ground Motion Modelling in Bangladesh Using Stochastic Method

Authors: Mizan Ahmed, Srikanth Venkatesan

Abstract:

The geological and tectonic framework indicates that Bangladesh is one of the most seismically active regions in the world. The Bengal Basin is at the junction of three major interacting plates: the Indian, Eurasian, and Burma Plates. Besides, there are many active faults within the region, e.g. the large Dauki fault in the north. The country has experienced a number of destructive earthquakes due to the movement of these active faults. The current seismic provisions of Bangladesh are mostly based on earthquake data prior to 1990. Given the record of earthquakes post-1990, there is a need to revisit the design provisions of the code. This paper compares the base shear demand of three major cities in Bangladesh: Dhaka (the capital city), Sylhet, and Chittagong, for earthquake scenarios of magnitudes Mw 7.0, 7.5, 8.0, and 8.5 using a stochastic model. In particular, the stochastic model allows the flexibility to input region-specific parameters such as the shear wave velocity profile (developed from the Global Crustal Model CRUST2.0) and to include the effects of attenuation as individual components. Effects of soil amplification were analysed using the Extended Component Attenuation Model (ECAM). Results show that the estimated base shear demand is higher than the code provisions, suggesting that additional seismic design considerations are needed in the study regions.

Keywords: Attenuation, earthquake, ground motion, stochastic, seismic hazard.

30 Pyrite from Zones of Mz-Kz Reactivation of Large Faults on the Eastern Slope of the Ural Mountains, Russia

Authors: O. B. Azovskova, А. А. Malyugin, А. А. Nekrasova, M. Yu. Yanchenko

Abstract:

Pyritisation halos are identified in weathering crusts and unconsolidated formations at five locations within large fault structures of the Urals’ eastern slope. Electron microscopy reveals the presence of inclusions and growths on pyrite faces, normally on cubic pyrite with striations or on combinations of cubes and other forms. The following neogenesis types are established: native elements and intermetallic compounds (including gold and silver), halogenides, sulphides, sulfosalts, tellurides, sulphotellurides, selenides, tungstates, sulphates, phosphates, and carbon-based substances. A direct relationship is noted between the amount and diversity of such mineral phases and the proximity to, and scale of, ore-grade mineralization. Gold and silver (both in native form and within tellurides), the presence of lead (galena, native lead), native tungsten, and possibly molybdenite and sulfosalts can indicate gold-bearing formations. Native tungsten is found in the Urals for the first time, in crystallised and druse-like form. A link is suggested between this unusual mineralization and “reducing” hydrothermal fluids from deep-seated faults at later stages of the Urals’ reactivation.

Keywords: Gold in weathering crust, low temperature metasomatism, pyrite, native tungsten, Urals.

29 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system and motor, and by the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured machining signal, which are less interfered with by the noise, can be extracted. Thus, the filtered signal is more reliable to analyse in terms of noise content compared to the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ indicate that the signal is highly interrupted by noise. This method can be utilised as a proactive tool in evaluating the noise content of a signal. The evaluation of noise content, as well as its elimination, is very important, especially for machining operation fault diagnosis purposes. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interruption that could change the original signal features and consequently deteriorate the useful sensory information can be eliminated.

Keywords: Digital signal filtering, I-kaz method, Machining monitoring, Noise Cancelling, Sound.
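
Illustrative sketch (not from the paper): a generic IIR notch filter (scipy's iirnotch) is used below to suppress a single known narrowband noise component from a synthetic machining signal. This is not the authors' Z-notch formulation, and the sampling rate, noise frequency and signals are hypothetical.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 10_000.0                      # sampling rate (Hz)
noise_freq = 50.0                  # e.g. a mains/hydraulic-pump component to suppress
t = np.arange(0, 1.0, 1.0 / fs)
cutting = 0.5 * np.sin(2 * np.pi * 800.0 * t)          # stand-in tool/workpiece signal
measured = cutting + 1.5 * np.sin(2 * np.pi * noise_freq * t)

b, a = iirnotch(w0=noise_freq, Q=30.0, fs=fs)           # design the notch
filtered = filtfilt(b, a, measured)                     # zero-phase filtering

print("residual 50 Hz energy:",
      round(float(np.abs(np.fft.rfft(filtered))[int(noise_freq)]), 2))
```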

28 Altered Network Organization in Mild Alzheimer's Disease Compared to Mild Cognitive Impairment Using Resting-State EEG

Authors: Chia-Feng Lu, Yuh-Jen Wang, Shin Teng, Yu-Te Wu, Sui-Hing Yan

Abstract:

Brain functional networks based on resting-state EEG data were compared between patients with mild Alzheimer’s disease (mAD) and matched patients with the amnestic subtype of mild cognitive impairment (aMCI). We integrated the time–frequency cross mutual information (TFCMI) method, to estimate the EEG functional connectivity between cortical regions, with network analysis based on graph theory, to further investigate the alterations of functional networks in the mAD group compared with the aMCI group. We aimed at investigating the changes of network integrity, local clustering, information processing efficiency, and fault tolerance in mAD brain networks for different frequency bands based on several topological properties, including degree, strength, clustering coefficient, shortest path length, and efficiency. Results showed that disruptions of network integrity and reductions of network efficiency in mAD, characterized by lower degree, decreased clustering coefficient, higher shortest path length, and reduced global and local efficiencies in the delta, theta, beta2, and gamma bands, were evident. The significant changes in network organization can be used to assist discrimination of mAD from aMCI in clinical practice.

Keywords: EEG, functional connectivity, graph theory, TFCMI.
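
Illustrative sketch (not from the paper): the graph-theoretic measures named above are standard and can be computed, for example, with networkx once a connectivity matrix has been thresholded into a graph. The matrix, threshold and node count below are placeholders rather than the TFCMI values from the study.

```python
import numpy as np
import networkx as nx

# Hypothetical TFCMI-style connectivity matrix for a handful of cortical regions.
rng = np.random.default_rng(0)
conn = rng.random((10, 10))
conn = (conn + conn.T) / 2                      # symmetric functional connectivity
np.fill_diagonal(conn, 0.0)

G = nx.from_numpy_array((conn > 0.6).astype(int))   # binarise with an arbitrary threshold

degrees = dict(G.degree())
clustering = nx.average_clustering(G)
global_eff = nx.global_efficiency(G)
local_eff = nx.local_efficiency(G)
# shortest path length is only defined on a connected graph
spl = nx.average_shortest_path_length(G) if nx.is_connected(G) else float("nan")

print(f"mean degree={np.mean(list(degrees.values())):.1f}, C={clustering:.2f}, "
      f"L={spl:.2f}, Eglob={global_eff:.2f}, Eloc={local_eff:.2f}")
```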

27 Automotive ECU Design with Functional Safety for Electro-Mechanical Actuator Systems

Authors: Kyung-Jung Lee, Young-Hun Ki, Hyun-Sik Ahn

Abstract:

In this paper, we propose a hardware and software design method for automotive Electronic Control Units (ECUs) considering functional safety. The proposed ECU is intended for application to electro-mechanical actuator systems, and the validity of the design method is shown by application to the Electro-Mechanical Brake (EMB) control system, which is used as a brake actuator in Brake-By-Wire (BBW) systems. The importance of a functional-safety-based design approach to EMB ECU design has been emphasized because of its safety-critical functions, which are executed with the aid of many electric actuators, sensors, and application software. Based on hazard analysis and risk assessment according to ISO26262, the EMB system should be ASIL-D-compliant, the highest ASIL level. To this end, an external signature watchdog and an Infineon TriCore 32-bit microcontroller are used to reduce risks considering common-cause hardware failure. Moreover, a software design method is introduced for implementing functional-safety-oriented monitoring functions based on an asymmetric dual-core architecture considering redundancy and diversity. The validity of the proposed ECU design approach is verified by using the EMB Hardware-In-the-Loop (HILS) system, which consists of the EMB assembly, the actuator ECU, a host PC, and a few debugging devices. Furthermore, it is shown that the existing sensor fault-tolerant control system can be used more effectively to mitigate the effects of hardware and software faults by applying the proposed ECU design method.

Keywords: BBW (Brake-By-Wire), EMB (Electro-Mechanical Brake), Functional Safety, ISO26262.
