Search results for: Radial Basis Function Neural Networks
5134 Augmented Lyapunov Approach to Robust Stability of Discrete-time Stochastic Neural Networks with Time-varying Delays
Authors: Shu Lü, Shouming Zhong, Zixin Liu
Abstract:
In this paper, the robust exponential stability problem of discrete-time uncertain stochastic neural networks with time-varying delays is investigated. By introducing a new augmented Lyapunov function, some delay-dependent stability results are obtained in terms of the linear matrix inequality (LMI) technique. Compared with some existing results in the literature, the conservatism of the new criteria is reduced notably. Three numerical examples are provided to demonstrate the reduced conservatism and the effectiveness of the proposed method.
Keywords: Robust exponential stability, delay-dependent stability, discrete-time neural networks, stochastic, time-varying delays.
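For orientation, a representative model of the class treated here (the exact system studied in the paper may differ; the notation below is an assumption) is the uncertain discrete-time stochastic delayed network

\[
x(k+1) = (A+\Delta A)\,x(k) + (B+\Delta B)\,f\bigl(x(k)\bigr) + (C+\Delta C)\,g\bigl(x(k-\tau(k))\bigr) + \sigma\bigl(k, x(k), x(k-\tau(k))\bigr)\,\omega(k), \quad \tau_m \le \tau(k) \le \tau_M,
\]

where \(\Delta A, \Delta B, \Delta C\) are norm-bounded parameter uncertainties, \(f\) and \(g\) are activation functions, \(\omega(k)\) is a scalar noise sequence, and the delay-dependent LMI conditions certify robust exponential stability of the equilibrium for all admissible uncertainties and delays.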
5133 A Novel Fuzzy-Neural Based Medical Diagnosis System
Authors: S. Moein, S. A. Monadjemi, P. Moallem
Abstract:
In this paper, the application of artificial neural networks to typical disease diagnosis is investigated. The actual diagnostic procedure usually employed by physicians was analyzed and converted to a machine-implementable format. After selecting some symptoms of eight different diseases, a data set containing the information of a few hundred cases was compiled and applied to an MLP neural network. The results of the experiments, as well as the advantages of using a fuzzy approach, are discussed. The outcomes suggest the importance of effective symptom selection and the advantages of data fuzzification in a neural network-based automatic medical diagnosis system.
Keywords: Artificial Neural Networks, Fuzzy Logic, Medical Diagnosis, Symptoms, Fuzzification.
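As a rough illustration of the fuzzification step discussed above (a sketch only: the symptom scale, linguistic terms, and membership shapes are assumptions, not taken from the paper), each raw symptom score might be expanded into membership degrees before being fed to the MLP:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function rising from a to a peak at b and falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify_symptom(score):
    """Map a raw symptom score in [0, 10] to (mild, moderate, severe) membership degrees."""
    return np.array([triangular(score, -0.1, 0.0, 5.0),
                     triangular(score, 0.0, 5.0, 10.0),
                     triangular(score, 5.0, 10.0, 10.1)])

raw_symptoms = np.array([2.0, 7.5, 0.5])                          # three hypothetical symptom scores
features = np.concatenate([fuzzify_symptom(s) for s in raw_symptoms])
print(features)                                                   # nine membership degrees for the MLP input layer
```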
5132 An Inverse Optimal Control Approach for the Nonlinear System Design Using ANN
Authors: M. P. Nanda Kumar, K. Dheeraj
Abstract:
The design of a feedback controller that minimizes a given performance criterion for a general non-linear dynamical system is difficult, if not impossible. However, for a large class of non-linear dynamical systems, the open-loop control that minimizes a performance criterion can be obtained using the calculus of variations and Pontryagin’s minimum principle. In this paper, the open-loop optimal trajectories that minimize a given performance measure are used to train a neural network whose inputs are the state variables of the non-linear dynamical system and whose desired output is the open-loop optimal control. This trained neural network is then used as the feedback controller. In other words, an attempt is made here to solve the “inverse optimal control problem” by using the state and control trajectories that are optimal in an open-loop sense.
Keywords: Inverse Optimal Control, Radial basis function neural network, Controller Design.
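A minimal sketch of this idea with a radial basis function network, where synthetic data stand in for the open-loop optimal trajectories obtained from Pontryagin’s minimum principle (the state dimension, centers, and widths are illustrative assumptions):

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF activations of each state sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X_opt = rng.uniform(-1, 1, size=(500, 2))              # states along open-loop optimal trajectories (stand-in)
U_opt = -1.2 * X_opt[:, :1] - 0.8 * X_opt[:, 1:]       # corresponding open-loop optimal controls (stand-in)

centers = rng.uniform(-1, 1, size=(25, 2))             # RBF centers scattered over the state space
Phi = rbf_features(X_opt, centers, width=0.5)
W, *_ = np.linalg.lstsq(Phi, U_opt, rcond=None)        # train the linear output weights

def feedback_controller(x):
    """Approximate optimal state feedback u = NN(x) learned from the open-loop data."""
    return (rbf_features(np.atleast_2d(x), centers, 0.5) @ W).ravel()

print(feedback_controller(np.array([0.3, -0.2])))
```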
5131 Testing the Accuracy of ML-ANN for Harmonic Estimation in Balanced Industrial Distribution Power System
Authors: Wael M. El-Mamlouk, Metwally A. El-Sharkawy, Hossam. E. Mostafa
Abstract:
In this paper, we analyze and test a scheme for the estimation of the electrical fundamental frequency signals from the harmonic load current and voltage signals. The scheme is based on using two different Multi-Layer Artificial Neural Networks (ML-ANN), one for the current and the other for the voltage. This study also analyzes and tests the effect of choosing the optimum artificial neural network sizes, which determine the quality and accuracy of the estimation of the electrical fundamental frequency signals. The Simulink toolbox of MATLAB has been used for the simulation of the test system and the testing of the neural networks.
Keywords: Harmonics, Neural Networks, Modeling, Simulation, Active filters, Electric networks.
5130 Prediction of Bath Temperature Using Neural Networks
Authors: H. Meradi, S. Bouhouche, M. Lahreche
Abstract:
In this work, we consider an application of neural networks to the LD converter. This approach provides a reliable prediction of the steel temperature and reduces the reblow ratio in the steelworks. A conventional model has been applied to the charge calculation, but the results obtained with this technique are not always good because of the complexity of the process. Difficulties are mainly generated by the noisy measurements and the process nonlinearities. Artificial Neural Networks (ANNs) have become a powerful tool for such complex applications. A backpropagation algorithm is used to train the neural networks. The ANN is used to predict the steel bath temperature at the end of the oxygen converter process. The model has 11 input process variables and one output. The model was tested in the steelworks; the results obtained with the neural approach are better than those of the conventional model.
Keywords: LD converter, bath temperature, neural networks.
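A minimal sketch of such a multi-input regression model, using placeholder data in place of the converter measurements (the 11 variables, network size, and temperature relation below are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))                                    # 11 input process variables (placeholder data)
y = 1600 + 25 * X[:, 0] - 10 * X[:, 3] + rng.normal(0, 5, 1000)    # stand-in end-point bath temperature [degC]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```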
5129 Applications of Cascade Correlation Neural Networks for Cipher System Identification
Authors: B. Chandra, P. Paul Varghese
Abstract:
Crypto system identification is one of the challenging tasks in cryptanalysis. The paper discusses the possibility of employing neural networks for the identification of cipher systems from cipher texts. A Cascade Correlation Neural Network and a Back Propagation Network have been employed for the identification of cipher systems. A very large collection of cipher texts was generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising results were obtained in terms of accuracy with both neural network models, but it was observed that the Cascade Correlation Neural Network performed better than the Back Propagation Network.
Keywords: Back Propagation Neural Networks, Cascade Correlation Neural Network, Crypto systems, Block Cipher, Stream Cipher.
5128 Improvement of Ground Truth Data for Eye Location on Infrared Driver Recordings
Authors: Sorin Valcan, Mihail Găianu
Abstract:
Labeling is a very costly and time-consuming process which aims to generate datasets for training neural networks in several functionalities and projects. For driver monitoring system projects, the need for labeled images has a significant impact on the budget and the distribution of effort. This paper presents the modifications made to a ground truth data generation algorithm for 2D eye location on infrared images of drivers in order to improve the quality of the data and the performance of the trained neural networks. The algorithm's restrictions are made stricter, which makes it more accurate but also less consistent. The resulting dataset becomes smaller and is not altered by any kind of manual label adjustment before being used in the neural network training process. These changes resulted in much better performance of the trained neural networks.
Keywords: Labeling automation, infrared camera, driver monitoring, eye detection, Convolutional Neural Networks.
5127 Modeling and Simulation of Position Estimation of Switched Reluctance Motor with Artificial Neural Networks
Authors: Oguz Ustun, Erdal Bekiroglu
Abstract:
In the present study, position estimation of a switched reluctance motor (SRM) has been achieved on the basis of artificial neural networks (ANNs). The ANNs can estimate the rotor position without using an extra rotor position sensor, by measuring the phase flux linkages and phase currents. A flux linkage-phase current-rotor position data set and a supervised backpropagation learning algorithm are used in training the ANN-based position estimator. A 4-phase SRM has been used to verify the accuracy and feasibility of the proposed position estimator. Simulation results show that the proposed position estimator gives precise and accurate position estimates for both low and high reference speeds of the SRM.
Keywords: Artificial neural networks, modeling and simulation, position observer, switched reluctance motor.
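A small sketch of how the training pairs for such an estimator could be assembled, with made-up waveforms standing in for the measured phase quantities (the resistance, sampling step, and angle profile are illustrative assumptions):

```python
import numpy as np

def flux_linkage(v, i, R, dt):
    """Estimate the phase flux linkage by integrating (v - R*i) over time."""
    return np.cumsum(v - R * i) * dt

dt, R = 1e-4, 0.5                                   # sampling step [s] and phase resistance [ohm] (assumed)
t = np.arange(0.0, 0.02, dt)
v = 12.0 * np.ones_like(t)                          # stand-in phase voltage during one stroke
i = 2.0 * (1.0 - np.exp(-t / 0.004))                # stand-in phase current response
psi = flux_linkage(v, i, R, dt)

theta = np.linspace(0.0, 30.0, t.size)              # stand-in rotor angle over the stroke [deg]
X, y = np.column_stack([psi, i]), theta             # (flux linkage, current) -> rotor position pairs
# X and y would then train the supervised backpropagation position estimator.
```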
5126 Strongly Coupled Finite Element Formulation of Electromechanical Systems with Integrated Mesh Morphing using Radial Basis Functions
Authors: D. Kriebel, J. E. Mehner
Abstract:
The paper introduces a method to efficiently simulate the nonlinearly changing electrostatic fields occurring in micro-electromechanical systems (MEMS). Large deflections of the capacitor electrodes usually introduce nonlinear electromechanical forces on the mechanical system. Traditional finite element methods require a time-consuming remeshing process to capture exact results for this physical domain interaction. In order to accelerate the simulation and eliminate the remeshing process, a strongly coupled electromechanical transducer element is introduced that combines finite elements with an advanced mesh morphing technique based on radial basis functions (RBF). The RBF allows large geometrical changes of the electric field domain while retaining high element quality in the deformed mesh. Coupling effects between the mechanical and electrical domains are directly included in the element formulation. Fringing field effects are described accurately by using traditional arbitrary shape functions.
Keywords: electromechanical, electric field, transducer, simulation, modeling, finite-element, mesh morphing, radial basis function
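The mesh morphing step can be illustrated with a minimal Gaussian-RBF interpolation sketch in which prescribed boundary-node displacements are propagated to interior nodes (toy 2D coordinates, no polynomial augmentation; this is not the authors' element formulation):

```python
import numpy as np

def rbf(r, eps=1.0):
    """Gaussian radial basis function of the distance r."""
    return np.exp(-(eps * r) ** 2)

def morph(boundary, displacement, interior, eps=1.0):
    """Interpolate prescribed boundary displacements onto interior mesh nodes."""
    d_bb = np.linalg.norm(boundary[:, None] - boundary[None, :], axis=2)
    coeffs = np.linalg.solve(rbf(d_bb, eps), displacement)      # one coefficient column per direction
    d_ib = np.linalg.norm(interior[:, None] - boundary[None, :], axis=2)
    return rbf(d_ib, eps) @ coeffs

# Toy example: the upper electrode edge moves down and interior nodes follow smoothly.
boundary = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
displacement = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, -0.1], [0.0, -0.1]])
interior = np.array([[0.5, 0.5], [0.25, 0.75]])
print(morph(boundary, displacement, interior))
```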
5125 Global Exponential Stability of Impulsive BAM Fuzzy Cellular Neural Networks with Time Delays in the Leakage Terms
Authors: Liping Zhang, Kelin Li
Abstract:
In this paper, a class of impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms is formulated and investigated. By establishing a delay differential inequality and using M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and application of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.
Keywords: Global exponential stability, bidirectional associative memory, fuzzy cellular neural networks, leakage delays, impulses.
5124 Almost Periodic Solution for an Impulsive Neural Networks with Distributed Delays
Authors: Lili Wang
Abstract:
By using the estimate of the Cauchy matrix of linear impulsive differential equations, the Banach fixed point theorem, and the Gronwall-Bellman inequality, some sufficient conditions are obtained for the existence and exponential stability of an almost periodic solution for an impulsive neural network with distributed delays. An example is presented to illustrate the feasibility and effectiveness of the results.
Keywords: Almost periodic solution, Exponential stability, Neural networks, Impulses.
5123 Improved Robust Stability Criteria for Discrete-time Neural Networks
Authors: Zixin Liu, Shu Lü, Shouming Zhong, Mao Ye
Abstract:
In this paper, the robust exponential stability problem of uncertain discrete-time recurrent neural networks with time-varying delay is investigated. By constructing a new augmented Lyapunov-Krasovskii functional, some new improved stability criteria are obtained in the form of linear matrix inequalities (LMIs). Compared with some recent results in the literature, the conservatism of the new criteria is reduced notably. Two numerical examples are provided to demonstrate the reduced conservatism and the effectiveness of the proposed results.
Keywords: Robust exponential stability, delay-dependent stability, discrete-time neural networks, time-varying delays.
5122 Delay-Distribution-Dependent Stability Criteria for BAM Neural Networks with Time-Varying Delays
Authors: J.H. Park, S. Lakshmanan, H.Y. Jung, S.M. Lee
Abstract:
This paper is concerned with delay-distribution-dependent stability criteria for bidirectional associative memory (BAM) neural networks with time-varying delays. Based on a Lyapunov-Krasovskii functional and a stochastic analysis approach, a delay-probability-distribution-dependent sufficient condition is derived to guarantee that the considered BAM neural networks are globally asymptotically stable in the mean square. The criteria are formulated in terms of a set of linear matrix inequalities (LMIs), which can be checked efficiently by the use of standard numerical packages. Finally, a numerical example and its simulation are given to demonstrate the usefulness and effectiveness of the proposed results.
Keywords: BAM neural networks, Probabilistic time-varying delays, Stability criteria.
5121 Robotic Arm Control with Neural Networks Using Genetic Algorithm Optimization Approach
Authors: A. Pajaziti, H. Cana
Abstract:
In this paper, a structural genetic algorithm is used to optimize the neural network that controls the joint movements of a robotic arm. The robotic arm has also been modeled in 3D and simulated in real time in MATLAB. It is found that neural networks provide a simple and effective way to control the robot's tasks. Computer simulation examples are given to illustrate the significance of this method. By combining the genetic algorithm optimization method and neural networks for the given robotic arm with 5 D.O.F., the results obtained show that the overshooting time of the base joint movements was about 0.5 seconds without a controller and about 0.2 seconds with the neural network controller (optimized with the genetic algorithm), and that a population size of 150 gave the best results.
Keywords: Robotic Arm, Neural Network, Genetic Algorithm, Optimization.
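A minimal sketch of evolving a small neural controller with a genetic algorithm (the network size, fitness data, and GA operators below are illustrative assumptions; only the population size of 150 is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recorded data: joint state (error, error rate) -> desired joint command
states = rng.uniform(-1, 1, (200, 2))
targets = -2.0 * states[:, :1] - 0.5 * states[:, 1:]         # stand-in desired commands

def controller(weights, x):
    """Tiny 2-4-1 feed-forward controller encoded as a flat weight vector."""
    W1, b1, W2, b2 = np.split(weights, [8, 12, 16])
    return np.tanh(x @ W1.reshape(2, 4) + b1) @ W2.reshape(4, 1) + b2

def fitness(weights):
    """Negative mean squared tracking error over the recorded states."""
    return -np.mean((controller(weights, states) - targets) ** 2)

pop = rng.normal(size=(150, 17))                              # population size 150, as in the paper
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-30:]]                   # elitist selection of the best 30
    pop = parents[rng.integers(0, 30, 150)] + 0.05 * rng.normal(size=(150, 17))  # mutation

best = pop[np.argmax([fitness(w) for w in pop])]
```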
5120 Some Remarkable Properties of a Hopfield Neural Network with Time Delay
Authors: Kelvin Rozier, Vladimir E. Bondarenko
Abstract:
It is known that an analog Hopfield neural network with time delay can generate outputs which are similar to the human electroencephalogram. To gain deeper insight into the mechanisms of rhythm generation by Hopfield neural networks and to study the effects of noise on their activities, we investigated the behaviors of networks with symmetric and asymmetric interneuron connections. The neural network under study consists of 10 identical neurons. For symmetric (fully connected) networks all interneuron connections aij = +1; the interneuron connections for asymmetric networks form an upper triangular matrix with non-zero entries aij = +1. The behavior of the network is described by 10 differential equations, which are solved numerically. The results of simulations demonstrate some remarkable properties of a Hopfield neural network, such as linear growth of the outputs, dependence of the synchronization properties on the connection type, huge amplification of oscillations by external uniform noise, and the capability of the neural network to transform one type of noise into another.
Keywords: Chaos, Hopfield neural network, noise, synchronization.
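For readers who want to reproduce the qualitative behavior, here is a minimal sketch of one commonly used delayed Hopfield form; the unit decay rate, tanh outputs, delay value, and noise model are assumptions, and the paper's exact equations and parameters may differ:

```python
import numpy as np

n, tau, dt, steps = 10, 1.0, 0.01, 20000              # 10 identical neurons, assumed delay and step
delay = int(tau / dt)

A_symmetric = np.ones((n, n))                          # fully connected, a_ij = +1
A_asymmetric = np.triu(np.ones((n, n)))                # upper-triangular connections, a_ij = +1

def simulate(A, noise_level=0.0, seed=0):
    """Euler integration of dx/dt = -x(t) + A . tanh(x(t - tau)) + noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros((steps + delay, n))
    x[:delay] = 0.1 * rng.standard_normal((delay, n))  # random initial history
    for k in range(delay, steps + delay - 1):
        drive = A @ np.tanh(x[k - delay])
        noise = noise_level * rng.standard_normal(n)
        x[k + 1] = x[k] + dt * (-x[k] + drive + noise)
    return np.tanh(x[delay:])                          # neuron outputs

outputs = simulate(A_asymmetric, noise_level=0.05)
```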
5119 A Novel Approach to Positive Almost Periodic Solution of BAM Neural Networks with Time-Varying Delays
Abstract:
In this paper, based on almost periodic functional hull theory and M-matrix theory, some sufficient conditions are established for the existence and uniqueness of positive almost periodic solution for a class of BAM neural networks with time-varying delays. An example is given to illustrate the main results.
Keywords: Delayed BAM neural networks, Hull theorem, M-matrix, Almost periodic solution, Global exponential stability.
5118 An Analysis of Global Stability of Cohen-Grossberg Neural Networks with Multiple Time Delays
Authors: Zeynep Orman, Sabri Arik
Abstract:
This paper presents a new sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for Cohen-Grossberg neural networks with multiple time delays. The results establish a relationship between the network parameters of the neural system independently of the delay parameters. The results are also compared with the previously reported results in the literature.
Keywords: Equilibrium and stability analysis, Cohen-Grossberg Neural Networks, Lyapunov Functionals.
5117 Passivity Analysis of Stochastic Neural Networks With Multiple Time Delays
Authors: Biao Qin, Jin Huang, Jiaojiao Ren, Wei Kang
Abstract:
This paper deals with the problem of passivity analysis for stochastic neural networks with leakage, discrete and distributed delays. By using the delay partitioning technique, the free weighting matrix method and stochastic analysis, several sufficient conditions for the passivity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), in which both the time delay and its time derivative are fully taken into account. A numerical example is given to show the usefulness and effectiveness of the obtained results.
Keywords: Passivity, Stochastic neural networks, Multiple time delays, Linear matrix inequalities (LMIs).
5116 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs – Sigmoid, ReLU, and Tanh – have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment on multi-layered NN architectures (both shallow and deep; Convolutional NN and VGG16) and investigate how well the network responds to two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately, against a traditional, single combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that on using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also achieved convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
Keywords: Activation Function, Universal Approximation function, Neural Networks, convergence.
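A minimal sketch of the alternating-activation idea on a plain feed-forward stack (the layer sizes are placeholders, and the paper's experiments use convolutional and VGG16 architectures rather than this toy network):

```python
import torch.nn as nn

def stack(sizes, activations):
    """Build Linear layers, cycling through the given activation classes per hidden layer."""
    layers = []
    for i, (fan_in, fan_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        layers += [nn.Linear(fan_in, fan_out), activations[i % len(activations)]()]
    return nn.Sequential(*layers[:-1])                         # drop the activation after the output layer

single_af = stack([784, 256, 128, 10], [nn.ReLU])              # ReLU-ReLU baseline
alternating = stack([784, 256, 128, 10], [nn.Tanh, nn.ReLU])   # Tanh-ReLU, alternated layer by layer
print(alternating)
```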
5115 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model
Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy
Abstract:
A complex-valued neural network is a neural network which consists of complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the Generalized Mean Neuron model in a complex-valued neural network that uses the back-propagation algorithm (called “Complex-BP”) for learning. Our experimental results demonstrate the effectiveness of a Generalized Mean Neuron model in the complex plane for signal processing over a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for error convergence in a Generalized Mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex valued neural network, Generalized Mean neuron model, Signal processing.
5114 A Parameter-Tuning Framework for Metaheuristics Based on Design of Experiments and Artificial Neural Networks
Authors: Felix Dobslaw
Abstract:
In this paper, a framework for the simplification and standardization of metaheuristic-related parameter-tuning is presented, applying a four-phase methodology that utilizes Design of Experiments and Artificial Neural Networks. Metaheuristics are multipurpose problem solvers that are applied to computational optimization problems for which no efficient problem-specific algorithm exists. Their successful application to concrete problems requires finding a good initial parameter setting, which is a tedious and time-consuming task. Recent research reveals the lack of a systematic approach to this so-called parameter-tuning process. In the majority of publications, researchers provide only weak motivation for their respective choices, if any. Because initial parameter settings have a significant impact on solution quality, this course of action could lead to suboptimal experimental results, and thereby an unsound basis for drawing conclusions.
Keywords: Parameter-Tuning, Metaheuristics, Design of Experiments, Artificial Neural Networks.
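A minimal sketch of the DoE-plus-surrogate idea (the two tuned parameters, the full-factorial design, the placeholder objective, and the surrogate size are all assumptions; the paper's four-phase methodology is not reproduced here):

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical tuning of two metaheuristic parameters: mutation rate and population size
design = np.array(list(itertools.product([0.01, 0.05, 0.1, 0.2], [20, 50, 100, 200])))  # full-factorial DoE

def run_metaheuristic(mutation, popsize):
    """Placeholder for running the metaheuristic and returning its best objective value."""
    return (mutation - 0.08) ** 2 + 1e-5 * (popsize - 120) ** 2

observed = np.array([run_metaheuristic(m, p) for m, p in design])

# Fit an ANN surrogate of performance over the parameter space, then query it densely
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
surrogate.fit(design, observed)
grid = np.array(list(itertools.product(np.linspace(0.01, 0.2, 40), np.linspace(20, 200, 40))))
print("suggested setting:", grid[np.argmin(surrogate.predict(grid))])
```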
5113 Robust Artificial Neural Network Architectures
Authors: A. Schuster
Abstract:
Many artificial intelligence (AI) techniques are inspired by problem-solving strategies found in nature. Robustness is a key feature in many natural systems. This paper studies robustness in artificial neural networks (ANNs) and proposes several novel, nature-inspired ANN architectures. The paper includes encouraging results from experimental studies on these networks showing increased robustness.
Keywords: robustness, robust artificial neural networks architectures.
5112 PTH Moment Exponential Stability of Stochastic Recurrent Neural Networks with Distributed Delays
Authors: Zixin Liu, Jianjun Jiao, Wanping Bai
Abstract:
In this paper, the issue of pth moment exponential stability of stochastic recurrent neural networks with distributed time delays is investigated. By using the method of variation of parameters, inequality techniques, and stochastic analysis, some sufficient conditions ensuring pth moment exponential stability are obtained. The method used in this paper does not resort to any Lyapunov function, and the results derived here generalize some earlier criteria reported in the literature. One numerical example is given to illustrate the main results.
Keywords: Stochastic recurrent neural networks, pth moment exponential stability, distributed time delays.
5111 Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function
Authors: S. Anna Durai, E. Anna Saro
Abstract:
Image compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, adopting the method of steepest descent for error minimization, are popular and widely adopted and are directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, it takes a longer time to converge. The reason is that the given image may contain a number of distinct gray levels with only narrow differences from their neighboring pixels. If the gray levels of the pixels in an image and of their neighbors are mapped in such a way that the difference between the gray levels of a pixel and its neighbors is minimal, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
Keywords: Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.
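One way to read the described mapping is as a remapping of gray levels through the image's empirical CDF (i.e., histogram equalization); a small sketch follows, in which the stand-in image, the block size and the autoencoder shape mentioned in the comment are assumptions:

```python
import numpy as np

def cdf_map(image):
    """Remap gray levels through the image's empirical CDF before compression."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = hist.cumsum() / image.size
    lookup = (cdf * 255).astype(np.uint8)
    return np.take(lookup, image)

img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
mapped = cdf_map(img)

# 8x8 blocks of the mapped image would become the training vectors of a bottleneck
# (e.g. 64-16-64) back-propagation autoencoder that performs the actual compression.
blocks = mapped.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(-1, 64) / 255.0
print(blocks.shape)
```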
5110 Mixtures of Monotone Networks for Prediction
Authors: Marina Velikova, Hennie Daniels, Ad Feelders
Abstract:
In many data mining applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. By using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison to standard neural networks with weight decay.
Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.
5109 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared to more recent neural networks such as pulse neural networks, quantum neuro-computation, etc., the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors addresses function approximation problems by classifying the training data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and its validity is demonstrated. Specifically, this paper verifies by computer experiments that both the learning accuracy and the learning time of the BP method are improved when it is used as the learning rule of the multi-stage learning method. During learning, the oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur in learning. Furthermore, by observing behaviors during learning, the authors discuss the reasons why the errors of some data remain large even after learning.
Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
5108 Exponential Passivity Criteria for BAM Neural Networks with Time-Varying Delays
Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong
Abstract:
In this paper, exponential passivity criteria for BAM neural networks with time-varying delays are studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the exponential stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.
Keywords: BAM neural networks, Exponential passivity, LMI approach, Time-varying delays.
5107 Neural Network Imputation in Complex Survey Design
Authors: Safaa R. Amer
Abstract:
Missing data yield many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. A variance estimate that accounts for both the imputation uncertainty and the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete case analysis, multiple imputation using a Markov Chain Monte Carlo method, and neural network imputation. Furthermore, a public-use dataset was used as an example to illustrate neural network imputation under a complex survey design.
Keywords: Complex survey, estimate, imputation, neural networks, variance.
5106 Exponential State Estimation for Neural Networks with Leakage, Discrete and Distributed Delays
Authors: Liyuan Wang, Shouming Zhong
Abstract:
In this paper, the design problem of a state estimator for neural networks with mixed time-varying delays is investigated by constructing appropriate Lyapunov-Krasovskii functionals and using some effective mathematical techniques. In order to derive several conditions guaranteeing that the estimation error systems are globally exponentially stable, we transform the considered systems into neutral-type time-delay systems. Then, with a set of linear matrix inequalities (LMIs), we obtain the stability criteria. Finally, three numerical examples are given to show the effectiveness and reduced conservatism of the proposed criterion.
Keywords: State estimator, Neural networks, Globally exponential stability.
5105 Globally Exponential Stability for Hopfield Neural Networks with Delays and Impulsive Perturbations
Authors: Adnene Arbi, Chaouki Aouiti, Abderrahmane Touati
Abstract:
In this paper, we consider the global exponential stability of the equilibrium point of Hopfield neural networks with delays and impulsive perturbations. Some new exponential stability criteria for the system are derived by using the Lyapunov functional method and the linear matrix inequality approach for estimating the upper bound of the derivative of the Lyapunov functional. Finally, two numerical examples are given to show the effectiveness of our theoretical results.
Keywords: Hopfield Neural Networks, Exponential stability.