Search results for: Large neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4362

4332 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning

Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the application to the Pulping of Sugar Maple problem. Three-layer feedforward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^-1 Ax = M^-1 b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
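
As a point of reference (generic PCG notation, not reproduced from the paper), the preconditioned system and the two conjugate-direction updates named in the abstract can be written as

\[
M^{-1} A x = M^{-1} b, \qquad
d_{k+1} = -M^{-1} g_{k+1} + \beta_k d_k, \qquad
\beta_k^{\mathrm{FR}} = \frac{g_{k+1}^{T} M^{-1} g_{k+1}}{g_{k}^{T} M^{-1} g_{k}}, \qquad
\beta_k^{\mathrm{PR}} = \frac{g_{k+1}^{T} M^{-1} (g_{k+1} - g_{k})}{g_{k}^{T} M^{-1} g_{k}},
\]

where g_k is the gradient at iteration k; the Powell-Beale variant restarts the search along the preconditioned steepest-descent direction whenever successive gradients are insufficiently orthogonal.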

Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.

4331 Global Exponential Stability of Impulsive BAM Fuzzy Cellular Neural Networks with Time Delays in the Leakage Terms

Authors: Liping Zhang, Kelin Li

Abstract:

In this paper, a class of impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms is formulated and investigated. By establishing a delay differential inequality and using M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.

Keywords: Global exponential stability, bidirectional associative memory, fuzzy cellular neural networks, leakage delays, impulses.

4330 Almost Periodic Solution for an Impulsive Neural Networks with Distributed Delays

Authors: Lili Wang

Abstract:

By using the estimation of the Cauchy matrix of linear impulsive differential equations, the Banach fixed point theorem and Gronwall-Bellman's inequality, some sufficient conditions are obtained for the existence and exponential stability of an almost periodic solution for an impulsive neural network with distributed delays. An example is presented to illustrate the feasibility and effectiveness of the results.

Keywords: Almost periodic solution, Exponential stability, Neural networks, Impulses.

4329 Delay-Distribution-Dependent Stability Criteria for BAM Neural Networks with Time-Varying Delays

Authors: J.H. Park, S. Lakshmanan, H.Y. Jung, S.M. Lee

Abstract:

This paper is concerned with delay-distribution-dependent stability criteria for bidirectional associative memory (BAM) neural networks with time-varying delays. Based on a Lyapunov-Krasovskii functional and a stochastic analysis approach, a delay-probability-distribution-dependent sufficient condition is derived under which the considered BAM neural networks are globally asymptotically stable in the mean square. The criteria are formulated in terms of a set of linear matrix inequalities (LMIs), which can be checked efficiently by standard numerical packages. Finally, a numerical example and its simulation are given to demonstrate the usefulness and effectiveness of the proposed results.

Keywords: BAM neural networks, Probabilistic time-varying delays, Stability criteria.

4328 A New Robust Stability Criterion for Dynamical Neural Networks with Mixed Time Delays

Authors: Guang Zhou, Shouming Zhong

Abstract:

In this paper, we investigate the existence, uniqueness and global asymptotic stability of the equilibrium point for a class of neutral-type neural networks with mixed time delays and parameter uncertainties. Under the assumption that the activation functions are globally Lipschitz continuous, we derive a new criterion for the robust stability of a class of neural networks with time delays by utilizing Lyapunov stability theorems and the homeomorphic mapping theorem. Numerical examples are given to illustrate the effectiveness and the advantage of the proposed main results.

Keywords: Neural networks, Delayed systems, Lyapunov function, Stability analysis.

4327 Robotic Arm Control with Neural Networks Using Genetic Algorithm Optimization Approach

Authors: A. Pajaziti, H. Cana

Abstract:

In this paper, a structural genetic algorithm is used to optimize a neural network controlling the joint movements of a robotic arm. The robotic arm has also been modeled in 3D and simulated in real time in MATLAB. It is found that neural networks provide a simple and effective way to control the robot tasks. Computer simulation examples are given to illustrate the significance of this method. By combining the genetic algorithm optimization method and neural networks for the given robotic arm with 5 D.O.F., the results show that the overshoot time of the base joint movements was about 0.5 seconds without a controller and about 0.2 seconds with the neural network controller (optimized with the genetic algorithm); a population size of 150 gave the best results.
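
The abstract gives no implementation details; the following minimal Python sketch only illustrates the general idea of a genetic algorithm searching the weights of a small neural network controller. The network size, the first-order joint model, the fitness function and the GA settings are placeholder assumptions, not the authors' code.

import numpy as np

# GA over the weight vector of a tiny 1-hidden-layer controller that maps a
# joint-angle error to a control signal (all choices below are illustrative).
rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 2, 6, 1
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT    # weights + biases

def controller(w, x):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return (np.tanh(x @ W1 + b1) @ W2 + b2).item()

def fitness(w):
    # Toy surrogate for "small overshoot": track a unit step reference with a
    # first-order joint model; smaller accumulated error = higher fitness.
    angle, err_sum = 0.0, 0.0
    for _ in range(200):
        u = controller(w, np.array([[1.0 - angle, angle]]))
        angle += 0.05 * u
        err_sum += abs(1.0 - angle)
    return -err_sum

pop = rng.normal(size=(150, N_W))                      # population of 150, as in the paper
for generation in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]            # keep the 30 fittest
    children = parents[rng.integers(0, 30, size=150)] + 0.1 * rng.normal(size=(150, N_W))
    children[:30] = parents                            # elitism
    pop = children
best = pop[np.argmax([fitness(ind) for ind in pop])]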

Keywords: Robotic Arm, Neural Network, Genetic Algorithm, Optimization.

4326 Some Remarkable Properties of a Hopfield Neural Network with Time Delay

Authors: Kelvin Rozier, Vladimir E. Bondarenko

Abstract:

It is known that an analog Hopfield neural network with time delay can generate outputs which are similar to the human electroencephalogram. To gain deeper insights into the mechanisms of rhythm generation by Hopfield neural networks and to study the effects of noise on their activities, we investigated the behaviors of networks with symmetric and asymmetric interneuron connections. The neural network under study consists of 10 identical neurons. For symmetric (fully connected) networks, all interneuron connections are aij = +1; for asymmetric networks, the interneuron connections form an upper triangular matrix with non-zero entries aij = +1. The behavior of the network is described by 10 differential equations, which are solved numerically. The results of the simulations demonstrate some remarkable properties of a Hopfield neural network, such as linear growth of outputs, dependence of synchronization properties on the connection type, huge amplification of oscillation by external uniform noise, and the capability of the neural network to transform one type of noise into another.
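
A minimal numerical sketch of such a system is given below, assuming a standard additive Hopfield model with a single discrete delay and forward Euler integration; the paper's exact equations, parameter values and noise terms are not reproduced here.

import numpy as np

# Assumed model form: dx_i/dt = -x_i(t) + sum_j a_ij * tanh(x_j(t - tau)),
# for 10 identical neurons, with the two connection matrices from the abstract.
n, tau, dt, steps = 10, 1.0, 0.01, 5000
delay = int(tau / dt)

A_sym = np.ones((n, n))                 # fully connected: a_ij = +1
A_asym = np.triu(np.ones((n, n)))       # upper triangular: non-zero a_ij = +1

def simulate(A, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((steps + delay, n))    # first `delay` rows serve as zero history
    x[delay] = 0.1 * rng.standard_normal(n)
    for t in range(delay, steps + delay - 1):
        x[t + 1] = x[t] + dt * (-x[t] + A @ np.tanh(x[t - delay]))
    return x[delay:]

out_symmetric = simulate(A_sym)
out_asymmetric = simulate(A_asym)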

Keywords: Chaos, Hopfield neural network, noise, synchronization

4325 A Novel Approach to Positive Almost Periodic Solution of BAM Neural Networks with Time-Varying Delays

Authors: Lili Wang, Meng Hu

Abstract:

In this paper, based on almost periodic functional hull theory and M-matrix theory, some sufficient conditions are established for the existence and uniqueness of positive almost periodic solution for a class of BAM neural networks with time-varying delays. An example is given to illustrate the main results.

Keywords: Delayed BAM neural networks, Hull theorem, M-matrix, Almost periodic solution, Global exponential stability.

4324 An Analysis of Global Stability of Cohen-Grossberg Neural Networks with Multiple Time Delays

Authors: Zeynep Orman, Sabri Arik

Abstract:

This paper presents a new sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for Cohen-Grossberg neural networks with multiple time delays. The results establish a relationship between the network parameters of the neural system independently of the delay parameters. The results are also compared with the previously reported results in the literature.

Keywords: Equilibrium and stability analysis, Cohen-Grossberg Neural Networks, Lyapunov Functionals.

4323 Passivity Analysis of Stochastic Neural Networks With Multiple Time Delays

Authors: Biao Qin, Jin Huang, Jiaojiao Ren, Wei Kang

Abstract:

This paper deals with the problem of passivity analysis for stochastic neural networks with leakage, discrete and distributed delays. By using delay partitioning technique, free weighting matrix method and stochastic analysis technique, several sufficient conditions for the passivity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), in which both the time-delay and its time derivative can be fully considered. A numerical example is given to show the usefulness and effectiveness of the obtained results.

Keywords: Passivity, Stochastic neural networks, Multiple time delays, Linear matrix inequalities (LMIs).

4322 New Stability Analysis for Neural Networks with Time-Varying Delays

Authors: Miaomiao Yang, Shouming Zhong

Abstract:

This paper studies the problem of asymptotic stability for neural networks with time-varying delays. By establishing a suitable Lyapunov-Krasovskii functional, several novel sufficient conditions are obtained to guarantee the asymptotic stability of the considered system. Finally, two numerical examples are given to illustrate the effectiveness of the proposed main results.

Keywords: Neural networks, Lyapunov-Krasovskii, Time-varying delays, Linear matrix inequality.

4321 Neural Networks: From Black Box towards Transparent Box Application to Evapotranspiration Modeling

Authors: A. Johannet, B. Vayssade, D. Bertin

Abstract:

Neural networks are well known for their ability to model nonlinear functions, but, as statistical methods usually do, they follow a nonparametric approach; it is therefore not obvious how to take a priori knowledge, any more than a posteriori knowledge, into account. To deal with these problems, an original way to encode the knowledge inside the architecture is proposed. This method is applied to the problem of evapotranspiration inside a karstic aquifer, a problem of great practical importance for managing water resources.

Keywords: Neural Networks, Hydrology, Evapotranspiration, Hidden Function Modeling.

4320 Training Radial Basis Function Networks with Differential Evolution

Authors: Bing Yu, Xingshi He

Abstract:

In this paper, the Differential Evolution (DE) algorithm, a new and promising evolutionary algorithm, is proposed to train Radial Basis Function (RBF) networks, including automatic configuration of the network architecture. Classification tasks on the Iris, Wine, New-thyroid, and Glass data sets are conducted to measure the performance of the neural networks. Compared with a standard RBF training algorithm in the Matlab neural network toolbox, DE achieves a more rational architecture for RBF networks. The resulting networks hence obtain strong generalization abilities.
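
As background, one common way to apply DE to RBF training is to encode the centres, widths and output weights of each candidate network as a single real-valued vector and evolve a population of such vectors. The sketch below is illustrative only (toy data set, assumed network size and DE settings), not the authors' implementation.

import numpy as np

# DE/rand/1/bin evolving the parameters of a small RBF network on a toy
# 1-D regression task: theta = [centres | widths | output weights].
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                                   # toy target function

N_CENTRES, POP, F, CR, GENS = 8, 40, 0.6, 0.9, 300
DIM = N_CENTRES * 3                                   # centre, width, weight per unit

def rbf_predict(theta, X):
    c = theta[:N_CENTRES]
    s = np.abs(theta[N_CENTRES:2 * N_CENTRES]) + 1e-3
    w = theta[2 * N_CENTRES:]
    phi = np.exp(-((X - c) ** 2) / (2 * s ** 2))      # (n_samples, N_CENTRES)
    return phi @ w

def mse(theta):
    return np.mean((rbf_predict(theta, X) - y) ** 2)

pop = rng.uniform(-3, 3, size=(POP, DIM))
cost = np.array([mse(ind) for ind in pop])
for _ in range(GENS):
    for i in range(POP):
        a, b, c_ = pop[rng.choice([j for j in range(POP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c_)                     # differential mutation
        cross = rng.random(DIM) < CR                  # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        t_cost = mse(trial)
        if t_cost < cost[i]:                          # greedy selection
            pop[i], cost[i] = trial, t_cost
best = pop[np.argmin(cost)]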

Keywords: Differential evolution, neural network, RBF function.

4319 Robust Artificial Neural Network Architectures

Authors: A. Schuster

Abstract:

Many artificial intelligence (AI) techniques are inspired by problem-solving strategies found in nature. Robustness is a key feature in many natural systems. This paper studies robustness in artificial neural networks (ANNs) and proposes several novel, nature inspired ANN architectures. The paper includes encouraging results from experimental studies on these networks showing increased robustness.

Keywords: Robustness, robust artificial neural network architectures.

4318 Exponential Passivity Criteria for BAM Neural Networks with Time-Varying Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

In this paper, exponential passivity criteria for BAM neural networks with time-varying delays are studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the exponential stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.

Keywords: BAM neural networks, Exponential passivity, LMI approach, Time-varying delays.

4317 Neural Network Imputation in Complex Survey Design

Authors: Safaa R. Amer

Abstract:

Missing data yields many analysis challenges. In the case of a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights in neural network imputation were investigated to account for complex survey designs. An estimate of the variance that accounts for the imputation uncertainty as well as the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete-case analysis, multiple imputation using Markov Chain Monte Carlo, and neural network imputation. Furthermore, a public-use dataset was used as an example to illustrate neural network imputation under a complex survey design.

Keywords: Complex survey, estimate, imputation, neural networks, variance.

4316 Exponential State Estimation for Neural Networks with Leakage, Discrete and Distributed Delays

Authors: Liyuan Wang, Shouming Zhong

Abstract:

In this paper, the design problem of a state estimator for neural networks with mixed time-varying delays is investigated by constructing appropriate Lyapunov-Krasovskii functionals and using some effective mathematical techniques. In order to derive several conditions guaranteeing that the estimation error systems are globally exponentially stable, we transform the considered systems into neutral-type time-delay systems. Then, in terms of a set of linear matrix inequalities (LMIs), we obtain the stability criteria. Finally, three numerical examples are given to show the effectiveness and reduced conservatism of the proposed criterion.

Keywords: State estimator, Neural networks, Globally exponential stability.

4315 Globally Exponential Stability for Hopfield Neural Networks with Delays and Impulsive Perturbations

Authors: Adnene Arbi, Chaouki Aouiti, Abderrahmane Touati

Abstract:

In this paper, we consider the global exponential stability of the equilibrium point of Hopfield neural networks with delays and impulsive perturbation. Some new exponential stability criteria of the system are derived by using the Lyapunov functional method and the linear matrix inequality approach for estimating the upper bound of the derivative of Lyapunov functional. Finally, we illustrate two numerical examples showing the effectiveness of our theoretical results.

Keywords: Hopfield Neural Networks, Exponential stability.

4314 Representing Collective Unconsciousness Using Neural Networks

Authors: Pierre Abou-Haila, Richard Hall, Mark Dawes

Abstract:

Instead of representing individual cognition only, population cognition is represented using artificial neural networks whilst maintaining individuality. This population network trains continuously, simulating adaptation. An implementation of two coexisting populations is compared to the Lotka-Volterra model of predator-prey interaction. Applications include multi-agent systems such as artificial life or computer games.
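
For reference (standard textbook form, not the paper's notation or parameter values), the Lotka-Volterra predator-prey model used for comparison reads

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad
\frac{dy}{dt} = \delta x y - \gamma y,
\]

with prey population x, predator population y, and positive constants \alpha, \beta, \gamma, \delta.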

Keywords: Collective unconsciousness, neural networks, adaptation, predator-prey simulation.

4313 Existence and Stability Analysis of Discrete-time Fuzzy BAM Neural Networks with Delays and Impulses

Authors: Chao Wang, Yongkun Li

Abstract:

In this paper, the discrete-time fuzzy BAM neural network with delays and impulses is studied. Sufficient conditions are obtained for the existence and global stability of a unique equilibrium of this class of fuzzy BAM neural networks with Lipschitzian activation functions without assuming their boundedness, monotonicity or differentiability and subjected to impulsive state displacements at fixed instants of time. Some numerical examples are given to demonstrate the effectiveness of the obtained results.

Keywords: Discrete-time fuzzy BAM neural networks, impulses, global exponential stability, global asymptotical stability, equilibrium point.

4312 Self-evolving Neural Networks Based On PSO and JPSO Algorithms

Authors: Abdussamad Ismail, Dong-Sheng Jeng

Abstract:

A self-evolution algorithm for optimizing neural networks using a combination of PSO and JPSO is proposed. The algorithm optimizes both the network topology and the parameters simultaneously, with the aim of achieving the desired accuracy with less complicated networks. The performance of the proposed approach is compared with that of conventional back-propagation networks using several synthetic functions, with better results in the case of the former. The proposed algorithm is also applied to a slope stability problem to estimate the critical factor of safety. Based on the results obtained, the proposed self-evolving network produced a better estimate of the critical safety factor than the conventional BPN network.

Keywords: Neural networks, Topology evolution, Particle swarm optimization.

4311 A Model-following Adaptive Controller for Linear/Nonlinear Plants using Radial Basis Function Neural Networks

Authors: Yuichi Masukake, Yoshihisa Ishida

Abstract:

In this paper, we propose a method to design a model-following adaptive controller for linear/nonlinear plants. Radial basis function neural networks (RBF-NNs), which are known for their stable learning capability and fast training, are used to identify the linear/nonlinear plants. Simulation results show that the proposed method is effective in controlling both linear and nonlinear plants with disturbance in the plant input.

Keywords: Linear/nonlinear plants, neural networks, radial basis function networks.

4310 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

Authors: Mousavian, M. Moghimi Mofrad, M. H. Vakili, D. Ashouri, R. Alizadeh

Abstract:

The aim of this research is to use artificial neural network computing technology for estimating the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator that uses back-propagation as the learning algorithm on a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and the predictions of this network are in good agreement with the experimental data.
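
To make the stated architecture concrete, the following is a generic sketch of a one-hidden-layer network with 8 neurons trained by back-propagation; the inputs and targets below are random placeholders, not the authors' crude-oil data or model.

import numpy as np

# Plain gradient descent on a mean-squared-error loss for a 4-8-1 network.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))          # placeholder crude-oil property features
y = rng.normal(size=(100, 1))          # placeholder NHV targets

W1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)
lr = 0.01

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backward pass (back-propagation of the error)
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2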

Keywords: Neural Network, Net Heating Value, Crude Oil, Experimental, Modeling.

4309 Daily Global Solar Radiation Modeling Using Multi-Layer Perceptron (MLP) Neural Networks

Authors: Seyed Fazel Ziaei Asl, Ali Karami, Gholamreza Ashari, Azam Behrang, Arezoo Assareh, N.Hedayat

Abstract:

Predicting daily global solar radiation (GSR) from meteorological variables using multi-layer perceptron (MLP) neural networks is the main objective of this study. Daily mean air temperature, relative humidity, sunshine hours, evaporation, wind speed, and soil temperature values between 2002 and 2006 for Dezful city in Iran (32° 16' N, 48° 25' E) are used in this study. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data.

Keywords: Multi-layer Perceptron (MLP) Neural Networks, Global Solar Radiation (GSR), Meteorological Parameters, Prediction.

4308 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

Authors: Isao Taguchi, Yasuo Sugai

Abstract:

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output-layer unit. Compared with more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output-layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and the validity of the multi-stage learning method is demonstrated. Specifically, this paper verifies by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule of the multi-stage learning method. In learning, the oscillatory phenomena of a learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur in learning. Furthermore, the authors discuss the reasons why the errors of some data remain large even after learning, based on observations of behavior during learning.

Keywords: Data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.

4307 Solving Partially Monotone Problems with Neural Networks

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human-decision maker. Here we consider partially monotone problems, where the target variable depends monotonically on some of the predictor variables but not all. We propose an approach to build partially monotone models based on the convolution of monotone neural networks and kernel functions. The results from simulations and a real case study on house pricing show that our approach has significantly better performance than partially monotone linear models. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also reduces considerably the model variance in comparison to standard neural networks with weight decay.

Keywords: Mixture models, monotone neural networks, partially monotone models, partially monotone problems.

4306 Stability Analysis of Impulsive Stochastic Fuzzy Cellular Neural Networks with Time-varying Delays and Reaction-diffusion Terms

Authors: Xinhua Zhang, Kelin Li

Abstract:

In this paper, the problem of stability analysis for a class of impulsive stochastic fuzzy neural networks with time-varying delays and reaction-diffusion is considered. By utilizing a suitable Lyapunov-Krasovskii functional, the inequality technique and the stochastic analysis technique, some sufficient conditions ensuring global exponential stability of the equilibrium point for impulsive stochastic fuzzy cellular neural networks with time-varying delays and diffusion are obtained. In particular, an estimate of the exponential convergence rate is also provided, which depends on the system parameters, the diffusion effect and the impulsive disturbance intensity. It is believed that these results are significant and useful for the design and applications of fuzzy neural networks. An example is given to show the effectiveness of the obtained results.

Keywords: Exponential stability, stochastic fuzzy cellular neural networks, time-varying delays, impulses, reaction-diffusion terms.

4305 Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Nikos Mastorakis

Abstract:

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The algorithm uses new high speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.
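
The frequency-domain idea the abstract relies on is the standard correlation theorem: cross correlation in the time domain becomes element-wise multiplication (with one factor conjugated) in the frequency domain. A minimal illustration follows; the signals are random placeholders, not stock-market data, and this is not the authors' HSTDNN code.

import numpy as np

# Circular cross correlation of an input signal with a weight kernel,
# computed once directly (O(n^2)) and once via the FFT (O(n log n)).
rng = np.random.default_rng(3)
x = rng.standard_normal(256)            # input window
w = rng.standard_normal(256)            # neuron input weights, same length

direct = np.array([np.dot(x, np.roll(w, -k)) for k in range(len(x))])
via_fft = np.fft.ifft(np.fft.fft(x).conj() * np.fft.fft(w)).real

assert np.allclose(direct, via_fft)     # identical results, far fewer operations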

Keywords: Fast Forecasting, Stock Market Prices, Time Delay Neural Networks, Cross Correlation, Frequency Domain.

4304 Mean Square Exponential Synchronization of Stochastic Neutral Type Chaotic Neural Networks with Mixed Delay

Authors: Zixin Liu, Huawei Yang, Fangwei Chen

Abstract:

This paper studies the mean square exponential synchronization problem of a class of stochastic neutral-type chaotic neural networks with mixed delay. On the basis of Lyapunov stability theory, some sufficient conditions ensuring the mean square exponential synchronization of two identical chaotic neural networks are obtained by using stochastic analysis and inequality techniques. These conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the Matlab LMI Toolbox. The feedback controller used in this paper is more general than those used in the previous literature. One simulation example is presented to demonstrate the effectiveness of the derived results.

Keywords: Exponential synchronization, stochastic analysis, chaotic neural networks, neutral type system.

4303 Estimating Reaction Rate Constants with Neural Networks

Authors: Benedek Kovacs, Janos Toth

Abstract:

Solutions are proposed for the central problem of estimating the reaction rate coefficients in homogeneous kinetics. The first is based upon the fact that the right-hand side of a kinetic differential equation is linear in the rate constants, whereas the second one uses the technique of neural networks. The second approach is discussed in depth, and its advantages, disadvantages and conditions of applicability are analyzed in comparison with the first one. Numerical analysis is carried out on practical models using simulated data, and our programs are written in Mathematica.
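
The first approach rests on a standard property of mass-action kinetics: for fixed concentrations the right-hand side is linear in the rate coefficients, so they can be estimated by ordinary least squares. Schematically (illustrative notation, not the paper's),

\[
\dot{c}(t) = F\bigl(c(t)\bigr)\,k, \qquad
\hat{k} = \arg\min_{k} \sum_{j} \bigl\| \dot{c}(t_j) - F\bigl(c(t_j)\bigr)\,k \bigr\|^{2};
\]

for example, for the elementary reaction A + B -> C with rate coefficient k, d[C]/dt = k[A][B].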

Keywords: Neural networks, parameter estimation, linear regression, kinetic models, reaction rate coefficients.
