Search results for: Fast Complex Valued Time Delay Neural Networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10026

10026 Fast Complex Valued Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

Here, a new idea to speed up the operation of complex valued time delay neural networks is presented. The whole data set is collected together in a long vector and then tested as one input pattern. The proposed fast complex valued time delay neural networks use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically that the number of computation steps required by the presented fast complex valued time delay neural networks is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
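
The speed-up described in this abstract comes from evaluating all sliding-window products between the long input vector and a neuron's weight window at once with an FFT, instead of window by window. The following NumPy sketch is only an illustration of that idea, not the authors' code; the conjugation convention, vector sizes, and function names are assumptions.

```python
import numpy as np

def direct_sliding_products(x, w):
    """Time-domain computation: inner product of every window of x with w."""
    n, m = len(x), len(w)
    return np.array([np.dot(np.conj(w), x[i:i + m]) for i in range(n - m + 1)])

def fft_sliding_products(x, w):
    """The same values obtained by cross correlation in the frequency domain."""
    n, m = len(x), len(w)
    X = np.fft.fft(x, n)
    W = np.fft.fft(np.conj(w[::-1]), n)      # reversed, conjugated weights -> correlation
    full = np.fft.ifft(X * W)
    return full[m - 1:n]                     # one value per input window

rng = np.random.default_rng(0)
x = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)  # long complex input vector
w = rng.standard_normal(64) + 1j * rng.standard_normal(64)      # one neuron's weight window
assert np.allclose(direct_sliding_products(x, w), fft_sliding_products(x, w))
```

The FFT route costs O(n log n) per neuron regardless of the window length, which is the source of the reduction in computation steps claimed above.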

Keywords: Fast Complex Valued Time Delay Neural Networks, Cross Correlation, Frequency Domain

10025 Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Nikos Mastorakis

Abstract:

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The proposed algorithm uses new high speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Keywords: Fast Forecasting, Stock Market Prices, Time Delay Neural Networks, Cross Correlation, Frequency Domain.

10024 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy

Abstract:

A complex valued neural network is a neural network which consists of complex valued input and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the Generalized Mean Neuron model in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a Generalized Mean Neuron model in the complex plane for signal processing over a real valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error, on a Generalized Mean neural network model. Some inherent properties of this complex back propagation algorithm are also studied and discussed.
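
As a rough illustration of the model class discussed here, the sketch below implements one plausible complex-valued generalized mean neuron: the aggregation is taken as the power mean of the weighted inputs and the activation is a split tanh. Both choices, and all parameter values, are assumptions for illustration; the paper's exact aggregation, activation, and learning rule may differ.

```python
import numpy as np

def generalized_mean_aggregation(z, w, r):
    """Power-mean aggregation of the weighted inputs (principal branch for complex powers)."""
    return np.mean((w * z) ** r) ** (1.0 / r)

def split_tanh(u):
    """Split activation: tanh applied separately to the real and imaginary parts."""
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

rng = np.random.default_rng(1)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex inputs
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex weights
for r in (1.0, 2.0, 3.0):                                   # r = 1 is the ordinary weighted mean
    print(r, split_tanh(generalized_mean_aggregation(z, w, r)))
```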

Keywords: Complex valued neural network, Generalized Mean neuron model, Signal processing.

10023 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex valued neural network is a neural network which consists of complex valued input and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex valued neural network for image recognition over a real valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error, on a neural network model with RBF units. Some inherent properties of this complex back propagation algorithm are also studied and discussed.
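
For concreteness, here is a minimal sketch of an RBF hidden unit operating on complex inputs: the built-in distance criterion is the Euclidean distance to a complex centre, and the real-valued Gaussian activations are combined by complex output weights. The function names, the Gaussian form, and the single shared width are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def rbf_hidden_layer(x, centres, sigma):
    """Gaussian RBF activations based on the distance of x to each complex centre."""
    d2 = np.array([np.sum(np.abs(x - c) ** 2) for c in centres])
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_network(x, centres, sigma, out_weights):
    """Real hidden activations combined by complex output weights -> complex output."""
    return rbf_hidden_layer(x, centres, sigma) @ out_weights

rng = np.random.default_rng(2)
centres = [rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(5)]
out_weights = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(rbf_network(x, centres, sigma=1.0, out_weights=out_weights))
```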

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

10022 Complex-Valued Neural Networks for Blind Equalization of Time-Varying Channels

Authors: Rajoo Pandey

Abstract:

Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function, and a neural network gives a smaller residual error than a linear structure. The efficacy of complex valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper, we present two neural network models for blind equalization of time-varying channels, for M-ary QAM and PSK signals. The complex valued activation functions, suitable for these signal constellations in a time-varying environment, are introduced and the learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
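
The CMA cost mentioned above penalises deviations of the output modulus from a constellation-dependent constant R2 = E|a|^4 / E|a|^2. The sketch below shows that cost and its stochastic-gradient update on a plain complex FIR equalizer, as a stand-in for the paper's neural models; the channel, step size, and initialisation are made-up illustration values.

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, R2=1.0):
    """Blind CMA adaptation of a complex FIR equalizer: y = w^H * window,
    cost J = (|y|^2 - R2)^2, Wirtinger-gradient update on w."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                               # centre-spike initialisation
    outputs = []
    for n in range(num_taps, len(x)):
        window = x[n - num_taps:n][::-1]                 # most recent sample first
        y = np.vdot(w, window)                           # w^H * window
        w = w - mu * (np.abs(y) ** 2 - R2) * np.conj(y) * window
        outputs.append(y)
    return w, np.array(outputs)

rng = np.random.default_rng(3)
symbols = (rng.choice([-1, 1], 5000) + 1j * rng.choice([-1, 1], 5000)) / np.sqrt(2)
channel = np.array([1.0, 0.4 + 0.3j, 0.2j])              # illustrative channel
received = np.convolve(symbols, channel, mode="same") + 0.01 * (
    rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
w, y = cma_equalize(received, R2=1.0)                    # R2 = 1 for unit-power 4-QAM
```

For the M-ary QAM/PSK signals and time-varying channels considered in the paper, the same cost drives the complex-valued network weights instead of FIR taps.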

Keywords: Blind Equalization, Neural Networks, Constant Modulus Algorithm, Time-varying channels.

10021 A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In recent years, fast neural networks for object/face detection have been introduced based on cross correlation in the frequency domain between the input matrix and the hidden weights of neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that, for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections for the cross correlation equations (given in [13,15,16]) to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need for converting the input data into symmetric form. Moreover, this new idea is applied to increase the speed of neural networks in the case of processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.

Keywords: Fast Code/Data Detection, Neural Networks, Cross Correlation, real/complex values.

10020 Globally Exponential Stability and Dissipativity Analysis of Static Neural Networks with Time Delay

Authors: Lijiang Xiang, Shouming Zhong, Yucai Ding

Abstract:

The problems of globally exponential stability and dissipativity analysis for static neural networks (NNs) with time delay are investigated in this paper. Some delay-dependent stability criteria are established for static NNs with time delay using the delay partitioning technique. Based on these criteria, a delay-dependent sufficient condition is given to guarantee the dissipativity of static NNs with time delay. All the results given in this paper depend not only on the time delay but also on the number of delay partitions. Two numerical examples are used to show the effectiveness of the proposed methods.
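
Criteria of this kind are posed as linear matrix inequalities and checked numerically with a semidefinite-programming solver. The sketch below only illustrates that workflow with the elementary Lyapunov inequality A^T P + P A < 0, P > 0, as a stand-in for the paper's delay-partitioning conditions, using CVXPY; the matrix A is a made-up example.

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0,  0.5],
              [ 0.3, -1.5]])        # illustrative system matrix, not from the paper
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov LMI
problem = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility problem
problem.solve()
print(problem.status)  # 'optimal' means the LMIs are feasible and stability is certified
```

The delay-dependent conditions of the paper follow the same pattern but involve additional decision matrices tied to the delay partitions.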

Keywords: Globally exponential stability, Dissipativity, Static neural networks, Time delay.

10019 Delay-Dependent Stability Analysis for Neutral Type Neural Networks with Uncertain Parameters and Time-Varying Delay

Authors: Qingqing Wang, Shouming Zhong

Abstract:

In this paper, delay-dependent stability analysis for neutral type neural networks with uncertain parameters and time-varying delay is studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the global asymptotic stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.

Keywords: Neutral type neural networks, Time-varying delay, Stability, Linear matrix inequality (LMI).

10018 Performance Evaluation of Complex Valued Neural Networks Using Various Error Functions

Authors: Anita S. Gangal, P. K. Kalra, D. S. Chauhan

Abstract:

The backpropagation algorithm in general employs a quadratic error function. In fact, most problems that involve minimization employ the quadratic error function. With alternative error functions the performance of the optimization scheme can be improved. The new error functions help in suppressing the ill-effects of outliers and have shown good robustness to noise. In this paper we evaluate and compare the relative performance of a complex valued neural network using different error functions. In the first simulation, for the complex XOR gate, it is observed that some error functions, such as the absolute and Cauchy error functions, can replace the quadratic error function. In the second simulation it is observed that for some error functions the performance of the complex valued neural network depends on the architecture of the network, whereas with a few other error functions the convergence speed of the network is independent of the architecture of the neural network.
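
For reference, here are common definitions of the three error functions compared in the simulations, written for complex outputs via the error modulus; the exact forms and the Cauchy scale parameter used in the paper are assumptions here.

```python
import numpy as np

def quadratic_error(y, t):
    return 0.5 * np.abs(y - t) ** 2

def absolute_error(y, t):
    return np.abs(y - t)

def cauchy_error(y, t, c=1.0):
    """Cauchy (Lorentzian) loss: grows only logarithmically, damping outliers."""
    return 0.5 * c ** 2 * np.log1p(np.abs(y - t) ** 2 / c ** 2)

y = np.array([0.1 + 0.9j, 1.2 - 0.1j, 5.0 + 5.0j])   # last sample acts as an outlier
t = np.array([0.0 + 1.0j, 1.0 + 0.0j, 0.0 + 0.0j])
for name, fn in [("quadratic", quadratic_error),
                 ("absolute", absolute_error),
                 ("cauchy", cauchy_error)]:
    print(name, fn(y, t))
```

The outlier in the last sample dominates the quadratic error but contributes far less under the absolute and Cauchy losses, which is the suppression effect mentioned above.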

Keywords: Complex backpropagation algorithm, complex error functions, complex valued neural network, split activation function.

10017 Improving Quality of Business Networks for Information Systems

Authors: Hazem M. El-Bakry, Ahmed Atwan

Abstract:

Computer networks are an essential part of computer-based information systems. The performance of these networks has a great influence on the whole information system. Measuring the usability criteria and customer satisfaction of a small computer network is very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides us with a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle. Furthermore, it can be used to help in developing usable interfaces very early in the cycle and to give a way to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented. The entire data set is collected together in a long vector and then tested as one input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.

Keywords: Usability Criteria, Computer Networks, Fast Information Processing, Cross Correlation, Frequency Domain.

10016 Delay-Distribution-Dependent Stability Criteria for BAM Neural Networks with Time-Varying Delays

Authors: J.H. Park, S. Lakshmanan, H.Y. Jung, S.M. Lee

Abstract:

This paper is concerned with delay-distribution-dependent stability criteria for bidirectional associative memory (BAM) neural networks with time-varying delays. Based on the Lyapunov-Krasovskii functional and a stochastic analysis approach, a delay-probability-distribution-dependent sufficient condition is derived to guarantee that the considered BAM neural networks are globally asymptotically stable in the mean square. The criteria are formulated in terms of a set of linear matrix inequalities (LMIs), which can be checked efficiently by use of some standard numerical packages. Finally, a numerical example and its simulation are given to demonstrate the usefulness and effectiveness of the proposed results.

Keywords: BAM neural networks, Probabilistic time-varying delays, Stability criteria.

10015 New Delay-Dependent Stability Criteria for Neural Networks With Two Additive Time-varying Delay Components

Authors: Xingyuan Qu, Shouming Zhong

Abstract:

In this paper, the problem of stability criteria for neural networks (NNs) with two additive time-varying delay components is investigated. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of the Lyapunov functional. As a result, some improved delay-dependent stability criteria for NNs with two additive time-varying delay components are proposed. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.

Keywords: Delay-dependent stability, time-varying delays, Lyapunov functional, linear matrix inequality (LMI).

10014 Sub-Image Detection Using Fast Neural Processors and Image Decomposition

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In this paper, an approach to reduce the number of computation steps required by fast neural networks for the searching process is presented. The principle of the divide and conquer strategy is applied through image decomposition. Each image is divided into small sub-images and then each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. In contrast to using only fast neural networks, the speed up ratio is increased with the size of the input image when using fast neural networks and image decomposition.
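
The sketch below illustrates the decomposition-and-parallel-testing idea with NumPy: the image is cut into non-overlapping tiles and each tile is scored concurrently by a placeholder frequency-domain detector standing in for one fast neural network. Tile size, image size, and the detector itself are assumptions for illustration; a practical implementation would typically overlap the sub-images so that faces straddling tile borders are not missed.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def decompose(image, tile):
    """Split an image into non-overlapping tile x tile sub-images."""
    h, w = image.shape
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

def detect(sub_image, template):
    """Placeholder detector: peak of the frequency-domain correlation of one
    sub-image with a template (stands in for one fast neural network)."""
    corr = np.fft.ifft2(np.fft.fft2(sub_image) *
                        np.conj(np.fft.fft2(template, sub_image.shape)))
    return float(np.max(corr.real))

rng = np.random.default_rng(4)
image = rng.standard_normal((256, 256))
template = rng.standard_normal((16, 16))
tiles = decompose(image, tile=64)
with ThreadPoolExecutor() as pool:                      # test all sub-images concurrently
    scores = list(pool.map(lambda t: detect(t, template), tiles))
print(len(tiles), max(scores))
```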

Keywords: Fast Neural Networks, 2D-FFT, Cross Correlation, Image decomposition, Parallel Processing.

10013 Fast Object/Face Detection Using Neural Networks and Fast Fourier Transform

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

Recently, fast neural networks for object/face detection were presented in [1-3]. The speed up factor of these networks relies on performing cross correlation in the frequency domain between the input image and the weights of the hidden layer. However, the equations given in [1-3] for conventional and fast neural networks are not valid, for reasons presented here. In this paper, correct equations for cross correlation in the spatial and frequency domains are presented. Furthermore, correct formulas for the number of computation steps required by the conventional and fast neural networks given in [1-3] are introduced. A new formula for the speed up ratio is established. Also, corrections for the equations of fast multi-scale object/face detection are given. Moreover, commutative cross correlation is achieved. Simulation results show that sub-image detection based on cross correlation in the frequency domain is faster than classical neural networks.

Keywords: Conventional Neural Networks, Fast Neural Networks, Cross Correlation in the Frequency Domain.

10012 Passivity Analysis of Stochastic Neural Networks With Multiple Time Delays

Authors: Biao Qin, Jin Huang, Jiaojiao Ren, Wei Kang

Abstract:

This paper deals with the problem of passivity analysis for stochastic neural networks with leakage, discrete and distributed delays. By using the delay partitioning technique, the free-weighting matrix method and stochastic analysis techniques, several sufficient conditions for the passivity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), in which both the time delay and its time derivative can be fully considered. A numerical example is given to show the usefulness and effectiveness of the obtained results.

Keywords: Passivity, Stochastic neural networks, Multiple time delays, Linear matrix inequalities (LMIs).

10011 A Modified Cross Correlation in the Frequency Domain for Fast Pattern Detection Using Neural Networks

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

Recently, neural networks have shown good results for detection of a certain pattern in a given image. In our previous papers [1-5], a fast algorithm for pattern detection using neural networks was presented. This algorithm was designed based on cross correlation in the frequency domain between the input image and the weights of the neural networks. Conversion of the image into a symmetric shape was established so that fast neural networks can give the same results as conventional neural networks. Another configuration of symmetry was suggested in [3,4] to improve the speed up ratio. In this paper, our previous algorithm for fast neural networks is developed. The frequency domain cross correlation is modified in order to compensate for the symmetry condition which is required of the input image. Two new ideas are introduced to modify the cross correlation algorithm. Both methods accelerate the speed of the fast neural networks, as there is no need for converting the input image into a symmetric one as before. Theoretical and practical results show that both approaches provide a faster speed up ratio than the previous algorithm.

Keywords: Fast Pattern Detection, Neural Networks, Modified Cross Correlation

10010 Novel Delay-Dependent Stability Criteria for Uncertain Discrete-Time Stochastic Neural Networks with Time-Varying Delays

Authors: Mengzhuo Luo, Shouming Zhong

Abstract:

This paper investigates the problem of exponential stability for a class of uncertain discrete-time stochastic neural networks with time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional and combining stochastic stability theory with the free-weighting matrix method, a delay-dependent exponential stability criterion is obtained in terms of LMIs. Compared with some previous results, the new conditions obtained in this paper are less conservative. Finally, two numerical examples are exploited to show the usefulness of the results derived.

Keywords: Delay-dependent stability, Neural networks, Time varying delay, Linear matrix inequality (LMI).

10009 Analysis of Periodic Solution of Delay Fuzzy BAM Neural Networks

Authors: Qianhong Zhang, Lihui Yang, Daixi Liao

Abstract:

In this paper, by employing a new Lyapunov functional and an elementary inequality analysis technique, some sufficient conditions are derived to ensure the existence and uniqueness of a periodic oscillatory solution for fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays, and all other solutions of the fuzzy BAM neural networks converge to the unique periodic solution. These criteria are presented in terms of system parameters and have important guiding significance in the design and applications of neural networks. Moreover, an example is given to illustrate the effectiveness and feasibility of the results obtained.

Keywords: Fuzzy BAM neural networks, Periodic solution, Global exponential stability, Time-varying delays

10008 Augmented Lyapunov Approach to Robust Stability of Discrete-time Stochastic Neural Networks with Time-varying Delays

Authors: Shu Lü, Shouming Zhong, Zixin Liu

Abstract:

In this paper, the robust exponential stability problem of discrete-time uncertain stochastic neural networks with time-varying delays is investigated. By introducing a new augmented Lyapunov function, some delay-dependent stability results are obtained in terms of the linear matrix inequality (LMI) technique. Compared with some existing results in the literature, the conservatism of the new criteria is reduced notably. Three numerical examples are provided to demonstrate the reduced conservatism and effectiveness of the proposed method.

Keywords: Robust exponential stability, delay-dependent stability, discrete-time neural networks, stochastic, time-varying delays.

10007 Stability Analysis of Neural Networks with Leakage, Discrete and Distributed Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

This paper deals with the problem of stability of neural networks with leakage, discrete and distributed delays. A new Lyapunov functional which contains some new double integral terms is introduced. By using an appropriate model transformation that shifts the considered systems into the neutral-type time-delay system, and by making use of some inequality techniques, delay-dependent criteria are developed to guarantee the stability of the considered system. Finally, numerical examples are provided to illustrate the usefulness of the proposed main results.

Keywords: Neural networks, Stability, Time-varying delays, Linear matrix inequality.

10006 Some Remarkable Properties of a Hopfield Neural Network with Time Delay

Authors: Kelvin Rozier, Vladimir E. Bondarenko

Abstract:

It is known that an analog Hopfield neural network with time delay can generate the outputs which are similar to the human electroencephalogram. To gain deeper insights into the mechanisms of rhythm generation by the Hopfield neural networks and to study the effects of noise on their activities, we investigated the behaviors of the networks with symmetric and asymmetric interneuron connections. The neural network under the study consists of 10 identical neurons. For symmetric (fully connected) networks all interneuron connections a_ij = +1; the interneuron connections for asymmetric networks form an upper triangular matrix with non-zero entries a_ij = +1. The behavior of the network is described by 10 differential equations, which are solved numerically. The results of simulations demonstrate some remarkable properties of a Hopfield neural network, such as linear growth of outputs, dependence of synchronization properties on the connection type, huge amplification of oscillation by the external uniform noise, and the capability of the neural network to transform one type of noise to another.
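
A minimal numerical sketch of the system class described here: ten identical neurons with a common transmission delay, integrated with an Euler scheme, for both the fully connected and the upper-triangular connection matrices. The functional form (unit decay, tanh activation, additive white noise) and all parameter values are assumptions for illustration, not the exact equations of the paper.

```python
import numpy as np

def simulate_delayed_hopfield(A, tau=10.0, dt=0.01, steps=20000, noise=0.01, seed=0):
    """Euler integration of dx/dt = -x(t) + A @ tanh(x(t - tau)) + noise."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    lag = int(round(tau / dt))
    x = np.zeros((steps + lag, n))
    x[:lag] = 0.1 * rng.standard_normal((lag, n))        # random initial history
    for k in range(lag, lag + steps - 1):
        drift = -x[k] + A @ np.tanh(x[k - lag])
        x[k + 1] = x[k] + dt * drift + np.sqrt(dt) * noise * rng.standard_normal(n)
    return x[lag:]

n = 10
A_symmetric = np.ones((n, n))                            # fully connected: a_ij = +1
A_asymmetric = np.triu(np.ones((n, n)))                  # upper-triangular connections
out_sym = simulate_delayed_hopfield(A_symmetric)
out_asym = simulate_delayed_hopfield(A_asymmetric)
print(out_sym[-1, :3], out_asym[-1, :3])
```

Comparing the two runs, and varying the noise level, gives a way to explore the kind of connection-dependent synchronization and noise effects discussed in the abstract.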

Keywords: Chaos, Hopfield neural network, noise, synchronization

10005 Improved Robust Stability Criteria for Discrete-time Neural Networks

Authors: Zixin Liu, Shu Lü, Shouming Zhong, Mao Ye

Abstract:

In this paper, the robust exponential stability problem of uncertain discrete-time recurrent neural networks with time-varying delay is investigated. By constructing a new augmented Lyapunov-Krasovskii function, some new improved stability criteria are obtained in the form of linear matrix inequalities (LMIs). Compared with some recent results in the literature, the conservatism of the new criteria is reduced notably. Two numerical examples are provided to demonstrate the reduced conservatism and effectiveness of the proposed results.

Keywords: Robust exponential stability, delay-dependent stability, discrete-time neural networks, time-varying delays.

10004 New Approaches on Stability Analysis for Neural Networks with Time-Varying Delay

Authors: Qingqing Wang, Shouming Zhong

Abstract:

Utilizing the Lyapunov functional method and combining linear matrix inequality (LMI) techniques with the integral inequality approach (IIA) to analyze the global asymptotic stability of delayed neural networks (DNNs), a new sufficient criterion ensuring the global stability of DNNs is obtained. The criteria are formulated in terms of a set of linear matrix inequalities, which can be checked efficiently by use of some standard numerical packages. To show that the stability condition in this paper gives much less conservative results than those in the literature, numerical examples are considered.

Keywords: Neural networks, Globally asymptotic stability, LMI approach, IIA approach, Time-varying delay.

10003 New Approaches on Exponential Stability Analysis for Neural Networks with Time-Varying Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

In this paper, the Lyapunov functional method is utilized, combining linear matrix inequality (LMI) techniques with the integral inequality approach (IIA), to study the exponential stability problem for neural networks with discrete and distributed time-varying delays. By constructing a new Lyapunov-Krasovskii functional and dividing the discrete delay interval into multiple segments, some new delay-dependent exponential stability criteria are established in terms of LMIs and can be easily checked. To show that the stability condition in this paper gives much less conservative results than those in the literature, numerical examples are considered.

Keywords: Neural networks, Exponential stability, LMI approach, Time-varying delays.

10002 Exponential Passivity Criteria for BAM Neural Networks with Time-Varying Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

In this paper, the exponential passivity criteria for BAM neural networks with time-varying delays are studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the exponential stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.

Keywords: BAM neural networks, Exponential passivity, LMI approach, Time-varying delays.

10001 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper a new fast simplification method is presented. This method realizes a Karnough map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for realization of complex functions is given. The simplified functions are implemented by using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, they can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogenous regions. This approach is applied to implement the XOR function, 16 logic functions on one bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
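
To make the detection-of-groups-of-ones idea concrete, the sketch below scores every starting position of a row of Karnough-map cells against an all-ones template by cross correlation computed in the frequency domain; a score equal to the group length marks a complete group. This ignores the wrap-around adjacency of a real Karnough map and is only an illustration of the frequency-domain search, not the paper's full method.

```python
import numpy as np

def find_groups_of_ones(cells, group_len):
    """Return starting indices where `group_len` adjacent cells are all ones,
    found by correlating with an all-ones template via the FFT."""
    n = len(cells)
    size = n + group_len - 1                              # zero-pad to avoid wrap-around
    spectrum = np.fft.rfft(cells, size) * np.fft.rfft(np.ones(group_len), size)
    corr = np.fft.irfft(spectrum, size)[group_len - 1:n]  # one score per start position
    return np.flatnonzero(np.isclose(corr, group_len))    # full overlap -> a group

kmap_row = np.array([0, 1, 1, 1, 1, 0, 0, 1], dtype=float)
print(find_groups_of_ones(kmap_row, 4))   # -> [1]: four adjacent ones starting at index 1
```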

Keywords: Boolean functions, simplification, Karnough map, implementation of logic functions, modular neural networks.

10000 An Analysis of Global Stability of Cohen-Grossberg Neural Networks with Multiple Time Delays

Authors: Zeynep Orman, Sabri Arik

Abstract:

This paper presents a new sufficient condition for the existence, uniqueness and global asymptotic stability of the equilibrium point for Cohen-Grossberg neural networks with multiple time delays. The results establish a relationship between the network parameters of the neural system independently of the delay parameters. The results are also compared with the previously reported results in the literature.

Keywords: Equilibrium and stability analysis, Cohen-Grossberg Neural Networks, Lyapunov Functionals.

9999 A Novel Hopfield Neural Network for Perfect Calculation of Magnetic Resonance Spectroscopy

Authors: Hazem M. El-Bakry

Abstract:

In this paper, an automatic determination algorithm for nuclear magnetic resonance (NMR) spectra of the metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations, is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can handle complex signals perfectly without any additional computation steps. This is a valuable advantage, as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)" that takes into account the damping rate of the NMR signal is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated and there is no effect on the performance of calculations when using the modified Hopfield neural networks.
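
The parametric model referred to here is, in the usual formulation, a sum of damped complex exponentials whose amplitudes, frequencies, damping rates, and phases are the parameters to be determined. The sketch below only generates such a signal and inspects its spectrum; all numerical values are made up, and the estimation step performed by the modified Hopfield neural network is not shown.

```python
import numpy as np

def nmr_signal(t, amplitudes, freqs, dampings, phases):
    """Sum of damped complex exponentials (free-induction-decay model)."""
    s = np.zeros_like(t, dtype=complex)
    for a, f, d, p in zip(amplitudes, freqs, dampings, phases):
        s += a * np.exp((-d + 2j * np.pi * f) * t + 1j * p)
    return s

fs = 2000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
s = nmr_signal(t, amplitudes=[1.0, 0.6], freqs=[150.0, 420.0],
               dampings=[8.0, 15.0], phases=[0.0, 0.3])
spectrum = np.fft.fftshift(np.fft.fft(s))
freq_axis = np.fft.fftshift(np.fft.fftfreq(len(t), 1.0 / fs))
print(freq_axis[np.argmax(np.abs(spectrum))])  # peak near the 150 Hz component
```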

Keywords: Hopfield Neural Networks, Cross Correlation, Nuclear Magnetic Resonance, Magnetic Resonance Spectroscopy, Fast Fourier Transform.

9998 Global Exponential Stability of Impulsive BAM Fuzzy Cellular Neural Networks with Time Delays in the Leakage Terms

Authors: Liping Zhang, Kelin Li

Abstract:

In this paper, a class of impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms is formulated and investigated. By establishing a delay differential inequality and using M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.

Keywords: Global exponential stability, bidirectional associative memory, fuzzy cellular neural networks, leakage delays, impulses.

9997 Comparison of Two Interval Models for Interval-Valued Differential Evolution

Authors: Hidehiko Okada

Abstract:

The author previously proposed an extension of differential evolution (DE). The proposed method extends the processes of DE to handle interval numbers as genotype values, so that DE can be applied to interval-valued optimization problems. The interval DE can employ either of two interval models, the lower and upper (LU) model or the center and width (CW) model, for specifying genotype values. The ability of the interval DE to search for solutions may depend on the model. In this paper, the author compares the two models to investigate which model contributes better for the interval DE to find better solutions. The interval DE is applied to the evolutionary training of interval-valued neural networks. A preliminary study indicates that the CW model is better than the LU model: the interval DE with the CW model could evolve better neural networks.
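
The two genotype encodings compared in the paper can be summarised as follows: the LU model stores (lower, upper) and may need repair when a variation operator makes the lower gene exceed the upper one, while the CW model stores (center, width) and only needs the width kept non-negative. The sketch below illustrates that difference under a DE-style perturbation; the repair rules and the perturbation are illustrative assumptions, not the author's exact operators.

```python
import numpy as np

def lu_to_interval(lower, upper):
    """LU model: repair by swapping if the perturbed genes are out of order."""
    return (min(lower, upper), max(lower, upper))

def cw_to_interval(center, width):
    """CW model: only the width needs clamping to stay non-negative."""
    w = abs(width)
    return (center - w / 2.0, center + w / 2.0)

rng = np.random.default_rng(5)
lower, upper = 0.2, 0.8                    # the same interval in both encodings
center, width = 0.5, 0.6
noise = rng.normal(0.0, 0.5, size=2)       # a DE-style perturbation of the two genes
print(lu_to_interval(lower + noise[0], upper + noise[1]))
print(cw_to_interval(center + noise[0], width + noise[1]))
```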

Keywords: Evolutionary algorithms, differential evolution, neural network, neuroevolution, interval arithmetic.
