Search results for: Hopfield neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3052

3052 Some Remarkable Properties of a Hopfield Neural Network with Time Delay

Authors: Kelvin Rozier, Vladimir E. Bondarenko

Abstract:

It is known that an analog Hopfield neural network with time delay can generate outputs similar to the human electroencephalogram. To gain deeper insight into the mechanisms of rhythm generation by Hopfield neural networks and to study the effects of noise on their activity, we investigated the behavior of networks with symmetric and asymmetric interneuron connections. The neural network under study consists of 10 identical neurons. For symmetric (fully connected) networks, all interneuron connections aij = +1; for asymmetric networks, the interneuron connections form an upper triangular matrix with non-zero entries aij = +1. The behavior of the network is described by 10 differential equations, which are solved numerically. The simulation results demonstrate some remarkable properties of a Hopfield neural network, such as linear growth of the outputs, dependence of the synchronization properties on the connection type, huge amplification of oscillations by external uniform noise, and the capability of the neural network to transform one type of noise into another.
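
As a rough, hedged illustration of the kind of model described above (the exact equations, time delay, noise amplitude and initial history below are assumptions, not the authors' values), a 10-neuron delayed analog Hopfield network can be simulated with a simple Euler scheme and a delay buffer:

import numpy as np

# Assumed dynamics: du_i/dt = -u_i(t) + sum_j a_ij * tanh(u_j(t - tau)) + noise
N, tau, dt, T = 10, 10.0, 0.01, 200.0
delay_steps = int(tau / dt)

A_sym = np.ones((N, N))                    # symmetric case: fully connected, a_ij = +1
A_asym = np.triu(np.ones((N, N)))          # asymmetric case: upper-triangular, a_ij = +1

def simulate(A, noise_amp=0.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    u = np.zeros((steps + delay_steps, N))
    u[:delay_steps] = 0.1 * rng.standard_normal((delay_steps, N))   # initial history
    for k in range(delay_steps, steps + delay_steps - 1):
        delayed = np.tanh(u[k - delay_steps])                       # delayed activations
        noise = noise_amp * rng.uniform(-1.0, 1.0, N)               # external uniform noise
        u[k + 1] = u[k] + dt * (-u[k] + A @ delayed + noise)
    return u[delay_steps:]

outputs_sym = simulate(A_sym)                    # synchronization depends on connection type
outputs_asym = simulate(A_asym, noise_amp=0.5)   # noise can strongly amplify the oscillations
print(outputs_sym.shape, outputs_asym.shape)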

Keywords: Chaos, Hopfield neural network, noise, synchronization

3051 A Novel Hopfield Neural Network for Perfect Calculation of Magnetic Resonance Spectroscopy

Authors: Hazem M. El-Bakry

Abstract:

In this paper, an automatic algorithm for determining the nuclear magnetic resonance (NMR) spectra of metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations, is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can handle complex-valued signals perfectly without any additional computation steps, which is a valuable advantage as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)" that takes into account the damping rate of the NMR signal is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated, and using the modified Hopfield neural network has no effect on the accuracy of the calculations.
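
The frequency-domain speed-up mentioned above rests on a standard identity: the sliding dot products between an input signal and a neuron's weight window equal a cross-correlation, which can be computed all at once with the FFT. A minimal sketch (array sizes and values are illustrative, not the paper's network):

import numpy as np

def sliding_dot_direct(x, w):
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def sliding_dot_fft(x, w):
    L = len(x) + len(w) - 1
    X = np.fft.fft(x, L)
    W = np.fft.fft(w, L)
    corr = np.fft.ifft(X * np.conj(W)).real      # cross-correlation in the frequency domain
    return corr[:len(x) - len(w) + 1]

x = np.random.randn(1024)                        # input values (e.g. samples of a signal)
w = np.random.randn(32)                          # one neuron's input weights
print(np.allclose(sliding_dot_direct(x, w), sliding_dot_fft(x, w)))   # True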

Keywords: Hopfield Neural Networks, Cross Correlation, Nuclear Magnetic Resonance, Magnetic Resonance Spectroscopy, Fast Fourier Transform.

3050 Implementation of an Associative Memory Using a Restricted Hopfield Network

Authors: Tet H. Yeap

Abstract:

An analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden, connected by directional weighted paths forming a bipartite graph with no intralayer connections. An energy (Lyapunov) function is derived to show that the proposed network converges to stable states. By introducing hidden nodes, the proposed network can be trained to store patterns and has increased memory capacity. When trained as an associative memory, simulation results show that it performs better than a classical Hopfield network, recalling stored patterns more reliably when the input is noisy.

Keywords: Associative memory, Hopfield network, Lyapunov function, Restricted Hopfield network.

3049 Globally Exponential Stability for Hopfield Neural Networks with Delays and Impulsive Perturbations

Authors: Adnene Arbi, Chaouki Aouiti, Abderrahmane Touati

Abstract:

In this paper, we consider the global exponential stability of the equilibrium point of Hopfield neural networks with delays and impulsive perturbations. Some new exponential stability criteria for the system are derived by using the Lyapunov functional method and the linear matrix inequality approach for estimating the upper bound of the derivative of the Lyapunov functional. Finally, we give two numerical examples illustrating the effectiveness of our theoretical results.
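
For orientation, the class of systems usually meant by this title can be written as follows (a generic sketch in LaTeX; the coefficients, delays and impulse maps are assumptions, not necessarily those used in the paper):

\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
             + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_j)\bigr) + I_i, \qquad t \neq t_k,
\Delta x_i(t_k) = x_i(t_k^{+}) - x_i(t_k^{-}) = J_{ik}\bigl(x_i(t_k^{-})\bigr), \qquad k = 1, 2, \ldots

Global exponential stability of an equilibrium x^{*} then means that there exist constants M \geq 1 and \lambda > 0 such that \lVert x(t) - x^{*} \rVert \leq M \lVert \varphi - x^{*} \rVert e^{-\lambda t} for every initial history \varphi.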

Keywords: Hopfield Neural Networks, Exponential stability.

3048 Exponential Stability of Uncertain Takagi-Sugeno Fuzzy Hopfield Neural Networks with Time Delays

Authors: Meng Hu, Lili Wang

Abstract:

In this paper, based on linear matrix inequalities (LMIs) and Lyapunov functional theory, an exponential stability criterion is obtained for a class of uncertain Takagi-Sugeno fuzzy Hopfield neural networks (TSFHNNs) with time delays. We choose a generalized Lyapunov functional and introduce a parameterized model transformation with free weighting matrices; these techniques lead to a generalized and less conservative stability condition that guarantees a wider stability region. Finally, an example is given to illustrate our results using the MATLAB LMI toolbox.

Keywords: Hopfield neural network, linear matrix inequality, exponential stability, time delay, T-S fuzzy model.

3047 New Sufficient Conditions of Stability for Discrete-Time Non-autonomous Delayed Hopfield Neural Networks

Authors: Adnene Arbi, Chaouki Aouiti, Abderrahmane Touati

Abstract:

In this paper, we consider the uniform asymptotic stability, global asymptotic stability, and global exponential stability of the equilibrium point of discrete Hopfield neural networks with delays. Some new stability criteria for the system are derived by using the Lyapunov functional method and the linear matrix inequality approach for estimating the upper bound of the derivative of the Lyapunov functional.

Keywords: Hopfield neural networks, uniform asymptotic stability, global asymptotic stability, exponential stability.

3046 Avoiding Catastrophic Forgetting by a Dual-Network Memory Model Using a Chaotic Neural Network

Authors: Motonobu Hattori

Abstract:

In neural networks, when new patterns are learned, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. In this paper, we propose a biologically inspired neural network model which overcomes this problem. The proposed model consists of two distinct networks: one is a Hopfield-type chaotic associative memory and the other is a multilayer neural network. We consider that these networks correspond to the hippocampus and the neocortex of the brain, respectively. Incoming information is first stored in the hippocampal network with a fast learning algorithm. The stored information is then recalled by the chaotic behavior of each neuron in the hippocampal network. Finally, it is consolidated in the neocortical network by using pseudopatterns. Computer simulation results show that the proposed model avoids catastrophic forgetting much better than conventional models.

Keywords: catastrophic forgetting, chaotic neural network, complementary learning systems, dual-network

3045 Hopfield Network as Associative Memory with Multiple Reference Points

Authors: Domingo López-Rodríguez, Enrique Mérida-Casermeiro, Juan M. Ortiz-de-Lazcano-Lobato

Abstract:

The Hopfield model of associative memory is studied in this work, in particular two main problems that it possesses: the appearance of spurious patterns in the learning phase, implying the well-known effect of storing the opposite of each pattern, and its reduced capacity, meaning that it is not possible to store a great number of patterns without increasing the error probability in the retrieval phase. In this paper, a method to avoid spurious patterns is presented and studied, and an explanation of the previously mentioned effect is given. Another technique to increase the capacity of a network is also proposed, based on the idea of using several reference points when storing patterns. It is studied in depth, and an explicit formula for the capacity of the network with this technique is provided.
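
For reference, a minimal sketch of the classical Hopfield associative memory that this work starts from (Hebbian storage and asynchronous sign updates; the random patterns and sizes are illustrative). With Hebbian storage, the opposite of each stored pattern is automatically a stable state, which is exactly the spurious-pattern effect discussed above.

import numpy as np

def store(patterns):                       # patterns: rows of +/-1 values
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]               # Hebbian outer-product rule
    np.fill_diagonal(W, 0.0)               # no self-connections
    return W

def recall(W, x, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        i = rng.integers(len(x))           # asynchronous update of one random unit
        x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

patterns = np.sign(np.random.randn(3, 64))               # three random +/-1 patterns
W = store(patterns)
noisy = patterns[0] * np.sign(np.random.rand(64) - 0.1)  # flip roughly 10% of the bits
print(np.mean(recall(W, noisy) == patterns[0]))          # fraction of bits recovered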

Keywords: Associative memory, Hopfield network, network capacity, spurious patterns.

3044 Blind Image Deconvolution by Neural Recursive Function Approximation

Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu

Abstract:

This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.

Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.

3043 A Combined Neural Network Approach to Soccer Player Prediction

Authors: Wenbin Zhang, Hantian Wu, Jian Tang

Abstract:

An artificial neural network is a mathematical model inspired by biological neural networks. There are several kinds of neural networks, and they are widely used in many areas, such as prediction, detection, and classification. Meanwhile, in day-to-day life, people often have to make difficult decisions. For example, the coach of a soccer club has to decide which offensive player to select for a certain game. This work describes a novel neural network, combining the General Regression Neural Network and Probabilistic Neural Networks, to help a soccer coach make an informed decision.
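
As a hedged sketch of one of the two building blocks named above, a General Regression Neural Network reduces to Nadaraya-Watson kernel regression; the player features and scores below are made-up toy data, not the authors' dataset:

import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=0.5):
    # Gaussian kernel weights between each query point and every training sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * spread ** 2))
    return (K @ y_train) / K.sum(axis=1)        # kernel-weighted average of the targets

X_train = np.random.rand(50, 2)                 # e.g. hypothetical scoring rate, pass accuracy
y_train = 0.7 * X_train[:, 0] + 0.3 * X_train[:, 1]
X_query = np.array([[0.8, 0.6], [0.2, 0.9]])
print(grnn_predict(X_train, y_train, X_query))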

Keywords: General Regression Neural Network, Probabilistic Neural Networks, Neural function.

3042 A Literature Survey of Neural Network Applications for Shunt Active Power Filters

Authors: S. Janpong, K-L. Areerak, K-N. Areerak

Abstract:

This paper aims to present a review of applications of neural networks in shunt active power filters (SAPF). From the review, three of the four components of the SAPF structure, namely harmonic detection, compensating current control, and DC bus voltage control, have adopted some form of neural network architecture, either as part of the component or as a complete substitute. The objective of most papers applying neural networks to SAPF is to increase the efficiency, stability, accuracy, robustness, and tracking ability of each component. Moreover, minimizing the unwanted signal due to distortion is the ultimate goal of applying neural networks to the SAPF. The most common neural network architectures in SAPF applications are the ADALINE and the backpropagation (BP) network.
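
As an example of the ADALINE approach mentioned above (a hedged sketch, not taken from any specific paper in the review), an ADALINE with sine/cosine inputs at the fundamental frequency can track the fundamental component of a distorted load current with the Widrow-Hoff (LMS) rule; the residual is then the harmonic content the SAPF must compensate:

import numpy as np

f0, fs, eta = 50.0, 10_000.0, 0.05            # fundamental (Hz), sample rate (Hz), learning rate
t = np.arange(0.0, 0.2, 1.0 / fs)
i_load = 10*np.sin(2*np.pi*f0*t) + 2*np.sin(2*np.pi*5*f0*t) + 1*np.sin(2*np.pi*7*f0*t)

w = np.zeros(2)                               # weights for the sin/cos of the fundamental
i_fund = np.zeros_like(i_load)
for k, tk in enumerate(t):
    x = np.array([np.sin(2*np.pi*f0*tk), np.cos(2*np.pi*f0*tk)])
    y = w @ x                                 # ADALINE output: estimated fundamental
    e = i_load[k] - y
    w += eta * e * x                          # Widrow-Hoff (LMS) update
    i_fund[k] = y

i_comp_ref = i_load - i_fund                  # reference compensating current (harmonics)
print(w)                                      # should settle near [10, 0]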

Keywords: Active power filter, neural network, harmonic distortion, harmonic detection and compensation, non-linear load.

3041 Applications of Cascade Correlation Neural Networks for Cipher System Identification

Authors: B. Chandra, P. Paul Varghese

Abstract:

Cryptosystem identification is one of the challenging tasks in cryptanalysis. The paper discusses the possibility of employing neural networks for the identification of cipher systems from cipher texts. A Cascade Correlation Neural Network and a Back Propagation Network have been employed for identification of cipher systems. Very large collections of cipher texts were generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising results were obtained in terms of accuracy using both neural network models, but it was observed that the Cascade Correlation Neural Network model performed better than the Back Propagation Network.

Keywords: Back Propagation Neural Networks, Cascade Correlation Neural Network, Crypto systems, Block Cipher, Stream Cipher.

3040 Spline Basis Neural Network Algorithm for Numerical Integration

Authors: Lina Yan, Jingjing Di, Ke Wang

Abstract:

A new basis function neural network algorithm is proposed for numerical integration. The main idea is to construct a neural network model based on spline basis functions, which is used to approximate the integrand by training the neural network weights. The convergence theorem of the neural network algorithm, the theorem for numerical integration, and one corollary are presented and proved. Numerical examples, compared with other methods, show that the algorithm is effective and has characteristics such as high precision and no requirement that the integrand be known analytically. Thus, the algorithm presented in this paper can be widely applied in many engineering fields.
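
A hedged sketch of the basic idea (the knot layout, basis order and test integrand are assumptions, not the paper's algorithm): fit the output weights of a one-layer network of linear spline (hat) basis functions to samples of the integrand, then integrate the fitted model in closed form, since the integral of each basis function is known analytically.

import numpy as np

def hat(x, c, h):                                            # linear B-spline centred at knot c
    return np.clip(1.0 - np.abs(x - c) / h, 0.0, None)

def nn_integrate(f, a, b, n_knots=21, n_samples=400):
    knots = np.linspace(a, b, n_knots)
    h = knots[1] - knots[0]
    xs = np.linspace(a, b, n_samples)
    Phi = np.stack([hat(xs, c, h) for c in knots], axis=1)   # design matrix of basis outputs
    w, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)          # "train" the output weights
    basis_int = np.full(n_knots, h)                          # integral of each interior hat
    basis_int[0] = basis_int[-1] = h / 2.0                   # boundary hats are truncated
    return w @ basis_int                                     # closed-form integral of the fit

approx = nn_integrate(np.sin, 0.0, np.pi)
print(approx, abs(approx - 2.0))                             # exact value of the integral is 2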

Keywords: Numerical integration, Spline basis function, Neural network algorithm

3039 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

Authors: Mousavian, M. Moghimi Mofrad, M. H. Vakili, D. Ashouri, R. Alizadeh

Abstract:

The aim of this research is to use artificial neural network computing technology for estimating the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator that uses back-propagation as the learning algorithm over a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and its predictions are in good agreement with the experimental data.

Keywords: Neural Network, Net Heating Value, Crude Oil, Experimental, Modeling.

3038 Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

Authors: Khaing Win Mar, Thinn Thu Naing

Abstract:

Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used in the model. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by changing the number of input and hidden nodes from 1 to 10 each, with only one output node. The optimum model with the suitable number of nodes is selected according to the minimum forecast error. Measuring network performance by the root mean square error (RMSE), the experimental results show that a 3-input, 10-hidden, 1-output architecture gives the best prediction result for monthly precipitation in Myanmar.
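
A hedged sketch of the model-selection loop described above, with a synthetic monthly series standing in for the (unavailable) Myanmar precipitation data; the sklearn estimator and the 25% hold-out split are assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
months = np.arange(240)
rain = 100 + 80*np.sin(2*np.pi*months/12) + 10*rng.standard_normal(240)   # toy series

best = (np.inf, None)
for n_in in range(1, 11):                      # 1..10 lagged inputs
    X = np.stack([rain[i:i + n_in] for i in range(len(rain) - n_in)])
    y = rain[n_in:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    for n_hid in range(1, 11):                 # 1..10 hidden nodes, one output node
        model = MLPRegressor(hidden_layer_sizes=(n_hid,), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)
        rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
        if rmse < best[0]:
            best = (rmse, (n_in, n_hid))

print("best RMSE %.2f with architecture %s" % best)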

Keywords: Precipitation prediction, monthly precipitation, neural network models, Myanmar.

3037 Development of Gas Chromatography Model: Propylene Concentration Using Neural Network

Authors: Areej Babiker Idris Babiker, Rosdiazli Ibrahim

Abstract:

Gas chromatography (GC) is one of the most widely used techniques in analytical chemistry. However, GC has a high initial cost and requires frequent maintenance. This paper examines the feasibility and potential of using a neural network model as an alternative whenever GC is unavailable. It can also be part of system verification of the performance of GC for preventive maintenance activities. It shows the performance of a multilayer perceptron (MLP) with a backpropagation structure. Results demonstrate that the neural network model, when trained using this structure, provides adequate results and is suitable for this purpose.

Keywords: Analyzer, Levenberg-Marquardt, Gas chromatography, Neural network

3036 Efficient System for Speech Recognition using General Regression Neural Network

Authors: Abderrahmane Amrouche, Jean Michel Rouvaen

Abstract:

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to similar recognition systems based on the multilayer perceptron (MLP), the recurrent neural network (RNN), and the well-known discrete hidden Markov model (HMM-VQ), which we have also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.

Keywords: Speech Recognition, General Regression Neural Network, Hidden Markov Model, Recurrent Neural Network, Arabic Digits.

3035 Identify Features and Parameters to Devise an Accurate Intrusion Detection System Using Artificial Neural Network

Authors: Saman M. Abdulla, Najla B. Al-Dabagh, Omar Zakaria

Abstract:

The aim of this article is to explain how features of attacks can be extracted from packets, and how feature vectors can be built and then applied to the input of an analysis stage. For the analysis, the work deploys a feedforward back-propagation neural network to act as a misuse intrusion detection system, using ten types of attacks as examples for training and testing the network. It explains how the packets are analyzed to extract features, and shows how selecting the right features, building correct vectors, and correctly choosing the training method and the number of nodes in the hidden layer affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the artificial neural network.

Keywords: Artificial Neural Network, Attack Features, Misuse Intrusion Detection System, Training Parameters.

3034 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network which has complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multilayer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the error to converge, on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

3033 Application of Neural Networks in Financial Data Mining

Authors: Defu Zhang, Qingshan Jiang, Xin Li

Abstract:

This paper deals with the application of a well-known neural network technique, the multilayer back-propagation (BP) neural network, in financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system can forecast buying and selling signals according to the prediction of future trends of the stock market, and provide decision support for stock investors. Simulation over seven years of the Shanghai Composite Index shows that the return achieved by this mining system is about three times that achieved by a buy-and-hold strategy, so it is advantageous to apply neural networks to forecast financial time series, and different kinds of investors could benefit from it.

Keywords: Data mining, neural network, stock forecasting.

3032 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy

Abstract:

A complex-valued neural network is a neural network which has complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the error to converge, on a generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Generalized Mean neuron model, Signal processing.

3031 Accelerating Integer Neural Networks On Low Cost DSPs

Authors: Thomas Behan, Zaiyi Liao, Lian Zhao, Chunting Yang

Abstract:

In this paper, low-end digital signal processors (DSPs) are applied to accelerate integer neural networks. The use of DSPs to accelerate neural networks has been a topic of study for some time and has demonstrated significant performance improvements. Recently, work has been done on integer-only neural networks, which greatly reduce hardware requirements and thus allow for cheaper hardware implementations. DSPs with arithmetic logic units (ALUs) that support floating- or fixed-point arithmetic are generally more expensive than their integer-only counterparts due to increased circuit complexity. However, if the need for floating- or fixed-point math operations can be removed, then simpler, lower-cost DSPs can be used. To achieve this, an integer-only neural network is created in this paper, which is then accelerated by using DSP instructions to improve performance.

Keywords: Digital Signal Processor (DSP), Integer Neural Network (INN), Low Cost Neural Network, Integer Neural Network DSP Implementation.

3030 Bayesian Deep Learning Algorithms for Classifying COVID-19 Images

Authors: I. Oloyede

Abstract:

The study investigates the accuracy and loss of deep learning algorithms on a set of coronavirus (COVID-19) images by comparing a Bayesian convolutional neural network with a traditional convolutional neural network on a low-dimensional dataset. Of the 50 sets of X-ray images, 25 were COVID-19 and the remaining 20 were normal; twenty images were set aside for training, while five were set aside for validation and used to ascertain the accuracy of the model. The study found that the Bayesian convolutional neural network outperformed the conventional neural network on the low-dimensional dataset, which could otherwise have exhibited underfitting. The study therefore recommends the Bayesian convolutional neural network (BCNN) for image detection in computer vision applications such as Android apps.

Keywords: BCNN, CNN, Images, COVID-19, Deep Learning.

3029 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

Authors: Abdelhadi Lotfi, Abdelkader Benyettou

Abstract:

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller number consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. Performance of the network is compared against performance of standard PNN for different databases from the UCI database repository. Results show an important gain in network size and performance.
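
For context, a hedged sketch of a standard PNN classifier (Parzen windows with one Gaussian pattern unit per training sample, which is exactly the hidden-layer size problem addressed above); the cross-validation pruning itself is not reproduced here, and the data are synthetic:

import numpy as np

def pnn_predict(X_train, y_train, X_query, sigma=0.3):
    classes = np.unique(y_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))                  # pattern-layer activations
    scores = np.stack([K[:, y_train == c].mean(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]             # summation and decision layers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(1.5, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X_test = np.array([[0.1, -0.2], [1.4, 1.6]])
print(pnn_predict(X, y, X_test))                          # expected: [0 1]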

Keywords: Classification, probabilistic neural networks, network optimization, pattern recognition.

3028 A Cognitive Model for Frequency Signal Classification

Authors: Rui Antunes, Fernando V. Coito

Abstract:

This article presents the development of a neural network cognitive model for the classification and detection of different frequency signals. The basic structure of the implemented neural network was inspired by the perception process that humans generally use to visually distinguish between high- and low-frequency signals. It is based on the dynamic neural network concept, with delays. A special two-layer feedforward neural net structure was successfully implemented, trained, and validated to achieve a minimum target error. Training confirmed that this neural net structure descends and converges to a human-perception-like classification solution, even when starting far away from the target.

Keywords: Neural Networks, Signal Classification, Adaptive Filters, Cognitive Neuroscience

3027 Inverse Problem Methodology for the Measurement of the Electromagnetic Parameters Using MLP Neural Network

Authors: T. Hacib, M. R. Mekideche, N. Ferkha

Abstract:

This paper presents an approach based on the use of a supervised feed-forward neural network, namely a multilayer perceptron (MLP), and the finite element method (FEM) to solve the inverse problem of parameter identification. The approach is used to identify unknown parameters of ferromagnetic materials. The methodology consists of simulating a large number of parameter values for a material under test using the finite element method (FEM). Both variations in relative magnetic permeability and electrical conductivity of the material under test are considered. The obtained results are then used to generate a set of vectors for training the MLP neural network. Finally, the trained neural network is used to evaluate a group of new materials, simulated by the FEM but not belonging to the original dataset. Noisy data added to the probe measurements are used to enhance the robustness of the method. The results demonstrate the efficiency of the proposed approach and encourage future work on this subject.

Keywords: Inverse problem, MLP neural network, parameter identification, FEM.

3026 Comparison between Beta Wavelets Neural Networks, RBF Neural Networks and Polynomial Approximation for 1D, 2D Functions Approximation

Authors: Wajdi Bellil, Chokri Ben Amar, Adel M. Alimi

Abstract:

This paper proposes a comparison between wavelet neural networks (WNN), RBF neural networks and polynomial approximation in terms of 1-D and 2-D function approximation. We present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: R^n -> R from scattered samples (x_i, y_i = f(x_i)), i = 1, ..., n, where, first, we have little a priori knowledge of the unknown function f (it lives in some infinite-dimensional smooth function space) and, second, the function approximation process is performed iteratively: each new measurement of the function (x_i, f(x_i)) is used to compute a new estimate as an approximation of the function f. Simulation results are presented to validate the generalization ability and efficiency of the proposed Beta wavelet network.
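
As a hedged sketch of the RBF baseline in this comparison (Gaussian units with fixed centres and least-squares output weights; the Beta wavelet network itself would swap the Gaussian for a Beta wavelet activation, and the test function below is an assumption):

import numpy as np

def rbf_design(x, centres, width):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, 80)                       # scattered samples (x_i, f(x_i))
y_train = np.sinc(x_train)                             # assumed 1-D test function
centres = np.linspace(-3, 3, 15)                       # fixed RBF centres
width = 0.5

Phi = rbf_design(x_train, centres, width)              # hidden-layer outputs
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)      # train the output weights

x_test = np.linspace(-3, 3, 200)
y_hat = rbf_design(x_test, centres, width) @ w
print("max abs error:", np.max(np.abs(y_hat - np.sinc(x_test))))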

Keywords: Beta wavelets networks, RBF neural network, training algorithms, MSE, 1-D, 2D function approximation.

3025 Margin-Based Feed-Forward Neural Network Classifiers

Authors: Han Xiao, Xiaoyan Zhu

Abstract:

The margin-based principle was proposed a long time ago, and it has been proved that this principle can reduce structural risk and improve performance both theoretically and practically. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular, now with deeper architectures. However, the training algorithm of feed-forward neural networks is derived from the Widrow-Hoff principle, which means minimizing the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.

Keywords: Max-Margin Principle, Feed-Forward Neural Network, Classifier.

3024 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex relationships with automatically extracted features. Nowadays, deep neural networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation, image editing, etc. In this work, the facial emotion recognition task is performed by a proposed convolutional neural network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.
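
A hedged sketch of a small CNN of the kind described above for FER2013 (48x48 grayscale images, 7 emotion classes); the layer sizes, dropout rates and training settings below are illustrative choices, not the architecture reported in the paper:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),                    # FER2013 images: 48x48, 1 channel
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),              # one unit per FER2013 emotion class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=64, epochs=30)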

Keywords: Convolutional Neural Network, Deep Learning, Deep Learning Based FER, Facial Emotion Recognition.

3023 Neural Network Controller for Mobile Robot Motion Control

Authors: Jasmin Velagic, Nedim Osmic, Bakir Lacevic

Abstract:

In this paper a neural network-based controller is designed for the motion control of a mobile robot. The paper treats the problems of trajectory following and posture stabilization of a mobile robot with nonholonomic constraints. For this purpose a recurrent neural network with one hidden layer is used. It learns the relationship between the linear velocities and the position errors of the mobile robot. This neural network is trained on-line using the backpropagation optimization algorithm with an adaptive learning rate. The optimization algorithm is performed at each sample time to compute the optimal control inputs. The performance of the proposed system is investigated using a kinematic model of the mobile robot.

Keywords: Mobile robot, kinematic model, neural network, motion control, adaptive learning rate.
