Search results for: Dynamic neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4080


3990 Optimization of the Input Layer Structure for Feed-Forward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of the feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. By utilizing the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and number of neurons for the neural network are investigated.
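
The regressor-selection step can be illustrated with a short, hypothetical sketch: candidate lagged inputs and outputs are ranked by their absolute correlation with the current output, and only the strongest ones are kept as NN inputs. The synthetic data, lag depth and number of regressors kept are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_narx_regressors(u, y, max_lag=5, keep=4):
    """Rank candidate NARX regressors (lagged u and y) by absolute
    correlation with y(t) and keep the strongest ones."""
    T = len(y)
    candidates, names = [], []
    for k in range(1, max_lag + 1):
        candidates.append(y[max_lag - k:T - k]); names.append(f"y(t-{k})")
        candidates.append(u[max_lag - k:T - k]); names.append(f"u(t-{k})")
    target = y[max_lag:]
    scores = [abs(np.corrcoef(c, target)[0, 1]) for c in candidates]
    ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
    return ranked[:keep]   # these form the reduced NN input layer

# Toy usage with a synthetic second-order system (purely illustrative)
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1]
print(select_narx_regressors(u, y))
```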

Keywords: Correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation.

3989 Stability Analysis of Neural Networks with Leakage, Discrete and Distributed Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

This paper deals with the problem of stability of neural networks with leakage, discrete and distributed delays. A new Lyapunov functional containing new double integral terms is introduced. By using an appropriate model transformation that shifts the considered system into a neutral-type time-delay system, and by making use of some inequality techniques, delay-dependent criteria are developed to guarantee the stability of the considered system. Finally, numerical examples are provided to illustrate the usefulness of the proposed main results.
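
As a hedged illustration of the kind of construction involved (a generic textbook form with leakage delay δ, discrete delay τ(t) and distributed delay σ, not necessarily the exact functional used by the authors), a Lyapunov-Krasovskii candidate typically combines quadratic, single-integral and double-integral terms:

```latex
V(x_t) = x^{T}(t) P x(t)
       + \int_{t-\delta}^{t} x^{T}(s) Q_1 x(s)\, ds
       + \int_{t-\tau(t)}^{t} x^{T}(s) Q_2 x(s)\, ds
       + \int_{-\sigma}^{0} \int_{t+\theta}^{t} f^{T}(x(s)) R_1 f(x(s))\, ds\, d\theta
       + \int_{-\delta}^{0} \int_{t+\theta}^{t} \dot{x}^{T}(s) R_2 \dot{x}(s)\, ds\, d\theta ,
\qquad P, Q_1, Q_2, R_1, R_2 \succ 0 .
```

Requiring the derivative of V along the system trajectories to be negative and bounding the resulting cross terms with integral inequalities is what yields the delay-dependent LMI criteria mentioned above.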

Keywords: Neural networks, Stability, Time-varying delays, Linear matrix inequality.

3988 Evolving Neural Networks using Moment Method for Handwritten Digit Recognition

Authors: H. El Fadili, K. Zenkouar, H. Qjidaa

Abstract:

This paper proposes a neural network weights and topology optimization using genetic evolution and the backpropagation training algorithm. The proposed crossover and mutation operators aim to adapt the network architectures and weights during the evolution process. Through a specific inheritance procedure, the weights are transmitted from the parents to their offspring, which allows re-exploitation of the already trained networks and hence accelerates the global convergence of the algorithm. In the preprocessing phase, a new feature extraction method is proposed based on Legendre moments with the maximum entropy principle (MEP) as a selection criterion. This allows a global search-space reduction in the design of the networks. The proposed method has been applied and tested on the well-known MNIST database of handwritten digits.

Keywords: Genetic algorithm, Legendre Moments, MEP, Neural Network.

3987 Complex-Valued Neural Networks for Blind Equalization of Time-Varying Channels

Authors: Rajoo Pandey

Abstract:

Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function, and a neural network gives a smaller residual error than a linear structure. The efficacy of complex-valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present two neural network models for blind equalization of time-varying channels for M-ary QAM and PSK signals. The complex-valued activation functions, suitable for these signal constellations in a time-varying environment, are introduced and the learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
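
A minimal sketch of the CMA learning rule, for a single linear complex-valued equalizer over a toy time-invariant QPSK channel rather than the feedforward NN with special activation functions and the time-varying channels studied in the paper; all signal parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, mu = 5000, 7, 1e-3                      # symbols, equalizer taps, step size
qpsk = (rng.integers(0, 2, N) * 2 - 1 + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)
channel = np.array([1.0, 0.4 + 0.3j, 0.2j])   # toy time-invariant channel for illustration
x = np.convolve(qpsk, channel)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

R2 = np.mean(np.abs(qpsk) ** 4) / np.mean(np.abs(qpsk) ** 2)   # constant-modulus radius
w = np.zeros(L, dtype=complex); w[L // 2] = 1.0                # centre-tap initialisation

outputs = []
for n in range(L, N):
    u = x[n - L:n][::-1]                 # regressor of received samples
    yhat = np.vdot(w, u)                 # equalizer output  (w^H u)
    outputs.append(yhat)
    e = yhat * (np.abs(yhat) ** 2 - R2)  # CMA error from J = E[(|y|^2 - R2)^2]
    w -= mu * e.conj() * u               # stochastic-gradient update
outputs = np.array(outputs)
print("modulus spread, first vs last 500 symbols:",
      np.std(np.abs(outputs[:500])), np.std(np.abs(outputs[-500:])))
```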

Keywords: Blind Equalization, Neural Networks, Constant Modulus Algorithm, Time-varying channels.

3986 Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process

Authors: Petia Georgieva, Sebastião Feyo de Azevedo

Abstract:

This paper is focused on issues of nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. It is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. A feed forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure to determine the values of the control action over a specified (control) horizon that minimize a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. However, the simulation results demonstrate smooth behavior of the control actions and satisfactory reference tracking.
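
The receding-horizon loop described above can be sketched generically: a trained FFNN predictor is rolled forward over the prediction horizon and a numerical optimizer selects the feed-rate sequence minimizing a tracking cost. The model dimensions, horizons, bounds and (random) weights below are placeholders, not the authors' plant model.

```python
import numpy as np
from scipy.optimize import minimize

def nn_predict(state, feed_rate, W1, b1, W2, b2):
    """One-step-ahead FFNN prediction of the process state (assumed model shape)."""
    z = np.tanh(W1 @ np.concatenate([state, [feed_rate]]) + b1)
    return W2 @ z + b2

def mpc_cost(u_seq, state, ref_traj, weights, du_penalty=0.1):
    """Tracking cost over the prediction horizon plus a move-suppression term."""
    cost, s = 0.0, state.copy()
    for k, u in enumerate(u_seq):
        s = nn_predict(s, u, *weights)
        cost += (s[0] - ref_traj[k]) ** 2
    cost += du_penalty * np.sum(np.diff(u_seq) ** 2)
    return cost

# Toy dimensions: 2 states, 6 hidden units, horizon of 5 steps.
rng = np.random.default_rng(0)
weights = (rng.standard_normal((6, 3)), rng.standard_normal(6),
           rng.standard_normal((2, 6)), rng.standard_normal(2))
state, ref = np.array([0.2, 0.0]), np.full(5, 1.1)       # target supersaturation level
res = minimize(mpc_cost, x0=np.full(5, 0.5), args=(state, ref, weights),
               bounds=[(0.0, 1.0)] * 5, method="L-BFGS-B")
print("first control move applied to the plant:", res.x[0])   # receding horizon
```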

Keywords: Feed forward neural network, process modelling, model predictive control, crystallization process.

3985 Stability Criteria for Uncertain Markovian Jumping Parameters of BAM Neural Networks with Leakage and Discrete Delays

Authors: Qingqing Wang, Baocheng Chen, Shouming Zhong

Abstract:

In this paper, the problem of stability criteria for Markovian jumping BAM neural networks with leakage and discrete delays has been investigated. Some new sufficient conditions are derived based on a novel Lyapunov-Krasovskii functional approach. These new criteria, based on the delay partitioning idea, are proved to be less conservative because the free-weighting matrices method and a convex optimization approach are considered. Finally, one numerical example is given to illustrate the usefulness and feasibility of the proposed main results.

Keywords: Stability, Markovian jumping neural networks, Time-varying delays, Linear matrix inequality.

3984 Artificial Neural Networks for Identification and Control of a Lab-Scale Distillation Column Using LABVIEW

Authors: J. Fernandez de Canete, S. Gonzalez-Perez, P. del Saz-Orozco

Abstract:

LABVIEW is a graphical programming language that has its roots in automation control and data acquisition. In this paper we have utilized this platform to provide a powerful toolset for process identification and control of nonlinear systems based on artificial neural networks (ANN). This tool has been applied to the monitoring and control of a lab-scale distillation column DELTALAB DC-SP. The proposed control scheme offers a high speed of response for changes in set points and null stationary error for dual composition control, and shows robustness in the presence of externally imposed disturbances.

Keywords: Distillation, neural networks, LABVIEW, monitoring, identification, control.

3983 Delay-Dependent Stability Analysis for Neutral Type Neural Networks with Uncertain Parameters and Time-Varying Delay

Authors: Qingqing Wang, Shouming Zhong

Abstract:

In this paper, delay-dependent stability analysis for neutral type neural networks with uncertain parameters and time-varying delay is studied. By constructing a new Lyapunov-Krasovskii functional and dividing the delay interval into multiple segments, a novel sufficient condition is established to guarantee the global asymptotic stability of the considered system. Finally, a numerical example is provided to illustrate the usefulness of the proposed main results.

Keywords: Neutral type neural networks, Time-varying delay, Stability, Linear matrix inequality(LMI).

3982 Stability Analysis of Impulsive BAM Fuzzy Cellular Neural Networks with Distributed Delays and Reaction-diffusion Terms

Authors: Xinhua Zhang, Kelin Li

Abstract:

In this paper, a class of impulsive BAM fuzzy cellular neural networks with distributed delays and reaction-diffusion terms is formulated and investigated. By employing the delay differential inequality and inequality technique developed by Xu et al., some sufficient conditions ensuring the existence, uniqueness and global exponential stability of equilibrium point for impulsive BAM fuzzy cellular neural networks with distributed delays and reaction-diffusion terms are obtained. In particular, the estimate of the exponential convergence rate is also provided, which depends on system parameters, diffusion effect and impulsive disturbance intensity. It is believed that these results are significant and useful for the design and applications of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.

Keywords: Bi-directional associative memory, fuzzy cellular neural networks, reaction-diffusion, delays, impulses, global exponential stability.

3981 A New Self-Adaptive EP Approach for ANN Weights Training

Authors: Kristina Davoian, Wolfram-M. Lippe

Abstract:

Evolutionary Programming (EP) represents a methodology of Evolutionary Algorithms (EA) in which mutation is considered the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: the self-adaptive component, which contains phenotype information, and the dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of a predefined ANN architecture and the fitness of a particular chromosome. The comparative analysis of the proposed approach and the classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.

Keywords: Artificial Neural Networks (ANN), Learning Theory, Evolutionary Programming (EP), Mutation, Self-Adaptation.

3980 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network which consists of complex-valued input and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations required for error convergence, on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

3979 Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise

Authors: Yasser F. Hassan

Abstract:

The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for automatic inspection and analysis. This paper describes a modeling system based on the rough neural network model that adapts cellular automata rules for various image processing tasks and for noise removal. We consider the problem of object processing in colored images, using rough neural networks to help derive the rules which will be used by the cellular automata on noisy images. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable to or better than established specialized algorithms.

Keywords: Rough Sets, Rough Neural Networks, Cellular Automata, Image Processing.

3978 Limit Cycle Behaviour of a Neural Controller with Delayed Bang-Bang Feedback

Authors: Travis Wiens, Greg Schoenau, Rich Burton

Abstract:

It is well known that a linear dynamic system including a delay will exhibit limit cycle oscillations when a bang-bang sensor is used in the feedback loop of a PID controller. A similar behaviour occurs when a delayed feedback signal is used to train a neural network. This paper develops a method of predicting this behaviour by linearizing the system, which can be shown to behave in a manner similar to an integral controller. Using this procedure, it is possible to predict the characteristics of the neural network driven limit cycle to varying degrees of accuracy, depending on the information known about the system. An application is also presented: the intelligent control of a spark ignition engine.

Keywords: Control and automation, artificial neural networks, limit cycle

3977 Avoiding Catastrophic Forgetting by a Dual-Network Memory Model Using a Chaotic Neural Network

Authors: Motonobu Hattori

Abstract:

In neural networks, when new patterns are learned by a network, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. In this paper, we propose a biologically inspired neural network model which overcomes this problem. The proposed model consists of two distinct networks: one is a Hopfield type of chaotic associative memory and the other is a multilayer neural network. We consider that these networks correspond to the hippocampus and the neocortex of the brain, respectively. Information is first stored in the hippocampal network with a fast learning algorithm. Then the stored information is recalled by the chaotic behavior of each neuron in the hippocampal network. Finally, it is consolidated in the neocortical network by using pseudopatterns. Computer simulation results show that the proposed model has a much better ability to avoid catastrophic forgetting in comparison with conventional models.
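
The consolidation step can be illustrated with a small, hypothetical sketch: random probes are passed through the fast "hippocampal" store to generate pseudopatterns, and the slow "neocortical" network is then trained on new items together with those pseudopatterns. A deterministic Hopfield-style recall stands in here for the chaotic associative memory of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_old, n_pseudo = 50, 5, 200

# Fast store: Hebbian (Hopfield-style) weights for previously learned bipolar patterns.
old_patterns = rng.choice([-1, 1], size=(n_old, dim))
W_fast = old_patterns.T @ old_patterns / dim
np.fill_diagonal(W_fast, 0)

def recall(probe, steps=5):
    s = probe.copy().astype(float)
    for _ in range(steps):                 # deterministic recall stands in for chaotic dynamics
        s = np.sign(W_fast @ s + 1e-9)
    return s

# Pseudopatterns: random probes plus whatever the fast store retrieves for them.
probes = rng.choice([-1, 1], size=(n_pseudo, dim))
pseudo_targets = np.array([recall(p) for p in probes])

# The slow network would then be trained on new patterns *and* these pseudopatterns,
# so old knowledge is rehearsed without replaying the raw stored data.
training_inputs, training_targets = probes, pseudo_targets
print(training_inputs.shape, training_targets.shape)
```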

Keywords: catastrophic forgetting, chaotic neural network, complementary learning systems, dual-network

3976 Support Vector Fuzzy Based Neural Networks For Exchange Rate Modeling

Authors: Prof. Chokri SLIM

Abstract:

A novel fuzzy neural network combined with a support vector learning mechanism, called the support-vector-based fuzzy neural network (SVBFNN), is proposed. The SVBFNN combines the capability of minimizing the empirical risk (training error) and the expected risk (testing error) of support vector learning in high-dimensional data spaces with the efficient human-like reasoning of the FNN.

Keywords: Neural network, fuzzy inference, machine learning, fuzzy modeling and rule extraction, support vector regression.

3975 Algorithm and Software Based on Multilayer Perceptron Neural Networks for Estimating Channel Use in the Spectral Decision Stage in Cognitive Radio Networks

Authors: Danilo López, Johana Hernández, Edwin Rivas

Abstract:

The use of the Multilayer Perceptron Neural Network (MLPNN) technique is presented to estimate the future state of use of a licensed channel by primary users (PUs); this is useful at the spectral decision stage in cognitive radio networks (CRN) to determine approximately in which future time instants secondary users (SUs) may opportunistically use the spectral bandwidth to send data through the primary wireless network. To validate the results, sequences of channel occupancy data were generated by simulation. The results show that the prediction percentage is greater than 60% in some of the tests carried out.
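
As a rough sketch of the prediction stage (with an invented occupancy process and window length, not the simulation data of the paper), an MLP can be trained to map a window of past channel states to the next state:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Simulated PU occupancy sequence (1 = channel busy), with some temporal structure.
occupancy = ((np.sin(np.arange(3000) / 7.0) + 0.4 * rng.standard_normal(3000)) > 0).astype(int)

window = 10                                   # past slots fed to the MLP
X = np.array([occupancy[i:i + window] for i in range(len(occupancy) - window)])
y = occupancy[window:]                        # next-slot state to predict

split = 2500
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X[:split], y[:split])
print("prediction accuracy on held-out slots:", clf.score(X[split:], y[split:]))
```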

Keywords: Cognitive radio, neural network, prediction, primary user.

3974 Improved Stability Criteria for Neural Networks with Two Additive Time-Varying Delays

Authors: Miaomiao Yang, Shouming Zhong

Abstract:

This paper studies the problem of stability criteria for neural networks with two additive time-varying delays. A new Lyapunov-Krasovskii functional is constructed and some new delay-dependent stability criteria are derived in terms of linear matrix inequalities (LMIs), zero equalities and a reciprocally convex approach. The stability criteria proposed in this paper are simpler and effective. Finally, numerical examples are provided to demonstrate the feasibility and effectiveness of our results.

Keywords: Stability, Neural networks, Linear Matrix Inequalities (LMI), Lyapunov function, Time-varying delays.

3973 Application of Feed-Forward Neural Networks Autoregressive Models in Gross Domestic Product Prediction

Authors: Ε. Giovanis

Abstract:

In this paper we present an autoregressive model with neural network modeling and standard error back-propagation training optimization in order to predict the gross domestic product (GDP) growth rate of four countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, where the initial inputs are multiplied by the neural network's final optimum input-to-hidden-layer weights obtained after the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model in the out-of-sample period. The idea behind this approach is to propose a parametric regression with weighted variables in order to test the statistical significance and the magnitude of the estimated autoregressive coefficients and simultaneously to estimate the forecasts.
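
One plausible reading of the weighting step is sketched below under explicit assumptions: an autoregressive NN is trained, each lag is then scaled by the aggregate magnitude of its trained input-to-hidden weights (the aggregation rule is our assumption, not necessarily the authors'), and a parametric regression is fitted on the weighted lags so that coefficient significance can be tested. The GDP series is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
import statsmodels.api as sm

rng = np.random.default_rng(0)
g = np.zeros(200)                                  # synthetic growth-rate series (illustrative only)
for t in range(2, 200):
    g[t] = 0.5 * g[t - 1] - 0.2 * g[t - 2] + 0.3 * rng.standard_normal()

p = 2                                              # AR order
X = np.column_stack([g[p - (k + 1): len(g) - (k + 1)] for k in range(p)])  # lagged regressors
y = g[p:]

nn = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0).fit(X, y)
# Assumed aggregation: scale each lag by the summed magnitude of its input-to-hidden weights.
input_weights = np.abs(nn.coefs_[0]).sum(axis=1)
X_weighted = X * input_weights

ols = sm.OLS(y, sm.add_constant(X_weighted)).fit()  # parametric AR on the weighted lags
print(ols.params)                                   # estimated coefficients
print(ols.pvalues)                                  # significance tests for the weighted lags
```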

Keywords: Autoregressive model, Error back-propagation feed-forward neural networks, Gross Domestic Product.

3972 New Approaches on Stability Analysis for Neural Networks with Time-Varying Delay

Authors: Qingqing Wang, Shouming Zhong

Abstract:

Utilizing the Lyapunov functional method and combining linear matrix inequality (LMI) techniques and the integral inequality approach (IIA) to analyze the global asymptotic stability of delayed neural networks (DNNs), a new sufficient criterion ensuring the global stability of DNNs is obtained. The criteria are formulated in terms of a set of linear matrix inequalities, which can be checked efficiently by use of some standard numerical packages. In order to show that the stability condition in this paper gives much less conservative results than those in the literature, numerical examples are considered.

Keywords: Neural networks, Global asymptotic stability, LMI approach, IIA approach, Time-varying delay.

3971 Anti-periodic Solutions for Cohen-Grossberg Shunting Inhibitory Neural Networks with Delays

Authors: Yongkun Li, Tianwei Zhang, Shufa Bai

Abstract:

By using the method of coincidence degree theory and constructing a suitable Lyapunov functional, several sufficient conditions are established for the existence and global exponential stability of anti-periodic solutions for Cohen-Grossberg shunting inhibitory neural networks with delays. An example is given to illustrate the feasibility of our results.

Keywords: Anti-periodic solution, coincidence degree, global exponential stability, Cohen-Grossberg shunting inhibitory cellular neural networks.

3970 Comparative Analysis of Sigmoidal Feedforward Artificial Neural Networks and Radial Basis Function Networks Approach for Localization in Wireless Sensor Networks

Authors: Ashish Payal, C. S. Rai, B. V. R. Reddy

Abstract:

With the increasing use and application of Wireless Sensor Networks (WSN), the need has arisen to explore them in a more effective and efficient manner. An important area which can bring efficiency to WSNs is the localization process, which refers to the estimation of the position of wireless sensor nodes in an ad hoc network setting, in reference to a coordinate system that may be internal or external to the network. In this paper, we have compared and analysed Sigmoidal Feedforward Artificial Neural Networks (SFFANNs) and Radial Basis Function (RBF) networks for developing a localization framework in WSNs. The presented work utilizes the Received Signal Strength Indicator (RSSI), measured by a static node on a 100 x 100 m2 grid from three anchor nodes. The comprehensive evaluation of these approaches is done using MATLAB software. The simulation results demonstrate that SFFANN-based sensor motes show better localization accuracy compared to RBF networks.
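
A compact sketch of the localization framework, with an assumed log-distance path-loss model generating the RSSI values (the abstract does not state how RSSI is modelled) and invented anchor positions and noise levels:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 100.0]])    # three anchor nodes on the grid

def rssi(node, anchor, p0=-40.0, n=2.5):
    """Assumed log-distance path-loss model with shadowing noise (dBm)."""
    d = np.linalg.norm(node - anchor) + 1e-3
    return p0 - 10 * n * np.log10(d) + rng.normal(0, 2.0)

nodes = rng.uniform(0, 100, size=(2000, 2))                      # static nodes on the 100 x 100 m grid
X = np.array([[rssi(p, a) for a in anchors] for p in nodes])     # RSSI feature vector per node
y = nodes                                                        # targets: true (x, y) coordinates

model = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
err = np.linalg.norm(model.predict(X[1500:]) - y[1500:], axis=1)
print("mean localization error (m):", err.mean())
```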

Keywords: Localization, wireless sensor networks, artificial neural network, radial basis function, multi-layer perceptron, backpropagation, RSSI.

3969 Optimization of Agricultural Water Demand Using a Hybrid Model of Dynamic Programming and Neural Networks: A Case Study of Algeria

Authors: M. Boudjerda, B. Touaibia, M. K. Mihoubi

Abstract:

In Algeria, agricultural irrigation is the primary water-consuming sector, followed by the domestic and industrial sectors. Economic development in the last decade has weighed heavily on water resources, which are relatively limited and gradually decreasing, to the detriment of agriculture. The research presented in this paper focuses on the optimization of irrigation water demand. The Dynamic Programming-Neural Network (DPNN) method is applied to investigate reservoir optimization. The optimal operation rule is formulated to minimize the gap between water release and irrigation water demand. As a case study, the Foum El-Gherza dam's reservoir system in the south of Algeria has been selected to examine our proposed optimization model. The application of the DPNN method allowed increasing the satisfaction rate (SR) from 12.32% to 55%. In addition, the operation rule generated showed more reliable and resilient operation for the examined case study.

Keywords: Water management, agricultural demand, dam and reservoir operation, Foum el-Gherza dam, dynamic programming, artificial neural network.

3968 Presentation of a Mix Algorithm for Estimating the Battery State of Charge Using Kalman Filter and Neural Networks

Authors: Amin Sedighfar, M. R. Moniri

Abstract:

Determination of the state of charge (SOC) has become an increasingly important issue in all applications that include a battery. In fact, estimation of the SOC is a fundamental need for the battery, which is the most important energy storage in Hybrid Electric Vehicles (HEVs), smart grid systems, drones, UPS and so on. Regarding those applications, the SOC estimation algorithm is expected to be precise and easy to implement. This paper presents an online method for the estimation of the SOC of Valve-Regulated Lead Acid (VRLA) batteries. The proposed method uses the well-known Kalman Filter (KF) and Neural Networks (NNs), and all of the simulations have been done with MATLAB software. The NN is trained offline using the data collected from the battery discharging process. A generic cell model is used, and the underlying dynamic behavior of the model uses two capacitors (bulk and surface) and three resistors (terminal, surface, and end), where the SOC determined from the voltage represents the bulk capacitor. The aim of this work is to compare the performance of conventional integration-based SOC estimation methods with the mixed algorithm. Moreover, by including the effect of temperature, the final result becomes more accurate.
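
A deliberately simplified sketch of the filtering part: a one-state Kalman filter combining coulomb counting with a linearised voltage measurement. The paper's cell model has two capacitors and three resistors and its NN/KF blending is richer than shown here; all battery parameters below are invented.

```python
import numpy as np

Q_cap = 3600.0 * 20          # capacity in coulombs (assumed 20 Ah cell)
dt, R_int = 1.0, 0.05        # sample time (s), internal resistance (ohm)
a, b = 1.2, 3.2              # assumed linearised OCV(SOC) = a*SOC + b (volts)

def kf_soc(voltage, current, soc0=0.9):
    soc, P = soc0, 0.01
    Qn, Rn = 1e-7, 1e-3                       # process / measurement noise variances
    estimates = []
    for v, i in zip(voltage, current):
        soc = soc - i * dt / Q_cap            # predict: coulomb counting
        P += Qn
        v_pred = a * soc + b - i * R_int      # voltage measurement model
        K = P * a / (a * P * a + Rn)          # Kalman gain (scalar system)
        soc += K * (v - v_pred)               # correct with measured terminal voltage
        P = (1 - K * a) * P
        estimates.append(soc)
    return np.array(estimates)

# In the mixed scheme, an offline-trained NN mapping (voltage, current, temperature)
# to SOC would supply an additional estimate blended with this KF output.
current = np.full(3600, 10.0)                               # constant 10 A discharge
true_soc = 0.9 - np.cumsum(current) * dt / Q_cap
voltage = a * true_soc + b - current * R_int + np.random.default_rng(0).normal(0, 0.01, 3600)
print("final SOC estimate:", kf_soc(voltage, current)[-1], "true:", true_soc[-1])
```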

Keywords: Kalman filter, neural networks, state-of-charge, VRLA battery.

3967 Representation of Power System for Electromagnetic Transient Calculation

Authors: P. Sowa

Abstract:

A new idea for the analysis of power system failures with the use of artificial neural networks is proposed. An analysis of the possibility of simulating phenomena accompanying system faults and restitution is described. It is indicated that a universal model for the simulation of phenomena over the whole analyzed range does not exist. The main classical methods for the search of an optimal structure and for parameter identification are described briefly. An example with calculation results is shown.

Keywords: Dynamic equivalents, Network reduction, Neural networks, Power system analysis.

3966 The Application of an Ensemble of Boosted Elman Networks to Time Series Prediction: A Benchmark Study

Authors: Chee Peng Lim, Wei Yee Goh

Abstract:

In this paper, the application of multiple Elman neural networks to time series data regression problems is studied. An ensemble of Elman networks is formed by boosting to enhance the performance of the individual networks. A modified version of the AdaBoost algorithm is employed to integrate the predictions from multiple networks. Two benchmark time series data sets, i.e., the Sunspot and Box-Jenkins gas furnace problems, are used to assess the effectiveness of the proposed system. The simulation results reveal that an ensemble of boosted Elman networks can achieve a higher degree of generalization and better performance than the individual networks. The results are compared with those from other learning systems, and implications of the performance are discussed.
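
The integration step can be sketched in the AdaBoost.R2 style, one common regression variant whose exact relation to the paper's modified AdaBoost is an assumption: each trained member receives a weight log(1/beta) from its training-stage error ratio, and the ensemble output is the weighted median of the member predictions. The Elman members themselves are assumed to be already trained.

```python
import numpy as np

def weighted_median_prediction(member_preds, betas):
    """Combine ensemble member predictions AdaBoost.R2-style.
    member_preds: array (n_members, n_samples); betas: per-member error ratios in (0, 1)."""
    weights = np.log(1.0 / np.asarray(betas))          # confident members get larger weights
    order = np.argsort(member_preds, axis=0)           # sort member outputs per sample
    sorted_w = weights[order]
    cdf = np.cumsum(sorted_w, axis=0) / sorted_w.sum(axis=0)
    idx = (cdf >= 0.5).argmax(axis=0)                  # first member crossing half the weight mass
    sorted_preds = np.take_along_axis(member_preds, order, axis=0)
    return sorted_preds[idx, np.arange(member_preds.shape[1])]

# Toy usage: three "Elman" members predicting a 5-point series with different biases.
preds = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                  [1.2, 2.1, 2.9, 4.2, 5.1],
                  [0.5, 1.5, 2.5, 3.5, 4.5]])
print(weighted_median_prediction(preds, betas=[0.2, 0.3, 0.6]))
```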

Keywords: AdaBoost, Elman network, neural network ensemble, time series regression.

3965 A New Face Recognition Method using PCA, LDA and Neural Network

Authors: A. Hossein Sahoolizadeh, B. Zargham Heidari, C. Hamid Dehghani

Abstract:

In this paper, a new face recognition method based on PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and neural networks is proposed. This method consists of four steps: i) preprocessing, ii) dimension reduction using PCA, iii) feature extraction using LDA and iv) classification using a neural network. The combination of PCA and LDA is used to improve the capability of LDA when only a few samples of images are available, and the neural classifier is used to reduce the number of misclassifications caused by non-linearly separable classes. The proposed method was tested on the Yale face database. Experimental results on this database demonstrated the effectiveness of the proposed method for face recognition, with less misclassification in comparison with previous methods.
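
A compact scikit-learn pipeline (on a stand-in digits dataset rather than the Yale face database, and with assumed component counts) illustrates the four-stage structure: preprocessing, PCA dimension reduction, LDA feature extraction and neural-network classification.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)                 # stand-in for face images
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = make_pipeline(
    StandardScaler(),                               # i)   preprocessing
    PCA(n_components=40),                           # ii)  dimension reduction
    LinearDiscriminantAnalysis(n_components=9),     # iii) discriminative feature extraction
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),  # iv) classification
)
pipe.fit(X_tr, y_tr)
print("test accuracy:", pipe.score(X_te, y_te))
```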

Keywords: Face recognition, Principal component analysis, Linear discriminant analysis, Neural networks.

3964 Handwriting Velocity Modeling by Artificial Neural Networks

Authors: Mohamed Aymen Slim, Afef Abdelkrim, Mohamed Benrejeb

Abstract:

Handwriting is a physical demonstration of a complex cognitive process learnt by man since childhood. People with disabilities or suffering from various neurological diseases face many difficulties resulting from problems located in the muscle stimuli (EMG) or in the signals from the brain (EEG), which arise at the writing stage. The handwriting velocity of the same writer or of different writers varies according to different criteria: age, attitude, mood, writing surface, etc. Therefore, it is interesting to build an experimental database of recordings taking, as primary reference, the writing speed of different writers, which would allow studying the global system during the handwriting process. This paper deals with a new approach to handwriting system modeling based on the velocity criterion, through the concepts of artificial neural networks, precisely Radial Basis Function (RBF) neural networks. The obtained simulation results show a satisfactory agreement between the responses of the developed neural model and the experimental data for various letters and forms, and hence the efficiency of the proposed approach.

Keywords: ElectroMyoGraphic (EMG) signals, Experimental approach, Handwriting process, Radial Basis Functions (RBF) neural networks, Velocity Modeling.

3963 Pseudo-almost Periodic Solutions of a Class of Delayed Chaotic Neural Networks

Authors: Farouk Cherif

Abstract:

This paper is concerned with the existence and uniqueness of pseudo-almost periodic solutions to chaotic delayed neural networks of the form $\dot{x}(t) = -Dx(t) + Af(x(t)) + Bf(x(t-r)) + C\int_{t-\sigma}^{t} f(x(p))\,dp + J(t)$. Under some suitable assumptions on A, B, C, D, J and f, the existence and uniqueness of a pseudo-almost periodic solution to the above equation is obtained. The results of this paper are new and they complement previously known results.

Keywords: Chaotic neural network, Hamiltonian systems, Pseudo almost periodic.

3962 High Impedance Fault Detection using LVQ Neural Networks

Authors: Abhishek Bansal, G. N. Pillai

Abstract:

This paper presents a new method to detect high impedance faults in radial distribution systems. Magnitudes of third and fifth harmonic components of voltages and currents are used as a feature vector for fault discrimination. The proposed methodology uses a learning vector quantization (LVQ) neural network as a classifier for identifying high impedance arc-type faults. The network learns from the data obtained from simulation of a simple radial system under different fault and system conditions. Compared to a feed-forward neural network, a properly tuned LVQ network gives quicker response.
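
A minimal LVQ1 sketch with harmonic-magnitude feature vectors; the feature statistics, prototype counts and learning parameters are invented stand-ins, not the paper's simulated distribution data or tuning.

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class=3, lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1: prototypes move toward same-class samples, away from others."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.append(X[idx]); labels += [c] * n_proto_per_class
    protos, labels = np.vstack(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))   # winning prototype
            sign = 1.0 if labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, labels

def predict(protos, labels, X):
    return labels[np.argmin(np.linalg.norm(protos[None] - X[:, None], axis=2), axis=1)]

# Feature vector per event: |V3|, |V5|, |I3|, |I5| harmonic magnitudes (synthetic stand-ins).
rng = np.random.default_rng(1)
normal = rng.normal([0.01, 0.005, 0.02, 0.01], 0.005, size=(200, 4))
hif    = rng.normal([0.06, 0.040, 0.09, 0.05], 0.010, size=(200, 4))   # high-impedance faults
X = np.vstack([normal, hif]); y = np.array([0] * 200 + [1] * 200)
protos, labels = train_lvq1(X, y)
print("training accuracy:", (predict(protos, labels, X) == y).mean())
```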

Keywords: Fault identification, distribution networks, high impedance arc-faults, feature vector, LVQ networks.

3961 Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks

Authors: Konstantinos Perifanos, Eirini Florou, Dionysis Goutsos

Abstract:

This paper presents and benchmarks a number of end-to-end Deep Learning based models for metaphor detection in Greek. We combine Convolutional Neural Networks and Recurrent Neural Networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving on the previous state-of-the-art results, which had already achieved an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering or linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks.
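
A bare-bones bidirectional LSTM classifier in tf.keras gives the flavour of one of the benchmarked architecture families; the vocabulary size, sequence length, sentence-level framing and toy data are assumptions rather than the authors' setup.

```python
import numpy as np
import tensorflow as tf

vocab_size, max_len = 20000, 40          # assumed vocabulary and padded sentence length

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, 128),                  # learned word representations
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),     # bidirectional context encoder
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),              # metaphorical vs. literal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Toy stand-in data: integer-encoded sentences and binary labels.
X = np.random.randint(1, vocab_size, size=(256, max_len))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```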

Keywords: Metaphor detection, deep learning, representation learning, embeddings.
