Search results for: recurrent neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3079

3049 Spline Basis Neural Network Algorithm for Numerical Integration

Authors: Lina Yan, Jingjing Di, Ke Wang

Abstract:

A new basis function neural network algorithm is proposed for numerical integration. The main idea is to construct a neural network model based on spline basis functions, which is used to approximate the integrand by training the neural network weights. The convergence theorem of the neural network algorithm, the theorem for numerical integration and one corollary are presented and proved. Numerical examples, compared with other methods, show that the algorithm is effective, offering high precision without requiring an explicit expression for the integrand. Thus, the algorithm presented in this paper can be widely applied in many engineering fields.
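
As a rough illustration of the spline-basis idea (not the authors' exact training algorithm), the sketch below fits a cubic spline basis expansion to samples of the integrand and integrates the fitted expansion; the test integrand, the sample grid and the use of scipy's interpolating spline are illustrative assumptions.

```python
# Minimal sketch: approximate an integrand from samples with a cubic spline
# basis expansion, then integrate the expansion. The integrand, sample grid
# and use of scipy are illustrative assumptions, not the paper's exact
# weight-training procedure.
import numpy as np
from scipy.interpolate import make_interp_spline

f = np.sin                       # integrand, known here only through samples
x = np.linspace(0.0, np.pi, 21)  # sample points (assumed)
y = f(x)

spline = make_interp_spline(x, y, k=3)   # coefficients of the spline basis
estimate = spline.integrate(0.0, np.pi)  # integral of the basis expansion
print(estimate, "vs exact", 2.0)
```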

Keywords: Numerical integration, Spline basis function, Neural network algorithm

3048 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

Authors: Mousavian, M. Moghimi Mofrad, M. H. Vakili, D. Ashouri, R. Alizadeh

Abstract:

The aim of this research is to use artificial neural network computing technology for estimating the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator that uses back-propagation as the learning algorithm over a predefined range of analytically generated well-test responses. A network with 8 neurons in one hidden layer was selected, and its predictions are in good agreement with the experimental data.
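
A minimal sketch of this kind of setup, assuming a back-propagation MLP with one hidden layer of 8 neurons as stated above; the feature matrix, target values and use of scikit-learn are placeholders, not the authors' dataset or tooling.

```python
# Sketch: back-propagation MLP with one hidden layer of 8 neurons.
# X (crude-oil properties) and y (NHV) are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # e.g. density, sulfur, viscosity, API (assumed)
y = X @ np.array([1.0, -0.5, 0.3, 0.8]) + rng.normal(scale=0.1, size=200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), solver="adam",
                 max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```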

Keywords: Neural Network, Net Heating Value, Crude Oil, Experimental, Modeling.

3047 Avoiding Catastrophic Forgetting by a Dual-Network Memory Model Using a Chaotic Neural Network

Authors: Motonobu Hattori

Abstract:

In neural networks, when new patterns are learned by a network, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. In this paper, we propose a biologically inspired neural network model which overcomes this problem. The proposed model consists of two distinct networks: one is a Hopfield-type chaotic associative memory and the other is a multilayer neural network. We consider these networks to correspond to the hippocampus and the neocortex of the brain, respectively. The given information is first stored in the hippocampal network with a fast learning algorithm. Then the stored information is recalled through the chaotic behavior of each neuron in the hippocampal network. Finally, it is consolidated in the neocortical network by using pseudopatterns. Computer simulation results show that the proposed model has a much better ability to avoid catastrophic forgetting than conventional models.

Keywords: catastrophic forgetting, chaotic neural network, complementary learning systems, dual-network

3046 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, existing health indicators have two drawbacks: (1) the different ranges of the statistical features contribute differently to the construction of the health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are used to extract time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images using the continuous wavelet transform to form the original feature sets. Second, the convolutional and pooling layers of the convolutional neural network select the most sensitive features of the time-frequency images from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that use the same bearing dataset provided by PRONOSTIA.
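
A minimal PyTorch sketch of the CNN + GRU health-indicator idea described above: a small CNN encodes each CWT time-frequency image, a GRU consumes the sequence of encodings, and a linear head outputs a scalar health indicator. The layer sizes, image size and sequence length are assumptions, not the paper's configuration.

```python
# Sketch of a CNN + GRU health-indicator network; all sizes are assumptions.
import torch
import torch.nn as nn

class CnnGruHealthIndicator(nn.Module):
    def __init__(self, cnn_channels=8, gru_hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.gru = nn.GRU(cnn_channels * 16, gru_hidden, batch_first=True)
        self.head = nn.Linear(gru_hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))  # encode every time-frequency image
        feats = feats.flatten(1).view(b, t, -1)
        out, _ = self.gru(feats)           # model the temporal order
        return self.head(out[:, -1])       # one health indicator per sequence

hi = CnnGruHealthIndicator()(torch.randn(2, 5, 1, 64, 64))
print(hi.shape)                            # torch.Size([2, 1])
```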

Keywords: Continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life.

3045 Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

Authors: Khaing Win Mar, Thinn Thu Naing

Abstract:

Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used in the model. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by varying the number of input and hidden nodes from 1 to 10 each, with a single output node. The optimum model is selected according to the minimum forecast error. Measuring network performance by the Root Mean Square Error (RMSE), the experimental results show that a 3 inputs-10 hidden-1 output architecture gives the best prediction of monthly precipitation in Myanmar.
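
A compact sketch of the architecture search described above, assuming the input nodes are lagged monthly values; the synthetic series, data split and use of scikit-learn are placeholders for the Myanmar precipitation data and the authors' setup.

```python
# Sketch: vary input and hidden nodes from 1 to 10 and keep the model with
# the lowest validation RMSE. The synthetic monthly series is a placeholder.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
series = 50 + 30 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 5, 240)

def lagged(series, n_inputs):
    X = np.column_stack([series[i:len(series) - n_inputs + i] for i in range(n_inputs)])
    return X, series[n_inputs:]

best = (np.inf, None)
for n_in in range(1, 11):
    X, y = lagged(series, n_in)
    split = int(0.8 * len(y))
    for n_hid in range(1, 11):
        mlp = MLPRegressor(hidden_layer_sizes=(n_hid,), max_iter=3000, random_state=0)
        mlp.fit(X[:split], y[:split])
        rmse = np.sqrt(mean_squared_error(y[split:], mlp.predict(X[split:])))
        if rmse < best[0]:
            best = (rmse, (n_in, n_hid))
print("best RMSE %.2f with architecture %s" % best)
```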

Keywords: Precipitation prediction, monthly precipitation, neural network models, Myanmar.

3044 Improved Exponential Stability Analysis for Delayed Recurrent Neural Networks

Authors: Miaomiao Yang, Shouming Zhong

Abstract:

This paper studies the problem of exponential stability analysis for recurrent neural networks with time-varying delay. By establishing a suitable augmented Lyapunov-Krasovskii functional, a novel sufficient condition is obtained that guarantees the exponential stability of the considered system. In order to make the condition less conservative, zero equalities and the reciprocally convex approach are employed. The exponential stability criteria proposed in this paper are simpler and effective. A numerical example is provided to demonstrate the feasibility and effectiveness of our results.
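
For context, a delayed recurrent neural network model of the kind typically analysed in this line of work, together with the usual notion of exponential stability, is sketched below; the exact system and assumptions of the paper may differ.

```latex
% Typical delayed recurrent neural network model (illustrative assumption,
% not necessarily the exact system studied in the paper):
\dot{x}(t) = -A\,x(t) + W_0\, g\bigl(x(t)\bigr) + W_1\, g\bigl(x(t-\tau(t))\bigr) + u,
\qquad 0 \le \tau(t) \le \bar{\tau}, \quad \dot{\tau}(t) \le \mu .

% Exponential stability of an equilibrium x^{*}: there exist k > 0 and
% \beta \ge 1 such that, for all t \ge 0,
\lVert x(t) - x^{*} \rVert \;\le\; \beta\, e^{-k t}
\sup_{-\bar{\tau} \le s \le 0} \lVert x(s) - x^{*} \rVert .
```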

Keywords: Exponential stability, Neural networks, Linear matrix inequality, Lyapunov-Krasovskii, Time-varying delay.

3043 Some Remarkable Properties of a Hopfield Neural Network with Time Delay

Authors: Kelvin Rozier, Vladimir E. Bondarenko

Abstract:

It is known that an analog Hopfield neural network with time delay can generate the outputs which are similar to the human electroencephalogram. To gain deeper insights into the mechanisms of rhythm generation by the Hopfield neural networks and to study the effects of noise on their activities, we investigated the behaviors of the networks with symmetric and asymmetric interneuron connections. The neural network under the study consists of 10 identical neurons. For symmetric (fully connected) networks all interneuron connections aij = +1; the interneuron connections for asymmetric networks form an upper triangular matrix with non-zero entries aij = +1. The behavior of the network is described by 10 differential equations, which are solved numerically. The results of simulations demonstrate some remarkable properties of a Hopfield neural network, such as linear growth of outputs, dependence of synchronization properties on the connection type, huge amplification of oscillation by the external uniform noise, and the capability of the neural network to transform one type of noise to another.
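
A rough sketch of how such a delayed Hopfield network can be simulated numerically is given below: a forward-Euler scheme with a history buffer, an all-ones matrix for the symmetric case and an upper-triangular ones matrix for the asymmetric case, as in the abstract. The activation function, delay, step size and noise level are illustrative assumptions.

```python
# Euler integration of an analog Hopfield network with time delay.
# Connection matrices follow the abstract; other settings are assumptions.
import numpy as np

def simulate(kind="symmetric", n=10, tau=1.0, dt=0.01, steps=20000, noise=0.0):
    rng = np.random.default_rng(0)
    A = np.ones((n, n)) if kind == "symmetric" else np.triu(np.ones((n, n)))
    delay_steps = int(tau / dt)
    hist = np.zeros((steps + delay_steps, n))
    hist[:delay_steps] = rng.normal(scale=0.1, size=(delay_steps, n))  # initial history
    for k in range(delay_steps, steps + delay_steps):
        x = hist[k - 1]
        x_delayed = hist[k - delay_steps]
        dx = -x + A @ np.tanh(x_delayed) + noise * rng.normal(size=n)
        hist[k] = x + dt * dx
    return hist[delay_steps:]

outputs = simulate("asymmetric", noise=0.05)
print(outputs.shape, outputs[-1].round(2))
```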

Keywords: Chaos, Hopfield neural network, noise, synchronization

3042 Development of Gas Chromatography Model: Propylene Concentration Using Neural Network

Authors: Areej Babiker Idris Babiker, Rosdiazli Ibrahim

Abstract:

Gas chromatography (GC) is the most widely used technique in analytical chemistry. However, GC has a high initial cost and requires frequent maintenance. This paper examines the feasibility and potential of using a neural network model as an alternative whenever GC is unavailable. It can also be part of system verification of GC performance for preventive maintenance activities. The paper shows the performance of a Multilayer Perceptron (MLP) with back-propagation structure. Results demonstrate that the neural network model, when trained using this structure, provides adequate results and is suitable for this purpose.

Keywords: Analyzer, Levenberg-Marquardt, Gas chromatography, Neural network

3041 Identify Features and Parameters to Devise an Accurate Intrusion Detection System Using Artificial Neural Network

Authors: Saman M. Abdulla, Najla B. Al-Dabagh, Omar Zakaria

Abstract:

The aim of this article is to explain how features of attacks can be extracted from packets. It also explains how vectors can be built and then applied to the input of any analysis stage. For the analysis, the work deploys a feed-forward back-propagation neural network acting as a misuse intrusion detection system. Ten types of attacks are used as examples for training and testing the neural network. The article explains how the packets are analyzed to extract features, and shows how selecting the right features, building correct vectors, and correctly identifying the training method and the number of nodes in the hidden layer affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the artificial neural network.

Keywords: Artificial Neural Network, Attack Features, Misuse Intrusion Detection System, Training Parameters.

3040 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex valued neural network is a neural network which consists of complex valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multilayer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex valued neural network for image recognition over a real valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations required for the error to converge, on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Radial Basis Function, Image recognition.

3039 Application of Neural Networks in Financial Data Mining

Authors: Defu Zhang, Qingshan Jiang, Xin Li

Abstract:

This paper deals with the application of a well-known neural network technique, the multilayer back-propagation (BP) neural network, to financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system can forecast buying and selling signals according to the prediction of future trends of the stock market, and provide decision support for stock investors. Simulation results over seven years of the Shanghai Composite Index show that the return achieved by this mining system is about three times that achieved by the buy-and-hold strategy, so it is advantageous to apply neural networks to forecast financial time series, and different investors could benefit from it.

Keywords: Data mining, neural network, stock forecasting.

3038 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy

Abstract:

A complex valued neural network is a neural network which consists of complex valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a generalized mean neuron model in the complex plane for signal processing over a real valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations required for the error to converge, on a generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, Generalized mean neuron model, Signal processing.

3037 Accelerating Integer Neural Networks On Low Cost DSPs

Authors: Thomas Behan, Zaiyi Liao, Lian Zhao, Chunting Yang

Abstract:

In this paper, low-end Digital Signal Processors (DSPs) are applied to accelerate integer neural networks. The use of DSPs to accelerate neural networks has been a topic of study for some time and has demonstrated significant performance improvements. Recently, work has been done on integer-only neural networks, which greatly reduce hardware requirements and thus allow for cheaper hardware implementation. DSPs with Arithmetic Logic Units (ALUs) that support floating- or fixed-point arithmetic are generally more expensive than their integer-only counterparts due to increased circuit complexity. However, if the need for floating- or fixed-point math operations can be removed, then simpler, lower-cost DSPs can be used. To achieve this, an integer-only neural network is created in this paper, which is then accelerated by using DSP instructions to improve performance.
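
To illustrate what an integer-only forward pass can look like, the sketch below uses 8-bit weights and activations, 32-bit accumulation and a right shift for rescaling, so no floating- or fixed-point unit is needed. The layer sizes, value ranges and shift amount are assumptions, not the paper's network.

```python
# Integer-only forward pass: int8 weights/activations, int32 accumulation,
# rescaling by a right shift, saturation back to int8. Sizes are assumed.
import numpy as np

def int_layer(x_int8, w_int8, shift=7):
    acc = x_int8.astype(np.int32) @ w_int8.astype(np.int32)  # 32-bit accumulate
    acc = acc >> shift                                        # rescale by 2**-shift
    return np.clip(acc, -128, 127).astype(np.int8)            # saturate to int8

rng = np.random.default_rng(0)
x  = rng.integers(-64, 64, size=(1, 16), dtype=np.int8)
w1 = rng.integers(-64, 64, size=(16, 8), dtype=np.int8)
w2 = rng.integers(-64, 64, size=(8, 4), dtype=np.int8)

hidden = np.maximum(int_layer(x, w1), 0)   # integer ReLU
output = int_layer(hidden, w2)
print(output)
```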

Keywords: Digital Signal Processor (DSP), Integer Neural Network (INN), Low Cost Neural Network, Integer Neural Network DSP Implementation.

3036 Bayesian Deep Learning Algorithms for Classifying COVID-19 Images

Authors: I. Oloyede

Abstract:

The study investigates the accuracy and loss of deep learning algorithms on a set of coronavirus (COVID-19) X-ray images by comparing a Bayesian convolutional neural network and a traditional convolutional neural network on a low-dimensional dataset. Of the 50 X-ray images, 25 were COVID-19 and the remaining 20 were normal; twenty images were used for training while five were used for validation to ascertain the accuracy of the model. The study found that the Bayesian convolutional neural network outperformed the conventional neural network on this low-dimensional dataset, which could otherwise have exhibited underfitting. The study therefore recommends the Bayesian Convolutional Neural Network (BCNN) for computer-vision image detection in Android apps.

Keywords: BCNN, CNN, Images, COVID-19, Deep Learning.

3035 A Practical Approach for Electricity Load Forecasting

Authors: T. Rashid, T. Kechadi

Abstract:

This paper is a continuation of our daily energy peak-load forecasting approach using a modified network, part of the recurrent networks family, called the feed-forward and feed-back multi-context artificial neural network (FFFB-MCANN). The inputs to the network were exogenous variables, such as the previous and current change in the weather components and the previous and current status of the day, and endogenous variables, such as past changes in the load. An endogenous variable, the current change in the load, was used as the network output. Experiments show that using both endogenous and exogenous variables as inputs to the FFFB-MCANN produces better results than using either exogenous or endogenous variables alone. Experiments also show that using the changes in variables, such as the weather components and the past load, rather than their absolute values, as inputs to the FFFB-MCANN has a dramatic impact and produces better accuracy.
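
A small sketch of the "change in variables" input construction suggested above: first differences of the weather components and of the past load are used as inputs, and the change in the current load is the target. The column names and synthetic data are placeholders, not the authors' dataset.

```python
# Delta-feature construction: differences of weather and past load as inputs,
# current change in load as target. Data and column names are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=60, freq="D")
df = pd.DataFrame({
    "temperature": 20 + rng.normal(0, 3, 60),
    "humidity": 60 + rng.normal(0, 5, 60),
    "peak_load": 1000 + rng.normal(0, 50, 60),
}, index=days)

deltas = df.diff().dropna()                       # changes, not absolute values
X = pd.concat([deltas[["temperature", "humidity"]],
               deltas["peak_load"].shift(1).rename("prev_load_change")],
              axis=1).dropna()
y = deltas.loc[X.index, "peak_load"]              # current change in the load
print(X.head(), y.head(), sep="\n\n")
```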

Keywords: Daily peak load forecasting, feed forward and feedback multi-context neural network.

3034 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

Authors: Rohitash Chandra, Christian W. Omlin

Abstract:

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. We train the hybrid architecture on a set of deterministic finite-state automata strings and observe the generalization performance of the hybrid architecture when presented with a new set of strings which were not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different sets of population sizes in the genetic algorithm; we also ran experiments to find out which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can train and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real weight values of range -15 to 15.
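
The abstract does not detail the genetic-algorithm operators, so the sketch below shows one compact way to evolve the weights of a small recurrent network on binary strings, with chromosomes initialised in the [-15, 15] range mentioned above. The toy task (even number of 1s), the network size and the mutation-only GA are assumptions, not the authors' hybrid HMM-RNN architecture.

```python
# Compact sketch: genetic-algorithm training of a small recurrent network.
# Chromosome = flattened weight vector, fitness = string classification accuracy.
import numpy as np

rng = np.random.default_rng(0)
N_HID = 4

def make_data(n=200, length=8):
    X = rng.integers(0, 2, size=(n, length))
    y = (X.sum(axis=1) % 2 == 0).astype(float)   # toy automaton: even number of 1s
    return X, y

def unpack(w):
    w_in, rest = w[:N_HID].reshape(1, N_HID), w[N_HID:]
    w_rec, rest = rest[:N_HID * N_HID].reshape(N_HID, N_HID), rest[N_HID * N_HID:]
    return w_in, w_rec, rest.reshape(N_HID, 1)

def accuracy(w, X, y):
    w_in, w_rec, w_out = unpack(w)
    h = np.zeros((X.shape[0], N_HID))
    for t in range(X.shape[1]):                   # simple Elman-style recurrence
        h = np.tanh(X[:, t:t+1] @ w_in + h @ w_rec)
    pred = (1 / (1 + np.exp(-h @ w_out)))[:, 0] > 0.5
    return np.mean(pred == (y > 0.5))

X, y = make_data()
n_weights = N_HID + N_HID * N_HID + N_HID
pop = rng.uniform(-15, 15, size=(40, n_weights))  # initial weights in [-15, 15]
for gen in range(100):
    fit = np.array([accuracy(ind, X, y) for ind in pop])
    parents = pop[np.argsort(fit)[-10:]]          # elitist selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 1.0, (30, n_weights))
    pop = np.vstack([parents, children])          # mutation-only GA
print("best accuracy:", max(accuracy(ind, X, y) for ind in pop))
```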

Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems and recurrent neural networks.

3033 Almost Periodicity in a Harvesting Lotka-Volterra Recurrent Neural Networks with Time-Varying Delays

Authors: Yongzhi Liao

Abstract:

By using the theory of exponential dichotomy and the Banach fixed point theorem, this paper is concerned with the existence and uniqueness of a positive almost periodic solution in a delayed Lotka-Volterra recurrent neural network with harvesting terms. To a certain extent, our work in this paper corrects some results from recent years. Finally, an example is given to illustrate the feasibility and effectiveness of the main result.

Keywords: positive almost periodic solution, Lotka-Volterra, neural networks, Banach fixed point theorem, harvesting

3032 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

Authors: Abdelhadi Lotfi, Abdelkader Benyettou

Abstract:

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller number consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. Performance of the network is compared against performance of standard PNN for different databases from the UCI database repository. Results show an important gain in network size and performance.
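
A minimal probabilistic neural network sketch in the spirit of the abstract is given below: every retained training sample is a hidden (pattern) neuron with a Gaussian kernel, class scores are the per-class kernel sums, and a reduced pattern layer is compared against the full one. The crude size reduction shown (keeping a random half of the pattern neurons) only stands in for the paper's cross-validation-based selection of representative samples; the dataset and kernel width are assumptions.

```python
# Minimal PNN: pattern neurons = training samples with Gaussian kernels,
# class score = per-class kernel sum. Random pruning stands in for the
# paper's cross-validation selection.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

def pnn_predict(X_train, y_train, X_query, sigma=0.5):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(y_train)
    scores = np.stack([k[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

full_acc = np.mean(pnn_predict(X_tr, y_tr, X_te) == y_te)

rng = np.random.default_rng(0)
keep = rng.choice(len(X_tr), size=len(X_tr) // 2, replace=False)
small_acc = np.mean(pnn_predict(X_tr[keep], y_tr[keep], X_te) == y_te)
print(f"full PNN: {full_acc:.3f}, reduced PNN ({len(keep)} neurons): {small_acc:.3f}")
```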

Keywords: Classification, probabilistic neural networks, network optimization, pattern recognition.

3031 A Cognitive Model for Frequency Signal Classification

Authors: Rui Antunes, Fernando V. Coito

Abstract:

This article presents the development of a neural network cognitive model for the classification and detection of different frequency signals. The basic structure of the implemented neural network was inspired by the perception process that humans generally use to visually distinguish between high and low frequency signals. It is based on the dynamic neural network concept, with delays. A special two-layer feedforward neural net structure was successfully implemented, trained and validated to achieve a minimum target error. Training confirmed that this neural net structure descends and converges to a human-perception classification solution, even when starting far from the target.

Keywords: Neural Networks, Signal Classification, Adaptive Filters, Cognitive Neuroscience

3030 Inverse Problem Methodology for the Measurement of the Electromagnetic Parameters Using MLP Neural Network

Authors: T. Hacib, M. R. Mekideche, N. Ferkha

Abstract:

This paper presents an approach based on the use of a supervised feed-forward neural network, namely a multilayer perceptron (MLP) neural network, and the finite element method (FEM) to solve the inverse problem of parameter identification. The approach is used to identify unknown parameters of ferromagnetic materials. The methodology used in this study consists in simulating a large number of parameters in a material under test using the finite element method (FEM). Both variations in relative magnetic permeability and electrical conductivity of the material under test are considered. Then, the obtained results are used to generate a set of vectors for the training of the MLP neural network. Finally, the obtained neural network is used to evaluate a group of new materials, simulated by the FEM but not belonging to the original dataset. Noisy data, added to the probe measurements, are used to enhance the robustness of the method. The reached results demonstrate the efficiency of the proposed approach and encourage future work on this subject.

Keywords: Inverse problem, MLP neural network, parameters identification, FEM.

3029 Bi-lingual Handwritten Character and Numeral Recognition using Multi-Dimensional Recurrent Neural Networks (MDRNN)

Authors: Kandarpa Kumar Sarma

Abstract:

The key to the continued success of ANNs depends considerably on the use of hybrid structures implemented on cooperative frameworks. Hybrid architectures give the ANN the ability to validate heterogeneous learning paradigms. This work describes the implementation of a set of distributed and hybrid ANN models for character recognition applied to Anglo-Assamese scripts. The objective is to describe the effectiveness of hybrid ANN setups as an innovative means of neural learning for an application like multilingual handwritten character and numeral recognition.

Keywords: Assamese, Feature, Recurrent.

3028 Prediction of Air-Water Two-Phase Frictional Pressure Drop Using Artificial Neural Network

Authors: H. B. Mehta, Vipul M. Patel, Jyotirmay Banerjee

Abstract:

The present paper discusses the prediction of gas-liquid two-phase frictional pressure drop in a 2.12 mm horizontal circular minichannel using Artificial Neural Network (ANN). The experimental results are obtained with air as gas phase and water as liquid phase. The superficial gas velocity is kept in the range of 0.0236 m/s to 0.4722 m/s while the values of 0.0944 m/s, 0.1416 m/s and 0.1889 m/s are considered for superficial liquid velocity. The experimental results are predicted using different Artificial Neural Network (ANN) models. Networks used for prediction are radial basis, generalised regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. Transfer functions used for networks are Linear (PURELIN), Logistic sigmoid (LOGSIG), tangent sigmoid (TANSIG) and Gaussian RBF. Combination of networks and transfer functions give different possible neural network models. These models are compared for Mean Absolute Relative Deviation (MARD) and Mean Relative Deviation (MRD) to identify the best predictive model of ANN.
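
For reference, the two comparison metrics named above are commonly defined as below; these are assumed standard forms, since the abstract does not spell them out, and the paper's exact definitions may differ.

```python
# Commonly used definitions of MARD and MRD (assumed; not quoted from the paper):
# MARD uses the absolute relative deviation, MRD the signed one.
import numpy as np

def mard(predicted, experimental):
    """Mean Absolute Relative Deviation, in percent."""
    rel = (np.asarray(predicted) - np.asarray(experimental)) / np.asarray(experimental)
    return 100.0 * np.mean(np.abs(rel))

def mrd(predicted, experimental):
    """Mean Relative Deviation (signed), in percent."""
    rel = (np.asarray(predicted) - np.asarray(experimental)) / np.asarray(experimental)
    return 100.0 * np.mean(rel)

print(mard([105, 98], [100, 100]), mrd([105, 98], [100, 100]))  # 3.5 1.5
```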

Keywords: Minichannel, Two-Phase Flow, Frictional Pressure Drop, ANN, MARD, MRD.

3027 Comparison between Beta Wavelets Neural Networks, RBF Neural Networks and Polynomial Approximation for 1D, 2D Functions Approximation

Authors: Wajdi Bellil, Chokri Ben Amar, Adel M. Alimi

Abstract:

This paper proposes a comparison between wavelet neural networks (WNN), RBF neural networks and polynomial approximation in terms of 1-D and 2-D function approximation. We present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: R^n -> R from scattered samples (x_i, y_i = f(x_i)), i = 1, ..., n, where, first, we have little a priori knowledge of the unknown function f: it lives in some infinite-dimensional smooth function space; and second, the function approximation process is performed iteratively: each new measurement of the function (x_i, f(x_i)) is used to compute a new estimate of f. Simulation results are presented to validate the generalization ability and efficiency of the proposed Beta wavelet network.

Keywords: Beta wavelets networks, RBF neural network, training algorithms, MSE, 1-D, 2D function approximation.

3026 Margin-Based Feed-Forward Neural Network Classifiers

Authors: Han Xiao, Xiaoyan Zhu

Abstract:

The margin-based principle was proposed a long time ago, and it has been proved that this principle can reduce the structural risk and improve performance both theoretically and in practice. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular with deeper architectures. However, the training algorithm of the feed-forward neural network is derived from the Widrow-Hoff principle, which minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparser labels and higher-dimensional datasets with high accuracy, while the modification from the old ANN method to our method is easy and requires almost no extra work.
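
To illustrate the general idea of margin-based training of a feed-forward classifier (not the paper's specific formulation), the sketch below replaces the usual squared-error objective with a multi-class hinge (margin) loss; the network size, data and optimizer settings are assumptions.

```python
# Sketch: feed-forward classifier trained with a margin (hinge) objective
# instead of squared error. Data and sizes are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)                      # placeholder features
y = (X[:, 0] + X[:, 1] > 0).long()            # placeholder 2-class labels

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.MultiMarginLoss(margin=1.0)      # margin-based objective
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

acc = (net(X).argmax(dim=1) == y).float().mean()
print(f"final margin loss {loss.item():.3f}, training accuracy {acc:.3f}")
```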

Keywords: Max-Margin Principle, Feed-Forward Neural Network, Classifier.

3025 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation, image editing, etc. In this work, the facial emotion recognition task is performed by the proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.
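
An illustrative CNN for 48x48 grayscale FER2013 images with 7 emotion classes is sketched below; the abstract does not give the paper's exact architecture, so all layer sizes and the dropout rate are assumptions.

```python
# Illustrative CNN for FER2013-style inputs (1 x 48 x 48, 7 classes).
import torch
import torch.nn as nn

class FerCnn(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(128, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FerCnn()(torch.randn(4, 1, 48, 48))
print(logits.shape)   # torch.Size([4, 7])
```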

Keywords: Convolutional Neural Network, Deep Learning, Deep Learning Based FER, Facial Emotion Recognition.

3024 Sociological Impact on Education: An Analytical Approach through Artificial Neural Network

Authors: P. R. Jayathilaka, K.L. Jayaratne, H.L. Premaratne

Abstract:

The research presented in this paper is an ongoing project applying neural network and fuzzy models to evaluate the sociological factors that affect the educational performance of students in Sri Lanka. One of its major goals is to prepare the ground for devising a counseling tool that helps these students perform better at their examinations, especially at the G.C.E O/L (General Certificate of Education - Ordinary Level) examination. Closely related sociological factors are collected as raw data, the noise in these data is filtered through the fuzzy interface, and the supervised neural network is used to recognize performance patterns against the chosen social factors.

Keywords: Education, Fuzzy, neural network, prediction, Sociology

3023 Neural Network Based Predictive DTC Algorithm for Induction Motors

Authors: N. Vahdatifar, Ss. Mortazavi, R. Kianinezhad

Abstract:

In this paper, a neural network based predictive DTC algorithm is proposed. This approach is used as an alternative to classical approaches. An appropriate feed-forward network is chosen and, based on its estimate of the derivative of the electromagnetic torque, the optimal stator voltage vector to be applied to the induction motor (by the inverter) is determined. Moreover, an appropriate torque and flux observer is proposed.

Keywords: Neural Networks, Predictive DTC

3022 Application of Functional Network to Solving Classification Problems

Authors: Yong-Quan Zhou, Deng-Xu He, Zheng Nong

Abstract:

In this paper, two models using functional networks were employed to solve classification problems. Functional networks are generalized neural networks which permit the specification of their initial topology using knowledge about the problem at hand. After analyzing the available data and their relations, we systematically discuss a numerical analysis method for functional networks and apply two functional network models to the XOR problem. The XOR problem, which cannot be solved with a two-layered neural network, can be solved by a two-layered functional network, which reveals the potent computational power of functional networks. The performance of the proposed model was validated using classification problems.

Keywords: Functional network, neural network, XOR problem, classification, numerical analysis method.

3021 Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network

Authors: Li Kewen, Su Zhaoxin, Wang Xingmou, Zhu Jian Bing

Abstract:

Conventional reservoir prediction methods are not sufficient to explore the implicit relations between seismic attributes, so data utilization is low. In order to improve the classification accuracy of reservoir lithology prediction, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained using seismic attribute data and lithology data from the Shengli oilfield, and a nonlinear mapping relationship between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve the classification of reservoir lithology, with a classification accuracy close to 70%. This study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
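
For context, a squeeze-and-excitation (SE) block of the kind SENet adds to residual networks is sketched below: global average pooling "squeezes" each channel, a small bottleneck MLP "excites" per-channel weights, and the feature map is rescaled. The channel count and reduction ratio are assumptions, not the paper's configuration.

```python
# Sketch of a squeeze-and-excitation block; sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, C, H, W)
        squeezed = x.mean(dim=(2, 3))           # global average pool -> (batch, C)
        weights = self.fc(squeezed)[:, :, None, None]
        return x * weights                      # channel-wise recalibration

out = SEBlock(16)(torch.randn(2, 16, 8, 8))
print(out.shape)   # torch.Size([2, 16, 8, 8])
```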

Keywords: Convolutional neural network, lithology, prediction of reservoir lithology, seismic attributes.

3020 Nonlinear Adaptive PID Control for a Semi-Batch Reactor Based On an RBF Network

Authors: Magdi M. Nabi, Ding-Li Yu

Abstract:

Control of a semi-batch polymerization reactor using an adaptive radial basis function (RBF) neural network method is investigated in this paper. A neural network inverse model is used to estimate the valve position of the reactor; this method identifies the controlled system with the RBF neural network identifier. The weights of the adaptive PID controller are adjusted in a timely manner based on the identification of the plant and the self-learning capability of the RBFNN. A PID controller is used in the feedback loop to regulate the actual temperature by compensating the neural network inverse model output. Simulation results show that the proposed control has strong adaptability and robustness, and satisfactory control performance for the nonlinear system is achieved.

Keywords: Chylla-Haase polymerization reactor, RBF neural networks, feed-forward and feedback control.
