**Commenced:** January 2007

**Frequency:** Monthly

**Edition:** International

**Paper Count:** 3515

# Search results for: Competitive Neural Network

##### 3515 A Novel Technique for Ferroresonance Identification in Distribution Networks

**Authors:**
G. Mokryani,
M. R. Haghifam,
J. Esmaeilpoor

**Abstract:**

The occurrence of ferroresonance is one of the causes of transformer damage and failure, so recognizing the ferroresonance phenomenon is especially important. A novel method for classifying ferroresonance is presented in this paper. Using this method, ferroresonance can be discriminated from other transients such as capacitor switching, load switching, and transformer switching. The wavelet transform is used to decompose the signals, and a Competitive Neural Network is used for classification. Ferroresonance data and other transients were obtained by simulation with the EMTP program. Using the Daubechies wavelet transform, the signals were decomposed to six levels. The energies of the six detail signals obtained by the wavelet transform are used for training and testing the Competitive Neural Network. Results show that the proposed procedure is efficient in distinguishing ferroresonance from other events.

**Keywords:**
Competitive Neural Network,
Ferroresonance,
EMTP program,
Wavelet transform.
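The winner-take-all stage this abstract relies on can be illustrated with a minimal NumPy sketch; the six-element feature vectors below are synthetic stand-ins for the wavelet detail energies (no EMTP data is used), and the evenly spaced prototype initialization is a simplification chosen for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_competitive(features, n_units=4, epochs=50, lr=0.1):
    # Initialize prototypes on evenly spaced training samples (avoids
    # dead units), then apply winner-take-all competitive updates.
    weights = features[:: max(1, len(features) // n_units)][:n_units].copy()
    for _ in range(epochs):
        for x in features:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[winner] += lr * (x - weights[winner])  # only the winner moves
    return weights

def classify(weights, x):
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Two synthetic transient classes with distinct detail-energy signatures.
class_a = rng.normal(loc=[5.0, 4.0, 3.0, 2.0, 1.0, 0.5], scale=0.1, size=(20, 6))
class_b = rng.normal(loc=[0.5, 1.0, 2.0, 3.0, 4.0, 5.0], scale=0.1, size=(20, 6))
w = train_competitive(np.vstack([class_a, class_b]))

# Samples from different classes should excite different winning units.
units_a = {classify(w, x) for x in class_a}
units_b = {classify(w, x) for x in class_b}
print(units_a.isdisjoint(units_b))  # True: the two clusters are separated
```

In the full method the input vectors would instead be the energies of the six Daubechies detail signals, one per decomposition level.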

##### 3514 A Combined Neural Network Approach to Soccer Player Prediction

**Authors:**
Wenbin Zhang,
Hantian Wu,
Jian Tang

**Abstract:**

An artificial neural network is a mathematical model inspired by biological neural networks. There are several kinds of neural networks, and they are widely used in many areas such as prediction, detection, and classification. Meanwhile, in day-to-day life, people constantly face difficult decisions. For example, the coach of a soccer club has to decide which offensive player to select for a certain game. This work describes a novel neural network combining the General Regression Neural Network and the Probabilistic Neural Network to help a soccer coach make an informed decision.

**Keywords:**
General Regression Neural Network,
Probabilistic Neural Networks,
Neural function.
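The GRNN half of the combination described above is essentially Nadaraya-Watson kernel regression, which can be sketched in a few lines; the player features and targets here are invented purely for illustration:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """GRNN output: Gaussian-weighted average of the stored targets."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))   # one pattern unit per training sample
    return float(np.dot(w, y_train) / np.sum(w))

# Toy data: feature = (stamina, shooting skill), target = goals per match.
x = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.3, 0.2]])
y = np.array([1.0, 1.1, 0.1, 0.2])

pred = grnn_predict(x, y, np.array([0.85, 0.85]))
print(round(pred, 2))  # → 0.88, dominated by the two nearby strong players
```

A query close to the high-scoring cluster yields a prediction near that cluster's targets; the single smoothing parameter `sigma` controls how local the estimate is.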

##### 3513 Person Re-Identification Using Siamese Convolutional Neural Network

**Authors:**
Sello Mokwena,
Monyepao Thabang

**Abstract:**

In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis of benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques.

**Keywords:**
Camera network,
convolutional neural network topology,
person tracking,
person re-identification,
Siamese.

##### 3512 Multi-Label Hierarchical Classification for Protein Function Prediction

**Authors:**
Helyane B. Borges,
Julio Cesar Nievola

**Abstract:**

Hierarchical classification is a problem with applications in many areas, such as protein function prediction, where the data are hierarchically structured. It is therefore necessary to develop algorithms able to induce hierarchical classification models. This paper presents experiments with an algorithm for hierarchical classification called Multi-label Hierarchical Classification using a Competitive Neural Network (MHC-CNN). It was tested on ten datasets from the Gene Ontology (GO) Cellular Component domain. The results are compared with those of Clus-HMC and Clus-HSC using the hF-Measure.

**Keywords:**
Hierarchical Classification,
Competitive Neural Network,
Global Classifier.

##### 3511 Towards Growing Self-Organizing Neural Networks with Fixed Dimensionality

**Authors:**
Guojian Cheng,
Tianshi Liu,
Jiaxin Han,
Zheng Wang

**Abstract:**

**Keywords:**
Artificial neural networks,
Competitive learning,
Growing cell structures,
Self-organizing feature maps.

##### 3510 Neural Networks Learning Improvement using the K-Means Clustering Algorithm to Detect Network Intrusions

**Authors:**
K. M. Faraoun,
A. Boukelif

**Abstract:**

**Keywords:**
Neural networks,
Intrusion detection,
learning enhancement,
K-means clustering

##### 3509 A Literature Survey of Neural Network Applications for Shunt Active Power Filters

**Authors:**
S. Janpong,
K-L. Areerak,
K-N. Areerak

**Abstract:**

**Keywords:**
Active power filter,
neural network,
harmonic distortion,
harmonic detection and compensation,
non-linear load.

##### 3508 Applications of Cascade Correlation Neural Networks for Cipher System Identification

**Authors:**
B. Chandra,
P. Paul Varghese

**Abstract:**

Cryptosystem identification is one of the challenging tasks in cryptanalysis. The paper discusses the possibility of employing neural networks to identify cipher systems from ciphertexts. A Cascade Correlation Neural Network and a Back Propagation Network were employed for identifying the cipher systems. A very large collection of ciphertexts was generated using a block cipher (Enhanced RC6) and a stream cipher (SEAL). Promising accuracy was obtained with both neural network models, but it was observed that the Cascade Correlation Neural Network performed better than the Back Propagation Network.

**Keywords:**
Back Propagation Neural Networks,
Cascade Correlation Neural Network,
Crypto systems,
Block Cipher,
Stream Cipher.

##### 3507 Spline Basis Neural Network Algorithm for Numerical Integration

**Authors:**
Lina Yan,
Jingjing Di,
Ke Wang

**Abstract:**

A new basis-function neural network algorithm is proposed for numerical integration. The main idea is to construct a neural network model based on spline basis functions and use it to approximate the integrand by training the network weights. The convergence theorem of the neural network algorithm, the theorem for numerical integration, and one corollary are presented and proved. Numerical examples, compared with other methods, show that the algorithm is effective and offers high precision without requiring the integrand to be known in closed form. Thus, the algorithm presented in this paper can be widely applied in many engineering fields.

**Keywords:**
Numerical integration,
Spline basis function,
Neural network algorithm
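The spline-basis idea can be illustrated with a small NumPy sketch (a simplification, not the authors' algorithm): degree-1 B-splines (hat functions) play the role of the hidden units, their weights are fit by least squares, and the integral of the fitted model then follows in closed form because each hat function has a known integral:

```python
import numpy as np

def hat(x, center, h):
    """Degree-1 B-spline (hat function) with half-width h."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

def spline_integral(f, a, b, n_basis=51, n_samples=400):
    centers = np.linspace(a, b, n_basis)
    h = centers[1] - centers[0]
    xs = np.linspace(a, b, n_samples)
    # Design matrix: one column per basis function ("hidden neuron").
    A = np.stack([hat(xs, c, h) for c in centers], axis=1)
    w, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
    # Interior hats integrate to h; the truncated boundary hats to h/2.
    basis_int = np.full(n_basis, h)
    basis_int[0] = basis_int[-1] = h / 2
    return float(w @ basis_int)

approx = spline_integral(np.sin, 0.0, np.pi)
print(approx)  # close to the exact value 2
```

Only samples of the integrand are needed, which matches the abstract's point that the integrand need not be known in closed form.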

##### 3506 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

**Authors:**
Mousavian,
M. Moghimi Mofrad,
M. H. Vakili,
D. Ashouri,
R. Alizadeh

**Abstract:**

The aim of this research is to use artificial neural network computing to estimate the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator, using back-propagation as the learning algorithm, on a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and its predictions show good agreement with experimental data.

**Keywords:**
Neural Network,
Net Heating Value,
Crude Oil,
Experimental,
Modeling.

##### 3505 Avoiding Catastrophic Forgetting by a Dual-Network Memory Model Using a Chaotic Neural Network

**Authors:**
Motonobu Hattori

**Abstract:**

In neural networks, when new patterns are learned by a network, the new information radically interferes with previously stored patterns. This drawback is called catastrophic forgetting or catastrophic interference. In this paper, we propose a biologically inspired neural network model which overcomes this problem. The proposed model consists of two distinct networks: one is a Hopfield-type chaotic associative memory and the other is a multilayer neural network. We consider that these networks correspond to the hippocampus and the neocortex of the brain, respectively. Incoming information is first stored in the hippocampal network with a fast learning algorithm. The stored information is then recalled through the chaotic behavior of each neuron in the hippocampal network. Finally, it is consolidated in the neocortical network by means of pseudopatterns. Computer simulation results show that the proposed model avoids catastrophic forgetting far better than conventional models.

**Keywords:**
catastrophic forgetting,
chaotic neural network,
complementary learning systems,
dual-network

##### 3504 Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

**Authors:**
Khaing Win Mar,
Thinn Thu Naing

**Abstract:**

Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by varying the number of input and hidden nodes from 1 to 10 each, with a single output node. The optimum model is selected according to the minimum forecast error. Measuring network performance with the Root Mean Square Error (RMSE), experimental results show that a 3-input, 10-hidden, 1-output architecture gives the best prediction of monthly precipitation in Myanmar.

**Keywords:**
Precipitation prediction,
monthly precipitation,
neural network models,
Myanmar.
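The architecture-selection procedure described above, training candidate networks and keeping the one with the lowest validation RMSE, can be sketched as follows; the sine-based series is a synthetic stand-in for the precipitation data, and the tiny NumPy MLP is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_mlp(x, y, hidden, epochs=2000, lr=0.05):
    """One-hidden-layer network trained by plain gradient descent."""
    w1 = rng.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        a = np.tanh(x @ w1 + b1)      # hidden activations
        out = a @ w2 + b2             # single linear output node
        err = out - y
        ga = (err @ w2.T) * (1 - a ** 2)      # backprop through tanh
        w2 -= lr * a.T @ err / len(x); b2 -= lr * err.mean(axis=0)
        w1 -= lr * x.T @ ga / len(x);  b1 -= lr * ga.mean(axis=0)
    return w1, b1, w2, b2

def rmse(params, x, y):
    w1, b1, w2, b2 = params
    pred = np.tanh(x @ w1 + b1) @ w2 + b2
    return float(np.sqrt(np.mean((pred - y) ** 2)))

x = np.linspace(0, 2 * np.pi, 60).reshape(-1, 1)
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)
x_tr, y_tr, x_va, y_va = x[::2], y[::2], x[1::2], y[1::2]

# Grid over hidden-layer sizes; keep the minimum-RMSE architecture.
results = {n: rmse(train_mlp(x_tr, y_tr, n), x_va, y_va) for n in range(1, 6)}
best_h = min(results, key=results.get)
print(best_h, round(results[best_h], 3))
```

The paper's 100-case search extends the same loop over both the number of input nodes and the number of hidden nodes.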

##### 3503 Some Remarkable Properties of a Hopfield Neural Network with Time Delay

**Authors:**
Kelvin Rozier,
Vladimir E. Bondarenko

**Abstract:**

**Keywords:**
Chaos,
Hopfield neural network,
noise,
synchronization

##### 3502 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

**Authors:**
Reza Dinarvand,
Mahdi Sadeghian,
Somaye Sadeghian

**Abstract:**

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. Several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms have been used to minimize artificial network errors, such as colony algorithms, genetic algorithms, imperialist competitive algorithms, and so on. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, eccentricity of load, and undrained shear resistance of the soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks over back-propagation artificial neural networks as well as the Broms and Hansen methods.

**Keywords:**
Lateral bearing capacity,
short pile,
clayey soil,
artificial neural network,
Imperialist competitive algorithm.

##### 3501 Development of Gas Chromatography Model: Propylene Concentration Using Neural Network

**Authors:**
Areej Babiker Idris Babiker,
Rosdiazli Ibrahim

**Abstract:**

**Keywords:**
Analyzer,
Levenberg-Marquardt,
Gas chromatography,
Neural network

##### 3500 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques

**Authors:**
R. B. Knudsen,
O. T. Rasmussen,
R. A. Alphinas

**Abstract:**

The reason for conducting this research is to develop an algorithm capable of classifying news articles from the automobile industry, according to the competitive actions they entail, using Text Mining (TM) methods. The data must be properly preprocessed, so pipelines are prepared to fit each algorithm best. The pipelines are tested along with nine different classification algorithms from the realms of regression, support vector machines, and neural networks. Preliminary testing to identify the optimal pipelines and algorithms resulted in the selection of two algorithms with two different pipelines: Logistic Regression (LR) and an Artificial Neural Network (ANN). These algorithms are optimized further by testing several parameters of each. The best result is achieved with the ANN: the final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three classes that created noise, the final algorithm reaches an accuracy of 94%.

**Keywords:**
Artificial neural network,
competitive dynamics,
logistic regression,
text classification,
text mining.

##### 3499 Efficient System for Speech Recognition using General Regression Neural Network

**Authors:**
Abderrahmane Amrouche,
Jean Michel Rouvaen

**Abstract:**

**Keywords:**
Speech Recognition,
General Regression Neural Network,
Hidden Markov Model,
Recurrent Neural Network,
Arabic Digits.

##### 3498 Identify Features and Parameters to Devise an Accurate Intrusion Detection System Using Artificial Neural Network

**Authors:**
Saman M. Abdulla,
Najla B. Al-Dabagh,
Omar Zakaria

**Abstract:**

The aim of this article is to explain how attack features can be extracted from packets, how feature vectors can be built, and how they can be applied to the input of an analysis stage. For the analysis, the work deploys a feed-forward back-propagation neural network acting as a misuse intrusion detection system, using ten types of attacks as examples for training and testing the network. The article explains how packets are analyzed to extract features, and shows how selecting the right features, building correct vectors, and correctly choosing the training method and the number of nodes in the hidden layer affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the artificial neural network.

**Keywords:**
Artificial Neural Network,
Attack Features,
Misuse Intrusion Detection System,
Training Parameters.

##### 3497 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

**Authors:**
Anupama Pande,
Vishik Goel

**Abstract:**

A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of their most important applications is image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks, where they may replace the sigmoid transfer characteristic of the hidden layer in a multi-layer perceptron. This paper presents exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate that a complex-valued neural network with RBF units outperforms a real-valued neural network in image recognition. We have also studied various factors, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the error to converge. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

**Keywords:**
Complex valued neural network,
Radial Basis Function,
Image recognition.
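As a minimal illustration of the complex-valued setting, the forward pass of a single neuron with complex inputs and weights can be written as follows; applying the activation separately to the real and imaginary parts is one common convention, not necessarily the exact 'Complex-BP' formulation:

```python
import numpy as np

def complex_neuron(x, w, b):
    """One complex-valued neuron: complex dot product, then an activation
    applied separately to the real and imaginary parts of the result."""
    z = np.dot(w, x) + b
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Illustrative complex inputs and weights (not from the paper).
x = np.array([1.0 + 1.0j, 0.5 - 0.2j])
w = np.array([0.3 - 0.1j, 0.8 + 0.4j])
out = complex_neuron(x, w, 0.1 + 0.0j)
print(out)  # a complex activation with both parts bounded by tanh
```

Swapping `tanh` for a radial basis function of the distance to a complex centre gives the RBF hidden units the abstract studies.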

##### 3496 Application of Neural Networks in Financial Data Mining

**Authors:**
Defu Zhang,
Qingshan Jiang,
Xin Li

**Abstract:**

This paper deals with the application of a well-known neural network technique, the multilayer back-propagation (BP) neural network, to financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system forecasts buying and selling signals according to the predicted future trend of the stock market, providing decision support for stock investors. Simulation on seven years of the Shanghai Composite Index shows that the return achieved by this mining system is about three times that of the buy-and-hold strategy, so it is advantageous to apply neural networks to financial time series forecasting, and different investors could benefit from it.

**Keywords:**
Data mining,
neural network,
stock forecasting.

##### 3495 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

**Authors:**
Anupama Pande,
Ashok Kumar Thakur,
Swapnoneel Roy

**Abstract:**

**Keywords:**
Complex valued neural network,
Generalized Mean neuron model,
Signal processing.

##### 3494 Person Identification by Using AR Model for EEG Signals

**Authors:**
Gelareh Mohammadi,
Parisa Shoushtari,
Behnam Molaee Ardekani,
Mohammad B. Shamsollahi

**Abstract:**

**Keywords:**
Person Identification,
Autoregressive Model,
EEG,
Neural Network

##### 3493 Accelerating Integer Neural Networks On Low Cost DSPs

**Authors:**
Thomas Behan,
Zaiyi Liao,
Lian Zhao,
Chunting Yang

**Abstract:**

**Keywords:**
Digital Signal Processor (DSP),
Integer Neural Network (INN),
Low Cost Neural Network,
Integer Neural Network DSP Implementation.

##### 3492 Bayesian Deep Learning Algorithms for Classifying COVID-19 Images

**Authors:**
I. Oloyede

**Abstract:**

**Keywords:**
BCNN,
CNN,
Images,
COVID-19,
Deep Learning.

##### 3491 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

**Authors:**
Abdelhadi Lotfi,
Abdelkader Benyettou

**Abstract:**

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, which is the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller number consisting of the most representative samples of the training set. This is done without affecting the overall architecture of the network. Performance of the network is compared against performance of standard PNN for different databases from the UCI database repository. Results show an important gain in network size and performance.

**Keywords:**
Classification,
probabilistic neural networks,
network optimization,
pattern recognition.
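The basic PNN decision rule the abstract builds on, one Gaussian pattern unit per stored training sample with activations summed per class, can be sketched as follows; the two-blob dataset is synthetic, not from the UCI repository:

```python
import numpy as np

rng = np.random.default_rng(2)

def pnn_classify(x_train, y_train, x_query, sigma=0.5):
    """PNN: sum Gaussian kernel activations per class, pick the largest."""
    scores = {}
    for label in np.unique(y_train):
        pts = x_train[y_train == label]          # this class's pattern units
        d2 = np.sum((pts - x_query) ** 2, axis=1)
        scores[label] = np.exp(-d2 / (2 * sigma ** 2)).sum() / len(pts)
    return int(max(scores, key=scores.get))

# Two well-separated synthetic classes.
x0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(30, 2))
x1 = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(30, 2))
x_train = np.vstack([x0, x1])
y_train = np.array([0] * 30 + [1] * 30)

print(pnn_classify(x_train, y_train, np.array([0.1, -0.1])))  # → 0
print(pnn_classify(x_train, y_train, np.array([2.9, 3.2])))   # → 1
```

The optimization the paper proposes amounts to shrinking `x_train` to the most representative samples via cross-validation, which reduces the hidden-layer size without changing this decision rule.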

##### 3490 A Cognitive Model for Frequency Signal Classification

**Authors:**
Rui Antunes,
Fernando V. Coito

**Abstract:**

**Keywords:**
Neural Networks,
Signal Classification,
Adaptive Filters,
Cognitive Neuroscience

##### 3489 Inverse Problem Methodology for the Measurement of the Electromagnetic Parameters Using MLP Neural Network

**Authors:**
T. Hacib,
M. R. Mekideche,
N. Ferkha

**Abstract:**

**Keywords:**
Inverse problem,
MLP neural network,
parameters identification,
FEM.

##### 3488 Comparison between Beta Wavelets Neural Networks, RBF Neural Networks and Polynomial Approximation for 1D, 2D Functions Approximation

**Authors:**
Wajdi Bellil,
Chokri Ben Amar,
Adel M. Alimi

**Abstract:**

This paper proposes a comparison between wavelet neural networks (WNN), an RBF neural network, and polynomial approximation for 1-D and 2-D function approximation. We present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: R^n → R from scattered samples (x_i, y_i = f(x_i)), i = 1, ..., n, where, first, we have little a priori knowledge of the unknown function f: it lives in some infinite-dimensional smooth function space; and second, the function approximation process is performed iteratively: each new measurement of the function (x_i, f(x_i)) is used to compute a new estimate f̂ as an approximation of f. Simulation results are presented to validate the generalization ability and efficiency of the proposed Beta wavelet network.

**Keywords:**
Beta wavelets networks,
RBF neural network,
training algorithms,
MSE,
1-D and 2-D function approximation.

##### 3487 Margin-Based Feed-Forward Neural Network Classifiers

**Authors:**
Han Xiao,
Xiaoyan Zhu

**Abstract:**

**Keywords:**
Max-Margin Principle,
Feed-Forward Neural Network,
Classifier.

##### 3486 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

**Authors:**
Koray U. Erbas

**Abstract:**

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation, and image editing. In this work, the facial emotion recognition task is performed by a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.

**Keywords:**
Convolutional Neural Network,
Deep Learning,
Deep Learning Based FER,
Facial Emotion Recognition.