Optimization of the Input Layer Structure for Feed-Forward NARX Neural Networks
Authors: Zongyan Li, Matt Best
Abstract:
This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Using correlation analysis, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamics model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and the optimal number of neurons for the neural network are investigated.
Keywords: Correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation.
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1107061
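To illustrate the regressor-selection step described in the abstract, the sketch below ranks candidate lagged inputs u(t-k) and outputs y(t-k) by their correlation with the current output and keeps the most significant ones as the NARX input layer. This is a minimal example in Python on synthetic data, not the authors' implementation: the lag depths, the number of retained regressors, and all function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): correlation-based selection of
# NARX regressors. Single-input single-output data is assumed; lag depths,
# variable names and the number of kept regressors are illustrative choices.
import numpy as np

def build_regressors(u, y, n_u=4, n_y=4):
    """Build candidate lagged regressors u(t-1..n_u) and y(t-1..n_y)."""
    max_lag = max(n_u, n_y)
    t = np.arange(max_lag, len(y))
    cols, names = [], []
    for k in range(1, n_u + 1):
        cols.append(u[t - k]); names.append(f"u(t-{k})")
    for k in range(1, n_y + 1):
        cols.append(y[t - k]); names.append(f"y(t-{k})")
    return np.column_stack(cols), names, y[t]

def select_inputs(X, target, names, n_keep=4):
    """Rank candidates by |correlation| with the target output and keep
    the n_keep most significant ones to form the NN input layer."""
    corr = np.array([np.corrcoef(X[:, j], target)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(-np.abs(corr))[:n_keep]
    return X[:, order], [names[j] for j in order], corr[order]

# Synthetic data standing in for measured vehicle signals.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

X, names, target = build_regressors(u, y)
X_sel, kept, corr = select_inputs(X, target, names)
print(list(zip(kept, np.round(corr, 3))))  # e.g. y(t-1), u(t-1), ... form the input layer
```

In the setting the paper describes, the retained regressors would then feed a feed-forward NN trained with the Levenberg-Marquardt algorithm, with the F-ratio or validation MSE used to check that the discarded input channels do not degrade model accuracy.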