Optimization of a Three-Term Backpropagation Algorithm Used for Neural Network Learning
Authors: Yahya H. Zweiri
Abstract:
The back-propagation (BP) algorithm calculates the weight changes of an artificial neural network, and a two-term algorithm with a dynamically optimal learning rate and a momentum factor is commonly used. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term can also degrade the convergence of the BP algorithm, so optimization approaches for evaluating the learning parameters are required to make the three-term BP algorithm practical to apply. This paper considers the optimization of the new back-propagation algorithm using derivative information. A family of approaches exploiting the derivatives with respect to the learning rate, momentum factor, and proportional factor is presented. These approaches compute the derivatives in the weight space automatically, using information gathered during the forward and backward passes. The three-term BP algorithm and the optimization approaches are evaluated on the benchmark XOR problem.
Keywords: Neural Networks, Backpropagation, Optimization.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1083171
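The abstract describes a weight update built from three terms: a gradient term scaled by the learning rate alpha, a momentum term scaled by beta, and a proportional-factor term scaled by gamma that feeds the raw output error back into the update, i.e. dW(t) = alpha * (-dE/dW) + beta * dW(t-1) + gamma * e(t). Below is a minimal Python sketch of this style of update on the XOR benchmark. The network size (2-2-1), the hyperparameter values, and the way the error e(t) is propagated to the hidden layer are illustrative assumptions for this sketch, not the paper's settings or derivations.

import numpy as np

rng = np.random.default_rng(0)

# XOR data set: 4 patterns, 2 inputs, 1 target each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 network with small random initial weights; biases folded into
# the weight matrices via an appended column of ones.
W1 = rng.normal(scale=0.5, size=(3, 2))   # input (+bias) -> hidden
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden (+bias) -> output

alpha, beta, gamma = 0.5, 0.7, 0.05       # learning rate, momentum, PF (illustrative)
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

Xb = np.hstack([X, np.ones((4, 1))])      # inputs with bias column

for _ in range(5000):
    # Forward pass.
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((4, 1))])
    Y = sigmoid(Hb @ W2)

    # Output error e(t) = target - output; drives the gradient and PF terms.
    E = T - Y

    # Backward pass: standard BP deltas for sigmoid units.
    delta2 = E * Y * (1 - Y)
    delta1 = (delta2 @ W2[:2].T) * H * (1 - H)

    # Negative gradients of the sum-of-squares error.
    g2 = Hb.T @ delta2
    g1 = Xb.T @ delta1

    # Three-term update: learning-rate, momentum, and proportional-factor terms.
    # Propagating E to the hidden layer through W2 is an assumption of this sketch.
    dW2 = alpha * g2 + beta * dW2_prev + gamma * (Hb.T @ E)
    dW1 = alpha * g1 + beta * dW1_prev + gamma * (Xb.T @ (E @ W2[:2].T))
    W2 += dW2
    W1 += dW1
    dW2_prev, dW1_prev = dW2, dW1

print(np.round(Y.ravel(), 3))             # should approach [0, 1, 1, 0]

With gamma set to 0 this reduces to the standard two-term BP update, which makes it easy to compare the speed-up contributed by the third term on the same problem.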