Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques

Authors: Z. Zainuddin, N. Mahat, Y. Abu Hassan

Abstract:

Since the introduction of the backpropagation algorithm, a wide variety of improvements to the technique for training feedforward neural networks have been proposed. This article focuses on two classes of acceleration techniques. The first, known as Local Adaptive Techniques, relies only on weight-specific information, such as the temporal behavior of the partial derivative of the error with respect to the current weight. The second, known as Dynamic Adaptation Methods, dynamically adapts the momentum factor, α, and the learning rate, η, with respect to the iteration number or the gradient. Some of the most popular learning algorithms are described. These techniques have been implemented and tested on several problems, and their performance is measured in terms of gradient and error-function evaluations and percentage of success. Numerical evidence shows that these techniques improve the convergence of the backpropagation algorithm.
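
To make the two classes concrete, the sketch below contrasts a local adaptive update in the delta-bar-delta style of Jacobs [3], where every weight carries its own learning rate adapted from the sign agreement of successive partial derivatives, with a bold-driver-style dynamic adaptation of a single global learning rate driven by the change in the error function. This is an illustrative sketch, not the authors' exact formulation; the function names and the constants kappa, phi, up, and down are assumptions.

    import numpy as np

    def local_adaptive_step(w, grad, prev_grad, lr, kappa=0.01, phi=0.5):
        # Local adaptive technique (delta-bar-delta style): each weight has
        # its own learning rate, grown additively where successive partial
        # derivatives agree in sign and shrunk multiplicatively where they
        # disagree (oscillation signals that the step is too large).
        agree = grad * prev_grad
        lr = np.where(agree > 0, lr + kappa,
                      np.where(agree < 0, lr * phi, lr))
        return w - lr * grad, lr

    def dynamic_adaptation_step(w, grad, eta, err, prev_err, up=1.05, down=0.7):
        # Dynamic adaptation method (bold-driver style): one global learning
        # rate eta, increased while the error function keeps decreasing and
        # cut back as soon as it rises.
        eta = eta * up if err < prev_err else eta * down
        return w - eta * grad, eta

Starting from, say, lr = np.full_like(w, 0.1), repeated calls to local_adaptive_step let consistently descending weights take progressively larger steps while oscillating weights are damped; the dynamic variant applies the same grow-or-shrink logic once per iteration to the network as a whole.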

Keywords: Backpropagation, Dynamic Adaptation Methods, Local Adaptive Techniques, Neural Networks.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1075954


References:


[1] Magoulas, G.D., Plagianakos, V.P. & Vrahatis, M.N., Globally convergent algorithms with local learning rates. IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 774-779, 2002.
[2] Silva, F.M. & Almeida, L.B., Acceleration techniques for the backpropagation algorithm. Neural Networks: EURASIP Workshop, Sesimbra, Portugal, 1990.
[3] Jacobs, R.A., Increased rates of convergence through learning rate adaptation. Neural Networks, vol. 1, no. 4, pp. 295-308, 1988.
[4] Tollenaere, T., SuperSAB: fast adaptive backpropagation with good scaling properties. Neural Networks, vol. 3, no. 5, 1990.
[5] Fahlman, S.E., An empirical study of learning speed in backpropagation networks. Technical Report CMU-CS-88-162, Carnegie Mellon University, 1988.
[6] Vrahatis, M.N., Magoulas, G.D. & Plagianakos, V.P., Convergence analysis of the Quickprop method. University of Patras and Athens, 1999.
[7] Riedmiller, M. & Braun, H., A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks (ICNN) (Ruspini, H., ed.), pp. 586-591, San Francisco, 1993.
[8] Evans, D.J. & Zainuddin, Z., Acceleration of the backpropagation through dynamic adaptation of the momentum. Neural, Parallel & Scientific Computations, vol. 5, no. 3, pp. 297-308, 1997 (see also Internal Report No. 1028, PARC, Loughborough University of Technology, U.K., 1996).
[9] Zainuddin, Z. & Evans, D.J., Acceleration of the backpropagation through dynamic adaptation of the learning rate. International Journal of Computer Mathematics, 334, pp. 1-17, 1997 (see also Internal Report No. 1029, PARC, Loughborough University of Technology, U.K., 1996).
[10] Zainuddin, Z. & Sathasivam, S., Modeling nonlinear relationships in ecology and biology using neural networks. Proceedings of the National Workshop in Ecological and Environmental Modeling (ECOMOD), Sept. 3-4, 2001.
[11] Zainuddin, Z. & Ahmad Fadzil, M.H., Training feedforward neural networks: an algorithm giving improved convergence. Proceedings of Research & Development in Computer Science & Its Application, pp. 98-102, Penang, Malaysia, 1998.