Modified Levenberg-Marquardt Method for Neural Networks Training

Authors: Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli, Abbas Hoseinabadi

Abstract:

In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network training is proposed. The proposed algorithm converges well and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies the results of the proposed method.
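For context, the baseline the paper builds on can be sketched in a few lines: standard Levenberg-Marquardt training updates the weights as w ← w − (JᵀJ + μI)⁻¹Jᵀe, raising the damping factor μ when a step fails and lowering it when it succeeds. The sketch below fits a tiny 1-3-1 MLP with a finite-difference Jacobian; it illustrates only this standard algorithm, not the authors' specific modification, and all names in it (network size, data, μ schedule) are illustrative assumptions.

```python
import numpy as np

# Standard (unmodified) Levenberg-Marquardt training of a 1-3-1 MLP.
# Illustrative sketch only -- not the modified algorithm from the paper.

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 20)        # training inputs (assumed toy data)
T = np.sin(np.pi * X)                 # training targets

H = 3                                 # hidden units
n_params = 3 * H + 1                  # w1 (H), b1 (H), w2 (H), b2 (1)

def forward(w, x):
    w1, b1, w2, b2 = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    h = np.tanh(np.outer(x, w1) + b1)  # hidden layer, shape (N, H)
    return h @ w2 + b2                 # linear output, shape (N,)

def residuals(w):
    return forward(w, X) - T

def jacobian(w, eps=1e-6):
    # Finite-difference Jacobian of the residuals w.r.t. the weights.
    r0 = residuals(w)
    J = np.empty((X.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r0) / eps
    return J

w = rng.normal(scale=0.5, size=n_params)
mu = 1e-2                              # damping factor
sse = float(residuals(w) @ residuals(w))
for _ in range(200):
    J, e = jacobian(w), residuals(w)
    # LM step: solve (J^T J + mu*I) dw = J^T e
    dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ e)
    w_new = w - dw
    sse_new = float(residuals(w_new) @ residuals(w_new))
    if sse_new < sse:                  # step improved the error: accept, relax damping
        w, sse, mu = w_new, sse_new, max(mu / 10, 1e-10)
    else:                              # step worsened the error: reject, increase damping
        mu *= 10

print(f"final SSE: {sse:.6f}")
```

The alternating accept/reject rule on μ is exactly where oscillation can appear in practice, which is the behavior the paper's modification targets.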

Keywords: Neural network, modification, Levenberg-Marquardt, variable learning rate

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1333881


References:


[1] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning internal representations by error propagation," in Parallel Distributed Processing, Cambridge, MA: MIT Press, vol. 1, pp. 318-362, 1986.
[2] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-536, 1986.
[3] Werbos, P. J., "Back-propagation: Past and future," Proceedings of the International Conference on Neural Networks, San Diego, CA, vol. 1, pp. 343-354, 1988.
[4] M. T. Hagan and M. B. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Trans. on Neural Net., vol. 5, no. 6, pp. 989-993, 1994.
[5] Bello, M. G., "Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks," IEEE Trans. on Neural Net., vol. 3, pp. 864-875, 1992.
[6] Samad, T., "Back-propagation improvements based on heuristic arguments," Proceedings of the International Joint Conference on Neural Networks, Washington, vol. 1, pp. 565-568, 1990.
[7] Solla, S. A., Levin, E. and Fleisher, M., "Accelerated learning in layered neural networks," Complex Systems, vol. 2, pp. 625-639, 1988.
[8] Miniani, A. A. and Williams, R. D., "Acceleration of back-propagation through learning rate and momentum adaptation," Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, vol. 1, pp. 676-679, 1990.
[9] Jacobs, R. A., "Increased rates of convergence through learning rate adaptation," Neural Networks, vol. 1, no. 4, pp. 295-308, 1988.
[10] Andersen, T. J. and Wilamowski, B. M., "A Modified Regression Algorithm for Fast One Layer Neural Network Training," World Congress of Neural Networks, Washington DC, USA, vol. 1, pp. 687-690, July 17-21, 1995.
[11] Battiti, R., "First- and second-order methods for learning: between steepest descent and Newton's method," Neural Computation, vol. 4, no. 2, pp. 141-166, 1992.
[12] Charalambous, C., "Conjugate gradient algorithm for efficient training of artificial neural networks," IEE Proceedings, vol. 139, no. 3, pp. 301-310, 1992.
[13] Shah, S. and Palmieri, F., "MEKA - A fast, local algorithm for training feedforward neural networks," Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, vol. 3, pp. 41-46, 1990.
[14] B. M. Wilamowski, Y. Chen, and A. Malinowski, "Efficient algorithm for training neural networks with one hidden layer," in Proc. IJCNN, vol. 3, pp. 1725-1728, 1999.
[15] T. Cong Chen, D. Jian Han, F. T. K. Au, L. G. Than, "Acceleration of Levenberg-Marquardt training of neural networks with variable decay rate," IEEE Trans. on Neural Net., vol. 3, no. 6, pp. 1873-1878, 2003.