Modified Levenberg-Marquardt Method for Neural Networks Training
In this paper, a modification of the Levenberg-Marquardt algorithm for training MLP neural networks is proposed. The proposed algorithm converges well and reduces the amount of oscillation during the learning procedure. An example is given to show the usefulness of this method, and a simulation verifies its results.
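For context, the standard Levenberg-Marquardt update that such modifications start from solves a damped Gauss-Newton system, increasing the damping term when a step fails and decreasing it when a step succeeds. Below is a minimal sketch of that baseline scheme on a toy curve-fitting problem; the model, parameter names, and constants are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Baseline Levenberg-Marquardt (the scheme this paper modifies), shown on a
# toy nonlinear least-squares problem: fit y = w0 * (1 - exp(-w1 * x)).
# The toy model and all constants here are illustrative, not from the paper.

def residuals(w, x, y):
    return y - w[0] * (1.0 - np.exp(-w[1] * x))

def jacobian(w, x):
    # Analytic derivatives of the residual with respect to w0 and w1.
    e = np.exp(-w[1] * x)
    J = np.empty((x.size, 2))
    J[:, 0] = -(1.0 - e)
    J[:, 1] = -w[0] * x * e
    return J

def lm_fit(x, y, w, mu=1e-2, mu_scale=10.0, iters=50):
    sse = float(np.sum(residuals(w, x, y) ** 2))
    for _ in range(iters):
        r = residuals(w, x, y)
        J = jacobian(w, x)
        # LM step: solve (J^T J + mu I) dw = -J^T r.
        A = J.T @ J + mu * np.eye(w.size)
        dw = np.linalg.solve(A, -J.T @ r)
        w_new = w + dw
        sse_new = float(np.sum(residuals(w_new, x, y) ** 2))
        if sse_new < sse:
            # Accept the step; lean toward the fast Gauss-Newton direction.
            w, sse, mu = w_new, sse_new, mu / mu_scale
        else:
            # Reject the step; lean toward slow but safe gradient descent.
            mu *= mu_scale
    return w, sse

rng = np.random.default_rng(0)
x = np.linspace(0.1, 4.0, 40)
y = 2.5 * (1.0 - np.exp(-1.3 * x)) + 0.01 * rng.standard_normal(x.size)
w, sse = lm_fit(x, y, np.array([1.0, 1.0]))
print(w, sse)
```

The accept/reject adjustment of `mu` is the part that causes the oscillation the paper targets: repeated rejected steps make `mu` jump up and down between iterations.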
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1333881