WASET
	%0 Journal Article
	%A Syed Muhammad Aqil Burney and  Tahseen Ahmed Jilani and  C. Ardil
	%D 2007
	%J International Journal of Computer and Information Engineering
	%B World Academy of Science, Engineering and Technology
	%I Open Science Index 1, 2007
	%T A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks 
	%U https://publications.waset.org/pdf/9681
	%V 1
	%X Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is not desirable; the focus is therefore functional minimization on an arbitrary basis. A number of methods based on local gradient and Hessian matrices are discussed, and modifications of several first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that conjugate gradient and quasi-Newton methods outperform gradient descent methods. The Levenberg-Marquardt algorithm is of special interest in financial forecasting.
	%P 145-151