Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning
Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the underlying nonlinear phenomena contained in the data set and to partly eliminate the burden of having to completely specify the structure of the model. Two different types of neural networks were applied to the pulping of sugar maple problem. Three-layer feedforward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^{-1}Ax = M^{-1}b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribière update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale restarts (PCGB). In the simulations, the behavior of the PCG methods proved robust against phenomena such as oscillations due to large step sizes.
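For orientation, the following is a minimal sketch of the standard textbook formulation behind these methods; the symbols A, b, M, w_k, g_k, p_k, \alpha_k and \beta_k are generic notation assumed here for illustration and are not taken from the paper. The linear system Ax = b is replaced by the preconditioned system

\[ M^{-1} A x = M^{-1} b, \]

where M is symmetric positive definite and close to A, so that M^{-1}A has a smaller condition number and more tightly clustered eigenvalues. In conjugate-gradient training the weights are updated as w_{k+1} = w_k + \alpha_k p_k, with search direction

\[ p_k = -g_k + \beta_k p_{k-1}, \qquad
\beta_k^{\mathrm{FR}} = \frac{g_k^{T} g_k}{g_{k-1}^{T} g_{k-1}}, \qquad
\beta_k^{\mathrm{PR}} = \frac{g_k^{T}\,(g_k - g_{k-1})}{g_{k-1}^{T} g_{k-1}}, \]

where g_k is the gradient of the training error, \beta^{\mathrm{FR}} is the Fletcher-Reeves update and \beta^{\mathrm{PR}} the Polak-Ribière update; the Powell-Beale variant restarts with p_k = -g_k whenever |g_k^{T} g_{k-1}| \ge 0.2\, \|g_k\|^2. In the preconditioned versions, the gradient g_k in these formulas is replaced by M^{-1} g_k.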
Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.
Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1331323