Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 31903
Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly relieve the burden of having to specify the structure of the model completely. Two different types of neural networks were used for the pulping application. Three-layer feedforward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on PCG-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale restarts (PCGB). In the simulations, the PCG methods proved robust against phenomena such as oscillations due to large step sizes.
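To illustrate the preconditioning idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of the preconditioned conjugate gradient iteration for a linear system Ax = b with a symmetric positive-definite A. Network training applies the nonlinear CG variants (Fletcher-Reeves, Polak-Ribiere) to the error gradient instead, but the linear case shows how the preconditioner M enters the recurrence. The Jacobi (diagonal) preconditioner and the example matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for Ax = b (A symmetric positive definite).

    M_inv is a callable applying the inverse preconditioner: M_inv(r) = M^{-1} r.
    Effectively solves the better-conditioned system M^{-1} A x = M^{-1} b.
    """
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    r = b - A @ x              # residual
    z = M_inv(r)               # preconditioned residual
    p = z.copy()               # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)  # exact line-search step for the quadratic
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        # beta update; in the nonlinear setting this is where the
        # Fletcher-Reeves / Polak-Ribiere variants differ.
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x

# Jacobi preconditioner M = diag(A), so M^{-1} r is an elementwise division.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
```

For ill-conditioned systems, a good preconditioner clusters the eigenvalues of M⁻¹A near 1, which is exactly the convergence improvement the abstract refers to.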

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1333344


References:


[1] Lanouette, R. and Laperriere, L. Evaluation of Sugar Maple (Acer Saccharum) in high yield pulping processes. Pulp and Paper Research Centre, Université du Québec, 2002.
[2] Haykin, S. Neural Networks: A Comprehensive Foundation (Second Edition), New Jersey, Prentice Hall, 1999.
[3] Fletcher, R. and Reeves, C.M. Function minimization by conjugate gradients. Computer Journal, 1964, 7, 149-154.
[4] Leonard, J. and Kramer, M.A. Improvement to the back-propagation algorithm for training neural networks, Computers and Chemical Engineering, 1990, 14(3), 337-341.
[5] Polak, E. Computational Methods in Optimization, New York, Academic Press, 1971.
[6] Powell, M. J. D. Restart procedures for the conjugate gradient method. Mathematical Programming, 1977, 12, 241-254.
[7] Sathasivam, S. Optimization Methods in Training Neural Networks. Master Thesis, Universiti Sains Malaysia, 2003.
[8] Demuth, H. and Beale, M. Neural Network Toolbox, Natick, MA, The MathWorks, Inc., 2002.
[9] Blum, A. Neural Networks in C++, New York, John Wiley & Sons, 1992.