Search results for: preconditioner
Paper Count: 6

6 Comparison of Two Types of Preconditioners for Stokes and Linearized Navier-Stokes Equations

Authors: Ze-Jun Hu, Ting-Zhu Huang, Ning-Bo Tan

Abstract:

To solve saddle point systems efficiently, several preconditioners have been published. Among the many methods for constructing preconditioners for linear systems arising from saddle point problems, the relaxed dimensional factorization (RDF) preconditioner and the augmented Lagrangian (AL) preconditioner are used for both steady and unsteady Navier-Stokes equations. In this paper we compare the RDF preconditioner with the modified AL (MAL) preconditioner to determine which is more effective for solving the Navier-Stokes equations. Numerical experiments indicate that the MAL preconditioner is more efficient and robust, especially for moderate viscosities and stretched grids in steady problems. For unsteady cases, the convergence rate of the RDF preconditioner is slightly faster than that of the MAL preconditioner in some circumstances, but its parameter is more sensitive than that of the MAL preconditioner. Moreover, the convergence rate of the MAL preconditioner remains quite acceptable. We therefore conclude that the MAL preconditioner is more competitive than the RDF preconditioner. The experiments are implemented with the IFISS package.
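
To make the setting concrete, here is a minimal Python sketch of how a block preconditioner enters a Krylov solver for a saddle point system. It uses a simple block-diagonal Schur-complement preconditioner on a random toy problem; it is an illustrative stand-in, not an implementation of the RDF or MAL preconditioners compared in the paper.

import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n, m = 40, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                      # SPD velocity block
B = rng.standard_normal((m, n))                  # divergence block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])  # saddle point matrix
rhs = rng.standard_normal(n + m)

S = B @ np.linalg.solve(A, B.T)                  # Schur complement B A^{-1} B^T
def apply_prec(v):
    # Apply diag(A, S)^{-1} blockwise to the velocity and pressure parts.
    return np.concatenate([np.linalg.solve(A, v[:n]),
                           np.linalg.solve(S, v[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = gmres(K, rhs, M=M, restart=50)
print("converged:", info == 0, "residual:", np.linalg.norm(K @ x - rhs))

With this ideal block-diagonal preconditioner the preconditioned matrix has only three distinct eigenvalues, so GMRES converges in a handful of iterations; practical preconditioners such as RDF and MAL replace the exact solves with cheap approximations.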

Keywords: Navier-Stokes equations, Krylov subspace method, preconditioner, dimensional splitting, augmented Lagrangian preconditioner.

5 The BGMRES Method for Generalized Sylvester Matrix Equation AXB − X = C and Preconditioning

Authors: Azita Tajaddini, Ramleh Shamsi

Abstract:

In this paper, we present the block generalized minimal residual (BGMRES) method for solving the generalized Sylvester matrix equation. However, this method may fail to converge for some problems. We therefore construct a polynomial preconditioner for BGMRES and show why the polynomially preconditioned method is superior to some block solvers. Finally, numerical experiments report the effectiveness of this method.
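
The general idea of polynomial preconditioning is to approximate the inverse of the system operator by a low-degree polynomial in that operator, so the preconditioner costs only extra operator applications. The following Python sketch does this for the operator X -> AXB - X using a truncated Neumann series on a contractive random example; it illustrates the principle only and is not the paper's BGMRES construction.

import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(1)
n = 30
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # scaled so ||A|| ||B|| < 1
B = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
C = rng.standard_normal((n, n))

def L(v):                        # vec form of the operator X -> A X B - X
    X = v.reshape(n, n)
    return (A @ X @ B - X).ravel()

def prec(v):
    # With N(X) = A X B contractive, L = N - I and
    # L^{-1} = -(I + N + N^2 + ...); keep the first three terms.
    X = v.reshape(n, n)
    N1 = A @ X @ B
    N2 = A @ N1 @ B
    return -(X + N1 + N2).ravel()

Lop = LinearOperator((n * n, n * n), matvec=L)
Mop = LinearOperator((n * n, n * n), matvec=prec)
x, info = gmres(Lop, C.ravel(), M=Mop, restart=60)
print("converged:", info == 0)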

Keywords: Linear matrix equation, Block GMRES, matrix Krylov subspace, polynomial preconditioner.

4 An Incomplete Factorization Preconditioner for LMS Adaptive Filter

Authors: Shazia Javed, Noor Atinah Ahmad

Abstract:

In this paper an efficient incomplete factorization preconditioner is proposed for the Least Mean Squares (LMS) adaptive filter. The proposed preconditioner is approximated from a priori knowledge of the factors of the input correlation matrix using an incomplete strategy, motivated by the sparsity pattern of the upper triangular factor in the QRD-RLS algorithm. The convergence properties of the IPLMS algorithm are comparable with those of the transform-domain LMS (TDLMS) algorithm. Simulation results show the efficiency and robustness of the proposed algorithm with reduced computational complexity.
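
To make the role of the preconditioner concrete, the sketch below compares a plain LMS update with a preconditioned one on correlated (Markov) input. The paper's incomplete factorization is not reproduced; as a hypothetical stand-in, the inverse of the known input correlation matrix R is used, which is the ideal (but expensive) preconditioner that an incomplete factorization approximates.

import numpy as np

rng = np.random.default_rng(2)
n, T, mu = 8, 5000, 0.02
w_true = rng.standard_normal(n)

r = 0.9                                           # Markov input: R_ij = r^{|i-j|}
R = r ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lchol = np.linalg.cholesky(R)
P = np.linalg.inv(R)                              # ideal preconditioner ~ R^{-1}

w_lms = np.zeros(n)
w_pre = np.zeros(n)
for _ in range(T):
    x = Lchol @ rng.standard_normal(n)            # input with E[x x^T] = R
    d = w_true @ x + 0.01 * rng.standard_normal() # desired signal plus noise
    w_lms += mu * (d - w_lms @ x) * x             # plain LMS step
    w_pre += mu * (d - w_pre @ x) * (P @ x)       # preconditioned LMS step
print("LMS error:", np.linalg.norm(w_lms - w_true),
      "preconditioned:", np.linalg.norm(w_pre - w_true))

Because P R = I, the preconditioned recursion behaves as if the input were white, so its convergence no longer depends on the eigenvalue spread of R.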

Keywords: Autocorrelation matrix, Cholesky's factor, eigenvalue spread, Markov input.

3 Convergence and Comparison Theorems of the Modified Gauss-Seidel Method

Authors: Zhouji Chen

Abstract:

In this paper, the modified Gauss-Seidel method with a new preconditioner for solving the linear system Ax = b, where A is a nonsingular M-matrix with unit diagonal, is considered. The convergence property and comparison theorems of the proposed method are established. Two examples are given to show the efficiency and effectiveness of the modified Gauss-Seidel method with the presented new preconditioner.
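
The abstract does not state the new preconditioner itself, so the sketch below uses the classical (I + S) preconditioner built from the first superdiagonal of A (the Gunawardena-type choice) as a stand-in, showing how a preconditioned Gauss-Seidel iteration is set up for a nonsingular M-matrix with unit diagonal.

import numpy as np

def gauss_seidel(A, b, x0, iters=10):
    # Standard component-wise Gauss-Seidel sweeps.
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

n = 6
A = np.eye(n) + np.where(np.eye(n, dtype=bool), 0.0, -0.1)  # M-matrix, unit diagonal
b = A @ np.ones(n)                    # exact solution is the all-ones vector

S = np.zeros((n, n))
idx = np.arange(n - 1)
S[idx, idx + 1] = -A[idx, idx + 1]    # negated first superdiagonal of A
P = np.eye(n) + S

x_plain = gauss_seidel(A, b, np.zeros(n))
x_prec = gauss_seidel(P @ A, P @ b, np.zeros(n))
print("plain GS error:", np.linalg.norm(x_plain - 1),
      "preconditioned:", np.linalg.norm(x_prec - 1))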

Keywords: Preconditioned linear system, M-matrix, Convergence, Comparison theorem.

2 Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were considered for the pulping application. Three-layer feedforward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
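
A minimal sketch of the stated idea, assuming a simple Jacobi (diagonal) choice for M: conjugate gradients is run on the same ill-conditioned SPD system with and without the preconditioner and the iteration counts are compared. This illustrates the condition-number argument only, not the paper's network-training code.

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(3)
n = 200
Q = rng.standard_normal((n, n)) / 10
A = Q @ Q.T + np.diag(np.logspace(0, 4, n))   # ill-conditioned SPD matrix
b = rng.standard_normal(n)
d = np.diag(A)

# In SciPy, M should approximate A^{-1}; Jacobi uses the inverse diagonal.
M = LinearOperator((n, n), matvec=lambda v: v / d)

counts = {}
def counter(name):
    counts[name] = 0
    def cb(xk):
        counts[name] += 1
    return cb

cg(A, b, callback=counter("plain"))
cg(A, b, M=M, callback=counter("jacobi"))
print(counts)   # the Jacobi-preconditioned run typically needs far fewer iterations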

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

1 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning

Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were considered for the pulping of sugar maple application. Three-layer feedforward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
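
For reference, the three direction-update rules named in these abstracts can be sketched in their generic textbook form; this is not the paper's exact training code, and the 0.2 restart threshold is the commonly quoted Powell-Beale value.

import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    # The max(0, .) guard is the usual PR+ safeguard.
    return max(0.0, (g_new @ (g_new - g_old)) / (g_old @ g_old))

def powell_beale_restart(g_new, g_old):
    # Restart when successive gradients are far from orthogonal.
    return abs(g_new @ g_old) >= 0.2 * (g_new @ g_new)

def next_direction(g_new, g_old, d_old, rule="FR"):
    if rule == "PB" and powell_beale_restart(g_new, g_old):
        return -g_new                          # restart with steepest descent
    beta = {"FR": beta_fletcher_reeves,
            "PR": beta_polak_ribiere,
            "PB": beta_polak_ribiere}[rule](g_new, g_old)
    return -g_new + beta * d_old

In the preconditioned variants, the gradients g would additionally be multiplied by M⁻¹ before these updates are applied.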

Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.
