**Commenced:** January 2007

**Frequency:** Monthly

**Edition:** International

**Paper Count:** 3650

# Search results for: Conjugate Gradient algorithm

##### 3650 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

**Authors:**
N. M. Nawi,
R. S. Ransing,
M. R. Ransing

**Abstract:**

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient descent of error with respect to the weights and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
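As a generic, hedged illustration of the machinery abstracts like this one build on, the Fletcher-Reeves search-direction recurrence at the heart of conjugate gradient training can be sketched as follows. The gain-variation term and adaptive initial direction of the paper are not reproduced; a fixed learning rate stands in for a proper line search, and the toy quadratic objective is invented for illustration:

```python
def fr_conjugate_gradient(grad, w, lr=0.05, iters=300):
    """Minimise a function via nonlinear CG with the Fletcher-Reeves beta.

    grad: callable returning the gradient (a list) at w.
    A fixed learning rate lr stands in for a proper line search.
    """
    g = grad(w)
    d = [-gi for gi in g]                 # first direction: steepest descent
    for _ in range(iters):
        w = [wi + lr * di for wi, di in zip(w, d)]
        g_new = grad(w)
        # Fletcher-Reeves coefficient: beta = ||g_new||^2 / ||g||^2
        beta = sum(x * x for x in g_new) / max(sum(x * x for x in g), 1e-12)
        # the new search direction mixes the fresh gradient with the old direction
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return w

# Hypothetical toy objective f(w) = (w0 - 1)^2 + 2*(w1 + 3)^2
grad = lambda w: [2.0 * (w[0] - 1.0), 4.0 * (w[1] + 3.0)]
w = fr_conjugate_gradient(grad, [0.0, 0.0])   # approaches the minimiser [1, -3]
```

In network training, `grad` would be the backpropagated error gradient over the weights; the beta term is what distinguishes conjugate gradient from plain gradient descent.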

**Keywords:**
Adaptive gain variation,
back-propagation,
activation function,
conjugate gradient,
search direction.

##### 3649 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C

**Authors:**
Minghui Wang,
Luping Xu,
Juntao Zhang

**Abstract:**

The matrix equation *AXB=C* and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the premise of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of this method.

**Keywords:**
Iterative method,
symmetric arrowhead matrix,
conjugate gradient algorithm.

##### 3648 A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties

**Authors:**
Ahmad Alhawarat,
Mustafa Mamat,
Mohd Rivaie,
Ismail Mohd

**Abstract:**

**Keywords:**
Conjugate gradient method,
conjugate gradient coefficient,
global convergence.

##### 3647 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

**Authors:**
N. M. Nawi,
M. R. Ransing,
R. S. Ransing

**Abstract:**

**Keywords:**
Back-propagation,
activation function,
conjugate gradient,
search direction,
gain variation.

##### 3646 A Study on Neural Network Training Algorithm for Multiface Detection in Static Images

**Authors:**
Zulhadi Zakaria,
Nor Ashidi Mat Isa,
Shahrel A. Suandi

**Abstract:**

**Keywords:**
training algorithm,
multiface,
static image,
neural network

##### 3645 Comparison of Three Versions of Conjugate Gradient Method in Predicting an Unknown Irregular Boundary Profile

**Authors:**
V. Ghadamyari,
F. Samadi,
F. Kowsary

**Abstract:**

**Keywords:**
Boundary elements,
Conjugate Gradient Method,
Inverse Geometry Problem,
Sensitivity equation.

##### 3644 A Finite-Time Consensus Protocol of the Multi-Agent Systems

**Authors:**
Xin-Lei Feng,
Ting-Zhu Huang

**Abstract:**

Based on the conjugate gradient algorithm, a new consensus protocol for discrete-time multi-agent systems is presented, which can achieve finite-time consensus. Finally, a numerical example is given to illustrate our theoretical result.
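For context, here is a hedged sketch of the baseline (asymptotic, not finite-time) discrete-time consensus update that CG-based protocols accelerate; the path graph, step size `eps`, and initial states are illustrative assumptions, not taken from the paper:

```python
def consensus_step(x, edges, eps=0.3):
    """One update of x_i <- x_i + eps * sum_j (x_j - x_i) over neighbours j."""
    delta = [0.0] * len(x)
    for i, j in edges:
        delta[i] += x[j] - x[i]
        delta[j] += x[i] - x[j]
    return [xi + eps * di for xi, di in zip(x, delta)]

x = [1.0, 5.0, 9.0]            # initial agent states (hypothetical)
edges = [(0, 1), (1, 2)]       # path graph: agent 0 - agent 1 - agent 2
for _ in range(100):
    x = consensus_step(x, edges)
# the states approach the average (1 + 5 + 9) / 3 = 5
```

The step is `x <- x - eps * L x` with `L` the graph Laplacian; this converges only asymptotically, which is what motivates finite-time protocols such as the CG-based one above.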

**Keywords:**
Consensus protocols,
graph theory,
multi-agent systems,
conjugate gradient algorithm,
finite-time.

##### 3643 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations

**Authors:**
H. D. Ibrahim,
H. C. Chinwenyi,
H. N. Ude

**Abstract:**

In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric and positive definite matrix. We performed algorithmic iterative steps and obtained analytical solutions of a typical 3 x 3 symmetric and positive definite matrix using the three methods described in this paper (Gauss-Seidel, Jacobi and Conjugate Gradient methods). From the results obtained, we discovered that the Conjugate Gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many more iterations and much more time while only tending toward the exact solution.
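A minimal pure-Python sketch of the conjugate gradient iteration on a small SPD system Ax = b makes the comparison concrete; the 3 x 3 matrix below is illustrative, not the paper's example. In exact arithmetic CG terminates in at most n steps, which is the behavior the comparison highlights:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve Ax = b for symmetric positive definite A; returns (x, iterations)."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [sum(m * vi for m, vi in zip(row, v)) for row in M]
    x = [0.0] * len(b)
    r = b[:]                              # residual r = b - A x for x = 0
    p = r[:]                              # first search direction
    rs = dot(r, r)
    for k in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)           # exact step along the direction p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            return x, k + 1
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_iter

# Illustrative 3 x 3 symmetric positive definite system (not the paper's example)
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = conjugate_gradient(A, b)       # finishes in at most 3 steps here
```

Jacobi and Gauss-Seidel, by contrast, are stationary iterations whose error only shrinks by a constant factor per sweep, which is why they tend toward the solution rather than reaching it in finitely many steps.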

**Keywords:**
conjugate gradient,
linear equations,
symmetric and positive definite matrix,
Gauss-Seidel,
Jacobi,
algorithm

##### 3642 Advanced Neural Network Learning Applied to Pulping Modeling

**Authors:**
Z. Zainuddin,
W. D. Wan Rosli,
R. Lanouette,
S. Sathasivam

**Abstract:**

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping problem application. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^{-1}Ax = M^{-1}b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step size.
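The preconditioned system M^{-1}Ax = M^{-1}b mentioned above can be sketched with the simplest choice M = diag(A), a Jacobi preconditioner — an illustrative assumption; the paper's PCGF/PCGP/PCGB variants differ in their beta update and restart strategy, not in this basic recurrence:

```python
def pcg(A, b, tol=1e-12, max_iter=100):
    """Preconditioned CG for SPD A with the Jacobi preconditioner M = diag(A)."""
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]          # M^-1 applied entrywise
    x = [0.0] * n
    r = b[:]                                           # r = b - A x for x = 0
    z = [mi * ri for mi, ri in zip(M_inv, r)]          # z = M^-1 r
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(a * pi for a, pi in zip(row, p)) for row in A]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Tiny illustrative SPD system whose exact solution is [1, 1]
A = [[10.0, 1.0], [1.0, 2.0]]
b = [11.0, 3.0]
x = pcg(A, b)
```

In network training A is replaced by the (implicit) Hessian of the error surface, and a good M clusters its eigenvalues so fewer iterations are needed.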

**Keywords:**
Convergence,
pulping modeling,
neural networks,
preconditioned conjugate gradient.

##### 3641 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning

**Authors:**
W. D. Wan Rosli,
Z. Zainuddin,
R. Lanouette,
S. Sathasivam

**Abstract:**

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the Pulping of Sugar Maple problem. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^{-1}Ax = M^{-1}b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step size.

**Keywords:**
Convergence,
Modeling,
Neural Networks,
Preconditioned Conjugate Gradient.

##### 3640 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

**Authors:**
Sergey Kopysov,
Nikita Nedozhogin,
Leonid Tonkov

**Abstract:**

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method at best kept the run time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for the matrix-vector product, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communications between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU allows for scalability in solving large linear systems. The algorithm is implemented with the combined use of the MPI, OpenMP and CUDA technologies. We show that almost optimum speedup on 8 CPUs/2 GPUs may be reached (relative to a one-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
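A hedged serial sketch of the pipelined CG recurrence (in the style of Ghysels and Vanroose) shows where the single fused reduction per iteration comes from; the MPI/CUDA communication and overlap machinery is of course omitted, and the small test system is invented for illustration:

```python
def pipelined_cg(A, b, tol=1e-20, max_iter=50):
    """Pipelined CG: the two dot products per iteration form ONE fused reduction."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [sum(m * vi for m, vi in zip(row, v)) for row in M]
    x = [0.0] * len(b)
    r = b[:]                          # r = b - A x, starting from x = 0
    w = matvec(A, r)                  # w = A r is kept up to date by recurrence
    z = s = p = [0.0] * len(b)
    gamma_old = alpha_old = 1.0
    for i in range(max_iter):
        gamma = dot(r, r)             # } in a distributed run these two
        delta = dot(w, r)             # } reductions share one synchronization
        if gamma < tol:
            break
        q = matvec(A, w)              # the matvec that overlaps the reduction
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha_old)
        z = [qi + beta * zi for qi, zi in zip(q, z)]   # z = A s
        s = [wi + beta * si for wi, si in zip(w, s)]   # s = A p
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * si for ri, si in zip(r, s)]
        w = [wi - alpha * zi for wi, zi in zip(w, z)]
        gamma_old, alpha_old = gamma, alpha
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pipelined_cg(A, b)
```

The extra recurrences for `s`, `z` and `w` trade a little arithmetic and storage for the removal of one global synchronization, which is exactly the trade that pays off on multi-GPU clusters.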

**Keywords:**
Conjugate Gradient,
GPU,
parallel programming,
pipelined algorithm.

##### 3639 Signature Recognition Using Conjugate Gradient Neural Networks

**Authors:**
Jamal Fathi Abu Hasna

**Abstract:**

**Keywords:**
Signature Verification,
MATLAB Software,
Conjugate Gradient,
Segmentation,
Skilled Forgery,
Genuine.

##### 3638 Beam Orientation Optimization Using Ant Colony Optimization in Intensity Modulated Radiation Therapy

**Authors:**
Xi Pei,
Ruifen Cao,
Hui Liu,
Chufeng Jin,
Mengyun Cheng,
Huaqing Zheng,
Yican Wu,
FDS Team

**Abstract:**

**Keywords:**
intensity modulated radiation therapy,
ant colony optimization,
Conjugate Gradient algorithm

##### 3637 A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks

**Authors:**
Syed Muhammad Aqil Burney,
Tahseen Ahmed Jilani,
C. Ardil

**Abstract:**

**Keywords:**
Backpropagation algorithm,
conjugacy condition,
line search,
matrix perturbation

##### 3636 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots

**Authors:**
Meng Wu

**Abstract:**

Motion planning is a common task that robots are required to fulfill. A strategy combining Ant Colony Optimization (ACO) and a gravity gradient inversion algorithm is proposed for the motion planning of mobile robots. In this paper, in order to realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. The obstacles around a mobile robot cause gravity gradient anomalies, and a gradiometer installed on the mobile robot detects these anomalies. After obtaining the anomalies, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles. The relative distance and orientation deduced from the gravity gradient inversion algorithm are employed as the cost function in the ACO algorithm to realize motion planning. The proposed strategy is validated by simulation and experimental results.

**Keywords:**
Motion planning,
gravity gradient inversion algorithm,
ant colony optimization.

##### 3635 Bayesian Inference for Phase Unwrapping Using Conjugate Gradient Method in One and Two Dimensions

**Authors:**
Yohei Saika,
Hiroki Sakaematsu,
Shota Akiyama

**Abstract:**

We investigated statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models which approximated wave-fronts in remote sensing using SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated by assumed true prior, we found that the method of maximum entropy realized the optimal performance around the Bayes-optimal conditions by using model of the true prior and the likelihood representing optical measurement due to the interferometer. Also, we found that the MAP estimation regarded as a deterministic limit of maximum entropy almost achieved the same performance as the Bayes-optimal solution for the set of wave-fronts. Then, we clarified that the MAP estimation perfectly carried out phase unwrapping without using prior information, and also that the MAP estimation realized accurate phase unwrapping using conjugate gradient (CG) method, if we assumed the model of the true prior appropriately.

**Keywords:**
Bayesian inference using maximum entropy,
MAP estimation using conjugate gradient method,
SAR interferometry.

##### 3634 An Iterative Algorithm for KLDA Classifier

**Authors:**
D.N. Zheng,
J.X. Wang,
Y.N. Zhao,
Z.H. Yang

**Abstract:**

**Keywords:**
Linear discriminant analysis (LDA),
kernel LDA (KLDA),
conjugate gradient algorithm,
nonlinear discriminant classifier.

##### 3633 New Adaptive Linear Discriminant Analysis for Face Recognition with SVM

**Authors:**
Mehdi Ghayoumi

**Abstract:**

**Keywords:**
LDA,
adaptive,
SVM,
face recognition.

##### 3632 Developing a Conjugate Heat Transfer Solver

**Authors:**
Mansour A. Al Qubeissi

**Abstract:**

The current paper presents a numerical approach to solving conjugate heat transfer problems. A heat conduction code is coupled internally with a computational fluid dynamics solver to develop a coupled conjugate heat transfer solver. A methodology for treating non-matching meshes at the interface has also been proposed. The validation results of 1D and 2D cases for the developed conjugate heat transfer code have shown close agreement with analytical solutions.

**Keywords:**
Computational Fluid Dynamics,
Conjugate Heat transfer,
Heat Conduction,
Heat Transfer

##### 3631 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

**Authors:**
Yanwen Li,
Shuguo Xie

**Abstract:**

In electromagnetic imaging, because of the diffraction-limited system, the pixel values can change slowly near the edges of the image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is detected before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed by using a dictionary-learning algorithm, and the final segmentation results are obtained, which are very close to the gradient target in the original image. Both the experimental and simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
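As a hedged aside, the core Mean-Shift step the pipeline above starts from can be sketched in one dimension with a flat kernel; the data, bandwidth `h`, and starting point below are invented for illustration, and the paper's adaptive-bandwidth variant chooses `h` per point rather than globally:

```python
def mean_shift(x, data, h=1.0, iters=20):
    """Repeatedly move x to the mean of the data points within bandwidth h."""
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= h]
        x = sum(window) / len(window)   # shift to the window mean (a mode estimate)
    return x

# Two hypothetical 1-D clusters; starting near the lower one finds its mode
data = [1.0, 1.2, 0.8, 5.0, 5.1, 4.9]
m = mean_shift(0.0, data, h=1.5)        # settles at the mean of 0.8, 1.0, 1.2
```

Segmentation applies the same mode-seeking idea in the joint spatial-intensity space, so pixels drawn to the same mode form one region.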

**Keywords:**
Gradient image,
segmentation and extraction,
mean-shift algorithm,
dictionary learning.

##### 3630 Convergence Analysis of an Alternative Gradient Algorithm for Non-Negative Matrix Factorization

**Authors:**
Chenxue Yang,
Mao Ye,
Zijian Liu,
Tao Li,
Jiao Bao

**Abstract:**

Non-negative matrix factorization (NMF) is a useful computational method to find basis information of multivariate non-negative data. A popular approach to solve the NMF problem is the multiplicative update (MU) algorithm, but it has some defects, so the column-wisely alternating gradient (cAG) algorithm was proposed. In this paper, we analyze the convergence of the cAG algorithm and show its advantages over the MU algorithm. The stability of the equilibrium point is used to prove the convergence of the cAG algorithm. A classic model is used to obtain the equilibrium point, and invariant sets are constructed to guarantee the integrity of the stability. Finally, the convergence conditions of the cAG algorithm are obtained, which help reduce the evaluation time and are confirmed in the experiments. Using the same method, it is verified that the MU algorithm has a zero divisor and is convergent at zero. In addition, the convergence conditions of the MU algorithm at zero are similar to those of the cAG algorithm at non-zero. However, it is meaningless to discuss the convergence at zero, which is not always the result that we want for NMF. Thus, we theoretically illustrate the advantages of the cAG algorithm.
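A minimal sketch of the MU baseline discussed above (Lee-Seung multiplicative updates for min ||V − WH||² with non-negative factors) makes the comparison concrete; the tiny rank-1 example is invented for illustration, and the cAG column-wise scheme itself is not reproduced:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mu_step(V, W, H, eps=1e-9):
    """One Lee-Seung multiplicative update pass: entries stay non-negative."""
    Wt = [list(c) for c in zip(*W)]
    num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))       # H *= (W^T V) / (W^T W H)
    H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(H[0]))]
         for i in range(len(H))]
    Ht = [list(c) for c in zip(*H)]
    num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)       # W *= (V H^T) / (W H H^T)
    W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(W[0]))]
         for i in range(len(W))]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]            # rank-1 non-negative data
W, H = [[0.5], [0.5]], [[0.5, 0.5]]     # positive rank-1 starting factors
for _ in range(200):
    W, H = mu_step(V, W, H)
err = sum((V[i][j] - sum(W[i][k] * H[k][j] for k in range(1))) ** 2
          for i in range(2) for j in range(2))
```

Because every update multiplies by a non-negative ratio, a factor entry that hits zero stays zero, which is the "zero divisor" behavior the abstract criticizes.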

**Keywords:**
Non-negative matrix factorizations,
convergence,
cAG algorithm,
equilibrium point,
stability.

##### 3629 Loudspeaker Parameters Inverse Problem for Improving Sound Frequency Response Simulation

**Authors:**
Y. T. Tsai,
Jin H. Huang

**Abstract:**

The sound pressure level (SPL) of a moving-coil loudspeaker (MCL) is often simulated and analyzed using the lumped parameter model. However, the SPL of a MCL cannot be simulated precisely in the high frequency region, because the value of the cone effective area changes due to geometry variation in different mode shapes, which in turn affects the acoustic radiation mass and resistance. Herein, the paper presents an inverse method that can measure the value of the cone effective area at various frequency points and estimate the MCL electroacoustic parameters simultaneously. The proposed inverse method comprises the direct problem, adjoint problem, and sensitivity problem in collaboration with the nonlinear conjugate gradient method. Estimated values from the inverse method are validated experimentally by comparison with the measured SPL curve. The results presented in this paper not only improve the accuracy of the lumped parameter model but also provide valuable information on loudspeaker cone design.

**Keywords:**
Inverse problem,
cone effective area,
loudspeaker,
nonlinear conjugate gradient method.

##### 3628 Numerical Optimization of Trapezoidal Microchannel Heat Sinks

**Authors:**
Yue-Tzu Yang,
Shu-Ching Liao

**Abstract:**

This study presents the numerical simulation of three-dimensional incompressible steady and laminar fluid flow and conjugate heat transfer of a trapezoidal microchannel heat sink using water as a cooling fluid in a silicon substrate. Navier-Stokes equations with the conjugate energy equation are discretized by the finite-volume method. We perform numerical computations for a range of 50 ≤ Re ≤ 600, 0.05 W ≤ P ≤ 0.8 W, and 20 W/cm^2 ≤ q" ≤ 40 W/cm^2. The present study demonstrates the numerical optimization of a trapezoidal microchannel heat sink design using the response surface methodology (RSM) and the genetic algorithm method (GA). The results show that the average Nusselt number increases with an increase in the Reynolds number or pumping power, and the thermal resistance decreases as the pumping power increases. The thermal resistance of a trapezoidal microchannel is minimized for a constant heat flux and constant pumping power.

**Keywords:**
Microchannel heat sinks,
Conjugate heat transfer,
Optimization,
Genetic algorithm method.

##### 3627 GPS TEC Variation Affected by the Interhemispheric Conjugate Auroral Activity on 21 September 2009

**Authors:**
W. Suparta,
M. A. Mohd. Ali,
M. S. Jit Singh,
B. Yatim,
T. Motoba,
N. Sato,
A. Kadokura,
G. Bjornsson

**Abstract:**

**Keywords:**
Auroral activity,
GPS TEC,
Interhemispheric conjugate points,
Responses

##### 3626 A Novel Modified Adaptive Fuzzy Inference Engine and Its Application to Pattern Classification

**Authors:**
J. Hossen,
A. Rahman,
K. Samsudin,
F. Rokhani,
S. Sayeed,
R. Hasan

**Abstract:**

**Keywords:**
Apriori algorithm,
Fuzzy C-means,
MAFIE,
TSK

##### 3625 Dynamic Measurement System Modeling with Machine Learning Algorithms

**Authors:**
Changqiao Wu,
Guoqing Ding,
Xin Chen

**Abstract:**

**Keywords:**
Dynamic system modeling,
neural network,
normal equation,
second order gradient descent.

##### 3624 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model

**Authors:**
A. Elyousfi,
A. Tamtaoui,
E. Bouyakhf

**Abstract:**

**Keywords:**
Intra prediction,
H.264/AVC,
video coding,
encoder complexity.

##### 3623 Accurate Visualization of Graphs of Functions of Two Real Variables

**Authors:**
Zeitoun D. G.,
Thierry Dana-Picard

**Abstract:**

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for an automatic generation of the mesh that controls the behavior of the chosen function is presented. The special feature of this method is the ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The present method is formulated as a minimax problem and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.

**Keywords:**
Function singularities,
mesh generation,
point allocation,
visualization,
collocation least squares method,
Augmented Lagrangian method,
Uzawa's Algorithm,
Preconditioned Conjugate Gradient.

##### 3622 Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques

**Authors:**
Z. Zainuddin,
N. Mahat,
Y. Abu Hassan

**Abstract:**

Since the presentation of the backpropagation algorithm, a vast variety of improvements of the technique for training feed-forward neural networks have been proposed. This article focuses on two classes of acceleration techniques. One is known as Local Adaptive Techniques, which are based on weight-specific information only, such as the temporal behavior of the partial derivative of the current weight. The other, known as Dynamic Adaptation Methods, dynamically adapts the momentum factor, α, and learning rate, η, with respect to the iteration number or gradient. Some of the most popular learning algorithms are described. These techniques have been implemented and tested on several problems and measured in terms of gradient and error function evaluations and percentage of success. Numerical evidence shows that these techniques improve the convergence of the Backpropagation algorithm.
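A hedged sketch of the simplest ingredient named above, a momentum term added to gradient descent, is given below; the toy quadratic and the constant η and α are illustrative assumptions (weight-specific schemes such as delta-bar-delta instead adapt η per weight from the sign history of the partial derivatives):

```python
def momentum_descent(grad, w, eta=0.1, mom=0.9, iters=300):
    """Plain gradient descent with a constant momentum factor mom (alpha)."""
    v = [0.0] * len(w)
    for _ in range(iters):
        g = grad(w)
        v = [mom * vi - eta * gi for vi, gi in zip(v, g)]   # velocity update
        w = [wi + vi for wi, vi in zip(w, v)]               # weight update
    return w

# Hypothetical ill-scaled quadratic f(w) = w0^2 + 10*w1^2 (minimum at the origin)
grad = lambda w: [2.0 * w[0], 20.0 * w[1]]
w = momentum_descent(grad, [1.0, 1.0])
```

The momentum term damps the oscillation along the steep axis while accelerating progress along the shallow one, which is the effect the dynamic-adaptation methods tune automatically.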

**Keywords:**
Backpropagation,
Dynamic Adaptation Methods,
Local Adaptive Techniques,
Neural networks.

##### 3621 A Family of Minimal Residual Based Algorithm for Adaptive Filtering

**Authors:**
Noor Atinah Ahmad

**Abstract:**

**Keywords:**
Adaptive filtering,
adaptive least squares,
minimal residual method.