Search results for: gradient method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8209

8209 A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties

Authors: Ahmad Alhawarat, Mustafa Mamat, Mohd Rivaie, Ismail Mohd

Abstract:

The conjugate gradient method is widely used to solve large-scale unconstrained optimization problems because of its modest iteration count, low memory requirement, low CPU time, and strong convergence properties. In this paper we derive a new class of nonlinear conjugate gradient coefficients and prove their global convergence under exact line search. Numerical results show that the new coefficient βk performs well when compared with well-known formulas.
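
As a point of reference for the abstract above, the following is a minimal sketch of a nonlinear conjugate gradient iteration with exact line search on a convex quadratic, written in Python. The Fletcher-Reeves coefficient is used only as a stand-in, since the new βk is not given in the abstract; the test matrix and function names are illustrative.

    import numpy as np

    # Nonlinear conjugate gradient on f(x) = 0.5 x'Ax - b'x with exact line search.
    # The Fletcher-Reeves beta is a placeholder for the paper's new coefficient.
    def cg_fletcher_reeves(A, b, x0, tol=1e-8, max_iter=200):
        x = x0.copy()
        g = A @ x - b                          # gradient of the quadratic
        d = -g
        for k in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = -(g @ d) / (d @ (A @ d))   # exact line search step for a quadratic
            x = x + alpha * d
            g_new = A @ x - b
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x, k

    A = np.array([[4.0, 1.0], [1.0, 3.0]])     # illustrative SPD test problem
    b = np.array([1.0, 2.0])
    x_star, iters = cg_fletcher_reeves(A, b, np.zeros(2))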

Keywords: Conjugate gradient method, conjugate gradient coefficient, global convergence.

8208 Comparison of Three Versions of Conjugate Gradient Method in Predicting an Unknown Irregular Boundary Profile

Authors: V. Ghadamyari, F. Samadi, F. Kowsary

Abstract:

An inverse geometry problem is solved to predict an unknown irregular boundary profile. The aim is to minimize the objective function, defined as the difference between measured and computed temperatures, using three different versions of the Conjugate Gradient Method. The gradient of the objective function, which the method requires, is obtained by solving the adjoint equation. The abilities of the three versions in predicting the boundary profile are compared using a numerical algorithm based on the method. The predicted shapes show that, owing to its convergence rate and the accuracy of its predicted values, the Powell-Beale version is more effective than the Fletcher-Reeves and Polak-Ribiere versions.

Keywords: Boundary elements, Conjugate Gradient Method, Inverse Geometry Problem, Sensitivity equation.

8207 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C

Authors: Minghui Wang, Luping Xu, Juntao Zhang

Abstract:

Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the assumption of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.

Keywords: Iterative method, symmetric arrowhead matrix, conjugate gradient algorithm.

8206 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag pattern of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions, and their results have sparked a great deal of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by these works, we modify the step size of the steepest descent method. We then compare the modified method against the Barzilai-Borwein method, the alternate minimization gradient method, and Yuan's method on quadratic test functions, in terms of the number of iterations and the running time. On average, the steepest descent method with the new step sizes gives good results for small dimensions and is competitive with the Barzilai-Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
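
For comparison with the abstract above, a minimal sketch of the Barzilai-Borwein baseline (steepest descent with the BB1 step size) on a convex quadratic is shown below; the new step sizes proposed in the paper are not reproduced, and the initial step size is an arbitrary conservative choice.

    import numpy as np

    # Steepest descent with the Barzilai-Borwein (BB1) step size on
    # f(x) = 0.5 x'Ax - b'x; s and y are the differences of iterates and gradients.
    def steepest_descent_bb(A, b, x0, tol=1e-8, max_iter=500):
        x = x0.copy()
        g = A @ x - b
        alpha = 1.0 / np.linalg.norm(A)        # conservative initial step
        for k in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / (s @ y)          # BB1 step size for the next iteration
            x, g = x_new, g_new
        return x, k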

Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.

8205 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing

Abstract:

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modify the standard back propagation algorithm by introducing a gain variation term in the activation function; (2) calculate the gradient of the error with respect to the weights and gain values; and (3) determine a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
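
The gain variation idea can be made concrete with a logistic activation carrying an explicit gain (slope) parameter. The short sketch below shows such an activation and the two derivatives an adaptive-gain scheme needs; it is only an illustration of the ingredient, not the authors' update rule, and the function names are hypothetical.

    import numpy as np

    # Logistic activation with an explicit gain c: f(x) = 1 / (1 + exp(-c * x)).
    def sigmoid_with_gain(x, c):
        return 1.0 / (1.0 + np.exp(-c * x))

    # Derivatives with respect to the pre-activation input (for backpropagating the
    # error) and with respect to the gain itself (for adapting the gain).
    def sigmoid_gain_grads(x, c):
        s = sigmoid_with_gain(x, c)
        return c * s * (1.0 - s), x * s * (1.0 - s)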

Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.

8204 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing

Abstract:

The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm to yield a new fast training algorithm for multilayer perceptron (MLP) networks (CGFR/AG). The approach consists of three steps: (1) modify the standard back propagation algorithm by introducing a gain variation term in the activation function; (2) calculate the gradient of the error with respect to the weights and gain values; and (3) determine the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.

Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.

8203 Accurate Optical Flow Based on Spatiotemporal Gradient Constancy Assumption

Authors: Adam Rabcewicz

Abstract:

Variational methods for optical flow estimation are known for their excellent performance. The method proposed by Brox et al. [5] exemplifies the strength of that framework; it combines several concepts into a single energy functional that is then minimized according to a clear numerical procedure. In this paper we propose a modification of that algorithm starting from the spatiotemporal gradient constancy assumption. The numerical scheme allows us to establish the connection between our model and the CLG(H) method introduced in [18]. Experimental evaluation carried out on synthetic sequences shows the significant superiority of the spatial variant of the proposed method. A comparison between the methods on a real-world sequence is also included.

Keywords: optical flow, variational methods, gradient constancy assumption.

8202 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and a single output can be modeled with a shallow neural network, and gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood estimation under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient descent with momentum contribute to faster convergence and enhance the model's ability. Finally, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable approach for linear dynamic models.
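
To make the two fitting routes mentioned above concrete, here is a minimal sketch of a linear-in-parameters model y ≈ Xw fitted by the normal equation and by gradient descent with momentum; the regressor matrix X (for example, lagged inputs of a SISO system), the function names, and the hyper-parameters are assumptions for illustration.

    import numpy as np

    # Closed-form least-squares fit via the normal equation: w = (X'X)^{-1} X'y.
    def fit_normal_equation(X, y):
        return np.linalg.solve(X.T @ X, X.T @ y)

    # Gradient descent on the mean squared error with classical momentum.
    def fit_gd_momentum(X, y, lr=1e-3, mu=0.9, epochs=500):
        w = np.zeros(X.shape[1])
        v = np.zeros_like(w)
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean((Xw - y)^2)
            v = mu * v - lr * grad
            w = w + v
        return w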

Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.

8201 Traffic Density Measurement by Automatic Detection of Vehicles Using Gradient Vectors from Aerial Images

Authors: Saman Ghaffarian, Ilgın Gökasar

Abstract:

This paper presents a new automatic vehicle detection method for very high resolution aerial images, used to measure traffic density. The proposed method starts by extracting road regions from the image using road vector data. The road image is then divided into equal sections, taking the resolution of the images into account. Gradient vectors of the road image are computed from the edge map of the corresponding image. The gradient vectors along the boundary of each section are divided into groups where the gradient vectors significantly change direction. Finally, the number of vehicles in each section is obtained by calculating the standard deviation of the gradient vectors in each group and accepting as a vehicle any group whose standard deviation exceeds a predefined threshold. The proposed method was tested on four very high resolution aerial images acquired over Istanbul, Turkey, which contain roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, with an overall F1 score of 86%.
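
A much-simplified sketch of the final decision step described above is given below: a road strip is split into equal sections, and a section is flagged when the spread of its gradient values exceeds a threshold. The grouping of gradient vectors by direction change is omitted, and the assumption that the road strip runs horizontally across the image, as well as the function name and threshold, are illustrative.

    import numpy as np

    # Flag road sections whose gradient statistics suggest the presence of a vehicle.
    def flag_vehicle_sections(road_gray, n_sections, threshold):
        gy, gx = np.gradient(road_gray.astype(float))              # image gradients
        magnitude = np.hypot(gx, gy)
        sections = np.array_split(magnitude, n_sections, axis=1)   # split along the road
        return [sec.std() > threshold for sec in sections]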

Keywords: Aerial images, intelligent transportation systems, traffic density measurement, vehicle detection.

8200 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity field is of great significance in geoscience, the national economy, and national security, and gravitational gradient measurement has been extensively studied because it offers higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. This paper therefore starts by analyzing the working principle of the gravity gradient sensor using Newton's laws, and then considers the relative motion between inertial and non-inertial systems to build an adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.

Keywords: Gravity gradient, accelerometer, gravity gradient sensor, single-axis rotation modulation.

8199 Simulating Gradient Contour and Mesh of a Scalar Field

Authors: Usman Ali Khan, Bismah Tariq, Khalida Raza, Saima Malik, Aoun Muhammad

Abstract:

This research paper is based on the simulation of the gradient of mathematical functions and scalar fields using MATLAB. Scalar fields, their gradients, contours, and meshes/surfaces are simulated using the relevant MATLAB tools and commands for convenient presentation and understanding. Different mathematical functions and scalar fields are examined by taking their gradient and visualizing the results in 3D with different color shadings. Presented in this way, the outputs help in analyzing and understanding the gradient better than a purely theoretical study.
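
For readers without MATLAB, the same exercise can be reproduced in Python with NumPy and Matplotlib; the scalar field below is an arbitrary example, not one taken from the paper.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: needed only on older Matplotlib

    x, y = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
    f = np.exp(-(x**2 + y**2))      # example scalar field
    fy, fx = np.gradient(f)         # numerical gradient (row direction first)

    fig = plt.figure(figsize=(10, 4))
    ax1 = fig.add_subplot(1, 2, 1)
    ax1.contour(x, y, f, 15)        # contours of the scalar field
    ax1.quiver(x, y, fx, fy)        # gradient vector field
    ax2 = fig.add_subplot(1, 2, 2, projection='3d')
    ax2.plot_surface(x, y, f, cmap='viridis')  # mesh/surface view
    plt.show()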

Keywords: MATLAB, Gradient, Contour, Scalar Field, Mesh

8198 Flexural Strength Design of RC Beams with Consideration of Strain Gradient Effect

Authors: Mantai Chen, Johnny Ching Ming Ho

Abstract:

The stress-strain relationship of concrete under flexure is one of the essential parameters in assessing the ultimate flexural strength capacity of RC beams. Currently, the concrete stress-strain curve in flexure is obtained by incorporating a constant scale-down factor of 0.85 into the uniaxial stress-strain curve. However, it has been shown that a strain gradient improves the maximum concrete stress under flexure and that the concrete stress-strain curve is strain-gradient dependent. Based on the strain-gradient-dependent concrete stress-strain curve, the investigation of the combined effects of strain gradient and concrete strength on the flexural strength of RC beams is extended by theoretical analysis to high-strength concrete up to 100 MPa. As an extension and application of the authors' previous study, a new flexural strength design method incorporating the combined effects of strain gradient and concrete strength is developed. A set of equivalent rectangular concrete stress block parameters is proposed and applied to produce a series of design charts showing that the flexural strength of RC beams is improved when the strain gradient effect is considered.

Keywords: Beams, Equivalent concrete stress block, Flexural strength, Strain gradient.

8197 Regularization of the Trajectories of Dynamical Systems by Adjusting Parameters

Authors: Helle Hein, Ülo Lepik

Abstract:

A gradient learning method to regulate the trajectories of some nonlinear chaotic systems is proposed. The method is motivated by the gradient descent learning algorithms for neural networks. It is based on two systems: a dynamic optimization system and a system for finding sensitivities. Numerical results for several examples are presented, which convincingly illustrate the efficiency of the method.

Keywords: Chaos, Dynamical Systems, Learning, Neural Networks

8196 A Simple Heat and Mass Transfer Model for Salt Gradient Solar Ponds

Authors: Safwan Kanan, Jonathan Dewsbury, Gregory Lane-Serff

Abstract:

A salinity gradient solar pond is a free-energy-source system for collecting, converting, and storing solar energy as heat. In this paper, the principles of the solar pond are explained. A mathematical model is developed to describe and simulate the heat and mass transfer behaviour of a salinity gradient solar pond. MATLAB codes are programmed to solve the heat and mass transfer equations with a one-dimensional finite difference method. Temperature profiles and concentration distributions are calculated. The numerical results are validated against experimental data and are found to be in good agreement.
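
As background for the abstract above, a minimal sketch of the kind of one-dimensional explicit finite-difference update used for the heat (diffusion) equation dT/dt = a d2T/dz2 is shown below, in Python rather than MATLAB; the salinity equation is handled analogously with its own diffusion coefficient, and the boundary treatment here is a plain Dirichlet assumption.

    import numpy as np

    # One explicit FTCS sweep per time step; r = a*dt/dz^2 must stay <= 0.5 for stability.
    def diffuse_1d(T, a, dz, dt, n_steps):
        T = T.copy()
        r = a * dt / dz**2
        for _ in range(n_steps):
            T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        return T  # end values are kept fixed (Dirichlet boundaries)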

Keywords: Finite Difference method, Salt-gradient solar-pond, Solar energy, Transient heat and mass transfer.

8195 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations

Authors: H. D. Ibrahim, H. C. Chinwenyi, H. N. Ude

Abstract:

In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric and positive definite matrix. We carried out the iterative steps of the three methods described in this paper (Gauss-Seidel, Jacobi, and Conjugate Gradient) and obtained solutions for a typical 3 x 3 symmetric and positive definite matrix. From the results obtained, we found that the Conjugate Gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many more iterations and much more time while only tending toward the exact solution.
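
A compact sketch of the three iterations compared above is given below for Ax = b with A symmetric positive definite; the 3x3 matrix is an illustrative example, not the one used in the paper.

    import numpy as np

    # Jacobi: x_{k+1} = D^{-1} (b - (A - D) x_k), with D the diagonal of A.
    def jacobi(A, b, x, iters):
        D = np.diag(A)
        for _ in range(iters):
            x = (b - (A @ x - D * x)) / D
        return x

    # Gauss-Seidel: sweep through the unknowns, reusing already-updated values.
    def gauss_seidel(A, b, x, iters):
        x = x.copy()
        n = len(b)
        for _ in range(iters):
            for i in range(n):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    # Conjugate gradient: reaches the exact solution of an n x n SPD system
    # in at most n steps in exact arithmetic.
    def conjugate_gradient(A, b, x, iters):
        r = b - A @ x
        p = r.copy()
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x

    A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # illustrative SPD matrix
    b = np.array([1.0, 2.0, 3.0])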

Keywords: conjugate gradient, linear equations, symmetric and positive definite matrix, Gauss-Seidel, Jacobi, algorithm

8194 Research on the Correlation of the Fluctuating Density Gradient of the Compressible Flows

Authors: Yasuo Obikane

Abstract:

This work studies the role of the fluctuating density gradient in compressible flows for computational fluid dynamics (CFD). A new anisotropy tensor involving the fluctuating density gradient is introduced and used in an invariant modeling technique to model the turbulent density gradient correlation equation derived from the continuity equation. The modeled equation is decomposed into three groups: a group proportional to the mean velocity, one proportional to the mean strain rate, and one proportional to the mean density. The characteristics of the correlation in a wake are extracted from the results of a two-dimensional direct simulation and show a strong correlation with the vorticity in the wake near the body. It can therefore be concluded that the correlation of the density gradient is a significant parameter for describing the rapid generation of turbulent properties in compressible flows.

Keywords: Turbulence modeling, density gradient correlation, compressible flows.

8193 Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a technique for improving convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^-1 Ax = M^-1 b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). In the simulations, the behavior of the PCG methods proved to be robust against phenomena such as oscillations due to large step sizes.
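
A generic preconditioned conjugate gradient sketch for the modified problem M^-1 Ax = M^-1 b is given below, using the simple Jacobi (diagonal) preconditioner M = diag(A) for illustration; the PCGF, PCGP and PCGB training variants named above differ in how the search direction is updated and restarted, which is not reproduced here.

    import numpy as np

    # Preconditioned conjugate gradient for an SPD matrix A with M = diag(A).
    def pcg_jacobi(A, b, x0, tol=1e-10, max_iter=200):
        x = x0.copy()
        r = b - A @ x
        M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner applied elementwise
        z = M_inv * r
        p = z.copy()
        for k in range(max_iter):
            if np.linalg.norm(r) < tol:
                break
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x, k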

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

8192 Segmentation of Noisy Digital Images with Stochastic Gradient Kernel

Authors: Abhishek Neogi, Jayesh Verma, Pinaki Pratim Acharjya

Abstract:

Image segmentation and edge detection are fundamental tasks in image processing. For noisy images, edge detection is much less effective when conventional spatial filters such as Sobel, Prewitt, LoG, or Laplacian are used. To overcome this problem, we propose the use of a stochastic gradient mask instead of conventional spatial filters for generating gradient images. The present study shows that the resultant images obtained by applying stochastic gradient masks appear much clearer and sharper as far as edge detection is concerned.

Keywords: Image segmentation, edge detection, noisy images, spatial filters, stochastic gradient kernel.

8191 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning

Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were used for the pulping of sugar maple problem. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation. Preconditioning is a technique for improving convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^-1 Ax = M^-1 b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on Preconditioned Conjugate Gradient-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). In the simulations, the behavior of the PCG methods proved to be robust against phenomena such as oscillations due to large step sizes.

Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.

8190 Robot Movement Using the Trust Region Policy Optimization

Authors: Romisaa Ali

Abstract:

The Policy Gradient approach is a subset of Deep Reinforcement Learning (DRL) that combines Deep Neural Networks (DNN) with Reinforcement Learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interacting with its environment. Unlike earlier policy gradient algorithms, which could not handle the variance and bias errors introduced by the DNN model through over- or underestimation, this algorithm is capable of handling both types of error. This article discusses the state-of-the-art (SOTA) policy gradient technique, Trust Region Policy Optimization (TRPO), by applying it in various environments and comparing it with another policy gradient method, Proximal Policy Optimization (PPO), to explain their robust optimization. The SOTA method is used to gather experience data during the various training phases after observing the impact of hyper-parameters on neural network performance.
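
For context, the clipped surrogate objective that defines PPO, the method TRPO is compared against here, is short enough to write out; the sketch below takes a batch of probability ratios r_t = pi_new(a|s) / pi_old(a|s) and advantage estimates A_t, and the clipping constant eps = 0.2 is the commonly used default rather than a value from this article. TRPO instead maximizes the unclipped surrogate subject to a KL-divergence trust-region constraint.

    import numpy as np

    # PPO clipped surrogate objective (to be maximized; implementations usually
    # minimize its negative). `ratios` and `advantages` are 1-D arrays over a batch.
    def ppo_clipped_objective(ratios, advantages, eps=0.2):
        unclipped = ratios * advantages
        clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
        return np.mean(np.minimum(unclipped, clipped))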

Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.

8189 Calibration of the Radical Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so structural errors in the sensor have a great impact on the measurement results. In order not to compromise the target measurement accuracy, limit errors must be respected when installing the accelerometer. In this paper, based on the established measuring principle model, the radial installation limit error is calibrated; it is taken as an example to provide a method for calculating the other installation limit errors while ensuring the accuracy of the measurement result. This method provides an approach for deriving the limit errors of the geometric structure of the sensor, laying the foundation for mechanical precision design and physical design.

Keywords: Gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation.

8188 Moving Object Detection Using Histogram of Uniformly Oriented Gradient

Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang

Abstract:

Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS), where pedestrians and scooters are two important classes of moving objects. In real-world systems, MOD faces two important challenges: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily capture object edges while remaining largely invariant to changes in illumination and shadowing. However, to reduce the execution time in real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method; with SVM classifiers, the real test results show that the proposed HUG features achieve better classification performance than HOG features.
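
As a reminder of the building block the abstract starts from, a sketch of a standard HOG cell histogram is shown below: orientations are binned over 0-180 degrees with votes weighted by gradient magnitude. The uniformly-oriented (HUG) variant proposed in the paper is not reproduced; the bin count and function name are illustrative.

    import numpy as np

    # Orientation histogram for one HOG cell, magnitude-weighted, unsigned gradients.
    def hog_cell_histogram(cell_gray, n_bins=9):
        gy, gx = np.gradient(cell_gray.astype(float))
        magnitude = np.hypot(gx, gy)
        orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
        hist, _ = np.histogram(orientation, bins=n_bins, range=(0.0, 180.0),
                               weights=magnitude)
        return hist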

Keywords: Moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine.

8187 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case

Authors: A. Selmi, A. Bisharat

Abstract:

Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length-scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part containing the length-scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues in classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are obtained explicitly. Unlike the classical Eshelby tensor, the components of the new Eshelby tensor vary with position and with the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.

Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.

8186 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model

Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf

Abstract:

The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks and 4 modes for 16x16 macroblocks and 8x8 chroma blocks, respectively. This means that, for one macroblock, 736 different RDO calculations have to be performed before the best mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but the computational complexity increases significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used for homogeneous areas and a quadratic prediction function for non-homogeneous areas. Based on the correlation between homogeneity and block size, smaller blocks are predicted by both gradient and quadratic prediction, while bigger blocks are predicted by gradient prediction alone. Experimental results show that the proposed method reduces complexity by up to 76.07% while maintaining similar PSNR quality, with an average bit rate increase of about 1.94%.
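
One consistent way to arrive at the figure of 736 rate-distortion checks per macroblock, using only the mode counts quoted above, is to combine each of the 4 chroma modes with every luma choice: 4 x (16 x 9 modes for the 4x4 luma blocks + 4 x 9 modes for the 8x8 luma blocks + 4 modes for the 16x16 block) = 4 x 184 = 736.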

Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.

8185 Bayesian Inference for Phase Unwrapping Using Conjugate Gradient Method in One and Two Dimensions

Authors: Yohei Saika, Hiroki Sakaematsu, Shota Akiyama

Abstract:

We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models that approximate wave-fronts in remote sensing based on SAR interferometry. Using Monte Carlo simulation on a set of wave-fronts generated by an assumed true prior, we found that the maximum entropy method achieves optimal performance around the Bayes-optimal conditions when the model of the true prior and a likelihood representing the optical measurement of the interferometer are used. We also found that MAP estimation, regarded as a deterministic limit of maximum entropy, achieves almost the same performance as the Bayes-optimal solution for this set of wave-fronts. We then show that MAP estimation carries out phase unwrapping perfectly without using prior information, and that it achieves accurate phase unwrapping with the conjugate gradient (CG) method when the model of the true prior is chosen appropriately.

Keywords: Bayesian inference using maximum entropy, MAP estimation using conjugate gradient method, SAR interferometry.

8184 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body, and iris recognition has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, together with a simple, efficient, and fast method for extracting a set of discriminatory features using first-order gradient operators applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations in contrast or brightness that may occur in iris image samples; such variations mostly arise from lighting differences and camera changes. First, the iris region is located and then remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. In order to cope with feature localization (variation), the rectangular iris image is partitioned into N overlapping sub-images (blocks), and from each block a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions, and the low-order norms of the gradient components are used to build the feature vector. A Euclidean-distance classifier is used as the matching metric to determine the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached 99.92%.
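
A simplified sketch of the feature idea described above is given below: for each block of the normalized 360x60 iris strip, the mean absolute value (an L1-type low-order norm) of the horizontal, vertical and diagonal gradient components is collected into a feature vector, and two irises are compared by Euclidean distance. Block overlap, eyelash/eyelid masking and the exact norms and directions used in the paper are omitted; the block size and function names are illustrative.

    import numpy as np

    # Block-wise averages of low-order norms of directional gradient components.
    def block_gradient_features(iris_strip, block_shape=(20, 30)):
        h, w = block_shape
        feats = []
        for r in range(0, iris_strip.shape[0] - h + 1, h):
            for c in range(0, iris_strip.shape[1] - w + 1, w):
                block = iris_strip[r:r + h, c:c + w].astype(float)
                gh = np.diff(block, axis=1)            # horizontal gradient
                gv = np.diff(block, axis=0)            # vertical gradient
                gd = block[1:, 1:] - block[:-1, :-1]   # diagonal gradient
                feats += [np.mean(np.abs(gh)), np.mean(np.abs(gv)), np.mean(np.abs(gd))]
        return np.array(feats)

    # Euclidean distance between two feature vectors as the matching score.
    def match_score(f1, f2):
        return np.linalg.norm(f1 - f2)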

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

8183 Simulation of Effect of Current Stressing on Reliability of Solder Joints with Cu-Pillar Bumps

Authors: Y. Li, Q. S. Zhang, H. Z. Huang, B. Y. Wu

Abstract:

The mechanisms behind electromigration and thermomigration failure in flip-chip solder joints with Cu-pillar bumps were investigated in this paper using the finite element method. Hot spots and current crowding occur in the upper corner of the copper column instead of in the solder, as they do in common solder balls. The simulation results show a noticeable change in the thermal gradient, which might greatly affect the reliability of solder joints with Cu-pillar bumps under current stressing. When the average applied current density in the solder is increased from 1x10^4 A/cm2 to 3x10^4 A/cm2, the thermal gradient increases from 74 K/cm to 901 K/cm at an ambient temperature of 25°C. The force from a thermal gradient of 901 K/cm can nearly induce thermomigration by itself, and the thermal gradient grows as the applied current increases. It is proposed that thermomigration is likely to cause a serious reliability issue for Cu-column-based interconnects.

Keywords: Simulation, Cu-pillar bumps, Electromigration, Thermomigration.

8182 Learning Flexible Neural Networks for Pattern Recognition

Authors: A. Mirzaaghazadeh, H. Motameni, M. Karshenas, H. Nematzadeh

Abstract:

Learning the gradient (slope) of a neuron's activation function, in the same way as the weights of the links, gives the network a new property: flexibility. In flexible neural networks, because the operation of the neurons is supervised and controlled, the burden of learning is not carried by the link weights alone; in each learning period the gradients of the neurons' activation functions also cooperate toward the learning goal, so the number of learning cycles is decreased considerably. Furthermore, learning the neuron parameters makes the neurons robust against changes in their inputs and against the factors that cause such changes. Likewise, the initial selection of the weights, the type of activation function, the initial gradient of the activation function, and the fixed amount multiplied by the gradient of the error to compute the changes of the weights and of the activation-function gradient all have a direct effect on the convergence of the network during learning.

Keywords: Back propagation, Flexible, Gradient, Learning, Neural network, Pattern recognition.

8181 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots

Authors: Meng Wu

Abstract:

Motion planning is a common task that robots are required to fulfill. A strategy combining Ant Colony Optimization (ACO) and a gravity gradient inversion algorithm is proposed for the motion planning of mobile robots. In this paper, in order to realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. The obstacles around the mobile robot cause gravity gradient anomalies, and a gradiometer is installed on the mobile robot to detect them. After the anomalies are obtained, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles, which are then used as the cost function in the ACO algorithm to realize motion planning. The proposed strategy is validated by simulation and experimental results.

Keywords: Motion planning, gravity gradient inversion algorithm, ant colony optimization.

8180 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because the system is diffraction-limited, pixel values may change slowly near the edges of image targets and also vary with location within the same target. Using traditional digital image segmentation methods to segment such electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. First, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged, and extracted. The overlap rate of the extracted image blocks is then checked before a segmentation region with a single complete target is determined. Finally, the gradient edge of the extracted targets is recovered and reconstructed with a dictionary-learning algorithm, and the final segmentation results, which are very close to the gradient target in the original image, are obtained. Both the experimental and the simulated results show that the segmentation results are very accurate: the Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.

Keywords: Gradient image, segmentation and extract, mean-shift algorithm, dictionary learning.
