Search results for: Gradient descent method
8265 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because its steps follow a zigzag pattern. Barzilai and Borwein modified the algorithm so that it performs well on problems with large dimensions. The Barzilai and Borwein results have sparked a great deal of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by these previous works, we modified the step size of the steepest descent method. We then compare the modified method against the Barzilai and Borwein method, the alternate minimization gradient method and Yuan's method on quadratic test cases in terms of the number of iterations and the running time. On average, the steepest descent method with the new step sizes gives good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially on cases with large dimensions.
Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.
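For readers wanting a concrete baseline for the step-size comparison above, here is a minimal Python sketch of steepest descent on a quadratic with the classical Barzilai-Borwein (BB1) step size; the paper's new step sizes are not reproduced, and the test problem, initial step and tolerance are illustrative assumptions.

```python
import numpy as np

def steepest_descent_bb(A, b, x0, tol=1e-8, max_iter=1000):
    """Steepest descent for f(x) = 0.5*x^T A x - b^T x with a
    Barzilai-Borwein step size (illustrative sketch)."""
    x = x0.copy()
    g = A @ x - b                       # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A)     # conservative first step
    for k in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)       # BB1 step: (s^T s) / (s^T y)
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x, k + 1

# Tiny illustrative problem
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = steepest_descent_bb(A, b, np.zeros(2))
```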
8264 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear system with a single input and single output can be modeled with a shallow neural network. Gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model ability. Lastly, experimental results demonstrated the effectiveness of the second-order gradient descent algorithm and indicated that optimization with the normal equation is the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.
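A minimal sketch contrasting the two fitting routes the abstract mentions, the normal equation and plain gradient descent, on a hypothetical linear single-input single-output model; the lagged-input regressor construction, learning rate and epoch count are assumptions, not the paper's setup.

```python
import numpy as np

def fit_normal_equation(X, y):
    """Least-squares coefficients via the normal equation
    theta = (X^T X)^{-1} X^T y (solved, not inverted, for stability)."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def fit_gradient_descent(X, y, lr=0.01, epochs=500):
    """The same objective minimized by plain batch gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ theta - y) / len(y)   # gradient of 0.5*MSE
        theta -= lr * grad
    return theta

# Hypothetical SISO data: output depends on current and previous input
u = np.random.randn(200)
y = 0.7 * u + 0.2 * np.roll(u, 1)
X = np.column_stack([u, np.roll(u, 1)])
theta_ne = fit_normal_equation(X[1:], y[1:])    # drop the wrapped sample
theta_gd = fit_gradient_descent(X[1:], y[1:])
```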
8263 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back-propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLP), called CGFR/AG. The approach consists of three steps: (1) modifying the standard back-propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient descent of the error with respect to the weights and gain values, and (3) determining the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back-propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
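The search-direction update described in step (3) is the core of any conjugate gradient scheme. The following sketch shows the generic skeleton with the Fletcher-Reeves coefficient and a simple Armijo backtracking line search; the paper's adaptive-gain modification of the gradient is not reproduced here.

```python
import numpy as np

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    """Simple Armijo backtracking line search along direction d."""
    for _ in range(60):                  # cap the shrinking steps
        if f(x + alpha * d) <= f(x) + c * alpha * (g @ d):
            break
        alpha *= rho
    return alpha

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Each new search direction combines the current gradient with the
    previous direction: d <- -g_new + beta * d."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        x = x + backtracking(f, x, d, g) * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                  # safeguard: restart if not descent
            d = -g_new
        g = g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Example on a small quadratic
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_min = nonlinear_cg(f, lambda x: A @ x - b, np.zeros(2))
```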
8262 Face Reconstruction and Camera Pose Using Multi-dimensional Descent
Authors: Varin Chouvatut, Suthep Madarasmi, Mihran Tuceryan
Abstract:
This paper proposes a novel, robust, and simple method for obtaining a human 3D face model and camera pose (position and orientation) from a video sequence. Given a video sequence of a face recorded with an off-the-shelf digital camera, feature points used to define facial parts are tracked using the Active Appearance Model (AAM). Then, the face's 3D structure and the camera pose of each video frame can be calculated simultaneously from the obtained point correspondences. The proposed method is primarily based on a combination of Gradient Descent and Powell's Multidimensional Minimization. With this method, temporarily occluded points, including cases of self-occlusion, do not pose a problem: as long as the point correspondences displayed in the video sequence have enough parallax, the missing points can still be reconstructed.
Keywords: Camera Pose, Face Reconstruction, Gradient Descent, Powell's Multidimensional Minimization.
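Powell's multidimensional minimization, one half of the combination above, is available off the shelf. A minimal SciPy illustration follows; the objective here is a hypothetical stand-in, not the paper's actual reprojection-error cost.

```python
import numpy as np
from scipy.optimize import minimize

def reprojection_error(params):
    """Hypothetical stand-in cost: in the paper this would measure the
    discrepancy between tracked 2D feature points and the projection of
    the current 3D face model under the current camera pose."""
    return np.sum((params - np.array([0.5, -0.2, 1.0])) ** 2)

x0 = np.zeros(3)                       # initial pose/structure guess
res = minimize(reprojection_error, x0, method="Powell")
print(res.x, res.fun)
```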
8261 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent
Authors: Zhifeng Kong
Abstract:
Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic expectation that a model with excessive parameters over-fits, and they have achieved empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence behavior is still far from thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to understand the convergence of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate our theory.
Keywords: Over-parameterization, Rectified Linear Units (ReLU), convergence, gradient descent, neural networks.
8260 Affine Projection Adaptive Filter with Variable Regularization
Authors: Young-Seok Choi
Abstract:
We propose two affine projection algorithms (APA) with a variable regularization parameter. The proposed algorithms dynamically update the regularization parameter, which is fixed in the conventional regularized APA (R-APA), using a gradient descent based approach. By introducing the normalized gradient, the proposed algorithms give rise to an efficient and robust update scheme for the regularization parameter. Through experiments we demonstrate that the proposed algorithms outperform the conventional R-APA in terms of convergence rate and misadjustment error.
Keywords: Affine projection, regularization, gradient descent, system identification.
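For context, one regularized APA filter update has the following shape. In the paper the regularization parameter delta is itself adapted by gradient descent; this sketch holds it fixed for brevity, and the step size and delta values are assumptions.

```python
import numpy as np

def r_apa_update(w, U, d, mu=0.5, delta=1e-2):
    """One regularized affine projection (R-APA) update.
    U: (N, K) matrix whose columns are the K most recent input vectors,
    d: the K corresponding desired samples.
    w <- w + mu * U (U^T U + delta*I)^{-1} e."""
    e = d - U.T @ w                        # a-priori error vector
    w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(len(d)), e)
    return w, e
```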
8259 Regularization of the Trajectories of Dynamical Systems by Adjusting Parameters
Authors: Helle Hein, Ülo Lepik
Abstract:
A gradient learning method for regulating the trajectories of some nonlinear chaotic systems is proposed. The method is motivated by the gradient descent learning algorithms for neural networks and is based on two systems: a dynamic optimization system and a system for finding sensitivities. Numerical results for several examples are presented, which convincingly illustrate the efficiency of the method.
Keywords: Chaos, dynamical systems, learning, neural networks.
8258 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks
Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing
Abstract:
The conjugate gradient optimization algorithm is combined with the modified back-propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction as described in the following steps: (1) modifying the standard back-propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient descent of the error with respect to the weights and gain values, and (3) determining a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.
8257 New Adaptive Linear Discriminant Analysis for Face Recognition with SVM
Authors: Mehdi Ghayoumi
Abstract:
We have applied a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. Both the gradient descent method and the new algorithm were implemented in software and evaluated on the Yale Face Database B. The eigenfaces of these approaches were used to train a KNN classifier. The recognition rate of the new algorithm is compared with that of gradient descent.
Keywords: LDA, adaptive, SVM, face recognition.
8256 A Descent-projection Method for Solving Monotone Structured Variational Inequalities
Authors: Min Sun, Zhenyu Liu
Abstract:
In this paper, a new descent-projection method with a new search direction for monotone structured variational inequalities is proposed. The method is simple, requiring only projections and some function evaluations, so its computational load is very small. Under mild conditions on the problem's data, the method is proved to converge globally. Some preliminary computational results are also reported to illustrate the efficiency of the method.
Keywords: Variational inequalities, monotone function, global convergence.
8255 A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties
Authors: Ahmad Alhawarat, Mustafa Mamat, Mohd Rivaie, Ismail Mohd
Abstract:
The conjugate gradient method has been used extensively to solve large-scale unconstrained optimization problems because of its favorable iteration count, memory requirements, CPU time, and convergence properties. In this paper we derive a new class of nonlinear conjugate gradient coefficients with global convergence properties, proved under exact line search. The numerical results for our new βK are good when compared with well-known formulas.
Keywords: Conjugate gradient method, conjugate gradient coefficient, global convergence.
8254 Comparison of Three Versions of Conjugate Gradient Method in Predicting an Unknown Irregular Boundary Profile
Authors: V. Ghadamyari, F. Samadi, F. Kowsary
Abstract:
An inverse geometry problem is solved to predict an unknown irregular boundary profile. The aim is to minimize the objective function, defined as the difference between the real and computed temperatures, using three different versions of the Conjugate Gradient Method. The gradient of the objective function, which this method requires, is obtained by solving the adjoint equation. The abilities of the three versions of the Conjugate Gradient Method to predict the boundary profile are compared using a numerical algorithm based on the method. The predicted shapes show that, owing to its convergence rate and the accuracy of its predicted values, the Powell-Beale version of the method is more effective than the Fletcher-Reeves and Polak-Ribiere versions.
Keywords: Boundary elements, Conjugate Gradient Method, inverse geometry problem, sensitivity equation.
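The three versions compared above differ mainly in how the conjugation coefficient beta is computed from successive gradients and the previous direction. A sketch of the three formulas follows; the Powell-Beale entry is a simplified restart variant (the full method also carries an extra restart-direction term).

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old, d_old):
    """Fletcher-Reeves: ratio of successive gradient norms."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old, d_old):
    """Polak-Ribiere: uses the gradient change y = g_new - g_old."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def beta_powell_beale(g_new, g_old, d_old):
    """Simplified Powell-Beale (Hestenes-Stiefel form with restart):
    negative values trigger a steepest-descent restart, handled here
    by clamping to zero."""
    y = g_new - g_old
    return max((g_new @ y) / (d_old @ y), 0.0)
```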
8253 On the Sphere Method of Linear Programming Using Multiple Interior Points Approach
Authors: Job H. Domingo, Carolina Bancayrin-Baguio
Abstract:
The Sphere Method is a flexible interior point algorithm for linear programming problems, developed mainly by Professor Katta G. Murty. It consists of two steps: the centering step and the descent step. The centering step is the most expensive part of the algorithm. For this centering step we propose some improvements, such as introducing two or more initial feasible solutions and solving for the more favorable new solution by objective value while working with rigorous updates of the feasible region, along with some ideas integrated into the descent step. An illustration is given confirming the advantage of the proposed procedure.
Keywords: Interior point, linear programming, sphere method, initial feasible solution, feasible region, centering and descent steps, optimal solution.
8252 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study presents the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting model (IDW). ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW. The performance of the proposed model is compared against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. Our results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting lower mean square error in regression and higher F1 scores in classification.
Keywords: Deep Learning, Multi-Layer Neural Networks, Gradient Descent, Spatial Interpolation, Inverse Distance Weighting.
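As a reference point for ML-IDW, here is a sketch of the classical IDW predictor it generalizes; the power parameter shown is the usual default, an assumption rather than the paper's setting.

```python
import numpy as np

def idw_interpolate(x_query, X_train, y_train, power=2.0):
    """Classical inverse distance weighting: the prediction is a
    weighted average of training values, with weights 1/d^power.
    ML-IDW replaces these fixed isotropic weights with learnable,
    possibly anisotropic functions trained by gradient descent."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    if np.any(d == 0):                  # exact hit on a training point
        return y_train[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * y_train) / np.sum(w)
```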
8251 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C
Authors: Minghui Wang, Luping Xu, Juntao Zhang
Abstract:
Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the premise of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.
Keywords: Iterative method, symmetric arrowhead matrix, conjugate gradient algorithm.
8250 A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks
Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil
Abstract:
Minimization methods for training feed-forward networks with back-propagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is therefore not desirable. The focus is functional minimization on any basis. A number of methods based on local gradients and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share rate data, it is shown experimentally that the conjugate gradient and quasi-Newton methods outperform gradient descent methods. The Levenberg-Marquardt algorithm is of special interest for financial forecasting.
Keywords: Backpropagation algorithm, conjugacy condition, line search, matrix perturbation.
8249 Self-Tuning Method of Fuzzy System: An Application on Greenhouse Process
Authors: M. Massour El Aoud, M. Franceschi, M. Maher
Abstract:
The approach proposed here uses fuzzy systems for the analysis and synthesis of intelligent climate controllers. The simulation of the internal climate of the greenhouse is achieved by a linear model whose coefficients are obtained by identification. The use of fuzzy logic controllers for the regulation of climate variables represents a powerful way to minimize the energy cost. Strategies of reduction and optimization are adopted to facilitate the tuning and to reduce the complexity of the controller.
Keywords: Greenhouse, fuzzy logic, optimization, gradient descent.
8248 Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting
Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil
Abstract:
Financial forecasting is an example of a signal processing problem. A number of ways to train/learn the network are available. We have used the Levenberg-Marquardt algorithm with error back-propagation for weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to small scale, reducing the variation of the training data.
Keywords: Gradient descent method, Jacobian matrix, Levenberg-Marquardt algorithm, quadratic error surfaces.
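One Levenberg-Marquardt weight update, as used above for error back-propagation, can be sketched as follows; the adaptation schedule for the damping factor lambda is omitted, and the default value shown is an assumption.

```python
import numpy as np

def levenberg_marquardt_step(J, e, w, lam=1e-3):
    """One Levenberg-Marquardt weight update:
    w <- w - (J^T J + lam*I)^{-1} J^T e,
    where J is the Jacobian of the residuals e with respect to w.
    lam interpolates between gradient descent (large lam) and
    Gauss-Newton (small lam)."""
    H = J.T @ J + lam * np.eye(J.shape[1])   # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)
```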
8247 Autonomous Vehicle Navigation Using Harmonic Functions via Modified Arithmetic Mean Iterative Method
Authors: Azali Saudi, Jumat Sulaiman
Abstract:
Harmonic functions are solutions to Laplace's equation that are known to have an advantage as a global approach for providing the potential values used in autonomous vehicle navigation. However, the computation of harmonic functions is often too slow, particularly when it involves a very large environment. This paper presents a two-stage iterative method, the Modified Arithmetic Mean (MAM) method, for solving the 2D Laplace equation. Once the harmonic functions are obtained, a standard Gradient Descent Search (GDS) is performed to find a path for an autonomous vehicle from an arbitrary initial position to the specified goal position. Details of the MAM method are discussed. Several simulations of vehicle navigation with path planning in a static, known indoor environment were conducted to verify the efficiency of the MAM method, and the generated paths are presented. The performance of the MAM method in computing harmonic functions in a 2D environment to solve the path planning problem is also provided.
Keywords: Modified Arithmetic Mean method, harmonic functions, Laplace's equation, path planning.
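A sketch of the overall pipeline: relax a grid toward a harmonic potential (plain Jacobi averaging is shown in place of the paper's faster MAM scheme) and then descend the potential from start to goal; the grid values, sweep count and 20x20 example are illustrative assumptions.

```python
import numpy as np

def harmonic_potential(grid, n_sweeps=5000):
    """Relax interior cells of `grid` toward a harmonic function with
    plain Jacobi averaging (the paper's MAM method converges faster;
    this standard scheme just illustrates the idea). Cells initialized
    to exactly 1 (walls/obstacles) or 0 (goal) stay fixed."""
    fixed = np.isin(grid, (0.0, 1.0))
    u = grid.copy()
    for _ in range(n_sweeps):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, u, avg)
    return u

def gradient_descent_path(u, start, max_steps=10000):
    """Follow the steepest descent of the potential to the goal cell."""
    path, pos = [start], start
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in
                ((1, 0), (-1, 0), (0, 1), (0, -1))]
        pos = min(nbrs, key=lambda p: u[p])   # move to lowest neighbor
        path.append(pos)
        if u[pos] == 0.0:                     # reached the goal
            break
    return path

# Example: 20x20 room, walls on the border, goal near one corner
grid = np.full((20, 20), 0.5)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1.0   # walls
grid[1, 1] = 0.0                                            # goal
u = harmonic_potential(grid)
path = gradient_descent_path(u, start=(18, 18))
```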
8246 Accurate Optical Flow Based on Spatiotemporal Gradient Constancy Assumption
Authors: Adam Rabcewicz
Abstract:
Variational methods for optical flow estimation are known for their excellent performance. The method proposed by Brox et al. [5] exemplifies the strength of that framework, combining several concepts into a single energy functional that is then minimized according to a clear numerical procedure. In this paper we propose a modification of that algorithm starting from the spatiotemporal gradient constancy assumption. The numerical scheme allows us to establish the connection between our model and the CLG(H) method introduced in [18]. Experimental evaluation carried out on synthetic sequences shows the significant superiority of the spatial variant of the proposed method. A comparison between the methods on a real-world sequence is also included.
Keywords: Optical flow, variational methods, gradient constancy assumption.
8245 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity field is of great significance in geoscience, the national economy and national security, and gravitational gradient measurement has been extensively studied because it offers higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. This paper therefore starts by analyzing the working principle of the gravity gradient sensor using Newton's law, and then considers the relative motion between inertial and non-inertial systems to build a reasonably complete mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.
Keywords: Gravity gradient, accelerometer, gravity gradient sensor, single-axis rotation modulation.
8244 Simulating Gradient Contour and Mesh of a Scalar Field
Authors: Usman Ali Khan, Bismah Tariq, Khalida Raza, Saima Malik, Aoun Muhammad
Abstract:
This research paper is based on the simulation of the gradients of mathematical functions and scalar fields using MATLAB. Scalar fields, their gradients, contours and meshes/surfaces are simulated using the relevant MATLAB tools and commands for convenient presentation and understanding. Different mathematical functions and scalar fields are examined by taking their gradients, visualizing the results in 3D with different color shadings and using other necessary commands. In this way the resulting outputs help us analyze and understand the gradient better than a purely theoretical study would.
Keywords: MATLAB, gradient, contour, scalar field, mesh.
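The abstract's MATLAB workflow translates directly to NumPy/Matplotlib; a sketch follows, where the sample field f(x, y) = x·exp(-(x² + y²)) and the grid resolution are assumptions chosen for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample scalar field f(x, y) = x * exp(-(x^2 + y^2)) on a grid
x = np.linspace(-2.0, 2.0, 80)
y = np.linspace(-2.0, 2.0, 80)
X, Y = np.meshgrid(x, y)
F = X * np.exp(-(X**2 + Y**2))

# Numerical gradient: np.gradient returns d/d(rows) then d/d(cols)
dFdy, dFdx = np.gradient(F, y, x)

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1)
ax1.contour(X, Y, F, levels=15)                 # contour lines
ax1.quiver(X[::6, ::6], Y[::6, ::6],            # gradient arrows
           dFdx[::6, ::6], dFdy[::6, ::6])
ax1.set_title("Contours and gradient vectors")
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(X, Y, F, cmap="viridis")       # mesh/surface view
ax2.set_title("Surface (mesh) of the field")
plt.show()
```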
8243 Flexural Strength Design of RC Beams with Consideration of Strain Gradient Effect
Authors: Mantai Chen, Johnny Ching Ming Ho
Abstract:
The stress-strain relationship of concrete under flexure is one of the essential parameters in assessing the ultimate flexural strength capacity of RC beams. Currently, the concrete stress-strain curve in flexure is obtained by incorporating a constant scale-down factor of 0.85 into the uniaxial stress-strain curve. However, it has been revealed that a strain gradient improves the maximum concrete stress under flexure and that the concrete stress-strain curve is strain gradient dependent. Based on the strain-gradient-dependent concrete stress-strain curve, the investigation of the combined effects of strain gradient and concrete strength on the flexural strength of RC beams was extended by theoretical analysis to high strength concrete up to 100 MPa. As an extension and application of the authors' previous study, a new flexural strength design method incorporating the combined effects of strain gradient and concrete strength is developed. A set of equivalent rectangular concrete stress block parameters is proposed and applied to produce a series of design charts, which show that the flexural strength of RC beams improves when the strain gradient effect is considered.
Keywords: Beams, Equivalent concrete stress block, Flexural strength, Strain gradient.
8242 Traffic Density Measurement by Automatic Detection of Vehicles Using Gradient Vectors from Aerial Images
Authors: Saman Ghaffarian, Ilgın Gökasar
Abstract:
This paper presents a new automatic vehicle detection method for very high resolution aerial images to measure traffic density. The proposed method starts by extracting road regions from the image using road vector data. The road image is then divided into equal sections, taking the resolution of the images into account. Gradient vectors of the road image are computed from the edge map of the corresponding image. The gradient vectors on each section boundary are split into groups where the gradient vectors significantly change direction. Finally, the number of vehicles in each section is obtained by calculating the standard deviation of the gradient vectors in each group and accepting as a vehicle any group whose standard deviation is above a predefined threshold value. The proposed method was tested on four very high resolution aerial images acquired over Istanbul, Turkey, which depict roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, with an overall F1 accuracy of 86%.
Keywords: Aerial images, intelligent transportation systems, traffic density measurement, vehicle detection.
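The per-section decision rule described above can be sketched as follows; the edge-strength cutoff and the standard-deviation threshold are illustrative assumptions, not the paper's calibrated values or its exact grouping procedure.

```python
import numpy as np

def section_contains_vehicle(section_gray, std_threshold=25.0):
    """Toy version of the detection rule: compute gradient vectors over
    a road section and flag the section as containing a vehicle when
    the spread (standard deviation) of gradient directions on strong
    edges exceeds a threshold."""
    gy, gx = np.gradient(section_gray.astype(float))
    mag = np.hypot(gx, gy)
    strong = mag > mag.mean()                 # keep strong edges only
    if not np.any(strong):
        return False                          # empty asphalt section
    angles = np.degrees(np.arctan2(gy[strong], gx[strong]))
    return float(np.std(angles)) > std_threshold
```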
8241 A Simple Heat and Mass Transfer Model for Salt Gradient Solar Ponds
Authors: Safwan Kanan, Jonathan Dewsbury, Gregory Lane-Serff
Abstract:
A salinity gradient solar pond is a free-energy-source system for collecting, converting and storing solar energy as heat. In this paper, the principles of the solar pond are explained. A mathematical model is developed to describe and simulate the heat and mass transfer behaviour of a salinity gradient solar pond. MATLAB code was written to solve the heat and mass transfer equations with a one-dimensional finite difference method. Temperature profiles and concentration distributions are calculated. The numerical results are validated with experimental data and are found to be in good agreement.
Keywords: Finite Difference method, Salt-gradient solar-pond, Solar energy, Transient heat and mass transfer.
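The one-dimensional explicit finite-difference march at the heart of such a model can be sketched as follows; solar absorption and source terms are omitted, the fixed-value boundaries are an assumption, and the stability bound shown is the standard one for this scheme.

```python
import numpy as np

def diffuse_1d(T0, alpha, dz, dt, n_steps):
    """Explicit finite-difference march of the 1D diffusion equation
    dT/dt = alpha * d2T/dz2 (usable for the temperature profile and,
    with a different diffusivity, the salt-concentration profile).
    Stability requires r = alpha*dt/dz^2 <= 0.5."""
    r = alpha * dt / dz**2
    assert r <= 0.5, "explicit scheme unstable for this dt/dz choice"
    T = T0.copy()
    for _ in range(n_steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
        # T[0] and T[-1] keep their values: fixed boundary conditions
    return T
```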
8240 Trajectory Estimation and Control of Vehicle using Neuro-Fuzzy Technique
Authors: B. Selma, S. Chouraqui
Abstract:
Nonlinear system identification is becoming an important tool that can be used to improve control performance. This paper describes the application of an adaptive neuro-fuzzy inference system (ANFIS) model for controlling a car. The vehicle must follow a predefined path learned by supervised learning. The back-propagation gradient descent method was used to train the ANFIS system. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed ANFIS model has potential for controlling nonlinear systems.
Keywords: Adaptive neuro-fuzzy inference system (ANFIS), Fuzzy logic, neural network, nonlinear system, control
8239 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations
Authors: H. D. Ibrahim, H. C. Chinwenyi, H. N. Ude
Abstract:
In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax = b, where A is a real n x n symmetric positive definite matrix. We carried out the algorithmic iterative steps and obtained analytical solutions for a typical 3 x 3 symmetric positive definite matrix using the three methods described in this paper (Gauss-Seidel, Jacobi and Conjugate Gradient). From the results obtained, we found that the Conjugate Gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many more iterations and much more time while only tending toward the exact solution.
Keywords: conjugate gradient, linear equations, symmetric and positive definite matrix, Gauss-Seidel, Jacobi, algorithm
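For reference, the conjugate gradient iteration for a symmetric positive definite system is sketched below, with a small 3 x 3 example in the spirit of the paper's test problem (the matrix values are assumptions, not the paper's data).

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Conjugate gradient for Ax = b with A symmetric positive
    definite. In exact arithmetic it terminates in at most n steps,
    which is why it beats Jacobi and Gauss-Seidel in the comparison
    above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                  # residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# 3 x 3 SPD example
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
```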
8238 Research on the Correlation of the Fluctuating Density Gradient of the Compressible Flows
Authors: Yasuo Obikane
Abstract:
This work studies the role of the fluctuating density gradient in compressible flows for computational fluid dynamics (CFD). A new anisotropy tensor based on the fluctuating density gradient is introduced and used in an invariant modeling technique to model the turbulent density gradient correlation equation derived from the continuity equation. The modeled equation is decomposed into three groups: terms proportional to the mean velocity, terms proportional to the mean strain rate, and terms proportional to the mean density. The characteristics of the correlation in a wake are extracted from the results of a two-dimensional direct simulation and show a strong correlation with the vorticity in the wake near the body. It can thus be concluded that the correlation of the density gradient is a significant parameter for describing the rapid generation of turbulent properties in compressible flows.
Keywords: Turbulence modeling, density gradient correlation, compressible flows.
8237 Multi-Context Recurrent Neural Network for Time Series Applications
Authors: B. Q. Huang, Tarik Rashid, M-T. Kechadi
Abstract:
This paper presents a multi-context recurrent network for time series analysis. While simple recurrent networks (SRN) are very popular among recurrent neural networks, they still have shortcomings in terms of learning speed and accuracy that need to be addressed. To solve these problems, we propose a multi-context recurrent network (MCRN) with three different learning algorithms. The performance of this network is evaluated on real-world applications such as handwriting recognition and energy load forecasting. We study the performance of this network and compare it to a well-established SRN. The experimental results show that MCRN is very efficient and well suited to time series analysis and its applications.
Keywords: Gradient descent method, recurrent neural network, learning algorithms, time series, BP
8236 Segmentation of Noisy Digital Images with Stochastic Gradient Kernel
Authors: Abhishek Neogi, Jayesh Verma, Pinaki Pratim Acharjya
Abstract:
Image segmentation and edge detection are fundamental areas of image processing. For noisy images, edge detection is far less effective when conventional spatial filters such as Sobel, Prewitt, LoG or the Laplacian are used. To overcome this problem we propose the use of a stochastic gradient mask instead of spatial filters for generating gradient images. The present study shows that the resulting images obtained by applying stochastic gradient masks appear much clearer and sharper as far as edge detection is concerned.
Keywords: Image segmentation, edge detection, noisy images, spatial filters, stochastic gradient kernel.