Search results for: gradient descent.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 315

315 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and single output can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model capability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
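
As a minimal illustration of the techniques this abstract names (not the authors' code), the sketch below fits a linear model both by the closed-form normal equation and by mini-batch gradient descent with momentum; under Gaussian noise the least-squares optimum coincides with the maximum-likelihood estimate. All data and hyperparameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))                  # regressor matrix
w_true = np.array([0.5, -1.0, 2.0, 0.3])
y = X @ w_true + 0.1 * rng.standard_normal(500)    # Gaussian noise: ML = least squares

# Closed-form solution via the normal equation X'X w = X'y
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Mini-batch gradient descent with momentum on the same objective
w, v, lr = np.zeros(4), np.zeros(4), 0.1
for _ in range(1000):
    idx = rng.choice(500, size=64, replace=False)  # mini-batch of samples
    g = X[idx].T @ (X[idx] @ w - y[idx]) / 64      # gradient of 0.5*MSE
    v = 0.9 * v - lr * g                           # momentum accumulation
    w += v

print(w_ne, w)                                     # both should be close to w_true
```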

Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.

314 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag pattern of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai-Borwein results have sparked a great deal of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by these works, we modify the step size of the steepest descent method. We then compare the modified method against the Barzilai-Borwein method, the alternate minimization gradient method, and the Yuan method on quadratic function cases, in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai-Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
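
The paper's new step sizes are not given in the abstract, so as a hedged baseline the sketch below implements steepest descent with the classical Barzilai-Borwein (BB1) step on a quadratic test function, the setting used in the comparison; the test matrix and iteration count are arbitrary.

```python
import numpy as np

def bb_steepest_descent(A, b, x0, iters=100):
    """Steepest descent for f(x) = 0.5 x'Ax - b'x (A symmetric positive
    definite) using the classical Barzilai-Borwein BB1 step size."""
    x = x0.copy()
    g = A @ x - b                        # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A, 2)   # conservative first step
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, yk = x_new - x, g_new - g
        alpha = (s @ s) / (s @ yk)       # BB1 step size
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # well-conditioned SPD test matrix
b = rng.standard_normal(50)
x = bb_steepest_descent(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))         # residual should be near zero
```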

Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.

313 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective that a model with excessive parameters over-fits, and they have achieved empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence behavior is still far from thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer, partially over-parameterized, fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to address the convergence of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate the theory.
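
A minimal sketch of the training setup the abstract studies, assuming nothing about the paper's actual construction: a fully connected network with two ReLU hidden layers trained by plain gradient descent on a squared loss over synthetic data. Widths, learning rate, and target function are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m1, m2 = 200, 10, 64, 64           # samples, input dim, hidden widths
X = rng.standard_normal((n, d))
y = np.sin(X @ rng.standard_normal(d))   # synthetic regression target

W1 = rng.standard_normal((d, m1)) / np.sqrt(d)
W2 = rng.standard_normal((m1, m2)) / np.sqrt(m1)
w3 = rng.standard_normal(m2) / np.sqrt(m2)
lr = 1e-2

for step in range(2000):
    H1 = np.maximum(X @ W1, 0.0)         # first ReLU layer
    H2 = np.maximum(H1 @ W2, 0.0)        # second ReLU layer
    pred = H2 @ w3
    err = pred - y                       # residual; 1/n factor applied below
    # Backpropagation through both hidden layers for 0.5*MSE.
    g3 = H2.T @ err / n
    dH2 = np.outer(err, w3) * (H2 > 0)
    g2 = H1.T @ dH2 / n
    dH1 = (dH2 @ W2.T) * (H1 > 0)
    g1 = X.T @ dH1 / n
    W1 -= lr * g1; W2 -= lr * g2; w3 -= lr * g3
```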

Keywords: Over-parameterization, Rectified Linear Units (ReLU), convergence, gradient descent, neural networks.

312 Affine Projection Adaptive Filter with Variable Regularization

Authors: Young-Seok Choi

Abstract:

We propose two affine projection algorithms (APA) with a variable regularization parameter. The proposed algorithms dynamically update the regularization parameter, which is fixed in the conventional regularized APA (R-APA), using a gradient descent based approach. By introducing a normalized gradient, the proposed algorithms yield an efficient and robust update scheme for the regularization parameter. Through experiments we demonstrate that the proposed algorithms outperform the conventional R-APA in terms of convergence rate and misadjustment error.
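
For context, a sketch of one conventional R-APA update, assuming the usual data layout; the paper's contribution, the gradient descent rule for the regularization parameter delta, is not reproduced here, so delta stays fixed.

```python
import numpy as np

def r_apa_step(w, U, d, mu=0.5, delta=1e-2):
    """One regularized affine projection (R-APA) update.

    w: (L,) filter estimate; U: (L, K) matrix whose columns are the K
    most recent input vectors; d: (K,) desired outputs. The paper adapts
    delta with a normalized-gradient descent rule; here it stays fixed."""
    e = d - U.T @ w                                  # a priori error vector
    K = U.shape[1]
    w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(K), e)
    return w, e
```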

Keywords: Affine projection, regularization, gradient descent, system identification.

311 An Improved Learning Algorithm Based on the Conjugate Gradient Method for Back Propagation Neural Networks

Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing

Abstract:

The conjugate gradient optimization algorithm usually used for nonlinear least squares is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLP), called CGFR/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient of the error with respect to the weight and gain values, and (3) determining the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of what is required by the standard conjugate gradient and neural network toolbox algorithms.

Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.

310 Face Reconstruction and Camera Pose Using Multi-dimensional Descent

Authors: Varin Chouvatut, Suthep Madarasmi, Mihran Tuceryan

Abstract:

This paper proposes a novel, robust, and simple method for obtaining a human 3D face model and camera pose (position and orientation) from a video sequence. Given a video sequence of a face recorded with an off-the-shelf digital camera, feature points that define facial parts are tracked using the Active Appearance Model (AAM). The face's 3D structure and the camera pose of each video frame can then be calculated simultaneously from the obtained point correspondences. The proposed method is primarily based on a combination of gradient descent and Powell's multidimensional minimization. With this method, temporarily occluded points, including the case of self-occlusion, do not pose a problem: as long as the point correspondences displayed in the video sequence have enough parallax, the missing points can still be reconstructed.
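
A heavily hedged sketch of the hybrid idea: a few numerical-gradient descent steps followed by Powell's derivative-free minimization via scipy. The cost function is a stand-in; the real objective couples the 3D face structure and per-frame camera pose, which the abstract does not specify.

```python
import numpy as np
from scipy.optimize import approx_fprime, minimize

rng = np.random.default_rng(2)
target = rng.standard_normal(6)

def cost(p):
    # Stand-in for the reprojection error of the face/camera parameters.
    return np.sum((p - target) ** 2) + 0.1 * np.sum(np.sin(p) ** 2)

p = np.zeros(6)
for _ in range(50):                        # gradient descent phase
    g = approx_fprime(p, cost, 1e-6)       # numerical gradient
    p -= 0.05 * g

res = minimize(cost, p, method="Powell")   # Powell refinement phase
print(res.x)
```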

Keywords: Camera Pose, Face Reconstruction, Gradient Descent, Powell's Multidimensional Minimization.

309 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing

Abstract:

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient training algorithm for multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modifying the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient of the error with respect to the weight and gain values, and (3) determining a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.

Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.

308 New Adaptive Linear Discriminant Analysis for Face Recognition with SVM

Authors: Mehdi Ghayoumi

Abstract:

We apply a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale Face Database B. The eigenfaces from these approaches have been used to train a KNN classifier, and the recognition rate of the new algorithm is compared with that of gradient descent.

Keywords: LDA, adaptive, SVM, face recognition.

307 Regularization of the Trajectories of Dynamical Systems by Adjusting Parameters

Authors: Helle Hein, Ülo Lepik

Abstract:

A gradient learning method for regulating the trajectories of some nonlinear chaotic systems is proposed. The method is motivated by gradient descent learning algorithms for neural networks and is based on two systems: a dynamic optimization system and a system for finding sensitivities. Numerical results for several examples are presented, which convincingly illustrate the efficiency of the method.

Keywords: Chaos, dynamical systems, learning, neural networks.

306 A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks

Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil

Abstract:

Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of function minimization in which no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is therefore not desirable; the focus is on function minimization over any basis. A number of methods based on the local gradient and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that the conjugate gradient and quasi-Newton methods outperform gradient descent methods. The Levenberg-Marquardt algorithm is of special interest for financial forecasting.

Keywords: Backpropagation algorithm, conjugacy condition, line search, matrix perturbation.

305 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity field is of great significance in geoscience, the national economy, and national security, and gravitational gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. This paper therefore starts by analyzing the working principle of the gravity gradient sensor via Newton's law, and then considers the relative motion between inertial and non-inertial systems to build an adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.

Keywords: Gravity gradient, accelerometer, gravity gradient sensor, single-axis rotation modulation.

304 Simulating Gradient Contour and Mesh of a Scalar Field

Authors: Usman Ali Khan, Bismah Tariq, Khalida Raza, Saima Malik, Aoun Muhammad

Abstract:

This research paper is based on the simulation of the gradients of mathematical functions and scalar fields using MATLAB. Scalar fields, their gradients, contours, and meshes/surfaces are simulated using the related MATLAB tools and commands for convenient presentation and understanding. Different mathematical functions and scalar fields are examined by taking their gradients, visualizing the results in 3D with different color shadings, and using other relevant commands. The outputs of the required functions thus help us analyze and understand gradients better than a purely theoretical study would.
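
The paper works in MATLAB; a rough Python analogue of the same demonstration (scalar field, contours, and gradient vectors) is sketched below using numpy.gradient and matplotlib. The demo function is a common textbook choice, not one from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Scalar field f(x, y) = x * exp(-(x^2 + y^2)), a classic demo function.
x = np.linspace(-2, 2, 40)
y = np.linspace(-2, 2, 40)
X, Y = np.meshgrid(x, y)
F = X * np.exp(-(X**2 + Y**2))

# np.gradient differentiates along rows (y) first, then columns (x).
dF_dy, dF_dx = np.gradient(F, y, x)

fig, ax = plt.subplots()
ax.contour(X, Y, F, levels=15)    # contour lines of the scalar field
ax.quiver(X, Y, dF_dx, dF_dy)     # gradient vectors drawn on top
plt.show()
```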

Keywords: MATLAB, gradient, contour, scalar field, mesh.

303 A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties

Authors: Ahmad Alhawarat, Mustafa Mamat, Mohd Rivaie, Ismail Mohd

Abstract:

The conjugate gradient method has been used extensively to solve large-scale unconstrained optimization problems because of its iteration count, memory requirement, CPU time, and convergence properties. In this paper we present a new class of nonlinear conjugate gradient coefficients with global convergence properties, proved under exact line search. The numerical results for our new βK are good when compared with well-known formulas.
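
The new coefficient βK is not stated in the abstract. As a generic reference point, the sketch below runs nonlinear CG on a quadratic, where the exact line search the paper assumes has a closed form; the Fletcher-Reeves coefficient stands in for βK.

```python
import numpy as np

def cg_quadratic(A, b, x0, iters=50):
    """Conjugate gradient on f(x) = 0.5 x'Ax - b'x. For a quadratic the
    exact line search has the closed form alpha = -g'd / d'Ad; the beta
    below is Fletcher-Reeves, standing in for the paper's unstated betaK."""
    x, g = x0.copy(), A @ x0 - b
    d = -g
    for _ in range(iters):
        alpha = -(g @ d) / (d @ A @ d)    # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x
```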

Keywords: Conjugate gradient method, conjugate gradient coefficient, global convergence.

302 Research on the Correlation of the Fluctuating Density Gradient of the Compressible Flows

Authors: Yasuo Obikane

Abstract:

This work studies the role of the fluctuating density gradient in compressible flows for computational fluid dynamics (CFD). A new anisotropy tensor involving the fluctuating density gradient is introduced and used in an invariant modeling technique to model the turbulent density gradient correlation equation derived from the continuity equation. The modeled equation is decomposed into three groups: one proportional to the mean velocity, one proportional to the mean strain rate, and one proportional to the mean density. The characteristics of the correlation in a wake are extracted from the results of a two-dimensional direct simulation and show a strong correlation with the vorticity in the wake near the body. It can thus be concluded that the density gradient correlation is a significant parameter for describing the quick generation of turbulent properties in compressible flows.

Keywords: Turbulence modeling, density gradient correlation, compressible flows.

301 On the Sphere Method of Linear Programming Using Multiple Interior Points Approach

Authors: Job H. Domingo, Carolina Bancayrin-Baguio

Abstract:

The Sphere Method is a flexible interior point algorithm for linear programming problems, developed mainly by Professor Katta G. Murty. It consists of two steps, the centering step and the descent step, of which the centering step is the most expensive part of the algorithm. For this centering step we propose some improvements, such as introducing two or more initial feasible solutions and selecting the more favorable new solution by objective value, while rigorously updating the feasible region, along with some ideas integrated into the descent step. An illustration is given confirming the advantage of the proposed procedure.

Keywords: Interior point, linear programming, sphere method, initial feasible solution, feasible region, centering and descent steps, optimal solution.

300 Segmentation of Noisy Digital Images with Stochastic Gradient Kernel

Authors: Abhishek Neogi, Jayesh Verma, Pinaki Pratim Acharjya

Abstract:

Image segmentation and edge detection are fundamental tasks in image processing. For noisy images, edge detection is much less effective with conventional spatial filters such as Sobel, Prewitt, LoG, and Laplacian. To overcome this problem, we propose the use of a stochastic gradient mask instead of spatial filters for generating gradient images. The present study shows that images obtained by applying stochastic gradient masks appear much clearer and sharper as far as edge detection is concerned.

Keywords: Image segmentation, edge detection, noisy images, spatial filters, stochastic gradient kernel.

299 Self-Tuning Method of a Fuzzy System: An Application to a Greenhouse Process

Authors: M. Massour El Aoud, M. Franceschi, M. Maher

Abstract:

The approach proposed here uses fuzzy systems for the analysis and synthesis of intelligent climate controllers. The internal climate of the greenhouse is simulated by a linear model whose coefficients are obtained by identification. The use of fuzzy logic controllers for regulating climate variables is a powerful way to minimize the energy cost. Strategies of reduction and optimization are adopted to facilitate tuning and to reduce the complexity of the controller.

Keywords: Greenhouse, fuzzy logic, optimization, gradient descent.

298 Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter

Authors: Young-Seok Choi

Abstract:

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach updates the regularization parameter dynamically. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
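
For orientation, a sketch of the conventional NLMS update with a fixed regularization parameter eps in a system identification loop; the paper's dynamic update of eps along a gradient descent direction is not reproduced.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-3):
    """One NLMS update. The paper replaces the fixed eps with a value
    updated along a gradient descent direction; eps is fixed here."""
    e = d - w @ x                       # a priori error
    w = w + mu * e * x / (x @ x + eps)  # normalized LMS update
    return w, e

# Identify an unknown system w_true from streaming data.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(2000):
    x = rng.standard_normal(8)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, e = nlms_step(w, x, d)
print(np.linalg.norm(w - w_true))       # should be small
```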

Keywords: Regularization, normalized LMS, system identification, robustness.

297 Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Authors: Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C. Ardil

Abstract:

Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available; we use the Levenberg-Marquardt algorithm with error back-propagation for weight adjustment. Pre-processing of the data reduces much of the large-scale variation to a smaller scale, reducing the variation of the training data.
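
A minimal sketch of the Levenberg-Marquardt weight update the abstract relies on: the damping term lam blends Gauss-Newton (small lam) with gradient descent (large lam). The Jacobian J and residual e would come from back-propagation; shapes here are illustrative.

```python
import numpy as np

def lm_step(w, J, e, lam=1e-2):
    """One Levenberg-Marquardt update for residuals e with Jacobian J
    (rows: samples, columns: weights). In practice lam is decreased
    after a successful step and increased after a failed one."""
    H = J.T @ J + lam * np.eye(J.shape[1])  # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)
```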

Keywords: Gradient descent method, Jacobian matrix, Levenberg-Marquardt algorithm, quadratic error surfaces.

296 A Descent-projection Method for Solving Monotone Structured Variational Inequalities

Authors: Min Sun, Zhenyu Liu

Abstract:

In this paper, a new descent-projection method with a new search direction for monotone structured variational inequalities is proposed. The method is simple, requiring only projections and a few function evaluations, so its computational load is very small. Under mild conditions on the problem's data, the method is proved to converge globally. Some preliminary computational results are also reported to illustrate the efficiency of the method.
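
The paper's new search direction is not given in the abstract; for reference, the sketch below shows the textbook projection iteration for a monotone variational inequality over a box, which uses only projections and function evaluations, as described.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)               # projection onto a box Omega

def basic_projection_method(F, x0, lo, hi, beta=0.1, iters=200):
    """Basic projection method for VI(F, Omega): find x* in Omega with
    F(x*)'(x - x*) >= 0 for all x in Omega. The paper's new descent
    direction and step rule are not reproduced; this is the textbook
    iteration x <- P_Omega(x - beta * F(x))."""
    x = x0.copy()
    for _ in range(iters):
        x = project_box(x - beta * F(x), lo, hi)
    return x
```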

Keywords: Variational inequalities, monotone function, global convergence.

295 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case

Authors: A. Selmi, A. Bisharat

Abstract:

Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues in classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are obtained explicitly. Unlike for the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with position and with the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.

Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.

294 Flexural Strength Design of RC Beams with Consideration of Strain Gradient Effect

Authors: Mantai Chen, Johnny Ching Ming Ho

Abstract:

The stress-strain relationship of concrete under flexure is one of the essential parameters in assessing the ultimate flexural strength capacity of RC beams. Currently, the concrete stress-strain curve in flexure is obtained by incorporating a constant scale-down factor of 0.85 into the uniaxial stress-strain curve. However, it has been revealed that the strain gradient improves the maximum concrete stress under flexure and that the concrete stress-strain curve is strain-gradient dependent. Based on the strain-gradient-dependent concrete stress-strain curve, the investigation of the combined effects of strain gradient and concrete strength on the flexural strength of RC beams is extended by theoretical analysis to high strength concrete up to 100 MPa. As an extension and application of the authors' previous study, a new flexural strength design method incorporating the combined effects of strain gradient and concrete strength is developed. A set of equivalent rectangular concrete stress block parameters is proposed and applied to produce a series of design charts, showing that the flexural strength of RC beams is improved when the strain gradient effect is considered.

Keywords: Beams, Equivalent concrete stress block, Flexural strength, Strain gradient.

293 Learning Flexible Neural Networks for Pattern Recognition

Authors: A. Mirzaaghazadeh, H. Motameni, M. Karshenas, H. Nematzadeh

Abstract:

Learning the gradient (slope) of a neuron's activation function, like the weights of its links, yields a new property: flexibility. In flexible neural networks, because the operation of neurons is supervised and controlled, the burden of learning is not carried by the link weights alone; in each learning period, the gradients of the neurons' activation functions also cooperate toward the learning goal, so the number of learning iterations decreases considerably. Furthermore, learning the neuron parameters immunizes them against changes in their inputs and against the factors that cause such changes. Likewise, the initial selection of the weights, the type of activation function, the initial gradient of the activation function, and the fixed factor multiplied by the error gradient to compute the changes in the weights and activation gradients all directly affect the convergence of the network during learning.
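
A minimal sketch of the flexibility idea as described, for a single sigmoid neuron: the activation gain a is learned by gradient descent alongside the weights. The network size, loss, and learning rate are invented for the example and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
t = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # binary targets

w = rng.standard_normal(3) * 0.1
a = 1.0                                   # learnable activation gain
lr = 0.5
for _ in range(500):
    z = X @ w
    yhat = 1.0 / (1.0 + np.exp(-a * z))   # sigmoid with gain a
    err = yhat - t
    common = err * yhat * (1 - yhat)      # dL/d(a*z) for 0.5*MSE
    gw = X.T @ (common * a) / len(t)      # gradient w.r.t. the weights
    ga = np.mean(common * z)              # gradient w.r.t. the gain
    w -= lr * gw
    a -= lr * ga
```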

Keywords: Back propagation, Flexible, Gradient, Learning, Neural network, Pattern recognition.

292 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots

Authors: Meng Wu

Abstract:

Motion planning is a common task that robots must perform. A strategy combining ant colony optimization (ACO) and a gravity gradient inversion algorithm is proposed for the motion planning of mobile robots. In this paper, to realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. Obstacles around the mobile robot cause gravity gradient anomalies, and a gradiometer installed on the robot detects them. After the anomalies are obtained, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles, which are then used as the cost function in the ACO algorithm to realize motion planning. The proposed strategy is validated by simulation and experimental results.

Keywords: Motion planning, gravity gradient inversion algorithm, ant colony optimization.

291 Determination of the Proper Quality Costs Parameters via Variable Step Size Steepest Descent Algorithm

Authors: Danupun Visuwan, Pongchanun Luangpaiboon

Abstract:

This paper presents the determination of the proper quality cost parameters that provide the optimum return. System dynamics simulation was applied, with the simulation model constructed from real data from an electronic devices manufacturer in Thailand. The steepest descent algorithm was employed for optimization. The experimental results show that the company should spend 850 and 10 Baht/day on prevention and appraisal activities, respectively; this provides the minimum cumulative total quality cost of 258,000 Baht over twelve months. The effect of the step size when improving the variables toward the optimum was also investigated. The smaller step size provided a better result at the cost of more experimental runs; however, the difference in yield in this case is not significant in practice, so the greater step size is recommended because the region of the optimum can be reached more easily and rapidly.

Keywords: Quality costs, steepest descent algorithm, step size, system dynamics simulation.

290 Impact of Viscous and Heat Relaxation Loss on the Critical Temperature Gradients of Thermoacoustic Stacks

Authors: Zhibin Yu, Artur J. Jaworski, Abdulrahman S. Abduljalil

Abstract:

A stack with a small critical temperature gradient is desirable for a standing wave thermoacoustic engine in order to obtain a low onset temperature difference (the minimum temperature difference needed to start the engine's self-oscillation). The viscous and heat relaxation losses in the stack determine the critical temperature gradient. In this work, a dimensionless critical temperature gradient factor is obtained based on linear thermoacoustic theory. It is indicated that the impedance determines the proportions between the viscous loss, the heat relaxation losses, and the power produced from the heat energy. This reveals the effects of the channel dimensions, geometrical configuration, and local acoustic impedance on the critical temperature gradient in stacks. The numerical analysis shows that there exists a possible optimum combination of these parameters which leads to the lowest critical temperature gradient. Furthermore, several different geometries have been tested and compared numerically.

Keywords: Critical temperature gradient, heat relaxation, stack, viscous effect.

289 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams

Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding

Abstract:

A size-dependent Euler-Bernoulli beam model, which accounts for the nonlocal stress field, the strain gradient field, and a higher-order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. Analytical solutions based on different continuum theories are compared. The effect of the higher-order inertia terms is extremely significant in the high frequency range: an asymptotic frequency exists for the proposed beam model, while the solutions of the nonlocal strain gradient theory diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain and cannot be neglected when the material strain length scale parameter is comparable to the beam thickness. The influence of each of the three size effect parameters on the natural frequencies is investigated: the natural frequencies increase with an increasing material strain gradient length scale parameter, or with a decreasing velocity gradient length scale parameter and nonlocal parameter.

Keywords: Euler-Bernoulli Beams, free vibration, higher order inertia, nonlocal strain gradient theory, velocity gradient.

288 Comparison of Three Versions of Conjugate Gradient Method in Predicting an Unknown Irregular Boundary Profile

Authors: V. Ghadamyari, F. Samadi, F. Kowsary

Abstract:

An inverse geometry problem is solved to predict an unknown irregular boundary profile. The aim is to minimize the objective function, the difference between real and computed temperatures, using three different versions of the conjugate gradient method. The gradient of the objective function, which the method requires, is obtained by solving the adjoint equation. The abilities of the three versions of the conjugate gradient method to predict the boundary profile are compared using a numerical algorithm based on the method. The predicted shapes show that, owing to its convergence rate and the accuracy of its predicted values, the Powell-Beale version of the method is more effective than the Fletcher-Reeves and Polak-Ribiere versions.
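
For reference, common statements of the three update coefficients being compared; the Powell-Beale variant is shown with its usual restart test (beta reset to zero when successive gradients are far from orthogonal). These are textbook forms, not necessarily the exact variants the paper implements.

```python
import numpy as np

def beta_fletcher_reeves(g_new, g, d):
    return (g_new @ g_new) / (g @ g)

def beta_polak_ribiere(g_new, g, d):
    return (g_new @ (g_new - g)) / (g @ g)

def beta_powell_beale(g_new, g, d):
    # Restart when orthogonality between successive gradients is lost.
    if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
        return 0.0
    return (g_new @ (g_new - g)) / (d @ (g_new - g))

# Inside a CG loop, the new direction is d = -g_new + beta * d.
```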

Keywords: Boundary elements, Conjugate Gradient Method, Inverse Geometry Problem, Sensitivity equation.

287 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C

Authors: Minghui Wang, Luping Xu, Juntao Zhang

Abstract:

Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the premise of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.

Keywords: Iterative method, symmetric arrowhead matrix, conjugate gradient algorithm.

286 Blind Identification of MA Models Using Cumulants

Authors: Mohamed Boulouird, Moha M'Rabet Hassani

Abstract:

In this paper, several techniques for the blind identification of moving average (MA) processes are presented. These methods utilize the third- and fourth-order cumulants of the noisy observations of the system output. The system is driven by an independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. Two nonlinear optimization algorithms, namely gradient descent and Gauss-Newton, are presented. An algorithm based on the joint diagonalization of the fourth-order cumulant matrices (FOSI) is also considered, as well as an improved version of the classical C(q, 0, k) algorithm based on the choice of the best 1-D slice of fourth-order cumulants. Various simulation examples are presented to illustrate the effectiveness of these methods.

Keywords: Cumulants, identification, MA models, parameter estimation.
