Search results for: Gradient of average reward
1691 A State Aggregation Approach to Singularly Perturbed Markov Reward Processes
Authors: Dali Zhang, Baoqun Yin, Hongsheng Xi
Abstract:
In this paper, we propose a single-sample-path algorithm with state aggregation to optimize the average reward of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. The reward process is assumed to depend on a set of parameters. Unlike other kinds of Markov chains, SPMRPs have their own hierarchical structure, and our algorithm exploits this structure to reduce the computational load of performance optimization. Moreover, the method can be applied online because it evolves along the simulated sample path. Compared with the original algorithms for general MRPs, a new gradient formula for the average reward performance metric of SPMRPs is introduced and proved in the Appendix. Based on these gradients, an iteration schedule driven by a single sample path is presented. A special case in which the parameters only affect the disturbance matrices is then analyzed, and a detailed comparison is made between our algorithm and the existing algorithms designed for general Markov reward processes; for SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is presented and simulated, and the physical meaning of SPMRPs in networks of queues is clarified for the corresponding practical model.
Keywords: Singularly perturbed Markov processes, Gradient of average reward, Differential reward, State aggregation, Perturbed closed network.
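The gradient formula mentioned above builds on the standard average-reward relation for parameterized Markov reward processes, in which the gradient of the average reward η = πᵀr is πᵀ(dP/dθ)g, with π the stationary distribution and g the vector of differential rewards. The sketch below (a generic illustration in Python with an invented three-state chain, not the paper's SPMRP-specific algorithm) checks this relation numerically:

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def average_reward_gradient(P, dP, r):
    """Exact gradient of the average reward eta = pi^T r via differential rewards.

    Uses d(eta)/d(theta) = pi^T (dP/dtheta) g, where g = Z r and
    Z = (I - P + e pi^T)^{-1} is the fundamental matrix.
    """
    n = len(r)
    pi = stationary_distribution(P)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    g = Z @ r                      # differential (relative) reward vector
    return pi @ dP @ g

# Toy 3-state chain whose transitions depend on a scalar parameter theta.
def P_of(theta):
    return np.array([[0.5 - theta, 0.3 + theta, 0.2],
                     [0.2,         0.5,         0.3],
                     [0.3 + theta, 0.2,         0.5 - theta]])

r = np.array([1.0, 0.0, 2.0])     # state rewards (independent of theta)
theta, h = 0.05, 1e-6
dP = (P_of(theta + h) - P_of(theta - h)) / (2 * h)   # dP/dtheta

grad = average_reward_gradient(P_of(theta), dP, r)

# Check against a finite difference of eta(theta) itself.
eta = lambda t: stationary_distribution(P_of(t)) @ r
print(grad, (eta(theta + h) - eta(theta - h)) / (2 * h))
```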
1690 Corporate Culture and Innovation: Implications for Reward Systems
Authors: Ivana Nacinovic, Lovorka Galetic, Nevenka Cavlek
Abstract:
Continuous innovation is becoming a necessity if firms want to stay competitive. Different factors influence the rate of innovation in a firm, among which corporate culture has often been recognized as one of the most important. In this paper we argue that the development of a corporate culture that supports and fosters innovation must be accompanied by an appropriate reward system. Research conducted among Croatian firms showed that a statistically significant relationship exists between a corporate culture that supports innovation and reward system features.
Keywords: Corporate culture, innovation, reward systems, Croatia.
1689 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case
Authors: A. Selmi, A. Bisharat
Abstract:
Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their classical-elasticity counterparts. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part and the Eshelby-like tensor are obtained explicitly. Unlike the classical Eshelby tensor, the components of the new Eshelby tensor vary with the position and the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.
Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.
1688 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity field is of great significance in geoscience, the national economy and national security, and gravitational gradient measurement has been extensively studied because it offers higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. Therefore, this paper starts by analyzing the working principle of the gravity gradient sensor using Newton's law, and then considers the relative motion between inertial and non-inertial systems to build an adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.
Keywords: Gravity gradient, accelerometer, gravity gradient sensor, single-axis rotation modulation.
1687 Simulating Gradient Contour and Mesh of a Scalar Field
Authors: Usman Ali Khan, Bismah Tariq, Khalida Raza, Saima Malik, Aoun Muhammad
Abstract:
This research paper is based on the simulation of the gradient of mathematical functions and scalar fields using MATLAB. Scalar fields, their gradients, contours and meshes/surfaces are simulated using the related MATLAB tools and commands for convenient presentation and understanding. Different mathematical functions and scalar fields are examined by taking their gradients, visualizing the results in 3D with different color shadings and using other relevant commands. In this way the outputs of the required functions help us analyze and understand them better than a purely theoretical study of the gradient.
Keywords: MATLAB, Gradient, Contour, Scalar Field, Mesh.
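An equivalent sketch of this workflow in Python (NumPy/Matplotlib rather than MATLAB, with an arbitrarily chosen scalar field) is:

```python
import numpy as np
import matplotlib.pyplot as plt

# Scalar field f(x, y) = x * exp(-(x^2 + y^2)), a common demonstration function.
x = np.linspace(-2, 2, 80)
y = np.linspace(-2, 2, 80)
X, Y = np.meshgrid(x, y)                 # X, Y have shape (len(y), len(x))
Z = X * np.exp(-(X**2 + Y**2))

# Numerical gradient: axis 0 varies with y, axis 1 with x.
dZdy, dZdx = np.gradient(Z, y, x)

fig = plt.figure(figsize=(12, 4))

# Mesh/surface of the scalar field.
ax1 = fig.add_subplot(1, 3, 1, projection='3d')
ax1.plot_surface(X, Y, Z, cmap='viridis')
ax1.set_title('surface')

# Filled contours of the field.
ax2 = fig.add_subplot(1, 3, 2)
ax2.contourf(X, Y, Z, levels=20, cmap='viridis')
ax2.set_title('contour')

# Gradient field drawn as arrows over the contours.
ax3 = fig.add_subplot(1, 3, 3)
ax3.contour(X, Y, Z, levels=20, cmap='viridis')
step = 5                                  # thin the arrows for readability
ax3.quiver(X[::step, ::step], Y[::step, ::step],
           dZdx[::step, ::step], dZdy[::step, ::step])
ax3.set_title('gradient')

plt.tight_layout()
plt.show()
```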
1686 A New Modification of Nonlinear Conjugate Gradient Coefficients with Global Convergence Properties
Authors: Ahmad Alhawarat, Mustafa Mamat, Mohd Rivaie, Ismail Mohd
Abstract:
The conjugate gradient method has been used extensively to solve large-scale unconstrained optimization problems because of its modest iteration count, memory use and CPU time and its convergence properties. In this paper we derive a new class of nonlinear conjugate gradient coefficients with global convergence properties proved under exact line search. The numerical results for the new βk compare well with those of well-known formulas.
Keywords: Conjugate gradient method, conjugate gradient coefficient, global convergence.
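The paper's new coefficient βk is not reproduced here; as a generic reference, the sketch below shows a nonlinear conjugate gradient loop with the classical Fletcher-Reeves coefficient, an Armijo backtracking line search standing in for the exact line search assumed in the analysis, and a steepest-descent restart as a safeguard:

```python
import numpy as np

def nonlinear_cg_fr(f, grad, x0, max_iter=2000, tol=1e-8):
    """Nonlinear CG with the Fletcher-Reeves coefficient, Armijo backtracking,
    and a steepest-descent restart whenever the CG direction is not a descent one."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta_fr * d
        if g_new @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(nonlinear_cg_fr(f, grad, np.array([-1.2, 1.0])))   # approaches [1, 1]
```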
1685 Simulation of Effect of Current Stressing on Reliability of Solder Joints with Cu-Pillar Bumps
Authors: Y. Li, Q. S. Zhang, H. Z. Huang, B. Y. Wu
Abstract:
The mechanism behind electromigration and thermomigration failure in flip-chip solder joints with Cu-pillar bumps was investigated in this paper using the finite element method. The hot spot and the current crowding occur in the upper corner of the copper column rather than in the solder of a common solder ball. The simulation results show that the change in thermal gradient is noticeable, which might greatly affect the reliability of solder joints with Cu-pillar bumps under current stressing. When the average applied current density in the solder is increased from 1×10⁴ A/cm² to 3×10⁴ A/cm², the thermal gradient increases from 74 K/cm to 901 K/cm at an ambient temperature of 25°C. The force from a thermal gradient of 901 K/cm can nearly induce thermomigration by itself. As the applied current increases, the thermal gradient grows. It is proposed that thermomigration likely causes a serious reliability issue for Cu-column-based interconnects.
Keywords: Simulation, Cu-pillar bumps, Electromigration, Thermomigration.
1684 Research on the Correlation of the Fluctuating Density Gradient of the Compressible Flows
Authors: Yasuo Obikane
Abstract:
This work studies the role of the fluctuating density gradient in compressible flows for computational fluid dynamics (CFD). A new anisotropy tensor with the fluctuating density gradient is introduced and used in an invariant modeling technique to model the turbulent density gradient correlation equation derived from the continuity equation. The modeling equation is decomposed into three groups: terms proportional to the mean velocity, to the mean strain rate, and to the mean density. The characteristics of the correlation in a wake are extracted from the results of a two-dimensional direct simulation and show a strong correlation with the vorticity in the wake near the body. Thus, it can be concluded that the correlation of the density gradient is a significant parameter for describing the quick generation of turbulent properties in compressible flows.
Keywords: Turbulence Modeling, Density Gradient Correlation, Compressible Flows.
1683 Information System for Early Diabetic Retinopathy Diagnostics Based on Multiscale Texture Gradient Method
Authors: L. S. Godlevsky, N. V. Kresyun, V. P. Martsenyuk, K. S. Shakun, T. V. Tatarchuk, K. O. Prybolovets, L. F. Kalinichenko, M. Karpinski, T. Gancarczyk
Abstract:
Structures of the eye fundus were extracted using the multiscale texture gradient method, and the color characteristics of the macular zone and vessels were verified in the CIELAB scale. The average values of the L*, a* and b* coordinates of the CIE (International Commission on Illumination) scale were compared between patients with diabetes and healthy volunteers. The average value of L* in diabetic patients exceeded that of the practically healthy group by 2.71 times (P < 0.05), while the a* index was reduced by 3.8 times compared with the control (P < 0.05). The b* index exceeded the control value by 12.4 times (P < 0.05). The integrated color difference index (ΔE) exceeded the control value by 2.87 times (P < 0.05). More pronounced differences in ΔE were followed by a shorter period of MA appearance, with a correlation of -0.56 (P < 0.05). The specificity of diagnostics was raised by 2.17 times (P < 0.05), and the negative prognostic index exceeded that determined with the expert method by 2.26 times (P < 0.05).
Keywords: Diabetic retinopathy, multiscale texture gradient, color spectrum analysis.
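The integrated color difference ΔE is assumed here to be the usual CIE76 index, i.e. the Euclidean distance between two sets of CIELAB coordinates; a minimal sketch with made-up example values (not the study's data):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical mean (L*, a*, b*) values for two groups -- illustration only.
control = (52.0, 14.0, 1.5)
patient = (61.0, 4.0, 18.0)
print(delta_e_cie76(control, patient))
```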
1682 Segmentation of Noisy Digital Images with Stochastic Gradient Kernel
Authors: Abhishek Neogi, Jayesh Verma, Pinaki Pratim Acharjya
Abstract:
Image segmentation and edge detection are fundamental topics in image processing. For noisy images, edge detection is much less effective if conventional spatial filters such as Sobel, Prewitt, LoG or Laplacian are used. To overcome this problem we propose the use of a stochastic gradient mask instead of spatial filters for generating gradient images. The present study shows that the images obtained by applying stochastic gradient masks appear much clearer and sharper as far as edge detection is concerned.
Keywords: Image segmentation, edge detection, noisy images, spatial filters, stochastic gradient kernel.
1681 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear system with a single input and single output can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods using the normal equation and second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model ability. Finally, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.
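As a sketch of the two fitting routes compared above, the code below fits a linear single-input single-output model once with the closed-form normal equation and once with mini-batch gradient descent with momentum; the data and hyperparameters are synthetic and illustrative, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SISO data: y = 2.0*x - 1.0 plus Gaussian noise.
x = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * x - 1.0 + 0.05 * rng.standard_normal((200, 1))
X = np.hstack([x, np.ones_like(x)])          # design matrix with bias column

# 1) Normal equation: theta = (X^T X)^{-1} X^T y.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# 2) Mini-batch gradient descent with momentum on the same least-squares loss.
theta = np.zeros((2, 1))
velocity = np.zeros_like(theta)
lr, mu, batch = 0.1, 0.9, 32
for epoch in range(200):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        grad = 2 * X[b].T @ (X[b] @ theta - y[b]) / len(b)
        velocity = mu * velocity - lr * grad
        theta = theta + velocity

print(theta_ne.ravel(), theta.ravel())       # both should be close to [2, -1]
```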
1680 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model
Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf
Abstract:
The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks and 4 directional modes for 16x16 macroblocks and 8x8 chroma blocks, respectively. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, the intra coding of H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but the computational complexity increases significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on a characterization of homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas and a quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, the smaller blocks are predicted by gradient prediction and quadratic prediction, while the bigger blocks are predicted by gradient prediction. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality with about a 1.94% bit rate increase on average.
Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.
1679 Flexural Strength Design of RC Beams with Consideration of Strain Gradient Effect
Authors: Mantai Chen, Johnny Ching Ming Ho
Abstract:
The stress-strain relationship of concrete under flexure is one of the essential parameters in assessing the ultimate flexural strength capacity of RC beams. Currently, the concrete stress-strain curve in flexure is obtained by incorporating a constant scale-down factor of 0.85 in the uniaxial stress-strain curve. However, it has been revealed that a strain gradient improves the maximum concrete stress under flexure and that the concrete stress-strain curve is strain-gradient dependent. Based on the strain-gradient-dependent concrete stress-strain curve, the investigation of the combined effects of strain gradient and concrete strength on the flexural strength of RC beams is extended by theoretical analysis to high strength concrete up to 100 MPa. As an extension and application of the authors' previous study, a new flexural strength design method incorporating the combined effects of strain gradient and concrete strength is developed. A set of equivalent rectangular concrete stress block parameters is proposed and applied to produce a series of design charts showing that the flexural strength of RC beams is improved when the strain gradient effect is considered.
Keywords: Beams, Equivalent concrete stress block, Flexural strength, Strain gradient.
1678 Iris Recognition Based On the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biological feature of the human body and has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. First, the iris region is located and then remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on a statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to cover the localization (variation) of the features, the rectangular iris image is partitioned into N overlapped sub-images (blocks); from each block a set of different average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean-distance-based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.
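A rough, simplified sketch of block-wise directional gradient features with Euclidean matching of the kind described above (the block layout, image sizes and random test images are placeholders, not the paper's exact configuration):

```python
import numpy as np

def directional_gradient_features(img, block=(20, 20), step=(10, 10)):
    """Average first-order gradient magnitudes per overlapped block along
    the horizontal, vertical and the two diagonal directions."""
    img = img.astype(float)
    gh = np.abs(img[:, 1:] - img[:, :-1])          # horizontal differences
    gv = np.abs(img[1:, :] - img[:-1, :])          # vertical differences
    gd1 = np.abs(img[1:, 1:] - img[:-1, :-1])      # main diagonal
    gd2 = np.abs(img[1:, :-1] - img[:-1, 1:])      # anti-diagonal
    feats = []
    H, W = img.shape
    for r in range(0, H - block[0] + 1, step[0]):
        for c in range(0, W - block[1] + 1, step[1]):
            for g in (gh, gv, gd1, gd2):
                feats.append(g[r:r + block[0] - 1, c:c + block[1] - 1].mean())
    return np.array(feats)

def match(probe, templates):
    """Return the index of the template with the smallest Euclidean distance."""
    dists = [np.linalg.norm(probe - t) for t in templates]
    return int(np.argmin(dists))

# Toy usage with random "unwrapped iris" images of size 60x360.
rng = np.random.default_rng(1)
gallery = [rng.random((60, 360)) for _ in range(3)]
templates = [directional_gradient_features(g) for g in gallery]
probe = directional_gradient_features(gallery[1] + 0.01 * rng.random((60, 360)))
print(match(probe, templates))   # expected: 1
```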
1677 Learning Flexible Neural Networks for Pattern Recognition
Authors: A. Mirzaaghazadeh, H. Motameni, M. Karshenas, H. Nematzadeh
Abstract:
Learning the gradient (slope) of a neuron's activation function, like the weights of the links, produces a new property: flexibility. In flexible neural networks, because the operation of the neurons is supervised and controlled, the whole burden of learning is not carried by the link weights; in each learning step the neurons, in fact the gradients of their activation functions, cooperate to achieve the learning goal, so the number of learning steps decreases considerably. Furthermore, learning the neuron parameters makes them immune to changes in their inputs and to the factors which cause such changes. Likewise, the initial selection of the weights, the type of activation function, the initial gradient of the activation function, and the fixed factor which is multiplied by the gradient of the error to calculate the changes in the weights and in the activation function gradient all have a direct effect on the convergence of the network during learning. A minimal sketch of this idea is shown below.
Keywords: Back propagation, Flexible, Gradient, Learning, Neural network, Pattern recognition.
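The sketch trains the slope (gradient) of the activation function together with the weights for a single sigmoid neuron y = sigmoid(a·(wᵀx + b)); the data and learning rates are invented for illustration, and this is not the full flexible-network algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data, separable by the sign of x0 + x1.
X = rng.standard_normal((200, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)

w = 0.1 * rng.standard_normal(2)
b = 0.0
a = 1.0                        # slope of the activation, trained like a weight

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr_w, lr_a = 0.5, 0.1
for epoch in range(500):
    net = X @ w + b            # pre-activation
    y = sigmoid(a * net)       # flexible activation: slope a is a parameter
    delta = (y - t) * y * (1 - y) / len(X)   # grad of 0.5*mean sq. error w.r.t. a*net
    w -= lr_w * (X.T @ (delta * a))          # chain rule: d(a*net)/dw = a*x
    b -= lr_w * np.sum(delta * a)
    a -= lr_a * np.sum(delta * net)          # chain rule: d(a*net)/da = net

acc = np.mean((sigmoid(a * (X @ w + b)) > 0.5) == (t > 0.5))
print(f"learned slope a = {a:.3f}, training accuracy = {acc:.2f}")
```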
1676 On a Conjecture Regarding the Adam Optimizer
Authors: Mohamed Akrout, Douglas Tweed
Abstract:
The great success of deep learning relies on efficient optimizers, which are the algorithms that decide how to adjust network weights and biases based on gradient information. One of the most effective and widely used optimizers in recent years has been the method of adaptive moments, or Adam, but the mathematical reasons behind its effectiveness are still unclear. Attempts to analyse its behaviour have remained incomplete, in part because they hinge on a conjecture which has never been proven, regarding ratios of powers of the first and second moments of the gradient. Here we show that this conjecture is in fact false, but that a modified version of it is true, and can take its place in analyses of Adam.
Keywords: Adam optimizer, Bock’s conjecture, stochastic optimization, average regret.
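For reference, the Adam update that the conjecture concerns maintains exponential moving averages m and v of the gradient and its square and scales the step by their bias-corrected ratio; a compact sketch of the standard rule:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba): returns new parameters and moment estimates."""
    m = beta1 * m + (1 - beta1) * grad            # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2         # second moment (mean of squared gradients)
    m_hat = m / (1 - beta1**t)                    # bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)   # close to the minimizer at the origin
```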
1675 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots
Authors: Meng Wu
Abstract:
Motion planning is a common task required of robots. A strategy combining Ant Colony Optimization (ACO) and a gravity gradient inversion algorithm is proposed for the motion planning of mobile robots. In this paper, in order to realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. The obstacles around the mobile robot cause gravity gradient anomalies, and a gradiometer is installed on the mobile robot to detect them. After obtaining the anomalies, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles. The relative distance and orientation deduced from the gravity gradient inversion algorithm are employed as the cost function in the ACO algorithm to realize motion planning. The proposed strategy is validated by simulation and experimental results.
Keywords: Motion planning, gravity gradient inversion algorithm, ant colony optimization.
1674 Economy-Based Computing with WebCom
Authors: Adarsh Patil, David A. Power, John P. Morrison
Abstract:
Grid environments consist of the volatile integration of discrete heterogeneous resources. The notion of the Grid is to unite different users and organisations and pool their resources into one large computing platform where they can harness, inter-operate, collaborate and interact. If the Grid community is to achieve this objective, then participants (users and organisations) need to be willing to donate or share their resources and permit other participants to use them. Resources do not have to be shared at all times, since that may result in users not having access to their own resource. The idea of reward-based computing was developed to address the sharing problem in a pragmatic manner. Participants are offered a reward to donate their resources to the Grid. A reward may include monetary recompense or a pro rata share of available resources when constrained. This latter point may imply a quality of service, which in turn may require some globally agreed reservation mechanism. This paper presents a platform for economy-based computing using the WebCom Grid middleware. Using this middleware, participants can configure their resources at times and priority levels that suit their local usage policy. The WebCom system accounts for the processing done on individual participants' resources and rewards them accordingly.
Keywords: WebCom, Economy-based computing, WebCom Grid Bank Reward, Condensed Graph, Distributor, Accounting, GridPoint.
1673 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly towards the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modification against the Barzilai and Borwein method, the alternate minimization gradient method and Yuan's method on quadratic functions in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.
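For context, the best-known modified step size for steepest descent is the Barzilai-Borwein rule αk = sᵀs / sᵀy with s = xk − xk−1 and y = gk − gk−1; a sketch for a quadratic objective is given below (the paper's own new step sizes are not reproduced here):

```python
import numpy as np

def steepest_descent_bb(A, b, x0, max_iter=200, tol=1e-10):
    """Gradient descent for f(x) = 0.5 x^T A x - b^T x with Barzilai-Borwein steps."""
    x = x0.astype(float)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A)          # conservative first step
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)            # BB1 step size for the next iteration
        x, g = x_new, g_new
    return x, k

# Random symmetric positive definite quadratic test problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x, iters = steepest_descent_bb(A, b, np.zeros(50))
print(iters, np.linalg.norm(A @ x - b))
```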
1672 Impact of Viscous and Heat Relaxation Loss on the Critical Temperature Gradients of Thermoacoustic Stacks
Authors: Zhibin Yu, Artur J. Jaworski, Abdulrahman S. Abduljalil
Abstract:
A stack with a small critical temperature gradient is desirable for a standing-wave thermoacoustic engine in order to obtain a low onset temperature difference (the minimum temperature difference needed to start the engine's self-oscillation). The viscous and heat relaxation losses in the stack determine the critical temperature gradient. In this work, a dimensionless critical temperature gradient factor is obtained based on linear thermoacoustic theory. It is shown that the impedance determines the proportion between the viscous loss, the heat relaxation losses and the power produced from the heat energy. This reveals the effects of the channel dimensions, the geometrical configuration and the local acoustic impedance on the critical temperature gradient in stacks. The numerical analysis shows that there exists a possible optimum combination of these parameters which leads to the lowest critical temperature gradient. Furthermore, several different geometries have been tested and compared numerically.
Keywords: Critical temperature gradient, heat relaxation, stack, viscous effect.
1671 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams
Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding
Abstract:
A size-dependent Euler–Bernoulli beam model, which accounts for the nonlocal stress field, the strain gradient field and a higher-order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. The analytical solutions based on different continuum theories are compared. The effect of the higher-order inertia terms is extremely significant in the high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, while for the nonlocal strain gradient theory the solutions diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain and cannot be neglected when the material strain length scale parameter is comparable with the beam thickness. The influence of each of the three size effect parameters on the natural frequencies is investigated. The natural frequencies increase with increasing material strain gradient length scale parameter or with decreasing velocity gradient length scale parameter and nonlocal parameter.
Keywords: Euler-Bernoulli beams, free vibration, higher order inertia, nonlocal strain gradient theory, velocity gradient.
1670 Comparison of Three Versions of Conjugate Gradient Method in Predicting an Unknown Irregular Boundary Profile
Authors: V. Ghadamyari, F. Samadi, F. Kowsary
Abstract:
An inverse geometry problem is solved to predict an unknown irregular boundary profile. The aim is to minimize the objective function, which is the difference between the real and computed temperatures, using three different versions of the Conjugate Gradient Method. The gradient of the objective function, which is needed in this method, is obtained by solving the adjoint equation. The abilities of the three versions of the Conjugate Gradient Method to predict the boundary profile are compared using a numerical algorithm based on the method. The predicted shapes show that, owing to its convergence rate and the accuracy of the predicted values, the Powell-Beale version of the method is more effective than the Fletcher-Reeves and Polak-Ribiere versions.
Keywords: Boundary elements, Conjugate Gradient Method, Inverse Geometry Problem, Sensitivity equation.
1669 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks
Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing
Abstract:
The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weight and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.
1668 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C
Authors: Minghui Wang, Luping Xu, Juntao Zhang
Abstract:
Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the premise of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of the method.
Keywords: Iterative method, symmetric arrowhead matrix, conjugate gradient algorithm.
1667 Advanced Neural Network Learning Applied to Pulping Modeling
Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.
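A minimal sketch of the preconditioned conjugate gradient iteration for M⁻¹Ax = M⁻¹b, shown here with a simple Jacobi (diagonal) preconditioner on a toy linear system rather than the network-training variants (PCGF, PCGP, PCGB) discussed above:

```python
import numpy as np

def pcg(A, b, M_inv_diag, x0=None, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A,
    with a diagonal preconditioner: implicitly solves M^-1 A x = M^-1 b."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv_diag * r                   # apply M^-1
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system with Jacobi preconditioner M = diag(A).
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```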
1666 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning
Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping of sugar maple application. Three-layer feed-forward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.
1665 Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang
Abstract:
Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important types of moving objects in ADAS are pedestrians and scooters. In real-world systems, there are two important challenges for MOD: the computational complexity and the detection accuracy. Histogram of oriented gradient (HOG) features can easily detect the edges of an object while being robust to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image has to be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is used. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, real test results show that the proposed HUG features achieve better classification performance than the HOG features.
Keywords: Moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine.
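The HUG descriptor itself is not specified here, but the HOG-plus-linear-SVM baseline it is compared against can be sketched with common libraries (scikit-image and scikit-learn); the window size, HOG configuration and random training images below are placeholders, so the code only illustrates the pipeline, not real detection accuracy:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder 128x64 grayscale "pedestrian" and "background" windows.
positives = rng.random((20, 128, 64))
negatives = rng.random((20, 128, 64))

def hog_features(img):
    # Common HOG configuration: 9 orientations, 8x8 cells, 2x2 blocks.
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

X = np.array([hog_features(im) for im in np.concatenate([positives, negatives])])
y = np.array([1] * len(positives) + [0] * len(negatives))

clf = LinearSVC(C=1.0)
clf.fit(X, y)

# Score a new window; in a detector this would run over a sliding window.
window = rng.random((128, 64))
print(clf.decision_function([hog_features(window)]))
```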
1664 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm usually used for nonlinear least squares is presented and combined with the modified back propagation algorithm, yielding a new fast-training multilayer perceptron (MLP) algorithm (CGFR/AG). The approach presented in the paper consists of three steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient of the error with respect to the weight and gain values, and (3) determination of the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
1663 Signature Recognition Using Conjugate Gradient Neural Networks
Authors: Jamal Fathi Abu Hasna
Abstract:
There are two common methodologies for verifying signatures: the functional approach and the parametric approach. This paper presents a new approach for dynamic handwritten signature verification (HSV) using a neural network, with verification by a Conjugate Gradient Neural Network (NN). It is yet another avenue in the approach to HSV that is found to produce excellent results when compared with other dynamic methods. Experimental results show that the system is insensitive to the order of base-classifiers and achieves a high verification ratio.
Keywords: Signature Verification, MATLAB Software, Conjugate Gradient, Segmentation, Skilled Forgery, Genuine.
1662 A Simple Heat and Mass Transfer Model for Salt Gradient Solar Ponds
Authors: Safwan Kanan, Jonathan Dewsbury, Gregory Lane-Serff
Abstract:
A salinity gradient solar pond is a free-energy-source system for collecting, converting and storing solar energy as heat. In this paper, the principles of the solar pond are explained. A mathematical model is developed to describe and simulate the heat and mass transfer behaviour of a salinity gradient solar pond. MATLAB code is written to solve the one-dimensional heat and mass transfer equations with a finite difference method. Temperature profiles and concentration distributions are calculated. The numerical results are validated against experimental data and are found to be in good agreement.
Keywords: Finite Difference method, Salt-gradient solar-pond, Solar energy, Transient heat and mass transfer.
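A skeletal version of the kind of one-dimensional finite-difference time stepping used in such models, shown for heat diffusion alone with made-up pond depth, diffusivity and boundary temperatures (the paper's full model also couples salt diffusion and solar absorption):

```python
import numpy as np

# Pond discretisation (illustrative values only).
depth = 1.5                 # m
n = 61
dz = depth / (n - 1)
alpha = 1.5e-7              # thermal diffusivity, m^2/s (typical for water)
dt = 0.4 * dz**2 / alpha    # explicit scheme: stable when alpha*dt/dz^2 <= 0.5

T = np.full(n, 25.0)        # initial temperature, deg C
T_surface, T_bottom = 25.0, 60.0   # fixed boundary temperatures

r = alpha * dt / dz**2
for step in range(20000):
    T_new = T.copy()
    # Forward-time, centred-space update for the interior nodes.
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0], T_new[-1] = T_surface, T_bottom
    T = T_new

# With fixed ends and pure diffusion, the steady-state profile is linear in depth.
print(T[::10])
```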