Search results for: Padé approximation.
288 High Speed Video Transmission for Telemedicine using ATM Technology
Authors: J. P. Dubois, H. M. Chiu
Abstract:
In this paper, we study statistical multiplexing of VBR video in ATM networks. ATM promises to provide high-speed, real-time, multi-point to central video transmission for telemedicine applications in rural hospitals and in emergency medical services. Video coders are known to produce variable bit rate (VBR) signals, and the effects of aggregating these VBR signals need to be determined in order to design a telemedicine network infrastructure capable of carrying them. We first model the VBR video signal and simulate it using a generic continuous-data autoregressive (AR) scheme. We carry out the queueing analysis with the Fluid Approximation Model (FAM) and the Markov Modulated Poisson Process (MMPP). The study has shown a trade-off: multiplexing VBR signals reduces burstiness and improves resource utilization, but the buffer size needs to be increased with an associated economic cost. We also show that the MMPP model and the Fluid Approximation Model fit best, respectively, the cell region and the burst region. Therefore, a hybrid of MMPP and FAM completely characterizes the overall performance of the ATM statistical multiplexer. The ramifications of this technology are clear: speed, reliability (lower loss rate and jitter), and increased capacity in video transmission for telemedicine. With migration to full IP-based networks still a long way from achieving both high speed and high quality of service, the proposed ATM architecture will remain of significant use for telemedicine.
Keywords: ATM, multiplexing, queueing, telemedicine, VBR.
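The abstract models each VBR source with a first-order autoregressive process before the queueing analysis. As a rough illustration of that idea only (not the authors' exact model; the AR coefficients and mean rate below are placeholders), a short Python sketch generates AR(1) bit-rate traces and shows how statistically multiplexing independent sources reduces relative burstiness:

```python
# Minimal sketch, assuming a generic AR(1) VBR source:
# rate_n = a*rate_{n-1} + b*w_n + (1-a)*mean_rate, with Gaussian noise w_n.
import numpy as np

def ar1_vbr_source(n_frames, a=0.88, b=0.11, mean_rate=0.52, seed=None):
    """Generate a non-negative per-frame bit-rate trace (illustrative units)."""
    rng = np.random.default_rng(seed)
    rate = np.empty(n_frames)
    rate[0] = mean_rate
    for n in range(1, n_frames):
        rate[n] = a * rate[n - 1] + b * rng.standard_normal() + (1 - a) * mean_rate
    return np.clip(rate, 0.0, None)

# Aggregate (statistically multiplex) 20 independent sources and compare the
# coefficient of variation of one source with that of the aggregate.
sources = np.array([ar1_vbr_source(10_000, seed=k) for k in range(20)])
aggregate = sources.sum(axis=0)
cv_single = sources[0].std() / sources[0].mean()
cv_aggregate = aggregate.std() / aggregate.mean()
print(f"CV single source: {cv_single:.3f}, CV aggregate of 20: {cv_aggregate:.3f}")
```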
287 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform-power and non-uniform power (with or without power control), and different antenna models, omni-directional and directional. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models on the basis of latency as the performance measure.
Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
286 Design and Implementation of a 10-bit SAR ADC
Authors: Hasmayadi Abdul Majid, Rohana Musa
Abstract:
This paper presents the development of a 38.5 kS/s 10-bit low power SAR ADC which is realized in MIMOS’s 0.35 µm CMOS process. The design uses a resistive DAC, a dynamic comparator with pre-amplifier and SAR digital logic to create 10 effective bits while consuming less than 7.8 mW with a 3.3 V power supply.
Keywords: Successive Approximation Register Analog-to-Digital Converter, SAR ADC, Resistive DAC.
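The circuit itself is a custom 0.35 µm CMOS design, but the successive-approximation logic it implements is a binary search against the DAC output. A hedged software sketch of that conversion logic, with an idealized DAC and comparator and placeholder reference/resolution values:

```python
# Illustrative sketch of SAR conversion: test bits from MSB to LSB, keeping a bit
# whenever the ideal DAC output for the trial code does not exceed the input.
def sar_convert(v_in, v_ref=3.3, n_bits=10):
    """Return the n-bit code for v_in, assuming an ideal DAC and comparator."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set the next bit
        v_dac = v_ref * trial / (1 << n_bits)     # ideal DAC output for the trial code
        if v_in >= v_dac:                         # comparator decision
            code = trial                          # keep the bit
    return code

print(sar_convert(1.0))   # ~310 for a 1.0 V input with a 3.3 V reference
```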
285 Oscillatory Electroosmotic Flow of Power-Law Fluids in a Microchannel
Authors: Rubén Baños, José Arcos, Oscar Bautista, Federico Méndez
Abstract:
Oscillatory electroosmotic flow (OEOF) of power-law fluids through a microchannel is studied numerically. A time-dependent external electric field (AC) is suddenly imposed at the ends of the microchannel, which induces the fluid motion. The continuity and momentum equations in the x and y directions for the flow field were simplified in the limit of the lubrication approximation theory (LAT) and then solved using a numerical scheme. The solution of the electric potential is based on the Debye-Hückel approximation, which assumes that the surface potential is small, say, smaller than 0.025 V, and a symmetric (z : z) electrolyte. Our results suggest that the velocity profiles across the channel width are controlled by the following dimensionless parameters: the angular Reynolds number, Reω; the electrokinetic parameter, κ̄, defined as the ratio of the characteristic length scale to the Debye length; the parameter λ, which represents the ratio of the Helmholtz-Smoluchowski velocity to the characteristic length scale; and the flow behavior index, n. The results also reveal that the velocity profiles become increasingly non-uniform across the channel width as Reω and κ̄ are increased, so OEOF can be very useful in microfluidic devices such as micro-mixers.
Keywords: Oscillatory electroosmotic flow, Non-Newtonian fluids, power-law model, low zeta potentials.
284 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results
Authors: C. Villegas-Quezada, J. Climent
Abstract:
Several works regarding facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper, a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other pixel characteristics of the image, generating a set of "p" variables. The multivariate data set is approximated with different polynomials minimizing the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the method is able to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials with each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
283 Note on the Necessity of the Patch Test
Authors: Rado Flajs, Miran Saje
Abstract:
We present a simple nonconforming approximation of the linear two-point boundary value problem which violates the patch test requirements. Nevertheless, the solutions obtained from this type of approximation converge to the exact solution.
Keywords: Generalized patch test, Irons' patch test, nonconforming finite element, convergence.
282 Estimation of the Bit Side Force by Using Artificial Neural Network
Authors: Mohammad Heidari
Abstract:
Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast the well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is a difficult, if not impossible, task to identify the real reason why a horizontal well is not producing what was forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion or snaky well patterns created during drilling. Although drillers aim to drill a constant-inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors which play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant-inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force nil at the bit. This approach assumes that there exists no formation force at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques. In this study, neural networks have been employed as a general approximation tool for estimation of the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameters, and the results are compared with exact analysis. A Back Propagation Neural network (BPN) is used for the approximation of the bit side forces. The resulting low relative error of the test indicates the usability of the BPN in this area.
Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.
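As a hedged illustration of using a small backpropagation network as a general function approximator, the sketch below fits a toy regression. The input features, the synthetic target function and all parameter values are placeholders, not the paper's BHA data or network:

```python
# Sketch only: a small MLP trained by backpropagation (scikit-learn) approximating
# a placeholder "bit side force" function of hypothetical BHA descriptors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: [weight on bit, stabilizer spacing, hole inclination]
X = rng.uniform([50, 5, 80], [250, 25, 95], size=(500, 3))
y = 0.02 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * (X[:, 2] - 90)   # placeholder target

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X[:400], y[:400])
mae = np.abs(model.predict(X[400:]) - y[400:]).mean()
print(f"mean absolute error on held-out samples: {mae:.3f}")
```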
281 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of a Digital-Noiseless, Ultra-High-Speed Image Sensor
Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi
Abstract:
Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate up to 100 Mfps and above. In order to suppress electro-magnetic noise at such high frequencies, a digital-noiseless imaging transfer scheme has been developed utilizing solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation as well as the phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate the phase delay through fundamental RC networks is also obtained. The estimation error of both expressions is much smaller than in previous works, less than 2% for most cases. The framework of this analysis can be extended to address similar issues of other VLSI structures.
Keywords: Dimensional Analysis, ISIS, Digital-noiseless, RC network, Attenuation, Phase Delay, Elmore model
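The Elmore metric used above as the first-order approximation has a simple closed form for an RC ladder: the delay at node N is the sum over capacitors of each capacitance times the resistance shared with the source path. A short sketch, with purely illustrative segment values rather than the sensor's extracted parasitics:

```python
# First-order Elmore delay at the end of an RC chain: tau = sum_k C_k*(R_1+...+R_k).
def elmore_delay_chain(resistances, capacitances):
    tau, r_cum = 0.0, 0.0
    for r, c in zip(resistances, capacitances):
        r_cum += r          # resistance shared between the driver and node k
        tau += r_cum * c
    return tau

n = 100                      # number of identical RC segments (illustrative)
R, C = 10.0, 1e-15           # ohms and farads per segment (placeholder values)
tau = elmore_delay_chain([R] * n, [C] * n)
print(f"Elmore delay ≈ {tau * 1e12:.2f} ps")   # equals R*C*n*(n+1)/2 here
```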
280 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.
279 Variational EM Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we propose a variational EM inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we derive approximately the posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps repeat iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
Keywords: Bayesian rule, Gaussian process classification model with multiclass, Gaussian process prior, human action classification, Laplace approximation, variational EM algorithm.
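The expectation step builds on a Laplace approximation of the latent-function posterior. A hedged sketch of that idea for the simpler binary (logistic) GP classifier is given below; the kernel, data and Newton settings are illustrative and this is not the paper's multi-class formulation:

```python
# Sketch: Newton iteration to the posterior mode f_hat for a Bernoulli-logit GP,
# the quantity around which the Laplace approximation places a Gaussian.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def laplace_mode(K, y, n_iter=20):
    n = len(y)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-f))              # class probabilities
        W = np.diag(pi * (1.0 - pi))               # negative Hessian of log-likelihood
        # Newton step: f <- (I + K W)^{-1} K (W f + y - pi)
        f = np.linalg.solve(np.eye(n) + K @ W, K @ (W @ f + y - pi))
    return f

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
f_hat = laplace_mode(rbf_kernel(X), y)
print("training accuracy:", ((f_hat > 0) == (y > 0.5)).mean())
```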
278 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method
Authors: P. Ashok, G. M. Kadhar Nawaz
Abstract:
Rough set theory is used to handle uncertainty and incomplete information by applying two accurate sets, the lower approximation and the upper approximation. In this paper, rough clustering is improved by adopting Similarity, Dissimilarity-Similarity and Entropy based initial centroid selection methods: three clustering algorithms, namely Entropy based Rough K-Means (ERKM), Similarity based Rough K-Means (SRKM) and Dissimilarity-Similarity based Rough K-Means (DSRKM), were developed and executed on the yeast dataset. The rough clustering algorithms are validated by the cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are very much different from the rest of the objects in the clusters. The Entropy based Rough Outlier Factor (EROF) method is seen to detect outliers effectively for the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 can detect outliers in the boundary region, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen in this range. Experimental results show that the EROF method on the clustering algorithms performed very well and is suitable for detecting outliers effectively for all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.
Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.
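For orientation, a hedged sketch of one rough K-means iteration in the Lingras-West style that underlies these variants: a point joins the lower approximation of its nearest cluster unless other centroids are nearly as close, in which case it joins the upper approximations (boundary) of all such clusters. The threshold and weights below are illustrative, not the paper's tuned values:

```python
import numpy as np

def rough_assign(X, centroids, eps=1.2):
    """Return lower- and upper-approximation index lists for each cluster."""
    k = len(centroids)
    lower, upper = [[] for _ in range(k)], [[] for _ in range(k)]
    for i, x in enumerate(X):
        d = np.linalg.norm(centroids - x, axis=1)
        nearest = int(np.argmin(d))
        ties = [j for j in range(k)
                if j != nearest and d[j] / (d[nearest] + 1e-12) <= eps]
        if ties:                                   # ambiguous: boundary region
            for j in [nearest] + ties:
                upper[j].append(i)
        else:                                      # unambiguous: lower approximation
            lower[nearest].append(i)
            upper[nearest].append(i)
    return lower, upper

def update_centroids(X, lower, upper, w_low=0.7, w_up=0.3):
    new = []
    for lo, up in zip(lower, upper):
        boundary = list(set(up) - set(lo))
        if lo and boundary:
            new.append(w_low * X[lo].mean(0) + w_up * X[boundary].mean(0))
        else:
            new.append(X[lo or up].mean(0))
    return np.array(new)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in (0, 3)])
centroids = X[[0, 60]]
lower, upper = rough_assign(X, centroids)
print("updated centroids:\n", update_centroids(X, lower, upper))
```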
277 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. Polynomial coefficients have been later encoded and handled for generating chirps at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
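To illustrate the block-approximation idea with one of the representations mentioned (SVD), the sketch below replaces each 4 × 4 block by its rank-1 approximation, i.e. 8 coefficients instead of 16, and reports the PSNR. The test image and rank are illustrative, not the paper's evaluation setup:

```python
import numpy as np

def block_svd_approx(img, block=4, rank=1):
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for r in range(0, h, block):
        for c in range(0, w, block):
            U, s, Vt = np.linalg.svd(img[r:r+block, c:c+block].astype(float))
            out[r:r+block, c:c+block] = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # placeholder image
approx = block_svd_approx(img)
mse = np.mean((img - approx) ** 2)
print(f"PSNR of rank-1 block approximation: {10 * np.log10(255**2 / mse):.2f} dB")
```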
276 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging
Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan
Abstract:
In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. The qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with normal and abnormal status are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal images show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors to delineate the normal and the abnormal groups.
Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.
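A hedged sketch of the feature-extraction step with PyWavelets: a level-4 Haar decomposition of a grayscale region of interest followed by simple statistics of the approximation and horizontal-detail coefficients. The synthetic array stands in for the segmented primary-compressive region of a radiograph:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
roi = rng.normal(128, 30, size=(128, 128))          # placeholder ROI, not a radiograph

coeffs = pywt.wavedec2(roi, "haar", level=4)
cA4 = coeffs[0]                                     # level-4 approximation
cH4 = coeffs[1][0]                                  # level-4 horizontal detail

def stats(c):
    c = c.ravel()
    return {"mean": c.mean(), "median": np.median(c), "std": c.std(),
            "mean_abs_dev": np.mean(np.abs(c - c.mean()))}

print("approximation:", stats(cA4))
print("horizontal detail:", stats(cH4))
```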
275 On the Invariant Uniform Roe Algebra as Crossed Product
Authors: Kankeyanathan Kannan
Abstract:
The uniform Roe C*-algebra (also called the uniform translation C*-algebra) provides a link between coarse geometry and C*-algebra theory. The uniform Roe algebra has a great importance in geometry, topology and analysis. We consider some of the elementary concepts associated with coarse spaces.
Keywords: Invariant Approximation Property, Uniform Roe algebras.
274 Best Coapproximation in Fuzzy Anti-n-Normed Spaces
Authors: J. Kavikumar, N. S. Manian, M. B. K. Moorthy
Abstract:
The main purpose of this paper is to consider a new kind of approximation, called t-best coapproximation, in fuzzy n-normed spaces. The set of all t-best coapproximations defines the t-co-proximinal, t-co-Chebyshev and F-best coapproximation sets, and several theorems pertaining to these sets are proved.
Keywords: Fuzzy-n-normed space, best coapproximation, co-proximinal, co-Chebyshev, F-best coapproximation, orthogonality
273 Computable Function Representations Using Effective Chebyshev Polynomial
Authors: Mohammed A. Abutheraa, David Lester
Abstract:
We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for evaluation of transcendental functions.
Keywords: Approximation Theory, Chebyshev Polynomial, Computable Functions, Computable Real Arithmetic, Integration, Numerical Analysis.
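As a hedged illustration of the underlying representation (not the paper's exact-arithmetic machinery), the sketch below builds a truncated Chebyshev expansion of a transcendental function from values at Chebyshev nodes and checks the maximum error on the interval; the degree and the function are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

deg = 12
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev nodes on [-1, 1]
coeffs = C.chebfit(nodes, np.exp(nodes), deg)           # interpolate exp at the nodes

xs = np.linspace(-1, 1, 2001)
max_err = np.max(np.abs(C.chebval(xs, coeffs) - np.exp(xs)))
print(f"degree-{deg} Chebyshev approximation of exp, max error ≈ {max_err:.2e}")
```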
272 ZMP Based Reference Generation for Biped Walking Robots
Authors: Kemalettin Erbatur, Özer Koca, Evrim Taşkıran, Metin Yılmaz, Utku Seven
Abstract:
The last fifteen years have witnessed fast improvements in the field of humanoid robotics. The human-like robot structure is more suitable to the human environment, with its superior obstacle avoidance properties, when compared with wheeled service robots. However, walking control for bipedal robots is a challenging task due to their complex dynamics. Stable reference generation plays a very important role in control. The Linear Inverted Pendulum Model (LIPM) and the Zero Moment Point (ZMP) criterion are applied in a number of studies for stable walking reference generation of biped walking robots. This paper follows this main approach too. We propose a natural and continuous ZMP reference trajectory for a stable and human-like walk. The ZMP reference trajectories move forward under the sole of the support foot when the robot body is supported by a single leg. The robot center of mass trajectory is obtained from predefined ZMP reference trajectories by a Fourier series approximation method. The Gibbs phenomenon problem common with Fourier approximations of discontinuous functions is avoided by employing continuous ZMP references. Also, these ZMP reference trajectories possess pre-assigned single and double support phases, which are very useful in experimental tuning work. The ZMP based reference generation strategy is tested via three-dimensional full-dynamics simulations of a 12-degrees-of-freedom biped robot model. Simulation results indicate that the proposed reference trajectory generation technique is successful.
Keywords: Biped robot, Linear Inverted Pendulum Model, Zero Moment Point, Fourier series approximation.
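The Fourier-series idea can be sketched as follows: expand a periodic, continuous ZMP reference in a truncated Fourier series and scale each harmonic through the LIPM relation p = c − (z_c/g)·c̈ to obtain the center-of-mass trajectory. The gait period, sway amplitude and CoM height below are illustrative, not the paper's robot parameters:

```python
import numpy as np

g, z_c = 9.81, 0.6            # gravity and assumed constant CoM height [m]
T = 1.6                       # full gait period [s] (illustrative)
omega = 2 * np.pi / T
N = 25                        # number of harmonics kept

t = np.linspace(0.0, T, 4000, endpoint=False)
# Placeholder lateral ZMP reference: smooth, continuous sway between the two feet.
zmp_ref = 0.05 * np.tanh(5 * np.sin(omega * t))

com = np.zeros_like(t)
for k in range(1, N + 1):
    # Fourier coefficients of the ZMP reference (numerical quadrature).
    a_k = 2.0 / T * np.trapz(zmp_ref * np.cos(k * omega * t), t)
    b_k = 2.0 / T * np.trapz(zmp_ref * np.sin(k * omega * t), t)
    scale = 1.0 / (1.0 + z_c * (k * omega) ** 2 / g)   # per-harmonic LIPM scaling
    com += scale * (a_k * np.cos(k * omega * t) + b_k * np.sin(k * omega * t))

print("ZMP sway amplitude:", zmp_ref.max(), " CoM sway amplitude:", com.max())
```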
271 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When learning objects are complicated, problems such as unsuccessful learning or a significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output layer unit show oscillation during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multi-layered neural network, and the validity of the multi-stage learning method is demonstrated. Specifically, this paper verifies by computer experiments that both the learning accuracy and the learning time of the BP method, used as the learning rule of the multi-stage learning method, are improved. In learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur in learning. Furthermore, the authors discuss the reasons why the errors of some data remain large even after learning, observing behaviors during learning.
Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
270 Design and Implementation of a 10-bit SAR ADC with A Programmable Reference
Authors: Hasmayadi Abdul Majid, Yuzman Yusoff, Noor Shelida Salleh
Abstract:
This paper presents the development of a single-ended 38.5 kS/s 10-bit programmable-reference SAR ADC which is realized in MIMOS's 0.35 µm CMOS process. The design uses a resistive DAC, a dynamic comparator with pre-amplifier and SAR digital logic to create 10 effective bits. A programmable reference circuit allows the ADC to operate with different input ranges from 0.6 V to 2.1 V. The ADC consumes less than 7.5 mW power with a 3 V supply.
Keywords: Successive Approximation Register Analog-to-Digital Converter, SAR ADC, Resistive DAC, Programmable Reference.
269 An Iterative Updating Method for Damped Gyroscopic Systems
Authors: Yongxin Yuan
Abstract:
The problem of updating damped gyroscopic systems using measured modal data can be mathematically formulated as the following two problems. Problem I: Given Ma ∈ R^{n×n}, Λ = diag{λ1, ··· , λp} ∈ C^{p×p}, X = [x1, ··· , xp] ∈ C^{n×p}, where p < n and both Λ and X are closed under complex conjugation in the sense that λ_{2j} = λ̄_{2j−1} ∈ C, x_{2j} = x̄_{2j−1} ∈ C^n for j = 1, ··· , l, and λk ∈ R, xk ∈ R^n for k = 2l + 1, ··· , p, find real-valued symmetric matrices D, K and a real-valued skew-symmetric matrix G (that is, G^T = −G) such that MaXΛ² + (D + G)XΛ + KX = 0. Problem II: Given real-valued symmetric matrices Da, Ka ∈ R^{n×n} and a real-valued skew-symmetric matrix Ga, find (D̂, Ĝ, K̂) ∈ SE such that ‖D̂ − Da‖² + ‖Ĝ − Ga‖² + ‖K̂ − Ka‖² = min_{(D,G,K)∈SE} (‖D − Da‖² + ‖G − Ga‖² + ‖K − Ka‖²), where SE is the solution set of Problem I and ‖·‖ is the Frobenius norm. This paper presents an iterative algorithm to solve Problem I and Problem II. By using the proposed iterative method, a solution of Problem I can be obtained within finitely many iteration steps in the absence of roundoff errors, and the minimum Frobenius norm solution of Problem I can be obtained by choosing a special kind of initial matrices. Moreover, the optimal approximation solution (D̂, Ĝ, K̂) of Problem II can be obtained by finding the minimum Frobenius norm solution of a changed Problem I. A numerical example shows that the introduced iterative algorithm is quite efficient.
Keywords: Model updating, iterative algorithm, gyroscopic system, partially prescribed spectral data, optimal approximation.
268 Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations
Authors: Nur Zakiah Mohd Saat, Abdul Aziz Jemain
Abstract:
Saddlepoint approximation is one of the tools used to obtain expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation. We demonstrate the density of a maximum likelihood estimator in exponential families.
Keywords: Exponential, maximum likelihood estimators, observed gap, saddlepoint approximations.
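To make the saddlepoint idea concrete on a case with a known answer (this is not the paper's Huzurbazar approximation for hypopnea gaps), the sketch below approximates the density of a sum of n Exp(1) gaps, i.e. a Gamma(n, 1) variable, using f̂(x) = (2πK''(ŝ))^(−1/2) exp(K(ŝ) − ŝx) with K(s) = −n log(1 − s) and K'(ŝ) = x:

```python
import numpy as np
from scipy.stats import gamma

def saddlepoint_gamma_sum(x, n):
    s_hat = 1.0 - n / x                     # solves K'(s) = n/(1-s) = x
    K = -n * np.log(1.0 - s_hat)            # cumulant generating function at s_hat
    K2 = n / (1.0 - s_hat) ** 2             # second derivative K''(s_hat)
    return np.exp(K - s_hat * x) / np.sqrt(2.0 * np.pi * K2)

n, x = 5, 6.0
print("saddlepoint:", saddlepoint_gamma_sum(x, n), " exact:", gamma.pdf(x, a=n))
```

For this family the saddlepoint density is exact up to Stirling's renormalization, which the printed comparison makes visible.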
267 Approximation of Sturm-Liouville Problems by Exponentially Weighted Legendre-Gauss Tau Method
Authors: Mohamed K. El Daou
Abstract:
We construct an exponentially weighted Legendre-Gauss Tau method for solving differential equations with oscillatory solutions. The proposed method is applied to Sturm-Liouville problems. Numerical examples illustrating the efficiency and the high accuracy of our results are presented.
Keywords: Oscillatory functions, Sturm-Liouville problems, Legendre polynomials, Gauss points.
266 Conjugate Gradient Algorithm for the Symmetric Arrowhead Solution of Matrix Equation AXB=C
Authors: Minghui Wang, Luping Xu, Juntao Zhang
Abstract:
Based on the conjugate gradient (CG) algorithm, the constrained matrix equation AXB=C and the associated optimal approximation problem are considered for symmetric arrowhead matrix solutions under the premise of consistency. The convergence results of the method are presented. Finally, a numerical example is given to illustrate the efficiency of this method.
Keywords: Iterative method, symmetric arrowhead matrix, conjugate gradient algorithm.
265 Arc Length of Rational Bezier Curves and Use for CAD Reparametrization
Authors: Maharavo Randrianarivony
Abstract:
The length of a given rational Bézier curve is efficiently estimated. Since a rational Bézier function is nonlinear, it is usually impossible to evaluate its length exactly. The length is approximated by using subdivision, and the accuracy of the approximation is investigated. In order to improve the efficiency, adaptivity is used with a length estimator. A rigorous theoretical analysis of the rate of convergence of the approximate length to the exact length is given. The required number of subdivisions to attain a prescribed accuracy is also analyzed. An application to CAD parametrization is briefly described. Numerical results are reported to supplement the theory.
Keywords: Adaptivity, Length, Parametrization, Rational Bezier.
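A hedged sketch of length estimation by adaptive subdivision for a rational Bézier curve: de Casteljau subdivision is applied to the homogeneous (weighted) control points, and each piece's length is bracketed between its chord and its control-polygon length until the two agree. The tolerance, stopping rule and test curve are illustrative, not the paper's estimator:

```python
import numpy as np

def split_homogeneous(H):
    """de Casteljau split at t = 1/2; returns left and right homogeneous polygons."""
    left, right, work = [H[0]], [H[-1]], H.copy()
    while len(work) > 1:
        work = 0.5 * (work[:-1] + work[1:])
        left.append(work[0])
        right.append(work[-1])
    return np.array(left), np.array(right[::-1])

def polygon_and_chord(H):
    P = H[:, :-1] / H[:, -1:]                       # project back to Euclidean points
    poly = np.sum(np.linalg.norm(np.diff(P, axis=0), axis=1))
    chord = np.linalg.norm(P[-1] - P[0])
    return poly, chord

def rational_bezier_length(points, weights, tol=1e-4):
    H = np.column_stack([points * weights[:, None], weights])
    def recurse(H):
        poly, chord = polygon_and_chord(H)
        if poly - chord < tol:
            return 0.5 * (poly + chord)             # estimates agree: accept
        L, R = split_homogeneous(H)
        return recurse(L) + recurse(R)
    return recurse(H)

# Quarter circle of radius 1 as a rational quadratic Bezier (exact length = pi/2).
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(0.5), 1.0])
print(rational_bezier_length(P, w), np.pi / 2)
```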
264 Stroke Extraction and Approximation with Interpolating Lagrange Curves
Authors: Bence Kővári, Zsolt Kertész
Abstract:
This paper proposes a stroke extraction method for use in off-line signature verification. After giving a brief overview of the current ongoing research, an algorithm is introduced for detecting and following strokes in static images of signatures. Problems like the handling of junctions and variations in line width and line intensity are discussed in detail. Results are validated both by using an existing on-line signature database and by employing image registration methods.
Keywords: Stroke extraction, spline fitting, off-line signature verification, image registration.
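For the approximation step referenced in the title, a hedged sketch: a handful of points sampled along a detected stroke are interpolated by Lagrange polynomials in parametric form (x(t), y(t)). The sample stroke below is synthetic; real strokes would come from the extraction stage described in the abstract:

```python
import numpy as np
from scipy.interpolate import lagrange

# Points sampled along a (synthetic) pen stroke, ordered by parameter t.
t = np.linspace(0.0, 1.0, 6)
x = np.array([0.0, 1.0, 2.2, 3.0, 3.5, 4.0])
y = np.array([0.0, 0.8, 1.0, 0.6, 0.1, -0.4])

px, py = lagrange(t, x), lagrange(t, y)          # degree-5 interpolating polynomials

ts = np.linspace(0.0, 1.0, 50)
curve = np.column_stack([px(ts), py(ts)])        # smooth reconstruction of the stroke
print(curve[:3])
```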
263 Contaminant Transport in Soil from a Point Source
Authors: S. A. Nta, M. J. Ayotamuno, A. H. Igoni, R. N. Okparanma
Abstract:
The work sought to understand the pattern of movement of a contaminant from a continuous point source through soil. The soil used was sandy-loam in texture. The contaminant used was municipal solid waste landfill leachate, introduced as a point source through an entry point located at the center of the top layer of the soil tank. Analyses were conducted after maturity periods of 50 and 80 days. The maximum change in chemical concentration was observed on soil samples at a radial distance of 0.25 m. A finite element approximation based model was used to assess the future prediction, management and remediation in the polluted area. The actual field data collected for the case study were used to calibrate the model and thus simulate the flow pattern of the pollutants through soil. MATLAB R2015a was used to visualize the flow of pollutant through the soil. The dispersion coefficient at 0.25 and 0.50 m radial distance from the point of application of leachate gives a measure of the spreading of the flowing leachate due to the nature of the soil medium, with its interconnected channels distributed at random in all directions. Surface plots of metals on soil after a maturity period of 80 days show a functional relationship between a designated dependent variable (Y) and two independent variables (X and Z). Comparison of measured and predicted profile transport along the depth after 50 and 80 days of leachate application and at the end of the experiment shows that there was not much difference between the predicted and measured concentrations, as they were all lying close to each other. For the analysis of contaminant transport, the finite difference approximation based model was very effective in assessing the future prediction, management and remediation in the polluted area. The experiment gave insight into the most likely pattern of movement of contaminant as a result of continuous percolation of the leachate in soil. This is important for contaminant movement prediction and subsequent remediation of such soils.
Keywords: Contaminant, dispersion, point or leaky source, surface plot, soil.
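A hedged sketch of the kind of finite-difference transport model the abstract refers to: a 1-D advection-dispersion equation dC/dt = D d²C/dx² − v dC/dx with a constant-concentration source at the inlet. All coefficients, grid settings and the geometry are illustrative, not the site's calibrated values:

```python
import numpy as np

L, nx = 1.0, 101                  # domain length [m] and grid points
dx = L / (nx - 1)
D, v = 1e-6, 5e-6                 # dispersion [m^2/s] and seepage velocity [m/s]
dt = 0.4 * dx**2 / D              # conservative explicit-scheme time step
n_steps = 5000

C = np.zeros(nx)
C[0] = 1.0                        # normalized source concentration at the inlet
for _ in range(n_steps):
    diff = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2      # central dispersion
    adv = -v * (C[1:-1] - C[:-2]) / dx                      # upwind advection
    C[1:-1] += dt * (diff + adv)
    C[0], C[-1] = 1.0, C[-2]      # fixed source, zero-gradient outflow

print("relative concentration at 0.25 m:", C[int(0.25 / dx)])
```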
262 Sinc-Galerkin Method for the Solution of Problems in Calculus of Variations
Authors: M. Zarebnia, N. Aliniya
Abstract:
In this paper, a numerical solution based on sinc functions is used for finding the solution of boundary value problems which arise from problems of the calculus of variations. This approximation reduces the problems to an explicit system of algebraic equations. Some numerical examples are also given to illustrate the accuracy and applicability of the presented method.
Keywords: Calculus of variations; Sinc functions; Galerkin; Numerical method.
261 An Approximation Method for Three Quark Systems in the Hyper-Spherical Approach
Authors: B. Rezaei, G. R. Boroun, M. Abdolmaleki
Abstract:
The bound state energy of three-quark systems is studied in the framework of a non-relativistic, spin-independent phenomenological model. Hyper-spherical coordinates are considered for the solution of this system. Using Jacobi coordinates, we determined the bound state energy for the (uud) and (ddu) quark systems, treating the quarks as flavorless masses, with the choice of potential restricted at low and high range in the nucleon bag for a bound state.
Keywords: Adiabatic expansion, grand angular momentum, binding energy, perturbation, baryons.
260 Non-Polynomial Spline Method for the Solution of Problems in Calculus of Variations
Authors: M. Zarebnia, M. Hoshyar, M. Sedaghati
Abstract:
In this paper, a numerical solution based on non-polynomial cubic spline functions is used for finding the solution of boundary value problems which arise from problems of the calculus of variations. This approximation reduces the problems to an explicit system of algebraic equations. Some numerical examples are also given to illustrate the accuracy and applicability of the presented method.
Keywords: Calculus of variations; Non-polynomial spline functions; Numerical method.
259 Numerical Approximation to the Performance of CUSUM Charts for EMA (1) Process
Authors: K. Petcharat, Y. Areepong, S. Sukparungsri, G. Mititelu
Abstract:
In this paper, we approximate the average run length (ARL) for a CUSUM chart when the observations are an exponential first-order moving average sequence (EMA(1)). We use a Gauss-Legendre numerical scheme for the integral equation (IE) method to approximate ARL0 and ARL1, the in-control and out-of-control ARL, respectively. We compare the results from the IE method with the exact solution and find that the two methods are in good agreement.
Keywords: Cumulative Sum Chart, Moving Average Observation, Average Run Length, Numerical Approximations.
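To show the Gauss-Legendre integral-equation idea in the simplest setting (i.i.d. observations rather than the paper's EMA(1) dependence, which is not reproduced here), the sketch below discretizes the ARL equation L(u) = 1 + L(0)·F(k − u) + ∫₀ʰ L(x) f(x + k − u) dx for a CUSUM S_n = max(0, S_{n−1} + X_n − k) with Exp(1) observations; the reference value k, barrier h, and the handling of L(0) via the node nearest zero are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

lam, k, h, m = 1.0, 1.5, 4.0, 40           # rate, reference value, barrier, nodes

f = lambda x: np.where(x >= 0, lam * np.exp(-lam * x), 0.0)    # Exp(1) density
F = lambda x: np.where(x >= 0, 1.0 - np.exp(-lam * x), 0.0)    # Exp(1) cdf

z, w = leggauss(m)
x = 0.5 * h * (z + 1.0)                     # map nodes from [-1, 1] to [0, h]
w = 0.5 * h * w

# Nystrom system  A L = 1  with A[i, j] = delta_ij - w_j f(x_j + k - x_i);
# the restart-at-zero mass is attached to the node closest to zero (approximation).
A = np.eye(m) - w[None, :] * f(x[None, :] + k - x[:, None])
A[:, 0] -= F(k - x)
L = np.linalg.solve(A, np.ones(m))
print("approximate in-control ARL near u = 0:", L[0])
```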