Search results for: error estimates
2518 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations
Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed
Abstract:
An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was reported in earlier work. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for some classes of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s denote the order of the differential equation and the number of overdetermination conditions, respectively. The general results obtained were validated with some numerical examples.
Keywords: approximant, error estimate, tau method, overdetermination
Procedia PDF Downloads 607
2517 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis
Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera
Abstract:
Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the Kankesanturai site using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated with the Levenberg-Marquardt method, while multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the differences between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of the model residuals was mapped. The spatial autocorrelation of the residuals was calculated, and the geographic distribution of errors was compared between the two models. A spatial error model was constructed from the initial bathymetry estimates and the autocorrelation estimates. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both inversion models. The overall RMSE values for the log-linear and non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
Keywords: log-linear model, multispectral, residuals, spatial error model
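As a rough illustration of the two calibration routes described in this abstract, the sketch below fits a log-linear model by multiple linear regression and a non-linear model with SciPy's Levenberg-Marquardt solver. The band names, functional forms, and synthetic "depths" are assumptions made for illustration, not the paper's data or exact models.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical calibration data: two band reflectances and echo-sounder
# depths at the reference points (all synthetic).
rng = np.random.default_rng(0)
blue, green = rng.uniform(0.02, 0.2, (2, 200))
depth = 2.0 + 15.0 * np.log(blue / green) + rng.normal(0, 0.5, 200)

# Log-linear model z = a0 + a1*ln(blue) + a2*ln(green), calibrated by
# multiple linear regression.
X = np.column_stack([np.ones_like(blue), np.log(blue), np.log(green)])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

# Assumed non-linear form z = p0*(blue/green)**p1 + p2, calibrated with
# the Levenberg-Marquardt method.
def nonlin(p, b, g):
    return p[0] * (b / g) ** p[1] + p[2]

fit = least_squares(lambda p: nonlin(p, blue, green) - depth,
                    x0=[10.0, 1.0, 0.0], method="lm")

for name, pred in [("log-linear", X @ coef),
                   ("non-linear", nonlin(fit.x, blue, green))]:
    rmse = np.sqrt(np.mean((pred - depth) ** 2))
    print(f"{name}: RMSE = {rmse:.3f} m")
```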
Procedia PDF Downloads 298
2516 Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type
Authors: Hassan J. Al Salman, Ahmed A. Al Ghafli
Abstract:
In this study, we consider a nonlinear-in-time finite element approximation of a reaction-diffusion system of lambda-omega type. We use a fixed-point theorem to prove the existence of the approximations at each time level. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. In addition, we employ Nochetto's mathematical framework to prove an optimal error bound in time for d = 1, 2, and 3 space dimensions. Finally, we present some numerical experiments to verify the obtained theoretical results.
Keywords: reaction diffusion system, finite element approximation, stability estimates, error bound
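For readers unfamiliar with lambda-omega systems, a minimal sketch follows. It time-steps the classical choice lam(r) = 1 - r², omg(r) = -beta*r² with an explicit finite-difference scheme in one space dimension; the scheme, boundary conditions, and parameter values are illustrative assumptions, whereas the paper analyses a finite element discretization.

```python
import numpy as np

# Lambda-omega reaction-diffusion system in 1D:
#   u_t = u_xx + lam(r)*u - omg(r)*v,  v_t = v_xx + omg(r)*u + lam(r)*v,
# with r^2 = u^2 + v^2, lam(r) = 1 - r^2, omg(r) = -beta*r^2 (assumed forms).
beta = 1.0
N, steps = 200, 2000
dx, dt = 1.0 / N, 1.0e-5        # dt < dx^2/2 keeps the explicit scheme stable
x = np.linspace(0.0, 1.0, N + 1)
u = 0.1 * np.cos(2 * np.pi * x)
v = 0.1 * np.sin(2 * np.pi * x)

def lap(w):
    """Second difference with homogeneous Neumann boundary conditions."""
    out = np.empty_like(w)
    out[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    out[0] = 2.0 * (w[1] - w[0]) / dx**2
    out[-1] = 2.0 * (w[-2] - w[-1]) / dx**2
    return out

for _ in range(steps):
    r2 = u * u + v * v
    lam, omg = 1.0 - r2, -beta * r2
    u, v = (u + dt * (lap(u) + lam * u - omg * v),
            v + dt * (lap(v) + omg * u + lam * v))

print("amplitude after time-stepping:", float(np.sqrt((u * u + v * v).max())))
```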
Procedia PDF Downloads 430
2515 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally, and hence the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors exhibiting a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
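The kind of comparison the abstract describes can be sketched in a few lines. The simulation below contrasts least squares with Huber's M-estimation (a stand-in for the paper's modified maximum likelihood method, which is not reproduced here) under skewed, fat-tailed errors generated by a standard skew-t construction; all parameter values are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Compare OLS with a robust M-estimator when regression errors are skewed
# and fat-tailed. Huber's M-estimation substitutes for the paper's MML.
rng = np.random.default_rng(1)
n, reps, beta = 50, 500, np.array([1.0, 2.0])
ols_est, rob_est = [], []
for _ in range(reps):
    x = rng.uniform(0, 1, n)
    X = sm.add_constant(x)
    # Skew-t errors: skew-normal numerator over sqrt(chi-square/df) denominator.
    e = stats.skewnorm.rvs(a=5, size=n, random_state=rng) \
        / np.sqrt(stats.chi2.rvs(df=4, size=n, random_state=rng) / 4)
    y = X @ beta + e
    ols_est.append(sm.OLS(y, X).fit().params[1])
    rob_est.append(sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params[1])

for name, est in [("OLS", ols_est), ("Huber", rob_est)]:
    est = np.array(est)
    print(f"{name}: bias = {est.mean() - beta[1]:+.3f}, variance = {est.var():.4f}")
```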
Procedia PDF Downloads 397
2514 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of the accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects, while the AD has a tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: aggregate data, combined-level data, individual patient data, meta-analysis
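A minimal sketch of the pooling idea: AD studies contribute published effect estimates directly, IPD studies contribute estimates computed from the raw patient records, and everything is combined by fixed-effect inverse-variance weighting. All numbers below are invented for illustration, not the paper's simulation design.

```python
import numpy as np

# AD studies: published effect estimates and their standard errors.
ad_effects = np.array([0.42, 0.35, 0.51])
ad_se = np.array([0.10, 0.12, 0.09])

# Two hypothetical IPD studies: treatment indicator and outcome per patient.
rng = np.random.default_rng(7)
ipd_effects, ipd_se = [], []
for n in (120, 80):
    treat = rng.integers(0, 2, n)
    y = 0.4 * treat + rng.normal(0, 1, n)
    diff = y[treat == 1].mean() - y[treat == 0].mean()
    se = np.sqrt(y[treat == 1].var(ddof=1) / (treat == 1).sum()
                 + y[treat == 0].var(ddof=1) / (treat == 0).sum())
    ipd_effects.append(diff); ipd_se.append(se)

# Fixed-effect inverse-variance pooling across both levels of evidence.
effects = np.concatenate([ad_effects, ipd_effects])
weights = 1.0 / np.concatenate([ad_se, ipd_se]) ** 2
pooled = np.sum(weights * effects) / weights.sum()
pooled_se = np.sqrt(1.0 / weights.sum())
print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```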
Procedia PDF Downloads 375
2513 The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates
Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim
Abstract:
The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via a simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables. Each latent variable has three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than those of PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes. However, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.
Keywords: CB-SEM, Monte Carlo simulation, normality conditions, non-normality, PLS-SEM
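The paper's simulation was run in R; the sketch below shows the same kind of data-generating step in Python for one Monte Carlo replication of the stated model (one endogenous, four exogenous latents, three indicators each). The path coefficients, loadings, and noise levels are illustrative assumptions.

```python
import numpy as np

# Generate one sample from the theoretical SEM: four exogenous latent
# variables drive one endogenous latent variable; each latent variable
# has three observed indicators.
rng = np.random.default_rng(42)
n = 100                                        # sample size per replication
gamma = np.array([0.5, 0.4, 0.3, 0.2])         # structural paths xi -> eta
loading, noise = 0.8, 0.6                      # indicator loading / error sd

xi = rng.normal(size=(n, 4))                   # exogenous latent scores
eta = xi @ gamma + rng.normal(0.0, 0.5, n)     # endogenous latent + disturbance

latents = np.column_stack([xi, eta])           # n x 5 latent score matrix
# Three indicators per latent variable -> n x 15 observed data matrix.
indicators = np.concatenate(
    [loading * latents + rng.normal(0.0, noise, latents.shape) for _ in range(3)],
    axis=1)
print(indicators.shape)                        # (100, 15)
```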
Procedia PDF Downloads 411
2512 Effects of Manufacture and Assembly Errors on the Output Error of Globoidal Cam Mechanisms
Authors: Shuting Ji, Yueming Zhang, Jing Zhao
Abstract:
The output error of the globoidal cam mechanism can be considered a relevant indicator of mechanism performance, because it determines the kinematic and dynamic behavior of the mechanical transmission. Based on differential geometry and rigid-body transformations, the mathematical model of the surface geometry of the globoidal cam is established. We then present the analytical expression of the output error (including the transmission error and the displacement error along the output axis) by considering different manufacture and assembly errors. The effects of the center distance error, the perpendicularity error between the input and output axes, and the rotational angle error of the globoidal cam on the output error are systematically analyzed. A globoidal cam mechanism widely used in the automatic tool changers of CNC machines is applied for illustration. Our results show that the perpendicularity error and the rotational angle error have little effect on the transmission error but a great effect on the displacement error along the output axis. This study plays an important role in the design, manufacture, and assembly of globoidal cam mechanisms.
Keywords: globoidal cam mechanism, manufacture error, transmission error, automatic tool changer
Procedia PDF Downloads 574
2511 A Posteriori Analysis of the Spectral Element Discretization of Heat Equation
Authors: Chor Nejmeddine, Ines Ben Omrane, Mohamed Abdelwahed
Abstract:
In this paper, we present an a posteriori analysis of the discretization of the heat equation by the spectral element method. We apply the implicit Euler scheme in time and the spectral method in space. We propose two families of error indicators, both of which are built from the residual of the equation, and we prove that they satisfy some optimal estimates. We present some numerical results which are coherent with the theoretical ones.
Keywords: heat equation, spectral element discretization, error indicators, Euler scheme
Procedia PDF Downloads 307
2510 Critical Accounting Estimates and Transparency in Financial Reporting: An Observation of Financial Reporting under US GAAP
Authors: Ahmed Shaik
Abstract:
Estimates are critical in accounting, and financial reporting cannot be complete without them. There is a long list of accounting estimates that must be made to compute net income and to determine the value of assets and liabilities. To name a few: the valuation of inventory, depreciation, the valuation of goodwill, provisions for bad debts, and estimated warranties all require the use of different valuation models and forecasts. Different business entities within the same industry may use different approaches to measure the value of the financial items reported in the income statement and balance sheet. The disclosure notes do not provide enough detail about the approach used by a business entity to arrive at the value of a financial item. This lack of detail in the disclosure notes makes it difficult to compare the financial performance of one business entity with another in the same industry. This paper is an attempt to identify the lack of sufficient information about accounting estimates in disclosure notes and the impact of this absence on the comparability of financial data and on financial analysis. An attempt is made to suggest more detailed disclosure while weighing the costs and benefits of making such disclosure.
Keywords: accounting estimates, disclosure notes, financial reporting, transparency
Procedia PDF Downloads 200
2509 Co-Integration and Error Correction Mechanism of Supply Response of Sugarcane in Pakistan (1980-2012)
Authors: Himayatullah Khan
Abstract:
This study estimates the supply response function of sugarcane in Pakistan from 1980-81 to 2012-13 using a co-integration approach and an error correction mechanism. The sugarcane production, area, and price series were tested for unit roots using the Augmented Dickey-Fuller (ADF) test. The study found that these series were stationary at their first differences. Using the Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson (CRDW) test, the study found that "production and price" and "area and price" were co-integrated, suggesting that the two sets of time series had a long-run, or equilibrium, relationship. The results of the error correction models for the two sets of series showed that there may be disequilibrium in the short run. The Engle-Granger residual may be thought of as the equilibrium error, which can be used to tie the short-run behavior of the dependent variable to its long-run value. The Granger-causality test results showed that the log of price Granger-caused both the log of production and the log of area, whereas the log of production and the log of area Granger-caused each other.
Keywords: co-integration, error correction mechanism, Granger-causality, sugarcane, supply response
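A compact sketch of the Engle-Granger two-step procedure described above, using statsmodels on synthetic stand-in series (the paper's sugarcane data are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Synthetic I(1) price series and a production series cointegrated with it.
rng = np.random.default_rng(3)
T = 33
price = np.cumsum(rng.normal(0, 1, T))
prod = 0.8 * price + rng.normal(0, 0.5, T)

print("ADF p-value, price levels:", adfuller(price)[1])
print("ADF p-value, price diffs :", adfuller(np.diff(price))[1])

# Step 1: long-run (cointegrating) regression; residuals = equilibrium error.
long_run = sm.OLS(prod, sm.add_constant(price)).fit()
ect = long_run.resid

# Step 2: short-run error correction model with the lagged equilibrium error.
d_prod, d_price = np.diff(prod), np.diff(price)
X = sm.add_constant(np.column_stack([d_price, ect[:-1]]))
ecm = sm.OLS(d_prod, X).fit()
print(ecm.params)   # last coefficient = speed of adjustment (expected < 0)
```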
Procedia PDF Downloads 436
2508 Simulation as a Problem-Solving Spotter for System Reliability
Authors: Wheyming Tina Song, Chi-Hao Hong, Peisyuan Lin
Abstract:
An important performance measure for stochastic manufacturing networks is the system reliability, defined as the probability that the production output meets or exceeds a specified demand. The system parameters include the capacity of each workstation and the number of conforming parts produced in each workstation. We establish that eighteen archival publications, containing twenty-one examples, provide incorrect values of the system reliability. The author recently published the Song Rule, which provides the correct analytical system-reliability value; it is, however, computationally inefficient for large networks. In this paper, we use Monte Carlo simulation (implemented in C and Flexsim) to provide estimates for the above-mentioned twenty-one examples. The simulation estimates are consistent with the analytical solutions for small networks, and the simulation remains computationally efficient for large networks. We argue here for three advantages of Monte Carlo simulation: (1) understanding stochastic systems, (2) validating analytical results, and (3) providing estimates even when analytical and numerical approaches are overly expensive in computation. Monte Carlo simulation could have detected the published analysis errors.
Keywords: Monte Carlo simulation, analytical results, leading digit rule, standard error
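To make the estimation idea concrete, here is a Monte Carlo sketch in Python (the paper's implementations are in C and Flexsim) for a toy three-station serial line; the capacities, yields, and demand are invented for illustration, not taken from the cited examples.

```python
import numpy as np

# Monte Carlo estimate of system reliability P(output >= demand).
rng = np.random.default_rng(11)
capacity = np.array([10, 9, 11])            # parts each station can process
p_conform = np.array([0.95, 0.90, 0.92])    # per-station conforming rates
demand, reps = 7, 100_000

hits = 0
for _ in range(reps):
    flow = capacity[0]
    for cap, p in zip(capacity, p_conform):
        processed = min(flow, cap)
        flow = rng.binomial(processed, p)   # conforming parts passed on
    hits += flow >= demand

rel = hits / reps
se = np.sqrt(rel * (1 - rel) / reps)        # standard error of the estimate
print(f"reliability ~ {rel:.4f} +/- {1.96 * se:.4f}")
```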
Procedia PDF Downloads 363
2507 Mathematical and Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type
Authors: Hassan Al Salman, Ahmed Al Ghafli
Abstract:
In this study, we consider a nonlinear-in-time finite element approximation of a reaction-diffusion system of lambda-omega type. We use a fixed-point theorem to prove the existence of the approximations. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. Also, we prove an optimal error bound in time for d = 1, 2, and 3 space dimensions. Finally, we present some numerical experiments to verify the theoretical results.
Keywords: reaction diffusion system, finite element approximation, fixed point theorem, optimal error bound
Procedia PDF Downloads 534
2506 Relevancy Measures of Errors in Displacements of Finite Elements Analysis Results
Authors: A. B. Bolkhir, A. Elshafie, T. K. Yousif
Abstract:
This paper highlights methods of error estimation for finite element analysis (FEA) results. It indicates that the discretization error can be controlled by performing the finite element analysis with successively finer meshes, or by extrapolating response predictions from an orderly sequence of relatively low-degree-of-freedom analysis results. In addition, the paper addresses the round-off error by running the code at a higher precision. The paper provides applications to finite element analysis results and draws conclusions based on the results of applying these error estimation methods.
Keywords: finite element analysis (FEA), discretization error, round-off error, mesh refinement, Richardson extrapolation, monotonic convergence
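The extrapolation route mentioned above is classical Richardson extrapolation; a minimal sketch follows, using made-up responses from three successively refined meshes (the values are illustrative, not the paper's results).

```python
import math

# Responses u(h) from three meshes refined by a constant ratio r = 2.
h = [0.4, 0.2, 0.1]             # element sizes
u = [1.1053, 1.0268, 1.0067]    # e.g., tip displacements from three runs

r = h[0] / h[1]
# Observed convergence order from the three solutions.
p = math.log((u[0] - u[1]) / (u[1] - u[2])) / math.log(r)
# Extrapolated "mesh-independent" value and discretization error estimate
# for the finest mesh.
u_exact = u[2] + (u[2] - u[1]) / (r**p - 1)
print(f"order p ~ {p:.2f}, extrapolated u ~ {u_exact:.4f}, "
      f"error(finest) ~ {abs(u[2] - u_exact):.4f}")
```

The formula assumes monotonic convergence, which is why that condition appears among the keywords: if the three solutions do not approach the limit from one side, the observed order p is unreliable.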
Procedia PDF Downloads 497
2505 Calibration of the Radial Installation Limit Error of the Accelerometer in the Gravity Gradient Instrument
Authors: Danni Cong, Meiping Wu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin
Abstract:
The gravity gradient instrument (GGI) is the core of the gravity gradiometer, so the structural error of the sensor has a great impact on the measurement results. In order not to compromise the target measurement accuracy, limit errors are required in the installation of the accelerometers. In this paper, based on the established measuring-principle model, the radial installation limit error is calibrated; this case is used as an example to provide a method for calculating the other installation limit errors while ensuring the accuracy of the measurement results. This method provides the idea for deriving the limit errors of the geometric structure of the sensor, laying the foundation for the mechanical precision design and the physical design.
Keywords: gravity gradient sensor, radial installation limit error, accelerometer, uniaxial rotational modulation
Procedia PDF Downloads 422
2504 High Capacity Reversible Watermarking through Interpolated Error Shifting
Authors: Hae-Yeoun Lee
Abstract:
Reversible watermarking, which not only protects the copyright but also preserves the original quality of the digital content, has been intensively studied, and the demand for it has increased. In this paper, we propose a reversible watermarking scheme based on interpolation-error shifting and error precompensation. The intensity of a pixel is interpolated from the intensities of neighbouring pixels, and the difference histogram between the interpolated and the original intensities is obtained and modified to embed the watermark message. By restoring the difference histogram, the embedded watermark is extracted, and the original image is recovered by compensating for the interpolation error. Overflow and underflow are prevented by error precompensation. To show the performance of the method, the proposed algorithm is compared with other methods using various test images.
Keywords: reversible watermarking, high capacity, high quality, interpolated error shifting, error precompensation
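A minimal one-dimensional sketch of the interpolation-error idea: predict a pixel from its two neighbours and expand the prediction error by one bit to carry the payload. This simplification embeds by error expansion rather than the paper's histogram shifting, and it omits the overflow/underflow precompensation.

```python
import numpy as np

def embed(pixels, bits):
    px = pixels.astype(np.int32).copy()
    k = 0
    for i in range(1, len(px) - 1, 2):          # odd-indexed pixels carry bits
        if k >= len(bits):
            break
        pred = (px[i - 1] + px[i + 1]) // 2     # interpolated intensity
        err = px[i] - pred                      # interpolation error
        px[i] = pred + 2 * err + bits[k]        # error expansion embeds 1 bit
        k += 1
    return px

def extract(marked):
    px = marked.astype(np.int32).copy()
    bits = []
    for i in range(1, len(px) - 1, 2):
        pred = (px[i - 1] + px[i + 1]) // 2     # neighbours are unmodified
        err = px[i] - pred
        bits.append(int(err & 1))               # recover the embedded bit
        px[i] = pred + (err >> 1)               # restore original intensity
    return px, bits

orig = np.array([100, 103, 105, 104, 108, 110, 109, 111], dtype=np.int32)
marked = embed(orig, [1, 0, 1])
restored, payload = extract(marked)
print(payload[:3], np.array_equal(restored, orig))  # [1, 0, 1] True
```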
Procedia PDF Downloads 324
2503 Estimating Current Suicide Rates Using Google Trends
Authors: Ladislav Kristoufek, Helen Susannah Moat, Tobias Preis
Abstract:
Data on the number of people who have committed suicide tend to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms "depression" and "suicide" relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term "depression" is related to fewer suicides, whereas a greater number of searches for the term "suicide" is related to more suicides. Data on suicide-related search behaviour can be used to improve current estimates of the number of suicide occurrences.
Keywords: nowcasting, search data, Google Trends, official statistics
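A nowcasting comparison of this kind can be sketched as two nested regressions: a baseline using only the lagged official count, and an augmented model adding the two search indices. The series below are synthetic placeholders, not the 2004-2013 data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly series standing in for official counts and search data.
rng = np.random.default_rng(5)
T = 120
dep, sui = rng.normal(size=(2, T))              # standardized search indices
y = 400 - 8 * dep + 10 * sui + rng.normal(0, 5, T)

lag_y = np.roll(y, 1)[1:]                       # previous period's count
X_base = sm.add_constant(lag_y)
X_full = sm.add_constant(np.column_stack([lag_y, dep[1:], sui[1:]]))

base = sm.OLS(y[1:], X_base).fit()
full = sm.OLS(y[1:], X_full).fit()
print(f"R2 baseline: {base.rsquared:.3f}, with search data: {full.rsquared:.3f}")
print(full.params)  # expected: negative on "depression", positive on "suicide"
```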
Procedia PDF Downloads 360
2502 Grating Scale Thermal Expansion Error Compensation for Large Machine Tools Based on Multiple Temperature Detection
Authors: Wenlong Feng, Zhenchun Du, Jianguo Yang
Abstract:
To decrease the grating scale thermal expansion error, a novel method based on multiple temperature detections is proposed. Several temperature sensors are installed on the grating scale, and their temperatures are recorded. The temperatures at every point on the grating scale are calculated by interpolating between adjacent sensors. According to the thermal expansion principle, the grating scale thermal expansion error model can be established by integrating the variations of temperature along the scale's position. A novel compensation method is proposed in this paper. By applying the established error model, the grating scale thermal expansion error is decreased by 90% compared with no compensation. The residual positioning error of the grating scale is less than 15 µm/10 m, and the accuracy of the machine tool is significantly improved.
Keywords: thermal expansion error of grating scale, error compensation, machine tools, integral method
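A sketch of the integral error model described above: interpolate the temperature rise between the sensors along the scale, then integrate alpha * dT over position to obtain the cumulative expansion error. The sensor layout, temperature rises, and expansion coefficient are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

alpha = 11.5e-6            # thermal expansion coefficient of steel, 1/K
L = 10.0                   # scale length, m
sensor_pos = np.array([0.0, 2.5, 5.0, 7.5, 10.0])   # sensor locations, m
sensor_dT = np.array([0.8, 1.1, 1.6, 1.3, 0.9])     # rise above 20 degC, K

x = np.linspace(0.0, L, 1001)
dT = np.interp(x, sensor_pos, sensor_dT)  # linear interpolation between sensors

# Expansion error at reading position x0 = integral of alpha*dT from 0 to x0.
err = cumulative_trapezoid(alpha * dT, x, initial=0)
print(f"error at full travel: {err[-1] * 1e6:.1f} um over {L:.0f} m")
# A compensated reading subtracts err interpolated at the measured position.
```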
Procedia PDF Downloads 367
2501 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature
Authors: Chin-Yun Chen
Abstract:
Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
Keywords: numerical quadrature, error estimation, derivative free method, interval computation
Procedia PDF Downloads 464
2500 Medical Error: Concept and Description According to Brazilian Physicians
Authors: Vitor S. Mendonca, Maria Luisa S. Schmidt
Abstract:
The Brazilian medical profession is viewed as error-free, so healthcare professionals who commit an error are condemned there. Medical errors occur frequently in the Brazilian healthcare system, so identifying better options for handling this issue has become of interest, primarily for physicians. The purpose of this study is to better understand the tensions involved in the fear of making an error, given the harm and risk this would represent for those involved. A qualitative study was performed by means of the narratives of the lived experiences of ten practicing physicians in the State of Sao Paulo. The concept and characterization of errors were discussed, together with the fear of making an error, near misses and errors themselves, how to deal with errors, and what to do to avoid them. The analysis indicates excessive pressure in the medical profession for error-free practice, with a well-established physician-patient relationship needed to facilitate the management of medical errors. Errors occur, but a lack of information and discussion often leads to their concealment due to fear of possible judgment by society or peers. The establishment of programs that encourage appropriate medical conduct in the event of an error requires coherent answers for humanization in Brazilian medical science. It is necessary to improve the discussion about medical errors and to disseminate models of communication and notification of errors in Brazil.
Keywords: medical error, narrative, physician-patient relationship, qualitative research
Procedia PDF Downloads 179
2499 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results for this estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
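A Newton-Raphson sketch for the two-parameter Rayleigh MLE follows, simplified to complete (uncensored) data; the extra likelihood terms that progressive type-II censoring introduces are omitted, the score derivative is taken numerically, and the iteration is left unsafeguarded beyond a basic feasibility check.

```python
import numpy as np

# pdf: f(x; mu, lam) = ((x-mu)/lam^2) * exp(-(x-mu)^2 / (2*lam^2)), x > mu.
rng = np.random.default_rng(9)
mu_true, lam_true = 2.0, 1.5
x = mu_true + lam_true * np.sqrt(-2 * np.log(rng.uniform(size=500)))

def profile_score(mu):
    """d(loglik)/d(mu) with lambda^2 profiled out as sum(d^2)/(2n)."""
    d = x - mu
    lam2 = np.sum(d**2) / (2 * len(x))
    return -np.sum(1.0 / d) + np.sum(d) / lam2

mu, h = x.min() - 0.5, 1e-6            # mu must stay below min(x)
for _ in range(50):
    slope = (profile_score(mu + h) - profile_score(mu - h)) / (2 * h)
    mu_new = mu - profile_score(mu) / slope
    if mu_new >= x.min():              # keep the iterate feasible
        mu_new = (mu + x.min()) / 2
    if abs(mu_new - mu) < 1e-10:
        break
    mu = mu_new

lam = np.sqrt(np.sum((x - mu) ** 2) / (2 * len(x)))
print(f"mu_hat = {mu:.3f}, lambda_hat = {lam:.3f}")
```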
Procedia PDF Downloads 163
2498 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model
Authors: Navid Daryasafar, Nima Farshidfar
Abstract:
In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost. Lost frames are then estimated using the temporal information of the corresponding pixels in successive frames. Accordingly, after presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first method, the standard use of autoregressive models, estimates the lost frame unidirectionally; usually, in this case, information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the next frames is used to estimate the lost frame; as a result, this method is known as bidirectional estimation. A series of tests is then carried out to assess the performance of each method under different conditions, and the results are compared.
Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation
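The two estimation modes can be sketched on a toy clip: fit AR(2) coefficients over the observed frames, predict the lost frame forward from its two predecessors (unidirectional), and average that with a backward prediction from its two successors (bidirectional). A single coefficient pair shared by all pixels is an assumption made for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 20, 8, 8
base = rng.uniform(50.0, 200.0, (H, W))
# Synthetic clip: a slowly brightening scene plus per-frame noise.
video = np.stack([base + 3.0 * t + rng.normal(0.0, 1.0, (H, W))
                  for t in range(T)])

t_lost = 10
# Least-squares fit of y_t = a1*y_{t-1} + a2*y_{t-2}, pooled over all pixels.
Y = video[2:].ravel()
X = np.column_stack([video[1:-1].ravel(), video[:-2].ravel()])
a1, a2 = np.linalg.lstsq(X, Y, rcond=None)[0]

forward = a1 * video[t_lost - 1] + a2 * video[t_lost - 2]
backward = a1 * video[t_lost + 1] + a2 * video[t_lost + 2]
for name, est in [("unidirectional", forward),
                  ("bidirectional", 0.5 * (forward + backward))]:
    print(name, "MSE:", round(float(np.mean((est - video[t_lost]) ** 2)), 3))
```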
Procedia PDF Downloads 541
2497 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation errors follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
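The mechanism of the underestimate is easy to reproduce: the true Hd is a maximum over a sliding window of continuous data, while coarse records only allow a maximum over fixed, clock-aligned blocks. The sketch below demonstrates this with synthetic one-minute rainfall (the rainfall model is an arbitrary assumption, not the Central Italy data).

```python
import numpy as np

rng = np.random.default_rng(8)
minutes_per_year, d = 60 * 24 * 365, 60           # 1-min data, d = 60 min
rain = rng.gamma(0.05, 2.0, minutes_per_year)     # synthetic 1-min depths, mm

# True Hd: maximum over every sliding 60-minute window.
sliding = np.convolve(rain, np.ones(d), mode="valid")
Hd_true = sliding.max()

# Coarse Hd: maximum over fixed clock-hour totals (ta = d, the worst case).
ta = 60
blocks = rain.reshape(-1, ta).sum(axis=1)
Hd_coarse = blocks.max()

print(f"true Hd = {Hd_true:.1f} mm, coarse Hd = {Hd_coarse:.1f} mm, "
      f"underestimate = {100 * (1 - Hd_coarse / Hd_true):.1f}%")
```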
Procedia PDF Downloads 191
2496 Variation of Refractive Errors among Right and Left Eyes in Jos, Plateau State, Nigeria
Authors: F. B. Masok, S. S Songdeg, R. R. Dawam
Abstract:
Vision is an important process for learning and communication, as man depends greatly on vision to sense his environment. A study of the prevalence and variation of refractive errors conducted between December 2010 and May 2011 in Jos revealed that 735 (77.50%) of the 950 subjects examined for refractive error had various refractive errors. Myopia was observed in 373 (49.79%) of the subjects; the errors in the right eyes numbered 263 (55.60%), while those in the left eyes numbered 210 (44.39%). The mean myopic error was found to be -1.54 ± 3.32. Hyperopia was observed in 385 (40.53%) of the sampled population, comprising 203 (52.73%) right eyes and 182 (47.27%) left eyes. The mean hyperopic error was found to be +1.74 ± 3.13. Astigmatism accounted for 359 (38.84%) of the subjects, out of which 193 (53.76%) were in the right eyes while 168 (46.79%) were in the left eyes. Presbyopia was found in 404 (42.53%) of the subjects; of this figure, 164 (40.59%) were in the right eyes, while 240 (59.41%) were in the left eyes. The number of right and left eyes with refractive errors was observed in some age groups to increase with age, peaking in the 60-69 age group. This pattern of refractive errors could be attributed to exposure to various forms of light, particularly ultraviolet rays (e.g., rays from television and computer screens). There were no remarkable differences between the mean myopic error and the mean hyperopic error in the right and left eyes, which suggests that the right eye and the left eye are similar.
Keywords: left eye, refractive errors, right eye, variation
Procedia PDF Downloads 433
2495 Error Correction Method for 2D Ultra-Wideband Indoor Wireless Positioning System Using Logarithmic Error Model
Authors: Phornpat Chewasoonthorn, Surat Kwanmuang
Abstract:
Indoor positioning technologies have evolved rapidly. They augment the Global Positioning System (GPS), which requires a line of sight to the sky, in tracking the location of people or objects. This study developed an error correction method for an indoor real-time location system (RTLS) based on an ultra-wideband (UWB) sensor from Decawave. Multiple stationary nodes (anchors) were installed throughout the workspace. The distance between stationary and moving nodes (tags) can be measured using a two-way-ranging (TWR) scheme. The results have shown that the uncorrected ranging error from the sensor system can be as large as 1 m. To reduce the ranging error and thus increase positioning accuracy, this study proposes an online correction algorithm using the Kalman filter. The results from experiments have shown that the system can reduce the ranging error down to 5 cm.
Keywords: indoor positioning, ultra-wideband, error correction, Kalman filter
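As a minimal sketch of the filtering step, here is a scalar Kalman filter smoothing a noisy two-way-ranging distance to a slowly moving tag. The noise levels and random-walk motion model are assumptions chosen to mimic the roughly 1 m raw error reported above; this is not Decawave's API or the paper's logarithmic error model.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
true_d = 5.0 + 0.01 * np.arange(T)            # tag drifting away slowly, m
meas = true_d + rng.normal(0, 0.5, T)          # raw TWR ranges, ~0.5 m noise

Q, R = 1e-4, 0.25                              # process / measurement variance
x, P = meas[0], 1.0                            # initial state estimate
est = []
for z in meas:
    P = P + Q                                  # predict (random-walk model)
    K = P / (P + R)                            # Kalman gain
    x = x + K * (z - x)                        # update with the new range
    P = (1 - K) * P
    est.append(x)

est = np.array(est)
print("raw RMSE:      %.3f m" % np.sqrt(np.mean((meas - true_d) ** 2)))
print("filtered RMSE: %.3f m" % np.sqrt(np.mean((est[20:] - true_d[20:]) ** 2)))
```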
Procedia PDF Downloads 160
2494 A Sequential Approach for Random-Effects Meta-Analysis
Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya
Abstract:
The objective of meta-analysis is to combine results from several independent studies in order to generalize findings and provide an evidence base for decision making. However, recent studies show that the magnitude of the effect size estimates reported in many areas of research changes with the year of publication, and this can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring the effect size estimates in meta-analysis; however, they are based on statistical theory applicable to the fixed-effect model (FEM). For the random-effects model (REM), the analysis incorporates the heterogeneity variance, tau-squared, whose estimation creates complications. This paper proposes the use of the Gombay and Serban (2005) truncated CUSUM-type test, with asymptotically valid critical values, for sequential monitoring of the REM. Simulation results show that the test does not control the Type I error well and is not recommended. Further work is required to derive an appropriate test in this important area of application.
Keywords: meta-analysis, random-effects model, sequential test, temporal changes in effect sizes
Procedia PDF Downloads 469
2493 Uncertainty Estimation in Neural Networks through Transfer Learning
Authors: Ashish James, Anusha James
Abstract:
The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by this, this paper presents a framework that can quantitatively estimate uncertainties by leveraging advances in transfer learning, through a slight modification of existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing the pretrained models. The idea is to capture the behavior of the trained NNs for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions.
Keywords: uncertainty estimation, neural networks, transfer learning, regression
Procedia PDF Downloads 137
2492 A Study on the Influence of Planet Pin Parallelism Error to Load Sharing Factor
Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Yong Yang, Gang Shen
Abstract:
In this paper, planet pin parallelism error, which is one of the manufacturing errors of the planet carrier, is employed as the main variable influencing the planet load sharing factor. This error falls into two groups: (i) pin parallelism error from rotation about the axis perpendicular to the tangent of the base circle of the gear (x-axis rotation in this paper); (ii) pin parallelism error from rotation about the tangent axis of the base circle of the gear (y-axis rotation in this paper). For this study, the planetary gear system of a 1.5 MW wind turbine is considered, and a purely torsional rigid-body model of this planetary gear is built using Solidworks and MSC.ADAMS. Based on the quantified parallelism errors and the simulation model, a dynamics simulation of the planetary gear is carried out to obtain dynamic mesh load results for each type of error, and the load sharing factor is calculated from the mesh load results. A load sharing factor formula and suggestions for planetary reliability design are proposed in the conclusion of this study.
Keywords: planetary gears, planet load sharing, MSC. ADAMS, parallelism error
Procedia PDF Downloads 400
2491 Unequal Error Protection of VQ Image Transmission System
Authors: Khelifi Mustapha, A. Moulay lakhdar, I. Elawady
Abstract:
We study unequal error protection for VQ images. We use Reed-Solomon (RS) codes for channel coding because they offer better performance in terms of channel error correction over a binary output channel. Such a channel (binary input and output) should be considered in the case of the application layer, because it includes all the features of the layers below it, on which it is usually not feasible to make changes.
Keywords: vector quantization, channel error correction, Reed-Solomon channel coding, application
Procedia PDF Downloads 365
2490 Aggregate Supply Response of Some Livestock Commodities in Algeria: Cointegration-Vector Error Correction Model Approach
Authors: Amine M. Benmehaia, Amine Oulmane
Abstract:
The supply response of agricultural commodities to changes in price incentives is an important issue for the success of any policy reform in the agricultural sector. This study aims to quantify the responsiveness of producers of some livestock commodities to price incentives in the Algerian context. Time series analysis is used on annual data for a period of 52 years (1966-2018). Both a co-integration approach and a vector error correction model (VECM) are used through the Nerlove model of partial adjustment. The study attempts to determine the long-run and short-run relationships along with the magnitudes of disequilibria in the selected commodities. Results show that the short-run price elasticities are low in the cow and sheep meat sectors (8.7% and 8%, respectively), while their respective long-run elasticities are 16.5% and 10.5%, whereas eggs and milk have very high short-run price elasticities (82% and 90%, respectively) with long-run elasticities of 40% and 46%, respectively. The error correction coefficient, reflecting the speed of adjustment towards the long-run equilibrium, is statistically significant and has the expected negative sign. Its estimates are 12.7% for cow meat, 33.5% for sheep meat, 46.7% for eggs, and 8.4% for milk. It seems that cow meat and milk producers show weak feedback, correcting only about 12.7% and 8.4%, respectively, of the previous year's disequilibrium from the long-run relationship, whereas sheep meat and egg producers adjust to correct long-run disequilibrium at a high speed (33.5% and 46.7%, respectively). The implication is that much more in-depth research is needed to identify the factors that affect agricultural supply and to describe how those factors shift supply in response to price incentives. This could provide valuable information for the government in choosing appropriate policy measures.
Keywords: Algeria, cointegration, livestock, supply response, vector error correction model
Procedia PDF Downloads 142
2489 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed- and random-effects models which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, which is an extension of the Arellano-Bond model in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, the estimates of short-run housing price dynamics varied considerably when different models and instruments were used. In particular, the choice of instrumental variables caused variation in the model estimates and in their statistical significance. This was particularly clear when comparing OLS estimates with those of different dynamic panel data models. The estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics.
Keywords: dynamic model, fixed effects, panel data, price dynamics
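To make the endogeneity problem concrete, the sketch below shows why instrumenting with deeper lags helps: pooled OLS on a dynamic panel is biased by the fixed effects, while an Anderson-Hsiao-style IV estimator (a simple precursor of the Arellano-Bond and Blundell-Bond GMM estimators discussed above, not the full system GMM) removes the bias. The panel dimensions echo the study's 14 cities, but all numbers are synthetic.

```python
import numpy as np

# Dynamic panel y_it = rho*y_{i,t-1} + a_i + e_it with fixed effects a_i.
rng = np.random.default_rng(6)
N, T, rho = 14, 25, 0.6
a = rng.normal(0, 1, N)
y = np.zeros((N, T))
y[:, 0] = a + rng.normal(0, 1, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + a + rng.normal(0, 1, N)

# Pooled OLS of y_t on y_{t-1}: biased because y_{t-1} correlates with a_i.
yl, yc = y[:, :-1].ravel(), y[:, 1:].ravel()
rho_ols = np.sum(yl * yc) / np.sum(yl * yl)

# First-difference out a_i, then instrument dy_{t-1} with the level y_{t-2}.
dy = np.diff(y, axis=1)
d_cur, d_lag = dy[:, 1:].ravel(), dy[:, :-1].ravel()
z = y[:, :-2].ravel()                         # instrument: y_{t-2}
rho_iv = np.sum(z * d_cur) / np.sum(z * d_lag)
print(f"true rho = {rho}, pooled OLS = {rho_ols:.3f}, IV = {rho_iv:.3f}")
```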
Procedia PDF Downloads 1510