Search results for: moments estimator

530 Moment Estimators of the Parameters of Zero-One Inflated Negative Binomial Distribution

Authors: Rafid Saeed Abdulrazak Alshkaki

Abstract:

In this paper, the zero-one inflated negative binomial distribution is considered, along with some of its structural properties; its parameters are then estimated using the method of moments. It is found that the method of moments is not a suitable way to estimate the parameters of zero-one inflated negative binomial models and may lead to incorrect conclusions.
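
A minimal sketch of the method of moments itself, assuming a plain (non-inflated) negative binomial parameterized by (r, p): the sample mean and variance are matched to their theoretical counterparts. The zero-one inflated model in the paper adds inflation parameters and different moment equations, so the function below is illustrative only.

```python
import numpy as np

def nb_moment_estimates(x):
    """Method-of-moments estimates (r, p) for a plain negative binomial,
    parameterized so that mean = r(1-p)/p and variance = r(1-p)/p**2.
    Illustrative only: the paper's zero-one inflated model has extra
    inflation parameters and different moment equations."""
    m = np.mean(x)
    v = np.var(x, ddof=1)
    if v <= m:
        raise ValueError("sample variance must exceed mean (overdispersion)")
    p_hat = m / v
    r_hat = m * p_hat / (1.0 - p_hat)
    return r_hat, p_hat

rng = np.random.default_rng(0)
sample = rng.negative_binomial(n=5, p=0.4, size=2000)
print(nb_moment_estimates(sample))
```

The failure mode hinted at in the abstract already shows in this simple case: if the sample variance does not exceed the mean, the moment equations have no admissible solution.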

Keywords: zero-one inflated models, negative binomial distribution, moments estimator, non-negative integer sampling

Procedia PDF Downloads 292
529 Practical Techniques of Improving State Estimator Solution

Authors: Kiamran Radjabli

Abstract:

The State Estimator has become an intrinsic part of Energy Management Systems (EMS). The SCADA measurements received from the field are processed by the State Estimator in order to accurately determine the actual operating state of the power system and provide that information to other real-time network applications. All EMS vendors offer State Estimator functionality in their baseline products. However, setting up and ensuring that the State Estimator consistently produces a reliable solution often consumes a substantial engineering effort. This paper provides generic recommendations and describes a simple practical approach to efficient tuning of the State Estimator, based on working experience with major EMS software platforms and consulting projects at many electrical utilities in the USA.
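
Vendor implementations are proprietary, but the core of a power-system State Estimator is textbook weighted least squares on a (linearized) measurement model. The matrices below are toy assumptions, not drawn from any EMS product; a minimal sketch:

```python
import numpy as np

# Minimal weighted-least-squares state estimation on a linear(ized)
# measurement model z = H x + e, the textbook core of an EMS State
# Estimator; matrices here are illustrative, not from any vendor product.
H = np.array([[1.0, 0.0],   # each row maps the state to one SCADA measurement
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([1.02, 0.98, 2.05])          # measured values
sigma = np.array([0.01, 0.01, 0.02])      # measurement standard deviations
W = np.diag(1.0 / sigma**2)               # weights = inverse error variances

# Normal-equations solution: x_hat = (H' W H)^-1 H' W z
G = H.T @ W @ H                           # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
residuals = z - H @ x_hat                 # large residuals flag bad data
print(x_hat, residuals)
```

Tuning, in this framing, largely comes down to the weights W and to monitoring the residuals for bad data.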

Keywords: convergence, monitoring, state estimator, performance, troubleshooting, tuning, power systems

Procedia PDF Downloads 155
528 Ratio Type Estimators for the Estimation of Population Coefficient of Variation under Two-Stage Sampling

Authors: Muhammad Jabbar

Abstract:

In this paper, we propose two ratio and ratio-type exponential estimators for the estimation of the population coefficient of variation using auxiliary information under two-stage sampling. The properties of these estimators are derived up to the first order of approximation. The conditions under which the suggested estimators are more efficient are obtained. Numerical and simulation studies are conducted to support the superiority of the estimators. Both theoretically and numerically, we find that our proposed estimator is always more efficient than its competitors.
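
The exact two-stage estimators are not reproduced in the abstract; as a heavily hedged single-stage illustration of a ratio-type coefficient-of-variation estimator that borrows strength from an auxiliary variable with known mean, consider the following sketch (all names and data are hypothetical):

```python
import numpy as np

def ratio_type_cv(y, x, X_bar):
    """Illustrative ratio-type estimator of the population coefficient of
    variation of y, using an auxiliary variable x with known mean X_bar.
    The paper's two-stage estimators are more elaborate; this single-stage
    form only sketches the idea of borrowing strength from x."""
    cv_hat = np.std(y, ddof=1) / np.mean(y)   # usual sample CV
    return cv_hat * (X_bar / np.mean(x))      # ratio adjustment

rng = np.random.default_rng(1)
x = rng.gamma(4.0, 2.0, size=200)             # auxiliary variable
y = 3.0 * x + rng.normal(0, 2, size=200)      # study variable, correlated with x
print(ratio_type_cv(y, x, X_bar=8.0))         # true mean of gamma(4,2) is 8
```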

Keywords: two-stage sampling, coefficient of variation, ratio-type exponential estimator

Procedia PDF Downloads 527
527 Estimation of Stress-Strength Parameter for Burr Type XII Distribution Based on Progressive Type-II Censoring

Authors: A. M. Abd-Elfattah, M. H. Abu-Moussa

Abstract:

In this paper, the estimation of the stress-strength parameter R = P(Y < X) is considered, where X and Y, the strength and stress respectively, are two independent random variables following the Burr Type XII distribution. The samples taken for X and Y are progressively Type-II censored. The maximum likelihood estimator (MLE) of R is obtained when the common parameter is unknown. When the common parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of R = P(Y < X) are obtained. The exact confidence interval of R based on the MLE is derived. The performance of the proposed estimators is compared using computer simulation.
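
Independent of the censoring machinery, the target quantity R = P(Y < X) is easy to approximate by Monte Carlo when both variables follow Burr Type XII laws, e.g. with scipy.stats.burr12. The shape values below are arbitrary, and the sketch uses complete samples rather than the paper's progressively Type-II censored ones:

```python
import numpy as np
from scipy import stats

# Monte Carlo check of the stress-strength parameter R = P(Y < X) when
# X (strength) and Y (stress) follow Burr Type XII distributions.
rng = np.random.default_rng(2)
c = 2.0                                    # common (known) shape parameter
x = stats.burr12.rvs(c, 4.0, size=100_000, random_state=rng)  # strength
y = stats.burr12.rvs(c, 1.5, size=100_000, random_state=rng)  # stress
print("R_hat =", np.mean(y < x))           # empirical estimate of P(Y < X)
```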

Keywords: Burr Type XII distribution, progressive type-II censoring, stress-strength model, unbiased estimator, maximum-likelihood estimator, uniformly minimum variance unbiased estimator, confidence intervals, Bayes estimator

Procedia PDF Downloads 456
526 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the method commonly used to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, the effect of HLPs needs to be reduced by using a Generalized M (GM) estimator. The existing GM6 estimator is based on the Minimum Volume Ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient method compared to some existing methods.

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 261
525 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model

Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele

Abstract:

The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. The PRM is suitable for studying count variables with appropriate covariates, and it sometimes experiences multicollinearity in the explanatory variables and outliers on the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator called the robust modified jackknife PCKL parameter estimator by combining the principal component estimator, the modified jackknife KL estimator and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and the properties of the estimator were derived. The estimator inherits the characteristics of the combined estimators, thereby making it efficient in addressing both problems, and the work advances this area of study in terms of novelty. The performance of the robust modified jackknife PCKL estimator was compared with existing estimators using the mean squared error (MSE) as a performance evaluation criterion, through a Monte Carlo simulation study and the use of real-life data. The results of the analytical study show that the estimator outperformed the existing estimators it was compared with, having the smallest MSE across all sample sizes, levels of correlation, percentages of outliers and numbers of explanatory variables.

Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator

Procedia PDF Downloads 64
524 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. In practice, however, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments; the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling 'big data' as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
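
The recovery claim can be demonstrated directly: the first n sample moments determine the power sums, Newton's identities convert power sums into elementary symmetric polynomials, and the sample points are the roots of the resulting monic polynomial. A sketch (numerically delicate for large n, since polynomial root finding is ill-conditioned):

```python
import numpy as np

def sample_from_moments(m):
    """Recover n sample points from their first n raw sample moments
    m[k-1] = (1/n) * sum(x_i**k), illustrating the stated theorem.
    Uses Newton's identities to convert power sums into the coefficients
    of the monic polynomial whose roots are the sample points."""
    n = len(m)
    p = n * np.asarray(m, dtype=float)      # power sums p_k = n * m_k
    e = np.zeros(n + 1)                     # elementary symmetric polynomials
    e[0] = 1.0
    for k in range(1, n + 1):               # k*e_k = sum_i (-1)^(i-1) e_{k-i} p_i
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e[k] = s / k
    coeffs = [(-1) ** k * e[k] for k in range(n + 1)]  # x^n - e1 x^(n-1) + ...
    return np.sort(np.roots(coeffs).real)

x = np.array([1.0, 2.5, 4.0, 7.0])
moments = [np.mean(x ** k) for k in range(1, len(x) + 1)]
print(sample_from_moments(moments))         # recovers 1.0, 2.5, 4.0, 7.0
```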

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 165
523 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The kernel distribution function estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator, based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
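
A minimal sketch of a KDFE whose kernel is a linear combination, here a convex mix of the Gaussian and Epanechnikov integrated kernels; the mixing weight and bandwidth are user-chosen illustrations, not the paper's plug-in selections:

```python
import numpy as np
from scipy.stats import norm

def W_epan(u):
    """Integrated (CDF-type) Epanechnikov kernel."""
    u = np.clip(u, -1.0, 1.0)
    return 0.75 * (u - u**3 / 3.0) + 0.5

def kdfe_combo(x_grid, data, h, a=0.5):
    """Kernel distribution function estimate using a two-kernel linear
    combination a*Gaussian + (1-a)*Epanechnikov (weights sum to one)."""
    u = (x_grid[:, None] - data[None, :]) / h
    W = a * norm.cdf(u) + (1.0 - a) * W_epan(u)
    return W.mean(axis=1)

rng = np.random.default_rng(3)
data = rng.normal(size=300)
grid = np.linspace(-3, 3, 7)
print(kdfe_combo(grid, data, h=0.4))   # compare with norm.cdf(grid)
```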

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 580
522 Unit Root Tests Based on the Robust Estimator

Authors: Wararit Panichkitkosolkul

Abstract:

The unit root tests based on the robust estimator for the first-order autoregressive process are proposed and compared with the unit root tests based on the ordinary least squares (OLS) estimator. The percentiles of the null distributions of the unit root tests are also reported. The empirical probabilities of Type I error and the powers of the unit root tests are estimated via Monte Carlo simulation. Simulation results show that all unit root tests can control the probability of Type I error in all situations. The empirical power of the unit root tests based on the robust estimator is higher than that of the unit root tests based on the OLS estimator.

Keywords: autoregressive, ordinary least squares, Type I error, power of the test, Monte Carlo simulation

Procedia PDF Downloads 287
521 Characteristic Function in Estimation of Probability Distribution Moments

Authors: Vladimir S. Timofeev

Abstract:

In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using statistical simulation, the author shows that the new approach has some robustness properties. The derivatives of the characteristic function are computed by numerical differentiation. The results obtained confirm that the proposed idea works efficiently and can be recommended for statistical applications.
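
The idea can be sketched with the empirical characteristic function: since phi'(0) = i*m1 and phi''(0) = -m2, central finite differences at t = 0 recover the first two raw moments after dividing out the powers of i. The step size and difference scheme below are illustrative choices, not the author's:

```python
import numpy as np

def moments_from_ecf(x, h=1e-3):
    """First two raw moments estimated from the empirical characteristic
    function phi(t) = mean(exp(i*t*x)) via central finite differences at
    t = 0. Since phi'(0) = i*m1 and phi''(0) = -m2, the powers of i are
    divided out before taking real parts."""
    phi = lambda t: np.mean(np.exp(1j * t * x))
    d1 = (phi(h) - phi(-h)) / (2 * h)              # approximates phi'(0)
    d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2  # approximates phi''(0)
    m1 = (d1 / 1j).real
    m2 = (-d2).real
    return m1, m2

rng = np.random.default_rng(4)
x = rng.normal(loc=2.0, scale=1.0, size=50_000)
print(moments_from_ecf(x))   # close to (2.0, 5.0), since E[X^2] = 4 + 1
```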

Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation

Procedia PDF Downloads 503
520 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling

Authors: Muhammad Nouman Qureshi, Muhammad Hanif

Abstract:

Adaptive cluster sampling (ACS) is specifically developed for the estimation of highly clumped populations and is applied to a wide range of situations such as animals of rare and endangered species, unevenly distributed minerals, HIV patients and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. The expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator have been made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.

Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation

Procedia PDF Downloads 236
519 On Estimating the Headcount Index by Using the Logistic Regression Estimator

Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz, Francisco J. Blanco-Encomienda

Abstract:

The problem of estimating a proportion has important applications in economics and, in general, in many areas of the social sciences. A common application in economics is the estimation of the headcount index. In this paper, we define the general headcount index as a proportion. Furthermore, we introduce a new quantitative method for estimating the headcount index: we suggest using the logistic regression estimator. Using a real data set, results derived from Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the traditional estimator of the headcount index.
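
A sketch of the general idea on assumed synthetic data: fit a logistic regression of the poverty indicator on auxiliary covariates known for the whole population, then estimate the proportion as the mean predicted probability (scikit-learn is assumed here; the paper's survey-weighted details are omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic-regression estimator of a proportion (here, a headcount-style
# indicator y): fit on an observed sample, then average predicted
# probabilities over the population covariates. All data are synthetic.
rng = np.random.default_rng(5)
X_pop = rng.normal(size=(10_000, 2))                   # population covariates
y_pop = (X_pop @ np.array([1.5, -1.0]) + rng.logistic(size=10_000) < 0).astype(int)

idx = rng.choice(10_000, size=500, replace=False)      # observed sample
model = LogisticRegression().fit(X_pop[idx], y_pop[idx])

p_hat_logit = model.predict_proba(X_pop)[:, 1].mean()  # model-assisted estimate
p_hat_naive = y_pop[idx].mean()                        # customary sample proportion
print(p_hat_logit, p_hat_naive, y_pop.mean())
```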

Keywords: poverty line, poor, risk of poverty, Monte Carlo simulations, sample

Procedia PDF Downloads 421
518 Face Recognition Using Discrete Orthogonal Hahn Moments

Authors: Fatima Akhmedova, Simon Liao

Abstract:

One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work, we propose a set of Hahn moments as a new approach to feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy and ability to extract features both globally and locally. To assess the applicability of Hahn moments to face recognition, we conduct two experiments on the Olivetti Research Laboratory (ORL) database and the University of Notre Dame (UND) X1 biometric collection. The fusion of the global features with the features from local facial regions is used as input to a conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database.

Keywords: face recognition, Hahn moments, recognition-by-parts, time-lapse

Procedia PDF Downloads 374
517 Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects

Authors: Abraham Terán Salcedo, Didier Samayoa Ochoa

Abstract:

This work presents a numerical implementation of a method, named the fractioning estimator, for estimating the box-counting dimension of self-avoiding curves on a planar space, i.e., fractal objects captured in digital images. Classical digital image processing techniques, such as noise filtering, contrast manipulation and thresholding, are used to obtain binary images suitable for the computations of the fractioning estimator. A user interface is developed for performing the image processing operations and testing the fractioning estimator on different captured images of real-life fractal objects. To analyze the results, the estimates obtained through the fractioning estimator are compared to those obtained through other methods already implemented in available software for computing and estimating the box-counting dimension.
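
For reference, the classical box-counting computation that such estimators are benchmarked against can be written compactly; this is the comparison method, not the paper's fractioning estimator:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Standard box-counting estimate of the fractal dimension of a binary
    image: count occupied boxes at several box sizes and fit the slope of
    log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        # trim so the image tiles exactly, then pool s-by-s blocks
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.any(axis=(1, 3))).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Quick check on a filled square (dimension close to 2)
img = np.zeros((64, 64), dtype=bool)
img[8:56, 8:56] = True
print(box_counting_dimension(img))
```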

Keywords: box-counting, digital image processing, fractal dimension, numerical method

Procedia PDF Downloads 82
516 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

Authors: R. Sabre, W. Horrigue, J. C. Simon

Abstract:

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe a process over a whole time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. To resolve the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the band where the spectral density is not zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimator that resolves both difficulties: the choice of the observation instants of a continuous-time process, and observations affected by a constant error.
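
The smoothing step can be sketched generically: compute the raw periodogram, then smooth it across frequencies. A simple moving-average (Daniell) window stands in below for the polynomial Jackson kernel, and regularly spaced observations stand in for the paper's carefully chosen sampling instants:

```python
import numpy as np

# Raw periodogram of a discretely observed signal with additive noise,
# then spectral smoothing by a moving-average (Daniell) window.
rng = np.random.default_rng(11)
n = 1024
t = np.arange(n)
x = np.sin(0.3 * t) + rng.normal(0, 1, n)        # signal plus additive error

fft = np.fft.rfft(x - x.mean())
periodogram = (np.abs(fft) ** 2) / (2 * np.pi * n)

m = 5                                            # half-width of the window
kernel = np.ones(2 * m + 1) / (2 * m + 1)
smoothed = np.convolve(periodogram, kernel, mode="same")
freqs = np.fft.rfftfreq(n, d=1.0) * 2 * np.pi    # angular frequencies
print(freqs[np.argmax(smoothed)])                # near the 0.3 rad/sample peak
```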

Keywords: spectral density, stable processes, aliasing, periodogram

Procedia PDF Downloads 137
515 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution

Authors: Al Omari Mohammed Ahmed

Abstract:

This paper focuses on the maximum likelihood estimator with covariates. Covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, the shape parameter, the survival function and the hazard rate of the Weibull regression distribution are estimated by maximum likelihood from right-censored data. The mean square error (MSE) and absolute bias are used to assess the performance of the Weibull regression estimates. For the simulation comparison, the study uses various sample sizes and several specific values of the Weibull shape parameter.
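
A minimal sketch of the likelihood being maximized, assuming the common parameterization with per-subject scale lambda_i = exp(x_i beta) and shape k: events contribute log f(t), and right-censored times contribute log S(t) = -(t/lambda)^k. The simulation settings are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, d, X):
    """Negative log-likelihood of a Weibull regression model with right
    censoring: scale lambda_i = exp(X_i beta), shape k.
    d_i = 1 for an observed event, 0 for a censored time."""
    k = np.exp(params[0])                 # shape, kept positive via exp
    beta = params[1:]
    lam = np.exp(X @ beta)                # per-subject scale
    z = (t / lam) ** k
    # events contribute log f(t); censored times contribute log S(t) = -z
    ll = np.sum(d * (np.log(k) - k * np.log(lam) + (k - 1) * np.log(t)) - z)
    return -ll

rng = np.random.default_rng(6)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + covariate
beta_true, k_true = np.array([0.5, -0.7]), 1.5
t_event = rng.weibull(k_true, n) * np.exp(X @ beta_true)
c = rng.exponential(3.0, n)                             # censoring times
t, d = np.minimum(t_event, c), (t_event <= c).astype(float)

res = minimize(neg_loglik, x0=np.zeros(3), args=(t, d, X), method="BFGS")
print("k_hat =", np.exp(res.x[0]), "beta_hat =", res.x[1:])
```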

Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring

Procedia PDF Downloads 440
514 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

In analyzing large-scale recurrent event data, many challenges arise, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual data set. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
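
The combination step can be sketched with a deliberately simple stand-in model (an exponential rate rather than a frailty model): estimate on each subset, then average with inverse-variance weights:

```python
import numpy as np

# Divide-and-conquer sketch: split the data into subsets, compute the MLE
# on each, and combine with precision (inverse-variance) weights. The
# exponential-rate MLE stands in for the paper's frailty-model fits, which
# are far heavier; the combination logic is the point being illustrated.
rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=1_000_000)   # true rate = 0.5

subset_est, subset_w = [], []
for chunk in np.array_split(data, 100):             # 100 subsets
    lam_hat = 1.0 / chunk.mean()                    # per-subset MLE of the rate
    var_hat = lam_hat**2 / len(chunk)               # asymptotic variance of MLE
    subset_est.append(lam_hat)
    subset_w.append(1.0 / var_hat)

w = np.asarray(subset_w) / np.sum(subset_w)
combined = np.sum(w * np.asarray(subset_est))       # weighted final estimator
print(combined, 1.0 / data.mean())                  # vs. full-data MLE
```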

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 164
513 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 383
512 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion and environmental changes, resulting in low-quality videos and thus affecting the efficiency of feature extraction and description. Moreover, traditional text detection methods cannot be directly applied to lecture videos; robust feature extraction methods dedicated to this specific video genre are therefore required for accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are employed: orthogonal and non-orthogonal. For the orthogonal case, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal case, Hu moments are used. Their expressivity and description efficiency are discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.

Keywords: text detection, text localization, lecture videos, pseudo-Zernike moments

Procedia PDF Downloads 150
511 Developing Variable Repetitive Group Sampling Control Chart Using Regression Estimator

Authors: Liaquat Ahmad, Muhammad Aslam, Muhammad Azam

Abstract:

In this article, we propose a control chart for the location parameter based on a repetitive group sampling scheme. The charting scheme is based on the regression estimator, an estimator that capitalizes on the relationship between the variables of interest to provide more sensitive control than the commonly used individual-variable charts. The control limit coefficients have been estimated for different sample sizes for weakly and highly correlated variables. The monitoring of the production process follows the procedure of the Shewhart x-bar control chart. Performance is verified by average run length calculations when a shift occurs in the average value of the estimator. It has been observed that weakly correlated variables produce false alarms more rapidly.
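
A sketch of the charting statistic and a simulation-based calibration of Shewhart-style limits; the correlation structure, subgroup size and 3-sigma limits below are illustrative assumptions, not the paper's tabulated coefficients:

```python
import numpy as np

def regression_estimator(y, x, mu_x):
    """Regression estimator of the subgroup mean of y, exploiting a
    correlated variable x with known in-control mean mu_x."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y.mean() + b * (mu_x - x.mean())

# Simulate in-control subgroups, compute the regression estimator for
# each, and set Shewhart-style 3-sigma limits from its distribution.
rng = np.random.default_rng(8)
mu_x, n, m = 5.0, 10, 2000
stats = []
for _ in range(m):
    x = rng.normal(mu_x, 1.0, n)
    y = 2.0 + 0.8 * x + rng.normal(0, 0.5, n)   # y correlated with x
    stats.append(regression_estimator(y, x, mu_x))
stats = np.asarray(stats)
cl, sd = stats.mean(), stats.std(ddof=1)
print("CL =", cl, "UCL =", cl + 3 * sd, "LCL =", cl - 3 * sd)
```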

Keywords: average run length, control charts, process shift, regression estimators, repetitive group sampling

Procedia PDF Downloads 564
510 Capture-Recapture to Estimate Completeness of Pulmonary Tuberculosis with Two Sources

Authors: Ratchadaporn Ungcharoen, Lily Ingsrisawang

Abstract:

Capture-recapture methods are popular techniques for indirectly estimating the size of wildlife populations and the completeness of case registration in epidemiology and the social sciences. The aim of this study was to estimate the completeness of pulmonary tuberculosis cases confirmed by two sources, hospital registrations and surveillance systems, in 2013 in Nakhon Pathom province, Thailand. Several estimators of population size were considered: the Lincoln-Petersen estimator, the Chapman estimator, Chao's lower bound estimator, Zelterman's estimator, etc. We focus on the Chapman and Chao's lower bound estimators for estimating the completeness of pulmonary tuberculosis from two sources. The retrieved pulmonary tuberculosis data from the two sources were analyzed and bootstrapped into 30 samples, with 241 observations from source 1 and 305 observations from source 2 per sample, for additional exploration of completeness. The results from the original data show that the Chapman estimator gave an estimated total of 360 (95% CI: 349-371) pulmonary tuberculosis cases, corresponding to an estimated completeness of 57%, while Chao's lower bound estimator gave an estimated total of 365 (95% CI: 354-376) cases and an estimated completeness of 55.9%. For the bootstrap samples, the Chapman and Chao's lower bound estimators gave estimates of 347 (95% CI: 309-385) and 353 (95% CI: 315-390) pulmonary tuberculosis cases, respectively. When recording systems for two sources are available, record linkage and capture-recapture analysis can be useful for estimating the completeness of different registration systems. Both the Chapman and Chao's lower bound approaches produce very close estimates.
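
Both point estimators are one-liners. The overlap between the two sources is not reported in the abstract, so m below is a hypothetical value (chosen so that Chapman's estimate lands near the reported 360); the Chao formula is the general two-list lower bound, and the paper's exact variant may differ:

```python
def chapman(n1, n2, m):
    """Chapman's nearly unbiased two-source estimator of population size:
    n1, n2 = cases found by each source, m = cases found by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def chao_lower_bound(n1, n2, m):
    """Chao's lower bound for two lists: distinct cases observed plus
    f1**2 / (2*f2), where f1 = cases seen by exactly one source and
    f2 = m = cases seen by both."""
    distinct = n1 + n2 - m
    f1 = (n1 - m) + (n2 - m)
    return distinct + f1**2 / (2 * m)

# Source sizes from the abstract; the overlap m = 204 is hypothetical.
n1, n2, m = 241, 305, 204
print(chapman(n1, n2, m), chao_lower_bound(n1, n2, m))
```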

Keywords: capture-recapture, Chao, Chapman, pulmonary tuberculosis

Procedia PDF Downloads 516
509 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference that is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with each other as well as with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to use prior information about the problem in order to solve it accurately. Here, the gamma prior is used as the prior distribution of the exponential parameter for finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, namely the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function and the non-linear exponential (NLINEX) loss function. Finally, the mean square errors (MSE) of the estimators are obtained and presented graphically.
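
For the squared error loss, the Bayes rule has a closed form worth stating: with a Gamma(a, b) prior on the exponential rate theta and data x_1, ..., x_n, the posterior is Gamma(a + n, b + sum(x)), and the SE-loss Bayes estimator is the posterior mean. A sketch comparing it with the MLE (prior values are illustrative; the MLINEX/NLINEX rules need further posterior expectations and are omitted):

```python
import numpy as np

# Conjugate sketch: exponential data with rate theta, Gamma(a, b) prior,
# posterior Gamma(a + n, b + sum(x)). Under squared error loss the Bayes
# estimator is the posterior mean; the MLE is n / sum(x).
rng = np.random.default_rng(9)
theta_true = 1.5
x = rng.exponential(scale=1.0 / theta_true, size=50)

a, b = 2.0, 1.0                          # gamma prior hyperparameters
n, s = len(x), x.sum()
theta_bayes_se = (a + n) / (b + s)       # posterior mean (SE-loss Bayes rule)
theta_mle = n / s
print(theta_bayes_se, theta_mle)
```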

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 383
508 Sorting Fish by Hu Moments

Authors: J. M. Hernández-Ontiveros, E. E. García-Guerrero, E. Inzunza-González, O. R. López-Bonilla

Abstract:

This paper presents the implementation of an algorithm that identifies and counts different fish species: catfish, sea bream, sawfish, tilapia, and totoaba. The main contribution of the method is the fusion of the position, rotation and scale invariance of the Hu moments with the proper counting of fish. Identification and counting are performed on images under different noise conditions. The experimental results obtained suggest the potential of the proposed algorithm to be applied in different scenarios of aquaculture production.
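
A sketch of the invariant-feature step, assuming OpenCV: compute log-scaled Hu moments of a binary silhouette and match a query to the nearest stored species vector. The toy shapes stand in for real fish silhouettes:

```python
import cv2
import numpy as np

def hu_features(binary_img):
    """Seven Hu invariant moments of a binary silhouette, log-scaled as is
    common practice so the features share a comparable range."""
    hu = cv2.HuMoments(cv2.moments(binary_img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy silhouettes standing in for fish shapes; matching a query silhouette
# to the nearest stored species vector is the core of the sorting step.
a = np.zeros((64, 64), np.uint8); cv2.circle(a, (32, 32), 20, 255, -1)
b = np.zeros((64, 64), np.uint8); cv2.ellipse(b, (32, 32), (28, 10), 0, 0, 360, 255, -1)
query = np.roll(a, 5, axis=1)            # shifted circle: Hu moments are shift-invariant

feats = {"round": hu_features(a), "elongated": hu_features(b)}
d = {k: np.linalg.norm(hu_features(query) - v) for k, v in feats.items()}
print(min(d, key=d.get))                 # -> "round"
```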

Keywords: counting fish, digital image processing, invariant moments, pattern recognition

Procedia PDF Downloads 407
507 Estimation of Population Mean Using Characteristics of Poisson Distribution: An Application to Earthquake Data

Authors: Prayas Sharma

Abstract:

This paper proposes a generalized class of estimators, an exponential class of estimators based on adaptations of Sharma and Singh (2015) and Solanki and Singh (2013), and a simple difference estimator for estimating the unknown population mean of a Poisson-distributed population under simple random sampling without replacement. The expressions for the mean square errors of the proposed classes of estimators are derived to the first order of approximation. It is shown that the adapted version of Solanki and Singh (2013), the exponential class of estimators, is always more efficient than the usual estimator and the ratio, product, exponential ratio, and exponential product type estimators, and is equally efficient to the simple difference estimator. Moreover, the adapted version of Sharma and Singh's (2015) estimator is always more efficient than all the estimators available in the literature. The theoretical findings are supported by an empirical study showing the superiority of the constructed estimators, with an application to earthquake data from Turkey.

Keywords: auxiliary attribute, point bi-serial, mean square error, simple random sampling, Poisson distribution

Procedia PDF Downloads 154
506 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling

Authors: Saba Riaz, Syed A. Hussain

Abstract:

This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and asymptotic variances of the suggested class are derived under a large-sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator, the classical ratio estimator, the classical product estimator and the classical linear regression estimator. It is also found to be more efficient than some recently published estimators.

Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency

Procedia PDF Downloads 224
505 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is endogeneity bias in the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, an extension of the Arellano-Bond model in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations (the original equation and the transformed one) and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data from 14 Finnish cities over 1988-2012, the estimates of short-run housing price dynamics differed considerably depending on the model and instrumenting used. In particular, the choice of instrumental variables caused variation in the model estimates and their statistical significance. This was especially clear when comparing OLS estimates with those from different dynamic panel data models. The estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1506
504 Content-Based Color Image Retrieval Based on the 2-D Histogram and Statistical Moments

Authors: El Asnaoui Khalid, Aksasse Brahim, Ouanan Mohammed

Abstract:

In this paper, we are interested in the problem of finding similar images in a large database. For this purpose, we propose a new algorithm based on a combination of the 2-D histogram intersection in the HSV space and statistical moments. The proposed histogram is based on a 3x3 window rather than on the intensity of a single pixel. This approach overcomes the drawback of the conventional 1-D histogram, which ignores the spatial distribution of pixels in the image, while the statistical moments are used to escape the effects of the discretisation of the color space that is intrinsic to the use of histograms. We compare the performance of our new algorithm to various state-of-the-art methods and show that it has several advantages: it is fast, consumes little memory and requires no learning. To validate our results, we apply the algorithm to searches for similar images in different image databases.
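
A sketch of the histogram-intersection half of the method on synthetic HSV-like arrays; the paper's 3x3-window histogram and moment features are omitted, and bin counts are illustrative:

```python
import numpy as np

def hsv_hist_2d(img_hsv, bins=(16, 16)):
    """Normalized 2-D hue-saturation histogram of an HSV image array;
    a stand-in for the paper's windowed 2-D histogram."""
    h = np.histogram2d(img_hsv[..., 0].ravel(), img_hsv[..., 1].ravel(),
                       bins=bins, range=((0, 180), (0, 256)))[0]
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 means identical histograms."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(10)
img1 = rng.integers(0, 180, size=(64, 64, 3))        # synthetic HSV-like array
img2 = np.clip(img1 + rng.integers(-5, 6, size=img1.shape), 0, 179)  # perturbed copy
print(intersection(hsv_hist_2d(img1), hsv_hist_2d(img2)))
```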

Keywords: 2-D histogram, statistical moments, indexing, similarity distance, histograms intersection

Procedia PDF Downloads 455
503 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations

Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz

Abstract:

The estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people in a population classified as poor. In this paper, we present this poverty indicator and propose to use the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. The results indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs. The stratified sampling design can also provide more accurate results.

Keywords: poverty line, risk of poverty, auxiliary variable, ratio method

Procedia PDF Downloads 455
502 Friction Estimation and Compensation for Steering Angle Control for Highly Automated Driving

Authors: Marcus Walter, Norbert Nitzsche, Dirk Odenthal, Steffen Müller

Abstract:

This contribution presents a friction estimator for industrial purposes which identifies Coulomb friction in a steering system. The estimator only needs a few, usually known, steering system parameters. Friction occurs on almost every mechanical system and has a negative influence on high-precision position control. This is demonstrated on a steering angle controller for highly automated driving. In this steering system the friction induces limit cycles which cause oscillating vehicle movement when the vehicle follows a given reference trajectory. When compensating the friction with the introduced estimator, limit cycles can be suppressed. This is demonstrated by measurements in a series vehicle.

Keywords: friction estimation, friction compensation, steering system, lateral vehicle guidance

Procedia PDF Downloads 514
501 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science and one of the quantitative tools that allows insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (claims incurred but not yet reported to the insurer), where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under the entropy loss, which is asymmetric, and the squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level, updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), one of the suitable approximation methods for such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error technique is used to compare the Bayesian premium estimator under the above loss functions.

Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation

Procedia PDF Downloads 333