Search results for: regression estimators
3276 Behind Fuzzy Regression Approach: An Exploration Study
Authors: Lavinia B. Dulla
Abstract:
The exploration study of the fuzzy regression approach sets out to show that fuzzy regression can serve as a possible alternative to classical regression. It likewise assesses the differences and characteristics of simple linear regression and fuzzy regression using the width of the prediction interval, the mean absolute deviation, and the variance of residuals. Based on the simple linear regression model, the fuzzy regression approach is worth considering as an alternative to simple linear regression when the sample size is between 10 and 20. As the sample size increases, the fuzzy regression approach is no longer appropriate, since the large-sample assumption underlying simple linear regression already applies. Nonetheless, it can be suggested as a practical alternative when decisions often have to be made on the basis of small data sets.
Keywords: fuzzy regression approach, minimum fuzziness criterion, interval regression, prediction interval
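As a concrete illustration of the minimum fuzziness criterion named in the keywords, below is a minimal sketch of a Tanaka-style fuzzy linear regression fitted by linear programming. The formulation (symmetric triangular coefficients, h-level inclusion constraints) is the textbook one and may differ from the paper's; all names are our own.

```python
import numpy as np
from scipy.optimize import linprog

def tanaka_fuzzy_regression(x, y, h=0.0):
    """Minimum-fuzziness fuzzy linear regression (Tanaka-style).

    Decision variables: centres c0, c1 and non-negative spreads s0, s1
    of triangular fuzzy coefficients. Total spread is minimised subject
    to every observation lying inside the fuzzy band at level h.
    """
    n, ax = len(x), np.abs(x)
    cost = np.array([0.0, 0.0, n, ax.sum()])          # minimise total spread
    A_ub, b_ub = [], []
    for xi, axi, yi in zip(x, ax, y):
        # band c0 + c1*x +/- (1-h)*(s0 + s1*|x|) must contain y
        A_ub.append([-1.0, -xi, -(1 - h), -(1 - h) * axi]); b_ub.append(-yi)
        A_ub.append([ 1.0,  xi, -(1 - h), -(1 - h) * axi]); b_ub.append( yi)
    bounds = [(None, None), (None, None), (0, None), (0, None)]
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x                                       # c0, c1, s0, s1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
print(tanaka_fuzzy_regression(x, y))
```

Because the spreads must widen until every point is covered, the fuzzy band reacts to each observation individually, which is consistent with the abstract's finding that the approach is most attractive at small sample sizes.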
Procedia PDF Downloads 298
3275 Methods of Variance Estimation in Two-Phase Sampling
Authors: Raghunath Arnab
Abstract:
Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. In the second phase, a smaller sample is selected, either from the first-phase sample or from the entire population, by using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the strength of the relationship between the two variables is high. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how one can use data collected in the first phase at the stages of estimation, stratification, and sample selection (and their combinations) in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other things. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for estimating confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are also studied in detail.
Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators
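For readers unfamiliar with the setup, here is a minimal sketch of the classical ratio estimator under double sampling, with a Cochran-style variance approximation; the function and variable names are our own, and the finite-population correction is ignored.

```python
import numpy as np

def double_sampling_ratio(x1, x2, y2):
    """Ratio estimator of the population mean of y under two-phase sampling.

    x1: auxiliary values from the large first-phase sample (size n')
    x2, y2: auxiliary and study values from the second-phase subsample (size n)
    """
    n_p, n = len(x1), len(y2)
    R = y2.mean() / x2.mean()                 # sample ratio from phase two
    est = R * x1.mean()                       # scaled by the phase-one auxiliary mean
    d = y2 - R * x2                           # residuals about the ratio line
    var = (1/n - 1/n_p) * d.var(ddof=1) + y2.var(ddof=1) / n_p
    return est, var

rng = np.random.default_rng(0)
x1 = rng.gamma(4.0, 2.0, size=1000)           # cheap auxiliary variable, phase one
idx = rng.choice(1000, size=100, replace=False)
x2 = x1[idx]
y2 = 3.0 * x2 + rng.normal(0, 2.0, size=100)  # expensive study variable, phase two
print(double_sampling_ratio(x1, x2, y2))
```

Note that the variance can only be estimated, never observed; as the abstract stresses, a variance estimator that goes negative is unusable for confidence intervals, which motivates the non-negativity study.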
Procedia PDF Downloads 588
3274 Refined Procedures for Second Order Asymptotic Theory
Authors: Gubhinder Kundhi, Paul Rilstone
Abstract:
Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions. As an application, these techniques are used to obtain improved inferences for the weak instruments problem. It is well established that instrumental variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. In our application, finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
Keywords: Edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments
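The weak-instrument bias the abstract refers to is easy to reproduce. The sketch below simulates a just-identified IV model with an endogenous regressor and compares the 2SLS estimator under a weak and a strong instrument; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 100, 5000, 1.0
cov = [[1.0, 0.8], [0.8, 1.0]]                # corr(u, v) = 0.8 drives endogeneity

for pi in (0.05, 0.5):                        # weak vs strong first stage
    b_iv = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=n)                # instrument
        u, v = rng.multivariate_normal([0, 0], cov, size=n).T
        x = pi * z + v                        # endogenous regressor
        y = beta * x + u
        b_iv[r] = (z @ y) / (z @ x)           # just-identified 2SLS estimator
    print(f"pi={pi}: mean bias={b_iv.mean() - beta:+.3f}, "
          f"median bias={np.median(b_iv) - beta:+.3f}")
```

With pi = 0.05 the sampling distribution is heavy-tailed and visibly non-normal, which is exactly the regime where Edgeworth and saddlepoint corrections to the first-order approximation matter.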
Procedia PDF Downloads 277
3273 Optimization of Machine Learning Regression Results: An Application on Health Expenditures
Authors: Songul Cinaroglu
Abstract:
Machine learning regression methods are recommended as an alternative to classical regression methods in the presence of variables that are difficult to model. Health expenditure data are typically non-normal and heavily skewed. This study aims to compare machine learning regression methods, tuned by hyperparameter search, for predicting health expenditure per capita. A multiple regression model was fitted, and the performance of Lasso regression, random forest regression, and support vector machine regression was recorded under different hyperparameter settings. The hyperparameters were the lambda (λ) value for Lasso regression, the number of trees for random forest regression, and the epsilon (ε) value for support vector regression. Results obtained using k-fold cross-validation, with k varied from 5 to 50, indicate that the differences between the machine learning regression methods in terms of R², RMSE, and MAE are statistically significant (p < 0.001). The results reveal that random forest regression (R² > 0.7500, RMSE ≤ 0.6000 and MAE ≤ 0.4000) outperforms the other machine learning regression methods. It is highly advisable to use machine learning regression methods for modelling health expenditures.
Keywords: machine learning, lasso regression, random forest regression, support vector regression, hyperparameter tuning, health expenditure
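A minimal sklearn sketch of the tuning loop described here (the dataset is synthetic, and sklearn's `alpha` plays the role of the paper's λ for the Lasso):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

models = {
    "lasso": (Lasso(max_iter=10000), {"alpha": [0.01, 0.1, 1.0]}),
    "rf": (RandomForestRegressor(random_state=0), {"n_estimators": [100, 300, 500]}),
    "svr": (SVR(), {"epsilon": [0.01, 0.1, 1.0]}),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)  # the study varies k from 5 to 50
for name, (est, grid) in models.items():
    search = GridSearchCV(est, grid, cv=cv, scoring="r2").fit(X, y)
    print(name, search.best_params_, round(search.best_score_, 4))
```

Re-running the loop with different `n_splits` values reproduces the kind of k-sensitivity analysis the study reports.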
Procedia PDF Downloads 226
3272 Parameters Estimation of Power Function Distribution Based on Selective Order Statistics
Authors: Moh'd Alodat
Abstract:
In this paper, we discuss the power function distribution and derive the maximum likelihood estimator of its parameter as well as of the reliability parameter. We derive the large-sample properties of the estimators based on the selective order statistic scheme. We conduct simulation studies to investigate the significance of the selective order statistic scheme in our setup and to compare the efficiency of the newly proposed estimators.
Keywords: Fisher information, maximum likelihood estimator, power function distribution, ranked set sampling, selective order statistics sampling
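For reference, the maximum likelihood estimators for the power function distribution f(x; a, θ) = a·x^(a−1)/θ^a on (0, θ) under plain random sampling are closed-form; a small sketch follows (the paper's selective order statistic scheme would modify the likelihood, which is not attempted here):

```python
import numpy as np

def power_mle(x):
    """MLE for the power function distribution on (0, theta)."""
    theta_hat = x.max()                                # MLE of the upper endpoint
    a_hat = len(x) / np.log(theta_hat / x).sum()       # MLE of the shape parameter
    return a_hat, theta_hat

rng = np.random.default_rng(2)
x = 5.0 * rng.uniform(size=200) ** (1 / 3.0)           # inverse-CDF draw: theta=5, a=3
print(power_mle(x))
```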
Procedia PDF Downloads 464
3271 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model
Authors: Autcha Araveeporn
Abstract:
This paper presents a study of a nonparametric regression model estimated by two techniques: the smoothing spline method and the penalized spline regression method. We compare the two techniques for estimation and prediction within the nonparametric regression model, trying both on crude oil prices in dollars per barrel and on the Stock Exchange of Thailand (SET) index. According to the results, it is concluded that the smoothing spline method performs better than the penalized spline regression method.
Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)
Procedia PDF Downloads 440
3270 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test may be unavailable for all study subjects, due to the expense or invasiveness of the GS test; thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of the VUS are proposed, namely full imputation, mean score imputation, inverse probability weighting, and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
Procedia PDF Downloads 416
3269 The Effects of the Introduction of a One-day Waiting Period on Absences for Ordinary Illness of Public Employees
Authors: Mohamed Ali Ben Halima, Malik Koubi, Joseph Lanfranchi, Yohan Wloczysiak
Abstract:
This article assesses the consequences, for the frequency and duration of ordinary sick leave, of the January 2012 and 2018 reforms modifying the scope of sick leave reimbursement in the French civil service. These reforms introduced a one-day waiting period, which removes compensation for the first day of ordinary sick leave. To evaluate these reforms, we use an administrative database from the National Pension Fund for local public employees (FPT). The first important result of our data analysis is that the one-day waiting period was not introduced at the same time across French local public service establishments, and in some it was never introduced at all. This peculiarity allows an identification strategy using a difference-in-differences method, based on defining, at each date, groups of employees treated and not treated by the reform, since establishments that apply the one-day waiting period coexist with establishments that do not. Two types of estimators are used for this evaluation: individual and time fixed-effects estimators, and DIDM estimators, which correct for the biases of the two-way fixed-effects estimator. The results confirm that the change in the sick pay system decreases the probability of having at least one ordinary sick leave, as well as the number and duration of these episodes. On the other hand, the estimates show that longer leave episodes are not less affected than shorter ones. Finally, the validity tests of the estimators support the results obtained for the second period, 2018-2019, but suggest estimation biases for the period 2012-2013. The extent to which the endogeneity of local implementation choices affects these estimates remains to be tested further.
Keywords: sick leave, one-day waiting period, territorial civil service, public policy evaluation
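As a stylized sketch of the first estimator type (unit and time fixed effects with staggered adoption and clustered standard errors), using statsmodels on synthetic data; the DIDM correction for heterogeneous treatment effects is not implemented here:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
units, years = 50, 6
df = pd.DataFrame([(i, t) for i in range(units) for t in range(years)],
                  columns=["unit", "year"])
adopt = rng.integers(2, years + 2, size=units)         # staggered (or never) adoption
df["d"] = (df["year"].to_numpy() >= adopt[df["unit"].to_numpy()]).astype(int)
df["y"] = (0.5 * df["unit"] / units + 0.2 * df["year"]
           - 0.3 * df["d"] + rng.normal(0, 0.5, len(df)))

twfe = smf.ols("y ~ d + C(unit) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(twfe.params["d"], twfe.bse["d"])                 # two-way FE estimate of the reform
```

The abstract's point about DIDM estimators is precisely that this two-way fixed-effects coefficient can be biased when treatment timing and effects are heterogeneous, so the simple regression above should be read as the baseline, not the final word.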
Procedia PDF Downloads 83
3268 Proficient Estimation Procedure for a Rare Sensitive Attribute Using Poisson Distribution
Authors: S. Suman, G. N. Singh
Abstract:
The present manuscript addresses the estimation of a population parameter using the Poisson probability distribution when the characteristic under study is a rare sensitive attribute. A generalized form of the unrelated randomized response model is suggested in order to elicit truthful responses from respondents. Estimators are proposed for two situations: when information on an unrelated rare non-sensitive characteristic is known, and when it is unknown. The properties of the proposed estimators are derived, and a measure of respondent confidentiality is also suggested. Empirical studies are carried out in support of the discussed theory.
Keywords: Poisson distribution, randomized response model, rare sensitive attribute, non-sensitive attribute
Procedia PDF Downloads 266
3267 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated; this algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results show the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, using goodness-of-fit indexes for smoothing parameter selection gives similar results, albeit with an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
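Orthogonal regression for a single predictor has a compact closed form via the SVD (total least squares); here is a minimal sketch of the building block the first algorithm applies to each linear portion of the spline:

```python
import numpy as np

def orthogonal_regression(x, y):
    """Total least squares line y = a + b*x, allowing errors in both variables."""
    xm, ym = x.mean(), y.mean()
    M = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    nx, ny = Vt[-1]                   # normal vector of the best-fit line
    b = -nx / ny                      # assumes the line is not vertical
    return ym - b * xm, b             # intercept, slope

rng = np.random.default_rng(4)
t = rng.uniform(0, 10, 200)           # latent true predictor
x = t + rng.normal(0, 0.5, 200)       # observed with error
y = 2.0 + 0.7 * t + rng.normal(0, 0.5, 200)
print(orthogonal_regression(x, y))    # slope near 0.7; OLS would attenuate it
```

The comparison in the last comment is the motivation for errors-in-variables methods: ordinary least squares on a noisy x biases the slope toward zero.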
Procedia PDF Downloads 416
3266 Capture-recapture to Estimate Completeness of Pulmonary Tuberculosis with Two Sources
Authors: Ratchadaporn Ungcharoen, Lily Ingsrisawang
Abstract:
Capture-recapture methods are popular techniques for indirect estimation of the size of wildlife populations and of the completeness of cases in epidemiology and the social sciences. The aim of this study was to estimate the completeness of pulmonary tuberculosis cases confirmed by two sources, hospital registrations and surveillance systems, in 2013 in Nakhon Pathom province, Thailand. Several estimators of population size were considered: the Lincoln-Petersen estimator, the Chapman estimator, Chao's lower bound estimator, Zelterman's estimator, etc. We focus on the Chapman and Chao lower bound estimators for estimating the completeness of pulmonary tuberculosis from two sources. The retrieved pulmonary tuberculosis data from the two sources were analyzed and bootstrapped for 30 samples, with 241 observations from source 1 and 305 observations from source 2 per sample, for additional exploration of the completeness of pulmonary tuberculosis. The results from the original data show that the Chapman estimator gave an estimated total of 360 (95% CI: 349-371) pulmonary tuberculosis cases, corresponding to 57% estimated completeness, while Chao's lower bound estimator gave a total of 365 (95% CI: 354-376) cases and an estimated completeness of 55.9%. For the bootstrap samples, the Chapman and Chao lower bound estimators gave an estimated 347 (95% CI: 309-385) and 353 (95% CI: 315-390) pulmonary tuberculosis cases, respectively. If recording systems are available for two sources, record-linkage and capture-recapture analysis can be useful for estimating the completeness of the different registration systems. Both the Chapman and Chao lower bound approaches produce very close estimates.
Keywords: capture-recapture, Chao, Chapman, pulmonary tuberculosis
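The two-source estimators are simple enough to state in a few lines. Below is a sketch of the Chapman estimator with its usual variance-based 95% CI; the overlap count m between the two sources is not given in the abstract, so the value used here is a hypothetical one chosen to roughly reproduce the reported total.

```python
import numpy as np

def chapman(n1, n2, m):
    """Chapman's nearly unbiased two-source estimator with a normal 95% CI."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = 1.96 * np.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# n1, n2 from the abstract; m = 204 is a hypothetical overlap
n_hat, ci = chapman(n1=241, n2=305, m=204)
print(round(n_hat), tuple(round(c) for c in ci))
# completeness of each single source, one possible definition:
print(round(241 / n_hat, 3), round(305 / n_hat, 3))
```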
Procedia PDF Downloads 516
3265 Evaluation of Sensor Pattern Noise Estimators for Source Camera Identification
Authors: Benjamin Anderson-Sackaney, Amr Abdel-Dayem
Abstract:
This paper presents a comprehensive survey of recent source camera identification (SCI) systems. The performance of various sensor pattern noise (SPN) estimators was then experimentally assessed under common photo response non-uniformity (PRNU) frameworks. The experiments used 1350 natural and 900 flat-field images, captured by 18 individual cameras. Twelve experiments, grouped into three sets, were conducted, and the results were analyzed using receiver operator characteristic (ROC) curves. The experimental results demonstrate that combining the basic SPN estimator with a wavelet-based filtering scheme provides promising results. However, the phase SPN estimator fits better with both patch-based (BM3D) and anisotropic diffusion (AD) filtering schemes.
Keywords: sensor pattern noise, source camera identification, photo response non-uniformity, anisotropic diffusion, peak to correlation energy ratio
Procedia PDF Downloads 441
3264 A Learning-Based EM Mixture Regression Algorithm
Authors: Yi-Cheng Tian, Miin-Shen Yang
Abstract:
The mixture likelihood approach is a popular clustering method, and the expectation-maximization (EM) algorithm is its most widely used implementation. In the literature, the EM algorithm has been used for mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values and require the number of clusters to be specified a priori. In this paper, to resolve these drawbacks, we construct a learning-based scheme for the EM mixture regression algorithm such that it is free of initialization and can automatically obtain an approximately optimal number of clusters. Numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm.
Keywords: clustering, EM algorithm, Gaussian mixture model, mixture regression model
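For orientation, here is the classical fixed-k EM for a two-component Gaussian mixture of regressions — the baseline whose initialization and cluster-number sensitivity the paper's learning-based scheme is designed to remove:

```python
import numpy as np
from scipy.stats import norm

def em_mixture_regression(x, y, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    resp = rng.dirichlet(np.ones(k), size=n)           # random initial responsibilities
    for _ in range(n_iter):
        betas, sigmas, pis = [], [], []
        for j in range(k):                             # M-step: weighted least squares
            w = resp[:, j]
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            r = y - X @ beta
            betas.append(beta)
            sigmas.append(np.sqrt((w * r**2).sum() / w.sum()))
            pis.append(w.mean())
        dens = np.column_stack([pis[j] * norm.pdf(y, X @ betas[j], sigmas[j])
                                for j in range(k)])    # E-step: responsibilities
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, sigmas, pis

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 300)
z = rng.random(300) < 0.5                              # latent component labels
y = np.where(z, 1 + 2 * x, 8 - x) + rng.normal(0, 0.5, 300)
betas, _, _ = em_mixture_regression(x, y)
print(betas)                                           # lines near (1, 2) and (8, -1)
```

Changing `seed` occasionally flips the solution to a poor local optimum, which illustrates the initialization sensitivity the abstract targets.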
Procedia PDF Downloads 510
3263 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling
Authors: Muhammad Nouman Qureshi, Muhammad Hanif
Abstract:
Adaptive cluster sampling (ACS) was developed specifically for the estimation of highly clumped populations and has been applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, HIV patients, and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived, and theoretical comparisons are made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.
Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation
Procedia PDF Downloads 238
3262 Multicollinearity and MRA in Sustainability: Application of the Raise Regression
Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez
Abstract:
Much economic-environmental research includes the analysis of possible interactions by using moderated regression analysis (MRA), a specific application of multiple linear regression in which a cross-product term between two independent variables is added as an additional explanatory variable, allowing one to analyze how the effect of one independent variable is moderated by the other. By the very specification of the methodology, the moderating factor is often highly correlated with its constitutive terms, so severe multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: the variances of the estimators may be inflated; regressors that are probably significant tend to appear non-significant, even alongside a very high coefficient of determination; coefficients may show incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Carried over to moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that yields reliable results. A review of the works applying MRA in the ten top journals of the field makes clear that multicollinearity is mostly disregarded: fewer than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the application of recent methodologies to MRA, particularly raise regression. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables the problem can be mitigated. Raise regression keeps the available information and modifies the problematic variables instead of, for example, deleting them. Furthermore, the global characteristics of the initial model are maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test, and prediction). The proposal is applied to data from European Union countries for the most recent year available, covering greenhouse gas emissions, per capita GDP, and a dummy variable representing the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called 'subgroup regression analysis.' The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, the use of raise regression mitigates severe multicollinearity problems, so the researcher can rely on the interaction term when interpreting the results of a particular study.
Keywords: multicollinearity, MRA, interaction, raise
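A rough numeric sketch of the geometric idea behind raising (our own illustrative code, not the authors' implementation): the problematic regressor is stretched along its residual component, the part orthogonal to the other regressor, which lowers their correlation while keeping the information the regressor carries.

```python
import numpy as np

def raise_regressor(x1, x2, lam):
    """Raise x2 away from x1: x2 + lam * e, where e is the residual of x2 on x1."""
    X1 = np.column_stack([np.ones_like(x1), x1])
    e = x2 - X1 @ np.linalg.lstsq(X1, x2, rcond=None)[0]
    return x2 + lam * e

rng = np.random.default_rng(6)
x1 = rng.normal(size=200)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=200)            # strongly collinear pair
for lam in (0.0, 1.0, 3.0):
    r = np.corrcoef(x1, raise_regressor(x1, x2, lam))[0, 1]
    print(f"lambda={lam}: corr(x1, raised x2) = {r:.3f}")
```

In an MRA setting the same transformation would be applied to the constitutive term (or the interaction) before refitting, so that the interaction coefficient can be interpreted without the inflated variances described above.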
Procedia PDF Downloads 104
3261 Prediction of Energy Storage Areas for Static Photovoltaic System Using Irradiation and Regression Modelling
Authors: Kisan Sarda, Bhavika Shingote
Abstract:
This paper evaluates regression modelling for predicting the energy storage of a solar photovoltaic (PV) system using semiparametric regression techniques, since some parameters are known while others, such as humidity and dust, are unknown. Solar irradiation differs from place to place depending on latitude, so by finding the areas that yield more storage, PV systems can be implemented at those places and the need for energy fulfilled. The regression modelling is done for daily, monthly, and seasonal prediction of solar energy storage. R modules were used to design the algorithm, which gives better comparative results than other regression models for solar PV cell energy storage.
Keywords: semiparametric regression, photovoltaic (PV) system, regression modelling, irradiation
Procedia PDF Downloads 381
3260 Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
WiMAX has adopted adaptive modulation and coding (AMC) in OFDM to support higher data rates and error-free transmission. AMC schemes employ channel state information (CSI) to utilize the channel efficiently, maximize throughput, and improve spectral efficiency. The CSI is provided to the transmitter by channel estimators. In this paper, LSE (least square error) and MMSE (minimum mean square error) estimators are proposed, and BER (bit error rate) performance is analyzed. Channel equalization is also integrated with the AMC-OFDM system and presented with the constant modulus algorithm (CMA) and least mean square (LMS) algorithms, together with a convergence rate analysis. Simulation results show that increasing the modulation order improves throughput at the cost of BER; there is a trade-off among modulation order, throughput, BER, and spectral efficiency. The results also confirm the need for channel estimation and equalization in high-data-rate systems.
Keywords: AMC, CSI, CMA, OFDM, OFDMA, WiMAX
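As a small illustration of the equalization step, here is a plain LMS equalizer trained on a known symbol sequence through a hypothetical three-tap multipath channel; the tap count, step size, and channel are illustrative assumptions, and delay handling is simplified.

```python
import numpy as np

def lms_equalizer(x, d, n_taps=11, mu=0.01):
    """Adapt FIR equalizer weights w so that w . x(i) tracks the training symbol d[i]."""
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        xv = x[i - n_taps + 1:i + 1][::-1]   # most recent received sample first
        e = d[i] - w @ xv                    # training error
        w += mu * e * xv                     # LMS weight update
    return w

rng = np.random.default_rng(7)
sym = rng.choice([-1.0, 1.0], size=5000)                       # BPSK training sequence
rx = np.convolve(sym, [1.0, 0.4, 0.2])[:len(sym)]              # multipath channel
rx += 0.05 * rng.normal(size=len(rx))                          # additive noise
w = lms_equalizer(rx, sym)
print(np.round(w[:4], 3))                                      # approximate channel inverse
```

CMA would replace the training error with a constant-modulus cost, trading pilot overhead for slower convergence, which is the convergence-rate comparison the abstract mentions.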
Procedia PDF Downloads 393
3259 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
Piecewise linear regression models are very flexible for modeling data. When a piecewise linear regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for piecewise linear regression models using the Bayesian method. The Bayes estimator cannot be found analytically; to overcome this problem, a reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the parameters of the piecewise linear regression model, and the resulting chain is used to calculate the Bayes estimator of those parameters.
Keywords: regression, piecewise, Bayesian, reversible jump MCMC
Procedia PDF Downloads 521
3258 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression model and the Cox regression model (proportional hazards model) are currently employed in the analysis of prospective epidemiologic research on risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model time-to-event data in which censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables are numerical or nominal and the response variable is binary (dichotomous). Many researchers have given overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, namely SPSS exercise data on breast cancer with a sample size of 1121 women, where the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (e.g., on lymph node status), and SPSS software was used to analyze the remaining data. This study found that there is a difference in application between the two models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic model. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data in many other studies, but the Cox regression model is the more recommended of the two.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
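A compact sketch of the application difference on synthetic data, using statsmodels: the Cox model consumes (time, status) pairs, while the logistic model collapses follow-up into a binary outcome. The data-generating values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
x = rng.normal(size=(n, 2))
rate = np.exp(0.7 * x[:, 0] - 0.4 * x[:, 1])       # exponential PH hazards
time = rng.exponential(1 / rate)
cens = rng.exponential(2.0, size=n)                # independent censoring
status = (time <= cens).astype(int)
obs = np.minimum(time, cens)

cox = sm.PHReg(obs, x, status=status).fit()
print(cox.params)                                  # log hazard ratios, uses follow-up time

y = ((obs <= 1.0) & (status == 1)).astype(int)     # crude dichotomization: event by t=1
logit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(logit.params[1:])                            # log odds ratios, ignores timing
```

The two coefficient vectors estimate different quantities (a rate versus a proportion), which is exactly the distinction drawn in the conclusion above.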
Procedia PDF Downloads 455
3257 Stock Market Prediction by Regression Model with Social Moods
Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome
Abstract:
This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts the Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
Keywords: stock market prediction, social moods, regression model, DJIA
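A minimal sketch of a regression with AR(1) errors on synthetic stand-ins for the mood inputs, using statsmodels' GLSAR (the paper's exact error order and mood construction are not specified here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
T = 300
mood = rng.normal(size=(T, 3))                 # hypothetical daily mood scores
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.6 * e[t - 1] + rng.normal()       # AR(1) disturbances
djia = 0.3 * mood[:, 0] - 0.2 * mood[:, 2] + e # hypothetical index series

model = sm.GLSAR(djia, sm.add_constant(mood), rho=1)
res = model.iterative_fit(maxiter=10)          # alternate OLS and rho re-estimation
print(res.params, model.rho)
```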
Procedia PDF Downloads 548
3256 Model-Based Software Regression Test Suite Reduction
Authors: Shiwei Deng, Yang Bao
Abstract:
In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and the probability-driven greedy algorithm is then adopted to select the minimum set of test cases from the reduced suite that covers all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
Keywords: dependence analysis, EFSM model, greedy algorithm, regression test
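The probability-driven greedy selection is not spelled out in the abstract; below is a plain greedy set-cover sketch of the final selection step, which a probability weighting would refine. Test names and patterns are hypothetical.

```python
def reduce_suite(coverage):
    """Greedily pick the test covering the most still-uncovered interaction patterns."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break                              # remaining patterns are uncoverable
        selected.append(best)
        uncovered -= coverage[best]
    return selected

suite = {                                      # test case -> interaction patterns hit
    "t1": {"p1", "p2"},
    "t2": {"p2", "p3", "p4"},
    "t3": {"p1"},
    "t4": {"p4", "p5"},
}
print(reduce_suite(suite))                     # e.g. ['t2', 't1', 't4']
```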
Procedia PDF Downloads 427
3255 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for piecewise polynomial regression models using the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically, so a reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters, and the resulting chain is used to calculate the Bayes estimator for those parameters.
Keywords: piecewise regression, Bayesian, reversible jump MCMC, segmentation
Procedia PDF Downloads 373
3254 A Fuzzy Linear Regression Model Based on Dissemblance Index
Authors: Shih-Pin Chen, Shih-Syuan You
Abstract:
Fuzzy regression models are useful for investigating the relationship between explanatory variables and responses in fuzzy environments. To overcome the deficiencies of previous models and increase the explanatory power of fuzzy data, the graded mean integration (GMI) representation is applied to determine representative crisp regression coefficients. A fuzzy regression model is then constructed based on the modified dissemblance index (MDI), which can precisely measure the actual total error. In comparisons based on the proposed MDI and a distance criterion, results from commonly used test examples show that the proposed fuzzy linear regression model has higher explanatory power and forecasting accuracy than previous studies.
Keywords: dissemblance index, fuzzy linear regression, graded mean integration, mathematical programming
Procedia PDF Downloads 439
3253 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model
Authors: Nureni O. Adeboye, Dawud A. Agunbiade
Abstract:
This research investigates the effects of heteroscedasticity and periodicity in a panel data regression model (PDRM) by extending previous works on balanced panel data estimation, in the context of fitting a PDRM for bank audit fees. The estimation of this model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under the specified conditions, and the best-fitting model is that of the within estimator when heteroscedasticity is severe, at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
Keywords: audit fee, Lagrange multiplier test, heteroscedasticity, Monte Carlo scheme, periodicity
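The paper's joint and conditional LM tests are panel-specific, but the flavor of an LM test for heteroscedasticity is easy to show with the classical cross-sectional Breusch-Pagan statistic (a simpler stand-in, not the authors' tests):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(10)
n = 500
x = rng.uniform(1, 5, size=n)
y = 2 + 0.5 * x + rng.normal(scale=0.3 * x)    # error variance grows with x
X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

lm_stat, lm_pval, _, _ = het_breuschpagan(res.resid, X)
print(lm_stat, lm_pval)                        # small p-value flags heteroscedasticity
```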
Procedia PDF Downloads 141
3252 The Theory behind Logistic Regression
Authors: Jan Henrik Wosnitza
Abstract:
Logistic regression has developed into a standard approach for estimating conditional probabilities in a wide range of applications, including credit risk prediction. The article at hand contributes to the current literature on logistic regression in four ways. First, it is demonstrated that the binary logistic regression automatically meets its model assumptions under very general conditions; this result explains, at least in part, the popularity of logistic regression. Second, the requirement of homoscedasticity in the context of binary logistic regression is theoretically substantiated: the variances among the groups of defaulted and non-defaulted obligors have to be the same across the levels of the aggregated default indicators in order to achieve linear logits. Third, the article sheds some light on the question of why nonlinear logits might be superior to linear logits in the case of a small amount of data. Fourth, an innovative methodology for estimating correlations between obligor-specific log-odds is proposed. In order to crystallize the key ideas, this paper focuses on the example of credit risk prediction; however, the results can easily be transferred to any other field of application.
Keywords: correlation, credit risk estimation, default correlation, homoscedasticity, logistic regression, nonlinear logistic regression
Procedia PDF Downloads 426
3251 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution
Authors: Md. Rashidul Hasan, Atikur Rahman Baizid
Abstract:
The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with each other, as well as with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to gather prior information (a distribution) about the problem in order to solve it accurately. Here the gamma distribution is used as the prior for the exponential parameter when finding the Bayes estimator. In our study, we used different symmetric and asymmetric loss functions, namely the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square errors (MSE) of the estimators are obtained and presented graphically.
Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function
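For the exponential rate θ with a Gamma(a, b) prior, the posterior is Gamma(a + n, b + Σx), so several Bayes rules are closed-form. Here is a sketch comparing the MLE with the squared-error and quadratic-loss Bayes estimators by simulated MSE (the MLINEX/NLINEX rules are omitted; prior values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
theta, n, a, b, reps = 2.0, 20, 2.0, 1.0, 10000
est = {"mle": [], "sq_error": [], "quadratic": []}
for _ in range(reps):
    x = rng.exponential(1 / theta, size=n)
    s = x.sum()
    est["mle"].append(n / s)                        # maximum likelihood
    est["sq_error"].append((a + n) / (b + s))       # posterior mean
    est["quadratic"].append((a + n - 2) / (b + s))  # Bayes rule under quadratic loss

for name, vals in est.items():
    mse = ((np.asarray(vals) - theta) ** 2).mean()
    print(f"{name}: MSE = {mse:.4f}")
```

The quadratic-loss rule follows from minimizing E[((d − θ)/θ)²], whose solution E[θ⁻¹]/E[θ⁻²] reduces to (a + n − 2)/(b + Σx) for a gamma posterior.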
Procedia PDF Downloads 384
3250 Establishment of the Regression Uncertainty of the Critical Heat Flux Power Correlation for an Advanced Fuel Bundle
Authors: L. Q. Yuan, J. Yang, A. Siddiqui
Abstract:
A new regression uncertainty analysis methodology was applied to determine the uncertainties of the critical heat flux (CHF) power correlation for an advanced 43-element bundle design, which was developed by Canadian Nuclear Laboratories (CNL) to achieve improved economics, resource utilization and energy sustainability. The new methodology is considered more appropriate than the traditional methodology in the assessment of the experimental uncertainty associated with regressions. The methodology was first assessed using both the Monte Carlo Method (MCM) and the Taylor Series Method (TSM) for a simple linear regression model, and then extended successfully to a non-linear CHF power regression model (CHF power as a function of inlet temperature, outlet pressure and mass flow rate). The regression uncertainty assessed by MCM agrees well with that by TSM. An equation to evaluate the CHF power regression uncertainty was developed and expressed as a function of independent variables that determine the CHF power.
Keywords: CHF experiment, CHF correlation, regression uncertainty, Monte Carlo Method, Taylor Series Method
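In the linear-regression setting the two approaches are easy to contrast: the Taylor Series Method propagates the estimated parameter covariance analytically, while the Monte Carlo Method samples parameter vectors and propagates them numerically. A minimal sketch on synthetic data (not the CHF correlation itself):

```python
import numpy as np

rng = np.random.default_rng(12)
n = 40
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.8 * x + rng.normal(0, 1.2, size=n)
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
cov = (resid @ resid / (n - 2)) * np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 5.0])                        # prediction point
se_tsm = np.sqrt(x0 @ cov @ x0)                  # Taylor-series (delta-method) SE
draws = rng.multivariate_normal(beta, cov, size=20000) @ x0
print(se_tsm, draws.std())                       # MCM spread matches TSM here
```

For a model that is linear in its parameters the two agree, consistent with the abstract's report that MCM and TSM assessments coincided before the method was extended to the non-linear CHF power regression.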
Procedia PDF Downloads 416
3249 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size
Authors: Jude Opara, Esemokumo Perewarebo Akpos
Abstract:
This paper compares nonparametric linear regression with its parametric counterparts at a large sample size. A data set of anthropometric measurements of primary school pupils was used for the analysis, with 50 randomly selected pupils in the study. The data were subjected to a normality test, and the Anderson-Darling technique showed that the residuals of the commonly used least squares regression method for fitting an equation to a set of (x, y) data points are not normally distributed (i.e., they do not follow a Gaussian distribution). The algorithms for the nonparametric Theil regression are stated in this paper, along with its parametric OLS counterpart, and the R programming language was used for the computations. The analysis showed a significant relationship between the response and the explanatory variable for both the parametric and nonparametric regressions. To compare the efficiency of the two methods, the Akaike information criterion (AIC) and Bayesian information criterion (BIC) were used, and the nonparametric regression performs better than its parametric counterpart, as shown by its lower AIC and BIC values. The study recommends that future researchers examine the presence of outliers in the data set, expunge them if detected, and re-analyze to compare results.
Keywords: Theil's regression, Bayesian information criterion, Akaike information criterion, OLS
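SciPy ships the Theil-Sen estimator directly, so the comparison is short to set up; here is a sketch with heavy-tailed errors (the anthropometric numbers are invented stand-ins):

```python
import numpy as np
from scipy.stats import linregress, theilslopes

rng = np.random.default_rng(13)
n = 50
x = rng.uniform(100, 160, size=n)                   # e.g. pupil heights (cm)
y = 0.6 * x - 40 + rng.standard_t(df=2, size=n)     # heavy-tailed, non-Gaussian errors

ols = linregress(x, y)
ts = theilslopes(y, x)                              # slope, intercept, CI bounds
print("OLS slope:", ols.slope, " Theil slope:", ts[0])

mad = lambda r: np.median(np.abs(r - np.median(r)))  # robust residual scale
print(mad(y - (ols.intercept + ols.slope * x)),
      mad(y - (ts[1] + ts[0] * x)))
```

Note that AIC/BIC comparisons of the kind the study reports need a likelihood choice that does not automatically favor OLS (which minimizes the residual sum of squares by construction); the robust residual scale above is one simple alternative check.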
Procedia PDF Downloads 305
3248 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on price movements. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theory is established for the proposed estimators mainly in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
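The realized range idea is compact in its simplest (Parkinson-type) form: intraday high-low ranges replace squared returns in the variance sum. A sketch on synthetic bars follows (the paper's threshold version, which additionally truncates jump intervals, is not implemented here):

```python
import numpy as np

def realized_range_var(high, low):
    """Parkinson-style realized range estimator of integrated variance."""
    r = np.log(high / low)
    return (r ** 2).sum() / (4.0 * np.log(2.0))

rng = np.random.default_rng(14)
mid = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 78)))   # 78 five-minute bins
high = mid * np.exp(np.abs(rng.normal(0, 0.001, 78)))
low = mid * np.exp(-np.abs(rng.normal(0, 0.001, 78)))
print(realized_range_var(high, low))
```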
Procedia PDF Downloads 155
3247 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model
Authors: Fatemah A. Alqallaf, Debasis Kundu
Abstract:
The aim of this paper is to introduce a bivariate inverse generalized exponential distribution that has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution that can be used for analyzing various bivariate data sets. Several dependency properties and dependency measures are obtained. The maximum likelihood estimators cannot be obtained in closed form, as this involves solving a four-dimensional optimization problem. To avoid that, we propose an EM algorithm that involves solving only one non-linear equation at each 'E'-step; hence, the implementation of the proposed EM algorithm is very straightforward in practice. Extensive simulation experiments and the analysis of one data set have been performed. We observe that the proposed bivariate inverse generalized exponential distribution can be used for modeling dependent competing risks data, and the analyzed data set shows the effectiveness of the proposed model.
Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators
Procedia PDF Downloads 143