Search results for: RP-based estimator
Paper Count: 189

129 Analysis of Path Nonparametric Truncated Spline Maximum Cubic Order in Farmers Loyalty Modeling

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

Path analysis tests the relationships between variables in terms of cause and effect. Before conducting further tests in path analysis, the assumption of linearity must be met. If the relationship is not linear and the shape of the curve is unknown, a nonparametric approach is used, one of which is the truncated spline. The purpose of this study is to estimate the function and obtain the best model on the nonparametric truncated-spline path of linear, quadratic, and cubic orders with 1 and 2 knot points, and to determine the significance of the best function estimator in modeling farmer loyalty through the jackknife resampling method. This study uses secondary data collected through questionnaires from 100 farmers in Sumbawa Regency who use SP-36 subsidized fertilizer products. Based on the results of the analysis, the best truncated-spline nonparametric path model is the quadratic order with 2 knots, with a coefficient of determination of 85.50%; the significance test of the best estimator shows that all exogenous variables have a significant effect on the endogenous variables.
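
As a rough illustration of the estimator class (not the authors' code), a minimal Python sketch of a quadratic truncated power basis with two knots, fitted by ordinary least squares; the data, knot locations, and variable names are all hypothetical:

```python
import numpy as np

def truncated_spline_basis(x, order, knots):
    """Truncated power basis: 1, x, ..., x^order, plus (x - k)_+^order per knot."""
    cols = [x**j for j in range(order + 1)]
    cols += [np.where(x > k, (x - k)**order, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(100)       # stand-in response

# quadratic order, two knots: the specification the study selects as best
B = truncated_spline_basis(x, order=2, knots=[3.3, 6.6])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)         # OLS spline coefficients
y_hat = B @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```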

Keywords: nonparametric path analysis, farmer loyalty, jackknife resampling, truncated spline

Procedia PDF Downloads 46
128 Home Range and Spatial Interaction Modelling of Black Bears

Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury

Abstract:

Interaction between individuals within the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interactions using the 95% and 50% home ranges, and to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or no interaction at all, to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that the black bears of one type in our data set tend to cluster around those of another type.
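
A minimal sketch of one ingredient, the minimum convex polygon home range at the 95% and 50% levels, assuming projected 2-D GPS fixes; the autocorrelated kernel density estimator used in the paper is considerably more involved:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_home_range(locs, level=0.95):
    """Minimum convex polygon: keep the `level` fraction of fixes closest
    to the centroid, then take their convex hull."""
    center = locs.mean(axis=0)
    d = np.linalg.norm(locs - center, axis=1)
    keep = locs[d <= np.quantile(d, level)]
    hull = ConvexHull(keep)
    return keep[hull.vertices], hull.volume   # in 2-D, .volume is the area

rng = np.random.default_rng(1)
bear_a = rng.normal([0.0, 0.0], 1.5, size=(200, 2))  # hypothetical GPS fixes
poly95, area95 = mcp_home_range(bear_a, 0.95)        # full home range
poly50, area50 = mcp_home_range(bear_a, 0.50)        # core area
```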

Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests

Procedia PDF Downloads 81
127 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model

Authors: David A. Padilla, Rodolfo Villamizar

Abstract:

In recent years, closed-loop control strategies have been developed to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. This paper thus presents the entire process of designing a carbohydrate estimator that provides the disturbance estimate for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and to improve the response of the controller by providing information on the ingested carbohydrates. The dynamics of the diabetic model follow the equations of the UVA/Padova model in the T1DMS simulator; the system was developed and simulated in Simulink, taking into account the noise and the limitations of the glucose control system actuators.
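
The abstract does not give the estimator equations, so as a hedged illustration only, here is a generic augmented-state Kalman filter that treats the unknown carbohydrate appearance rate as a random-walk disturbance; every matrix below is a made-up stand-in, not the UVA/Padova dynamics:

```python
import numpy as np

# Hypothetical 2-state linear glucose model x = [glucose; insulin effect],
# augmented with the unknown carbohydrate appearance rate d as a random walk.
A = np.array([[0.98, -0.50],
              [0.00,  0.90]])
E = np.array([[0.05], [0.00]])                  # how d enters the glucose state
Aa = np.block([[A, E],
               [np.zeros((1, 2)), np.eye(1)]])  # augmented transition matrix
C = np.array([[1.0, 0.0, 0.0]])                 # CGM measures glucose only
Q = np.diag([1e-3, 1e-3, 1e-1])                 # large Q on d lets it move fast
R = np.array([[2.0]])

def kf_step(x, P, y):
    x = Aa @ x;  P = Aa @ P @ Aa.T + Q                      # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)            # Kalman gain
    x = x + K @ (y - C @ x);  P = (np.eye(3) - K @ C) @ P   # update
    return x, P        # x[2] is the estimated carbohydrate disturbance
```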

Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova

Procedia PDF Downloads 261
126 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as Loess is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included for cases where the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
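
A minimal sketch of the backward elimination idea, using linear OLS AIC as a stand-in for the additive or fully nonparametric AIC computed in the paper:

```python
import numpy as np
import statsmodels.api as sm

def backward_eliminate_aic(X, y):
    """Greedy backward elimination on AIC; X is an (n, p) array, y is (n,)."""
    active = list(range(X.shape[1]))
    while len(active) > 1:
        base = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
        drop_aic = {j: sm.OLS(y, sm.add_constant(
                        X[:, [k for k in active if k != j]])).fit().aic
                    for j in active}
        j_best = min(drop_aic, key=drop_aic.get)
        if drop_aic[j_best] >= base:
            break                      # no single removal improves AIC
        active.remove(j_best)          # dropping j_best lowers AIC
    return active                      # indices of selected covariates
```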

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria

Procedia PDF Downloads 264
125 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest-neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
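
A minimal sketch of the shrinkage (regularization) step mentioned above, assuming the common convex combination with the scaled-identity target; the mixing weight alpha is a hypothetical tuning constant:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Regularized covariance (1 - alpha) * S + alpha * (tr(S) / d) * I,
    a common remedy when the ML estimator is ill-conditioned for small n."""
    S = np.cov(X, rowvar=False)
    d = S.shape[0]
    return (1 - alpha) * S + alpha * (np.trace(S) / d) * np.eye(d)
```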

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 344
124 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the likelihood ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set is made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR were presented so as not to produce an incorrect estimate that could lead to a wrong judgment in a court of law. The LR is fundamentally a Bayesian concept, and we used two LR estimators, namely logistic regression (LoR) and the kernel density estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
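
A hedged sketch of the two score-based LR estimators named above, on simulated similarity scores (all distributions hypothetical); the logistic-regression LR divides the posterior odds by the training prior odds:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
s_same = rng.normal(2.0, 1.0, 300)   # hypothetical scores, same writer (H_p)
s_diff = rng.normal(0.0, 1.0, 300)   # scores from different writers (H_d)

# KDE-based LR: ratio of the score densities under the two hypotheses
f_p, f_d = gaussian_kde(s_same), gaussian_kde(s_diff)
def lr_kde(s):
    return f_p(s) / f_d(s)

# Logistic-regression-based LR: posterior odds divided by prior odds
X = np.concatenate([s_same, s_diff]).reshape(-1, 1)
y = np.r_[np.ones(300), np.zeros(300)]
clf = LogisticRegression().fit(X, y)
def lr_lor(s):
    p = clf.predict_proba(np.atleast_2d(s).T)[:, 1]
    prior_odds = 300 / 300
    return (p / (1 - p)) / prior_odds
```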

Keywords: confidence interval, handwriting, kernel density estimator, KDE, logistic regression, LoR, repeatability, reproducibility

Procedia PDF Downloads 124
123 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
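
For reference, a minimal sketch of the baseline PC estimator that CnPC improves upon (the CnPC itself, per the abstract, amounts to applying PC to a regularized covariance matrix, which is not reproduced here):

```python
import numpy as np

def pc_factors(X, r):
    """Plain PC estimator of a factor model for an (T, N) panel X:
    the factors are sqrt(T) times the top-r eigenvectors of X X' / (N T)."""
    T, N = X.shape
    vals, vecs = np.linalg.eigh(X @ X.T / (N * T))
    F = np.sqrt(T) * vecs[:, -r:]      # estimated factors, (T, r)
    L = X.T @ F / T                    # estimated loadings, (N, r)
    return F, L
```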

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 150
122 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, by using a suitable sampling design, and information regarding both the study and the auxiliary variable is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the strength of the relationship between the study and auxiliary variables is high. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how one can use the data collected in the first phase at the stages of estimation of the parameter, stratification, selection of the sample, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of the estimation of variance is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If the estimator of the variance is negative, it cannot be used for the estimation of confidence intervals, the testing of hypotheses, or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
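
A minimal sketch of one classical use of first-phase data at the estimation stage, the double-sampling ratio estimator of the population mean, with simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.gamma(4.0, 2.0, 1000)            # phase 1: auxiliary variable only
idx = rng.choice(1000, 100, replace=False)
x2 = x1[idx]                              # phase 2 subsample: both variables
y2 = 3.0 * x2 + rng.normal(0, 2, 100)     # study variable, phase 2 only

# Double-sampling ratio estimator of the population mean of y: the cheap
# first-phase mean of x calibrates the small second-phase sample.
y_ratio = y2.mean() * x1.mean() / x2.mean()
```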

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 588
121 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers, and construction companies, as well as policy makers and households. A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock, and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to compare estimates between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
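
A hedged sketch of the static CCE idea, augmenting each unit's regression with cross-sectional averages to proxy the unobserved common factors; the paper's dynamic, first-differenced specification is richer than this:

```python
import numpy as np

def cce_unit(y_i, X_i, y_bar, X_bar):
    """CCE regression for unit i: regress y_i on its own regressors plus the
    cross-sectional averages of the dependent and independent variables
    (Pesaran-style factor proxies). Shapes: y_i (T,), X_i (T, k),
    y_bar (T,), X_bar (T, k)."""
    Z = np.column_stack([np.ones(len(y_i)), X_i, y_bar, X_bar])
    beta, *_ = np.linalg.lstsq(Z, y_i, rcond=None)
    return beta[1:1 + X_i.shape[1]]     # slopes on unit i's own regressors
```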

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 251
120 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest, and they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood, and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function, based on projection theory, and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) are similar and comparable. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
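
For concreteness, the standard mixture cure formulation implied by the abstract, with logistic incidence and Cox PH latency (the notation below is ours, not the paper's):

```latex
% Population survival = cured mass + uncured fraction times latency survival
S_{\mathrm{pop}}(t \mid \mathbf{x}, \mathbf{z})
  = 1 - \pi(\mathbf{z}) + \pi(\mathbf{z})\, S_u(t \mid \mathbf{x}),
\qquad
\pi(\mathbf{z}) = \frac{\exp(\mathbf{b}^{\top}\mathbf{z})}{1 + \exp(\mathbf{b}^{\top}\mathbf{z})},
\qquad
S_u(t \mid \mathbf{x}) = S_0(t)^{\exp(\boldsymbol{\beta}^{\top}\mathbf{x})}
```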

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 143
119 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to significant performance degradation. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of CFO estimation for cooperative OFDM systems without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without a CP over multipath fading channels. Specifically, using a block-type pilot, the CFO estimate is first derived according to the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
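
The paper's iterative LS estimator is not specified in the abstract; as a hedged baseline, a classical correlation-based CFO estimate from a repeated pilot block (Moose-type), with all signal parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64
pilot = np.exp(1j * 2 * np.pi * rng.random(N))   # block-type pilot, sent twice
eps = 0.08                                       # true normalized CFO
n = np.arange(2 * N)
r = np.tile(pilot, 2) * np.exp(1j * 2 * np.pi * eps * n / N)
r += 0.05 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# The second copy equals the first rotated by exp(j 2 pi eps), so the phase
# of the block correlation reveals the CFO (unambiguous for |eps| < 0.5).
eps_hat = np.angle(np.sum(r[N:] * np.conj(r[:N]))) / (2 * np.pi)
```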

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 265
118 Stereo Motion Tracking

Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of an object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find applications ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons, or other detention facilities, to low-cost applications in supermarkets and car parking lots.
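
For rectified, calibrated cameras the depth step reduces to the pinhole stereo relation Z = fB/d; a minimal sketch with hypothetical calibration values:

```python
def depth_from_disparity(x_left, x_right, f_px, baseline_m):
    """Pinhole stereo: Z = f * B / d, with disparity d = x_left - x_right
    (in pixels) between corresponding points in the two rectified views."""
    d = x_left - x_right
    return f_px * baseline_m / d          # depth in metres

# e.g. 700 px focal length, 12 cm baseline, 35 px disparity -> Z = 2.4 m
z = depth_from_disparity(400.0, 365.0, 700.0, 0.12)
```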

Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox

Procedia PDF Downloads 327
117 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform

Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier

Abstract:

Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has faded amplitude due to multipath effects. These effects make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in reducing the missed-detection probability as compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability for low signal-to-noise scenarios over principal component analysis-based energy detection.
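
A minimal sketch of the conventional energy detection step applied before (and after) equalization, assuming complex Gaussian noise of known variance:

```python
import numpy as np
from scipy.stats import chi2

def energy_detect(r, noise_var, pfa=0.05):
    """Classical energy detector: the band is declared occupied when the
    normalized energy exceeds a chi-square threshold chosen for a target
    false-alarm rate. For N complex noise samples, T ~ chi2 with 2N dof."""
    T = 2.0 * np.sum(np.abs(r) ** 2) / noise_var
    return T > chi2.ppf(1 - pfa, df=2 * len(r))
```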

Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing

Procedia PDF Downloads 196
116 Household's Willingness to Pay for Safe Non-Timber Forest Products at Morikouali-Ye Community Forest in Cameroon

Authors: Eke Balla Sophie Michelle

Abstract:

Forests provide a wide range of environmental goods and services, among which biodiversity and consumption goods constitute public goods. Despite the importance of non-timber forest products (NTFPs) in sustaining livelihoods and smoothing poverty in rural communities, they are highly depleted and poorly conserved. Yokadouma is a town where NTFPs are a renewable resource under active exploitation. It has been found that such exploitation is carried out under the same conditions as in other localities that have experienced rapid depletion of their NTFPs destined for cities across Cameroon, Central Africa, and overseas. Given these realities, it is necessary to assess the consequences of this overexploitation through its negative effects on both the population and the environment. Therefore, to enhance participatory conservation initiatives, this study determines households' willingness to pay for sustainable exploitation of NTFPs in the community forest (CF) of Morikouali-ye, eastern region of Cameroon, using the contingent valuation method (CVM) through two approaches, one parametric (the logit model) and the other non-parametric (the Turnbull lower-bound estimator). The results indicate that five species are the most collected in the study area: Irvingia gabonensis, Ricinodendron heudelotii, Gnetum, jujube, and bark; their sale contributes significantly, accounting for 41% of total household income. The average willingness to pay from the logit model and the Turnbull estimator is 6845.2861 FCFA and 4940 FCFA, respectively, per household per year, with a social cost of degradation estimated at 3237820.3253 FCFA per year. The probability of paying increases with income, gender, the number of women in the household, age, and commercial NTFP activity, and decreases with knowledge of the concept of sustainable development.
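
A hedged sketch of the non-parametric side, the Turnbull lower-bound WTP for single-bounded dichotomous-choice responses; `np.maximum.accumulate` is used here as a crude stand-in for full pooling of adjacent violators:

```python
import numpy as np

def turnbull_lower_bound(bids, said_no):
    """Turnbull lower-bound mean WTP. bids: bid offered to each respondent;
    said_no: 1 if the respondent rejected the bid, else 0."""
    levels = np.unique(bids)
    F = np.array([said_no[bids == b].mean() for b in levels])  # P(WTP < bid)
    F = np.maximum.accumulate(F)      # monotonize the empirical CDF
    F = np.append(F, 1.0)             # all WTP lies below some large bid
    lower = np.append(0.0, levels)    # lower endpoint of each WTP interval
    # lower-bound mean: each interval's mass valued at its lower endpoint
    return np.sum(lower * np.diff(np.append(0.0, F)))
```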

Keywords: non timber forest product, contingent valuation method, willingness to pay, sustainable development

Procedia PDF Downloads 446
115 Trends of Seasonal and Annual Rainfall in the South-Central Climatic Zone of Bangladesh Using Mann-Kendall Trend Test

Authors: M. T. Islam, S. H. Shakif, R. Hasan, S. H. Kobi

Abstract:

Investigation of rainfall trends is crucial in view of climate change, food security, and the economy of a particular region. This research aims to study seasonal and annual precipitation trends and their abrupt changes over time in the south-central climatic zone of Bangladesh using monthly time series data for 50 years (1970-2019). A trend-free pre-whitening method has been employed to adjust for autocorrelation in the rainfall data. Trends in rainfall and their intensity have been assessed using the non-parametric Mann-Kendall test and the Theil-Sen estimator. Significant changes and fluctuation points in the data series have been detected using the sequential Mann-Kendall test at the 95% confidence limit. The study findings show that most of the rainfall stations in the study area have a decreasing precipitation pattern throughout all seasons. The maximum decline in rainfall intensity has been found at the Tangail station (-8.24 mm/year) during monsoon. The Madaripur and Chandpur stations have shown slight positive trends in post-monsoon rainfall. In terms of annual precipitation, a negative rainfall pattern has been identified at each station, with a maximum decrement of -14.48 mm/year at Chandpur. However, all the trends are statistically non-significant within the 95% confidence interval, and their monotonic association with time ranges from very weak to weak. From the sequential Mann-Kendall test, the change points for annual and seasonal downward precipitation trends occur mostly after the 1990s for the Dhaka and Barishal stations. For Chandpur, the fluctuation points arrive after the mid-1970s in most cases.
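
A minimal sketch of the two core statistics, the Mann-Kendall S (with its normal approximation) and the Theil-Sen slope, ignoring tie and autocorrelation corrections for brevity:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall S statistic, continuity-corrected Z, two-sided p-value,
    and the Theil-Sen slope (median of all pairwise slopes)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    slope = np.median([(x[j] - x[i]) / (j - i)
                       for i in range(n - 1) for j in range(i + 1, n)])
    return s, z, p, slope
```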

Keywords: trend analysis, Mann-Kendall test, Theil-Sen estimator, sequential Mann-Kendall test, rainfall trend

Procedia PDF Downloads 80
114 Parameters Estimation of Multidimensional Possibility Distributions

Authors: Sergey Sorokin, Irina Sorokina, Alexander Yazenin

Abstract:

We present a solution to the Maxmin u/E parameter estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that one can improve the results of well-known algorithms in the fuzzy model identification task by using Maxmin u/E parameter estimation.
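
A sketch of the geometric core, assuming Khachiyan's algorithm (a standard choice) is acceptable for the minimal enclosing ellipsoid; the paper's own construction may differ:

```python
import numpy as np

def min_enclosing_ellipsoid(P, tol=1e-4):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid
    {x : (x - c)' A (x - c) <= 1} of the rows of the (N, d) array P."""
    N, d = P.shape
    Q = np.vstack([P.T, np.ones(N)])          # lift points to d+1 dimensions
    u = np.ones(N) / N                        # weights over the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ji,jk,ki->i', Q, np.linalg.inv(X), Q)  # leverages
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step                      # push weight to the worst point
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                               # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c
```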

Keywords: possibility distribution, parameters estimation, Maxmin u\E estimator, fuzzy model identification

Procedia PDF Downloads 470
113 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
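
The abstract does not detail the estimator, so as a generic illustration only, the classic derivative-free error proxy obtained by comparing two Simpson levels:

```python
import numpy as np

def simpson_with_error(f, a, b):
    """Two-level Simpson estimate with the standard derivative-free error
    proxy |S2 - S1| / 15 from Richardson extrapolation."""
    m = 0.5 * (a + b)
    s1 = (b - a) / 6 * (f(a) + 4 * f(m) + f(b))
    lm, rm = 0.5 * (a + m), 0.5 * (m + b)
    s2 = (b - a) / 12 * (f(a) + 4 * f(lm) + 2 * f(m) + 4 * f(rm) + f(b))
    return s2, abs(s2 - s1) / 15.0            # value, error estimate

val, err = simpson_with_error(np.sin, 0.0, np.pi)   # exact integral is 2
```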

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 463
112 Moment Estimators of the Parameters of Zero-One Inflated Negative Binomial Distribution

Authors: Rafid Saeed Abdulrazak Alshkaki

Abstract:

In this paper, the zero-one inflated negative binomial distribution is considered, along with some of its structural properties; its parameters are then estimated using the method of moments. It is found that the method of moments is not a proper method for estimating the parameters of zero-one inflated negative binomial models and may give incorrect conclusions.
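
For concreteness, the zero-one inflated negative binomial pmf in one common parameterization (the notation, with inflation masses pi_0 and pi_1, is ours):

```latex
P(X = x) =
\begin{cases}
\pi_0 + (1 - \pi_0 - \pi_1)\, f_{\mathrm{NB}}(0;\, r, p), & x = 0,\\[2pt]
\pi_1 + (1 - \pi_0 - \pi_1)\, f_{\mathrm{NB}}(1;\, r, p), & x = 1,\\[2pt]
(1 - \pi_0 - \pi_1)\, f_{\mathrm{NB}}(x;\, r, p), & x = 2, 3, \dots
\end{cases}
```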

Keywords: zero one inflated models, negative binomial distribution, moments estimator, non negative integer sampling

Procedia PDF Downloads 294
111 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal-scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values that discriminate subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from high variability, and since it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Moreover, since the true ROC curve is smooth, the jagged estimated ROC curve underestimates it. Because the true ROC curve is assumed to be smooth, several smoothing methods have been explored, including kernel estimates, log-concave densities, maximum-likelihood fitting of a specified univariate density to the data, and smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and for different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods across scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
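
A hedged sketch of kernel ROC smoothing with plain Gaussian-kernel CDF estimates and Silverman-type bandwidths; the paper's boundary-corrected kernel is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def kernel_roc(neg, pos, n_points=200):
    """Kernel-smoothed ROC: smooth the two group CDFs with Gaussian-kernel
    CDF estimates, then sweep the decision threshold."""
    def kcdf(sample, c, h):
        return norm.cdf((c[:, None] - sample[None, :]) / h).mean(axis=1)
    h_n = 1.06 * neg.std() * len(neg) ** -0.2    # Silverman-type bandwidths
    h_p = 1.06 * pos.std() * len(pos) ** -0.2
    c = np.linspace(min(neg.min(), pos.min()) - 1,
                    max(neg.max(), pos.max()) + 1, n_points)
    fpr = 1 - kcdf(neg, c, h_n)                  # false positive rate at c
    tpr = 1 - kcdf(pos, c, h_p)                  # true positive rate at c
    return fpr[::-1], tpr[::-1]                  # ordered by increasing FPR
```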

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 152
110 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different bootstrap methods, using the resampling technique in statistical inference to calculate the standard error of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and the Pareto model, giving the best approximations.
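
A minimal sketch of the two quantities discussed, the bootstrap standard error and a percentile confidence interval, on a Pareto-type sample:

```python
import numpy as np

def bootstrap_se_ci(sample, stat=np.median, B=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap: resample with replacement B times, return
    the standard error of the statistic and a percentile CI."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(B)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return reps.std(ddof=1), (lo, hi)

x = np.random.default_rng(5).pareto(3.0, 200) + 1.0   # Pareto-type sample
se, ci = bootstrap_se_ci(x)
```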

Keywords: bootstrap, error standard, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 380
109 Chaotic Behavior in Monetary Systems: Comparison among Different Types of Taylor Rule

Authors: Reza Moosavi Mohseni, Wenjun Zhang, Jiling Cao

Abstract:

The aim of the present study is to detect chaotic behavior in dynamical systems relevant to monetary economics. The study employs three different forms of the Taylor rule: current-, forward-, and backward-looking. The results suggest the existence of chaotic behavior in all three systems. In addition, the results strongly indicate that using expectations, especially the rational expectations hypothesis, can increase the complexity of the system and lead to more chaotic behavior.
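
As a self-contained illustration of the Lyapunov-exponent diagnostic (shown on the logistic map, not on the Taylor-rule systems of the paper):

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x), as
    the long-run average of log |f'(x)|; positive values indicate chaos."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n                   # ~ log(2) = 0.693 for r = 4
```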

Keywords: taylor rule, monetary system, chaos theory, lyapunov exponent, GMM estimator

Procedia PDF Downloads 528
108 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced superior data compared with older methods that relied on formulas for rough volumetric estimations. The most commonly used technique was semi-automated segmentation (12 studies), followed by manual segmentation (7 studies), automated segmentation (4 studies), and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open-source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method; however, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies. Medical professionals will save time and effort when analyzing large data sets thanks to these advances.
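
For reference, the Cavalieri point-counting estimator reduces to V = T (a/p) sum P_i; a one-function sketch with hypothetical slice counts:

```python
def cavalieri_volume(point_counts, slice_spacing_mm, area_per_point_mm2):
    """Cavalieri estimator: V = T * (a/p) * sum(P_i), where P_i is the number
    of grid points hitting the region on slice i, T is the slice spacing,
    and a/p is the grid area associated with each point."""
    return slice_spacing_mm * area_per_point_mm2 * sum(point_counts)

# e.g. 5 mm slices, 4 mm^2 per grid point, counts from consecutive MR slices
v = cavalieri_volume([12, 18, 22, 19, 9], 5.0, 4.0)   # volume in mm^3
```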

Keywords: chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 106
107 The Unscented Kalman Filter Implementation for the Sensorless Speed Control of a Permanent Magnet Synchronous Motor

Authors: Justas Dilys

Abstract:

This paper addresses the implementation and optimization of an Unscented Kalman Filter (UKF) for sensorless control of a Permanent Magnet Synchronous Motor (PMSM) using an ARM Cortex-M3 microcontroller. Various optimization levels based on the reduction of arithmetic calculations were implemented on the ARM Cortex-M3 microcontroller. The execution time of the UKF estimator was brought down to 90 µs without loss of accuracy. Moreover, simulation studies of the Unscented Kalman Filter were carried out in Matlab to explore the usability of the UKF in a sensorless PMSM drive.
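
A minimal sketch of one UKF building block, Merwe scaled sigma-point generation; the motor model equations and the tuning constants of the actual PMSM estimator are not reproduced:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Merwe scaled sigma points and weights for state dimension n. The
    2n+1 points are propagated through the nonlinear model in place of
    linearization."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.column_stack([x, x[:, None] + S, x[:, None] - S])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))   # mean weights
    wc = wm.copy()                              # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1 - alpha**2 + beta
    return pts, wm, wc
```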

Keywords: unscented kalman filter, ARM, PMSM, implementation

Procedia PDF Downloads 167
106 Evaluating the Probability of Foreign Tourists' Return to the City of Mashhad, Iran

Authors: Mohammad Rahim Rahnama, Amir Ali Kharazmi, Safiye Rokni

Abstract:

The tourism industry will be the most important unlimited, sustainable source of income after the oil and automotive industries by 2020, and not only countries but also cities are striving to grasp its various facets. In line with this objective, the present descriptive-analytical study, through a survey using a questionnaire, seeks to evaluate the probability of tourists' return and of their recommending travel to Mashhad, Iran, to their countrymen. The population under study is a sample of 384 foreign tourists who arrived in Mashhad, the second metropolis of Iran and its biggest religious city, in 2016. The Kaplan-Meier estimator was used to analyze the data. Twenty-six percent of the tourists are female and 74% are male. On average, each tourist has made 3.02 trips abroad and 2.1 trips to Mashhad. Tourists from 14 different countries arrived in Mashhad; Kuwait (15.9%), Armenia (15.6%), and Iraq (10.9%) were the countries where most tourists originated. Seventy-six percent of the tourists traveled with family, and 90% arrived in Mashhad by airplane. Major purposes of the tourists' trips include pilgrimage (27.9%) and treatment (22.1%), followed by pilgrimage and treatment combined (35.4%). Major issues for tourists, in order of priority, include the quality of goods and services (30.2%), shopping (18%), and inhabitants' treatment of foreigners (15.9%). Main tourist attractions, in addition to the Holy Shrine of Imam Reza, include Torqabeh and Shandiz (Torqabeh 40.9% and Shandiz 29.9%) and Neyshabour (18.2%), followed by Kalat (4.4%). The average willingness to return among tourists is 3.13, which is higher than the scale midpoint of 3, indicating satisfaction with the stay in Mashhad. Similarly, the average for tourists recommending that their countrymen visit Mashhad is 3.42, which also indicates tourists' satisfaction with their stay in Mashhad. According to the findings of the Kaplan-Meier estimator, an increase in the number of a tourist's trips to Mashhad, or in the number of foreign trips, reduces the probability of the tourist recommending a trip to Mashhad. Similarly, willingness to return is higher among those who stayed at a relative's home than among those using other forms of accommodation (hotels, self-catering accommodation, and pilgrim houses). Therefore, addressing the issues raised by tourists is essential for their return and their recommendation to others to travel to Mashhad.
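
For reference, a minimal sketch of the Kaplan-Meier product-limit estimator used for the analysis, on generic time/event arrays:

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit estimator: S(t) = prod over t_i <= t of (1 - d_i / n_i),
    with d_i events and n_i subjects still at risk at event time t_i."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    s, curve = 1.0, []
    for t in np.unique(time[event == 1]):
        d = ((time == t) & (event == 1)).sum()   # events at t
        n = (time >= t).sum()                    # at risk just before t
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve
```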

Keywords: international tourist, probability of return, satisfaction, Mashhad

Procedia PDF Downloads 170
105 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and replaced by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for plausibility in its new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only the independence part has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that on average minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 148
104 Convergence Analysis of Reactive Power Based Schemes Used in Sensorless Control of Induction Motors

Authors: N. Ben Si Ali, N. Benalia, N. Zerzouri

Abstract:

Many electronic drives for induction motor control are based on sensorless technologies. Speed and torque control is usually attained with a speed or position sensor, which requires additional mounting space, reduces reliability, and increases cost. This paper analyzes the dynamic performance and the sensitivity to motor parameter changes of the reactive-power-based technique used in sensorless control of induction motors. The validity of the theoretical results is verified by simulation.
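
A heavily hedged sketch of one adaptation step of a reactive-power MRAS speed estimator: the reactive-power convention, the model term, and the PI gains below are all assumptions (sign conventions vary across formulations), and this is not the specific scheme analyzed in the paper:

```python
def rp_mras_step(v_d, v_q, i_d, i_q, q_model, integ, kp=5.0, ki=50.0, dt=1e-4):
    """One step of a reactive-power MRAS: the reference reactive power comes
    from measured stator quantities, the adjustable one (q_model) from a
    model built with the current speed estimate; a PI law drives the
    mismatch to zero and yields the updated speed estimate."""
    q_ref = v_q * i_d - v_d * i_q    # instantaneous reactive power (one common convention)
    err = q_ref - q_model
    integ += ki * err * dt           # integral state of the adaptation law
    w_hat = kp * err + integ         # updated rotor speed estimate
    return w_hat, integ
```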

Keywords: adaptive observers, model reference adaptive system, RP-based estimator, sensorless control, stability analysis

Procedia PDF Downloads 546
103 Computational Tool for Surface Electromyography Analysis: An Easy Way for Non-Engineers

Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha

Abstract:

This paper presents a tool developed on the Matlab platform. It was developed to simplify the analysis of surface electromyography (S-EMG) signals in a way that is accessible to users who are not familiar with signal processing procedures. The tool receives data through commands entered in window fields and generates results as graphics and Excel tables. The underlying math of each S-EMG estimator is presented. The setup window and result graphics are shown. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a 5-minute instruction period.
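
A minimal sketch of the four estimators listed in the keywords (ARV, RMS, MNF, MDF), computed from a raw S-EMG segment via the periodogram; not the tool's Matlab code:

```python
import numpy as np

def semg_estimators(x, fs):
    """Common S-EMG amplitude and spectral estimators: average rectified
    value (ARV), root mean square (RMS), and the mean (MNF) and median
    (MDF) frequencies of the power spectrum."""
    arv = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    spec = np.abs(np.fft.rfft(x)) ** 2          # periodogram
    f = np.fft.rfftfreq(len(x), 1 / fs)
    mnf = np.sum(f * spec) / np.sum(spec)       # spectral centroid
    cum = np.cumsum(spec)
    mdf = f[np.searchsorted(cum, 0.5 * cum[-1])]  # splits power in half
    return arv, rms, mnf, mdf
```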

Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV

Procedia PDF Downloads 558
102 Hybrid SVM/DBN Model for Arabic Isolated Words Recognition

Authors: Elyes Zarrouk, Yassine Benayed, Faiez Gargouri

Abstract:

This paper presents a new hybrid model for isolated Arabic word recognition. To this end, we apply Support Vector Machines (SVM) as an estimator of posterior probabilities within Dynamic Bayesian Networks (DBN). The paper presents a comparative study between DBN and SVM/DBN systems for multi-dialect isolated Arabic words. Performance using SVM/DBN is found to exceed that of DBNs trained on an identical task, giving higher recognition accuracy for four different Arabic dialects. In fact, the average recognition rate over the four dialects was 87.67% with SVM/DBN, versus 83.01% with DBN.
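
A hedged sketch of the SVM-as-posterior-estimator step using Platt scaling (scikit-learn's probability=True); the data here are synthetic stand-ins for acoustic feature frames, not the paper's corpus:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# SVM decision scores turned into class posterior probabilities; in a hybrid
# SVM/DBN system these posteriors would replace GMM likelihoods in the DBN.
X, y = make_classification(n_samples=400, n_features=13, n_classes=4,
                           n_informative=8, random_state=0)
svm = SVC(kernel='rbf', probability=True).fit(X, y)
posteriors = svm.predict_proba(X[:5])   # one row per frame, one column per class
```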

Keywords: dynamic Bayesian networks, hybrid models, supports vectors machine, Arabic isolated words

Procedia PDF Downloads 560
101 Estimating the Population Mean by Using Stratified Double Extreme Ranked Set Sample

Authors: Mahmoud I. Syam, Kamarulzaman Ibrahim, Amer I. Al-Omari

Abstract:

The stratified double extreme ranked set sampling (SDERSS) method is introduced and considered for estimating the population mean. SDERSS is compared with simple random sampling (SRS), stratified ranked set sampling (SRSS), and stratified simple random sampling (SSRS). It is shown that the SDERSS estimator is unbiased for the population mean and more efficient than the estimators based on SRS, SRSS, and SSRS when the underlying distribution of the variable of interest is symmetric or asymmetric.
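
A hedged sketch of plain extreme ranked set sampling in one common scheme (alternating minima and maxima across ranked sets); the paper's stratified double version applies this logic twice within strata:

```python
import numpy as np

def erss_sample(pop, m, cycles, rng):
    """Extreme ranked set sampling: from each ranked set of m units keep
    only the minimum or the maximum, alternating across the m sets per
    cycle; unbiased for the mean under symmetric populations."""
    out = []
    for _ in range(cycles):
        for k in range(m):
            s = np.sort(rng.choice(pop, size=m, replace=False))
            out.append(s[0] if k % 2 == 0 else s[-1])
    return np.array(out)

rng = np.random.default_rng(6)
pop = rng.normal(10, 2, 10_000)                  # symmetric population
est = erss_sample(pop, m=4, cycles=25, rng=rng).mean()
```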

Keywords: double extreme ranked set sampling, extreme ranked set sampling, ranked set sampling, stratified double extreme ranked set sampling

Procedia PDF Downloads 456
100 Bayesian Reliability of Weibull Regression with Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

In the Bayesian framework, we developed an approach using a non-informative prior with covariates and applied the Gauss quadrature method to estimate the parameters of the covariates and the reliability function of the Weibull regression distribution with Type-I censored data. For maximum likelihood, the estimators are not available in closed form, although they can be obtained numerically by Newton-Raphson methods. The comparison criterion is the MSE, and the performance of the estimates is assessed via simulation considering various sample sizes and several specific values of the shape parameter. The results show that the Bayesian approach with a non-informative prior performs better than the maximum likelihood estimator.
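
A minimal sketch of the Type-I censored Weibull regression log-likelihood maximized numerically (Nelder-Mead stands in for the paper's Newton-Raphson, and the log-linear scale link is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_type1_mle(t, delta, x):
    """ML for Weibull regression with Type-I censoring: delta = 1 marks an
    observed failure, delta = 0 a censored time; the scale is log-linear
    in the covariate x."""
    def negloglik(par):
        k = np.exp(par[0])                     # shape, kept positive
        lam = np.exp(par[1] + par[2] * x)      # per-subject scale
        z = (t / lam) ** k
        # observed: log-density; censored: log-survival (the -z term)
        ll = delta * (np.log(k) + (k - 1) * np.log(t) - k * np.log(lam)) - z
        return -ll.sum()
    return minimize(negloglik, x0=np.zeros(3), method='Nelder-Mead').x
```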

Keywords: non-informative prior, Bayesian method, type-I censoring, Gauss quadrature

Procedia PDF Downloads 503