Search results for: Bayes estimators.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 119

89 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model. The Bayesian method is used to estimate the parameters of the piecewise polynomial regression model. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters as its limiting distribution. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.

Keywords: Piecewise, Bayesian, reversible jump MCMC, segmentation.
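
The last step described in the abstract, turning MCMC draws into a Bayes estimate, can be illustrated with a small sketch. The sampler below is a plain random-walk Metropolis on a toy one-parameter regression, not the authors' reversible jump algorithm, and the data are simulated; it only shows how the posterior mean of the retained chain serves as the Bayes estimator under squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)   # toy data, true slope = 2

def log_posterior(beta):
    # Gaussian likelihood with known noise sd 0.3 and a N(0, 10^2) prior on beta.
    log_lik = -0.5 * np.sum((y - beta * x) ** 2) / 0.3 ** 2
    log_prior = -0.5 * beta ** 2 / 10.0 ** 2
    return log_lik + log_prior

beta, chain = 0.0, []
for _ in range(20000):
    prop = beta + rng.normal(scale=0.2)            # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(beta):
        beta = prop                                # Metropolis accept step
    chain.append(beta)

# Bayes estimator under squared-error loss: the posterior mean, approximated
# by averaging the chain after a burn-in period.
bayes_estimate = np.mean(chain[5000:])
print(f"Bayes estimate of the slope: {bayes_estimate:.3f}")
```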

88 Bayesian Inference for Phase Unwrapping Using Conjugate Gradient Method in One and Two Dimensions

Authors: Yohei Saika, Hiroki Sakaematsu, Shota Akiyama

Abstract:

We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models which approximate wave-fronts in remote sensing using SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated by an assumed true prior, we found that the method of maximum entropy realized the optimal performance around the Bayes-optimal conditions by using the model of the true prior and the likelihood representing the optical measurement due to the interferometer. Also, we found that the MAP estimation, regarded as a deterministic limit of maximum entropy, achieved almost the same performance as the Bayes-optimal solution for the set of wave-fronts. Then, we clarified that the MAP estimation perfectly carried out phase unwrapping without using prior information, and also that the MAP estimation realized accurate phase unwrapping using the conjugate gradient (CG) method, if we assumed the model of the true prior appropriately.

Keywords: Bayesian inference using maximum entropy, MAP estimation using conjugate gradient method, SAR interferometry.

87 Predicting Application Layer DDoS Attacks Using Machine Learning Algorithms

Authors: S. Umarani, D. Sharmila

Abstract:

A Distributed Denial of Service (DDoS) attack is a major threat to cyber security. It originates from the network layer or the application layer of compromised/attacker systems which are connected to the network. The impact of this attack ranges from simple inconvenience in using a particular service to major failures at the targeted server. When there is heavy traffic flow to a target server, it is necessary to distinguish legitimate access from attacks. In this paper, a novel method is proposed to detect DDoS attacks from traces of traffic flow. An access matrix is created from the traces. As the access matrix is multidimensional, Principal Component Analysis (PCA) is used to reduce the attributes used for detection. Two classifiers, Naive Bayes and K-Nearest Neighbor, are used to classify the traffic as normal or abnormal. The performance of the classifiers with PCA-selected attributes and with the actual attributes of the access matrix is compared by the detection rate and False Positive Rate (FPR).

Keywords: Distributed Denial of Service (DDoS) attack, Application layer DDoS, DDoS detection, K-Nearest Neighbor classifier, Naive Bayes classifier, Principal Component Analysis.
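
A minimal sketch of the detection pipeline described above, using scikit-learn and a synthetic data set as a stand-in for the access matrix built from traffic traces; the feature counts, class balance, and number of principal components are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the access matrix: rows are traffic-flow records,
# the binary label marks attack (1) vs. legitimate (0) traffic.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Reduce the attribute space with PCA before classification.
pca = PCA(n_components=5).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr_p, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te_p)).ravel()
    detection_rate = tp / (tp + fn)       # attacks correctly flagged
    fpr = fp / (fp + tn)                  # legitimate traffic wrongly flagged
    print(f"{name}: detection rate {detection_rate:.3f}, FPR {fpr:.3f}")
```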

86 A Renovated Cook's Distance Based on the Buckley-James Estimate in Censored Regression

Authors: Nazrina Aziz, Dong Q. Wang

Abstract:

There have been various methods created based on regression ideas to resolve the problem of data sets containing censored observations, e.g., the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*_i, and has been developed based on Cook's idea. The renovated Cook's distance RD*_i has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the estimate of the coefficients when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i, in which information from the p variables can be considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.

Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.
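
For orientation, the sketch below computes the classical (uncensored) Cook's distance that RD*_i renovates; the Buckley-James adjustment for censored responses is not reproduced, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + one covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y[0] += 8.0                                                # plant an influential point

H = X @ np.linalg.inv(X.T @ X) @ X.T                       # hat matrix
resid = y - H @ y                                          # ordinary residuals
s2 = resid @ resid / (n - p)                               # residual variance estimate
h = np.diag(H)                                             # leverages

# Classical Cook's distance: influence of each case on all fitted values.
cooks_d = resid ** 2 / (p * s2) * h / (1 - h) ** 2
print("Most influential case:", np.argmax(cooks_d), "D =", round(cooks_d.max(), 3))
```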

85 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a Likelihood Ratio (LR) for quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, wherein a set of assumptions and methods for a given data set will be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR were presented so that an incorrect estimate is not used to deliver a wrong judgment in a court of law. The estimate of LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE), in this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that LR has different confidence intervals, which implies that LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested using the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of LR when used to compute the strength of evidence in forensic handwriting examination.

Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), Handwriting, Confidence Interval, Repeatability, Reproducibility.
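
A minimal sketch of the KDE-based likelihood-ratio estimator mentioned above, using simulated similarity scores in place of the handwriting data; the score distributions and the query score are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Similarity scores observed under the prosecution hypothesis (same writer)
# and the defence hypothesis (different writers); simulated here.
rng = np.random.default_rng(0)
same_writer_scores = rng.normal(loc=0.8, scale=0.10, size=300)
diff_writer_scores = rng.normal(loc=0.4, scale=0.15, size=300)

f_p = gaussian_kde(same_writer_scores)    # density under the prosecution hypothesis
f_d = gaussian_kde(diff_writer_scores)    # density under the defence hypothesis

# The LR for a questioned document is the ratio of the two densities
# evaluated at its similarity score.
query_score = 0.7
lr = f_p(query_score)[0] / f_d(query_score)[0]
print(f"Estimated LR at score {query_score}: {lr:.2f}")
```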

84 A Formal Approach for Proof Constructions in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this article we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' Formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this article shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.

Keywords: prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' Formula, Miller-Rabin primality test.
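
For reference, a short executable version of the Miller-Rabin primality test that the article verifies formally; this Python sketch only illustrates the underlying probabilistic algorithm and is unrelated to the Isabelle/HOL development.

```python
import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r, d = r + 1, d // 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness of compositeness
    return True                   # probably prime

print(miller_rabin(2**61 - 1))    # True: a Mersenne prime
print(miller_rabin(2**61 + 1))    # False: composite (divisible by 3)
```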

83 Computer Verification in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' Formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.

Keywords: prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' Formula, Miller-Rabin primality test.

82 Statistical Measures and Optimization Algorithms for Gene Selection in Lung and Ovarian Tumor

Authors: C. Gunavathi, K. Premalatha

Abstract:

Microarray technology is universally used in the study of disease diagnosis using gene expression levels. The main shortcoming of gene expression data is that it includes thousands of genes and a small number of samples. Abundant methods and techniques have been proposed for tumor classification using microarray gene expression data. Feature or gene selection methods can be used to mine the genes that are directly involved in the classification and to eliminate irrelevant genes. In this paper, statistical measures such as T-Statistics, Signal-to-Noise Ratio (SNR) and F-Statistics are used to rank the genes. The ranked genes are used for further classification. The Particle Swarm Optimization (PSO) algorithm and the Shuffled Frog Leaping (SFL) algorithm are used to find the significant genes from the top-m ranked genes. The Naïve Bayes Classifier (NBC) is used to classify the samples based on the significant genes. The proposed work is applied to lung and ovarian tumor datasets. The experimental results show that the proposed method achieves 100% accuracy on all three datasets, and the results are compared with previous works.

Keywords: Microarray, T-Statistics, Signal-to-Noise Ratio, F-Statistics, Particle Swarm Optimization, Shuffled Frog Leaping, Naïve Bayes Classifier.
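
A minimal sketch of the SNR-based gene ranking followed by a Naïve Bayes classifier, using a simulated expression matrix; the PSO/SFL search over the top-m ranked genes and the real lung and ovarian data are omitted.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 500
X = rng.normal(size=(n_samples, n_genes))          # simulated expression levels
y = rng.integers(0, 2, size=n_samples)             # tumour class labels
X[y == 1, :20] += 1.5                              # make the first 20 genes informative

# Signal-to-noise ratio per gene: difference of class means over sum of class sds.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
sd0, sd1 = X[y == 0].std(0), X[y == 1].std(0)
snr = np.abs(mu1 - mu0) / (sd1 + sd0)

top_m = np.argsort(snr)[::-1][:20]                 # keep the top-m ranked genes
acc = GaussianNB().fit(X[:, top_m], y).score(X[:, top_m], y)
print("Training accuracy on top-ranked genes:", round(acc, 3))
```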

81 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home

Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). We learn as our candidate models a few classifiers such as naïve Bayes, decision tree, and logistic regression that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transition and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.

Keywords: Situation-awareness, Smart home, IoT, Machine learning, Classifier.

80 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data of SEER (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.

Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.
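
A minimal sketch of the stacked-ensemble idea on synthetic data: base classifiers feed their confidence scores to a Naïve Bayes combiner. Logistic regression stands in here for the Bayesian-network base learner, and the SEER-specific feature-group selection is omitted.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the SEER records.
X, y = make_classification(n_samples=3000, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    final_estimator=GaussianNB(),          # combining classifier on top
    stack_method="predict_proba")          # feed confidence scores to the combiner
ensemble.fit(X_tr, y_tr)
print("Held-out accuracy:", round(ensemble.score(X_te, y_te), 3))
```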

79 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same variance are considered, where the variance is known. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to the bias and MSE risks by the method of Monte Carlo simulation, and their performances are analysed with the help of graphs.

Keywords: Estimation after selection, Brewster-Zidek technique.
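
A minimal sketch of the Monte Carlo comparison for the naive estimator (the selected sample mean); the improved estimators studied in the paper, such as Brewster-Zidek-type adjustments, are not reproduced, and the population means, variance, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5])          # true population means
sigma, n, reps = 1.0, 10, 50000    # known common sd, sample size, MC replications

errors = []
for _ in range(reps):
    xbar = rng.normal(mu, sigma / np.sqrt(n))     # sample means of both populations
    sel = np.argmin(xbar)                         # select the smaller sample mean
    errors.append(xbar[sel] - mu[sel])            # error in estimating the selected mean

errors = np.array(errors)
# Selection bias: the naive estimator underestimates the selected population's mean.
print("Bias:", round(errors.mean(), 4), " MSE:", round((errors ** 2).mean(), 4))
```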

78 Bayes Net Classifiers for Prediction of Renal Graft Status and Survival Period

Authors: Jiakai Li, Gursel Serpen, Steven Selman, Matt Franchetti, Mike Riesen, Cynthia Schneider

Abstract:

This paper presents the development of a Bayesian belief network classifier for the prediction of graft status and survival period in renal transplantation using the patient profile information prior to the transplantation. The objective was to explore the feasibility of developing a decision-making tool for identifying the most suitable recipient among the candidate pool members. The dataset was compiled from the University of Toledo Medical Center Hospital patients as reported to the United Network for Organ Sharing, and had 1228 patient records for the period covering 1987 through 2009. The Bayes net classifiers were developed using the Weka machine learning software workbench. Two separate classifiers were induced from the data set, one to predict the status of the graft as either failed or living, and a second classifier to predict the graft survival period. The classifier for graft status prediction performed very well with a prediction accuracy of 97.8% and true positive rates of 0.967 and 0.988 for the living and failed classes, respectively. The second classifier to predict the graft survival period yielded a prediction accuracy of 68.2% and a true positive rate of 0.85 for the class representing those instances with kidneys failing during the first year following transplantation. Simulation results indicated that it is feasible to develop a successful Bayesian belief network classifier for prediction of graft status, but not the graft survival period, using the information in the UNOS database.

Keywords: Bayesian network classifier, renal transplantation, graft survival period, United Network for Organ Sharing.

77 Likelihood Estimation for Stochastic Epidemics with Heterogeneous Mixing Populations

Authors: Yilun Shang

Abstract:

We consider a heterogeneously mixing SIR stochastic epidemic process in populations described by a general graph. Likelihood theory is developed to facilitate statistical inference for the parameters of the model under complete observation. We show that the resulting estimators are asymptotically unbiased and Gaussian by using a martingale central limit theorem.

Keywords: Statistical inference, maximum likelihood, epidemic model, heterogeneous mixing.

76 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade the performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. But Bayesian marginal models cannot model the inter-scale statistical dependencies of different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent in nature. However, wavelet coefficients have significant inter-scale dependency. This paper models this inter-scale dependency property by a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived by the Maximum a Posteriori (MAP) estimator. A framework based on these is proposed to denoise speech signals for automatic speaker identification problems. The robustness of the proposed framework is tested for the text-independent speaker identification application on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on the popular Gaussian Mixture Model (GMM) based speaker model and Mel-Frequency Cepstral Coefficient (MFCC) features.

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

75 Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations

Authors: Nur Zakiah Mohd Saat, Abdul Aziz Jemain

Abstract:

Saddlepoint approximation is one of the tools for obtaining expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation. We demonstrate the density of a maximum likelihood estimator in exponential families.

Keywords: Exponential, maximum likelihood estimators, observed gap, saddlepoint approximations.

74 Revealing Nonlinear Couplings between Oscillators from Time Series

Authors: B.P. Bezruchko, D.A. Smirnov

Abstract:

Quantitative characterization of nonlinear directional couplings between stochastic oscillators from data is considered. We suggest coupling characteristics readily interpreted from a physical viewpoint and their estimators. An expression for a statistical significance level is derived analytically that allows reliable coupling detection from a relatively short time series. Performance of the technique is demonstrated in numerical experiments.

Keywords: Nonlinear time series analysis, directional couplings, coupled oscillators.

73 MRAS Based Speed Sensorless Control of Induction Motor Drives

Authors: Nadia Bensiali, Nadia Benalia, Amar Omeiri

Abstract:

The recent trend in field oriented control (FOC) is towards the use of sensorless techniques that avoid the use of speed and flux sensors. Sensors are replaced by estimators or observers to minimise the cost and increase the reliability. In this paper, an analysis of the performance of an MRAS scheme used in the sensorless control of induction motors and its sensitivity to machine parameter changes are studied.

Keywords: Induction motor drive, adaptive observer, MRAS, stability analysis.

72 Moment Estimators of the Parameters of Zero-One Inflated Negative Binomial Distribution

Authors: Rafid Saeed Abdulrazak Alshkaki

Abstract:

In this paper, the zero-one inflated negative binomial distribution is considered, along with some of its structural properties; its parameters are then estimated using the method of moments. It is found that the method of moments is not a proper method for estimating the parameters of zero-one inflated negative binomial models and may give incorrect conclusions.

Keywords: Zero-one inflated models, negative binomial distribution, moments estimator, non-negative integer sampling.

71 On the Parameter of the Burr Type X under Bayesian Principles

Authors: T. N. Sindhu, M. Aslam

Abstract:

A comprehensive Bayesian analysis has been carried out in the context of informative and non-informative priors for the shape parameter of the Burr type X distribution under different symmetric and asymmetric loss functions. Elicitation of the hyperparameter through the prior predictive approach is also discussed. We also derive expressions for the posterior predictive distributions, predictive intervals and credible intervals. As an illustration, comparisons of these estimators are made through a simulation study.

Keywords: Credible Intervals, Loss Functions, Posterior Predictive Distributions, Predictive Intervals.

70 Clustered Signatures for Modeling and Recognizing 3D Rigid Objects

Authors: H. B. Darbandi, M. R. Ito, J. Little

Abstract:

This paper describes a probabilistic method for three-dimensional object recognition using a shared pool of surface signatures. This technique uses flatness, orientation, and convexity signatures that encode the surface of a free-form object into three discriminative vectors, and then creates a shared pool of data by clustering the signatures using a distance function. This method applies Bayes' rule for the recognition process, and it is extensible to a large collection of three-dimensional objects.

Keywords: Object recognition, modeling, classification, computer vision.

69 A Comparison of the Nonparametric Regression Models using Smoothing Spline and Kernel Regression

Authors: Dursun Aydin

Abstract:

This paper studies the use of nonparametric models for Gross National Product data in Turkey and the Stanford heart transplant data. Two nonparametric techniques, smoothing spline and kernel regression, are discussed. The main goal is to compare the techniques used for prediction in the nonparametric regression models. According to the results of the numerical studies, it is concluded that the smoothing spline regression estimators are better than the kernel regression estimators.

Keywords: Kernel regression, Nonparametric models, Prediction, Smoothing spline.
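
A minimal sketch comparing the two techniques on simulated data (the GNP and Stanford heart transplant data are not used); the smoothing parameter and bandwidth are illustrative choices.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
f_true = np.sin(x)
y = f_true + rng.normal(scale=0.3, size=x.size)

# Smoothing spline: the smoothing parameter s plays the role of the penalty.
spline_fit = UnivariateSpline(x, y, s=5.0)(x)

# Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h.
def kernel_regression(x_train, y_train, x_eval, h=0.5):
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

kernel_fit = kernel_regression(x, y, x)

for name, fit in [("smoothing spline", spline_fit), ("kernel regression", kernel_fit)]:
    print(f"{name}: MSE vs. true curve = {np.mean((fit - f_true) ** 2):.4f}")
```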

68 A New Version of Unscented Kalman Filter

Authors: S. A. Banani, M. A. Masnadi-Shirazi

Abstract:

This paper presents a new algorithm which yields a nonlinear state estimator called the iterated unscented Kalman filter. This state estimator makes use of both statistical and analytical linearization techniques in different parts of the filtering process. It outperforms the other three nonlinear state estimators, the unscented Kalman filter (UKF), the extended Kalman filter (EKF) and the iterated extended Kalman filter (IEKF), when there is severe nonlinearity in the system equation and less nonlinearity in the measurement equation. The algorithm performance has been verified by illustrating some simulation results.

Keywords: Extended Kalman Filter, Iterated EKF, Nonlinear state estimator, Unscented Kalman Filter.

67 Effective Context Lossless Image Coding Approach Based on Adaptive Prediction

Authors: Grzegorz Ulacha, Ryszard Stasiński

Abstract:

In this paper an effective context-based lossless coding technique is presented. Three principal and a few auxiliary contexts are defined. The predictor adaptation technique is an improved CoBALP algorithm, denoted CoBALP+. A cumulated predictor error combining 8 bias estimators is calculated. It is shown experimentally that the new technique is indeed time-effective: it outperforms the well-known methods having reasonable time complexity, and is inferior only to extremely computationally complex ones.

Keywords: Adaptive prediction, context coding, image lossless coding, prediction error bias correction.

66 Estimating the Population Mean by Using Stratified Double Extreme Ranked Set Sample

Authors: Mahmoud I. Syam, Kamarulzaman Ibrahim, Amer I. Al-Omari

Abstract:

The stratified double extreme ranked set sampling (SDERSS) method is introduced and considered for estimating the population mean. The SDERSS is compared with simple random sampling (SRS), stratified ranked set sampling (SRSS) and stratified simple random sampling (SSRS). It is shown that the SDERSS estimator is an unbiased estimator of the population mean and is more efficient than the estimators using SRS, SRSS and SSRS when the underlying distribution of the variable of interest is symmetric or asymmetric.

Keywords: Double extreme ranked set sampling, Extreme ranked set sampling, Ranked set sampling, Stratified double extreme ranked set sampling.

65 Efficient Frontier - Comparing Different Volatility Estimators

Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković

Abstract:

Modern Portfolio Theory (MPT) according to Markowitz states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator, based on intraday data, and compares three efficient frontiers on the Croatian Stock Market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.

Keywords: Variance, lower semi-variance, range-based volatility.
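
A minimal sketch comparing three volatility estimators on simulated prices: the standard deviation, the lower semi-deviation, and a range-based estimator. Parkinson's high-low estimator is used here as one common range-based choice; the paper's exact intraday estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250
returns = rng.normal(0.0005, 0.01, n)                     # daily close-to-close returns
close = 100 * np.exp(np.cumsum(returns))
high = close * np.exp(np.abs(rng.normal(0, 0.006, n)))    # toy intraday highs
low = close * np.exp(-np.abs(rng.normal(0, 0.006, n)))    # toy intraday lows

std_vol = returns.std(ddof=1)                             # classical standard deviation

downside = returns[returns < returns.mean()] - returns.mean()
semi_vol = np.sqrt((downside ** 2).sum() / (n - 1))       # lower semi-deviation

# Parkinson (1980) range-based estimator built from daily high/low ranges.
parkinson_vol = np.sqrt(np.sum(np.log(high / low) ** 2) / (4 * n * np.log(2)))

print(f"std dev: {std_vol:.4f}  semi-dev: {semi_vol:.4f}  Parkinson: {parkinson_vol:.4f}")
```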

64 Maximum Likelihood Estimation of Burr Type V Distribution under Left Censored Samples

Authors: N. Feroze, M. Aslam

Abstract:

The paper deals with the maximum likelihood estimation of the parameters of the Burr type V distribution based on left censored samples. The maximum likelihood estimators (MLE) of the parameters have been derived and the Fisher information matrix for the parameters of the said distribution has been obtained explicitly. The confidence intervals for the parameters have also been discussed. A simulation study has been conducted to investigate the performance of the point and interval estimates.

Keywords: Fisher information matrix, confidence intervals, censoring.

63 Musical Instrument Classification Using Embedded Hidden Markov Models

Authors: Ehsan Amid, Sina Rezaei Aghdam

Abstract:

In this paper, a novel method for the recognition of musical instruments in polyphonic music is presented by using an embedded hidden Markov model (EHMM). The EHMM is a doubly embedded HMM structure where each state of the external HMM is an independent HMM. The classification is accomplished for two different internal HMM structures, where GMMs are used as likelihood estimators for the internal HMMs. The results are compared to those achieved by an artificial neural network with two hidden layers. Appropriate classification accuracies were achieved both for solo instrument performances and for instrument combinations, which demonstrates that the new approach outperforms similar classification methods by means of the dynamics of the signal.

Keywords: hidden Markov model (HMM), embedded hidden Markov models (EHMM), MFCC, musical instrument.

62 Improving Classification Accuracy with Discretization on Datasets Including Continuous Valued Features

Authors: Mehmet Hacibeyoglu, Ahmet Arslan, Sirzat Kahramanli

Abstract:

This study analyzes the effect of discretization on the classification of datasets including continuous-valued features. Six datasets from UCI containing continuous-valued features are discretized with an entropy-based discretization method. The performance improvement between the datasets with the original features and the datasets with discretized features is compared with the k-nearest neighbors, Naive Bayes, C4.5 and CN2 data mining classification algorithms. As a result, the classification accuracies of the six datasets are improved on average by 1.71% to 12.31%.

Keywords: Data mining classification algorithms, entropy-based discretization method.
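
A minimal sketch of the core step of entropy-based discretization: choosing the cut point of a continuous feature that minimises the class entropy of the induced partition. The recursive splitting and MDL stopping rule of the full method, and the UCI data, are omitted.

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of the class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut(feature, labels):
    # Scan candidate cut points and keep the one with minimal weighted class entropy.
    order = np.argsort(feature)
    f, y = feature[order], labels[order]
    best = (None, np.inf)
    for i in range(1, len(f)):
        if f[i] == f[i - 1]:
            continue
        cut = (f[i] + f[i - 1]) / 2
        left, right = y[:i], y[i:]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if w < best[1]:
            best = (cut, w)
    return best

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])   # continuous feature
y = np.array([0] * 50 + [1] * 50)                                   # class labels
cut, ent = best_cut(x, y)
print(f"Best cut point: {cut:.2f}  weighted class entropy: {ent:.3f}")
```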

61 On the Outlier Detection in Nonlinear Regression

Authors: Hossein Riazoshams, Midi Habshah, Jr., Mohamad Bakri Adam

Abstract:

The detection of outliers is essential because of their responsibility for producing huge interpretative problems in linear as well as in nonlinear regression analysis. Much work has been accomplished on the identification of outliers in linear regression, but not in nonlinear regression. In this article we propose several outlier detection techniques for nonlinear regression. The main idea is to use the linear approximation of a nonlinear model and consider the gradient as the design matrix. Subsequently, the detection techniques are formulated. Six detection measures are developed and combined with three estimation techniques, namely the least squares, M- and MM-estimators. The study shows that among the six measures, only the studentized residual and Cook's distance, when combined with the MM-estimator, are consistently capable of identifying the correct outliers.

Keywords: Nonlinear regression, outliers, gradient, least squares, M-estimate, MM-estimate.

60 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators

Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean

Abstract:

In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception. Even a 3-parameter model, like the variogram, can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.

Keywords: Asymptotic relative efficiency, non-linear rank-based, robust, rank estimates, variogram.
