Search results for: Maximum Likelihood Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9295

9265 Frequency Offset Estimation Schemes Based On ML for OFDM Systems in Non-Gaussian Noise Environments

Authors: Keunhong Chae, Seokho Yoon

Abstract:

In this paper, frequency offset (FO) estimation schemes robust to non-Gaussian noise environments are proposed for orthogonal frequency division multiplexing (OFDM) systems. First, a maximum-likelihood (ML) estimation scheme for non-Gaussian noise environments is proposed, and then the complexity of the ML estimation scheme is reduced by employing a reduced set of candidate values. Numerical results demonstrate that the proposed schemes provide a significant performance improvement over the conventional estimation scheme in non-Gaussian noise environments, while maintaining performance similar to that obtained in Gaussian noise environments.

Keywords: Frequency offset estimation, maximum-likelihood, non-Gaussian noise environment, OFDM, training symbol.
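
The abstract describes a grid-search ML estimator over a reduced candidate set. Below is a minimal, hypothetical sketch of that idea, assuming a known training symbol and a heavy-tailed (Cauchy-like) residual model as a stand-in for the unspecified non-Gaussian noise; the signal model, candidate grid, and function names are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def ml_cfo_estimate(rx, ref, candidates, gamma=1.0):
    """Grid-search ML frequency-offset estimate.

    rx         : received training symbol (complex baseband samples)
    ref        : known transmitted training symbol
    candidates : reduced set of normalized CFO candidates (cycles/sample)
    gamma      : scale of the heavy-tailed (Cauchy) noise model
    """
    n = np.arange(len(rx))
    best, best_ll = None, -np.inf
    for eps in candidates:
        # remove the hypothesized offset and compute residuals
        resid = rx * np.exp(-2j * np.pi * eps * n) - ref
        # Cauchy log-likelihood of the residual magnitudes (heavy-tailed model)
        ll = -np.sum(np.log(1.0 + (np.abs(resid) / gamma) ** 2))
        if ll > best_ll:
            best, best_ll = eps, ll
    return best

# toy usage: BPSK-like training symbol, true offset 0.013 cycles/sample
rng = np.random.default_rng(0)
ref = np.exp(1j * np.pi / 4) * (2 * rng.integers(0, 2, 64) - 1)
rx = ref * np.exp(2j * np.pi * 0.013 * np.arange(64)) + 0.05 * rng.standard_normal(64)
print(ml_cfo_estimate(rx, ref, np.linspace(-0.05, 0.05, 101)))
```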

9264 The Long Run Relationship between Exports and Imports in South Africa: Evidence from Cointegration Analysis

Authors: Sagaren Pillay

Abstract:

This study empirically examines the long run equilibrium relationship between South Africa’s exports and imports using quarterly data from 1985 to 2012. The theoretical framework used for the study is based on Johansen’s maximum likelihood cointegration technique, which tests for both the existence and the number of cointegrating vectors. The study finds that both series are integrated of order one and are cointegrated. A statistically significant cointegrating relationship is found to exist between exports and imports. The study models this unique linear and lagged relationship using a Vector Error Correction Model (VECM). The findings of the study confirm the existence of a long run equilibrium relationship between exports and imports.

Keywords: Cointegration, lagged, linear, maximum likelihood, vector error correction model.
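
As a rough illustration of the workflow described above, here is a sketch using statsmodels' Johansen test and VECM implementations; the file name and column names are placeholders, and the lag order and deterministic terms are assumptions rather than the paper's actual specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# quarterly exports/imports (log levels); file and column names are placeholders
data = pd.read_csv("sa_trade_quarterly.csv")
y = np.log(data[["exports", "imports"]])

# Johansen trace test: det_order=0 (constant), one lagged difference
jres = coint_johansen(y, det_order=0, k_ar_diff=1)
print("trace statistics  :", jres.lr1)
print("5% critical values:", jres.cvt[:, 1])

# fit a VECM with the cointegrating rank suggested by the test
vecm = VECM(y, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm.beta)    # cointegrating (long-run) vector
print(vecm.alpha)   # error-correction adjustment coefficients
```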

9263 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image

Authors: Yohei Saika, Yuji Haraguchi

Abstract:

We constructed a noise-reduction method for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we applied the MPM estimate using two kinds of likelihood, both defined for grayscale images degraded by lossy JPEG compression: a deterministic model of the likelihood and a probabilistic one expressed by a Gaussian distribution. Then, using Monte Carlo simulation on grayscale images, such as the 256-grayscale standard image “Lena” with 256 × 256 pixels, we examined the performance of the MPM estimate with the mean square error as the performance measure. We clarified that the MPM estimate with the Gaussian probabilistic likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate with the deterministic likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.

Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, the maximizer of the posterior marginal estimate

9262 Performance of Hybrid-MIMO Receiver Scheme in Cognitive Radio Network

Authors: Tanapong Khomyat, Peerapong Uthansakul, Monthippa Uthansakul

Abstract:

In this paper, we evaluate the performance of the Hybrid-MIMO Receiver Scheme (HMRS) in a cognitive radio network (CR-network). We investigate the efficiency of the proposed scheme as the energy level and the number of primary users are varied according to the characteristics of the CR-network. HMRS allows users to transmit either Space-Time Block Code (STBC) or Spatial Multiplexing (SM) streams simultaneously by using Successive Interference Cancellation (SIC) and Maximum Likelihood Detection (MLD). The simulation results indicate that the interference level affects the performance of HMRS. Moreover, the exact closed-form capacity of the proposed scheme is derived and compared with that of the STBC scheme.

Keywords: Hybrid-MIMO, Cognitive radio network (CR-network), Symbol Error Rate (SER), Successive interference cancellation (SIC), Maximum likelihood detection (MLD).

9261 A Novel Estimation Method for Integer Frequency Offset in Wireless OFDM Systems

Authors: Taeung Yoon, Youngpo Lee, Chonghan Song, Na Young Ha, Seokho Yoon

Abstract:

Ren et al. presented an efficient carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM), which has an estimation range as large as the bandwidth of the OFDM signal and achieves high accuracy without any constraint on the structure of the training sequence. However, its detection probability of the integer frequency offset (IFO) varies rapidly with changes in the fractional frequency offset (FFO). In this paper, we first analyze Ren's method and define two criteria suitable for detection of the IFO. Then, we propose a novel method for IFO estimation based on the maximum-likelihood (ML) principle and the detection criteria defined in this paper. The simulation results demonstrate that the proposed method outperforms Ren's method in terms of the IFO detection probability irrespective of the value of the FFO.

Keywords: Orthogonal frequency division multiplexing, integer frequency offset, estimation, training symbol

9260 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many settings frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handle this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood (order 1 & order 2) (MQL1, MQL2) and penalized quasi-likelihood (order 1 & order 2) (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2 and failed for almost all the combinations of MQL. Power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset and the performance of the test was compared for each model.

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

9259 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of the quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives based on the information identity. However, parameter estimation in an a/symmetric GARCH(1,1) model assuming a normal distribution of returns is not simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is obtained. Because the solutions of numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined. The number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modelling daily volatility of the most liquid stocks on the Croatian capital market: Podravka stocks (food industry), Petrokemija stocks (fertilizer industry) and Ericsson Nikola Tesla stocks (information and communications industry).

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.
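
As a concrete illustration of the numerical maximum likelihood optimization described above, here is a sketch of a Gaussian GARCH(1,1) fit; it uses SciPy's L-BFGS-B quasi-Newton routine as a stand-in for the BHHH update, and the returns series and starting values are made up.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    if alpha + beta >= 1.0:
        return 1e10                        # soft penalty keeps the search in the stationary region
    h = np.empty_like(r)
    h[0] = np.var(r)                       # start the variance recursion at the sample variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

# toy daily returns; in the paper these would be the Podravka/Petrokemija/Ericsson NT series
rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(1500)

# starting values close to typical estimates shorten the iteration count
x0 = np.array([1e-6, 0.05, 0.90])
res = minimize(garch11_negloglik, x0, args=(r,), method="L-BFGS-B",
               bounds=[(1e-12, None), (0.0, 0.999), (0.0, 0.999)])
print(res.x, -res.fun)                     # omega, alpha, beta and the maximized log-likelihood
```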

9258 Further Thoughts on a Sequential Life Testing Approach Using an Inverse Weibull Model

Authors: D. I. De Souza, G. P. Azevedo, D. R. Fonseca

Abstract:

In this paper we will develop further the sequential life test approach presented in a previous article by [1] using an underlying two-parameter Inverse Weibull sampling distribution. The location parameter or minimum life will be considered equal to zero. Once again we will provide rules for making one of the three possible decisions as each observation becomes available; that is: accept the null hypothesis H0; reject the null hypothesis H0; or obtain additional information by making another observation. The product being analyzed is a new electronic component. There is little information available about the possible values the parameters of the corresponding Inverse Weibull underlying sampling distribution could have. To estimate the shape and the scale parameters of the underlying Inverse Weibull model we will use a maximum likelihood approach for censored failure data. A new example will further develop the proposed sequential life testing approach.

Keywords: Sequential Life Testing, Inverse Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.
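
To make the censored-data maximum likelihood step concrete, here is a minimal sketch assuming the common parameterization F(t) = exp(-(θ/t)^β) for the two-parameter Inverse Weibull; the failure times, censoring pattern, and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def inv_weibull_negloglik(params, t, observed):
    """Negative log-likelihood of a two-parameter Inverse Weibull model
    with right-censored data (observed[i]=1 for failures, 0 for censored)."""
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    z = (theta / t) ** beta
    logf = np.log(beta) + beta * np.log(theta) - (beta + 1) * np.log(t) - z
    logS = np.log1p(-np.exp(-z))           # survival: 1 - exp(-(theta/t)^beta)
    return -np.sum(np.where(observed == 1, logf, logS))

# hypothetical failure times (hours) for the electronic component; 0 marks censoring
t = np.array([120., 180., 240., 310., 400., 520., 650., 800., 800., 800.])
obs = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])

res = minimize(inv_weibull_negloglik, x0=[1.5, 300.0], args=(t, obs),
               method="Nelder-Mead")
print("shape, scale MLEs:", res.x)
```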

9257 Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations

Authors: Nur Zakiah Mohd Saat, Abdul Aziz Jemain

Abstract:

Saddlepoint approximation is one of the tools for obtaining expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation. We demonstrate the density of a maximum likelihood estimator in exponential families.

Keywords: Exponential, maximum likelihood estimators, observed gap, saddlepoint approximations.
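
For intuition, here is a small worked sketch of the saddlepoint density approximation for the sample mean of exponential gaps (the MLE of the mean gap in this exponential family), compared with the exact Gamma density; the rate and sample size are invented for illustration.

```python
import numpy as np
from scipy.stats import gamma

lam, n = 1.0 / 42.0, 25        # hypothetical rate of gaps (per minute) and sample size

def saddlepoint_density_mean(x, lam, n):
    """Saddlepoint approximation to the density of the sample mean of n
    iid Exponential(rate=lam) gaps (the MLE of the mean gap)."""
    K = lambda s: np.log(lam) - np.log(lam - s)     # cumulant generating function
    s_hat = lam - 1.0 / x                           # solves K'(s) = x
    K2 = 1.0 / (lam - s_hat) ** 2                   # K''(s_hat)
    return np.sqrt(n / (2 * np.pi * K2)) * np.exp(n * (K(s_hat) - s_hat * x))

x = np.linspace(20, 80, 5)
approx = saddlepoint_density_mean(x, lam, n)
exact = gamma.pdf(x, a=n, scale=1.0 / (n * lam))    # sample mean is Gamma(n, rate n*lam)
print(np.round(approx, 5))
print(np.round(exact, 5))
```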

9256 Statistical Analysis-Driven Risk Assessment of Criteria Air Pollutants: A Sulfur Dioxide Case Study

Authors: Ehsan Bashiri

Abstract:

A 7-step method (with 25 sub-steps) to assess risk of air pollutants is introduced. These steps are: pre-considerations, sampling, statistical analysis, exposure matrix and likelihood, dose-response matrix and likelihood, total risk evaluation, and discussion of findings. All mentioned words and expressions are well understood; however, almost all steps have been modified, improved, and coupled in such a way that a comprehensive method has been prepared. Accordingly, the SADRA (Statistical Analysis-Driven Risk Assessment) emphasizes extensive and ongoing application of analytical statistics in traditional risk assessment models. A Sulfur Dioxide case study validates the claim and provides a good illustration for this method.

Keywords: Criteria air pollutants, Matrix of risk, Risk assessment, Statistical analysis.

9255 Unscented Grid Filtering and Smoothing for Nonlinear Time Series Analysis

Authors: Nikolay Nikolaev, Evgueni Smirnov

Abstract:

This paper develops an unscented grid-based filter and a smoother for accurate nonlinear modeling and analysis of time series. The filter uses unscented deterministic sampling during both the time and measurement updating phases to approximate directly the distributions of the latent state variable. A complementary grid smoother is also derived to enable computation of the likelihood. This allows us to formulate an expectation-maximisation algorithm for maximum likelihood estimation of the state noise and the observation noise. Empirical investigations show that the proposed unscented grid filter/smoother compares favourably to other similar filters on nonlinear estimation tasks.

Keywords:

9254 Frequency Estimation Using Analytic Signal via Wavelet Transform

Authors: Sudipta Majumdar, Akansha Singh

Abstract:

Frequency estimation of a sinusoid in white noise using maximum entropy power spectral estimation has been shown to be very sensitive to the initial sinusoidal phase. This paper presents the use of the wavelet transform to obtain an analytic signal for frequency estimation with the maximum entropy method (MEM), and compares the results with frequency estimation using the analytic signal obtained by the Hilbert transform and with frequency estimation using the real data together with MEM. The presented method shows improved estimation precision and anti-noise performance.

Keywords: Frequency estimation, analytic signal, maximum entropy method, wavelet transform.

9253 On the Maximum Theorem: A Constructive Analysis

Authors: Yasuhito Tanaka

Abstract:

We examine the maximum theorem by Berge from the point of view of Bishop style constructive mathematics. We will show an approximate version of the maximum theorem and the maximum theorem for functions with sequentially locally at most one maximum.

Keywords: Maximum theorem, Constructive mathematics, Sequentially locally at most one maximum.

9252 Generalized Maximum Entropy Method for Cosmic Source Localization

Authors: Youssef Khmou, Said Safi, Miloud Frikel

Abstract:

The maximum entropy principle in spectral analysis has been used as an estimator of the direction of arrival (DoA) of electromagnetic or acoustic sources impinging on an array of sensors; indeed, the maximum entropy operator is very efficient when the signals of the radiating sources are ergodic, complex, zero-mean random processes, which is the case for cosmic sources. In this paper, we present a basic review of the maximum entropy method (MEM), which consists of a rank-one operator that is not a projector, and we elaborate a new operator which is full rank and a sum of all possible projectors. Two-dimensional simulation results based on Monte Carlo trials demonstrate the resolution power of the new operator, whereas the MEM presents some erroneous fluctuations.

Keywords: Maximum entropy, Cosmic source, Localization, operator, projector, azimuth, elevation, DoA, circular array.

9251 Inference of Stress-Strength Model for a Lomax Distribution

Authors: H. Panahi, S. Asadi

Abstract:

In this paper, the estimation of the stress-strength parameter R = P(Y < X) is studied, where X and Y are independent Lomax random variables with a common scale parameter but different shape parameters. The maximum likelihood estimator of R is derived. Assuming that the common scale parameter is known, the Bayes estimator and an exact confidence interval of R are discussed. A simulation study has been carried out to investigate the performance of the different proposed methods.

Keywords: Stress-strength model, maximum likelihood estimator, Bayes estimator, Lomax distribution.
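
A minimal numerical sketch of the maximum likelihood route described above, assuming the standard Lomax parameterization with survival function (1 + x/λ)^(-α): with a known common scale, the shape MLE has a closed form, and R = P(Y < X) reduces to α_y / (α_x + α_y). The sample sizes and shape values below are invented.

```python
import numpy as np

def lomax_shape_mle(x, scale):
    """MLE of the Lomax shape parameter when the scale is known."""
    return len(x) / np.sum(np.log1p(x / scale))

def stress_strength_mle(x, y, scale):
    """MLE of R = P(Y < X) for independent Lomax samples sharing the scale.
    With shapes a_x, a_y, R works out to a_y / (a_x + a_y)."""
    a_x, a_y = lomax_shape_mle(x, scale), lomax_shape_mle(y, scale)
    return a_y / (a_x + a_y)

# toy samples with known common scale 1.0 (numpy's pareto draws are Lomax with scale 1)
rng = np.random.default_rng(3)
x = rng.pareto(2.0, 200)          # strength: shape 2.0
y = rng.pareto(3.5, 200)          # stress:   shape 3.5
print(stress_strength_mle(x, y, scale=1.0))   # true R = 3.5 / 5.5 ≈ 0.636
```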

9250 Navigation Patterns Mining Approach based on Expectation Maximization Algorithm

Authors: Norwati Mustapha, Manijeh Jalali, Abolghasem Bozorgniya, Mehrdad Jalali

Abstract:

Web usage mining algorithms have been widely utilized for modeling user web navigation behavior. In this study we advance a model for mining users' navigation patterns. The model builds a user model based on the expectation-maximization (EM) algorithm. The EM algorithm is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The experimental results show that by decreasing the number of clusters, the log-likelihood converges toward lower values, and that the probability of the largest cluster decreases as the number of clusters increases in each treatment.

Keywords: Web Usage Mining, Expectation maximization, navigation pattern mining.
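
As a concrete stand-in for the EM-based user model described above, the sketch below fits Gaussian mixtures (an EM algorithm) to hypothetical session features and reports the log-likelihood and the largest cluster weight as the number of clusters varies; the feature construction is an assumption, not the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# hypothetical session features (e.g. time spent on each of 5 page categories)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1.0, size=(200, 5)) for m in (0.0, 3.0, 6.0)])

for k in (2, 3, 4, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)   # EM fit
    print(f"k={k}  avg log-likelihood={gm.score(X):.2f}  "
          f"largest cluster weight={gm.weights_.max():.2f}")
```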

9249 DPSO Based SEPIC Converter in PV System under Partial Shading Condition

Authors: K. Divya, G. Sugumaran

Abstract:

This paper proposes an improved maximum power point tracking of a photovoltaic system using the Deterministic Particle Swarm Optimization technique. This method has the ability to track the maximum power under varying environmental conditions, i.e. partial shading conditions. The advantage of this method is that particles move with restricted velocity values to achieve the maximum power. A SEPIC converter is employed to boost the voltage of the PV system. To evaluate the proposed method, MATLAB simulations are carried out under partial shading conditions.

Keywords: DPSO, Partial shading condition, P&O, PV, SEPIC.

9248 Speaker Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression

Authors: Ehab Mourtaga, Ahmad Sharieh, Mousa Abdallah

Abstract:

An automatic speech recognition system for the formal Arabic language is needed. The Quran is the most formal spoken book in Arabic, and it is recited all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic recitation was developed and tested. The system was developed based on tri-phone Hidden Markov Models and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations which reduces the mismatch between an initial model set and the adaptation data. It uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian-mixture HMM system. The 30th chapter of the Quran, read by five of the most famous readers of the Quran, was used for training and testing. The chapter includes about 2000 distinct words. The advantages of using the Quranic verses as the database for this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the test data ranged from 68% to 85%.

Keywords: Hidden Markov Model (HMM), Maximum Likelihood Linear Regression (MLLR), Quran, Regression Class Tree, Speech Recognition, Speaker-independent.

9247 A New Brazilian Friction-Resistant Low Alloy High Strength Steel – A Life Testing Approach

Authors: D. I. De Souza, G. P. Azevedo, R. Rocha

Abstract:

In this paper we will develop a sequential life test approach applied to a modified low alloy-high strength steel part used in highway overpasses in Brazil. We will consider two possible underlying sampling distributions: the Normal and the Inverse Weibull models. The minimum life will be considered equal to zero. We will use the two underlying models to analyze a fatigue life test situation, comparing the results obtained from both. Since a major chemical component of this low alloy-high strength steel part has been changed, there is little information available about the possible values that the parameters of the corresponding Normal and Inverse Weibull underlying sampling distributions could have. To estimate the shape and the scale parameters of these two sampling models we will use a maximum likelihood approach for censored failure data. We will also develop a truncation mechanism for the Inverse Weibull and Normal models. We will provide rules to truncate a sequential life testing situation making one of the two possible decisions at the moment of truncation; that is, accept or reject the null hypothesis H0. An example will develop the proposed truncated sequential life testing approach for the Inverse Weibull and Normal models.

Keywords: Sequential life testing, normal and inverse Weibull models, maximum likelihood approach, truncation mechanism.

9246 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) that handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts the likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as GQL-I while being far more computationally stable.

Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMS, GQL.

9245 Odor Discrimination Using Neural Decoding of Olfactory Bulbs in Rats

Authors: K.-J. You, H.J. Lee, Y. Lang, C. Im, C.S. Koh, H.-C. Shin

Abstract:

This paper presents a novel method for inferring odors based on neural activities observed from rats' main olfactory bulbs. Multi-channel extra-cellular single unit recordings were made with micro-wire electrodes (tungsten, 50μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. The neural response used as the key feature was the neural firing rate after the stimulus minus that before the stimulus. For odor inference, we developed a decoding method based on maximum likelihood (ML) estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.

Keywords: biomedical signal processing, neural engineering, olfactory, neural decoding, BMI
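
To illustrate the flavor of such an ML decoder, here is a hypothetical sketch that models each odor's 32-channel firing-rate change with an independent Gaussian per channel and classifies a trial by the highest log-likelihood; the per-channel Gaussian assumption and the toy data are illustrative, not the authors' exact model.

```python
import numpy as np

def fit_ml_decoder(rates, labels):
    """Per-odor Gaussian model of firing-rate changes (channels assumed independent)."""
    classes = np.unique(labels)
    means = np.array([rates[labels == c].mean(axis=0) for c in classes])
    stds = np.array([rates[labels == c].std(axis=0) + 1e-6 for c in classes])
    return classes, means, stds

def ml_decode(trial, classes, means, stds):
    """Pick the odor whose model gives the trial the highest log-likelihood."""
    loglik = -0.5 * np.sum(np.log(2 * np.pi * stds**2)
                           + (trial - means) ** 2 / stds**2, axis=1)
    return classes[np.argmax(loglik)]

# toy data: 32-channel firing-rate changes (after minus before stimulus), 3 odors
rng = np.random.default_rng(5)
true_means = rng.normal(0, 2, size=(3, 32))
labels = np.repeat(np.arange(3), 40)
rates = true_means[labels] + rng.normal(0, 1, size=(120, 32))

classes, means, stds = fit_ml_decoder(rates, labels)
preds = np.array([ml_decode(x, classes, means, stds) for x in rates])
print("decoding accuracy:", (preds == labels).mean())
```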

9244 Modelling Extreme Temperature in Malaysia Using Generalized Extreme Value Distribution

Authors: Husna Hasan, Norfatin Salam, Mohd Bakri Adam

Abstract:

Extreme temperature at several stations in Malaysia is modelled by fitting the monthly maxima to the Generalized Extreme Value (GEV) distribution. The Mann-Kendall (MK) test suggests a non-stationary model. Two models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. Results show that half of the stations favour a model which is linear in the location parameter. The return level, i.e. the level of maximum temperature which is expected to be exceeded once, on average, in a given number of years, is also obtained.

Keywords: Extreme temperature, extreme value, return level.
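
A minimal sketch of the block-maxima workflow described above, using SciPy's GEV implementation (where the shape parameter c corresponds to -ξ) on invented monthly maxima; the station data and return period are placeholders.

```python
import numpy as np
from scipy.stats import genextreme

# hypothetical monthly maximum temperatures (deg C) at one station
rng = np.random.default_rng(6)
monthly_max = 32 + 2.5 * rng.gumbel(size=240)        # 20 years of monthly maxima

# maximum likelihood fit of the GEV (scipy's shape c corresponds to -xi)
c, loc, scale = genextreme.fit(monthly_max)

# 100-month return level: exceeded once on average every 100 months
T = 100
return_level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"shape={c:.3f}  loc={loc:.2f}  scale={scale:.2f}  "
      f"{T}-month return level={return_level:.1f} C")
```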

9243 Maximum Power Point Tracking Based on Estimated Power for PV Energy Conversion System

Authors: Zainab Almukhtar, Adel Merabet

Abstract:

In this paper, a method for maximum power point tracking of a photovoltaic energy conversion system is presented. This method is based on using the difference between the power from the solar panel and an estimated power value to control the DC-DC converter of the photovoltaic system. The difference is continuously compared with a preset permitted error value. If the power difference is more than the error, the estimated power is multiplied by a factor and the operation is repeated until the difference is less than or equal to the threshold error. The difference in power is used to trigger a DC-DC boost converter in order to raise the voltage to where the maximum power point is achieved. The proposed method was experimentally verified through a PV energy conversion system driven by the OPAL-RT real-time controller. The method was tested under varying radiation conditions and load requirements, and the photovoltaic panel was operated at its maximum power under different irradiation conditions.

Keywords: Control system, power error, solar panel, MPPT.
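
The loop below is a rough, hypothetical sketch of the estimated-power comparison logic described in the abstract; the scaling factor, error threshold, duty-cycle update rule, and hardware callbacks are all invented for illustration and are not the authors' implementation.

```python
def track_mpp(read_panel_power, set_duty, p_est=50.0, err_max=1.0,
              k=1.05, d=0.5, d_step=0.01, iterations=200):
    """Sketch of an estimated-power MPPT loop: scale the power estimate until
    the measured panel power is within err_max of it, nudging the boost-converter
    duty cycle toward the estimate at each step."""
    for _ in range(iterations):
        p_meas = read_panel_power()                # measured PV power
        diff = p_est - p_meas
        if abs(diff) <= err_max:                   # estimate matched: at (or near) the MPP
            break
        p_est *= k if diff < 0 else 1.0 / k        # grow/shrink the estimated power
        d = min(max(d + (d_step if diff < 0 else -d_step), 0.05), 0.95)
        set_duty(d)                                # drive the DC-DC boost converter
    return d, p_est

# usage would pass hardware (or simulation) callbacks, e.g.:
# duty, p = track_mpp(read_panel_power=sim.read_power, set_duty=sim.set_duty)
```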

9242 A Hybrid Approach Using Particle Swarm Optimization and Simulated Annealing for N-queen Problem

Authors: Vahid Mohammadi Saffarzadeh, Pourya Jafarzadeh, Masoud Mazloom

Abstract:

This paper presents a hybrid approach for solving the n-queen problem by a combination of PSO and SA. PSO is a population-based heuristic method that sometimes gets trapped in a local maximum; SA can be used to overcome this. Although SA suffers from many iterations and long convergence times on some problems, with well-adjusted initial parameters such as the temperature and the length of the temperature stages SA guarantees convergence. In this article we use discrete PSO (due to the nature of the n-queen problem) to reach a good local maximum, and then use SA to escape from it. The experimental results show that our hybrid method converges to a result faster than the SA method, especially for high-dimensional n-queen problems.

Keywords: PSO, SA, N-queen, CSP
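
For a flavor of the hybrid idea, here is a compact sketch: a simplified discrete PSO (swap-based moves toward the global best) followed by an SA phase to escape the local optimum. The permutation encoding, swap operators, and cooling schedule are assumptions for illustration, not the authors' exact algorithm.

```python
import random, math

def conflicts(p):
    """Number of attacking queen pairs on the diagonals (permutation encoding)."""
    return sum(abs(p[i] - p[j]) == j - i
               for i in range(len(p)) for j in range(i + 1, len(p)))

def pso_then_sa(n=20, particles=30, pso_iters=200, sa_iters=2000, t0=2.0):
    random.seed(0)
    swarm = [random.sample(range(n), n) for _ in range(particles)]
    best = min(swarm, key=conflicts)[:]
    # discrete PSO phase: swap-based moves pulling each particle toward the global best
    for _ in range(pso_iters):
        for p in swarm:
            i = random.randrange(n)
            j = p.index(best[i])              # align column i with the global best
            p[i], p[j] = p[j], p[i]
            if random.random() < 0.3:         # random swap acts as exploratory "velocity"
                a, b = random.randrange(n), random.randrange(n)
                p[a], p[b] = p[b], p[a]
        cand = min(swarm, key=conflicts)
        if conflicts(cand) < conflicts(best):
            best = cand[:]
        if conflicts(best) == 0:
            return best
    # SA phase: escape the local optimum the swarm settled into
    cur, t = best[:], t0
    for _ in range(sa_iters):
        a, b = random.randrange(n), random.randrange(n)
        nxt = cur[:]
        nxt[a], nxt[b] = nxt[b], nxt[a]
        d = conflicts(nxt) - conflicts(cur)
        if d <= 0 or random.random() < math.exp(-d / t):
            cur = nxt
        if conflicts(cur) < conflicts(best):
            best = cur[:]
        t *= 0.999                            # geometric cooling
        if conflicts(best) == 0:
            break
    return best

board = pso_then_sa()
print("remaining conflicts:", conflicts(board))
```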

9241 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these channel stochastic models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary signal maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems that notoriously suffer from low SNR, at the cost of increased computational complexity.

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

9240 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component

Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo

Abstract:

Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome such a problem, there is the accelerated life-testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions. These models are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the “Maxwell Distribution Law.” In this paper we apply a combined approach of a sequential life testing and an accelerated life testing to a low alloy high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution will be the three-parameter Inverse Weibull model. To estimate the three parameters of the Inverse Weibull model we will use a maximum likelihood approach for censored failure data. We will be assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model we will apply to the expected normal failure times a sequential life testing using a truncation mechanism. An example will illustrate the application of this procedure.

Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.

9239 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin

Abstract:

Higher ground-level ozone (GLO) concentration adversely affects human health, vegetation, as well as activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value of GLO concentration, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis which focuses on the higher or extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding model (Gumbel, Weibull, or Frechet) of the GEV distribution. The results show that the Weibull distribution, which is also known as a short-tailed distribution and considered as having less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim; while the Gumbel distribution, which is considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conduct this study by using the maximum likelihood estimation (MLE) method to estimate the parameters at four selected stations in Peninsular Malaysia. Next, the validation of the GEV distribution fitted to the block maxima series is performed using the probability plot, the quantile plot and the likelihood ratio test. A profile likelihood confidence interval is used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.

Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.

9238 Effect of Different BER Performance Comparison of MAP and ML Detection

Authors: Naveed Ur Rehman, Rehan Jamil, Irfan Jamil

Abstract:

In this paper, we consider a coded transmission over a frequency-selective channel. We study analytically the convergence of the turbo-detector using a maximum a posteriori (MAP) equalizer and a MAP decoder. We demonstrate that the densities of the maximum likelihood (ML) values exchanged during the iterations are e-symmetric and output-symmetric. Under the Gaussian approximation, this property allows a one-dimensional analysis of the turbo-detector. By deriving the analytical expressions of the ML distributions under the Gaussian approximation, we confirm that the bit error rate (BER) performance of the turbo-detector converges to the BER performance of the coded additive white Gaussian noise (AWGN) channel at high signal-to-noise ratio (SNR), for any frequency-selective channel.

Keywords: MAP, ML, SNR, Decoder, BER, Coded transmission.

9237 A Comparison of Marginal and Joint Generalized Quasi-likelihood Estimating Equations Based On the Com-Poisson GLM: Application to Car Breakdowns Data

Authors: N. Mamode Khan, V. Jowaheer

Abstract:

In this paper, we apply and compare two generalized estimating equation approaches to the analysis of car breakdown data in Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of the explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We remark that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using the Com-Poisson regression model. We use two types of quasi-likelihood estimation approaches to estimate the parameters of the model: the marginal and the joint generalized quasi-likelihood estimating equation approaches. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution in modelling the under-dispersed count responses recorded in this study.

Keywords: Breakdowns, under-dispersion, Com-Poisson, generalized linear model, marginal quasi-likelihood estimation, joint quasi-likelihood estimation.
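
For reference, the sketch below shows the Com-Poisson pmf with its truncated normalizing constant and a direct numerical maximum likelihood fit on invented breakdown counts; this is a plain MLE illustration, not the marginal or joint generalized quasi-likelihood estimation used in the paper, and ν > 1 corresponds to under-dispersion.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def cmp_logpmf(y, lam, nu, jmax=200):
    """Log-pmf of the Com-Poisson: P(Y=j) proportional to lam**j / (j!)**nu."""
    j = np.arange(jmax)
    logz = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))   # truncated normalizer
    return y * np.log(lam) - nu * gammaln(y + 1) - logz

def cmp_negloglik(params, y):
    lam, nu = np.exp(params)               # optimize on the log scale to keep both positive
    return -np.sum(cmp_logpmf(y, lam, nu))

# hypothetical yearly breakdown counts per car (under-dispersed: variance < mean)
y = np.array([1, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 2, 1, 2])

res = minimize(cmp_negloglik, x0=np.log([2.0, 1.0]), args=(y,), method="Nelder-Mead")
lam_hat, nu_hat = np.exp(res.x)
print(f"lambda={lam_hat:.2f}  nu (dispersion)={nu_hat:.2f}")   # nu > 1 indicates under-dispersion
```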

9236 A Novel Method to Evaluate Line Loadability for Distribution Systems with Realistic Loads

Authors: K. Nagaraju, S. Sivanagaraju, T. Ramana, V. Ganesh

Abstract:

This paper presents a simple method for estimating the additional load, as a factor of the existing load, that may be drawn before reaching the point of maximum line loadability of a radial distribution system (RDS) with different realistic load models at different substation voltages. The proposed method involves a simple line loadability index (LLI) that gives a measure of the proximity of the present state of a line in the distribution system. The LLI can be used to assess voltage instability and the line loading margin. The proposed method is also compared with the existing maximum loadability index method [10]. The simulation results show that the LLI can identify not only the weakest line/branch causing system instability but also the system voltage collapse point when the index is near one. This feature enables us to set an index threshold to monitor and predict system stability on-line so that proper action can be taken to prevent the system from collapse. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on two-bus and 69-bus radial distribution systems.

Keywords: line loadability index, line loading margin, maximum line loadability, system stability, radial distribution system
