Search results for: Marshall-Olkin bivariate exponential distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5367

5337 Transformations between Bivariate Polynomial Bases

Authors: Dimitris Varsamis, Nicholas Karampetakis

Abstract:

It is well known that any interpolating polynomial P(x,y) in the vector space Pn,m of two-variable polynomials, of degree less than n in x and less than m in y, has various representations depending on the basis of Pn,m that we select, e.g. the monomial, Newton, or Lagrange basis. The aim of this paper is twofold: (a) to present transformations between the coordinates of the polynomial P(x,y) in the aforementioned bases, and (b) to present transformations between these bases.
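
As a minimal sketch of one such transformation (not taken from the paper; grid points and sampled values below are hypothetical), Lagrange coordinates on a tensor-product grid, i.e. the sampled values F[k, l] = P(xs[k], ys[l]), can be converted to monomial coefficients via two Vandermonde solves, since F = Vx C Vyᵀ:

```python
import numpy as np

# Hypothetical interpolation grid and sampled values F[k, l] = P(xs[k], ys[l])
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0])
F = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [5.0, 8.0]])

Vx = np.vander(xs, increasing=True)   # Vx[k, i] = xs[k]**i
Vy = np.vander(ys, increasing=True)   # Vy[l, j] = ys[l]**j

# Since F = Vx @ C @ Vy.T, the monomial coefficients c_ij of x^i y^j are
C = np.linalg.solve(Vx, np.linalg.solve(Vy, F.T).T)
```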

Keywords: bivariate interpolation polynomial, polynomial basis, transformations, interpolating polynomial

Procedia PDF Downloads 374
5336 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is more suitable, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The lifetimes of the units under test are assumed to follow an exponential distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by means of a simulation study.
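
For context, a standard tampered random variable formulation of step-stress PALT (the usual DeGroot-Goel form, not necessarily the exact model of this paper) relates the total lifetime Y to the use-condition lifetime T through the stress-change time τ and the tampering (acceleration) coefficient β > 1:

\[
Y = \begin{cases} T, & T \le \tau, \\ \tau + \beta^{-1}\,(T - \tau), & T > \tau. \end{cases}
\]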

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 297
5335 Stability of Hybrid Systems

Authors: Kreangkri Ratchagit

Abstract:

This paper is concerned with exponential stability of switched linear systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, in which the lower bound of delay is not restricted to zero. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with Leibniz-Newton’s formula, a switching rule for the exponential stability of switched linear systems with interval time-varying delays and new delay-dependent sufficient conditions for the exponential stability of the systems are first established in terms of LMIs. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.

Keywords: exponential stability, hybrid systems, time-varying delays, Lyapunov-Krasovskii functional, Leibniz-Newton’s formula

Procedia PDF Downloads 427
5334 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone Generalized Exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code has been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data, with emphasis placed on the modelling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are both covered, but most of the emphasis is on Markov chain Monte Carlo (MCMC) methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. The TPGE model is also used to analyze lifetime data in the Bayesian paradigm, with results evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
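
The paper's implementation is in R and JAGS; purely as a language-neutral illustration of the independent Metropolis algorithm mentioned above, the sketch below samples a simple exponential-lifetime posterior with a Gamma(1, 1) prior on the rate (the data and prior are hypothetical, not the TPGE model of the paper):

```python
import numpy as np
from scipy import stats

def independence_metropolis(log_post, proposal, n_iter=10_000, seed=0):
    """Independent Metropolis: candidates come from a fixed proposal; the
    acceptance ratio corrects for both the target and the proposal density."""
    rng = np.random.default_rng(seed)
    theta = proposal.rvs(random_state=rng)
    lp, lq = log_post(theta), proposal.logpdf(theta)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        cand = proposal.rvs(random_state=rng)
        lp_c, lq_c = log_post(cand), proposal.logpdf(cand)
        if np.log(rng.uniform()) < (lp_c - lp) - (lq_c - lq):
            theta, lp, lq = cand, lp_c, lq_c
        draws[i] = theta
    return draws

data = np.array([5.2, 8.1, 12.4, 3.3, 9.0])          # hypothetical lifetimes
# Exponential likelihood with a Gamma(1, 1) prior on the rate lam
log_post = lambda lam: (len(data) * np.log(lam) - lam * (data.sum() + 1.0)
                        if lam > 0 else -np.inf)
proposal = stats.gamma(a=len(data) + 1, scale=1.0 / (data.sum() + 1.0))
posterior_draws = independence_metropolis(log_post, proposal)
```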

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 503
5333 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher than those of the other cells, and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present a real data set containing bivariate count observations with inflations in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 134
5332 Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling

Authors: M. Khalid, G. N. Singh

Abstract:

In this paper, we consider the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response situations. Some modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to random non-response. The performances of the proposed estimators are compared with a linear combination of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response situations. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators. Suitable recommendations are made to survey practitioners.

Keywords: modified exponential estimator, successive sampling, random non-response, auxiliary variable, bias, mean square error

Procedia PDF Downloads 331
5331 A Bayesian Model with Improved Prior in Extreme Value Problems

Authors: Eva L. Sanjuán, Jacinto Martín, M. Isabel Parra, Mario M. Pizarro

Abstract:

In Extreme Value Theory, inference about the parameters of the distribution is made using only a small part of the observed values. When block maxima are taken, many data are discarded. We develop a new Bayesian inference model to exploit all the information provided by the data, introducing informative priors and using the relations between baseline and limit parameters. Firstly, we study the accuracy of the new model for three baseline distributions that lead to a Gumbel extreme-value distribution: Exponential, Normal, and Gumbel. Secondly, we consider mixtures of Normal variables, to simulate practical situations in which the data do not follow pure distributions because of perturbations (noise).

Keywords: bayesian inference, extreme value theory, Gumbel distribution, highly informative prior

Procedia PDF Downloads 169
5330 Functioning of Public Distribution System and Calories Intake in the State of Maharashtra

Authors: Balasaheb Bansode, L. Ladusingh

Abstract:

The public distribution system (PDS) is an important component of food security. It is a massive welfare program undertaken by the Government of India and implemented by the state governments, India being a federal state, to achieve multiple objectives such as eliminating hunger, reducing malnutrition, and making food consumption affordable. This program reaches the community level through various agencies of the government. The paper focuses on the accessibility of the PDS at the household level and on how the present policy framework results in exclusion and inclusion errors. It explores the sanctioned food grain quantity received at the household level by ration cards differentiated according to income criteria, and it also highlights the types of corruption in food distribution generated by the PDS. The data used are of a secondary nature, from the NSSO 68th round conducted in 2012. Bivariate and multivariate techniques have been used to study the working of the PDS and food consumption for this paper.

Keywords: calories intake, entitled food quantity, poverty alleviation through PDS, target error

Procedia PDF Downloads 310
5329 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology, and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by making use of a variety of techniques, we address the problem of securing the distribution of the copula. This will be done using several approaches. For example, we will obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, secure Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations will be presented. The proposed methodologies will also be applied to a sample generated from a known copula distribution in order to validate their effectiveness.
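
As a minimal sketch of two of the listed approaches (simplified relative to the paper, with a made-up sample), the empirical copula can be computed from rank-based pseudo-observations and then smoothed with Bernstein polynomials:

```python
import numpy as np
from scipy.stats import rankdata, binom

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v) from rank-based pseudo-observations."""
    n = len(x)
    ru, rv = rankdata(x) / n, rankdata(y) / n
    return np.mean((ru <= u) & (rv <= v))

def bernstein_copula(x, y, u, v, m=10):
    """Degree-m Bernstein polynomial smoothing of the empirical copula."""
    js = np.arange(m + 1)
    Cn = np.array([[empirical_copula(x, y, j / m, k / m) for k in js] for j in js])
    return binom.pmf(js, m, u) @ Cn @ binom.pmf(js, m, v)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.7 * x + rng.normal(size=200)   # hypothetical dependent sample
print(bernstein_copula(x, y, 0.5, 0.5))
```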

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 45
5328 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)

Authors: Longqing Li

Abstract:

The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student’s t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets’ daily returns in the US, UK, China, and Hong Kong over the most recent ten years, at the 95% confidence level. To improve predictive power and search for the best-performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH, to shed light on the differences between them in estimating the one-day-ahead Value-at-Risk (VaR). Second, to account for the non-normality of financial market returns, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that exponential GARCH yields the best out-of-sample one-day-ahead Value-at-Risk (VaR) forecasts. Moreover, the difference in performance between the GARCH models and the conditional EVT is indistinguishable.
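
As a hedged sketch of the baseline member of this model family (a plain GARCH(1,1) with normal innovations, not the EGARCH/TGARCH-with-GED variants the paper favours), the one-day-ahead 95% VaR can be computed as follows; the return series r is assumed given:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def garch11_nll(params, r):
    """Negative log-likelihood of GARCH(1,1) with normal innovations."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    sig2 = np.empty_like(r)
    sig2[0] = r.var()
    for t in range(1, len(r)):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sig2) + r ** 2 / sig2)

def one_day_ahead_var(r, level=0.95):
    """Fit by (quasi-)MLE, then report the one-day-ahead VaR at `level`."""
    res = minimize(garch11_nll, x0=[1e-6, 0.05, 0.90], args=(r,),
                   method="Nelder-Mead")
    omega, alpha, beta = res.x
    sig2 = r.var()
    for t in range(1, len(r) + 1):        # recurse through the sample...
        sig2 = omega + alpha * r[t - 1] ** 2 + beta * sig2
    return -norm.ppf(1 - level) * np.sqrt(sig2)   # ...ending at sigma^2_{T+1}
```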

Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting

Procedia PDF Downloads 300
5327 Modeling Intelligent Threats: Case of Continuous Attacks on a Specific Target

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we treat a model that falls in the area of protecting targeted systems from intelligent threats including terrorism. We introduce the concept of system survivability, in the context of continuous attacks, as the probability that a system under attack will continue operation up to some fixed time t. We define a constant attack rate (CAR) process as an attack on a targeted system that follows an exponential distribution. We consider the superposition of several CAR processes. From the attacker side, we determine the optimal attack strategy that minimizes the system survivability. We also determine the optimal strengthening strategy that maximizes the system survivability under limited defensive resources. We use operations research techniques to identify optimal strategies of each antagonist. Our results may be used as interesting starting points to develop realistic protection strategies against intentional attacks.
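
Rendered in standard notation (under the assumption, consistent with the definitions above, that the n CAR processes are independent Poisson attack streams and each arriving attack is fatal), the survivability of the target up to time t follows directly from the superposition property:

\[
S(t) \;=\; \Pr(T > t) \;=\; \prod_{i=1}^{n} e^{-\lambda_i t} \;=\; \exp\!\Big(-t \sum_{i=1}^{n} \lambda_i\Big),
\]

so the superposed attack process is itself a CAR process with rate \( \Lambda = \sum_i \lambda_i \).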

Keywords: CAR processes, defense/attack strategies, exponential failure, survivability

Procedia PDF Downloads 366
5326 Effect of Parameters for Exponential Loads on Voltage Transmission Line with Compensation

Authors: Benalia Nadia, Bensiali Nadia, Zerzouri Noura

Abstract:

This paper presents an analysis of the effects of the parameters np and nq of the exponential load model on the transmission line voltage profile, transferred power, and transmission losses for different shunt compensation sizes. For different values of np and nq, for which active and reactive power vary with the terminal voltage in exponential form, variations of the load voltage for different sizes of shunt capacitors are simulated on a simple two-bus power system using the Matlab SimPowerSystems Toolbox. It is observed that the compensation level is significantly affected by the voltage sensitivities of the loads.
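
For reference, the standard exponential static load model behind the parameters np and nq (the usual textbook form; the paper's notation may differ slightly) is

\[
P = P_0 \left(\frac{V}{V_0}\right)^{n_p}, \qquad Q = Q_0 \left(\frac{V}{V_0}\right)^{n_q},
\]

where \(P_0\) and \(Q_0\) are the active and reactive powers drawn at the nominal voltage \(V_0\).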

Keywords: static load model, shunt compensation, transmission system, exponential load model

Procedia PDF Downloads 342
5325 Generalization of Tsallis Entropy from a Q-Deformed Arithmetic

Authors: J. Juan Peña, J. Morales, J. García-Ravelo, J. García-Martínes

Abstract:

It is known that, by introducing alternative forms of the exponential and logarithmic functions, the Tsallis entropy Sq becomes a generalization of the Shannon entropy S. In this work, from a deformation through a scaling function applied to the differential operator, it is possible to generate a q-deformed calculus as well as a q-deformed arithmetic, which allows generalizing not only the exponential and logarithmic functions but also any other standard function. The updated q-deformed differential operator leads to an updated integral operator under which functions are integrated together with a weight function. For each differentiable function, it is possible to identify its q-deformed partner, which is useful for generalizing other algebraic relations characteristic of the original functions. As an application of this proposal, a generalization of the exponential and logarithmic functions is studied in such a way that their relationship with the thermodynamic functions, particularly the entropy, allows us to obtain q-deformed expressions of these. As a result, from a particular scaling function applied to the differential operator, a q-deformed arithmetic is obtained, leading to the generalization of the Tsallis entropy.
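
For orientation, the standard forms being generalized here are the q-logarithm, the q-exponential, and the Tsallis entropy, which recovers the Shannon entropy as q → 1; the paper's scaling-function construction rederives these from a deformed differential operator:

\[
\ln_q x = \frac{x^{1-q}-1}{1-q}, \qquad e_q^{\,x} = \left[1+(1-q)\,x\right]_+^{\frac{1}{1-q}}, \qquad S_q = k\,\frac{1-\sum_i p_i^{\,q}}{q-1}.
\]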

Keywords: q-calculus, q-deformed arithmetic, entropy, exponential functions, thermodynamic functions

Procedia PDF Downloads 36
5324 A Class of Third Derivative Four-Step Exponential Fitting Numerical Integrator for Stiff Differential Equations

Authors: Cletus Abhulimen, L. A. Ukpebor

Abstract:

In this paper, we construct a class of four-step third-derivative exponential fitting integrators of order six for the numerical integration of stiff initial-value problems of the type y′ = f(x, y), y(x₀) = y₀. The implicit method has free parameters which allow it to be fitted automatically to exponential functions. For effective implementation of the proposed method, we adopt the technique of splitting the method into predictor and corrector schemes. The stability of the new method is analyzed; the results show that the method is A-stable. Finally, numerical examples are presented to show the efficiency and accuracy of the new method.
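
The paper's scheme is a four-step, sixth-order implicit method whose coefficients are not reproduced here; purely to illustrate the exponential-fitting idea on which it rests, the following first-order sketch (exponentially fitted Euler) integrates y′ = f(x, y) exactly whenever the solution is a pure exponential with known rate lam:

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, computed stably with expm1."""
    return np.expm1(z) / z if z != 0.0 else 1.0

def exp_fitted_euler(f, lam, x0, y0, h, n_steps):
    """First-order exponentially fitted Euler: exact on y' = lam * y."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * phi1(h * lam) * f(x, y)
        x += h
    return y

# Stiff test problem y' = -50 y, y(0) = 1: the fitted step reproduces e^(-50 x)
y_end = exp_fitted_euler(lambda x, y: -50.0 * y, -50.0, 0.0, 1.0, 0.1, 10)
print(y_end, np.exp(-50.0))
```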

Keywords: third derivative four-step, exponentially fitted, A-stable, stiff differential equations

Procedia PDF Downloads 229
5323 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions

Authors: Jose Juan Peña, J. Morales, J. García-Ravelo

Abstract:

In the search for potential models applied in the theoretical treatment of diatomic molecules, some have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (sc q-dhf), used to displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, a kind of generalized q-deformed hyperbolic function (g q-dhf) is presented in this work. By a suitable transformation, through the q-deformation parameter, it is shown that these g q-dhf can be expressed in terms of their corresponding standard ones, and that they reduce to the sc q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models are presented that can serve as interesting alternatives in quantum physics and quantum states. Furthermore, since the quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific cases of q-deformed potentials are obtained as particular cases of the proposal.
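
The standard q-deformed hyperbolic functions referred to above (in the usual Arai form; the paper's generalized versions extend these) are

\[
\sinh_q x = \frac{e^{x} - q\,e^{-x}}{2}, \qquad \cosh_q x = \frac{e^{x} + q\,e^{-x}}{2}, \qquad \cosh_q^2 x - \sinh_q^2 x = q,
\]

which reduce to the ordinary hyperbolic functions at q = 1.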

Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials

Procedia PDF Downloads 158
5322 Forecasting Models for Steel Demand Uncertainty Using Bayesian Methods

Authors: Watcharin Sangma, Onsiri Chanmuang, Pitsanu Tongkhow

Abstract:

A forecasting model for steel demand uncertainty in Thailand is proposed. It consists of trend, autocorrelation, and outlier components in a hierarchical Bayesian framework. The proposed model uses a cumulative Weibull distribution function, latent first-order autocorrelation, and binary selection to account for trend, time-varying autocorrelation, and outliers, respectively. Gibbs-sampling Markov chain Monte Carlo (MCMC) is used for parameter estimation. The proposed model is applied to steel demand index data in Thailand. The root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) criteria are used for model comparison. The study reveals that the proposed model is more appropriate than the exponential smoothing method.
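
The cumulative Weibull distribution function used for the trend component (standard form; the paper's parameterization may differ) is

\[
F(t) = 1 - \exp\!\left[-\left(\tfrac{t}{\lambda}\right)^{k}\right], \qquad t \ge 0,\; \lambda, k > 0,
\]

whose S-shape lets the trend saturate rather than grow without bound, unlike a simple exponential trend.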

Keywords: forecasting model, steel demand uncertainty, hierarchical Bayesian framework, exponential smoothing method

Procedia PDF Downloads 324
5321 A Study of Population Growth Models and Future Population of India

Authors: Sheena K. J., Jyoti Badge, Sayed Mohammed Zeeshan

Abstract:

India is the second most populous country in the world, just behind China, and is expected to take first place by next year. The Indian population has grown at a remarkably higher rate than that of other countries over the past 20 years. Many scientists and demographers have formulated various models of population growth in order to study and predict the future population: the Fibonacci population growth model, the exponential growth model, the logistic growth model, the Lotka-Volterra model, etc. These models have been effective, to an extent, in predicting population in the past. However, a detailed comparative study between population models is essential to arrive at a more accurate one. This research study therefore analyzes and compares the two population models under consideration, the exponential and logistic growth models, thereby identifying the more effective one. Using the census data of 2011, the approximate populations for 2016 to 2031 are calculated for 20 Indian states using both models, compared, and recorded against the actual population. On comparing the results of both models, it is found that the logistic population model is more accurate than the exponential model, and using this model we can predict the future population more effectively. This gives researchers an insight into the effective models of population growth and how well these models predict the future population.
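
The two models under comparison take the standard forms (with P₀ the base population, r the growth rate, and K the carrying capacity; the paper's parameter values are not reproduced here):

\[
P_{\mathrm{exp}}(t) = P_0\, e^{rt}, \qquad P_{\mathrm{log}}(t) = \frac{K}{1 + \frac{K - P_0}{P_0}\, e^{-rt}},
\]

so the logistic curve flattens toward K while the exponential curve grows without bound, which is why the latter tends to overshoot long-horizon projections.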

Keywords: population growth, population models, exponential model, logistic model, fibonacci model, lotka-volterra model, future population prediction, demographers

Procedia PDF Downloads 94
5320 Analytical Solution of Specific Energy Equation in Exponential Channels

Authors: Abdulrahman Abdulrahman

Abstract:

The specific energy equation has many applications in practical channels, such as exponential channels. In this paper, the governing equation of the alternate depth ratio for exponential channels in general is investigated, with the aim of obtaining analytical solutions for the alternate depth ratio in three exponential channel shapes, viz., rectangular, triangular, and parabolic channels. The alternate depth ratio equation for rectangular channels is quadratic, hence very simple to solve, while for parabolic and triangular channels the alternate depth ratio is governed by cubic and quartic equations, respectively; analytical solutions of these equations can be obtained easily for a given Froude number. Different examples are solved to demonstrate the efficiency of the proposed solution. Such analytical solutions can easily be used for natural rivers and most practical channels.
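
To make the quadratic claim concrete for the rectangular case (a standard derivation, with q the discharge per unit width and Fr₁ the approach Froude number): equal specific energy E = y + q²/(2gy²) at the two alternate depths y₁ and y₂ leads, after factoring out the trivial root ζ = 1, to a quadratic in the ratio ζ = y₂/y₁,

\[
\zeta^{2} - \frac{\mathrm{Fr}_1^{2}}{2}\,\zeta - \frac{\mathrm{Fr}_1^{2}}{2} = 0, \qquad \mathrm{Fr}_1^{2} = \frac{q^{2}}{g\,y_1^{3}}.
\]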

Keywords: alternate depth, analytical solution, specific energy, parabolic channel, rectangular channel, triangular channel, open channel flow

Procedia PDF Downloads 164
5319 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

A recession has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify its findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle of a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of the error terms, thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thereby enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as the information is available in advance and does not undergo retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial-market investor sentiment which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four follow a bivariate structure, and three contain a dynamic component.
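
As a simplified sketch of the dynamic component only (univariate, without the bivariate error correlation of the paper; the data, regressor names, and starting values below are hypothetical), the lagged recession state enters the probit index and the parameters are obtained by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dynamic_probit_nll(params, y, X):
    """Dynamic probit: P(y_t = 1) = Phi(x_t' beta + gamma * y_{t-1})."""
    beta, gamma = params[:-1], params[-1]
    p = norm.cdf(X[1:] @ beta + gamma * y[:-1])
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(y[1:] * np.log(p) + (1 - y[1:]) * np.log(1 - p))

rng = np.random.default_rng(42)
T = 120
X = np.column_stack([np.ones(T),              # intercept
                     rng.normal(size=T),      # yield-curve slope (hypothetical)
                     rng.normal(size=T)])     # stock-index return (hypothetical)
y = (rng.uniform(size=T) < 0.2).astype(float) # toy recession indicator
res = minimize(dynamic_probit_nll, x0=np.zeros(X.shape[1] + 1),
               args=(y, X), method="BFGS")
beta_hat, gamma_hat = res.x[:-1], res.x[-1]
```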

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 228
5318 Forecasting Cancer Cases in Algeria Using Double Exponential Smoothing Method

Authors: Messis A., Adjebli A., Ayeche R., Talbi M., Tighilet K., Louardiane M.

Abstract:

Cancers are the second leading cause of death worldwide. The prevalence and incidence of cancers are increasing with aging and population growth. This study aims to predict and model the evolution of breast, colorectal, lung, bladder, and prostate cancers over the period 2014-2019. Data were analyzed using time series analysis with the double exponential smoothing method to forecast the future pattern. To describe and fit the appropriate models, Minitab statistical software version 17 was used. Between 2014 and 2019, the raw number of newly registered cancer cases showed an increasing trend over time. Our forecast model is validated, since it gives good predictions for 2020; data for 2021 and 2022 were not available. Time series analysis showed that double exponential smoothing is an efficient tool for modelling future data on the raw number of new cancer cases.
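
The paper uses Minitab 17; as a minimal language-neutral sketch of the double exponential (Holt) smoothing recursions it relies on, with hypothetical counts and smoothing constants:

```python
def double_exponential_smoothing(y, alpha=0.5, beta=0.3, horizon=3):
    """Holt's double exponential smoothing: level + trend recursions,
    followed by straight-line point forecasts."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

cases = [4200, 4350, 4510, 4640, 4820, 4970]   # hypothetical yearly case counts
print(double_exponential_smoothing(cases))     # forecasts for the next 3 years
```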

Keywords: cancer, time series, prediction, double exponential smoothing

Procedia PDF Downloads 57
5317 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme values in a set of observations can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets obtained with the block maxima method is called the extreme value distribution; here, it is the Gumbel distribution with two parameters. Parameter estimation for the Gumbel distribution with the maximum likelihood (ML) method is difficult to carry out exactly, so a numerical approach is necessary. The purpose of this study was to estimate the parameters of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter updates at each iteration; it is modified with the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by updating an approximation to the second derivative at each iteration. Parameter estimation of the Gumbel distribution by this numerical approach is done by finding the parameter values that maximize the likelihood function, for which the gradient vector and Hessian matrix are needed. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The fitted model indicates that the heavy rainfall occurring in Purworejo District has decreased in intensity and that the range of observed rainfall has narrowed.
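
A minimal sketch of the same estimation idea (BFGS on the Gumbel negative log-likelihood, here via scipy rather than a hand-rolled update, with log β as the free parameter so the search stays unconstrained; the rainfall maxima below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, x):
    """Negative log-likelihood of the Gumbel distribution for maxima."""
    mu, log_beta = params
    beta = np.exp(log_beta)        # optimize log(beta) to keep beta > 0
    z = (x - mu) / beta
    return len(x) * log_beta + np.sum(z + np.exp(-z))

x = np.array([61.0, 48.0, 70.0, 55.0, 66.0, 74.0])  # hypothetical annual rainfall maxima
res = minimize(gumbel_nll, x0=[x.mean(), np.log(x.std())],
               args=(x,), method="BFGS")            # quasi-Newton BFGS
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
```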

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 299
5316 Exponential Spline Solution for Singularly Perturbed Boundary Value Problems with an Uncertain-But-Bounded Parameter

Authors: Waheed Zahra, Mohamed El-Beltagy, Ashraf El Mhlawy, Reda Elkhadrawy

Abstract:

In this paper, we consider singularly perturbed reaction-diffusion boundary value problems which contain a small, uncertain perturbation parameter. To solve these problems, we propose a numerical method based on an exponential spline and a Shishkin mesh discretization. While an interval analysis principle is used to deal with the uncertain parameter, sensitivity analysis is conducted using different methods. Numerical results are provided to show the applicability and efficiency of our method, which achieves ε-uniform convergence of almost second order.

Keywords: singular perturbation problem, shishkin mesh, two small parameters, exponential spline, interval analysis, sensitivity analysis

Procedia PDF Downloads 248
5315 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting the future values of the unemployment rate in selected European countries. To do so, several forecasting techniques adequate for forecasting time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters method, which accounts for trend and seasonality. The results of the empirical analysis show that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters multiplicative model, whereas in the case of Portugal the best model was the double exponential smoothing model. Our findings are in line with the European Commission's unemployment rate estimates.
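
For reference, the additive Holt-Winters recursions found optimal for Greece (standard textbook form, with season length m) extend double exponential smoothing with a seasonal term:

\[
\ell_t = \alpha\,(y_t - s_{t-m}) + (1-\alpha)(\ell_{t-1} + b_{t-1}), \quad
b_t = \beta\,(\ell_t - \ell_{t-1}) + (1-\beta)\, b_{t-1}, \quad
s_t = \gamma\,(y_t - \ell_t) + (1-\gamma)\, s_{t-m},
\]

with point forecast \( \hat{y}_{t+h} = \ell_t + h\,b_t + s_{t+h-m} \) (seasonal index taken from the last observed season); the multiplicative variant replaces the differences involving \(s\) by ratios.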

Keywords: European Union countries, exponential smoothing methods, forecast accuracy, unemployment rate

Procedia PDF Downloads 348
5314 A Comparative Analysis of Geometric and Exponential Laws in Modelling the Distribution of the Duration of Daily Precipitation

Authors: Mounia El Hafyani, Khalid El Himdi

Abstract:

Precipitation is one of the key variables in water resource planning, and modeling wet and dry durations is a crucial task in engineering hydrology. The objective of this study is to model and analyze the distribution of wet and dry durations. For this purpose, daily rainfall data from 1967 to 2017 from the station of the Moroccan city of Kenitra are used. Three models are implemented for the distribution of wet and dry durations, namely the first-order Markov chain, the second-order Markov chain, and the truncated negative binomial law. The adherence of the data to the proposed models is evaluated using the Chi-square and Kolmogorov-Smirnov tests, and the Akaike information criterion is applied to select the most effective model. We go further and study the law of the number of wet and dry days among k consecutive days, computed through an algorithm we implemented based on conditional laws. We complete our work by comparing the observed moments of the number of wet/dry days among k consecutive days to the corresponding moments calculated under the three estimated models. The study shows the effectiveness of our approach in modeling the wet and dry durations of daily precipitation.
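
A minimal sketch of the first-order model only (not the paper's full comparison; the daily sequence below is hypothetical): the transition matrix is estimated from a 0/1 wet-day record, and under the fitted chain spell durations are geometric, which is the geometric law of the title.

```python
import numpy as np

def first_order_transitions(wet):
    """Estimate P(state_{t+1} | state_t) for a 0/1 (dry/wet) daily sequence."""
    wet = np.asarray(wet, dtype=int)
    counts = np.zeros((2, 2))
    for a, b in zip(wet[:-1], wet[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

days = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0]   # hypothetical dry/wet record
P = first_order_transitions(days)
p11 = P[1, 1]
# Under the chain, a wet spell of length k has probability p11**(k-1) * (1 - p11),
# i.e. durations are geometric with mean 1 / (1 - p11):
mean_wet_spell = 1.0 / (1.0 - p11)
```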

Keywords: Markov chain, rainfall, truncated negative binomial law, wet and dry durations

Procedia PDF Downloads 94
5313 A Proposed Mechanism for Skewing Symmetric Distributions

Authors: M. T. Alodat

Abstract:

In this paper, we propose a mechanism for skewing any symmetric distribution. The new distribution is called the deflation-inflation distribution (DID). We discuss some statistical properties of the DID, such as moments, stochastic representation, and log-concavity. We also fit the distribution to real data and compare it to the normal distribution and Azzalini's skew-normal distribution. Numerical results show that the DID fits the tree-ring data better than the other two distributions.

Keywords: normal distribution, moments, Fisher information, symmetric distributions

Procedia PDF Downloads 636
5312 Analytical Downlink Effective SINR Evaluation in LTE Networks

Authors: Marwane Ben Hcine, Ridha Bouallegue

Abstract:

The aim of this work is to provide an original analytical framework for downlink effective SINR evaluation in LTE networks. The classical single-carrier SINR performance evaluation is extended to multi-carrier systems operating over frequency-selective channels. The extension is achieved by expressing the link outage probability in terms of the statistics of the effective SINR. For effective SINR computation, the exponential effective SINR mapping (EESM) method is used in this work. A closed-form expression for the link outage probability is obtained assuming a log-skew-normal approximation for the single-carrier case. We then rely on the lognormal approximation to express the exponential effective SINR distribution as a function of the mean and standard deviation of the SINR of a generic subcarrier. The obtained formulas are easily computable and can be evaluated for a user equipment (UE) located at any distance from its serving eNodeB. Simulations show that the proposed framework provides results accurate to within 0.5 dB.
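
For reference, the EESM used in this work maps the per-subcarrier SINRs γ₁, …, γ_N to a single effective value through the standard form (with β a calibration parameter fitted per modulation and coding scheme):

\[
\gamma_{\mathrm{eff}} = -\beta \,\ln\!\left( \frac{1}{N} \sum_{k=1}^{N} e^{-\gamma_k/\beta} \right).
\]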

Keywords: LTE, OFDMA, effective SINR, log skew normal approximation

Procedia PDF Downloads 337
5311 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the Exponential distribution, the latter a special case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a given threshold u; references that treat the Generalised Pareto distribution as a secondary choice for significant wave data can be found in the literature. In this framework, the current study tackles the discussion of which statistical models to apply to characterize exceedances of wave data. A comparison of the Generalised Pareto, the two-parameter Weibull, and the Exponential distribution is presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other aspects of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for efficient estimation of very-low-occurrence events.
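
A minimal scipy-based sketch of the comparison described above (with the location pinned at zero since exceedances start at the threshold; the H_s record below is synthetic, not the Irish buoy data):

```python
import numpy as np
from scipy import stats

def compare_tail_fits(hs, u):
    """Fit GPD, two-parameter Weibull, and Exponential models to the
    exceedances of significant wave height over threshold u, scoring
    each fit with the Kolmogorov-Smirnov statistic."""
    exc = hs[hs > u] - u                      # peaks-over-threshold sample
    candidates = {"GPD": stats.genpareto,
                  "Weibull(2p)": stats.weibull_min,
                  "Exponential": stats.expon}
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(exc, floc=0)        # location fixed at 0
        ks = stats.kstest(exc, dist.cdf, args=params)
        results[name] = {"params": params, "KS": ks.statistic}
    return results

rng = np.random.default_rng(7)
hs = rng.weibull(1.6, size=2000) * 2.0        # synthetic H_s record (m)
print(compare_tail_fits(hs, u=3.0))
```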

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 234
5310 Effect of Magnetic Field on Unsteady MHD Poiseuille Flow of a Third Grade Fluid Under Exponential Decaying Pressure Gradient with Ohmic Heating

Authors: O. W. Lawal, L. O. Ahmed, Y. K. Ali

Abstract:

The unsteady MHD Poiseuille flow of a third-grade fluid between two parallel horizontal non-conducting porous plates is studied with heat transfer. The two plates are fixed but maintained at different constant temperatures, with Joule and viscous dissipation taken into consideration. The fluid motion is produced by a sudden, uniform, exponentially decaying pressure gradient and an external uniform magnetic field perpendicular to the plates. The momentum and energy equations governing the flow are solved numerically using a Maple program. The effects of the magnetic field and the third-grade fluid parameters on the velocity and temperature profiles are examined through several graphs.

Keywords: exponential decaying pressure gradient, MHD flow, Poiseuille flow, third grade fluid

Procedia PDF Downloads 449
5309 Influence of Recombination of Free and Trapped Charge Carriers on the Efficiency of Conventional and Inverted Organic Solar Cells

Authors: Hooman Mehdizadeh Rad, Jai Singh

Abstract:

Organic solar cells (OSCs) have been actively investigated in the last two decades due to several merits such as a simple fabrication process, low-cost manufacturing, and light weight. In this paper, using the optical transfer matrix method (OTMM) and solving the drift-diffusion equations, recombination processes are studied in inverted and conventional bulk heterojunction (BHJ) OSCs. Two types of recombination processes are investigated: 1) recombination of free charge carriers, using the Langevin theory, and 2) recombination of trapped charge carriers in tail states with an exponential energy distribution. These recombination processes are incorporated in simulating the current-voltage characteristics of both conventional and inverted BHJ OSCs. The simulation yields a higher power conversion efficiency for the inverted structure than for the conventional structure, which agrees well with the experimental results.
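
For reference, the standard Langevin rate used for the free carriers (the paper combines it with tail-state recombination) is

\[
R_{L} = \gamma\, n\,p, \qquad \gamma = \frac{q\,(\mu_n + \mu_p)}{\varepsilon_0\,\varepsilon_r},
\]

where n and p are the free-carrier densities and μ_n, μ_p their mobilities; the trapped carriers instead recombine through tail states whose density falls off exponentially in energy, \(g(E) \propto \exp(-E/E_U)\), with E_U the characteristic tail slope (standard forms; the paper's parameterization may differ).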

Keywords: conventional organic solar cells, exponential tail state recombination, inverted organic solar cells, Langevin recombination

Procedia PDF Downloads 158
5308 Passenger Flow Characteristics of Seoul Metropolitan Subway Network

Authors: Kang Won Lee, Jung Won Lee

Abstract:

Characterizing network flow is of fundamental importance for understanding the complex dynamics of networks, and the passenger flow characteristics of a subway network are highly relevant for effective transportation management in urban cities. In this study, the passenger flow of the Seoul metropolitan subway network is investigated and characterized through statistical analysis. The traditional betweenness centrality measure considers only the topological structure of the network and ignores transportation factors. This paper proposes a weighted betweenness centrality measure that incorporates monthly passenger flow volume. We apply the proposed measure to the Seoul metropolitan subway network, involving 493 stations and 16 lines. Several interesting insights about the network are derived from the new measure. Using the Kolmogorov-Smirnov test, we also find that the monthly passenger flow between any two stations follows a power-law distribution, while other traffic characteristics, such as congestion level and through-flow traffic, follow an exponential distribution.
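
A minimal sketch of one plausible flow-weighted betweenness (a toy four-station network with a simple flow scaling; the paper's exact definition of the weighted measure may differ):

```python
import networkx as nx

# Toy network: four stations, edge weights = travel cost between stations
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1.0), ("B", "C", 1.0),
                           ("A", "C", 2.5), ("C", "D", 1.0)])
bc = nx.betweenness_centrality(G, weight="weight")   # topology-only measure

flow = {"A": 120_000, "B": 80_000, "C": 150_000, "D": 60_000}  # hypothetical monthly flows
total = sum(flow.values())
# One plausible flow weighting: scale each station's betweenness by its
# share of the total monthly passenger flow.
weighted_bc = {v: bc[v] * flow[v] / total for v in G.nodes}
```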

Keywords: betweenness centrality, correlation coefficient, power-law distribution, Korea traffic DB

Procedia PDF Downloads 263