World Academy of Science, Engineering and Technology
[Mathematical and Computational Sciences]
Online ISSN : 1307-6892
986 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Under these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) process are derived iteratively from some initial stationary moments. For parameter estimation, the conditional maximum likelihood (CML) method is derived based on thinning and convolution properties, and the forecasting equations of the model are obtained. In a simulation study, BINARMA(1,1) count data are generated using a multivariate Poisson R routine for the innovation terms; the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, with covariates including policemen, daily patrols, speed cameras, traffic lights and roundabouts. The CML estimates indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius, and the forecasting equations provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood (CML)
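The generation scheme described above, binomial thinning plus cross-correlated Poisson innovations, can be sketched for the simpler BINAR(1) special case. This is an illustrative reconstruction under assumed parameter values, not the authors' R code: it omits the moving-average part and the covariates, and induces cross-correlation with a common Poisson shock.

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(alpha, n):
    """Binomial thinning alpha∘n: a sum of n Bernoulli(alpha) trials."""
    return rng.binomial(n, alpha)

def bivariate_poisson(lam1, lam2, phi, size):
    """Correlated Poisson pairs via a common shock Z ~ Poisson(phi)."""
    z = rng.poisson(phi, size)
    return rng.poisson(lam1 - phi, size) + z, rng.poisson(lam2 - phi, size) + z

def simulate_binar1(alpha1, alpha2, lam1, lam2, phi, T=5000):
    """Simplified BINAR(1): X_t = alpha∘X_{t-1} + R_t, correlated innovations."""
    r1, r2 = bivariate_poisson(lam1, lam2, phi, T)
    x1 = np.zeros(T, dtype=int)
    x2 = np.zeros(T, dtype=int)
    for t in range(1, T):
        x1[t] = thin(alpha1, x1[t - 1]) + r1[t]
        x2[t] = thin(alpha2, x2[t - 1]) + r2[t]
    return x1, x2

x1, x2 = simulate_binar1(0.4, 0.3, 2.0, 1.5, 0.8)
# The stationary mean of an INAR(1) series is lambda / (1 - alpha).
print(x1.mean())                  # ≈ 2.0 / 0.6 ≈ 3.33
print(np.corrcoef(x1, x2)[0, 1])  # positive, induced by the common shock
```

The common-shock construction makes the innovation covariance equal to phi, which is one standard way of generating correlated Poisson counts.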
Procedia PDF Downloads 129
985 Design of In-House Test Method for Assuring Packing Quality of Bottled Spirits
Authors: S. Ananthakrishnan, U. H. Acharya
Abstract:
Whether shopping in a retail location or via the internet, consumers expect to receive their products intact. When products arrive damaged or over-packaged, the result can be customer dissatisfaction and increased cost for retailers and manufacturers. Packaging performance depends on both the transport conditions and the packaging design. During transportation, packaged products are subjected to vibrations from transport vehicles that vary in frequency and acceleration. Spirits manufactured by the Company were transported to various parts of the country by road, and there were instances of packages breaking and customer complaints. The vibration experienced on a straight road at a given speed may not be the same as that experienced by the same vehicle on a curve at the same speed, and this vibration may damage the product or the packing. Hence, a physical road test was necessary to understand the effect of vibration on the packaged products. A field transit trial, however, must be done before each transportation, which requires a high investment. The company management was therefore interested in developing an in-house test environment that would adequately represent transit conditions. With the objective of developing an in-house test condition that accurately simulates the mechanical loading prevailing during the storage, handling and transportation of the products, a brainstorming session was held with the concerned people to identify the critical factors affecting the vibration rate. Position of the corrugated box, position of the bottle and speed of the vehicle were identified as the factors affecting the vibration rate. Several packing scenarios were identified by the Design of Experiments methodology and simulated in the in-house test facility. Each condition was run for 30 minutes, equivalent to 1000 km, and the achieved vibration level was taken as the response.
The average vibration achieved in the simulated experiments was near the third quartile (Q3) of the actual transit data; thus, around three-fourths of the actual phenomenon was addressed, and most transit cases could be reproduced. The recommended test condition could generate vibration levels ranging from 9g to 15g, against a maximum of only 7g generated earlier. The Company was thus able to test the packaged cartons satisfactorily in-house before transporting them to their destinations, with assurance that bottle breakages would not occur.
Keywords: ANOVA, corrugated box, DOE, quartile
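The factorial analysis described above can be sketched for a two-level full factorial design in the three identified factors. The coded layout is standard DOE practice; the response values below are hypothetical illustrations, not the company's measurements.

```python
from itertools import product

# Hypothetical 2^3 full factorial: factors are box position, bottle position,
# and vehicle speed (coded -1 / +1); the response is a vibration level in g.
runs = list(product((-1, 1), repeat=3))
response = [9.1, 10.4, 9.8, 11.2, 12.0, 13.6, 12.9, 14.8]  # illustrative data

def main_effect(factor):
    """Main effect: mean response at the high level minus at the low level."""
    hi = [y for run, y in zip(runs, response) if run[factor] == 1]
    lo = [y for run, y in zip(runs, response) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, k in (("box position", 0), ("bottle position", 1), ("speed", 2)):
    print(name, round(main_effect(k), 2))
```

With these illustrative numbers, box position shows the largest main effect (3.2 g), which is the kind of screening result an ANOVA on the real design would formalize.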
Procedia PDF Downloads 125
984 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models
Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed
Abstract:
In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair future prices are probabilistic. This paper examines three different Stochastic Differential Equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The derived valuation equations were then solved numerically by the Monte Carlo method, implemented in MATLAB. Furthermore, numerical examples using published All-Share Index data from the Nigerian Stock Exchange (NSE) show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, an increase in the stock price yields a decrease in the value of the European put option. This guides the option holder in making a quality decision by not exercising his right on the option.
Keywords: equivalent martingale measure, European put option, Girsanov theorem, martingales, Monte Carlo method, option price valuation formula
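The Monte Carlo pricing step can be sketched under geometric Brownian motion, which is the CEV model with elasticity parameter equal to one. The strike, rate and volatility values are illustrative assumptions, not the NSE data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_european_put(S0, K, r, sigma, T, n_paths=200_000):
    """Risk-neutral Monte Carlo put price under GBM (CEV with elasticity 1):
    discount the average terminal payoff max(K - S_T, 0)."""
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(K - ST, 0.0).mean()

# The put value falls as the underlying price rises, as the paper reports.
prices = [mc_european_put(S0, 100, 0.05, 0.2, 1.0) for S0 in (90, 100, 110)]
print(prices)  # monotonically decreasing in S0
```

The at-the-money value here can be cross-checked against the Black-Scholes closed form (about 5.57 for these parameters), which bounds the Monte Carlo error.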
Procedia PDF Downloads 132
983 Modeling the Compound Interest Dynamics Using Fractional Differential Equations
Authors: Muath Awadalla, Maen Awadallah
Abstract:
The banking sector covers different activities, including lending money to customers, who repay the borrowed amount plus an added amount called interest. Compound interest is an approach used in determining the interest to be paid. The instantaneous compounded amount owed by a debtor is obtained through a differential equation whose main parameters are the rate and the time; the rate used by banks in a country is often set by its government (in Switzerland, for instance, a negative rate was once applied). In this work, a new approach to modelling compound interest is proposed using the Hadamard fractional derivative. As a result, it appears that, depending on the fractional order used in the derivative, the amount to be paid by a debtor might be either higher or lower than the amount determined using the classical approach.
Keywords: compound interest, fractional differential equation, Hadamard fractional derivative, optimization
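The classical amount solves dA/dt = rA, giving A(t) = A0·e^{rt}. For the fractional variant, a plausible Hadamard-type solution involves a Mittag-Leffler function in logarithmic time; the exact form below is an assumed illustration, not the paper's derivation, and is included only to show how the fractional order shifts the amount away from the classical one.

```python
import math

def compound_classical(A0, r, t):
    """Classical continuous compounding: the solution of dA/dt = r A."""
    return A0 * math.exp(r * t)

def mittag_leffler(alpha, z, terms=60):
    """Truncated one-parameter Mittag-Leffler series E_alpha(z)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

def compound_hadamard(A0, r, t, alpha, t0=1.0):
    """Hypothetical Hadamard-type amount A0 * E_alpha(r * ln(t/t0)^alpha);
    note the logarithmic time scale characteristic of Hadamard derivatives."""
    return A0 * mittag_leffler(alpha, r * math.log(t / t0) ** alpha)

A0, r, t = 1000.0, 0.03, 10.0
print(compound_classical(A0, r, t))      # 1000 * e^{0.3} ≈ 1349.86
print(compound_hadamard(A0, r, t, 0.8))  # differs from the classical amount
print(compound_hadamard(A0, r, t, 1.0))  # = A0 * t^r: log-time, not e^{rt}
```

Even at order alpha = 1 the Hadamard construction compounds in ln(t) rather than t, which is one mechanism by which the fractional amount can fall below or above the classical one.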
Procedia PDF Downloads 126
982 Role of Additional Food Resources in an Ecosystem with Two Discrete Delays
Authors: Ankit Kumar, Balram Dubey
Abstract:
This study proposes a three-dimensional prey-predator model with additional food provided to predator individuals, including a gestation delay in predators and a delay in supplying the additional food. The interaction between prey and predator is assumed to follow a Holling type-II functional response. The steady states and their local and global asymptotic behavior are discussed for the non-delayed system, and the Hopf-bifurcation phenomenon with respect to different parameters is studied. A range of the predator's tendency factor towards the provided additional food is obtained in which periodic solutions occur in the system, and it is shown that the oscillations can be removed by increasing this tendency factor. Moreover, the existence of periodic solutions via Hopf-bifurcation is shown with respect to both delays. The analysis shows that both delays play an important role in governing the dynamics of the system, changing stable behavior into unstable behavior. The direction and stability of the Hopf-bifurcation are also investigated through the normal form theory and the center manifold theorem. Lastly, numerical simulations and graphical illustrations validate the analytical findings.
Keywords: additional food, gestation delay, Hopf-bifurcation, prey-predator
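The non-delayed skeleton of such a model can be integrated numerically to locate a coexistence steady state. The functional forms and parameter values below are illustrative assumptions, not the paper's model: logistic prey, Holling type-II predation, and an additional food level A that raises the predator's effective intake.

```python
import numpy as np

def rhs(state, p):
    """Illustrative non-delayed prey-predator system with additional food A."""
    x, y = state
    r, K, a, h, A, c, d = p
    predation = a * x / (1 + a * h * x)            # Holling type-II response
    intake = a * (x + A) / (1 + a * h * (x + A))   # predator also eats food A
    return np.array([r * x * (1 - x / K) - predation * y,
                     (c * intake - d) * y])

def rk4(state, p, dt=0.05, steps=20_000):
    """Classical fourth-order Runge-Kutta time stepping."""
    for _ in range(steps):
        k1 = rhs(state, p)
        k2 = rhs(state + 0.5 * dt * k1, p)
        k3 = rhs(state + 0.5 * dt * k2, p)
        k4 = rhs(state + dt * k3, p)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# r, K, a, h, A, c, d chosen so the coexistence steady state is stable.
p = (1.0, 3.0, 0.6, 0.5, 0.5, 0.4, 0.3)
x, y = rk4(np.array([2.0, 1.0]), p)
print(x, y)  # settles near the coexistence equilibrium (1.5, ~1.21)
```

For these values the predator nullcline gives x* = 1.5 (where c·intake = d), so the numerical trajectory can be checked against the analytical steady state.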
Procedia PDF Downloads 129
981 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e. existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations: the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where using interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems is therefore of great practical interest. Here, efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations.
This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be used efficiently in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be seen as a particular case, obtained by choosing the Euclidean norm in an asymmetrical structure. 4) On some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our stochastic-matrix approach, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
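The classical cyclic Kaczmarz iteration, which the proposed stochastic preconditioners generalize, can be sketched as a baseline. The random well-conditioned system below is an illustrative assumption; on genuinely ill-conditioned cases such as Hilbert matrices, the point of the paper is precisely that this baseline converges far too slowly without preconditioning.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz: successively project the iterate onto each
    hyperplane a_i . x = b_i of the consistent system A x = b."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))       # well-conditioned overdetermined system
x_true = rng.standard_normal(10)
x = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))       # near machine precision here
```

Each inner step is an orthogonal projection, so the solution error is non-increasing; the convergence rate, however, degrades with the condition number, which motivates the preconditioning developed in the paper.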
Procedia PDF Downloads 133
980 Nonparametric Quantile Regression for Multivariate Spatial Data
Authors: S. H. Arnaud Kanga, O. Hili, S. Dabo-Niang
Abstract:
Spatial prediction is an issue appealing to several fields such as agriculture, environmental sciences, ecology, and econometrics, among many others. Although multiple non-parametric prediction methods exist for spatial data, they are based on the conditional expectation. This paper takes a different approach by examining a non-parametric spatial predictor of the conditional quantile, for a stationary multidimensional spatial process observed over a rectangular domain. The proposed quantile is obtained by inverting the conditional distribution function, whose estimator depends on three kernels: one controls the distance between spatial locations, while the other two control the distance between observations. In addition, the almost complete convergence and the convergence in mean of order q of the kernel predictor are obtained when the sample considered is alpha-mixing. This prediction method has the advantage of accuracy, as it overcomes sensitivity to extreme values and outliers.
Keywords: conditional quantile, kernel, nonparametric, stationary
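The invert-the-conditional-CDF idea can be sketched in a one-dimensional, non-spatial analogue with a single covariate kernel; the spatial version adds the location kernel and the mixing conditions. The simulated data and bandwidth below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def cond_quantile(x0, X, Y, tau, h=0.3):
    """Kernel conditional tau-quantile: invert the Nadaraya-Watson
    estimate of the conditional distribution function F(y | x = x0)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel in the covariate
    w = w / w.sum()
    order = np.argsort(Y)
    cdf = np.cumsum(w[order])                # weighted empirical conditional CDF
    idx = np.searchsorted(cdf, tau)          # smallest y with F(y | x0) >= tau
    return Y[order][min(idx, len(Y) - 1)]

# Simulated data: the conditional median of Y given X = x is sin(x).
X = rng.uniform(0, 3, 4000)
Y = np.sin(X) + rng.standard_normal(4000) * 0.3
q25 = cond_quantile(1.5, X, Y, 0.25)
q50 = cond_quantile(1.5, X, Y, 0.50)
q75 = cond_quantile(1.5, X, Y, 0.75)
print(q50)  # ≈ sin(1.5) ≈ 0.997
```

Because the whole conditional CDF is estimated, all quantiles come from one pass, and the median-type estimates are robust to outliers in Y, which is the accuracy advantage the abstract mentions.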
Procedia PDF Downloads 154
979 Co-Integration Model for Predicting Inflation Movement in Nigeria
Authors: Salako Rotimi, Oshungade Stephen, Ojewoye Opeyemi
Abstract:
The maintenance of price stability is one of the macroeconomic challenges facing Nigeria as a nation. This paper attempts to build a co-integrated multivariate time series model for inflation movement in Nigeria, using data extracted from the abstract of statistics of the Central Bank of Nigeria (CBN) from 2008 to 2017. The Johansen cointegration test suggests at least one co-integrating vector describing the long-run relationship between the Consumer Price Index (CPI), Food Price Index (FPI) and Non-Food Price Index (NFPI). All three series show an increasing pattern, indicating non-stationarity in each series. Furthermore, model predictability was established with the root mean square error, mean absolute error, mean absolute percentage error, and Theil's U statistic for n-step forecasting. The results depict that the long-run coefficient of the Consumer Price Index (CPI) has a positive long-run relationship with the Food Price Index (FPI) and the Non-Food Price Index (NFPI).
Keywords: economic, inflation, model, series
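The cointegration idea can be sketched on synthetic price-index-like data. The paper uses the Johansen system test; the simpler Engle-Granger first step shown here (OLS on the long-run relation of simulated series) illustrates the same notion that individually non-stationary series share a stationary linear combination. The series names and coefficients are illustrative, not the CBN data.

```python
import numpy as np

rng = np.random.default_rng(3)

# A common stochastic trend drives both indices, so each is non-stationary
# (a random walk) while a linear combination of them is stationary.
T = 2000
trend = np.cumsum(rng.standard_normal(T))               # shared random walk
fpi = trend + rng.standard_normal(T) * 0.5
cpi = 0.7 * fpi + 5.0 + rng.standard_normal(T) * 0.5    # long-run relation

# Engle-Granger step 1: OLS estimate of the long-run coefficient.
Xmat = np.column_stack([np.ones(T), fpi])
beta = np.linalg.lstsq(Xmat, cpi, rcond=None)[0]
resid = cpi - Xmat @ beta

print(beta[1])        # ≈ 0.7, the true long-run coefficient (super-consistent)
print(np.std(resid))  # small and bounded: the shared trend has been removed
```

In step 2 one would test the residuals for stationarity (e.g. an ADF test); the Johansen procedure instead estimates the rank of the cointegrating space for all three indices jointly.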
Procedia PDF Downloads 244
978 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
Current CFD tools undoubtedly have many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations; one commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale, which must be resolved throughout the domain of interest and with a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large, and the available computational resources are usually inadequate for DNS-related tasks. At this stage of its development, DNS is therefore not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems, with the goal of investigating the non-stationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equation, analyzing the solutions over a considerable length of time and thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem at low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
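The viscous Burgers benchmark mentioned above can be sketched with a conventional explicit finite-difference scheme; this is a standard baseline for comparison, not the IDS itself, and the grid, viscosity and final time are illustrative choices.

```python
import numpy as np

# Viscous Burgers equation u_t + u u_x = nu u_xx on [0, 2*pi], periodic,
# central differences in space and forward Euler in time (baseline scheme).
nx, nu, dt = 256, 0.1, 1e-4
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 1.5                  # smooth initial profile, mean 1.5

for _ in range(20_000):              # integrate to t = 2
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)       # convection
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # diffusion
    u = u + dt * (nu * uxx - u * ux)

# Viscosity dissipates the wave while the periodic mean is conserved.
print(u.max() - u.min())  # below the initial amplitude of 2.0
print(u.mean())           # ≈ 1.5
```

The time step satisfies both the diffusive limit (nu·dt/dx² ≈ 0.017) and the convective CFL condition, and the centered convection term conserves the mean exactly on a periodic grid, giving two easy correctness checks of the kind an IDS comparison would also use.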
Procedia PDF Downloads 137
977 Subclass of Close-To-Convex Harmonic Mappings
Authors: Jugal K. Prajapat, Manivannan M.
Abstract:
In this article, we study a class of sense-preserving harmonic mappings in the unit disk D. Let B⁰H(α, β) denote the class of sense-preserving harmonic mappings f = h + ḡ in the open unit disk D satisfying the condition |z h″(z) + α(h′(z) − 1)| ≤ β − |z g″(z) + α g′(z)| (α > −1, β > 0). We prove that B⁰H(α, β) is close-to-convex in D. We also prove that the functions in B⁰H(α, β) are stable harmonic univalent, stable harmonic starlike and stable harmonic convex in D for different values of the parameters. Further, the coefficient estimates, growth results, area theorem, boundary behavior, convolution and convex combination properties of the class B⁰H(α, β) of harmonic mappings are obtained.
Keywords: analytic, univalent, starlike, convex, close-to-convex
Procedia PDF Downloads 175
976 Analysis of the Unreliable M/G/1 Retrial Queue with Impatient Customers and Server Vacation
Authors: Fazia Rahmoune, Sofiane Ziani
Abstract:
Retrial queueing systems have been extensively used to stochastically model many problems arising in computer networks, telecommunications, and telephone systems, among others. In this work, we consider an M/G/1 retrial queue with an unreliable server, random vacations, and two types of primary customers: persistent and impatient. The model accounts for the unreliability of the server, which is subject to physical breakdowns, and takes into account corrective maintenance for restoring service when a failure occurs. On the other hand, the random vacations can model preventive maintenance for improving system performance and preventing breakdowns. We give the necessary and sufficient stability condition of the system. Then, we obtain the joint probability distribution of the server state and the number of customers in orbit, and derive the most useful performance measures analytically. Moreover, we analyze the busy period of the system. Finally, we derive the stability condition and the generating function of the stationary distribution of the number of customers in the system when there are no vacations and no impatient customers, and when there are no vacations, no server failures and no impatient customers.
Keywords: modeling, retrial queue, unreliable server, vacation, stochastic analysis
Procedia PDF Downloads 185
975 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects
Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi
Abstract:
Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses, with a view to creating lines with desirable qualities. There are various ways of obtaining progenies and studying the combining ability effects of the lines taken in a breeding programme; some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders to improve quantitative traits of economic as well as nutritional importance in crops and animals. Amongst these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial corn hybrids are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs having fast growth rate, good feed efficiency and carcass quality, and tetra-allele crosses are widely used for the exploitation of heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and optimality of such designs has also been considered as a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects has been defined.
Optimality aspects of such designs have been discussed, incorporating sca effects in the model. Orthogonality conditions have been derived for block designs, ensuring estimation of contrasts among the gca effects, after eliminating nuisance factors, independently of the sca effects. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.
Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC
Procedia PDF Downloads 137
974 Analysis of Chatterjea Type F-Contraction in F-Metric Space and Application
Authors: Awais Asif
Abstract:
This article investigates fixed point theorems for Chatterjea type F-contractions in the setting of F-metric spaces. We relax the conditions of F-contraction and define a modified F-contraction for two mappings. The study provides fixed point results for both single-valued and multivalued mappings, which are further extended to common fixed point theorems for two mappings. Moreover, to discuss the applicability of our results, an application is provided, showing the role of our results in finding the solution of functional equations arising in dynamic programming. Our results generalize and extend existing results in the literature.
Keywords: Chatterjea type F-contraction, F-Cauchy sequence, F-convergent, multivalued mappings
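The computational content behind such fixed point theorems is Picard iteration: for a contraction, repeated application of the mapping converges to the unique fixed point. The sketch below shows the classical Banach special case (which F-contractions generalize) on a scalar example; the map cos(x) and starting point are illustrative.

```python
import math

def solve_by_iteration(T, x0, tol=1e-12, max_iter=10_000):
    """Picard iteration x_{n+1} = T(x_n): for a contraction T this converges
    to the unique fixed point, the mechanism used to solve the functional
    equations of dynamic programming."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# T(x) = cos(x) is a contraction near its fixed point (the Dottie number).
fp = solve_by_iteration(math.cos, 0.5)
print(fp)  # ≈ 0.739085, satisfying cos(fp) = fp
```

The same iteration scheme, applied to the Bellman operator, solves the dynamic programming functional equations to which the paper's fixed point results apply.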
Procedia PDF Downloads 143
973 Several Spectrally Non-Arbitrary Ray Patterns of Order 4
Authors: Ling Zhang, Feng Liu
Abstract:
A matrix is called a ray pattern matrix if its entries are either 0 or a ray in the complex plane originating from 0. A ray pattern A of order n is called spectrally arbitrary if the complex matrices in the ray pattern class of A give rise to all possible nth degree complex polynomials; otherwise, it is said to be spectrally non-arbitrary. A spectrally arbitrary ray pattern A of order n is minimally spectrally arbitrary if replacing any nonzero entry of A destroys the spectrally arbitrary property. In this article, using the nilpotent-Jacobi method, we find that A(θ) is not spectrally arbitrary when n equals 4, and we give several ray patterns A(θ) of order n that are not spectrally arbitrary for some θ with 0 ≤ θ ≤ n. An example is given in the paper.
Keywords: spectrally arbitrary, nilpotent matrix, ray patterns, sign patterns
Procedia PDF Downloads 183
972 Positive Bias and Length Bias in Deep Neural Networks for Premises Selection
Authors: Jiaqi Huang, Yuheng Wang
Abstract:
Premises selection, the task of selecting a set of axioms for proving a given conjecture, is a major bottleneck in automated theorem proving. An array of deep-learning-based methods has been established for premises selection, but perfect performance remains challenging. Our study examines the inaccuracy of deep neural networks in premises selection. By training network models on encoded conjecture-axiom pairs from the Mizar Mathematical Library, two potential biases are found: the network models classify more premises as necessary than unnecessary, referred to as the 'positive bias', and the network models perform better in proving conjectures that are paired with more axioms, referred to as the 'length bias'. The 'positive bias' and 'length bias' discovered could inform the limitations of existing deep neural networks.
Keywords: automated theorem proving, premises selection, deep learning, interpreting deep learning
Procedia PDF Downloads 183
971 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors
Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff
Abstract:
Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurately estimating eating behavior risk factors for the primary prevention of obesity has been established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and artificial neural networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid SEM-ANN model were applied to evaluate the factors affecting eating behavior patterns among university students; 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). The hybrid model was then built using a multilayer perceptron (MLP) with a feedforward network topology, trained with the Levenberg-Marquardt supervised learning algorithm. The tangent sigmoid function was used for the input layer, while the linear function was applied for the output layer. The coefficient of determination (R²) and the mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method: the optimal network was an MLP with a 3-17-8 architecture, with R² increased by 27% and MSE decreased by 9.6%. Moreover, it was found which of these factors significantly affected healthy and unhealthy eating behavior patterns; the p-value was less than 0.01 for most of the paths.
Conclusion/Importance: Thus, the hybrid approach could be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software able to predict models with the highest accuracy.
Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns
Procedia PDF Downloads 155
970 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model
Authors: Nureni O. Adeboye, Dawud A. Agunbiade
Abstract:
This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation, in the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted at different specified conditions, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity
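The LM-testing-by-simulation workflow can be sketched in its simplest cross-sectional form: a Breusch-Pagan-type LM statistic computed on simulated homoscedastic and heteroscedastic data. This is a simplified analogue of the panel tests in the paper, with illustrative data-generating parameters, not the two-way error component derivation itself.

```python
import numpy as np

rng = np.random.default_rng(11)

def lm_heteroscedasticity(y, X):
    """Breusch-Pagan-type LM statistic: regress scaled squared OLS residuals
    on X; LM = n * R^2 is asymptotically chi-square under homoscedasticity."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    g = (y - X @ beta) ** 2
    g = g / g.mean()
    gamma = np.linalg.lstsq(X, g, rcond=None)[0]
    r2 = 1 - ((g - X @ gamma) ** 2).sum() / ((g - g.mean()) ** 2).sum()
    return n * r2

n = 1000
x = rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x])
y_hom = 2 + 3 * x + rng.standard_normal(n)       # constant error variance
y_het = 2 + 3 * x + rng.standard_normal(n) * x   # error variance grows with x
lm_hom = lm_heteroscedasticity(y_hom, X)
lm_het = lm_heteroscedasticity(y_het, X)
print(lm_hom)  # small: consistent with homoscedasticity
print(lm_het)  # large, far beyond the chi-square(1) critical value 3.84
```

Repeating this over many simulated replications, as the paper does with 1000 iterations per variation, yields the empirical size and power of the test.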
Procedia PDF Downloads 141
969 Rank of Semigroup: Generating Sets and Cases Revealing Limitations of the Concept of Independence
Authors: Zsolt Lipcsey, Sampson Marshal Imeh
Abstract:
We investigate a characterisation of the rank of a semigroup due to Howie and Ribeiro (1999), to ascertain the relevance of the concept of independence. There are cases where the concept of independence fails to be useful for this purpose: one would expect a basis to be a maximal independent subset of a given semigroup, yet we construct examples of semigroups where a finite basis exists and the basis is larger than the number of independent elements.
Keywords: generating sets, independent set, rank, cyclic semigroup, basis, commutative
Procedia PDF Downloads 189
968 Monotonicity of the Jensen Functional for f-Divergences via the Zipf-Mandelbrot Law
Authors: Neda Lovričević, Đilda Pečarić, Josip Pečarić
Abstract:
The Jensen functional in its discrete form is brought into relation with the Csiszár divergence functional, this time via its monotonicity property. This approach presents a generalization of previously obtained results that made use of interpolating Jensen-type inequalities. The monotonicity property is thus integrated with the Zipf-Mandelbrot law and applied to f-divergences for probability distributions that originate from the Csiszár divergence functional: the Kullback-Leibler divergence, the Hellinger distance, the Bhattacharyya distance, the chi-square divergence, and the total variation distance. The Zipf-Mandelbrot and Zipf laws are widely used in various scientific and interdisciplinary fields, and here the focus is on the aspect of mathematical inequalities.
Keywords: Jensen functional, monotonicity, Csiszár divergence functional, f-divergences, Zipf-Mandelbrot law
Procedia PDF Downloads 142967 Multi-Objective Optimization of Combined System Reliability and Redundancy Allocation Problem
Authors: Vijaya K. Srivastava, Davide Spinello
Abstract:
This paper presents an established 3^n enumeration procedure for mixed integer optimization problems, applied to the multi-objective reliability and redundancy allocation problem subject to design constraints. The formulated problem is to find the optimum level of unit reliability and the number of units for each subsystem. A number of illustrative examples are provided and compared to demonstrate the applicability and superiority of the proposed method.Keywords: integer programming, mixed integer programming, multi-objective optimization, reliability redundancy allocation
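The 3^n enumeration over redundancy levels can be sketched directly: for a series system of parallel subsystems, enumerate every allocation n_j in {1, 2, 3}, keep the allocations feasible under a cost budget, and return the most reliable one. The unit reliabilities, costs and budget below are hypothetical, not the paper's examples:

```python
from itertools import product

def system_reliability(r, n):
    """Series system of parallel subsystems: R = prod(1 - (1 - r_j)^n_j)."""
    R = 1.0
    for rj, nj in zip(r, n):
        R *= 1.0 - (1.0 - rj) ** nj
    return R

def enumerate_allocations(r, cost, budget, levels=(1, 2, 3)):
    """Full 3^n enumeration of redundancy levels under a linear cost constraint."""
    best = (0.0, None)
    for n in product(levels, repeat=len(r)):
        if sum(c * nj for c, nj in zip(cost, n)) <= budget:
            R = system_reliability(r, n)
            if R > best[0]:
                best = (R, n)
    return best

# Hypothetical 3-subsystem example (unit reliabilities and costs are made up)
r = [0.80, 0.90, 0.85]
cost = [2.0, 3.0, 1.5]
R_best, n_best = enumerate_allocations(r, cost, budget=14.0)
```

With three subsystems this is only 3^3 = 27 candidate allocations, so exhaustive enumeration is trivially cheap; the procedure's cost grows as 3^n in the number of subsystems.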
Procedia PDF Downloads 171966 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
In biomedical research and randomized clinical trials, the outcomes of primary interest are commonly time-to-event, so-called survival data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association between response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Owing to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, maximum likelihood estimation is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. To ease this complexity, we propose modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To summarize, the bias due to covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function.
In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experimental arms, after adjusting for the bias introduced into the model by the truncation variable.Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate
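A covariate-free sketch of the cumulative-hazard-ratio idea can be given with the plain Nelson-Aalen estimator in place of the authors' transformation-model estimator; with exponential arms of rates 2 and 1 the ratio should be near 2:

```python
import numpy as np

def nelson_aalen(time, event):
    """Nelson-Aalen cumulative hazard: H(t) = sum over event times of d_i / Y_i."""
    order = np.argsort(time)
    t = np.asarray(time, dtype=float)[order]
    d = np.asarray(event, dtype=int)[order]
    n = len(t)
    times, haz, h, i = [], [], 0.0, 0
    while i < n:
        j, deaths = i, 0
        while j < n and t[j] == t[i]:
            deaths += d[j]
            j += 1
        if deaths:
            h += deaths / (n - i)       # n - i subjects still at risk
            times.append(t[i])
            haz.append(h)
        i = j
    return np.array(times), np.array(haz)

def cumhaz_at(times, haz, t0):
    """Step-function value of the estimate at time t0."""
    idx = np.searchsorted(times, t0, side="right") - 1
    return float(haz[idx]) if idx >= 0 else 0.0

rng = np.random.default_rng(7)
treated = rng.exponential(0.5, size=2000)    # hazard rate 2
control = rng.exponential(1.0, size=2000)    # hazard rate 1
tt, th = nelson_aalen(treated, np.ones(2000))
ct, ch = nelson_aalen(control, np.ones(2000))
ratio = cumhaz_at(tt, th, 0.5) / cumhaz_at(ct, ch, 0.5)   # ~2 in large samples
```

This omits truncation, censoring and covariates entirely; it only illustrates why a ratio of cumulative hazards is a natural summary of the treatment effect.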
Procedia PDF Downloads 124965 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
An estimating equation technique is an alternative to the widely used maximum likelihood methods, and it eases some of the complexity arising from time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention in the literature, under a semiparametric transformation model. The purpose of this article is to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models in order to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators are derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators under the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To summarize, the bias due to covariates is adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates is then evaluated in some special semiparametric transformation models.Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time-varying covariate
Procedia PDF Downloads 152964 Modified Estimating Equations in the Derivation of the Causal Effect on Survival Time with Time-Varying Covariates
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
A systematic observation from a defined time of origin up to failure or censoring is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. Causality analysis lies at the heart of most scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing the pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on survival time. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of treatment on survival time is estimated after adjusting for the bias arising from the high correlation between left-truncation and possibly time-varying covariates. The bias in covariates is corrected by estimating the density function of the left-truncation variable. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate.
Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups serves as the average causal effect for the entire population.Keywords: modified estimating equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate
Procedia PDF Downloads 175963 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates
Authors: Serge B. Provost
Abstract:
Numerous statistical inference and modeling methodologies are based on sample moments rather than on the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n moments contain precisely the same amount of information. In practice, it is efficient to make use of a limited number of initial moments, as they capture most of the relevant distributional information. Two types of density estimation techniques that rely on such moments are discussed. The first expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second assumes that the derivative of the logarithm of the density function can be represented as a rational function; this gives rise to a system of linear equations involving sample moments, and the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling ‘big data’, as they require only a limited number of moments, irrespective of the sample size. Moreover, they produce simple closed-form expressions that are amenable to algebraic manipulation. They also turn out to be more accurate, as shown in several illustrative examples.Keywords: density estimation, log-density, polynomial adjustments, sample moments
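The first technique, a base density times a polynomial adjustment with moment-matched coefficients, can be sketched as follows, using a normal base density whose raw moments are computed recursively. This is a simplified illustration of the general idea, not the author's implementation:

```python
import numpy as np

def normal_raw_moments(mu, sigma, k):
    """Raw moments E[X^0..X^k] of N(mu, sigma^2) via the standard recursion
    E[X^n] = mu*E[X^(n-1)] + (n-1)*sigma^2*E[X^(n-2)]."""
    m = [1.0, mu]
    for n in range(2, k + 1):
        m.append(mu * m[n - 1] + (n - 1) * sigma ** 2 * m[n - 2])
    return np.array(m[:k + 1])

def moment_adjusted_density(sample, degree=4):
    """Base normal density times a polynomial whose coefficients solve the
    linear system equating the estimate's moments to the sample moments."""
    mu, sigma = sample.mean(), sample.std()
    base = normal_raw_moments(mu, sigma, 2 * degree)
    # Row i of the system: sum_j c_j * E[X^(i+j)] = i-th sample moment
    A = np.array([[base[i + j] for j in range(degree + 1)]
                  for i in range(degree + 1)])
    b = np.array([np.mean(sample ** i) for i in range(degree + 1)])
    c = np.linalg.solve(A, b)
    def density(x):
        phi = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        return phi * np.polynomial.polynomial.polyval(x, c)
    return density

rng = np.random.default_rng(3)
sample = rng.normal(size=5000)
density = moment_adjusted_density(sample)
xs = np.linspace(-6.0, 6.0, 2001)
fx = density(xs)
total_mass = float(((fx[1:] + fx[:-1]) * 0.5 * (xs[1] - xs[0])).sum())
```

Because row 0 of the system equates the zeroth moments, the estimate integrates to 1 by construction; only degree + 1 sample moments are needed regardless of the sample size.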
Procedia PDF Downloads 165962 Identifying Psychosocial, Autonomic, and Pain Sensitivity Risk Factors of Chronic Temporomandibular Disorder by Using Ridge Logistic Regression and Bootstrapping
Authors: Haolin Li, Eric Bair, Jane Monaco, Quefeng Li
Abstract:
Temporomandibular disorders (TMD) are a series of musculoskeletal disorders ranging from jaw pain to chronic debilitating pain, and the risk factors for the onset and maintenance of TMD are still unclear. Prior research has shown that the potential risk factors for chronic TMD relate to psychosocial factors, autonomic function, and pain sensitivity. Using data from the baseline case-control study of the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, we examine whether the risk factors identified in prior research remain statistically significant after all of the risk measures are taken into account in a single model, and we compare the relative influences of the risk factors on chronic TMD from three perspectives (psychosocial factors, autonomic function, and pain sensitivity). The statistical analysis is conducted using ridge logistic regression and bootstrapping, whose performance has been assessed in extensive simulation studies. The results support most findings of prior research in that many psychosocial and pain sensitivity measures have significant associations with chronic TMD. Surprisingly, however, most of the autonomic-function risk factors show no significant association with chronic TMD, in contrast to a prior report.Keywords: autonomic function, OPPERA study, pain sensitivity, psychosocial measures, temporomandibular disorder
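A minimal sketch of this analysis pipeline, ridge-penalised logistic regression fitted by gradient descent plus a percentile bootstrap, is shown below on synthetic data. The OPPERA variables, the penalty value and the sample sizes here are all made up for illustration:

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, lr=0.5, steps=1500):
    """L2-penalised (ridge) logistic regression fitted by plain gradient descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (prob - y) / n + lam * w / n
        w -= lr * grad
    return w

def bootstrap_ci(X, y, fit, reps=100, alpha=0.05, seed=0):
    """Percentile bootstrap confidence intervals for the coefficient vector."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.array([fit(X[idx], y[idx])
                      for idx in (rng.integers(0, n, n) for _ in range(reps))])
    return (np.quantile(draws, alpha / 2, axis=0),
            np.quantile(draws, 1 - alpha / 2, axis=0))

rng = np.random.default_rng(11)
X = rng.normal(size=(500, 2))                        # two synthetic risk measures
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))).astype(float)
w_hat = ridge_logistic(X, y)
lo, hi = bootstrap_ci(X, y, ridge_logistic)
```

A measure whose bootstrap interval excludes zero (here the first column, which truly drives the outcome) would be flagged as significantly associated; the second, pure-noise column would not.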
Procedia PDF Downloads 187961 Detecting Local Clusters of Childhood Malnutrition in the Island Province of Marinduque, Philippines Using Spatial Scan Statistic
Authors: Novee Lor C. Leyso, Maylin C. Palatino
Abstract:
Under-five malnutrition continues to persist in the Philippines, particularly in the island Province of Marinduque, with the prevalence of some forms of malnutrition even worsening in recent years. Local spatial cluster detection provides a spatial perspective for understanding this phenomenon and is key to analyzing patterns of geographic variation, identifying community-appropriate programs and interventions, and focusing targeting on high-risk areas. Using data from a province-wide household-based census conducted in 2014–2016, this study aimed to determine and evaluate spatial clusters of under-five malnutrition at the individual level, using household location, both across the province and within each municipality. Malnutrition was defined as a weight-for-age z-score falling more than 2 standard deviations from the median of the WHO reference population. Kulldorff's elliptical spatial scan statistic under a binomial model was used to locate clusters with a high risk of malnutrition, adjusting for age and for membership in the government conditional cash transfer program as a proxy for socio-economic status. One large significant cluster of under-five malnutrition was found in the southwest of the province; living in these areas at least doubles the risk of malnutrition. Additionally, at least one significant cluster was identified within each municipality, mostly located along the coastal areas. All of these indicate apparent geographical variation across and within the municipalities of the province. There were also similarities and disparities in the patterns of malnutrition risk across clusters and municipalities, and even within municipalities, suggesting underlying causes that warrant further investigation. Community-appropriate programs and interventions should therefore be identified and focused on high-risk areas to maximize limited government resources.
Further studies are also recommended to determine the factors behind the variation in childhood malnutrition, considering the evidence of spatial clustering found in this study.Keywords: binomial model, Kulldorff's elliptical spatial scan statistic, Philippines, under-five malnutrition
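A much-simplified circular (rather than elliptical, and unadjusted for covariates) version of the Kulldorff binomial scan can be sketched as follows; the synthetic hotspot and every parameter below are made up for illustration:

```python
import numpy as np

def zone_llr(c_in, n_in, c_tot, n_tot):
    """Binomial scan log-likelihood ratio for one candidate zone
    (high-risk zones only, as in a scan for elevated malnutrition)."""
    def ll(c, n):
        p = c / n
        out = 0.0
        if c > 0:
            out += c * np.log(p)
        if c < n:
            out += (n - c) * np.log(1 - p)
        return out
    c_out, n_out = c_tot - c_in, n_tot - n_in
    if n_in == 0 or n_out == 0 or c_in / n_in <= c_out / n_out:
        return 0.0
    return ll(c_in, n_in) + ll(c_out, n_out) - ll(c_tot, n_tot)

def circular_scan(xy, cases, trials, radii):
    """Scan circles centred at each location; return (best LLR, centre, radius)."""
    best = (0.0, None, None)
    for i, centre in enumerate(xy):
        dist = np.linalg.norm(xy - centre, axis=1)
        for r in radii:
            zone = dist <= r
            llr = zone_llr(cases[zone].sum(), trials[zone].sum(),
                           cases.sum(), trials.sum())
            if llr > best[0]:
                best = (llr, i, r)
    return best

rng = np.random.default_rng(5)
xy = np.vstack([rng.uniform(0, 1, size=(90, 2)),                       # background
                np.array([0.2, 0.2]) + rng.normal(0, 0.03, (10, 2))])  # hotspot
trials = np.full(100, 50)
p = np.where(np.arange(100) >= 90, 0.5, 0.1)    # elevated risk in the hotspot
cases = rng.binomial(trials, p)
llr, centre, radius = circular_scan(xy, cases, trials, [0.05, 0.1, 0.15, 0.2])
```

In practice the maximum LLR is compared against Monte Carlo replications under the null to obtain a p-value; that step, and the elliptical zones and covariate adjustment of the study, are omitted here.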
Procedia PDF Downloads 140960 Regular or Irregular: An Investigation of Medicine Consumption Pattern with Poisson Mixture Model
Authors: Lichung Jen, Yi Chun Liu, Kuan-Wei Lee
Abstract:
Abundant data have accumulated in databases and are commonly used to support decision-making. In the healthcare industry, for instance, ordering pharmacy inventory is one of a hospital's key decisions. A large drug inventory increases current costs, and approaching expiration dates may lead to future issues such as drug disposal and recycling. Conversely, underestimating demand for pharmacy inventory, particularly standing drugs, affects medical treatment and possibly the hospital's reputation. The prescription behaviour of hospital physicians is one of the critical factors influencing this decision, particularly irregular prescription behaviour. If a drug's usage in a given month is irregular and lower than the regular usage, it may lead to subsequent stockpiling; on the contrary, if a drug is prescribed more often than expected, the result may be insufficient inventory. We propose a hierarchical Bayesian mixture model with two components to identify physicians' regular and irregular prescription patterns, with associated probabilities. Heterogeneity across hospitals is accounted for in the proposed hierarchical Bayes model. The results suggest that modeling physicians' prescription patterns is beneficial for estimating medication order quantities and for the hospital's pharmacy inventory management. Managerial implications and future research are discussed.Keywords: hierarchical Bayesian model, Poisson mixture model, medicine prescription behavior, irregular behavior
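The two-component mixture idea can be sketched with a plain (non-hierarchical, non-Bayesian) EM algorithm for a two-component Poisson mixture; this is a deliberate simplification of the hierarchical Bayes model described above, on synthetic counts:

```python
import numpy as np
from math import lgamma

def poisson_mixture_em(x, iters=300):
    """EM for a two-component Poisson mixture (e.g. regular vs irregular usage)."""
    x = np.asarray(x, dtype=float)
    logfact = np.array([lgamma(v + 1.0) for v in x])
    pi = 0.5
    lam = np.array([x.mean() * 0.5, x.mean() * 1.5])   # crude initial rates
    for _ in range(iters):
        # E-step: posterior probability of component 0 for each observation
        logp = np.stack([x * np.log(l) - l - logfact for l in lam])
        logp += np.log(np.array([pi, 1.0 - pi]))[:, None]
        w = np.exp(logp - logp.max(axis=0))
        resp = w[0] / w.sum(axis=0)
        # M-step: weighted means update the mixing weight and both rates
        pi = resp.mean()
        lam = np.array([(resp * x).sum() / resp.sum(),
                        ((1 - resp) * x).sum() / (1 - resp).sum()])
    return pi, lam

rng = np.random.default_rng(2)
counts = np.concatenate([rng.poisson(2.0, 600), rng.poisson(12.0, 400)])
pi_hat, lam_hat = poisson_mixture_em(counts)
```

The fitted responsibilities give each month (or physician) a probability of belonging to the low-rate versus high-rate regime, which is the quantity the inventory decision would use.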
Procedia PDF Downloads 127959 A Study on the False Alarm Rates of MEWMA and MCUSUM Control Charts When the Parameters Are Estimated
Authors: Umar Farouk Abbas, Danjuma Mustapha, Hamisu Idi
Abstract:
Quality is an important issue in manufacturing industries, and the control chart is an integrated and powerful tool in statistical process control (SPC). In practice, the mean µ and standard deviation σ parameters are estimated from data. The multivariate exponentially weighted moving average (MEWMA) and multivariate cumulative sum (MCUSUM) charts are generally used to detect small shifts in the joint monitoring of several correlated variables; these charts use information from past data, which makes them sensitive to small shifts. The aim of this paper is to compare the performance of the Shewhart x-bar, MEWMA, and MCUSUM control charts in terms of their false alarm rates when the parameters are estimated under autocorrelation. A simulation was conducted in R to generate the average run length (ARL) values for each chart. The analysis shows that the MEWMA chart has lower false alarm rates than the MCUSUM chart at the various levels of parameter estimation, for given in-control ARL0 values. It was also noted that the sample size has an adverse effect on the false alarm rates of the control charts.Keywords: average run length, MCUSUM chart, MEWMA chart, false alarm rate, parameter estimation, simulation
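The ARL simulation itself is easy to sketch outside R: the fragment below estimates the in-control ARL of a one-observation Shewhart-type chart with known and with estimated parameters. This is a univariate, uncorrelated simplification of the study's multivariate, autocorrelated setting, intended only to show the mechanics:

```python
import numpy as np

def run_length(rng, mu=0.0, sigma=1.0, limit=3.0):
    """Observations drawn until the chart signals, i.e. |(x - mu)/sigma| > limit."""
    count = 0
    while True:
        z = np.abs((rng.normal(size=1024) - mu) / sigma)
        hit = int(np.argmax(z > limit))
        if z[hit] > limit:
            return count + hit + 1
        count += 1024

def run_length_estimated(rng, m=30, limit=3.0):
    """Same chart, but the limits use a mean and sd estimated from m Phase-I points."""
    phase1 = rng.normal(size=m)
    return run_length(rng, mu=phase1.mean(), sigma=phase1.std(ddof=1), limit=limit)

rng = np.random.default_rng(8)
arl_known = np.mean([run_length(rng) for _ in range(1000)])        # theory: ~370
arl_estimated = np.mean([run_length_estimated(rng) for _ in range(1000)])
```

With known parameters the in-control ARL of a 3-sigma chart is 1/0.0027, about 370; estimating µ and σ from a short Phase-I sample makes the realized ARL highly variable, which is the effect the paper studies for MEWMA and MCUSUM.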
Procedia PDF Downloads 221958 Definition of Service Angle of Android's Robot Hand by Method of Small Movements of Gripper's Axis Synthesis by Speed Vector
Authors: Valeriy Nebritov
Abstract:
The paper presents a generalized method for determining the service solid angle based on the assigned gripper-axis orientation with a stationary grip center. Motion synthesis in this work is carried out in the vector of velocities. As an example, the solid angle of an android robot arm is determined, this angle being formed by the longitudinal axis of the gripper. The method is based on the study of sets of configuration positions defining the end-point positions of the unit-radius sphere sweep, which specifies the service solid angle. From this, the spherical curve specifying the shape of the desired solid angle was determined. The results of the research can be used in the development of control systems for autonomous android robots.Keywords: android robot, control systems, motion synthesis, service angle
Procedia PDF Downloads 196957 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction takes only a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope.
The relevant length scale is taken to be half the window size at which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales with added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than with existing techniques.Keywords: high-resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
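The additive 2x2 aggregation can be sketched on synthetic data: every sufficient statistic for the per-window plane fit (sums of x, y, z and their products) aggregates by summing 2x2 blocks, so each doubling of window size costs one pass. This is a minimal sketch of the idea on a noise-free tilted plane, not the authors' implementation (which uses overlapping windows):

```python
import numpy as np

def block2(a):
    """Merge 2x2 blocks by summing: every statistic below aggregates additively."""
    return a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]

def multiscale_slope(z, levels=3):
    """Plane fit z ~ a + b*x + c*y per window at each scale, computed from summed
    sufficient statistics only (square input, side a multiple of 2**levels)."""
    n = z.shape[0]
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    s = {"n": np.ones_like(z), "x": xs, "y": ys, "z": z,
         "xx": xs * xs, "yy": ys * ys, "xy": xs * ys,
         "xz": xs * z, "yz": ys * z, "zz": z * z}
    results = []
    for level in range(1, levels + 1):
        s = {k: block2(v) for k, v in s.items()}
        # Normal equations of the per-window least-squares plane fit
        A = np.stack([np.stack([s["n"], s["x"], s["y"]], -1),
                      np.stack([s["x"], s["xx"], s["xy"]], -1),
                      np.stack([s["y"], s["xy"], s["yy"]], -1)], -2)
        b = np.stack([s["z"], s["xz"], s["yz"]], -1)
        coef = np.linalg.solve(A, b[..., None])[..., 0]   # (..., 3): a, b, c
        var = (s["zz"] - (coef * b).sum(-1)) / s["n"]     # residual variance
        results.append((2 ** level, coef, var))
    return results

ys, xs = np.mgrid[0:32, 0:32].astype(float)
res = multiscale_slope(1.0 + 0.3 * xs + 0.1 * ys)   # noise-free tilted plane
```

On the exact plane every scale recovers the x- and y-slopes (0.3 and 0.1) with near-zero residual variance; with noisy data the scale minimizing the residual variance would be the one reported, as described above.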
Procedia PDF Downloads 129