Abstracts | Mathematical and Computational Sciences
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1395

World Academy of Science, Engineering and Technology

[Mathematical and Computational Sciences]

Online ISSN : 1307-6892

975 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses in order to create lines with desirable qualities. There are various ways of obtaining progenies and further studying the combining ability effects of the lines taken in a breeding programme. The most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders to improve quantitative traits of economic as well as nutritional importance in crops and animals. Among these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial hybrids in corn are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs having fast growth rate, good feed efficiency, and carcass quality, and tetra-allele crosses are widely used for the exploitation of heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and the optimality of such designs has also been considered a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects has been defined. Optimality aspects of such designs have been discussed with sca effects incorporated in the model. Orthogonality conditions have been derived for block designs ensuring that contrasts among the gca effects, after eliminating the nuisance factors, are estimated independently of the sca effects. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.
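
As a point of orientation, one plausible form of such a model (illustrative notation; the abstract does not spell out the authors' exact formulation) writes the response of a four-way cross (a x b) x (c x d) observed in block k as

    Y_{(ab)(cd)k} = \mu + \beta_k + (g_a + g_b + g_c + g_d) + (s_{ab} + s_{cd}) + e_{(ab)(cd)k},

where \mu is the general mean, \beta_k the block (nuisance) effect, the g's are the gca effects of the four parental lines, s_{ab} and s_{cd} the sca effects of the constituent single crosses, and e the random error; the orthogonality conditions of the paper concern estimating contrasts among the g's free of the s's.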

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 109
974 Analysis of Chatterjea Type F-Contraction in F-Metric Space and Application

Authors: Awais Asif

Abstract:

This article investigates fixed point theorems for Chatterjea-type F-contractions in the setting of F-metric spaces. We relax the conditions of the F-contraction and define a modified F-contraction for two mappings. The study provides fixed point results for both single-valued and multivalued mappings, and the results are further extended to common fixed point theorems for two mappings. Moreover, to demonstrate the applicability of our results, an application is provided showing their role in finding the solution of functional equations arising in dynamic programming. Our results generalize and extend existing results in the literature.
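
For orientation, the classical Chatterjea contraction that such results generalize requires a self-map T of a metric space (X, d) and a constant k \in [0, 1/2) such that

    d(Tx, Ty) \le k\,[\,d(x, Ty) + d(y, Tx)\,] \qquad \text{for all } x, y \in X;

the F-contraction versions studied here transplant this condition into an F-metric space and weaken it further for pairs of mappings, with the precise modified condition given in the paper itself.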

Keywords: Chatterjea type F-contraction, F-Cauchy sequence, F-convergent, multivalued mappings

Procedia PDF Downloads 115
973 Several Spectrally Non-Arbitrary Ray Patterns of Order 4

Authors: Ling Zhang, Feng Liu

Abstract:

A matrix is called a ray pattern matrix if each of its entries is either 0 or a ray in the complex plane originating from 0. A ray pattern A of order n is called spectrally arbitrary if the complex matrices in the ray pattern class of A give rise to all possible nth-degree complex polynomials; otherwise, it is said to be a spectrally non-arbitrary ray pattern. A spectrally arbitrary ray pattern A of order n is minimally spectrally arbitrary if replacing any nonzero entry of A makes it no longer spectrally arbitrary. In this paper, using the nilpotent-Jacobi method, we give several ray patterns A(θ) of order n that are not spectrally arbitrary for some θ with 0 ≤ θ ≤ n, and we find that A(θ) is not spectrally arbitrary when n equals 4. One example is given in the paper.

Keywords: spectrally arbitrary, nilpotent matrix, ray patterns, sign patterns

Procedia PDF Downloads 151
972 Positive Bias and Length Bias in Deep Neural Networks for Premises Selection

Authors: Jiaqi Huang, Yuheng Wang

Abstract:

Premises selection, the task of selecting a set of axioms for proving a given conjecture, is a major bottleneck in automated theorem proving. An array of deep-learning-based methods has been established for premises selection, but perfect performance remains challenging. Our study examines the inaccuracy of deep neural networks in premises selection. Through training network models on encoded conjecture-axiom pairs from the Mizar Mathematical Library, two potential biases are found: the network models classify more premises as necessary than unnecessary, referred to as 'positive bias', and the network models perform better at proving conjectures that are paired with more axioms, referred to as 'length bias'. The 'positive bias' and 'length bias' discovered point to limitations of existing deep neural networks.
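
A minimal sketch of the setup being probed (stand-in data and a generic scikit-learn network, not the authors' architecture or the Mizar encodings): a binary classifier is trained on concatenated (conjecture, axiom) feature vectors, and the rate of positive predictions is compared with the true rate to expose a positive bias.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n, d = 2000, 64
    X_conj = rng.normal(size=(n, d))    # stand-in conjecture encodings
    X_axiom = rng.normal(size=(n, d))   # stand-in axiom encodings
    X = np.hstack([X_conj, X_axiom])    # pair encoding by concatenation
    # 1 = "axiom is necessary"; a weak interaction signal plus noise
    y = (np.sum(X_conj * X_axiom, axis=1) + rng.normal(size=n) > 0).astype(int)

    clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300, random_state=0)
    clf.fit(X[:1500], y[:1500])
    pred = clf.predict(X[1500:])

    # positive bias: the model labels more premises necessary than the truth rate
    print("predicted positive rate:", pred.mean())
    print("true positive rate:     ", y[1500:].mean())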

Keywords: automated theorem proving, premises selection, deep learning, interpreting deep learning

Procedia PDF Downloads 149
971 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors

Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff

Abstract:

Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurately estimating eating-behavior risk factors for the primary prevention of obesity has been established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and artificial neural networks (ANN) to predict eating behaviors. Methods: Partial least squares SEM (PLS-SEM) and a hybrid model (SEM-ANN) were applied to evaluate the factors affecting eating behavior patterns among university students; 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). Then, the hybrid model was fitted using a multilayer perceptron (MLP) with feedforward network topology. Levenberg-Marquardt, a supervised learning method, was applied for MLP training. The tangent sigmoid function was used for the hidden layer, while the linear function was applied to the output layer. The coefficient of determination (R²) and the mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method. Using the hybrid model, the optimal network was an MLP with a 3-17-8 architecture; the R² of the model increased by 27%, while the MSE decreased by 9.6%. Moreover, it was determined which of these factors significantly affected healthy and unhealthy eating behavior patterns; the p-value was less than 0.01 for most of the paths. Conclusion/Importance: A hybrid approach can thus be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software able to predict models with high accuracy.
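
The MLP stage can be sketched as follows (synthetic stand-in data; scikit-learn offers no Levenberg-Marquardt optimizer, so 'lbfgs' stands in for it here): a 3-17-8 feedforward network with a tanh hidden layer and linear output maps the three SEM scores (EES, BSC, BAS) to eight EBP scores.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.normal(size=(340, 3))                          # EES, BSC, BAS scores
    W = rng.normal(size=(3, 8))
    Y = np.tanh(X @ W) + 0.1 * rng.normal(size=(340, 8))   # synthetic EBP scores

    mlp = MLPRegressor(hidden_layer_sizes=(17,),   # 3-17-8 architecture
                       activation='tanh',          # tanh hidden layer
                       solver='lbfgs',             # stand-in for Levenberg-Marquardt
                       max_iter=2000, random_state=1)
    mlp.fit(X[:250], Y[:250])                      # output layer is linear by default
    Y_hat = mlp.predict(X[250:])
    print("R2: ", r2_score(Y[250:], Y_hat))
    print("MSE:", mean_squared_error(Y[250:], Y_hat))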

Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns

Procedia PDF Downloads 121
970 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of the model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions, and the best-fitting model is the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
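
A stripped-down sketch of one cell of such a Monte Carlo design (illustrative parameter choices, not the paper's exact 81-variation scheme): a two-way error-component panel with a linear heteroscedasticity function is simulated repeatedly and the within estimator is scored by absolute bias, MSE and RMSE.

    import numpy as np

    rng = np.random.default_rng(42)
    N, T, beta, reps = 30, 10, 1.0, 1000
    est = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0, 10, size=(N, T))
        mu = rng.normal(size=(N, 1))            # individual effects
        lam = rng.normal(size=(1, T))           # time (periodicity) effects
        sig2 = 1.0 + 0.5 * x                    # linear heteroscedasticity function
        e = rng.normal(scale=np.sqrt(sig2))
        y = beta * x + mu + lam + e
        # the within transformation sweeps out both error components
        xw = x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()
        yw = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
        est[r] = (xw * yw).sum() / (xw ** 2).sum()

    mse = np.mean((est - beta) ** 2)
    print(f"ABIAS={abs(est.mean() - beta):.5f}  MSE={mse:.6f}  RMSE={np.sqrt(mse):.6f}")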

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 121
969 Rank of Semigroup: Generating Sets and Cases Revealing Limitations of the Concept of Independence

Authors: Zsolt Lipcsey, Sampson Marshal Imeh

Abstract:

We investigate a characterisation of the rank of a semigroup given by Howie and Ribeiro (1999), to ascertain the relevance of the concept of independence. There are cases where the concept of independence fails to be useful for this purpose. One would expect a basis to be a maximal independent subset of a given semigroup. However, we construct examples of semigroups where a finite basis exists and the basis is larger than the number of independent elements.
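
In one standard formulation (a common convention; the paper may state it differently), the notions at stake are

    \operatorname{rank}(S) = \min\{\, |A| : A \subseteq S,\ \langle A \rangle = S \,\}, \qquad
    A \ \text{independent} \iff a \notin \langle A \setminus \{a\} \rangle \ \text{for all } a \in A,

and the examples constructed in the paper show that a smallest generating set need not match a maximal independent subset in size.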

Keywords: generating sets, independent set, rank, cyclic semigroup, basis, commutative

Procedia PDF Downloads 166
968 Monotonicity of the Jensen Functional for f-Divergences via the Zipf-Mandelbrot Law

Authors: Neda Lovričević, Đilda Pečarić, Josip Pečarić

Abstract:

The Jensen functional in its discrete form is brought into relation with the Csiszár divergence functional, this time via its monotonicity property. This approach presents a generalization of previously obtained results that made use of interpolating Jensen-type inequalities. The monotonicity property is thus integrated with the Zipf-Mandelbrot law and applied to f-divergences for probability distributions that originate from the Csiszár divergence functional: the Kullback-Leibler divergence, Hellinger distance, Bhattacharyya distance, chi-square divergence, and total variation distance. The Zipf-Mandelbrot and Zipf laws are widely used in various scientific and interdisciplinary fields; here the focus is on the aspect of mathematical inequalities.
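
For reference, the Zipf-Mandelbrot law assigns to rank i \in \{1, \dots, N\} the probability

    f(i; N, q, s) = \frac{1/(i + q)^{s}}{H_{N, q, s}}, \qquad
    H_{N, q, s} = \sum_{j=1}^{N} \frac{1}{(j + q)^{s}},

with shift parameter q \ge 0 and exponent s > 0; setting q = 0 recovers the classical Zipf law, and it is for these parametric families that the monotonicity of the Jensen functional yields bounds on the listed f-divergences.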

Keywords: Jensen functional, monotonicity, Csiszar divergence functional, f-divergences, Zipf-Mandelbrot law

Procedia PDF Downloads 115
967 Multi-Objective Optimization of Combined System Reliability and Redundancy Allocation Problem

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents an established 3ⁿ enumeration procedure for mixed integer optimization problems, applied to solving the multi-objective reliability and redundancy allocation problem subject to design constraints. The formulated problem is to find the optimum level of unit reliability and the number of units for each subsystem. A number of illustrative examples are provided and compared to demonstrate the application and superiority of the proposed method.
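
For background, the canonical series-parallel form of the reliability-redundancy allocation problem (the paper treats a multi-objective variant of this) is

    \max_{r,\, n}\ R_s(r, n) = \prod_{j=1}^{m} \bigl[\, 1 - (1 - r_j)^{n_j} \,\bigr]
    \quad \text{subject to} \quad g_i(r, n) \le b_i, \quad i = 1, \dots, k,

where r_j \in (0, 1) is the component reliability and n_j \in \mathbb{Z}_{+} the number of redundant units in subsystem j, and the g_i encode cost, weight or volume constraints; the mixed-integer character comes from the continuous r_j paired with the integer n_j.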

Keywords: integer programming, mixed integer programming, multi-objective optimization, reliability redundancy allocation

Procedia PDF Downloads 142
966 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effects of randomly assigned experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide application and popularity, maximum likelihood estimation is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates; to ease this complexity, we propose modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To sum up, the bias in covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experimental arms, after adjusting for the bias introduced into the model by the truncation variable.
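
A common way of writing this class of models (standard notation shown for orientation; the paper's exact specification may differ) expresses the cumulative hazard of the survival time T given a possibly time-varying covariate path Z(\cdot) as

    \Lambda_Z(t) = G\Bigl( \int_0^t \exp\{\beta^{\top} Z(s)\}\, dH(s) \Bigr),

where H is an unspecified increasing function, G is a known transformation, and \beta is the regression parameter; G(x) = x recovers the proportional hazards model and G(x) = \log(1 + x) the proportional odds model.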

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 102
965 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

An estimating equation technique is an alternative to the widely used maximum likelihood methods, enabling us to ease some of the complexity arising from the complex characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from researchers, under a semiparametric transformation model. The purpose of this article is to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on survival time. The consistency and asymptotic properties of the estimators are derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators under the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To sum up, the bias in covariates is adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates is then evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 128
964 Modified Estimating Equations in Derivation of the Causal Effect on Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research, and at the heart of most scientific and medical research inquiries lies causality analysis. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality differs from the simple association between the response variable and predictors: causal estimation is a scientific concept for comparing the pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation function, and the proposed method estimates an unbiased average causal effect of treatment on survival time. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of treatment on survival time is estimated after adjusting for biases arising from the high correlation between left-truncation and the possibly time-varying covariates. The bias in covariates is corrected by estimating the density function of the left-truncation variable, and, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation function. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups carries the sense of the average causal effect for the entire population.
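
In the notation suggested by the abstract, the reported causal contrast is the cumulative-hazard ratio

    \theta(t) = \frac{\Lambda_1(t)}{\Lambda_0(t)},

where \Lambda_1 and \Lambda_0 are the cumulative hazard functions of the treated and untreated arms after the truncation-bias adjustment.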

Keywords: modified estimating equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 144
963 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first associated n moments contain precisely the same amount of information. However, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function; this gives rise to a system of linear equations involving sample moments, and the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to model 'big data', as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
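
A minimal sketch of the first technique (assuming a normal base density and synthetic data; the paper's choice of base density and polynomial degree may differ): the polynomial coefficients solve a linear system that equates the moments of the adjusted density to the sample moments.

    import numpy as np
    from scipy import stats

    def poly_adjusted_density(sample, degree=4):
        """Base density times a polynomial whose moments match the sample's."""
        mu, sigma = sample.mean(), sample.std(ddof=1)
        base = stats.norm(mu, sigma)
        # sample moments m_k = mean(x^k), k = 0..degree
        m = np.array([np.mean(sample ** k) for k in range(degree + 1)])
        # base-density moments E[X^i], i = 0..2*degree
        bm = np.array([base.moment(i) for i in range(2 * degree + 1)])
        # solve sum_j c_j E[X^(k+j)] = m_k for the coefficients c_j
        A = np.array([[bm[k + j] for j in range(degree + 1)]
                      for k in range(degree + 1)])
        c = np.linalg.solve(A, m)
        return lambda x: base.pdf(x) * np.polyval(c[::-1], x)

    x = stats.gamma(3).rvs(size=500, random_state=0)   # skewed test sample
    f = poly_adjusted_density(x)
    print("estimated density at the sample mean:", float(f(x.mean())))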

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 132
962 Identifying Psychosocial, Autonomic, and Pain Sensitivity Risk Factors of Chronic Temporomandibular Disorder by Using Ridge Logistic Regression and Bootstrapping

Authors: Haolin Li, Eric Bair, Jane Monaco, Quefeng Li

Abstract:

Temporomandibular disorder (TMD) is a series of musculoskeletal disorders ranging from jaw pain to chronic debilitating pain, and the risk factors for the onset and maintenance of TMD are still unclear. Prior research has shown that the potential risk factors for chronic TMD are related to psychosocial factors, autonomic function, and pain sensitivity. Using data from the baseline case-control study of the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, we examine whether the risk factors identified by prior research remain statistically significant after taking all of the risk measures into account in one single model, and we also compare the relative influences of the risk factors from three different perspectives (psychosocial factors, autonomic function, and pain sensitivity) on chronic TMD. The statistical analysis is conducted using ridge logistic regression and bootstrapping, and the performance of the algorithms has been assessed in extensive simulation studies. The results support most of the findings of prior research in that many psychosocial and pain sensitivity measures have significant associations with chronic TMD. Surprisingly, however, most of the autonomic-function risk factors did not show significant associations with chronic TMD, unlike what has been described in prior research.
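
The analysis idea can be sketched as follows (simulated stand-in data, not the OPPERA measures; the penalty strength C = 1.0 is an illustrative choice): ridge-penalized logistic regression is refitted on bootstrap resamples, and predictors whose percentile intervals exclude zero are flagged as significant.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n, p = 400, 10                       # subjects x risk measures
    X = rng.normal(size=(n, p))          # stand-in psychosocial/autonomic/pain scores
    beta = np.r_[1.0, -0.8, 0.5, np.zeros(p - 3)]
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))   # 1 = chronic TMD case

    def ridge_coefs(Xb, yb):
        # penalty='l2' makes this ridge logistic regression; C is the inverse penalty
        model = LogisticRegression(penalty='l2', C=1.0, max_iter=1000).fit(Xb, yb)
        return model.coef_.ravel()

    boot = np.array([ridge_coefs(X[idx], y[idx])
                     for idx in (rng.integers(0, n, size=n) for _ in range(500))])
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    significant = (lo > 0) | (hi < 0)    # percentile interval excludes zero
    print("significant predictors:", np.where(significant)[0])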

Keywords: autonomic function, OPPERA study, pain sensitivity, psychosocial measures, temporomandibular disorder

Procedia PDF Downloads 156
961 Detecting Local Clusters of Childhood Malnutrition in the Island Province of Marinduque, Philippines Using Spatial Scan Statistic

Authors: Novee Lor C. Leyso, Maylin C. Palatino

Abstract:

Under-five malnutrition continues to persist in the Philippines, particularly in the island province of Marinduque, with the prevalence of some forms of malnutrition even worsening in recent years. Local spatial cluster detection provides a spatial perspective for understanding this phenomenon and is key to analyzing patterns of geographic variation, identifying community-appropriate programs and interventions, and focusing targeting on high-risk areas. Using data from a province-wide household-based census conducted in 2014-2016, this study aimed to determine and evaluate spatial clusters of under-five malnutrition, across the province and within each municipality, at the individual level using household location. Malnutrition was defined as a weight-for-age z-score falling more than 2 standard deviations from the median of the WHO reference population. Kulldorff's elliptical spatial scan statistic in a binomial model was used to locate clusters with a high risk of malnutrition, while adjusting for age and membership in the government conditional cash transfer program as a proxy for socio-economic status. One large significant cluster of under-five malnutrition was found in the southwest of the province; living in these areas at least doubles the risk of malnutrition. Additionally, at least one significant cluster was identified within each municipality, mostly located along the coastal areas. All these indicate apparent geographical variations across and within municipalities in the province. There were also similarities and disparities in the patterns of malnutrition risk in each cluster across, and even within, municipalities, suggesting underlying causes at work that warrant further investigation. Community-appropriate programs and interventions should therefore be identified and focused on high-risk areas to maximize limited government resources. Further studies are also recommended to determine the factors affecting variations in childhood malnutrition, considering the evidence of spatial clustering found in this study.

Keywords: binomial model, Kulldorff’s elliptical spatial scan statistic, Philippines, under-five malnutrition

Procedia PDF Downloads 105
960 Regular or Irregular: An Investigation of Medicine Consumption Pattern with Poisson Mixture Model

Authors: Lichung Jen, Yi Chun Liu, Kuan-Wei Lee

Abstract:

Abundant data have accumulated in databases nowadays and are commonly used to support decision-making. In the healthcare industry, for instance, ordering pharmacy inventory is one of a hospital's key decisions. A large drug inventory increases current costs, and expiration dates might lead to future issues such as drug disposal and recycling. In contrast, underestimating demand for pharmacy inventory, particularly standing drugs, affects medical treatment and possibly the hospital's reputation. The prescription behaviour of hospital physicians is one of the critical factors influencing this decision, particularly irregular prescription behaviour. If a drug's monthly usage is irregular and less than the regular usage, it may lead to subsequent stockpiling; on the contrary, if a drug has been prescribed more often than expected, the result may be insufficient inventory. We propose a hierarchical Bayesian mixture model with two components to identify physicians' regular and irregular prescription patterns with probabilities. Heterogeneity across hospitals is considered in the proposed hierarchical Bayes model. The results suggest that modeling physicians' prescription patterns is beneficial for estimating medication order quantities and for the hospital's pharmacy inventory management. Managerial implications and future research are discussed.
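
The mixture idea can be illustrated with a plain EM fit of a two-component Poisson mixture (the paper's hierarchical Bayesian layer and hospital heterogeneity are omitted; the data are synthetic): each monthly prescription count is softly assigned to a 'regular' or an 'irregular' regime.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    x = np.r_[rng.poisson(4, 300), rng.poisson(15, 100)]   # synthetic monthly counts

    pi = 0.5                                               # weight of component 2
    lam = np.array([x.mean() * 0.5, x.mean() * 1.5])       # initial Poisson rates
    for _ in range(200):
        # E-step: posterior probability that a count comes from the second regime
        w1 = pi * poisson.pmf(x, lam[1])
        w0 = (1 - pi) * poisson.pmf(x, lam[0])
        r = w1 / (w0 + w1)
        # M-step: update the mixing weight and the two rates
        pi = r.mean()
        lam = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                        np.sum(r * x) / np.sum(r)])
    print("mixing weight:", round(pi, 3), " rates:", lam.round(2))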

Keywords: hierarchical Bayesian model, Poisson mixture model, medicine prescription behavior, irregular behavior

Procedia PDF Downloads 102
959 A Study on the False Alarm Rates of MEWMA and MCUSUM Control Charts When the Parameters Are Estimated

Authors: Umar Farouk Abbas, Danjuma Mustapha, Hamisu Idi

Abstract:

It is now a known fact that quality is an important issue in manufacturing industries, and the control chart is an integrated and powerful tool in statistical process control (SPC). In practice, the mean μ and standard deviation σ parameters must be estimated. In general, the multivariate exponentially weighted moving average (MEWMA) and multivariate cumulative sum (MCUSUM) charts are used for the detection of small shifts in the joint monitoring of several correlated variables; the charts use information from past data, which makes them sensitive to small shifts. The aim of the paper is to compare the performance of the Shewhart x̄, MEWMA, and MCUSUM control charts in terms of their false alarm rates when parameters are estimated under autocorrelation. A simulation was conducted in R to generate the average run length (ARL) values of each of the charts. The analysis shows that the MEWMA chart has lower false alarm rates than the MCUSUM chart at various levels of parameter estimation and in-control ARL₀ values. It was also noticed that the sample size has an adverse effect on the false alarm rates of the control charts.
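
A univariate analogue of the simulation (the MEWMA/MCUSUM case follows the same pattern; λ = 0.2, L = 2.86 and the Phase-I size are illustrative choices): in-control run lengths of an EWMA chart are generated with μ and σ estimated from a Phase-I sample, and their average estimates ARL₀.

    import numpy as np

    rng = np.random.default_rng(11)
    lam, L, reps, m = 0.2, 2.86, 2000, 50      # smoothing, limit, runs, Phase-I size
    run_lengths = []
    for _ in range(reps):
        phase1 = rng.normal(size=m)
        mu_hat, sd_hat = phase1.mean(), phase1.std(ddof=1)   # estimated parameters
        sigma_z = sd_hat * np.sqrt(lam / (2 - lam))          # asymptotic EWMA sd
        z, t = mu_hat, 0
        while True:
            t += 1
            z = lam * rng.normal() + (1 - lam) * z           # in-control N(0,1) data
            if abs(z - mu_hat) > L * sigma_z or t > 10_000:
                break
        run_lengths.append(t)
    print("estimated in-control ARL0:", np.mean(run_lengths))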

Keywords: average run length, MCUSUM chart, MEWMA chart, false alarm rate, parameter estimation, simulation

Procedia PDF Downloads 184
958 Definition of Service Angle of Android's Robot Hand by Method of Small Movements of Gripper's Axis Synthesis by Speed Vector

Authors: Valeriy Nebritov

Abstract:

The paper presents a generalized method for determining the service solid angle based on the assigned gripper axis orientation with a stationary grip center. Motion synthesis in this work is carried out in the vector of velocities. As an example, the solid angle of an android robot arm is determined, this angle being formed by the longitudinal axis of the gripper. The method is based on the study of sets of configuration positions defining the end-point positions of the unit-radius sphere sweep, which specifies the service solid angle. From this, the spherical curve specifying the shape of the desired solid angle is determined. The results of the research can be used in the development of control systems for autonomous android robots.

Keywords: android robot, control systems, motion synthesis, service angle

Procedia PDF Downloads 171
957 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest, and any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance: any doubling of window size in each direction takes only a single pass over the data, corresponding to logarithmic scaling of the resulting algorithm as a function of window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration; regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope; the relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally; the result is of higher resolution and less affected by noise than existing techniques.
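
The additive-aggregation idea can be sketched as follows (a simplified version of the algorithm with non-overlapping rather than overlapping windows): the sufficient statistics of the plane fit z = a + bx + cy add across 2x2 blocks, so each doubling of the window size is a single pass over the data, and the slope can be read off at the level where the residual variance is smallest.

    import numpy as np

    def block2(a):
        """Aggregate non-overlapping 2x2 blocks by summation."""
        return a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]

    def multiscale_slope(dem, levels=4):
        rows, cols = np.indices(dem.shape, dtype=float)
        # sufficient statistics for the plane fit z = a + b*x + c*y
        s = dict(n=np.ones_like(dem), x=cols, y=rows, z=dem,
                 xx=cols * cols, yy=rows * rows, xy=cols * rows,
                 xz=cols * dem, yz=rows * dem, zz=dem * dem)
        results = []
        for _ in range(levels):
            s = {k: block2(v) for k, v in s.items()}   # one pass per doubling
            # normal equations of the plane fit, solved per window
            A = np.stack([np.stack([s['n'],  s['x'],  s['y']], -1),
                          np.stack([s['x'], s['xx'], s['xy']], -1),
                          np.stack([s['y'], s['xy'], s['yy']], -1)], -2)
            b = np.stack([s['z'], s['xz'], s['yz']], -1)
            coef = np.linalg.solve(A, b)               # (a, b, c) per window
            rss = s['zz'] - (coef * b).sum(-1)         # residual sum of squares
            var = rss / s['n']                         # per-window residual variance
            slope = np.hypot(coef[..., 1], coef[..., 2])
            results.append((slope, var))
        return results   # report slope at the level of minimal variance per point

    dem = np.cumsum(np.random.default_rng(0).normal(size=(64, 64)), axis=1)
    for i, (slope, var) in enumerate(multiscale_slope(dem)):
        print(f"window {2**(i+1)}x{2**(i+1)}: mean slope {slope.mean():.3f}")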

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 102
956 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It is frequently observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 118
955 Extensions of Schwarz Lemma in the Half-Plane

Authors: Nicolae Pascu

Abstract:

Aside from being a fundamental tool in complex analysis, the Schwarz Lemma, which was finalized in its most complete form at the beginning of the last century, generated an important area of research in various fields of mathematics, which continues to advance even today. We present some properties of analytic functions in the half-plane which satisfy the conditions of the classical Schwarz Lemma (Carathéodory functions) and obtain a generalization of the well-known Aleksandrov-Sobolev Lemma for analytic functions in the half-plane (the correspondent of the Schwarz-Pick Lemma for the unit disk). Using this Schwarz-type lemma, we obtain a characterization of the entire class of Carathéodory functions, which might be of independent interest. We prove two monotonicity properties for Carathéodory functions that do not depend upon their normalization at infinity (the hydrodynamic normalization). The method is based on conformal mapping arguments for analytic functions in the half-plane satisfying appropriate conditions, in the spirit of the Schwarz Lemma. Our main results give estimates for the modulus and the argument for the entire class of Carathéodory functions. As applications, we give several extensions of the Julia-Wolff-Carathéodory Lemma in a half-strip and show that our results are sharp.
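
For orientation, the half-plane analogue of the Schwarz-Pick Lemma (stated here for the right half-plane \Pi = \{\operatorname{Re} z > 0\}; the paper's normalization may differ) says that every analytic f : \Pi \to \Pi satisfies

    \frac{|f(z) - f(w)|}{|f(z) + \overline{f(w)}|} \le \frac{|z - w|}{|z + \overline{w}|}, \qquad
    |f'(z)| \le \frac{\operatorname{Re} f(z)}{\operatorname{Re} z},

the second inequality following from the first by letting w \to z.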

Keywords: Schwarz lemma, Julia-Wolff-Carathéodory lemma, analytic function, normalization condition, Carathéodory function

Procedia PDF Downloads 169
954 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the numbers of observations in some cells are much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher than those of the other cells, and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for doubly inflated multivariate count data. To illustrate the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.
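
The construction can be sketched with an illustrative sampler (the regression layer with log-linked Poisson means is omitted; all parameter values below are assumptions): a Gaussian copula couples two Poisson margins, and two chosen cells are then inflated.

    import numpy as np
    from scipy.stats import norm, poisson

    rng = np.random.default_rng(5)
    n, rho, mu = 5000, 0.6, (3.0, 5.0)
    p1, p2 = 0.10, 0.05                      # inflation probabilities
    cell1, cell2 = (0, 0), (2, 2)            # the two inflated cells

    # Gaussian copula: correlated normals -> uniforms -> Poisson quantiles
    zz = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u = norm.cdf(zz)
    counts = np.column_stack([poisson.ppf(u[:, j], mu[j])
                              for j in (0, 1)]).astype(int)

    # double inflation: overwrite a draw with one of the two inflated cells
    pick = rng.choice(3, size=n, p=[p1, p2, 1 - p1 - p2])
    counts[pick == 0] = cell1
    counts[pick == 1] = cell2
    print("relative frequency of cell1:", np.mean((counts == cell1).all(axis=1)))
    print("relative frequency of cell2:", np.mean((counts == cell2).all(axis=1)))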

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 134
953 A Stepwise Approach to Automate the Search for Optimal Parameters in Seasonal ARIMA Models

Authors: Manisha Mukherjee, Diptarka Saha

Abstract:

Reliable forecasts of univariate time series data are often necessary in several contexts, and ARIMA models are quite popular among practitioners in this regard. Hence, choosing correct parameter values for ARIMA is a challenging yet imperative task. Thus, a stepwise algorithm is introduced to provide automatic and robust estimates for the parameters (p, d, q)(P, D, Q) used in seasonal ARIMA models. This process focuses on improving the overall quality of the estimates, and it alleviates the problems induced by the unidimensional nature of currently used methods such as auto.arima. The fast and automated search of the parameter space also ensures reliable estimates of the parameters that possess several desirable qualities, consequently resulting in higher test accuracy, especially in the case of noisy data. After vigorous testing on real as well as simulated data, the algorithm not only performs better than current state-of-the-art methods, it also completely obviates the need for human intervention due to its automated nature.
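
The skeleton of such a search can be illustrated with a plain AIC grid over the seasonal orders (the paper's stepwise algorithm is richer than this exhaustive toy version; the data and order ranges are illustrative):

    import itertools
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    t = np.arange(120)
    y = 10 + np.sin(t * 2 * np.pi / 12) + rng.normal(0, 0.3, 120)   # monthly toy series

    best_aic, best_orders = np.inf, None
    for p, d, q, P, D, Q in itertools.product(range(2), repeat=6):
        try:
            fit = sm.tsa.SARIMAX(y, order=(p, d, q),
                                 seasonal_order=(P, D, Q, 12)).fit(disp=False)
        except (ValueError, np.linalg.LinAlgError):
            continue                     # skip fits that fail to converge cleanly
        if fit.aic < best_aic:
            best_aic, best_orders = fit.aic, (p, d, q, P, D, Q)
    print("best AIC:", round(best_aic, 2), " (p,d,q,P,D,Q):", best_orders)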

Keywords: time series, ARIMA, auto.arima, ARIMA parameters, forecast, R function

Procedia PDF Downloads 132
952 The Influence of Beta Shape Parameters in Project Planning

Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou

Abstract:

Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but it nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggests that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and to measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
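
A toy version of the simulation (an assumed three-activity network and illustrative Beta shape parameters): activity durations are sampled from scaled Beta distributions, the completion time is the longest path, and the criticality index of an activity is the share of runs in which it lies on that path.

    import numpy as np

    rng = np.random.default_rng(9)
    reps = 10_000
    # toy project: A and B run in parallel, then C; completion = max(A, B) + C
    # each duration is a Beta(a, b) rescaled to [lo, hi]; skewed shapes are deliberate
    acts = {'A': (2.0, 5.0, 4, 8),    # right-skewed
            'B': (5.0, 2.0, 3, 9),    # left-skewed
            'C': (2.0, 2.0, 2, 4)}    # symmetric
    d = {k: lo + (hi - lo) * rng.beta(a, b, reps)
         for k, (a, b, lo, hi) in acts.items()}
    T = np.maximum(d['A'], d['B']) + d['C']

    print("mean completion time:", T.mean().round(2))
    print("95th percentile:     ", np.percentile(T, 95).round(2))
    print("criticality of A:    ", (d['A'] >= d['B']).mean().round(3))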

Keywords: beta distribution, PERT, Monte Carlo simulation, skewness, project completion time distribution

Procedia PDF Downloads 120
951 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions

Authors: Hannah F. Opayinka, Adedayo A. Adepoju

Abstract:

This study extends a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed lognormal and Pareto distributions respectively, with finite variances. The performance of the two techniques was compared using the standard error (SE) and root mean square error (RMSE). The findings showed that mmoon outperformed the moon bootstrap, in terms of smaller SEs and RMSEs, for all the sample sizes considered in the two datasets.
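
The baseline moon bootstrap can be sketched as follows (the paper's mmoon modification is not reproduced; m = n^0.7 is an illustrative resample size satisfying m -> infinity with m/n -> 0):

    import numpy as np

    rng = np.random.default_rng(13)
    x = rng.pareto(2.5, size=500) + 1.0        # heavy-tailed Pareto-type income data
    n = x.size
    m = int(n ** 0.7)                          # m out of n, with m << n

    boot = np.array([x[rng.integers(0, n, size=m)].mean() for _ in range(2000)])
    se = boot.std(ddof=1)
    rmse = np.sqrt(np.mean((boot - x.mean()) ** 2))
    print(f"m = {m}   SE = {se:.4f}   RMSE = {rmse:.4f}")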

Keywords: bootstrap, income data, lognormal distribution, Pareto distribution

Procedia PDF Downloads 158
950 Effects of Educational Technology Integration in Classroom Instruction to the Math Performance of Generation Z Students of a Private High School in the Philippines

Authors: May Maricel De Gracia

Abstract:

Different generations respond differently to instruction because of their diverse characteristics, learning styles and study habits; teaching strategies that were effective many years ago may not be effective now, especially for the current generation, Gen Z. Using a quantitative research design, the main goal of this paper is to determine the impact of the implementation of educational technology integration at a private high school in the Philippines on the math performance of its Junior High School (JHS) students in SY 2014-2018, based on their periodical exam performance and on their final math grades. In support, a survey on the use of technology was administered to determine the characteristics of both the students and teachers of SY 2017-2018. Another survey, regarding study habits, was also administered to the students to determine their readiness with regard to note-taking skills, time management, test-taking and preparation skills, reading, writing and math skills. Teaching strategies were recommended based on the needs of the current Gen Z JHS students. A total of 712 JHS students and 12 math teachers participated in answering the different surveys. Periodic exam means and final math grades between the school years without technology (SY 2004-2008) and with technology (SY 2014-2018) were analyzed through correlation and regression analyses. The results show that the periodic exam mean has a 35.29% impact on the students' final grades. In addition, a z-test result with p > 0.05 shows that the periodical exam results do not differ significantly between the school years without and with the integration of technology. However, with p < 0.01, a significant positive difference was observed in the final math grades of students between the school years without technology integration and with technology integration.

Keywords: classroom instruction, technology, generation z, math performance

Procedia PDF Downloads 124
949 Forecasting Issues in Energy Markets within a Reg-ARIMA Framework

Authors: Ilaria Lucrezia Amerise

Abstract:

Electricity markets throughout the world have undergone substantial changes, and accurate, reliable, clear and comprehensible modeling and forecasting of different variables (loads and prices in the first instance) have achieved increasing importance. In this paper, we describe the actual state of the art, focusing on reg-SARIMA methods, which have proven to be flexible enough to accommodate electricity price and load behavior satisfactorily. More specifically, we discuss: 1) the dichotomy between point and interval forecasts; 2) the difficult choice between stochastic predictors (e.g., climatic variation) and deterministic predictors (e.g., calendar variables); 3) the confrontation between modelling a single aggregate time series and creating separate, potentially different models for sub-series. The noteworthy point that we would like to bring out is that prices and loads require different approaches that appear irreconcilable, even though they must be made reconcilable for the interests and activities of energy companies.

Keywords: interval forecasts, time series, electricity prices, reg-SARIMA methods

Procedia PDF Downloads 108
948 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The receiver operating characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased; there are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it has a problem of variability, and, as it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Besides, since the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting a specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods across scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
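
A kernel-smoothed ROC curve without the boundary correction can be sketched as follows (Gaussian KDEs with default bandwidths on simulated scores; the paper's proposal modifies the kernel near the boundary): the smoothed survival functions of the two groups give sensitivity against 1 - specificity.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    healthy = rng.normal(0.0, 1.0, 80)         # non-diseased test scores
    diseased = rng.normal(1.2, 1.0, 60)        # diseased test scores

    kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)
    thresholds = np.linspace(-4, 6, 400)
    # smoothed survival functions P(score > c) at each threshold c
    fpr = np.array([kde_h.integrate_box_1d(c, np.inf) for c in thresholds])
    tpr = np.array([kde_d.integrate_box_1d(c, np.inf) for c in thresholds])

    auc = np.trapz(tpr[::-1], fpr[::-1])       # area under the smoothed curve
    print("smoothed AUC:", round(float(auc), 3))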

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 125
947 The Permutation of Symmetric Triangular Equilateral Group in the Cryptography of Private and Public Key

Authors: Fola John Adeyeye

Abstract:

In this paper, we propose a private- and public-key cryptosystem based on the symmetric group Pn and validate its theoretical formulation. The proposed system benefits from the algebraic properties of Pn, such as noncommutativity, high logical and computational speed, and high flexibility in selecting keys, which makes the discrete permutation multiplier logic (DPML) resistant to attack by algorithms such as Pohlig-Hellman. One of the advantages of this scheme is that it explores all the possible triangular symmetries. Against these properties, the only disadvantage is that the law of permutation multiplication only allows an operation from left to right. Many other cryptosystems can be transformed into their symmetric-group form.

Keywords: cryptosystem, private and public key, DPML, symmetric group Pn

Procedia PDF Downloads 175
946 Three-Dimensional Generalized Thermoelasticity with Variable Thermal Conductivity

Authors: Hamdy M. Youssef, Mowffaq Oreijah, Hunaydi S. Alsharif

Abstract:

In this paper, a three-dimensional model of generalized thermoelasticity with one relaxation time and variable thermal conductivity has been constructed. The resulting non-dimensional governing equations, together with Laplace and double Fourier transform techniques, have been applied to a three-dimensional half-space subjected to thermal loading with a rectangular pulse and traction-free in the directions of the principal coordinates. The inverses of the double Fourier transforms and the Laplace transforms have been obtained numerically. Numerical results for the temperature increment, the invariant stress, the invariant strain, and the displacement are represented graphically. The variability of the thermal conductivity has significant effects on the thermal and mechanical waves.
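
A standard device for handling temperature-dependent conductivity in such models (shown for orientation; the paper's exact treatment may differ) is the Kirchhoff-type mapping

    \theta = \frac{1}{k_0} \int_{T_0}^{T} k(\xi)\, d\xi,

often combined with a linear law k(T) = k_0 (1 + K_1 T); this linearizes the conduction term so that the transformed equations become amenable to the Laplace and double Fourier transform treatment used in the paper.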

Keywords: thermoelasticity, thermal conductivity, Laplace transforms, Fourier transforms

Procedia PDF Downloads 200