Search results for: asymptotic analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21604

21574 Supersonic Flow around a Dihedral Airfoil: Modeling and Experimental Investigation

Authors: A. Naamane, M. Hasnaoui

Abstract:

Numerical modeling of fluid flows, whether compressible or incompressible, laminar or turbulent, makes a considerable contribution to the scientific and industrial fields. However, the development of an approximate model of a supersonic flow requires the introduction of specific and more precise techniques and methods. For this purpose, the object of this paper is to model a supersonic flow of inviscid fluid around a dihedral airfoil. Based on thin-airfoil theory and the non-dimensional stationary Steichen equation of a two-dimensional supersonic flow in isentropic evolution, we obtained a solution for the downstream velocity potential of the oblique shock at second order in the relative thickness, which characterizes the perturbation parameter. This result was obtained by asymptotic analysis and the method of characteristics. In order to validate our model, the results are discussed in comparison with theoretical and experimental results. Firstly, this comparison has shown that the results of our model are quantitatively acceptable relative to the existing theoretical results. Finally, an experimental study was conducted using the AF300 supersonic wind tunnel, in which we considered the incident upstream Mach number over a symmetrical dihedral airfoil wing. The agreement of the downstream Mach numbers predicted by our model with the existing theoretical data (relative margin between 0.07% and 4%) and with the experimental results (concordance for deflection angles between 1° and 11°) supports the accuracy of our model.
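
As a point of reference for the theoretical comparison described above, the classical oblique-shock relations for a wedge of half-angle θ can be evaluated numerically. The sketch below (Python, with γ = 1.4 and M₁ = 2.0 assumed; not the authors' asymptotic model) solves the θ-β-M relation for the weak shock and returns the downstream Mach number.

```python
# Classical oblique-shock relations for a wedge (dihedral) half-angle theta:
# a hedged reference calculation, not the authors' asymptotic model.
import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def theta_from_beta(M1, beta):
    """Flow deflection produced by a shock of angle beta (theta-beta-M relation)."""
    return np.arctan(2.0 / np.tan(beta) * (M1**2 * np.sin(beta)**2 - 1.0)
                     / (M1**2 * (GAMMA + np.cos(2.0 * beta)) + 2.0))

def weak_shock_beta(M1, theta):
    """Weak-solution shock angle for deflection theta (radians)."""
    mu = np.arcsin(1.0 / M1)                                # Mach angle: lower bound
    betas = np.linspace(mu + 1e-6, np.pi / 2 - 1e-6, 2000)
    b_max = betas[np.argmax(theta_from_beta(M1, betas))]    # beta of maximum deflection
    return brentq(lambda b: theta_from_beta(M1, b) - theta, mu + 1e-6, b_max)

def downstream_mach(M1, theta):
    """Downstream Mach number behind the weak oblique shock."""
    beta = weak_shock_beta(M1, theta)
    Mn1 = M1 * np.sin(beta)                                 # upstream normal component
    Mn2 = np.sqrt((1.0 + 0.5 * (GAMMA - 1.0) * Mn1**2)
                  / (GAMMA * Mn1**2 - 0.5 * (GAMMA - 1.0)))
    return Mn2 / np.sin(beta - theta)

# Deflection angles in the 1-11 degree range quoted above, M1 = 2.0 (assumed)
for deg in (1, 5, 11):
    print(f"theta = {deg:2d} deg -> M2 = {downstream_mach(2.0, np.radians(deg)):.3f}")
```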

Keywords: asymptotic modelling, dihedral airfoil, supersonic flow, supersonic wind tunnel

Procedia PDF Downloads 117
21573 Refined Procedures for Second Order Asymptotic Theory

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions; an application of these techniques to obtain improved inferences for the weak-instruments problem is considered. It is well established that instrumental variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. In our application, finite-sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
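
For intuition about how such corrections behave, here is a minimal sketch of a one-term Edgeworth correction for the standardized sample mean of exponential data, compared against a Monte Carlo reference and the first-order normal approximation. It illustrates the flavour of the expansions only; it is not the authors' IV-estimator expansion.

```python
# One-term Edgeworth correction for the standardized sample mean (illustrative).
import numpy as np
from scipy.stats import norm

def edgeworth_cdf(x, skew, n):
    """First-order Edgeworth approximation to P(sqrt(n)*(Xbar - mu)/sigma <= x)."""
    return norm.cdf(x) - norm.pdf(x) * skew / (6.0 * np.sqrt(n)) * (x**2 - 1.0)

rng = np.random.default_rng(0)
n, reps = 10, 200_000
# Exponential(1): mu = sigma = 1, skewness = 2 (assumed test case)
z = np.sqrt(n) * (rng.exponential(1.0, size=(reps, n)).mean(axis=1) - 1.0)

for x in (-1.5, 0.0, 1.5):
    print(f"x={x:+.1f}  MC={np.mean(z <= x):.4f}  "
          f"normal={norm.cdf(x):.4f}  edgeworth={edgeworth_cdf(x, 2.0, n):.4f}")
```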

Keywords: Edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments

Procedia PDF Downloads 266
21572 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces in cutting and the hole shape in drilling, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism is numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there now is a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as of the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer computational resources, so that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependence on individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
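
A toy metamodeling workflow along these lines might look as follows: sample the parameter space, evaluate a fast reduced model, fit a surrogate, and read the optimum off a process map. The reduced_model function below is a purely hypothetical placeholder, not the actual ablation model.

```python
# Toy metamodeling workflow: parameter sweep -> surrogate fit -> process-map optimum.
import numpy as np

rng = np.random.default_rng(1)

def reduced_model(power, speed):
    """Hypothetical quality criterion returned by a fast reduced model (placeholder)."""
    return -(power - 3.0)**2 - 0.5 * (speed - 1.5)**2 + 0.1 * power * speed

# Parameter sweep (the data set; 2-D here for illustration)
P = rng.uniform(1.0, 5.0, 200)
V = rng.uniform(0.5, 3.0, 200)
y = reduced_model(P, V)

# Quadratic surrogate fitted by least squares
X = np.column_stack([np.ones_like(P), P, V, P**2, V**2, P * V])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the surrogate on a grid (process map) and locate its maximum
pp, vv = np.meshgrid(np.linspace(1, 5, 101), np.linspace(0.5, 3, 101))
G = np.column_stack([np.ones(pp.size), pp.ravel(), vv.ravel(),
                     pp.ravel()**2, vv.ravel()**2, pp.ravel() * vv.ravel()]) @ coef
i = np.argmax(G)
print(f"surrogate optimum near power={pp.ravel()[i]:.2f}, speed={vv.ravel()[i]:.2f}")
```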

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 200
21571 Simulation of Dynamic Behavior of Seismic Isolators Using a Parallel Elasto-Plastic Model

Authors: Nicolò Vaiana, Giorgio Serino

Abstract:

In this paper, a one-dimensional (1D) Parallel Elasto-Plastic Model (PEPM), able to simulate the uniaxial dynamic behavior of seismic isolators having a continuously decreasing tangent stiffness with increasing displacement, is presented. The parallel modeling concept is applied to discretize the continuously decreasing tangent stiffness function, which makes it possible to simulate the dynamic behavior of seismic isolation bearings by putting linear elastic and nonlinear elastic-perfectly plastic elements in parallel. The mathematical model has been validated by comparing the experimental force-displacement hysteresis loops, obtained by testing a helical wire rope isolator and a recycled rubber-fiber reinforced bearing, with those predicted numerically. Good agreement between the simulated and experimental results shows that the proposed model can be an effective numerical tool to predict the force-displacement relationship of seismic isolators within relatively large displacements. Compared to the widely used Bouc-Wen model, the proposed one avoids the numerical solution of a first-order nonlinear ordinary differential equation at each time step of a nonlinear time history analysis, thus reducing the computational effort, and requires the evaluation of only three model parameters from experimental tests, namely the initial tangent stiffness, the asymptotic tangent stiffness, and a parameter defining the transition from the initial to the asymptotic tangent stiffness.
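
The parallel idea can be sketched in a few lines: one linear spring plus several elastic-perfectly plastic elements in parallel yields a softening hysteresis loop whose initial tangent stiffness is the sum of all element stiffnesses and whose asymptotic tangent stiffness is that of the linear spring alone. The parameter values below are illustrative, not calibrated to the tested isolators.

```python
# Minimal parallel elasto-plastic sketch: linear spring + EPP elements in parallel.
import numpy as np

def pepm_force(u_hist, k_lin, k_epp, f_y):
    """Return the force history for a given displacement history u_hist."""
    up = np.zeros(len(k_epp))           # plastic displacement of each EPP element
    forces = []
    for u in u_hist:
        f = k_lin * u                   # linear elastic contribution
        for i, (k, fy) in enumerate(zip(k_epp, f_y)):
            trial = k * (u - up[i])     # elastic trial force of element i
            if abs(trial) > fy:         # yielding: clip the force, update plastic slip
                trial = np.sign(trial) * fy
                up[i] = u - trial / k
            f += trial
        forces.append(f)
    return np.array(forces)

# Sinusoidal displacement cycles of increasing amplitude
t = np.linspace(0.0, 4.0, 2001)
u = 0.05 * t * np.sin(2.0 * np.pi * t)

F = pepm_force(u, k_lin=50.0,
               k_epp=np.array([400.0, 200.0, 100.0]),
               f_y=np.array([2.0, 3.0, 4.0]))
# Initial tangent stiffness = 50 + 400 + 200 + 100; asymptotic tangent stiffness = 50.
print("peak force:", round(F.max(), 3))
```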

Keywords: base isolation, earthquake engineering, parallel elasto-plastic model, seismic isolators, softening hysteresis loops

Procedia PDF Downloads 264
21570 A Semi-Analytical Method for Analysis of the Axially Symmetric Problem on Indentation of a Hot Circular Punch into an Arbitrarily Nonhomogeneous Halfspace

Authors: S. Aizikovich, L. Krenev, Y. Tokovyy, Y. C. Wang

Abstract:

An approximate analytical-numerical solution to the axisymmetric problem of the thermo-mechanical indentation of a flat cylindrical punch into an arbitrarily non-homogeneous elastic half-space is constructed by making use of the bilateral asymptotic method. The key point of this method lies in the evaluation of the kernels of the obtained integral equations by means of a numerical technique. Once the structure of the kernel is defined, it is then approximated by an analytical expression of a special kind so that the solution of the integral equation can be achieved analytically. This allows for the construction of the solution in an analytical form, which is convenient for analysis of the mechanical effects associated with the arbitrarily presumed non-homogeneity of the material.

Keywords: contact problem, circular punch, arbitrarily-nonhomogeneous halfspace

Procedia PDF Downloads 501
21569 The New Propensity Score Method and Assessment of Propensity Score: A Simulation Study

Authors: Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner

Abstract:

Propensity score (PS) methods have recently become the standard analysis tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore bias the estimation of the treatment effect on the outcome. Because of the dangers of discretizing continuous variables, the focus of this paper is on how variation in the cut-points or boundaries affects the average treatment effect obtained with the stratification PS method. In this study, we develop a new methodology to improve the efficiency of the PS analysis through stratification, supported by a simulation study. We also explore the theoretical properties of the empirical distribution of the average treatment effect, including its asymptotic distribution, variance estimation, and 95% confidence intervals.
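
A hedged sketch of the baseline stratification estimator (not the new methodology of the paper): estimate the propensity score by logistic regression, cut it at quantile boundaries, and average the within-stratum differences in outcome means; varying the number of strata shifts the cut-points.

```python
# Propensity-score stratification on simulated data (illustrative baseline only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                                   # confounders
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_treat)                                  # treatment assignment
y = 1.0 * z + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)    # true effect = 1

ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]    # estimated propensity score

def stratified_ate(ps, z, y, n_strata=5):
    cuts = np.quantile(ps, np.linspace(0, 1, n_strata + 1))   # cut-points (boundaries)
    ate, weight = 0.0, 0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        in_s = (ps >= lo) & (ps <= hi) if hi == cuts[-1] else (ps >= lo) & (ps < hi)
        if z[in_s].sum() and (1 - z[in_s]).sum():             # need both groups present
            diff = y[in_s][z[in_s] == 1].mean() - y[in_s][z[in_s] == 0].mean()
            ate += in_s.sum() * diff
            weight += in_s.sum()
    return ate / weight

for k in (3, 5, 10):   # varying the number of strata shifts the cut-points
    print(f"{k} strata: ATE estimate = {stratified_ate(ps, z, y, k):.3f}")
```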

Keywords: propensity score, stratification, empirical distribution, average treatment effect

Procedia PDF Downloads 80
21568 On Unification of the Electromagnetic, Strong and Weak Interactions

Authors: Hassan Youssef Mohamed

Abstract:

In this paper, we present new wave equations, and by using these equations, we conclude that the strong force and the weak force are not fundamental but are quantum effects of electromagnetism. This result differs entirely from the current scientific understanding of the strong and weak interactions. We therefore introduce three pieces of evidence for our theory. First, we prove the asymptotic freedom phenomenon in the strong force by using our model. Second, we derive the nuclear shell model as an approximation of our model. Third, we prove that the leptons do not participate in the strong interactions, and we prove the short ranges of the weak and strong interactions. Thus, our model is consistent with the current understanding of physics. Finally, we introduce the electron-positron model as the basic ingredient for protons, neutrons, and all matter, so that we can study all particle interactions and nuclear interactions as many-body problems of electrons and positrons. We also prove the violation of parity conservation in the weak interaction as further evidence for our theory, and we calculate the average binding energy per nucleon.

Keywords: new wave equations, the strong force, the grand unification theory, hydrogen atom, weak force, the nuclear shell model, the asymptotic freedom, electron-positron model, the violation of parity conservation, the binding energy

Procedia PDF Downloads 161
21567 An Analysis of Conditions for Efficiency Gains in Large ICEs Using Cycling

Authors: Bauer Peter, Murillo Jenny

Abstract:

This paper investigates the bounds of achievable fuel efficiency improvements in engines due to cycling between two operating points, assuming a series hybrid configuration. It is shown that for linear bsfc dependencies (as a function of power), cycling is only beneficial if the average power need is smaller than the power at the optimal bsfc value. Exact expressions for the fuel efficiency gains relative to the constant output power case are derived. This asymptotic analysis is then extended to the case where transient losses due to a change in the operating point are also considered. The case of the boundary bsfc trajectory, where constant power application and cycling yield the same fuel consumption, is investigated. It is shown that the locus of boundary bsfc locations of the second, non-optimal operating point is hyperbolic. The analysis of the boundary case makes it possible to evaluate whether cycling can be beneficial for a particular engine. The introduced concepts are illustrated through a number of real-world examples, i.e., large production diesel engines in series hybrid configurations.
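
The linear-bsfc argument can be checked numerically with a hedged sketch: compare the fuel consumed when cycling between two operating points (neglecting transient losses) with running constantly at the same average power. The bsfc line and power levels below are illustrative, not the paper's engine data.

```python
# Cycling vs. constant-power fuel use for a linear bsfc (illustrative numbers).
def bsfc(P):
    # Linear bsfc in g/kWh, decreasing toward an assumed optimum at 60 kW
    return 260.0 - 1.0 * P

def fuel_rate(P):
    # Fuel mass flow in g/h
    return bsfc(P) * P

def fuel_cycling(P_avg, P1, P2):
    d = (P_avg - P2) / (P1 - P2)     # duty cycle on P1 so the average power is P_avg
    return d * fuel_rate(P1) + (1.0 - d) * fuel_rate(P2)

P1, P2 = 10.0, 60.0                  # two operating points (kW); P2 is the optimal-bsfc point
for P_avg in (20.0, 35.0, 50.0):
    const = fuel_rate(P_avg)
    cyc = fuel_cycling(P_avg, P1, P2)
    print(f"P_avg={P_avg:4.1f} kW  constant={const:7.1f} g/h  "
          f"cycling={cyc:7.1f} g/h  gain={100*(const-cyc)/const:+5.2f}%")
```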

Keywords: cycling, efficiency, bsfc, series hybrid, diesel, operating point

Procedia PDF Downloads 490
21566 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System

Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong

Abstract:

In this paper, we investigate the adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain the robust control law of uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system and to ensure the convergence properties of the proposed iterative algorithm. Moreover, the global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and simulation results demonstrate the performance of the proposed algorithm for an inverted pendulum system.
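
For reference, the model-based iteration that ADP schemes approximate from data is Kleinman's policy iteration for the continuous-time LQR problem. The sketch below applies it to an assumed, normalized inverted-pendulum linearization; it is not the authors' robust ADP law for uncertain systems.

```python
# Kleinman's policy iteration for continuous-time LQR (model-based reference for ADP).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # normalized inverted-pendulum linearization (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[2.0, 2.0]])               # initial stabilizing gain (closed-loop poles at -1)
for _ in range(10):
    Ac = A - B @ K
    # Policy evaluation: solve Ac' P + P Ac = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)  # reference: exact Riccati solution
print("||P - P_are|| =", np.linalg.norm(P - P_are))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```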

Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input-to-state stability, ISS, inverted pendulum

Procedia PDF Downloads 176
21565 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest, and they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function, based on projection theory, and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (smcure package) and from the profile likelihood score function (our approach) are similar and comparable. The numerical results of our proposed method are also shown using the melanoma data from the smcure R package, and we compare the results with the output obtained from the smcure package.

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 122
21564 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure with a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on a certain a priori scaling between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansions to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
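
As a small analogy for the truncation idea, one can expand a through-thickness profile in Legendre polynomials and watch a quadratic (energy-like) error decay with the truncation order; the profile below is hypothetical and the sketch is purely illustrative, not the paper's convergence proof.

```python
# Truncated Legendre expansion of a hypothetical through-thickness profile.
import numpy as np
from numpy.polynomial import legendre as L

z = np.linspace(-1.0, 1.0, 401)           # normalized thickness coordinate
u = np.sinh(1.5 * z) + 0.3 * z**2         # hypothetical through-thickness displacement profile

for order in (1, 2, 3, 5, 7):
    coef = L.legfit(z, u, order)          # least-squares Legendre fit up to the given order
    err = np.mean((u - L.legval(z, coef))**2)   # quadratic (energy-like) error measure
    print(f"truncation order {order}: mean squared error = {err:.2e}")
```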

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 96
21563 THRAP2 Gene Identified as a Candidate Susceptibility Gene of Thyroid Autoimmune Diseases Pedigree in Tunisian Population

Authors: Ghazi Chabchoub, Mouna Feki, Mohamed Abid, Hammadi Ayadi

Abstract:

Autoimmune thyroid diseases (AITDs), including Graves' disease (GD) and Hashimoto's thyroiditis (HT), are inherited as complex traits. Genetic factors associated with AITDs have been tentatively identified by candidate gene and genome scanning approaches. We analysed three intragenic microsatellite markers in the thyroid hormone receptor associated protein 2 gene (THRAP2), mapped near the D12S79 marker, which has a potential role in immune function and inflammation [THRAP2-1 (TG)n, THRAP2-2 (AC)n and THRAP2-3 (AC)n]. Our study population comprised 12 patients affected with AITDs belonging to a multiplex Tunisian family with a high prevalence of AITDs. Fluorescent genotyping was carried out on ABI 3100 sequencers (Applied Biosystems, USA) with the use of GENESCAN for semi-automated fragment sizing and the GENOTYPER peak-calling software. Statistical analysis was performed using the non-parametric LOD score (NPL) computed by the Merlin software. Merlin outputs non-parametric NPLall (Z) and LOD scores and their corresponding asymptotic P values. The analysis of the three intragenic markers in the THRAP2 gene revealed strong evidence for linkage (NPL=3.68, P=0.00012). Our results suggest a possible role of the THRAP2 gene in AITDs susceptibility in this family.

Keywords: autoimmunity, autoimmune disease, genetic, linkage analysis

Procedia PDF Downloads 105
21562 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity

Authors: M. O. Durojaye, J. T. Agee

Abstract:

This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent function approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), which is a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited to the electrical conductivity. The numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions.
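
A minimal method-of-lines sketch for a simplified, non-dimensional equation of this type, u_t = u_xx + s(u) with a hyperbolic-tangent switch s(u) standing in for the conductivity term, might look as follows (not the authors' exact thermistor model).

```python
# Method of lines for u_t = u_xx + s(u): discretize in x, integrate the ODE system.
import numpy as np
from scipy.integrate import solve_ivp

N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def source(u):
    # Conductivity-like switch that turns off as u passes 1 (hyperbolic tangent form)
    return 0.5 * (1.0 - np.tanh(10.0 * (u - 1.0)))

def rhs(t, u):
    du = np.empty_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + source(u[1:-1])
    du[0] = du[-1] = 0.0                          # fixed (Dirichlet) boundary temperatures
    return du

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(N), method="BDF",
                t_eval=np.linspace(0.0, 2.0, 5))
print("centre temperature over time:", np.round(sol.y[N // 2], 3))
```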

Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines

Procedia PDF Downloads 306
21561 Dynamic Analysis of the Heat Transfer in the Magnetically Assisted Reactor

Authors: Tomasz Borowski, Dawid Sołoducha, Rafał Rakoczy, Marian Kordas

Abstract:

The application of a magnetic field is essential for a wide range of technologies and processes (e.g., magnetic hyperthermia, bioprocessing). From the practical point of view, bioprocess control is often limited to the regulation of temperature at constant values favourable to microbial growth. The main aim of this study is to determine the effect of various types of electromagnetic fields (i.e., static or alternating) on the heat transfer in a self-designed magnetically assisted reactor. The experimental set-up is equipped with a measuring instrument which controls the temperature of the liquid inside the container and supervises the real-time acquisition of all the experimental data coming from the sensors. Temperature signals are also sampled from the magnetic field generator. The obtained temperature profiles were mathematically described and analyzed. The parameters characterizing the response of a first-order dynamic system to a step input were obtained and discussed. For example, higher values of the time constant mean a slower increase of the signal (in this case, the temperature). After a period equal to about five time constants, the sample temperature nearly reaches its asymptotic value. This dynamical analysis allowed us to understand the heating effect under the action of various types of electromagnetic fields. Moreover, the proposed mathematical description can be used to compare the influence of different types of magnetic fields on heat transfer operations.
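
A hedged sketch of the described identification step: fit the recorded temperature to the first-order step response T(t) = T0 + dT(1 - exp(-t/tau)) and check the five-time-constant rule. Synthetic data and illustrative parameter values stand in for the acquired signals.

```python
# Fit a first-order step response to a (synthetic) temperature record.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, T0, dT, tau):
    return T0 + dT * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 3600.0, 361)                           # time in seconds
T = step_response(t, 22.0, 15.0, 600.0) + rng.normal(0.0, 0.2, t.size)

(T0, dT, tau), _ = curve_fit(step_response, t, T, p0=(20.0, 10.0, 300.0))
print(f"T0={T0:.2f} C, dT={dT:.2f} C, tau={tau:.0f} s")
print("temperature after 5 time constants:",
      round(step_response(5 * tau, T0, dT, tau), 2), "C (close to the asymptote)")
```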

Keywords: heat transfer, magnetically assisted reactor, dynamical analysis, transient function

Procedia PDF Downloads 157
21560 Instability by Weak Precession of the Flow in a Rapidly Rotating Sphere

Authors: S. Kida

Abstract:

We consider the flow of an incompressible viscous fluid in a precessing sphere whose spin and precession axes are orthogonal to each other. The flow is characterized by two non-dimensional parameters, the Reynolds number Re and the Poincaré number Po. For which values of (Re, Po) will the flow approach a steady state from an arbitrary initial condition? To answer this, we search for the instability boundary of the steady states in the whole (Re, Po) plane. Here, we focus on the rapidly rotating and weakly precessing limit, i.e., Re >> 1 and Po << 1. The steady flow is obtained by an asymptotic expansion for small ε = Po Re¹/² << 1. The flow exhibits nearly a solid-body rotation in the whole sphere except for a thin boundary layer which develops over the sphere surface. The thickness of this boundary layer is of O(δ), where δ = Re⁻¹/², except in two circular critical bands, of thickness O(δ⁴/⁵) and width O(δ²/⁵), which are located about 60° away from the spin axis. We perform a linear stability analysis of the steady flow. We assume that the disturbances are localized in the critical bands and carry out an expansion analysis in terms of ε to derive the eigenvalue problem for the growth rate of the disturbance, which is solved numerically. As the solution, we obtain an asymptote of the stability boundary, Po = 28.36 Re⁻⁰.⁸. This agrees excellently with the corresponding laboratory experiments and numerical simulations. One of the most popular instability mechanisms so far is the parametric instability, which turns out, however, not to give the correct stability boundary. The present instability is different from the parametric instability.

Keywords: boundary layer, critical band, instability, precessing sphere

Procedia PDF Downloads 139
21559 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as this is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to the material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as against the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
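
One material-dependent ingredient of such sensitivity studies can be sketched simply: the normal-incidence Fresnel reflection coefficient of a thick, non-magnetic dielectric as a function of its complex relative permittivity, swept over an assumed ±5% tolerance. This is only the material factor, not a full-wave or PO RCS computation.

```python
# Sensitivity of the normal-incidence Fresnel reflection coefficient to permittivity.
import numpy as np

def reflection_coefficient(eps_r):
    """Air-to-dielectric Fresnel coefficient at normal incidence (non-magnetic medium)."""
    n = np.sqrt(eps_r)
    return (1.0 - n) / (1.0 + n)

eps_nominal = 4.5 - 0.09j                         # illustrative complex relative permittivity
for scale in (0.95, 1.0, 1.05):                   # +/- 5 % material tolerance (assumed)
    gamma = reflection_coefficient(scale * eps_nominal)
    print(f"eps_r = {scale*eps_nominal:.3f}: |Gamma| = {abs(gamma):.4f}, "
          f"|Gamma|^2 = {abs(gamma)**2:.4f}")
```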

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 223
21558 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
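
For orientation, a classical (non-hybrid) fourth-order kernel can be built from the Gaussian kernel as K4(u) = (1/2)(3 - u²)φ(u); the sketch below uses it in a small KDE and measures a crude integrated squared error against the true density. It illustrates fourth-order kernels generally, not the hybrid beta polynomial kernels of the paper.

```python
# KDE with the classical fourth-order Gaussian kernel K4(u) = 0.5*(3 - u^2)*phi(u).
import numpy as np
from scipy.stats import norm

def k4_gaussian(u):
    # Integrates to 1 and has zero second moment, hence a fourth-order kernel
    return 0.5 * (3.0 - u**2) * norm.pdf(u)

def kde(x_grid, data, h, kernel):
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(7)
data = rng.normal(0.0, 1.0, 400)
grid = np.linspace(-4.0, 4.0, 201)

est = kde(grid, data, h=0.5, kernel=k4_gaussian)
ise = np.sum((est - norm.pdf(grid))**2) * (grid[1] - grid[0])   # crude Riemann-sum ISE
print("approximate ISE against the true N(0,1) density:", round(ise, 5))
```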

Keywords: AMISE, efficiency, fourth-order Kernels, hybrid Kernels, Kernel density estimation

Procedia PDF Downloads 58
21557 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain based Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated from the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that the simulation tools provide better results than those obtained by asymptotic approximation.
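
A minimal Python stand-in for the simulation route (the paper itself works in R and JAGS): random-walk Metropolis on the log-parameters of a generalized exponential likelihood with right censoring. The synthetic data, the weak priors, and the use of the plain generalized exponential rather than the full TPGE distribution are all assumptions of this sketch.

```python
# Random-walk Metropolis for right-censored data, generalized exponential stand-in likelihood.
import numpy as np

rng = np.random.default_rng(11)

# Synthetic right-censored survival data
t_true = rng.weibull(1.3, 150) * 2.0
cens = rng.uniform(0.5, 4.0, 150)
t_obs = np.minimum(t_true, cens)
delta = (t_true <= cens).astype(float)            # 1 = event observed, 0 = censored

def log_post(log_alpha, log_lam):
    a, lam = np.exp(log_alpha), np.exp(log_lam)
    F = (1.0 - np.exp(-lam * t_obs))**a           # CDF of the generalized exponential
    logf = np.log(a * lam) - lam * t_obs + (a - 1.0) * np.log1p(-np.exp(-lam * t_obs))
    loglik = np.sum(delta * logf + (1.0 - delta) * np.log1p(-F))
    return loglik - 0.5 * (log_alpha**2 + log_lam**2) / 10.0   # weak normal priors on logs

theta = np.zeros(2)
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.1, 2)        # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))

samples = np.array(samples[5000:])                # discard burn-in
print("posterior means (alpha, lambda):", samples.mean(axis=0).round(3))
```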

Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation

Procedia PDF Downloads 509
21556 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface

Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari

Abstract:

With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.

Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis

Procedia PDF Downloads 400
21555 Optimal Perturbation in an Impulsively Blocked Channel Flow

Authors: Avinash Nayak, Debopam Das

Abstract:

The current work implements the variational principle to find the optimal initial perturbation that provides maximum growth in an impulsively blocked channel flow. The conventional method for studying temporal stability has always been modal analysis. In most transient flows, this modal analysis is still carried out under the quasi-steady assumption, i.e., the change in the base flow is much slower than the perturbation growth rate. There are other studies where the transient analysis of time-dependent flows is done by formulating the growth of the perturbation as an initial value problem. But the perturbation growth is sensitive to the initial condition. This study intends to find the initial perturbation that provides the maximum growth at a later time. Here, the expression for the base flow of the blocked channel is derived, and the formulation is based on a two-dimensional perturbation with the stream function representing the perturbation quantity. Hence, the governing equation becomes the Orr-Sommerfeld equation. In the current context, the cost functional is defined as the ratio of the disturbance energy at a terminal time T to the initial energy, i.e., G(T) = ||q(T)||²/||q(0)||², where q is the perturbation and ||.|| denotes the chosen norm. This cost functional needs to be maximized over the initial perturbation distribution, subject to the constraint that the perturbation satisfies the basic governing equation, i.e., the Orr-Sommerfeld equation. The corresponding adjoint equation is derived and solved along with the basic governing equation in an iterative manner to provide the initial spatial shape of the perturbation that yields the maximum growth G(T). The growth rate is plotted against time, showing the development of the perturbation, which attains an asymptotic shape. The effects of various parameters, e.g., the Reynolds number, are studied in the process. Thus, the study emphasizes the use of the optimal perturbation and its growth to understand the stability characteristics of time-dependent flows. The assumption of quasi-steady analysis can be verified against these results for transient flows like the impulsively blocked channel flow.
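
For a linear autonomous system dq/dt = Lq, the same optimal-growth question has a closed-form answer: G(T) equals the squared largest singular value of the propagator exp(LT), and the optimal initial perturbation is the corresponding right singular vector. The sketch below demonstrates this on a toy non-normal matrix standing in for the Orr-Sommerfeld operator of the blocked-channel problem.

```python
# Optimal transient growth of a toy non-normal linear operator via SVD of the propagator.
import numpy as np
from scipy.linalg import expm, svd

L_op = np.array([[-1.0, 20.0],
                 [ 0.0, -2.0]])                  # stable but strongly non-normal (assumed)

for T in (0.2, 0.5, 1.0, 3.0):
    U, s, Vt = svd(expm(L_op * T))               # propagator exp(L*T) and its SVD
    # Largest singular value^2 = G(T); leading right singular vector = optimal q(0)
    print(f"T={T:3.1f}: G(T) = {s[0]**2:7.2f}, optimal q(0) = {np.round(Vt[0], 3)}")
```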

Keywords: blocked channel flow, calculus of variation, hydrodynamic stability, optimal perturbation

Procedia PDF Downloads 408
21554 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 348
21553 Curve Designing Using an Approximating 4-Point C^2 Ternary Non-Stationary Subdivision Scheme

Authors: Muhammad Younis

Abstract:

A ternary 4-point approximating non-stationary subdivision scheme is introduced that generates a family of C² limiting curves. The theory of asymptotic equivalence is used to analyze the convergence and smoothness of the scheme. The proposed scheme is compared, using different examples, with the existing 4-point ternary approximating schemes, which shows that the limit curves of the proposed scheme behave more pleasantly and can also generate conic sections.

Keywords: ternary, non-stationary, approximation subdivision scheme, convergence and smoothness

Procedia PDF Downloads 458
21552 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC & BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparison performance metrics hold as for the IID case. The time series case is complicated by far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald, as expected.

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 68
21551 Error Probability of Multi-User Detection Techniques

Authors: Komal Babbar

Abstract:

Multiuser detection is the intelligent estimation/demodulation of transmitted bits in the presence of multiple access interference (MAI). The authors present the bit error rate (BER) achieved by the linear multi-user detectors: the matched filter (which treats the MAI as AWGN), the decorrelating detector, and the MMSE detector. In this work, the authors investigate the bit error probability analysis for the matched filter, decorrelating, and MMSE detectors. This problem arises in several practical CDMA applications where the receiver may not have full knowledge of the number of active users and their signature sequences. In particular, the behavior of the MAI at the output of the multi-user detectors (MUD) is examined under various asymptotic conditions, including large signal-to-noise ratio, large near-far ratios, and a large number of users. In the last section, the authors also show MATLAB simulation results for the multiuser detection techniques, i.e., matched filter, decorrelating, and MMSE, for 2 users and 10 users.
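
A small Monte Carlo sketch of the three linear detectors for a two-user synchronous CDMA channel is given below; the cross-correlation, amplitudes, and SNR are illustrative assumptions, and the closed-form error analysis of the paper is not reproduced.

```python
# BER comparison of matched filter, decorrelating and MMSE detectors (2-user CDMA).
import numpy as np

rng = np.random.default_rng(5)
rho = 0.7
R = np.array([[1.0, rho], [rho, 1.0]])           # signature cross-correlation matrix
A = np.diag([1.0, 1.0])                          # received amplitudes (equal power assumed)
n_bits, snr_db = 200_000, 8.0
sigma2 = 10 ** (-snr_db / 10.0)

b = rng.choice([-1.0, 1.0], size=(2, n_bits))
# Matched-filter outputs: y = R A b + n, with noise covariance sigma^2 * R
noise = rng.multivariate_normal(np.zeros(2), sigma2 * R, size=n_bits).T
y = R @ A @ b + noise

detectors = {
    "matched filter": np.eye(2),                                   # treats MAI as noise
    "decorrelating":  np.linalg.inv(R),                            # removes MAI, boosts noise
    "MMSE":           np.linalg.inv(R + sigma2 * np.linalg.inv(A @ A)),
}
for name, M in detectors.items():
    b_hat = np.sign(M @ y)
    print(f"{name:15s} BER = {np.mean(b_hat != b):.5f}")
```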

Keywords: code division multiple access, decorrelating, matched filter, minimum mean square detection (MMSE) detection, multiple access interference (MAI), multiuser detection (MUD)

Procedia PDF Downloads 506
21550 Bias in the Estimation of Covariance Matrices and Optimality Criteria

Authors: Juan M. Rodriguez-Diaz

Abstract:

The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to obtain the best designs for the actual variance structure; otherwise, the loss in efficiency of the designs obtained with the traditional approach may be very important.
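
A hedged illustration of the point: with correlated errors the information matrix is M = XᵀV⁻¹X, and evaluating a D-criterion with the naive XᵀX can misstate the precision of competing designs. The straight-line model, AR(1)-type correlation, and candidate designs below are illustrative only.

```python
# D-criterion (log det of the information matrix) for two designs under correlated errors.
import numpy as np

def ar1_cov(t, rho):
    # Exponentially decaying correlation with distance between design points
    return rho ** np.abs(np.subtract.outer(t, t))

def log_det_info(t, rho):
    X = np.column_stack([np.ones_like(t), t])     # intercept + slope model
    V = ar1_cov(t, rho)
    M = X.T @ np.linalg.solve(V, X)               # information matrix X' V^{-1} X
    return np.linalg.slogdet(M)[1]

design_a = np.array([0.0, 0.1, 0.9, 1.0])         # points clustered near the end points
design_b = np.array([0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0])  # equally spaced points

for rho in (0.0, 0.6, 0.9):
    print(f"rho={rho}: log det M, end-points = {log_det_info(design_a, rho):+.3f}, "
          f"spread = {log_det_info(design_b, rho):+.3f}")
```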

Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix

Procedia PDF Downloads 419
21549 An Approximation Method for Exact Boundary Controllability of Euler-Bernoulli

Authors: A. Khernane, N. Khelil, L. Djerou

Abstract:

The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This study may be difficult, depending on the problem under consideration (geometry, control, and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation. This idea is developed in this study. As a first step, we characterize the solution by a minimization principle; we then propose a method for its resolution in order to approximate the control steering the considered system to rest at time T.

Keywords: boundary control, exact controllability, finite difference methods, functional optimization

Procedia PDF Downloads 327
21548 Role of Additional Food Resources in an Ecosystem with Two Discrete Delays

Authors: Ankit Kumar, Balram Dubey

Abstract:

This study proposes a three-dimensional prey-predator model with additional food provided to the predator individuals, including a gestation delay in the predators and a delay in supplying the additional food to the predators. It is assumed that the interaction between prey and predator follows a Holling type-II functional response. We discuss the steady states and their local and global asymptotic behavior for the non-delayed system. The Hopf-bifurcation phenomenon with respect to different parameters has also been studied. We obtained a range of the predator's tendency factor for the provided additional food in which periodic solutions occur in the system, and we have shown that the oscillations can be eliminated from the system by increasing the tendency factor. Moreover, the existence of periodic solutions via Hopf-bifurcation is shown with respect to both delays. Our analysis shows that both delays play an important role in governing the dynamics of the system: they can change stable behavior into unstable behavior. The direction and stability of the Hopf-bifurcation are also investigated through the normal form theory and the center manifold theorem. Lastly, some numerical simulations and graphical illustrations have been carried out to validate our analytical findings.
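
A minimal sketch of the non-delayed skeleton, a Holling type-II prey-predator system integrated with solve_ivp, is given below; the additional-food terms and both discrete delays of the paper are omitted, and the parameter values are assumed.

```python
# Holling type-II prey-predator system (no delays, no additional-food terms).
import numpy as np
from scipy.integrate import solve_ivp

r, K, c, a, e, d = 1.0, 10.0, 1.0, 2.0, 0.6, 0.25   # assumed parameters

def model(t, s):
    x, y = s                                   # prey, predator densities
    holling2 = c * x / (a + x)                 # Holling type-II functional response
    return [r * x * (1.0 - x / K) - holling2 * y,
            e * holling2 * y - d * y]

sol = solve_ivp(model, (0.0, 200.0), [5.0, 2.0], rtol=1e-8)
x_end, y_end = sol.y[:, -1]
print(f"state at t=200: prey={x_end:.3f}, predator={y_end:.3f}")
```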

Keywords: additional food, gestation delay, Hopf-bifurcation, prey-predator

Procedia PDF Downloads 112
21547 A Mathematical Model for Hepatitis B Virus Infection and the Impact of Vaccination on Its Dynamics

Authors: T. G. Kassem, A. K. Adunchezor, J. P. Chollom

Abstract:

This paper describes a mathematical model developed to predict the dynamics of hepatitis B virus (HBV) infection and to evaluate the potential impact of vaccination and treatment on those dynamics. We used a compartmental model expressed by a set of differential equations based on the characteristics of HBV transmission. With these, we find the threshold quantity R0 and then establish the local asymptotic stability of the disease-free equilibrium and the endemic equilibrium. Furthermore, we establish the global stability of the disease-free and endemic equilibria.

Keywords: hepatitis B virus, epidemiology, vaccination, mathematical model

Procedia PDF Downloads 305
21546 Robust Variogram Fitting Using Non-Linear Rank-Based Estimators

Authors: Hazem M. Al-Mofleh, John E. Daniels, Joseph W. McKean

Abstract:

In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued this method since its inception. Even a 3-parameter model, like the variogram, can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
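
For contrast, the conventional baseline the paper improves on is a non-linear weighted least-squares fit of a variogram model to empirical semivariances, weighted by pair counts; the empirical values below are hypothetical, and the rank-based estimator itself is not reproduced.

```python
# Conventional weighted least-squares fit of an exponential variogram model.
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, sill, rng_):
    return nugget + sill * (1.0 - np.exp(-h / rng_))

# Hypothetical empirical variogram: lag distances, semivariances, pair counts
lags    = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
gamma   = np.array([0.22, 0.41, 0.55, 0.68, 0.83, 0.90, 0.97, 1.01])
n_pairs = np.array([40, 75, 90, 110, 130, 120, 100, 80])

params, _ = curve_fit(exp_variogram, lags, gamma,
                      p0=(0.1, 1.0, 2.0),
                      sigma=1.0 / np.sqrt(n_pairs))   # weight lags by pair counts
nugget, sill, rng_ = params
print(f"nugget={nugget:.3f}, partial sill={sill:.3f}, range={rng_:.3f}")
```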

Keywords: asymptotic relative efficiency, non-linear rank-based, rank estimates, variogram

Procedia PDF Downloads 409
21545 Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations

Authors: Joseph Thomas Eghwerido, Julian I. Mbegbu

Abstract:

In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore and predict the Earth's response to natural and human-induced environmental changes. In the analysis of space-time ecological and environmental studies, the spatial parameters of interest are always heterogeneous, which often negates the assumption of stationarity. Hence, modelling the dispersion and transport of atmospheric pollutants, landscape or topographic effects, and weather patterns depends on a good estimate of the spatial covariance. The generalized linear mixed model, although linear in the expected-value parameters, has a likelihood that varies nonlinearly as a function of the covariance parameters. As a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need good estimates of the presently sampled variables. The geostatistical methods for solving this spatial problem assume that the covariance is stationary (locally defined) and uniform in space, which is often not valid because spatial processes frequently exhibit nonstationary, globally defined covariance. We consider different existing methods for the solution of the spatial covariance of a space-time process at unsampled locations, where the stationary covariance changes with location for multiple time sets and possesses some asymptotic properties.
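
The stationary baseline whose limitations are discussed here can be sketched compactly: ordinary kriging under an assumed stationary exponential covariance, predicting the field at an unsampled location from the sampled values.

```python
# Ordinary kriging prediction at an unsampled location (stationary exponential covariance).
import numpy as np

def exp_cov(d, sill=1.0, rng_=2.0):
    return sill * np.exp(-d / rng_)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 10.0, size=(25, 2))                 # sampled locations
z = np.sin(pts[:, 0] / 2.0) + 0.1 * rng.normal(size=25)    # observed values (synthetic)
x0 = np.array([5.0, 5.0])                                  # unsampled target location

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = exp_cov(D)                                             # covariance between samples
c0 = exp_cov(np.linalg.norm(pts - x0, axis=1))             # covariance to the target

# Ordinary kriging system with a Lagrange multiplier enforcing unit weight sum
n = len(z)
A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([c0, [1.0]])
w = np.linalg.solve(A, b)[:n]

print("kriging prediction at (5, 5):", round(float(w @ z), 3))
```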

Keywords: parametric, nonstationary, kernel, kriging

Procedia PDF Downloads 237