Search results for: normal inverse gaussian distribution

7855 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. To predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous, anisotropic material such as skin tissue, analytical approaches cannot provide solutions, and numerical simulation becomes necessary. In this work, the uniaxial tensile test and an FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800® tensile machine. The uniaxial tensile test was simulated with the FEM, and the inverse optimization approach (or inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.
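
A minimal sketch of the inverse calibration loop described above: model parameters are fitted by minimizing the mismatch between simulated and measured stress. Here `fem_stress` is a hypothetical one-line stand-in for the actual finite element simulation, and the "experimental" data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def fem_stress(params, strain):
    # Hypothetical stand-in for a full FEM simulation of the uniaxial
    # tensile test; here a simple nonlinear stress-strain law.
    c1, c2 = params
    return c1 * strain + c2 * strain**3

def residuals(params, strain_exp, stress_exp):
    # Mismatch between simulated and measured stress at each strain level
    return fem_stress(params, strain_exp) - stress_exp

# Synthetic "experimental" data standing in for the tensile measurements
strain_exp = np.linspace(0.0, 0.4, 20)
stress_exp = fem_stress([0.8, 12.0], strain_exp) + np.random.normal(0, 0.01, 20)

fit = least_squares(residuals, x0=[1.0, 1.0], args=(strain_exp, stress_exp))
print("identified parameters:", fit.x)
```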

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 376
7854 Model-Based Control for Piezoelectric-Actuated Systems Using Inverse Prandtl-Ishlinskii Model and Particle Swarm Optimization

Authors: Jin-Wei Liang, Hung-Yi Chen, Lung Lin

Abstract:

In this paper, a feedforward controller is designed to eliminate the nonlinear hysteresis behavior of a piezoelectric stack actuator (PSA) driven system. The control design is based on an inverse Prandtl-Ishlinskii (P-I) hysteresis model identified using the particle swarm optimization (PSO) technique. From the identified P-I model, both the inverse P-I hysteresis model and the feedforward controller can be determined. Experimental results obtained using the inverse P-I feedforward control are compared with their counterparts using hysteresis estimates obtained from an identified Bouc-Wen model. The effectiveness of the proposed feedforward control scheme is demonstrated. To improve control performance, feedback compensation using a traditional PID scheme is integrated with the feedforward controller.
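
A sketch of the forward P-I model (a weighted superposition of play operators, assuming a zero initial operator state) and its identification. `differential_evolution` stands in for the PSO used in the paper, and the thresholds, weights, and drive signal are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def play_operator(u, r):
    # Classical play (backlash) operator with threshold r, zero initial state
    y = np.zeros_like(u)
    y[0] = max(u[0] - r, min(u[0] + r, 0.0))
    for k in range(1, len(u)):
        y[k] = max(u[k] - r, min(u[k] + r, y[k - 1]))
    return y

def pi_model(weights, u, thresholds):
    # Prandtl-Ishlinskii output: weighted superposition of play operators
    return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

# Synthetic drive signal and "measured" PSA displacement (toy ground truth)
t = np.linspace(0, 4 * np.pi, 400)
u = np.sin(t) * np.linspace(1.0, 0.3, t.size)
thresholds = np.array([0.0, 0.1, 0.2, 0.4])
y_meas = pi_model([1.0, 0.5, 0.3, 0.2], u, thresholds)

# Identify the weights; differential evolution stands in here for the PSO
# of the paper (any global optimizer fills this role).
def cost(w):
    return np.mean((pi_model(w, u, thresholds) - y_meas) ** 2)

res = differential_evolution(cost, bounds=[(0, 2)] * 4, seed=0, maxiter=100)
print("identified weights:", res.x)
```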

Keywords: Bouc-Wen hysteresis model, particle swarm optimization, Prandtl-Ishlinskii model, automation engineering

Procedia PDF Downloads 488
7853 Quasistationary States and Mean Field Model

Authors: Sergio Curilef, Boris Atenas

Abstract:

Systems with long-range interactions are very common in nature. They are observed from the atomic scale to the astronomical scale and exhibit anomalies such as inequivalence of ensembles, negative heat capacity, ergodicity breaking, nonequilibrium phase transitions, quasistationary states, and anomalous diffusion. These anomalies are exacerbated when special initial conditions are imposed; in particular, we use the so-called water bag initial conditions, which correspond to a uniform distribution. Several theoretical and practical implications are discussed here. A potential energy inspired by dipole-dipole interactions is proposed to build the dipole-type Hamiltonian mean-field model. As expected, the dynamics, obtained through molecular dynamics techniques, is novel and representative of the general behavior of systems with long-range interactions. Two plateaus emerge sequentially before equilibrium is reached, corresponding to two different quasistationary states. The first plateau is a type of quasistationary state whose lifetime depends on a power law of N, and the second plateau appears to be a true quasistationary state, as reported in the literature. The general behavior of the model, according to its dynamics and thermodynamics, is described. Using numerical simulation, we characterize the mean kinetic energy, the caloric curve, and the diffusion law through the mean square displacement. The present challenge is to characterize the distributions in phase space. Certainly, the equilibrium state is well characterized by the Gaussian distribution, but quasistationary states in general depart from any Gaussian function.
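
The diffusion law mentioned above is typically read off from the mean square displacement. A minimal sketch with toy Brownian trajectories (not the dipole-type Hamiltonian dynamics of the paper):

```python
import numpy as np

def mean_square_displacement(x, max_lag):
    # x: (n_steps, n_particles) trajectories; returns MSD(tau) per lag tau
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag)])

# Toy trajectories: Brownian-like motion. Anomalous diffusion would show
# MSD ~ t^alpha with alpha != 1 on a log-log plot.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=(10000, 50)), axis=0)
msd = mean_square_displacement(x, 100)
alpha = np.polyfit(np.log(np.arange(1, 100)), np.log(msd), 1)[0]
print("diffusion exponent alpha ~", alpha)  # ~1 for normal diffusion
```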

Keywords: dipole-type interactions, dynamics and thermodynamics, mean field model, quasistationary states

Procedia PDF Downloads 182
7852 On Block Vandermonde Matrix Constructed from Matrix Polynomial Solvents

Authors: Malika Yaici, Kamel Hariche

Abstract:

In control engineering, systems described by matrix fractions are studied through the properties of block roots, also called solvents. These solvents are usually handled in block Vandermonde matrix form. Inverses and determinants of Vandermonde and block Vandermonde matrices are used to solve problems of numerical analysis in many domains but require costly computations. Although Vandermonde matrices are well known and many methods exist to compute their inverses and determinants, generally based on interpolation techniques, methods to compute the inverse and determinant of a block Vandermonde matrix have not been well studied. In this paper, some properties of these matrices and iterative algorithms to compute the determinant and the inverse of a block Vandermonde matrix are given. These methods are deduced from the partitioned-matrix inversion and determinant computation methods. Because of the great size of these matrices, parallelization may reduce the computation cost, so a parallelization of these algorithms is proposed and validated by a comparison using algorithmic complexity.
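
For concreteness, a small sketch of assembling a block Vandermonde matrix from solvents and checking its determinant and inverse with generic dense routines; the paper's iterative partitioned algorithms would replace these, and the solvent values below are illustrative.

```python
import numpy as np

def block_vandermonde(solvents, degree):
    # Stack powers of each solvent R_i block-wise:
    # V = [[I, ..., I], [R_1, ..., R_l], [R_1^2, ..., R_l^2], ...]
    blocks = [[np.linalg.matrix_power(R, j) for R in solvents]
              for j in range(degree)]
    return np.block(blocks)

# Two 2x2 solvents of some matrix polynomial (illustrative values)
R1 = np.array([[1.0, 1.0], [0.0, 2.0]])
R2 = np.array([[3.0, 0.0], [1.0, 4.0]])
V = block_vandermonde([R1, R2], degree=2)

# Dense reference computations; the paper's partitioned iterative
# algorithms target exactly these two quantities.
print("det(V)  =", np.linalg.det(V))
print("V^-1 V ~ I:", np.allclose(np.linalg.inv(V) @ V, np.eye(4)))
```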

Keywords: block Vandermonde matrix, solvents, matrix polynomial, matrix inverse, matrix determinant, parallelization

Procedia PDF Downloads 199
7851 Atomic Layer Deposition of Metal Oxide Inverse Opals: A Promising Strategy for Photocatalytic Applications

Authors: Hamsasew Hankebo Lemago, Dóra Hessz, Tamás Igricz, Zoltán Erdélyi, Imre Miklós Szilágyi

Abstract:

Metal oxide inverse opals are a promising class of photocatalysts with a unique hierarchical structure. Atomic layer deposition (ALD) is a versatile technique for the synthesis of high-precision metal oxide thin films, including inverse opals. In this study, we report the synthesis of TiO₂, ZnO, and Al₂O₃ inverse opals and their composite photocatalysts using thermal or plasma-enhanced ALD. The synthesized photocatalysts were characterized using a variety of techniques, including scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), Raman spectroscopy, photoluminescence (PL), ellipsometry, and UV-visible spectroscopy. The results showed that the ALD-synthesized metal oxide inverse opals had a highly ordered structure and a tunable pore size. The PL spectroscopy results showed low recombination rates of photogenerated electron-hole pairs, while the ellipsometry and UV-visible spectroscopy results showed tunable optical properties and band gap energies. The photocatalytic activity of the samples was evaluated by the degradation of methylene blue under visible light irradiation. The results showed that the ALD-synthesized metal oxide inverse opals exhibited high photocatalytic activity, even under visible light irradiation. The composite photocatalysts showed even higher activity than the individual metal oxide inverse opals. The enhanced photocatalytic activity of the composites can be attributed to the synergistic effect between the different metal oxides. For example, Al₂O₃ can act as a charge carrier scavenger, which can reduce the recombination of photogenerated electron-hole pairs. The ALD-synthesized metal oxide inverse opals and their composites are promising photocatalysts for a variety of applications, such as wastewater treatment, air purification, and energy production.

Keywords: ALD, metal oxide inverse opals, photocatalysis, composites

Procedia PDF Downloads 48
7850 A Non-Iterative Shape Reconstruction of an Interface from Boundary Measurement

Authors: Mourad Hrizi

Abstract:

In this paper, we study the inverse problem of reconstructing an interior interface D appearing in the elliptic partial differential equation Δu + χ(D)u = 0 from knowledge of boundary measurements. This problem arises from a semiconductor transistor model. We propose a new shape reconstruction procedure based on the Kohn-Vogelius formulation and the topological sensitivity method. The inverse problem is formulated as a topology optimization problem, and a topological sensitivity analysis is derived for the cost function. The unknown subdomain D is reconstructed using a level-set curve of the topological gradient. Finally, we give several examples to show the viability of the proposed method.
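
The abstract does not state the functional explicitly; one common form of the Kohn-Vogelius cost functional in this setting, given here as an assumption, is

```latex
% One common form of the Kohn-Vogelius cost functional (an assumption: the
% abstract does not give the exact expression). Here u_D and u_N solve the
% state equation with the measured Dirichlet and Neumann boundary data,
% respectively, and J(D) vanishes when D is the true interface.
J(D) = \int_{\Omega} \left| \nabla u_D - \nabla u_N \right|^2 dx,
\qquad \Delta u + \chi(D)\, u = 0 \ \text{in } \Omega .
```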

Keywords: inverse problem, topological optimization, topological gradient, Kohn-Vogelius formulation

Procedia PDF Downloads 217
7849 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, it is preferable to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically; in fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally, and hence the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors having a non-normal pattern. Through extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based on modified maximum likelihood estimates are found to be substantially more powerful than the tests based on least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
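
A toy illustration of the modified maximum likelihood idea, simplified to the symmetric Student-t location problem (the paper treats the full skew t case): the intractable score term is linearized around expected order-statistic quantiles, which yields a closed-form estimate.

```python
import numpy as np
from scipy.stats import t as student_t

def mml_location(y, df):
    # Tiku-style modified maximum likelihood estimate of location for a
    # symmetric Student-t sample: the score term h(z) = z / (1 + z^2/df)
    # is linearized as alpha_i + beta_i * z around the expected quantiles
    # of the ordered variates, giving a closed-form weighted mean.
    n = len(y)
    q = student_t.ppf(np.arange(1, n + 1) / (n + 1), df)   # quantile anchors
    beta = (1 - q**2 / df) / (1 + q**2 / df) ** 2          # h'(q_i)
    return np.sum(beta * np.sort(y)) / np.sum(beta)        # closed form

rng = np.random.default_rng(1)
y = 5.0 + student_t.rvs(df=3, size=200, random_state=rng)
print("sample mean :", y.mean())          # inefficient under fat tails
print("MML location:", mml_location(y, df=3))
```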

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 380
7848 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem

Authors: Kyugneun Lee, Ikjin Lee

Abstract:

Parameter estimation via inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve rank deficient inverse problems is suggested. A multi-physics system whose rank deficiency is caused by response correlation is treated. Obstructive information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed, using feature selection with singular value decomposition (SVD). Next, BN inference is used for sequential conditional estimation according to the group hierarchy. The directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The response-to-noise variance ratio is used to pair the estimable parameters with each response.
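
A minimal sketch of the SVD-based grouping step only (the BN inference stage is omitted); the sensitivity matrix and rank threshold below are assumptions.

```python
import numpy as np

# Sensitivity of 4 responses to 3 parameters; rows 0 and 1 are nearly
# collinear, mimicking the response correlation that causes rank deficiency.
S = np.array([[1.0, 2.0,   0.0],
              [1.0, 2.001, 0.0],
              [0.0, 1.0,   3.0],
              [2.0, 0.0,   1.0]])

U, s, Vt = np.linalg.svd(S)
rank = np.sum(s > 1e-3 * s[0])     # numerical rank via a relative threshold
print("singular values:", s, "-> numerical rank", rank)

# Group responses by their dominant left-singular directions; correlated
# responses load on the same direction and fall into the same subset.
groups = np.argmax(np.abs(U[:, :rank]), axis=1)
print("response subset labels:", groups)
```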

Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis

Procedia PDF Downloads 282
7847 Global Based Histogram for 3D Object Recognition

Authors: Somar Boubou, Tatsuo Narikiyo, Michihiro Kawanishi

Abstract:

In this work, we address the problem of 3D object recognition with depth sensors such as the Kinect or Structure sensor. In contrast to traditional approaches based on local descriptors, which depend on local information around object key points, we propose a descriptor based on global features. The proposed descriptor, which we name the Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of the differential angles of the surface normal vectors. Object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that the proposed descriptor outperforms two other descriptors, based on spin images and histograms of normal vectors, with a linear-SVM classifier.
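
A hedged reconstruction of the descriptor's main steps; the published DHONV may differ in details such as the angle definition and binning.

```python
import numpy as np

def dhonv_like(depth, bins=8):
    # Sketch: surface normals from a depth image, differential angles
    # between neighbouring normals along x and y, then a normalized 2D
    # histogram used as the feature vector.
    gy, gx = np.gradient(depth.astype(float))
    n = np.dstack([-gx, -gy, np.ones_like(depth, float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # angle between horizontally / vertically adjacent unit normals
    ax = np.arccos(np.clip((n[:, 1:] * n[:, :-1]).sum(-1), -1, 1))[:-1]
    ay = np.arccos(np.clip((n[1:] * n[:-1]).sum(-1), -1, 1))[:, :-1]
    h, _, _ = np.histogram2d(ax.ravel(), ay.ravel(),
                             bins=bins, range=[[0, np.pi], [0, np.pi]])
    return (h / h.sum()).ravel()   # normalized feature vector

depth = np.fromfunction(lambda i, j: np.hypot(i - 32, j - 32), (64, 64))
print(dhonv_like(depth).shape)     # (64,) feature vector for a linear SVM
```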

Keywords: vision in control, robotics, histogram, differential histogram of normal vectors

Procedia PDF Downloads 251
7846 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement

Authors: Maatoug Hassine, Mourad Hrizi

Abstract:

In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, size, and shape of the source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization problem, and a topological sensitivity analysis is derived for the source function. We then present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support using a level curve of the topological gradient. Finally, we give several examples to show the viability of the presented method.

Keywords: geometric inverse source problem, heat equation, topological optimization, topological sensitivity, Kohn-Vogelius formulation

Procedia PDF Downloads 270
7845 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz

Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed

Abstract:

This paper presents an experimental investigation of on-body channel fading at 2.45 GHz, considering two states of user body movement: stationary and mobile. A pair of body-worn antennas was utilized in the measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss to five well-known distributions: lognormal, normal, Nakagami, Weibull, and Rayleigh. The results showed that the average path loss with a moving arm was up to 3.5 dB higher than the path loss in the sitting position for the upper-arm-to-left-chest link. The analysis also concluded that the Nakagami distribution provided the best fit for most on-body static-link path losses in the standing and sitting positions, while the arm movement is best described by a lognormal distribution.
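
A sketch of the distribution-comparison step using maximum likelihood fits and the Kolmogorov-Smirnov statistic; the path-loss samples are synthetic stand-ins for the measured data.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured on-body path-loss samples (dB)
rng = np.random.default_rng(7)
path_loss = rng.lognormal(mean=4.0, sigma=0.08, size=500)

candidates = {
    "lognormal": stats.lognorm, "normal": stats.norm,
    "Nakagami": stats.nakagami, "Weibull": stats.weibull_min,
    "Rayleigh": stats.rayleigh,
}
for name, dist in candidates.items():
    params = dist.fit(path_loss)                    # maximum likelihood fit
    ks = stats.kstest(path_loss, dist.cdf, args=params)
    print(f"{name:10s} KS statistic = {ks.statistic:.4f}")
# The distribution with the smallest KS statistic provides the best fit.
```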

Keywords: on-body channel communications, fading characteristics, statistical model, body movement

Procedia PDF Downloads 322
7844 Additive White Gaussian Noise Filtering from ECG by Wiener Filter and Median Filter: A Comparative Study

Authors: Hossein Javidnia, Salehe Taheri

Abstract:

The electrocardiogram (ECG) is the recording of the heart’s electrical potential versus time. ECG signals are often contaminated with noise such as baseline wander and muscle noise. As these signals have been widely used in clinical studies to detect heart diseases, it is essential to filter out these noises. In this paper, we compare the performance of Wiener filtering and median filtering in removing additive white Gaussian (AWG) noise, with signal-to-noise ratios (SNR) ranging from 3 to 5 dB, applied to samples of long-term ECG recordings. The root mean square error (RMSE) and the coefficient of determination (R²) between the filtered ECG and the original ECG were used as the filter performance indicators. Experimental results show that the Wiener filter has better noise-filtering performance than the median filter.
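
A minimal sketch of the comparison on a toy ECG-like signal; SciPy's `wiener` and `medfilt` stand in for the paper's implementations, and the window sizes are assumptions.

```python
import numpy as np
from scipy.signal import wiener, medfilt

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 2000)
ecg = 1.2 * np.exp(-((t % 1 - 0.5) ** 2) / 0.001)   # toy QRS-like pulses

snr_db = 4.0                                         # within the 3-5 dB range
noise_power = ecg.var() / 10 ** (snr_db / 10)
noisy = ecg + rng.normal(0, np.sqrt(noise_power), ecg.size)

for name, filt in [("Wiener", wiener(noisy, mysize=15)),
                   ("Median", medfilt(noisy, kernel_size=15))]:
    rmse = np.sqrt(np.mean((filt - ecg) ** 2))
    r2 = 1 - np.sum((filt - ecg) ** 2) / np.sum((ecg - ecg.mean()) ** 2)
    print(f"{name}: RMSE={rmse:.4f}, R2={r2:.3f}")
```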

Keywords: ECG noise filtering, Wiener filtering, median filtering, Gaussian noise, filtering performance

Procedia PDF Downloads 496
7843 Pattern Synthesis of Nonuniform Linear Arrays Including Mutual Coupling Effects Based on Gaussian Process Regression and Genetic Algorithm

Authors: Ming Su, Ziqiang Mu

Abstract:

This paper proposes a synthesis method for nonuniform linear antenna arrays that combines Gaussian process regression (GPR) and a genetic algorithm (GA). In this method, the GPR model is used to calculate the array radiation pattern in the presence of mutual coupling effects, and the GA is then used to optimize the excitations and locations of the elements so as to generate the desired radiation pattern. Taking a 9-element nonuniform linear array as an example, with the desired radiation pattern corresponding to a Chebyshev distribution as the optimization objective, we optimize the excitations and locations of the elements. Finally, the optimization results are verified with the electromagnetic simulation software CST, which shows that the method is effective.
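
A stripped-down sketch of the synthesis loop for an ideal 9-element nonuniform array: mutual coupling and the GPR surrogate are omitted, `differential_evolution` stands in for the GA, and only the excitations are optimized, with a simple sidelobe-suppression objective in place of the Chebyshev target.

```python
import numpy as np
from scipy.optimize import differential_evolution

wavelength = 1.0
positions = np.sort(np.random.default_rng(3).uniform(0, 4, 9))  # 9 elements
theta = np.linspace(-np.pi / 2, np.pi / 2, 361)

def array_factor(excitations):
    # Ideal isotropic elements; a GPR surrogate would replace this to
    # account for mutual coupling, as in the paper.
    k = 2 * np.pi / wavelength
    phase = np.outer(np.sin(theta), positions) * k
    return np.abs(np.exp(1j * phase) @ excitations)

def cost(excitations):
    af = array_factor(excitations)
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    side = np.abs(theta) > np.radians(10)        # outside the main beam
    return np.max(af_db[side] + 25)              # push sidelobes below -25 dB

res = differential_evolution(cost, bounds=[(0.05, 1)] * 9, seed=3, maxiter=200)
print("optimized excitations:", np.round(res.x, 3))
```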

Keywords: nonuniform linear antenna arrays, GPR, GA, mutual coupling effects, active element pattern

Procedia PDF Downloads 81
7842 Pattern of Stress Distribution in Different Ligature-Wire-Brackets Systems: A FE and Experimental Analysis

Authors: Afef Dridi, Salah Mezlini

Abstract:

Since experimental devices cannot measure stress and deformation in complex structures, the finite element method (FEM) has been widely used in several fields of research. One of these fields is orthodontics. The advantage of such a method is that it is accurate and non-invasive and provides sufficient data about the physiological reactions that can happen in soft tissues. Most research in this field has focused on the stresses and deformations induced by orthodontic apparatus in soft tissues (alveolar tissues). Only a few studies have examined the distribution of stress and strain in the orthodontic brackets themselves, and although these studies tried to be as close as possible to real conditions, their models did not reproduce clinical cases. For this reason, the model generated by our research is the closest to reality. In this study, a numerical model was developed to explore the stress and strain distribution under realistic conditions, and a comparison between different material properties was also carried out.

Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method

Procedia PDF Downloads 235
7841 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models

Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah

Abstract:

In the present work, we develop a technique for estimating the volatility of financial time series using stochastic differential equations. Taking the daily closing prices from developed and emerging stock markets as the basis, we argue that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via maximum likelihood estimation. Using the technique, we observe the long-memory behavior of the data sets and the one-step-ahead predicted log-volatility with ±2 standard errors, even though the observed noise comes from a normal mixture distribution, because the financial data studied are not fully Gaussian. The Ornstein-Uhlenbeck process followed in this work also simulates the financial time series well, and the estimation algorithm scales to large data sets owing to its good convergence properties.
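
A sketch of exact maximum likelihood estimation for an Ornstein-Uhlenbeck process through its AR(1) discretization, on simulated data with illustrative parameter values.

```python
import numpy as np

def fit_ou(x, dt):
    # Exact MLE for the Ornstein-Uhlenbeck process dX = theta(mu - X)dt
    # + sigma dW via its AR(1) discretization X_{t+dt} = c + a X_t + eps.
    x0, x1 = x[:-1], x[1:]
    a, c = np.polyfit(x0, x1, 1)         # OLS = MLE for a Gaussian AR(1)
    theta = -np.log(a) / dt
    mu = c / (1 - a)
    resid = x1 - (a * x0 + c)
    sigma = np.sqrt(resid.var() * 2 * theta / (1 - a**2))
    return theta, mu, sigma

# Simulate a log-volatility path and recover the parameters
rng = np.random.default_rng(2)
dt, n, theta, mu, sigma = 1 / 252, 5000, 4.0, -2.0, 0.5
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.empty(n); x[0] = mu
for i in range(1, n):
    x[i] = mu + a * (x[i - 1] - mu) + sd * rng.normal()
print(fit_ou(x, dt))   # approximately (4.0, -2.0, 0.5)
```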

Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model

Procedia PDF Downloads 211
7840 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios

Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed

Abstract:

In this paper, we investigate the issue of market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student, and skewed Student distributions. The sample consists of daily data over 2006-2014 for 11 Islamic stock market indices. We conduct the Kupiec test and the Engle and Manganelli test to evaluate the performance of each model. Our main empirical findings show that (i) VaR models based on the Student and skewed Student distributions perform best at the α = 1% significance level, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimates for both long and short trading positions at the α = 5% significance level.
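
A sketch of the Kupiec proportion-of-failures backtest used above; the violation count and sample size are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, T, p):
    # Kupiec proportion-of-failures likelihood ratio test: do the observed
    # VaR violations match the target coverage level p?
    x = violations
    pi_hat = x / T
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)      # p-value; reject the model if small

# Example: 2000 trading days, 1% VaR, 31 violations observed
lr, pval = kupiec_pof(violations=31, T=2000, p=0.01)
print(f"LR={lr:.2f}, p-value={pval:.3f}")
```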

Keywords: value-at-risk, risk management, Islamic finance, GARCH models

Procedia PDF Downloads 556
7839 Application of Co-Flow Jet Concept to Aircraft Lift Increase

Authors: Sai Likitha Siddanathi

Abstract:

The present project aims at increasing the lift produced by a typical airfoil. This is achieved by modifying the airfoil into a co-flow jet (CFJ) structure, in which a new internal flow is created inside the airfoil through well-designed apertures on its surface. The limit at which the excess lift produced overcomes the weight of the pumping system inserted in the upper portion of the airfoil, and at which the drag force is converted into thrust, is discussed in terms of airfoil velocity and angle of attack. Normal and co-flow jet models were designed numerically, and both the fabricated normal airfoil and the CFJ model were tested in a low-subsonic wind tunnel. The concept was applied to the subsonic NACA 652-415 airfoil. The lift produced by the CFJ airfoil reaches a maximum value up to a factor of 5 above the normal airfoil near flow separation, i.e., in a relatively weak flow distribution.

Keywords: co-flow jet, lift coefficient, drag coefficient, airfoil performance

Procedia PDF Downloads 327
7838 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System

Authors: John Lorenzo Bautista, Yoon-Joong Kim

Abstract:

This paper presents the development of a Filipino speech recognition tool using the HTK system. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used in both training and testing the system by estimating the parameters of phonetic HMM-based (Hidden Markov Model) acoustic models. Experiments on different mixture weights were incorporated in the study. Phoneme-level word-based recognition with a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing phoneme alignment, and 87.19% for the model with an increased number of Gaussian mixtures. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.

Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition

Procedia PDF Downloads 441
7837 Identification of the Orthotropic Parameters of Cortical Bone under Nanoindentation

Authors: D. Remache, M. Semaan, C. Baron, M. Pithioux, P. Chabrand, J. M. Rossi, J. L. Milan

Abstract:

A good understanding of the mechanical properties of bone leads to a better understanding of its various diseases, such as osteoporosis. Berkovich nanoindentation tests were performed on human cortical bone to extract its orthotropic parameters. The nanoindentation experiments were then simulated by the finite element method, and different configurations of the interaction between the indenter tip and the bone were simulated. The orthotropic parameters of the material were identified by the inverse method for each configuration, and the effect of friction on the identified mechanical properties was then discussed. The inverse method combined with the finite element method was found to be a very efficient way to predict the mechanical behavior of bone.

Keywords: mechanical behavior of bone, nanoindentation, finite element analysis, inverse optimization approaches

Procedia PDF Downloads 355
7836 A Time-Varying and Non-Stationary Convolution Spectral Mixture Kernel for Gaussian Process

Authors: Kai Chen, Shuguang Cui, Feng Yin

Abstract:

A Gaussian process (GP) with a spectral mixture (SM) kernel demonstrates flexible non-parametric Bayesian learning ability in modeling unknown functions. In this work, a novel time-varying and non-stationary convolution spectral mixture (TN-CSM) kernel, whose interpretability is significantly enhanced by the use of process convolution, is introduced. A way of decomposing the SM component into an auto-convolution of base SM components and parameterizing it to be input-dependent is outlined. Performing a convolution between two base SM components smoothly yields a novel non-stationary SM component with a much more general expression and interpretation. The TN-CSM remains fully compatible with the stationary SM kernel in terms of kernel form and spectral basis, aspects ignored or confused by previous non-stationary kernels. On synthetic and real-world datasets, experiments show the time-varying characteristics of the hyper-parameters in TN-CSM and compare the learning performance of TN-CSM with popular and representative non-stationary GPs.
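
For reference, the stationary spectral mixture kernel that the TN-CSM builds on can be sketched as follows; the time-varying, input-dependent convolution structure itself is not reproduced here.

```python
import numpy as np

def sm_kernel(tau, weights, means, scales):
    # Stationary spectral mixture kernel (Wilson & Adams), the base
    # component that TN-CSM generalizes:
    # k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 s_q^2) * cos(2 pi mu_q tau)
    tau = np.asarray(tau)[..., None]
    return np.sum(weights * np.exp(-2 * np.pi**2 * tau**2 * scales**2)
                  * np.cos(2 * np.pi * means * tau), axis=-1)

x = np.linspace(0, 5, 200)
K = sm_kernel(x[:, None] - x[None, :],
              weights=np.array([1.0, 0.5]),
              means=np.array([0.5, 2.0]),      # spectral peak frequencies
              scales=np.array([0.1, 0.3]))     # spectral bandwidths
print(K.shape)   # (200, 200) covariance matrix for GP regression
```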

Keywords: Gaussian process, spectral mixture, non-stationary, convolution

Procedia PDF Downloads 164
7835 Effect of Fault Depth on Near-Fault Peak Ground Velocity

Authors: Yanyan Yu, Haiping Ding, Pengjun Chen, Yiou Sun

Abstract:

Fault depth is an important parameter to be determined in ground motion simulation, and peak ground velocity (PGV) demonstrates good application prospects. Using a numerical simulation method, the variation of the distribution and peak value of near-fault PGV with fault depth was studied in detail, and the reasons for some of the observed phenomena are discussed. The simulation results show that the distributions of the PGV of the fault-parallel (FP) component and the fault-normal (FN) component are distinctly different, and the PGV of the FN component is much larger than that of the FP component. With increasing fault depth, the region of strong FN-component PGV moves forward along the rupture direction, while the strong-PGV zone of the FP component gradually moves away from the fault trace in the direction perpendicular to the strike. However, for both the FN and FP components, the area of strong PGV and its value are quickly reduced as fault depth increases. These results suggest that fault depth has a significant effect on both the FN and FP components of near-fault PGV.

Keywords: fault depth, near-fault, PGV, numerical simulation

Procedia PDF Downloads 317
7834 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk

Authors: I. Boroňová, J. Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková

Abstract:

Osteoporosis is a complex disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis is focused on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distribution of the A163G (rs3102735) osteoprotegerin gene promoter polymorphism and the G1181C (rs2073618) osteoprotegerin first exon polymorphism in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping was performed using Custom TaqMan® SNP Genotyping Assays. Hardy-Weinberg equilibrium was tested for each SNP in both groups of participants using the chi-square (χ²) test. The genotype distribution in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution in the normal controls was: AA (71.1%), AG (26.1%), GG (2.8%) for A163G; GG (22.2%), CG (48.9%), CC (28.9%) for G1181C. For the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in normal controls. Similarly, for the G1181C polymorphism, the C allele occurred more frequently in the patient group (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ²=0.270, p=0.605; χ²=0.250, p=0.616; G1181C: χ²=1.730, p=0.188; χ²=1.820, p=0.177). Our results represent an initial study; further studies on larger samples, including association studies, will be carried out. Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on various parameters associated with osteoporosis. Screening to identify at-risk women likely to develop osteoporosis and initiating early intervention appears to be the most effective strategy to substantially reduce the risk of osteoporosis.
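
The genotype comparison can be reproduced approximately with a chi-square contingency test. The counts below are recovered from the reported percentages (n = 180 per group), so the statistics will not match the paper's exact test values, but the non-significant outcome is the same.

```python
from scipy.stats import chi2_contingency

# Genotype counts recovered from the reported percentages (n=180 per group)
a163g = [[120, 58, 2],    # osteoporosis: AA, AG, GG
         [128, 47, 5]]    # controls:     AA, AG, GG
g1181c = [[35, 80, 65],   # osteoporosis: GG, CG, CC
          [40, 88, 52]]   # controls:     GG, CG, CC

for name, table in [("A163G", a163g), ("G1181C", g1181c)]:
    chi2_stat, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2={chi2_stat:.2f}, dof={dof}, p={p:.3f}")
# Neither test reaches significance, consistent with the reported results.
```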

Keywords: osteoporosis, real-time PCR method, SNP polymorphisms

Procedia PDF Downloads 302
7833 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single, semi-analytic step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as is the case for Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
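
A minimal sketch of the COS inversion step at the heart of the method, recovering a density from its characteristic function; a standard normal is used as a sanity check, and the truncation range and term count are assumptions.

```python
import numpy as np

def cos_density(phi, x, a, b, N=128):
    # COS method (Fang & Oosterlee): recover a density on [a, b] from its
    # characteristic function phi via a Fourier-cosine series expansion.
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                                   # first term gets weight 1/2
    return F @ np.cos(np.outer(u, x - a))

# Sanity check with a standard normal "portfolio loss" distribution
phi_normal = lambda u: np.exp(-0.5 * u**2)
x = np.linspace(-4, 4, 9)
approx = cos_density(phi_normal, x, a=-10, b=10)
exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))   # tiny error: spectral accuracy
```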

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 118
7832 Modeling Core Flooding Experiments for Co₂ Geological Storage Applications

Authors: Avinoam Rabinovich

Abstract:

CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.

Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow

Procedia PDF Downloads 41
7831 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders

Authors: Hao-Hsi Tseng, Hsin-Yun Lee

Abstract:

The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its handling of same-score, same-rank issues. This study applies the four criteria of Arrow's Impossibility Theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While commonly used to mitigate problems resulting from extreme scores, ranking methods hide significant defects that adversely affect selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores plus a normal cumulative distribution function conversion to calculate the evaluation of vendor preference. This allows for free score evaluations, reduces the influence of dictatorial behavior, and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms currently used methods when evaluated against the Impossibility Theorem criteria.
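
The score conversion itself is compact: standardize each evaluator's scores, then map them through the normal cumulative distribution function. A sketch with illustrative raw scores:

```python
import numpy as np
from scipy.stats import norm, zscore

# Raw scores given by one evaluator to five vendors (illustrative)
raw = np.array([78.0, 92.0, 85.0, 60.0, 88.0])

# Standardize, then map through the normal CDF: an extreme (dictatorial)
# score is compressed toward 0 or 1 instead of dominating the ranking.
preference = norm.cdf(zscore(raw))
print(np.round(preference, 3))
```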

Keywords: Arrow’s impossibility theorem, cumulative normal distribution function, most advantageous tender, scoring method

Procedia PDF Downloads 437
7830 Vitamin D Status in Relation to Body Mass Index: Population of Carpathian Region

Authors: Vladyslav Povoroznyuk, Ivan Pankiv

Abstract:

The present research attempts to link higher body weight with lower vitamin D status. Objective: The vitamin D status of the Carpathian region population in Ukraine was studied to examine whether serum levels of 25-hydroxyvitamin D [25(OH)D] are associated with body mass index (BMI). Methods: Data collected from 302 adults (18–84 years) were analyzed. Variables measured included serum 25(OH)D and the weight and height used to determine BMI. Results: The mean 25(OH)D level was 23.2 ± 8.1 ng/mL for the group; 26.3 ± 8.4 ng/mL and 22.8 ± 9.1 ng/mL for males and females, respectively. Based on BMI, 3.6% were underweight, 21.2% had a normal weight, 46.4% were overweight, and 28.8% were obese. In only 28 cases (9.3%) was the serum 25(OH)D content within normal limits; vitamin D deficiency or insufficiency was observed in the remaining cases (90.7%). Severe vitamin D deficiency was revealed in 1.7% of those examined. A significant interrelation between blood levels of 25(OH)D and BMI was found among persons with BMI 25-29.9 kg/m². The mean 25(OH)D level among persons with obesity did not differ significantly from that in persons with normal body weight. Conclusion: The vitamin D status of the population of the Carpathian region remains far from optimal and requires urgent measures in correction and prevention. The results confirmed a weak inverse relationship between vitamin D status and BMI. The relationship between vitamin D levels and BMI requires further investigation.

Keywords: body mass index, Carpathian region, obesity, vitamin D

Procedia PDF Downloads 362
7829 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable, both for implementing the methodology and for establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and consider the asymmetric Laplace distribution for the response variable to model different quantiles of its distribution. An efficient Gibbs sampler is developed for the computation. Our proposed approach is assessed through extensive simulation studies, and a real application of the proposed approach is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan, and select the important predictors when the outcome of interest is out-of-pocket medical cost, which is considered an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
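
For reference, the asymmetric Laplace working likelihood at quantile level τ, whose maximization is equivalent to minimizing the quantile-regression check loss, is

```latex
% Asymmetric Laplace working likelihood for quantile level tau: maximizing
% it is equivalent to minimizing the check loss of quantile regression.
f(y \mid \mu, \sigma, \tau)
  = \frac{\tau(1-\tau)}{\sigma}
    \exp\!\left\{ -\rho_\tau\!\left( \frac{y-\mu}{\sigma} \right) \right\},
\qquad
\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\}).
```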

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 124
7828 Methods for Solving Identification Problems

Authors: Fadi Awawdeh

Abstract:

In this work, we highlight the key concepts in using semigroup theory as a methodology for constructing efficient formulas for solving inverse problems. The proposed method depends on some results concerning integral equations. The experimental results show the potential and limitations of the method and suggest directions for future work.

Keywords: identification problems, semigroup theory, methods for inverse problems, scientific computing

Procedia PDF Downloads 450
7827 Gaussian Mixture Model Based Identification of Arterial Wall Movement for Computation of Distension Waveform

Authors: Ravindra B. Patil, P. Krishnamoorthy, Shriram Sethuraman

Abstract:

This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the radio frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall-tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches for tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.

Keywords: distension waveform, Gaussian Mixture Model, RF ultrasound, arterial wall movement

Procedia PDF Downloads 474
7826 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are subsequently used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction in computational cost.
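
A compact sketch of the EM/GMM segmentation stage using scikit-learn; in the paper, the number of clusters and the initial means come from the Firefly Algorithm, which is not reproduced here (k-means initialization is the fallback).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image, n_clusters, init_means=None):
    # EM-fitted Gaussian mixture on gray levels; each pixel is assigned to
    # the component of maximum posterior responsibility.
    x = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_clusters,
                          means_init=init_means, random_state=0)
    labels = gmm.fit(x).predict(x)               # argmax responsibility
    return labels.reshape(image.shape)

# Toy grayscale image with three intensity populations
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(m, 8, (40, 120)) for m in (50, 120, 200)])
seg = gmm_segment(img, 3, init_means=np.array([[50.], [120.], [200.]]))
print(np.unique(seg))
```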

Keywords: clustering images, firefly algorithm, Gaussian mixture model, meta heuristic algorithm, image segmentation

Procedia PDF Downloads 189