Search results for: generalized Chebyshev polynomial
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1002

792 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of a cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in both the univariate and bivariate setups. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on the proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 226
791 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

A branch of modern mathematics, graph theory provides instruments for optimization and for solving practical problems in various fields such as economic networks, engineering, network optimization, the geometry of social action and, more generally, complex systems, including contemporary urban problems (path or transport efficiency, biourbanism, etc.). This paper studies the interconnection of urban networks, which leads to the problem of simulating one digraph by another; the simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach in (bi)digraphs, proposed here for the first time, significantly improves the evaluation of k-connectivity. This approach is based on generalized rough sets, whose basic facts are presented in this paper.
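
As a concrete illustration of the relation-based approach, the sketch below (not taken from the paper) computes lower and upper rough approximations of a vertex set in a toy digraph, with the successor neighborhood playing the role of the equivalence class of classical rough set theory; the network and target set are hypothetical.

```python
# Sketch of relation-based (generalized) rough set approximations on a digraph.
# The successor neighborhood N(v) replaces the equivalence class of classical
# rough set theory; the digraph and the target set X below are hypothetical.

def successors(edges, v):
    """Successor neighborhood N(v) of vertex v in a digraph given as an edge list."""
    return {w for (u, w) in edges if u == v}

def lower_approximation(vertices, edges, X):
    """Vertices whose (nonempty) successor neighborhood lies entirely inside X."""
    return {v for v in vertices if successors(edges, v) and successors(edges, v) <= X}

def upper_approximation(vertices, edges, X):
    """Vertices whose successor neighborhood meets X."""
    return {v for v in vertices if successors(edges, v) & X}

# Toy urban network: vertices are districts, edges are directed transport links.
V = {1, 2, 3, 4, 5}
E = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 3)]
X = {3, 4}  # districts of interest

print("lower:", lower_approximation(V, E, X))  # certainly connected into X
print("upper:", upper_approximation(V, E, X))  # possibly connected into X
# The boundary region (upper minus lower) quantifies how "rough" X is in the network.
```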

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 213
790 Magnetohydrodynamic Flow over an Exponentially Stretching Sheet

Authors: Raj Nandkeolyar, Precious Sibanda

Abstract:

The flow of a viscous, incompressible, and electrically conducting fluid under the influence of an aligned magnetic field acting along the direction of fluid flow over an exponentially stretching sheet is investigated numerically. The nonlinear partial differential equations governing the flow model are transformed into a set of nonlinear ordinary differential equations using a suitable similarity transformation, and the solution is obtained using a local linearization method followed by the Chebyshev spectral collocation method. The effects of various parameters affecting the flow and heat transfer, as well as the induced magnetic field, are discussed using suitable graphs and tables.
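
The paper's MHD system is solved with local linearization plus Chebyshev spectral collocation; as a minimal stand-in, the sketch below applies the standard Chebyshev differentiation matrix (Trefethen's construction) to a linear two-point boundary value problem, not to the actual flow equations.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x (Trefethen)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return D, x

# Model problem: u'' = exp(4x) on (-1, 1) with u(-1) = u(1) = 0.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                   # second derivative on interior points
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, np.exp(4 * x[1:N]))

exact = (np.exp(4 * x) - np.sinh(4.0) * x - np.cosh(4.0)) / 16.0
print("max error:", np.abs(u - exact).max())   # spectral accuracy, ~1e-9 at N = 16
```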

Keywords: aligned magnetic field, exponentially stretching sheet, induced magnetic field, magnetohydrodynamic flow

Procedia PDF Downloads 431
789 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.
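
For orientation, the sketch below Monte Carlo-samples the Doppler shift under the classical isotropic Clarke model and compares it with the closed-form Jakes density; the paper's generalized, non-isotropic model with a line-of-sight component would modify both. The maximum Doppler frequency is an arbitrary illustrative value.

```python
import numpy as np
import matplotlib.pyplot as plt

# Monte Carlo sketch of the Doppler-shift PDF under the classical isotropic
# Clarke model: f = f_max * cos(theta), theta uniform on [0, 2*pi). The
# closed-form density is the Jakes "bathtub" p(f) = 1/(pi*sqrt(f_max^2 - f^2)).
# A non-isotropic model would replace the uniform angle of arrival (e.g., by a
# von Mises density); f_max below is illustrative.

rng = np.random.default_rng(0)
fmax = 100.0                                    # maximum Doppler shift, Hz
theta = rng.uniform(0.0, 2.0 * np.pi, 200_000)
f = fmax * np.cos(theta)                        # simulated Doppler shifts

grid = np.linspace(-0.999 * fmax, 0.999 * fmax, 400)
jakes = 1.0 / (np.pi * np.sqrt(fmax**2 - grid**2))

plt.hist(f, bins=100, density=True, alpha=0.5, label="Monte Carlo")
plt.plot(grid, jakes, "r", label="closed-form Jakes PDF")
plt.xlabel("Doppler shift (Hz)"); plt.ylabel("density"); plt.legend()
plt.show()
```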

Keywords: Doppler shift, filtered Poisson process, generalized Clarke’s model, non-isotropic scattering, partially developed scattering, Rician distribution

Procedia PDF Downloads 348
788 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, comparing it to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull, the three-parameter lognormal, the three-parameter gamma, the three-parameter Weibull, and the three-parameter (also known as shifted) log-logistic distributions. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out, and the performance of Gibbs sampling for the data set is assessed.
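
A minimal sketch of the model-comparison step, assuming synthetic data and SciPy's standard two-parameter log-logistic (fisk); the authors' generalized log-logistic adds an extra shape parameter and is not available in SciPy.

```python
import numpy as np
from scipy import stats

# MLE fit and AIC comparison for candidate lifetime models (a minimal sketch).
# scipy.stats.fisk is the two-parameter log-logistic; the paper's generalized
# log-logistic has one more shape parameter and is not implemented here.
rng = np.random.default_rng(1)
t = stats.fisk.rvs(c=2.0, scale=10.0, size=300, random_state=rng)  # synthetic times

candidates = {
    "log-logistic": stats.fisk,
    "Weibull": stats.weibull_min,
    "Burr XII": stats.burr12,
}
for name, dist in candidates.items():
    params = dist.fit(t, floc=0)              # location fixed at zero
    ll = dist.logpdf(t, *params).sum()        # maximized log-likelihood
    k = len(params) - 1                       # free parameters (loc was fixed)
    print(f"{name:>12}: log-lik = {ll:9.2f}, AIC = {2*k - 2*ll:9.2f}")
```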

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 171
787 A Guide for Using Viscoelasticity in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the behavior of simple material models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of irregular geometry and domain can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely the generalized Maxwell model and the Prony series, which are the two means used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Afterwards, a guide is given to ease the use of the viscoelasticity tool in ANSYS.
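
The sketch below shows the curve-fitting step that typically precedes the ANSYS input: a two-term Prony series (generalized Maxwell) relaxation modulus fitted to hypothetical relaxation data with SciPy; all moduli and relaxation times are made-up placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Prony-series relaxation modulus of the generalized Maxwell model:
#   G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
# Minimal two-term fit to hypothetical relaxation-test data; the fitted
# (G_i, tau_i) pairs are the kind of input ANSYS expects for Prony curve fitting.

def prony2(t, G_inf, G1, tau1, G2, tau2):
    return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

t = np.logspace(-2, 3, 40)                       # time points, s
G_true = prony2(t, 2.0, 5.0, 0.5, 3.0, 50.0)     # synthetic "measured" modulus, MPa
G_meas = G_true * (1 + 0.01 * np.random.default_rng(2).standard_normal(t.size))

p0 = [1.0, 1.0, 0.1, 1.0, 10.0]                  # initial guess
popt, _ = curve_fit(prony2, t, G_meas, p0=p0, bounds=(0, np.inf))
print("G_inf, G1, tau1, G2, tau2 =", np.round(popt, 3))
```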

Keywords: ANSYS, generalized Maxwell model, finite element method, Prony series, viscoelasticity, viscoelastic material curve fitting

Procedia PDF Downloads 536
786 Generalized Additive Model Approach for the Chilean Hake Population in a Bio-Economic Context

Authors: Selin Guney, Andres Riquelme

Abstract:

The traditional bio-economic method for fisheries modeling uses estimates of the growth parameters and the system carrying capacity from a biological model for the population dynamics (usually a logistic population growth model), which is then analyzed as a traditional production function. The stock dynamics are transformed into a revenue function and then compared with the extraction costs to estimate the maximum economic yield. In this paper, the logistic population growth model is combined with a forecast of the abundance and location of the stock obtained using a generalized additive model approach. The paper focuses on the Chilean hake population. This method allows for the incorporation of climatic variables and of interactions with other marine species, which in turn increases the reliability of the estimates and generates better extraction paths for different conservation objectives, such as the maximum biological yield or the maximum economic yield.
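
As a reference point for the traditional bio-economic layer, the sketch below computes sustainable yield and economic rent under a logistic (Schaefer) surplus-production model and locates the MSY and MEY effort levels; all parameter values are illustrative, not estimates for Chilean hake, and the GAM forecasting layer is omitted.

```python
import numpy as np

# Logistic (Schaefer) surplus production: sustainable yield at effort E is
#   Y(E) = q*K*E*(1 - q*E/r),  economic rent = p*Y(E) - c*E.
# All parameter values are purely illustrative placeholders.
r, K, q = 0.4, 1.0e6, 5.0e-7        # intrinsic growth, carrying capacity, catchability
p, c = 1000.0, 80.0                 # price per tonne, cost per unit of effort

E = np.linspace(0.0, r / q, 2000)   # effort grid
Y = q * K * E * (1.0 - q * E / r)   # sustainable yield at each effort level
rent = p * Y - c * E                # economic rent

E_msy = E[np.argmax(Y)]             # maximum sustainable (biological) yield
E_mey = E[np.argmax(rent)]          # maximum economic yield
print(f"E_MSY = {E_msy:.0f} (analytic r/2q = {r/(2*q):.0f})")
print(f"E_MEY = {E_mey:.0f}  -> the MEY effort is lower, conserving more stock")
```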

Keywords: bio-economic, fisheries, GAM, production

Procedia PDF Downloads 227
785 Robust Pattern Recognition via Correntropy Generalized Orthogonal Matching Pursuit

Authors: Yulong Wang, Yuan Yan Tang, Cuiming Zou, Lina Yang

Abstract:

This paper presents a novel sparse representation method for robust pattern classification. Generalized orthogonal matching pursuit (GOMP) is a recently proposed, efficient sparse representation technique. However, GOMP adopts the mean square error (MSE) criterion and assigns the same weight to all measurements, including both severely and slightly corrupted ones. To address this limitation, we propose an information-theoretic GOMP (ITGOMP) method that exploits the correntropy-induced metric. The results show that ITGOMP can adaptively assign small weights to severely contaminated measurements and large weights to clean ones. An ITGOMP-based classifier is further developed for robust pattern classification. Experiments on public real-world datasets demonstrate the efficacy of the proposed approach.
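
A hedged sketch of the core idea, not the authors' exact ITGOMP: each iteration re-weights measurements with a Gaussian (correntropy) kernel of the current residual and then performs a GOMP-style multi-atom selection with weighted least squares; the kernel width and selection sizes are illustrative.

```python
import numpy as np

def itgomp_sketch(A, y, k_per_iter=2, n_iter=5, sigma=0.5):
    """Correntropy-weighted generalized OMP (illustrative sketch only).

    Measurements are re-weighted by a Gaussian kernel of the current residual,
    so grossly corrupted entries get near-zero weight; then k_per_iter atoms are
    selected by weighted correlation (the GOMP step) and a weighted LS is solved.
    """
    m, n = A.shape
    support, x = [], np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        w = np.exp(-r**2 / (2 * sigma**2))         # correntropy-induced weights
        corr = np.abs(A.T @ (w * r))               # weighted correlations
        corr[support] = 0.0                        # do not re-select atoms
        support += list(np.argsort(corr)[-k_per_iter:])
        Aw = A[:, support] * np.sqrt(w)[:, None]   # weighted least squares on support
        coef, *_ = np.linalg.lstsq(Aw, np.sqrt(w) * y, rcond=None)
        x = np.zeros(n); x[support] = coef
        r = y - A @ x
    return x, sorted(int(i) for i in support)

# Toy demo: a sparse signal with two grossly corrupted measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((60, 120)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(120); x_true[[7, 42, 99]] = [1.5, -2.0, 1.0]
y = A @ x_true
y[[5, 17]] += 8.0                                  # severe outliers
x_hat, S = itgomp_sketch(A, y)
print("recovered support (should contain 7, 42, 99):", S)
```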

Keywords: correntropy induced metric, matching pursuit, pattern classification, sparse representation

Procedia PDF Downloads 332
784 Hydraulic Conductivity Prediction of Cement Stabilized Pavement Base Incorporating Recycled Plastics and Recycled Aggregates

Authors: Md. Shams Razi Shopnil, Tanvir Imtiaz, Sabrina Mahjabin, Md. Sahadat Hossain

Abstract:

Saturated hydraulic conductivity is one of the most significant attributes of a pavement base course. Determining hydraulic conductivity is a routine procedure for regular aggregate base courses. However, in many cases, a cement-stabilized base course with compromised drainage ability is used. The traditional hydraulic conductivity testing procedure is a readily available option, but it carries two consequential drawbacks: the time required for the specimen to saturate and the need to extrude the sample after completing the laboratory test. To overcome these complications, this study formulates an empirical approach to predicting hydraulic conductivity from Unconfined Compressive Strength (UCS) test results. To do so, the study comprises two separate experiments (a constant-head permeability test and a UCS test) conducted concurrently on specimens with the same physical characteristics. Data obtained from the two experiments were then used to devise a correlation between hydraulic conductivity and unconfined compressive strength. This correlation, in the form of a polynomial equation, helps predict the hydraulic conductivity of a cement-treated pavement base course, bypassing the cumbersome traditional permeability test and the less commonly used horizontal permeability test. The correlation was further corroborated with a different set of data, and the derived polynomial equation was found to be a viable tool for predicting hydraulic conductivity.
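
A minimal sketch of the correlation step, assuming hypothetical (UCS, k) pairs rather than the study's measurements: fit a polynomial to log-conductivity versus UCS and use it for prediction.

```python
import numpy as np

# Fit a polynomial relating hydraulic conductivity k (from constant-head tests)
# to unconfined compressive strength (UCS), then predict k for new specimens.
# The data below are hypothetical placeholders, not the study's measurements.
ucs = np.array([0.8, 1.2, 1.9, 2.5, 3.1, 3.8, 4.6])           # UCS, MPa
k = np.array([8e-5, 4e-5, 1.5e-5, 7e-6, 3e-6, 1.2e-6, 5e-7])  # k, cm/s

# Fit log10(k) vs UCS with a 2nd-order polynomial (the log scale keeps the
# residuals comparable across several orders of magnitude of k).
coeffs = np.polyfit(ucs, np.log10(k), deg=2)
predict_k = lambda u: 10.0 ** np.polyval(coeffs, u)

print("fitted coefficients:", np.round(coeffs, 4))
print(f"predicted k at UCS = 2.0 MPa: {predict_k(2.0):.2e} cm/s")
```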

Keywords: hydraulic conductivity, unconfined compressive strength, recycled plastics, recycled concrete aggregates

Procedia PDF Downloads 65
783 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory

Authors: Damir Latypov

Abstract:

A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√N polylog(N)), whereas the runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N².³⁷³). Instead of the exponential speed-up of the sparse case, here we have only a polynomial one; nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.

Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory

Procedia PDF Downloads 122
782 Order Picking Problem: Exact and Heuristic Algorithms for the Generalized Travelling Salesman Problem with Geographical Overlap between Clusters

Authors: Farzaneh Rajabighamchi, Stan van Hoesel, Christof Defryn

Abstract:

The generalized traveling salesman problem (GTSP) is an extension of the traveling salesman problem (TSP) in which the set of nodes is partitioned into clusters, and the salesman must visit exactly one node per cluster. In this research, we apply the definition of the GTSP to an order-picker routing problem with multiple locations per product. As such, each product represents a cluster, and its corresponding nodes are the locations at which the product can be retrieved. To pick a certain product item from the warehouse, the picker needs to visit one of these locations during the pick tour. As all products are scattered throughout the warehouse, the product clusters are not separated geographically. We propose an exact linear programming model as well as heuristic and meta-heuristic solution algorithms for the order picking problem with multiple product locations.
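
For intuition, the sketch below implements a simple nearest-neighbor construction heuristic for this order-picking GTSP, visiting exactly one candidate location per product cluster; it is a baseline illustration, not one of the paper's proposed algorithms, and the warehouse coordinates are hypothetical.

```python
import math

# Nearest-neighbor construction heuristic for the order-picker GTSP: each
# product is a cluster of candidate pick locations, and the tour must visit
# exactly one location per cluster. Coordinates below are illustrative.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gtsp_nearest_neighbor(depot, clusters):
    """Greedy tour: from the current position, move to the nearest location of
    any still-unvisited cluster. Returns the tour (closed at the depot) and its length."""
    pos, tour, length = depot, [depot], 0.0
    remaining = {name: locs[:] for name, locs in clusters.items()}
    while remaining:
        name, loc = min(
            ((n, l) for n, locs in remaining.items() for l in locs),
            key=lambda nl: dist(pos, nl[1]),
        )
        length += dist(pos, loc)
        tour.append(loc); pos = loc
        del remaining[name]                # one location per product cluster
    length += dist(pos, depot)
    return tour + [depot], length

clusters = {                               # product -> candidate storage locations
    "A": [(2, 8), (9, 1)],
    "B": [(5, 5)],
    "C": [(1, 2), (8, 7)],
}
tour, length = gtsp_nearest_neighbor((0, 0), clusters)
print("tour:", tour, f"length = {length:.2f}")
```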

Keywords: warehouse optimization, order picking problem, generalised travelling salesman problem, heuristic algorithm

Procedia PDF Downloads 83
781 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model with a response variable in the form of count data that follows the Poisson distribution. A pair of count variables showing high correlation can be analyzed by bivariate Poisson regression; the numbers of infant deaths and maternal deaths are count data of this kind. Poisson regression assumes equidispersion, i.e., that the mean and variance values are equal. However, actual count data often have a variance greater or smaller than the mean (overdispersion and underdispersion, respectively). Violations of this assumption can be overcome by applying generalized Poisson regression. In addition, the characteristics of each regency can affect the number of cases that occur; this is accommodated by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, of pregnant women who receive Fe3 tablets, of obstetric complications handled, of clean and healthy household behavior, and of married women whose first marriage was under the age of 18.
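
The sketch below isolates the geographically weighted ingredient: adaptive bisquare kernel weights around a focal regency feed a locally weighted Poisson GLM (via statsmodels). The full bivariate generalized Poisson likelihood of GWBGPR is more involved; the data here are synthetic placeholders.

```python
import numpy as np
import statsmodels.api as sm

def adaptive_bisquare(d, k):
    """Bisquare weights with adaptive bandwidth = distance to the k-th neighbor."""
    h = np.sort(d)[k]
    return np.where(d < h, (1.0 - (d / h) ** 2) ** 2, 0.0)

rng = np.random.default_rng(4)
n = 40
coords = rng.uniform(0, 100, size=(n, 2))            # regency coordinates
X = sm.add_constant(rng.standard_normal((n, 2)))     # intercept + two covariates
y = rng.poisson(np.exp(0.3 + 0.5 * X[:, 1]))         # synthetic counts

focal = 0                                            # fit at the first regency
d = np.linalg.norm(coords - coords[focal], axis=1)
w = adaptive_bisquare(d, k=15)

mask = w > 0                                         # drop zero-weight regencies
res = sm.GLM(y[mask], X[mask],
             family=sm.families.Poisson(), var_weights=w[mask]).fit()
print(res.params)                                    # local coefficients at 'focal'
```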

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 135
780 Application of Generalized Autoregressive Score Model to Stock Returns

Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke

Abstract:

The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence in the time-varying parameters, with a higher location parameter, and the skewness parameter implied a departure of the scale parameter from normality, with the unconditional parameter at 1.5. The results also revealed the persistence of leptokurtic behaviour in stock returns, which implies that the returns are heavy-tailed. Prior to model estimation, the White neural network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measures associated with the returns.
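
A minimal sketch of the GAS(1,1) updating mechanism under a Gaussian density, where the scaled score drives a time-varying location; the paper's application to stock returns uses a richer, heavy-tailed specification, and the coefficients below are illustrative.

```python
import numpy as np

# GAS(1,1) recursion for a time-varying location f_t under a Gaussian density:
# the (information-scaled) score is s_t = y_t - f_t, and the update is
#   f_{t+1} = omega + A * s_t + B * f_t.
# Heavy-tailed densities (as used for stock returns) change only the score.

def gas_filter(y, omega=0.0, A=0.1, B=0.9):
    f = np.empty_like(y)
    f[0] = y.mean()                       # initialize at the sample mean
    for t in range(len(y) - 1):
        score = y[t] - f[t]               # scaled score of N(f_t, sigma^2)
        f[t + 1] = omega + A * score + B * f[t]
    return f

rng = np.random.default_rng(5)
true_mu = np.concatenate([np.zeros(250), 0.8 * np.ones(250)])  # a level shift
y = true_mu + rng.standard_normal(500)
f = gas_filter(y)
print(f"filtered level before/after the break: {f[:250].mean():.2f} / {f[250:].mean():.2f}")
```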

Keywords: generalized autoregressive score model, South Africa, stock returns, time-varying

Procedia PDF Downloads 478
779 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions

Authors: Jose Juan Peña, J. Morales, J. García-Ravelo

Abstract:

In the search for potential models applied in the theoretical treatment of diatomic molecules, some models have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (q-dhf), which displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, this work presents a kind of generalized q-deformed hyperbolic function (g q-dhf). By a suitable transformation through the deformation parameter q, it is shown that these g q-dhf can be expressed in terms of their corresponding standard counterparts and can be reduced to the standard q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models that can serve as interesting alternatives in quantum physics and quantum states are presented. Furthermore, because the quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific q-deformed potentials are obtained as particular cases of the proposal.
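
For reference, the sketch below implements the standard q-deformed hyperbolic functions (in Arai's convention, assumed here) and numerically verifies the deformed identity cosh_q² − sinh_q² = q, which the paper's generalized g q-dhf build upon.

```python
import numpy as np

# Standard q-deformed hyperbolic functions (Arai's convention):
#   sinh_q(x) = (e^x - q e^-x)/2,  cosh_q(x) = (e^x + q e^-x)/2,
# satisfying cosh_q^2(x) - sinh_q^2(x) = q and reducing to the standard
# functions at q = 1. This sketch only checks the standard deformed identity.

def sinh_q(x, q):
    return (np.exp(x) - q * np.exp(-x)) / 2.0

def cosh_q(x, q):
    return (np.exp(x) + q * np.exp(-x)) / 2.0

x = np.linspace(-3, 3, 7)
for q in (0.5, 1.0, 2.0):
    identity = cosh_q(x, q) ** 2 - sinh_q(x, q) ** 2
    assert np.allclose(identity, q), "cosh_q^2 - sinh_q^2 should equal q"
    print(f"q = {q}: cosh_q^2 - sinh_q^2 = {identity[0]:.6f}")

# A q-deformed Poschl-Teller-like term, e.g. V(x) ~ 1/cosh_q(a*x)**2, shifts and
# reshapes the potential well as q varies, which is how such deformations are
# used to model exponential-type quantum interactions.
```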

Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials

Procedia PDF Downloads 158
778 Chemometric Estimation of Phytochemicals Affecting the Antioxidant Potential of Lettuce

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Aleksandra Tepic-Horecki, Zdravko Sumic

Abstract:

In this paper, the influence of the contents of six different phytochemicals (phenols, carotenoids, chlorophyll a, chlorophyll b, chlorophyll a + b, and vitamin C) on the antioxidant potential of the Murai and Levistro lettuce varieties was evaluated. Variable selection was performed with the generalized pair correlation method (GPCM), a novel ranking method used to discriminate between two variables that correlate almost equally with a dependent variable. Fisher’s conditional exact test and McNemar’s test were carried out. The established multiple linear regression (MLR) models were statistically evaluated. Chlorophyll a, chlorophyll a + b, and total carotenoid content stood out as the best phytochemicals for predicting antioxidant potential. This was confirmed through both GPCM and MLR, and the predictive ability of the obtained MLR models can be used to estimate the antioxidant potential of similar lettuce samples. This article is based upon work from the project of the Provincial Secretariat for Science and Technological Development of Vojvodina (No. 114-451-347/2015-02).

Keywords: antioxidant activity, generalized pair correlation method, lettuce, regression analysis

Procedia PDF Downloads 363
777 A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity

Authors: Pan Long, Bi Dongjie, Li Xifeng, Xie Yongle

Abstract:

Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircularity assumption on the signals.

Keywords: complex-valued signal processing, synthetic aperture radar, 2-D radar imaging, compressive sensing, sparse Bayesian learning

Procedia PDF Downloads 100
776 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The propensity score matching (PSM) technique has been widely used for estimating the causal effect of a treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms of the covariates is the technique most used in practice; logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has remained a question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques, such as random forests or neural networks, for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the generalized additive model (GAM) in various linear and non-linear settings and to compare its performance with the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for the treatment under several situations (small/large samples, low/high numbers of treatment units), examining which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between the treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
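
A sketch of the comparison, assuming the pygam package and a deliberately non-linear synthetic assignment mechanism; the study's simulation designs and subsequent matching step are more extensive.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s   # pip install pygam

# Compare propensity-score estimates from a linear logistic model and a GAM
# with smooth terms when the true assignment mechanism is non-linear.
# Synthetic data only; a matching routine and balance checks would follow.

rng = np.random.default_rng(6)
n = 2000
X = rng.standard_normal((n, 2))
logit_true = 0.8 * X[:, 0] ** 2 - 1.0 * np.sin(2 * X[:, 1]) - 0.5  # non-linear logit
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

ps_logit = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
ps_gam = LogisticGAM(s(0) + s(1)).fit(X, treat).predict_proba(X)

true_ps = 1.0 / (1.0 + np.exp(-logit_true))
print("mean abs PS error, linear logistic:", np.abs(ps_logit - true_ps).mean().round(4))
print("mean abs PS error, GAM:            ", np.abs(ps_gam - true_ps).mean().round(4))
```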

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 336
775 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied by algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker (the inventory) should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed; therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal; thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution, so this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 301
774 Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules

Authors: Hirofumi Miyajima, Kazuya Kishida, Noritaka Shigei, Hiromi Miyajima

Abstract:

Most self-tuning fuzzy systems, which are automatically constructed from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and gets stuck in shallow local minima. One solution is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a (generalized) DIRMs model composed of double- and single-input rule modules. Further, in order to reduce the redundant modules in the (generalized) DIRMs model, pruning and generative learning algorithms for the model are suggested. To show their effectiveness, numerical simulations for function approximation, the Box-Jenkins problem, and obstacle avoidance problems are performed.

Keywords: Box-Jenkins problem, double-input rule module, fuzzy inference model, obstacle avoidance, single-input rule module

Procedia PDF Downloads 330
773 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel, compact, simple-in-design, low-cost, and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid, and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). The filling ratio and heat input are considered as input parameters, while the thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade-forward back propagation, feed-forward back propagation, feed-forward distributed time delay, layer recurrent, and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature in terms of the Mean Absolute Relative Deviation (MARD). The prediction of a generalized regression ANN model with a spread constant of 4.8 is found to be in agreement with the experimental data, with a MARD in the range of ±1.81%.
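
Since the generalized regression network performed best, the sketch below implements a bare-bones GRNN, a kernel-weighted average of training targets controlled by the spread constant, on hypothetical filling-ratio/heat-input data; in practice the inputs would be normalized and the spread tuned (the paper reports 4.8 for its own data).

```python
import numpy as np

# Generalized regression neural network (GRNN): a Nadaraya-Watson style
# kernel-weighted average of the training targets; the spread constant sigma
# controls the kernel width. All data values below are placeholders.

def grnn_predict(X_train, y_train, X_test, spread):
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * spread ** 2))        # radial basis (Gaussian) kernel
    return (w @ y_train) / w.sum(axis=1)         # weighted average of targets

X = np.array([[0.3, 10], [0.3, 30], [0.5, 10], [0.5, 30], [0.7, 10], [0.7, 30]],
             dtype=float)                        # (filling ratio, heat input W)
R = np.array([1.9, 1.2, 1.5, 0.8, 1.7, 1.0])     # thermal resistance, K/W

X_new = np.array([[0.5, 20.0]])
print("predicted R:", grnn_predict(X, R, X_new, spread=4.8))
```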

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant

Procedia PDF Downloads 264
772 Relationship between Matrilin-3 (MATN-3) Gene Single Nucleotide Six Polymorphism, Transforming Growth Factor Beta 2 and Radiographic Grading in Primary Osteoarthritis

Authors: Heba Esaily, Rawhia Eledl, Daila Aboelela, Rasha Noreldin

Abstract:

Objective: To assess the serum level of transforming growth factor beta 2 (TGF-β2) and the Matrilin-3 (MATN3) SNP6 polymorphism in osteoarthritic patients. Background: Osteoarthritis (OA) is a musculoskeletal disease characterized by pain and joint stiffness. TGF-β2 is involved in chondrogenesis and osteogenesis, and it has been found that MATN3 gene and protein expression correlates with the extent of tissue damage in OA. These findings suggest that regulation of MATN3 expression is essential for maintenance of the cartilage extracellular matrix microenvironment. Subjects and Methods: 72 cases of primary OA (56 with knee OA and 16 with generalized OA) were compared with 18 healthy controls. Radiographs were scored with the Kellgren-Lawrence scale. Serum TGF-β2 was measured by ELISA, marker levels were correlated with the radiographic grading of the disease, and the MATN3 SNP6 polymorphism was determined by PCR-RFLP. Results: The MATN3 SNP6 polymorphism and the serum level of TGF-β2 were higher in OA than in controls. The NN genotype and N allele frequencies were higher in patients with OA than in controls, and higher in knee osteoarthritis than in generalized OA. There was a significant positive correlation between the TGF-β2 level and the radiographic grading in the knee OA group, but no such correlation in generalized OA. Conclusion: The MATN3 SNP6 polymorphism and TGF-β2 are implicated in the pathogenesis of osteoarthritis. The association of the N/N genotype with primary osteoarthritis emphasizes the need for a prospective study with a larger sample size to confirm the results of the present study.

Keywords: Matrilin-3, transforming growth factor beta 2, primary osteoarthritis, knee osteoarthritis

Procedia PDF Downloads 247
771 An Equivalence between a Harmonic Form and a Closed Co-Closed Differential Form in L^Q and Non-L^Q Spaces

Authors: Lina Wu, Ye Li

Abstract:

An equivalent relation between a harmonic form and a closed co-closed form is established on a complete non-compact manifold. This equivalence has been generalized for a differential k-form ω from L^q spaces to non-L^q spaces, with q = 2, in the context of p-balanced growth with p = 2. In particular, for a simple differential k-form on a complete non-compact manifold, the equivalence has been verified with an extended scope of q, from finite q-energy in L^q spaces to infinite q-energy in non-L^q spaces, under 2-balanced growth. A generalized Hadamard theorem, the Cauchy-Schwarz inequality, and calculus techniques including integration by parts and convergent series have been applied as estimation tools to evaluate growth rates for a differential form. In particular, energy growth rates, as indicated by an appropriate power range in a selected test function, lead to a balance between a harmonic differential form and a closed co-closed differential form. The research ideas and computational methods in this paper could provide an innovative way to study the broadening of L^q spaces to non-L^q spaces with a wide variety of infinite energy growth rates for differential forms.

Keywords: closed forms, co-closed forms, harmonic forms, L^q spaces, p-balanced growth, simple differential k-forms

Procedia PDF Downloads 425
770 The Role of Surgery to Remove the Primary Tumor in Patients with Metastatic Breast Cancer

Authors: A. D. Zikiryahodjaev, L. V. Bolotina, A. S. Sukhotko

Abstract:

Purpose: To evaluate the expediency and timing of surgical treatment as a component of multimodal therapy for patients with stage IV breast cancer. Materials and Methods: This investigation comparatively analyzed the results of complex treatment with or without surgery in patients with metastatic breast cancer. We retrospectively analyzed the treatment of 196 patients with generalized breast cancer in the department of oncology and breast reconstructive surgery of the P.A. Herzen Moscow Cancer Research Institute from 2000 to 2012. The average age was 58 ± 1.1 years. Invasive ductal carcinoma was verified in 128 patients (65.3%), invasive lobular carcinoma in 33 (16.8%), and complex forms in 19 (9.7%). Complex palliative care involving drug and radiation therapies was performed in two patient groups: the first group included 124 patients who underwent surgical intervention as part of complex treatment; the second group included 72 patients who received only medical therapy. Standard systemic therapy was given to all patients. Results: Overall 3- and 5-year survival was 43.8% and 21% in the first group versus 15.1% and 9.3% in the second, respectively (p = 0.00002, log-rank). Median survival was 32 months in patients with surgical treatment versus 21 months in patients receiving only systemic therapy. Factors influencing the prognosis and quality-of-life outcomes of patients with generalized breast cancer were also studied: hormone-dependent tumors, HER2/neu overexpression, and reproductive status (age, presence of menopause). Conclusion: Removing the primary breast tumor in patients with generalized breast cancer improves long-term outcomes: three- and five-year survival increased by 28.7% and 16.3%, respectively, and median survival by 11 months. These patients may benefit from resection of the breast tumor. One explanation for the effect of this resection is that reducing the tumor load influences metastatic growth.

Keywords: breast cancer, combination therapy, factors of prognosis, primary tumor

Procedia PDF Downloads 390
769 Multimodal Convolutional Neural Network for Musical Instrument Recognition

Authors: Yagya Raj Pandeya, Joonwhoan Lee

Abstract:

The dynamic behavior of music and video makes it difficult for a computer system to evaluate musical instrument playing in a video. Television or film video clips with music information are rich sources for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural networks (CNNs) and pass the network-learned features through a recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNNs for music and video feature extraction and then fine-tune each model. The music network uses a 2D convolutional network, and the video network uses 3D convolutions (C3D). Finally, we concatenate the music and video features while preserving the time-varying features. A long short-term memory (LSTM) network is used for long-term dynamic feature characterization, followed by late fusion with the generalized mean. The proposed multimodal audio-video network achieves better performance in recognizing musical instruments.
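
The fusion step can be made concrete with a small sketch: the generalized (power) mean interpolates between plain averaging and max-pooling of the per-modality scores; the scores below are illustrative stand-ins for the audio and video branch outputs.

```python
import numpy as np

# Late fusion with the generalized (power) mean: given per-class scores from
# the audio and video branches, fuse as M_p = (mean(scores^p))^(1/p).
# p = 1 is the arithmetic mean; large p approaches max-pooling.

def generalized_mean(scores, p):
    scores = np.asarray(scores, dtype=float)
    return (np.mean(scores ** p, axis=0)) ** (1.0 / p)

audio_scores = np.array([0.10, 0.70, 0.20])   # e.g., [guitar, piano, violin]
video_scores = np.array([0.30, 0.50, 0.20])

for p in (1.0, 3.0, 10.0):
    fused = generalized_mean([audio_scores, video_scores], p)
    print(f"p = {p:4.1f}: fused = {fused.round(3)}, prediction = class {fused.argmax()}")
```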

Keywords: multimodal, 3D convolution, music-video feature extraction, generalized mean

Procedia PDF Downloads 191
768 A Combined Error Control with Forward Euler Method for Dynamical Systems

Authors: R. Vigneswaran, S. Thilakanathan

Abstract:

Variable time-stepping algorithms for solving dynamical systems perform poorly in long-time computations which pass close to a fixed point. To overcome this difficulty, several authors have considered phase space error controls for the numerical simulation of dynamical systems. In one generalized phase space error control, a step-size selection scheme was proposed which allows this error control to be incorporated into the standard adaptive algorithm as an extra constraint at negligible extra computational cost. For this generalized error control, the forward Euler method applied to linear systems whose coefficient matrix has real, negative eigenvalues has already been analyzed. In this paper, this result is extended to linear systems whose coefficient matrix has complex eigenvalues with negative real parts. Some theoretical results are obtained, and numerical experiments are carried out to support them.
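
For context, the sketch below runs forward Euler with a standard step-doubling error control (a stand-in for the generalized phase-space control analyzed in the paper) on a linear system whose matrix has complex eigenvalues with negative real parts, the case treated here.

```python
import numpy as np

# Adaptive forward Euler with a step-doubling error estimate on x' = A x, where
# A has eigenvalues -0.1 +/- 1j (negative real parts). The per-step error
# control below is the standard kind, not the paper's phase-space control.

A = np.array([[-0.1, 1.0], [-1.0, -0.1]])     # eigenvalues -0.1 +/- 1i
tol, h, t, T = 1e-4, 0.1, 0.0, 50.0
x = np.array([1.0, 0.0])
steps = 0

while T - t > 1e-12:
    h = min(h, T - t)
    big = x + h * (A @ x)                     # one full Euler step
    half = x + 0.5 * h * (A @ x)              # two half steps
    small = half + 0.5 * h * (A @ half)
    err = np.linalg.norm(big - small)         # local error estimate
    if err <= tol:
        t += h; x = small; steps += 1         # accept the more accurate value
    # forward Euler has local order 2, so rescale with a square root
    h *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))

exact = np.exp(-0.1 * T) * np.array([np.cos(T), -np.sin(T)])
print(f"steps = {steps}, final error = {np.linalg.norm(x - exact):.2e}")
```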

Keywords: adaptivity, fixed point, long time simulations, stability, linear system

Procedia PDF Downloads 289
767 Generalized Approach to Linear Data Transformation

Authors: Abhijith Asok

Abstract:

This paper presents a generalized approach to the simple linear data transformation, Y = bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional 'dummy dimension' to the n-dimensional data, which helps plot two-dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of the observations in any of the 2ⁿ spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the 'dummy axis'. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on and perpendicular to the 'dummy axis', respectively. This result gives a geometrical interpretation of a linear data transformation and hence opens opportunities for a more informed choice of the factor b, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein, for many, the distance metric retained the same scaling factor as that of the features.

Keywords: data transformation, dummy dimension, linear transformation, scaling

Procedia PDF Downloads 280
766 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of an instrumented novel leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as of the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. In visits two to four, the participants carried out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with the reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, due to only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predict V₀.
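
The profiling computation itself is compact; the sketch below fits the common linear model F = F₀ + SFV·V to hypothetical load-velocity pairs and derives F₀, V₀, and Pmax (the paper also examines a 2nd-order polynomial fit).

```python
import numpy as np

# Standard linear force-velocity profiling: fit F = F0 + SFV * V to the mean
# force/velocity pairs, then derive V0 = -F0 / SFV and Pmax = F0 * V0 / 4.
# The values below are hypothetical leg-press data, not the study's measurements.

v = np.array([0.25, 0.55, 0.90, 1.30, 1.75])        # mean velocity, m/s
F = np.array([2450., 2100., 1700., 1250., 800.])    # mean force, N

SFV, F0 = np.polyfit(v, F, deg=1)                   # slope (negative) and intercept
V0 = -F0 / SFV                                      # velocity-axis intercept
Pmax = F0 * V0 / 4.0                                # apex of the power-velocity parabola

print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, SFV = {SFV:.0f} N.s/m, Pmax = {Pmax:.0f} W")
```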

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 28
765 Adomian’s Decomposition Method to Generalized Magneto-Thermoelasticity

Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi

Abstract:

Due to many applications and problems in the fields of plasma physics, geophysics, and many other topics, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even for equations with variable coefficients. This paper deals with a mathematical model of the generalized thermoelasticity of a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane has been assumed. Adomian’s decomposition method has been used to solve the model when the bounding plane is taken to be traction-free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, and the induced magnetic and electric fields are represented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
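
A minimal sympy sketch of the method on a toy nonlinear ODE, u' = −u², u(0) = 1, using Adomian polynomials; the paper applies the same machinery to the far larger coupled magneto-thermoelastic system.

```python
import sympy as sp

# Adomian decomposition for u' = -u^2, u(0) = 1 (exact solution 1/(1+t)):
# write u = sum u_n, expand the nonlinearity N(u) = u^2 in Adomian polynomials
# A_n, and iterate u_{n+1} = -integral_0^t A_n.

t, lam = sp.symbols("t lambda")
N_TERMS = 6

u = [sp.Integer(1)]                       # u_0 from the initial condition
for n in range(N_TERMS - 1):
    # Adomian polynomial: A_n = (1/n!) d^n/dlam^n [ (sum lam^k u_k)^2 ] at lam = 0
    series = sum(lam**k * u[k] for k in range(len(u)))
    A_n = sp.diff(series**2, lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(-A_n, (t, 0, t)))

approx = sp.expand(sum(u))
print("ADM partial sum:", approx)          # 1 - t + t^2 - t^3 + ... -> 1/(1+t)
print("error at t = 0.5:", float(approx.subs(t, 0.5) - sp.Rational(2, 3)))
```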

Keywords: Adomian’s decomposition method, magneto-thermoelasticity, finite conductivity, iteration method, thermal load

Procedia PDF Downloads 121
764 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)

Authors: Longqing Li

Abstract:

The paper addresses the inefficiency of the classical models that measure Value-at-Risk (VaR) using a normal or a Student’s t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets’ daily returns in the US, the UK, China, and Hong Kong over the most recent ten years, at the 95% confidence level. To improve the predictive power and search for the best-performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on their differences in estimating the one-day-ahead Value-at-Risk (VaR). Second, to account for the non-normality of the distribution of financial markets, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that exponential GARCH yields the best estimate in out-of-sample one-day-ahead Value-at-Risk (VaR) forecasting; moreover, the performance discrepancy between the GARCH models and the conditional EVT is indistinguishable.
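
A sketch of the winning specification, assuming the arch package and synthetic returns: an EGARCH(1,1) with GED innovations gives a one-day-ahead 95% VaR via the standardized GED quantile.

```python
import numpy as np
from arch import arch_model               # pip install arch
from scipy.stats import gennorm
from scipy.special import gamma

# One-day-ahead 95% VaR from an EGARCH(1,1) with GED innovations. The returns
# below are simulated placeholders; with real data, pass daily percentage
# log-returns instead.

rng = np.random.default_rng(7)
r = 1.2 * rng.standard_t(df=6, size=2000)          # synthetic "daily returns", %

am = arch_model(r, vol="EGARCH", p=1, o=1, q=1, dist="ged")
res = am.fit(disp="off")
fc = res.forecast(horizon=1)
mu = fc.mean.iloc[-1, 0]
sigma = np.sqrt(fc.variance.iloc[-1, 0])

beta = res.params["nu"]                            # fitted GED shape parameter
std = np.sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))
z05 = gennorm.ppf(0.05, beta) / std                # unit-variance GED 5% quantile

var95 = mu + sigma * z05                           # 95% one-day-ahead VaR
print(f"sigma = {sigma:.3f}%, VaR(95%) = {var95:.3f}%")
# A rolling back-test would compare realized returns against this threshold and
# count violations, as in the paper's dynamic back-testing procedure.
```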

Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting

Procedia PDF Downloads 300
763 Generalized Mathematical Description and Simulation of Grid-Tied Thyristor Converters

Authors: V. S. Klimash, Ye Min Thu

Abstract:

Thyristor rectifiers, grid-tied inverters, and AC voltage regulators are widely used in industry and on electrified transport; they have a lot in common, both in the power circuit and in the control system, sharing a common mathematical structure and common switching processes. At the same time, rectifier units, inverter units, and thyristor regulators of alternating voltage are treated separately, both theoretically and practically; they are written about in different books as completely different devices. The aim of this work is to combine them into one class based on the unity of the equations describing their electromagnetic processes, and then to demonstrate this unity on a mathematical model and an experimental setup. Based on research spanning from the mathematics to the final product, a conclusion is drawn about a methodology for rapidly conducting research and experimental design work, preparing for production, and serially producing converters as a unified family. In recent years, there has been a transition from thyristor circuits to transistor circuits in modular designs. Using the example of thyristor rectifiers and AC voltage regulators, we conclude that grid-tied thyristor converters share a unified mathematical structure.

Keywords: direct current, alternating current, rectifier, AC voltage regulator, generalized mathematical model

Procedia PDF Downloads 230