Search results for: moments estimator

426 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier

Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi

Abstract:

The Downside Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimization in this model involves a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. To overcome this, Athayde (2003) proposed a kernel estimation of the mean returns, so as to create a smoother portfolio frontier. This technique produces an effect similar to the case of continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns, and we give a new version of the former algorithm (of Athayde (2001, 2003)). We then analyse the properties of this improved portfolio frontier and apply the new method to real examples.
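
For illustration, a minimal sketch of a kernel-weighted (local) median estimator of returns is given below; the function names, the Gaussian kernel and the toy data are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` under non-negative `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cum, 0.5)]

def kernel_median_returns(times, returns, grid, bandwidth):
    """Local (kernel-weighted) median estimate of the returns on `grid`."""
    est = np.empty(len(grid))
    for j, t in enumerate(grid):
        u = (times - t) / bandwidth
        w = np.exp(-0.5 * u**2)          # Gaussian kernel weights
        est[j] = weighted_median(returns, w)
    return est

# toy usage: 250 daily returns smoothed on a coarse grid
rng = np.random.default_rng(0)
times = np.arange(250.0)
returns = 0.0005 + 0.01 * rng.standard_t(df=3, size=250)   # heavy-tailed returns
smoothed = kernel_median_returns(times, returns, grid=times[::25], bandwidth=10.0)
```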

Keywords: Downside Risk, Kernel Method, Median, Nonparametric Estimation, Semivariance

Procedia PDF Downloads 459
425 Environmentally Adaptive Acoustic Echo Suppression for Barge-in Speech Recognition

Authors: Jong Han Joo, Jung Hoon Lee, Young Sun Kim, Jae Young Kang, Seung Ho Choi

Abstract:

In this study, we propose a novel technique for acoustic echo suppression (AES) during speech recognition under barge-in conditions. Conventional AES methods based on spectral subtraction apply fixed weights to the echo path transfer function (EPTF) estimated at the current signal segment and to the EPTF estimated up to the previous time interval. We propose a new approach that adaptively updates the weight parameters in response to abrupt changes in the acoustic environment due to background noise or double-talk. Furthermore, we devised a voice activity detector and an initial time-delay estimator for barge-in speech recognition in communication networks. The initial time delay is estimated using a log-spectral distance measure as well as cross-correlation coefficients. The experimental results show that the developed techniques can be successfully applied in barge-in speech recognition systems.
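
As a minimal sketch of the cross-correlation part of the initial delay estimation, the snippet below picks the lag that maximizes the cross-correlation between the far-end and microphone signals; the signal model and parameter values are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def initial_delay_estimate(far_end, mic, max_lag):
    """Estimate the initial echo delay (in samples) as the lag that maximizes
    the cross-correlation between the far-end and microphone signals."""
    lags = np.arange(0, max_lag + 1)
    corr = np.array([np.dot(far_end[:len(far_end) - k], mic[k:len(far_end)]) for k in lags])
    return lags[np.argmax(np.abs(corr))]

# toy usage: the microphone picks up the far-end signal delayed by 40 samples plus noise
rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
y = np.concatenate([np.zeros(40), 0.6 * x[:-40]]) + 0.05 * rng.standard_normal(4000)
print(initial_delay_estimate(x, y, max_lag=128))   # expected: 40
```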

Keywords: acoustic echo suppression, barge-in, speech recognition, echo path transfer function, initial delay estimator, voice activity detector

Procedia PDF Downloads 344
424 Home Range and Spatial Interaction Modelling of Black Bears

Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury

Abstract:

Interaction between individuals within the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interactions using the 95% and 50% home ranges, as well as to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or no interaction at all, in order to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that the black bears of one type in our data set tend to cluster around those of another type.
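
A minimal sketch of a kernel-density home-range area computation is given below. It uses the ordinary Gaussian KDE rather than the autocorrelated KDE named in the keywords, and the simulated GPS fixes are assumptions for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_home_range_area(x, y, level=0.95, grid_size=200):
    """Area of the `level` (e.g. 95%) kernel-density home range, in squared coordinate units."""
    kde = gaussian_kde(np.vstack([x, y]))
    gx = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), grid_size)
    gy = np.linspace(y.min() - 3 * y.std(), y.max() + 3 * y.std(), grid_size)
    X, Y = np.meshgrid(gx, gy)
    dens = kde(np.vstack([X.ravel(), Y.ravel()]))
    cell = (gx[1] - gx[0]) * (gy[1] - gy[0])
    order = np.sort(dens)[::-1]
    cum = np.cumsum(order) * cell                      # probability mass covered so far
    idx = min(np.searchsorted(cum, level), len(order) - 1)
    return np.sum(dens >= order[idx]) * cell

# toy usage with simulated GPS fixes (coordinates in km)
rng = np.random.default_rng(2)
east, north = rng.normal(0.0, 2.0, size=(2, 500))
print(kde_home_range_area(east, north, level=0.95))    # roughly the area of the 95% contour
```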

Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests

Procedia PDF Downloads 49
423 Rectangular Tunnels as an Alternative to Conventional Twin Circular Bored Tunnels in Weak Ground Conditions

Authors: Alex Atanaw Alebachew

Abstract:

This paper presents the outcomes of a numerical study, conducted using the PLAXIS software, of the surface settlements and the moments generated in tunnel linings. The investigation focuses on both circular and rectangular twin tunnels. The study suggests that rectangular tunnels, although considered unconventional in modern tunneling practice, may be a viable option for shallow-depth tunneling in weak ground. The recommendation for engineers in the tunneling industry is to consider the use of rectangular tunnel boring machines (TBMs) based on the findings of this analysis. The research emphasizes the importance of evaluating various tunneling methods to optimize performance and address specific challenges in different ground conditions. The findings provide valuable insights into the behavior of rectangular tunnels compared to circular tunnels, emphasizing factors such as burial depth, relative positioning, tunnel size, and critical distance that influence surface settlements and bending moments. This research explores the feasibility of utilizing rectangular TBMs as an alternative to conventional circular TBMs. The findings indicate that rectangular tunnels exhibit slightly lower settlement than circular tunnels at shallow depths, especially in a narrower range directly above the twin tunnels. This difference could be attributed to maintaining a consistent tunnel-lining thickness across all depths. In deeper tunnel scenarios, circular tunnels experience less settlement than rectangular tunnels. Additionally, parallel rectangular tunnels settle more gradually than piggyback configurations, while piggyback tunnels show increased moments in the tunnel built second at the same level. Both settlement and moment coefficients increase with the diameter of the twin tunnels, irrespective of their shape. The critical distance for both circular and rectangular tunnels is around 2.5 times the tunnel diameter, and distances closer than this result in a notable increase in moments. Rectangular tunnels spaced closer than 5 times the diameter lead to higher settlement, and circular tunnels spaced closer than 2.5 to 3 times the diameter experience increased settlement as well.

Keywords: alternative, rectangular, tunnel, twin bored circular, weak ground

Procedia PDF Downloads 25
422 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model

Authors: David A. Padilla, Rodolfo Villamizar

Abstract:

In recent years, closed-loop control strategies have been developed in order to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process, i.e., on the effect of carbohydrate intake. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. This paper therefore presents the entire process of designing a carbohydrate estimator, which provides a disturbance estimate for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and to improve the response of the controller by providing information on the ingested carbohydrates. The dynamics of the diabetic model are governed by the equations of the UVA/Padova model of the T1DMS simulator; the system was developed and simulated in Simulink, taking into account the noise and limitations of the glucose control system actuators.

Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova

Procedia PDF Downloads 231
421 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as loess, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion (AIC) is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
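
The sketch below illustrates the backward-elimination-by-AIC loop in its simplest form, using an ordinary linear model (statsmodels OLS) as a stand-in for the additive or nonparametric fits of the paper; the variable names and toy data are assumptions for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination_aic(df, response):
    """Backward elimination on a linear model: drop one covariate at a time while the AIC improves."""
    covariates = [c for c in df.columns if c != response]
    y = df[response]
    while len(covariates) > 1:
        current = sm.OLS(y, sm.add_constant(df[covariates])).fit().aic
        trial = {v: sm.OLS(y, sm.add_constant(df[[c for c in covariates if c != v]])).fit().aic
                 for v in covariates}
        drop, aic = min(trial.items(), key=lambda kv: kv[1])
        if aic < current:
            covariates.remove(drop)
        else:
            break
    return covariates

# toy usage: x3 is irrelevant and should typically be removed
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.standard_normal((200, 3)), columns=["x1", "x2", "x3"])
df["y"] = np.sin(df["x1"]) + 0.5 * df["x2"] ** 2 + 0.2 * rng.standard_normal(200)
print(backward_elimination_aic(df, "y"))
```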

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria

Procedia PDF Downloads 240
420 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
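
A minimal sketch of the shrinkage (regularization) idea mentioned above is given below: the sample covariance is shrunk toward a scaled identity so that it remains well conditioned when the dimension is large relative to the sample size. This is a generic illustration of shrinkage estimation, not the specific regularization used in KDNP, and the shrinkage intensity is an assumed value.

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Regularized covariance estimate: shrink the sample covariance toward a scaled identity."""
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1.0 - alpha) * S + alpha * target

# toy usage: 30 samples in 50 dimensions -- the sample covariance is singular,
# while the shrunk estimate is well conditioned and invertible
rng = np.random.default_rng(4)
X = rng.standard_normal((30, 50))
S_shrunk = shrinkage_covariance(X, alpha=0.2)
print(np.linalg.cond(S_shrunk))
```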

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 299
419 The Probability Foundation of Fundamental Theoretical Physics

Authors: Quznetsov Gunn

Abstract:

In the study of the logical foundations of probability theory, it was found that the terms and equations of fundamental theoretical physics represent terms and theorems of classical probability theory, more precisely, of that part of this theory which considers the probability of dot events in 3 + 1 space-time. In particular, the masses, moments, energies, spins, etc., turn out to be parameters of the probability distributions of such events. The terms and equations of the electroweak and quark-gluon theories turn out to be theoretical-probabilistic terms and theorems. Here the relation of a neutrino to its lepton becomes clear, the W and Z boson masses turn out to be dynamic, and the cause of the asymmetry between particles and antiparticles is the impossibility of the birth of single antiparticles. In addition, phenomena such as confinement and asymptotic freedom receive their probabilistic explanation. And here we have the logical foundations of the theory of gravity with the phenomena of dark energy and dark matter.

Keywords: classical theory of probability, logical foundation of fundamental theoretical physics, masses, moments, energies, spins

Procedia PDF Downloads 271
418 Refined Procedures for Second Order Asymptotic Theory

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions; an application of these techniques to obtaining improved inferences for the weak-instruments problem is considered. It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, can be quite biased in finite samples. In our application, finite-sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
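
A minimal Monte Carlo sketch of the weak-instruments bias itself (not of the Edgeworth or saddlepoint corrections developed in the paper) is shown below; the data-generating process and parameter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

def iv_estimate(y, x, z):
    """Simple just-identified IV estimator: beta_hat = (z'y)/(z'x)."""
    return np.dot(z, y) / np.dot(z, x)

def simulate_bias(pi, n=100, beta=1.0, reps=5000):
    """Median bias of the IV estimator when the first stage x = pi*z + v is weak (small pi)."""
    est = np.empty(reps)
    for r in range(reps):
        z = rng.standard_normal(n)
        u = rng.standard_normal(n)
        v = 0.8 * u + 0.6 * rng.standard_normal(n)   # endogeneity: corr(u, v) != 0
        x = pi * z + v
        y = beta * x + u
        est[r] = iv_estimate(y, x, z)
    return np.median(est) - beta

print("strong instrument (pi=1.0): ", simulate_bias(1.0))
print("weak instrument   (pi=0.05):", simulate_bias(0.05))
```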

Keywords: edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments

Procedia PDF Downloads 254
417 Multisymplectic Geometry and Noether Symmetries for the Field Theories and the Relativistic Mechanics

Authors: H. Loumi-Fergane, A. Belaidi

Abstract:

The problem of symmetries in field theory has been analyzed using geometric frameworks, such as the multisymplectic models, in particular by using the multivector field formalism. In this paper, we expand the vector fields associated with infinitesimal symmetries which give rise to invariant quantities as Noether currents for classical field theories and relativistic mechanics, using the multisymplectic geometry, where the Poincaré-Cartan form has been greatly simplified using the Second Order Partial Differential Equation (SOPDE) for multi-vector fields verifying the Euler equations. These symmetries have been classified naturally according to the construction of the fiber bundle used. In this work, unlike other works using the analytical method, our geometric model has allowed us, firstly, to distinguish the angular moments of the gauge field obtained during different transformations, whereas these moments are usually gathered in a single expression and obtained during a rotation in the Minkowski space. Secondly, when no conditions are imposed on the Lagrangian of the mechanics with respect to its dependence on time and on qi, the currents obtained naturally from the transformations are, respectively, the energy and the momentum of the system.

Keywords: conservation laws, field theories, multisymplectic geometry, relativistic mechanics

Procedia PDF Downloads 180
416 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the Likelihood Ratio (LR) for quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set will be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR were presented so as not to obtain an incorrect estimate that could be used to deliver a wrong judgment in a court of law. The estimate of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE), in this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
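
A minimal sketch of a KDE-based LR computation is shown below; the score model and the simulated same-writer and different-writer score samples are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_likelihood_ratio(score, same_writer_scores, diff_writer_scores):
    """LR = density of the observed score under the 'same writer' (prosecution) hypothesis
    divided by its density under the 'different writers' (defense) hypothesis."""
    f_same = gaussian_kde(same_writer_scores)
    f_diff = gaussian_kde(diff_writer_scores)
    return (f_same(score) / f_diff(score))[0]

# toy usage with simulated similarity scores
rng = np.random.default_rng(6)
same = rng.normal(0.8, 0.1, 300)    # scores from genuine same-writer comparisons
diff = rng.normal(0.4, 0.15, 300)   # scores from different-writer comparisons
print(kde_likelihood_ratio(0.75, same, diff))   # LR > 1 supports the prosecution hypothesis
```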

Keywords: confidence interval, handwriting, kernel density estimator, KDE, logistic regression LoR, repeatability, reproducibility

Procedia PDF Downloads 94
415 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when the errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
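
For context, a minimal sketch of the standard (unconstrained) PC estimator of an approximate factor model is given below; it is the baseline the CnPC improves on, not the CnPC itself, and the toy data dimensions are assumptions.

```python
import numpy as np

def pc_factors(X, r):
    """Standard principal components estimator of an approximate factor model X = F L' + e.
    X is T x N (observations by cross-sections); returns T x r factors and N x r loadings."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    # eigen-decomposition of the T x T matrix XX'/(T*N); factors are sqrt(T) times the top eigenvectors
    eigval, eigvec = np.linalg.eigh(Xc @ Xc.T / (T * N))
    F = np.sqrt(T) * eigvec[:, ::-1][:, :r]
    L = Xc.T @ F / T
    return F, L

# toy usage: two common factors, N larger than T
rng = np.random.default_rng(7)
T, N, r = 80, 150, 2
F0 = rng.standard_normal((T, r))
L0 = rng.standard_normal((N, r))
X = F0 @ L0.T + 0.5 * rng.standard_normal((T, N))
F_hat, L_hat = pc_factors(X, r)
```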

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 121
414 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, which is also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, by using a suitable sampling design, and information regarding both the study and the auxiliary variable is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared to the study variable, and if the relationship between the auxiliary and the study variable is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how one can use the data collected at the first phase at the stages of estimation of the parameter, stratification, selection of the sample and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of the estimation of variance is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes and testing hypotheses, amongst others. Although the variance is a non-negative quantity, its estimators may not be non-negative. If the estimator of variance is negative, then it cannot be used for the estimation of confidence intervals, testing of hypotheses or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
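
A minimal sketch of the classic double-sampling ratio estimator of a population mean is given below; the data-generating process and sample sizes are illustrative assumptions.

```python
import numpy as np

def two_phase_ratio_estimate(x_phase1, x_phase2, y_phase2):
    """Double-sampling ratio estimator of the population mean of y:
    the cheap auxiliary variable x is observed on the large first-phase sample,
    while y (and x) are observed only on the second-phase subsample."""
    return y_phase2.mean() * x_phase1.mean() / x_phase2.mean()

# toy usage: y is roughly proportional to x
rng = np.random.default_rng(8)
x1 = rng.gamma(shape=4.0, scale=10.0, size=2000)           # first-phase sample (x only)
idx = rng.choice(2000, size=200, replace=False)            # second-phase subsample
x2 = x1[idx]
y2 = 2.5 * x2 + rng.normal(0.0, 5.0, size=200)             # study variable on the subsample
print(two_phase_ratio_estimate(x1, x2, y2))
```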

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 562
413 Review for Mechanical Tests of Corner Joints on Wooden Windows and Effects to the Stiffness

Authors: Milan Podlena, Stepan Hysek, Jiri Prochazka, Martin Bohm, Jan Bomba

Abstract:

Corner joints are the weakest part of windows, where the members are connected together. Since the dimensions of windows have become bigger, the strength requirements for corner joints have increased as well. Therefore, the aim of this study was to test samples of corner joints of wooden windows. The moisture content of the test specimens was stabilized in a climate chamber. After conditioning, the test specimens were loaded under laboratory conditions on a universal testing machine and the failure load was measured. The data were recalculated, using goniometric, bending moment and stiffness equations, into stiffness coefficients, and the bending moments were investigated. The results showed a difference between the mortise-and-tenon joint and the dowel joint. This difference was explained by the varied adhesive bond area, which is related to the dimensions of the dowels (diameter and length). The bending moments and stiffness were (apart from the type of corner joint) also affected by the type of adhesive used, the type of dowels and the wood species.

Keywords: corner joint, wooden window, bending moment, stiffness

Procedia PDF Downloads 188
412 First Cracking Moments of Hybrid Fiber Reinforced Polymer-Steel Reinforced Concrete Beams

Authors: Saruhan Kartal, Ilker Kalkan

Abstract:

The present paper reports the cracking moment estimates of a set of steel-reinforced, Fiber Reinforced Polymer (FRP)-reinforced and hybrid steel-FRP reinforced concrete beams, calculated from different analytical formulations in the codes, together with the experimental cracking load values. A total of three steel-reinforced, four FRP-reinforced, 12 hybrid FRP-steel over-reinforced and five hybrid FRP-steel under-reinforced concrete beam tests were analyzed within the scope of the study. Glass FRP (GFRP) and Basalt FRP (BFRP) bars were used in the beams as FRP bars. In under-reinforced hybrid beams, rupture of the FRP bars preceded crushing of concrete, while concrete crushing preceded FRP rupture in over-reinforced beams. In both types, steel yielding took place long before the FRP rupture and concrete crushing. The cracking moment mainly depends on two quantities, namely the moment of inertia of the section at the initiation of cracking and the flexural tensile strength of concrete, i.e. the modulus of rupture. In the present study, two different definitions of uncracked moment of inertia, i.e. the gross and the uncracked transformed moments of inertia, were adopted. Two analytical equations for the modulus of rupture (ACI 318M and Eurocode 2) were utilized in the calculations as well as the experimental tensile strength of concrete from prismatic specimen tests. The ACI 318M modulus of rupture expression produced cracking moment estimates closer to the experimental cracking moments of FRP-reinforced and hybrid FRP-steel reinforced concrete beams when used in combination with the uncracked transformed moment of inertia, yet the Eurocode 2 modulus of rupture expression gave more accurate cracking moment estimates in steel-reinforced concrete beams. All of the analytical definitions produced analytical values considerably different from the experimental cracking load values of the solely FRP-reinforced concrete beam specimens.
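
As a quick numerical illustration of the quantities discussed, the sketch below computes the cracking moment from the gross moment of inertia using the flexural tensile strength expressions commonly given in ACI 318M and Eurocode 2; the section dimensions and concrete strength are assumed values, not data from the tested beams.

```python
import math

def cracking_moment(f_r, I, y_t):
    """M_cr = f_r * I / y_t, returned in kN·m with f_r in MPa, I in mm^4, y_t in mm."""
    return f_r * I / y_t / 1e6

# assumed rectangular section: b = 200 mm, h = 300 mm, f_ck ~ f'c = 30 MPa
b, h, fc = 200.0, 300.0, 30.0
I_gross = b * h**3 / 12.0              # gross moment of inertia, mm^4
y_t = h / 2.0                          # distance from centroid to tension face, mm

f_r_aci = 0.62 * math.sqrt(fc)         # ACI 318M modulus of rupture, MPa
f_ctm_ec2 = 0.30 * fc ** (2.0 / 3.0)   # Eurocode 2 mean tensile strength (f_ck <= 50 MPa), MPa

print("M_cr (ACI 318M):   %.1f kNm" % cracking_moment(f_r_aci, I_gross, y_t))
print("M_cr (Eurocode 2): %.1f kNm" % cracking_moment(f_ctm_ec2, I_gross, y_t))
```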

Keywords: polymer reinforcement, four-point bending, hybrid use of reinforcement, cracking moment

Procedia PDF Downloads 111
411 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: portfolio investors, banks, real estate brokers and construction companies, as well as policy makers and households. A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of the housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as the independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates with comparisons between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, the differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, which indicates a good fit of the CCE estimator model. The estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
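
A minimal sketch of the period-interaction test mentioned above is given below. It uses simple pooled OLS with simulated data rather than the CCE estimator, and all variable names and coefficients are assumptions; it only illustrates how interaction terms with a time dummy separate the two sub-periods.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# coefficients on the `post2000` interactions measure how the short-run dynamics changed
rng = np.random.default_rng(9)
n = 600
df = pd.DataFrame({
    "d_price_lag": rng.standard_normal(n),
    "d_income": rng.standard_normal(n),
    "d_rate": rng.standard_normal(n),
    "ecm_dev": rng.standard_normal(n),          # deviation from the long-run equilibrium
    "post2000": rng.integers(0, 2, n),          # time dummy: 1 for the 2000-2012 period
})
df["d_price"] = (0.4 * df["d_price_lag"] + 0.3 * df["d_income"]
                 - 0.2 * df["ecm_dev"] + 0.2 * df["post2000"] * df["d_income"]
                 + 0.3 * rng.standard_normal(n))

model = smf.ols("d_price ~ (d_price_lag + d_income + d_rate + ecm_dev) * post2000", data=df).fit()
print(model.summary().tables[1])
```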

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 230
410 Synthesis of Balanced 3-RRR Planar Parallel Manipulators

Authors: Arakelian Vigen, Geng Jing, Le Baron Jean-Paul

Abstract:

The paper deals with the design of parallel manipulators with balanced inertia forces and moments. The balancing of the resultant of the inertia forces of 3-RRR planar parallel manipulators is carried out through mass redistribution and centre of mass acceleration minimization. The proposed balancing technique is achieved in two steps: first, an optimal redistribution of the masses of the input links is accomplished, which ensures the similarity of the end-effector trajectory and the manipulator’s common centre of mass trajectory; then, optimal trajectory planning of the end-effector with a 'bang-bang' profile is carried out. In such a way, the minimization of the magnitude of the acceleration of the centre of mass of the manipulator brings about a minimization of the shaking force. To minimize the resultant of the inertia moments (shaking moment), active balancing via an inertia flywheel is applied. However, in this case, the active balancing is quite different from previous applications because it provides only a partial cancellation of the shaking moment due to the incomplete balancing of the shaking force.

Keywords: dynamic balancing, inertia force minimization, inertia moment minimization, 3-RRR planar parallel manipulator

Procedia PDF Downloads 430
409 The Impact of Board Characteristics on Firm Performance: Evidence from Banking Industry in India

Authors: Manmeet Kaur, Madhu Vij

Abstract:

The Board of Directors in a firm performs the primary role of an internal control mechanism. This study seeks to understand the relationship between internal governance and the performance of banks in India. The paper investigates the effect of board structure (proportion of non-executive directors, gender diversity, board size and meetings per year) on firm performance. It evaluates the impact of corporate governance mechanisms on banks' financial performance using panel data for 28 banks listed on the National Stock Exchange of India for the period 2008-2014. Return on Assets, Return on Equity, Tobin's Q and Net Interest Margin were used as the financial performance indicators. To estimate the relationship between governance and bank performance, the study initially uses Pooled Ordinary Least Squares (OLS) and Generalized Least Squares (GLS) estimation. A dynamic panel Generalized Method of Moments (GMM) estimator is then employed to investigate the dynamic nature of the performance-governance relationship. The study empirically confirms that the two-step system GMM approach controls for the problems of unobserved heterogeneity and endogeneity, as compared to the OLS and GLS approaches. The results suggest that banks with small boards, boards with female members, and boards that meet more frequently tend to be more efficient, which subsequently has a positive impact on the performance of banks. The study offers insights to policy makers interested in enhancing the quality of governance of banks in India. The findings also suggest that board structure plays a vital role in the improvement of the corporate governance mechanism for financial institutions. There is a need for efficient boards in banks to improve the overall health of financial institutions and the economic development of the country.
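
For readers unfamiliar with GMM estimation, a minimal two-step GMM sketch for linear moment conditions is given below; it is a generic illustration with simulated data, not the dynamic panel (two-step system) GMM used in the paper.

```python
import numpy as np

def linear_gmm_two_step(y, X, Z):
    """Generic two-step GMM for the linear moment conditions E[Z'(y - X b)] = 0.
    The first step uses W = (Z'Z/n)^-1; the second step re-weights with the inverse
    of the estimated covariance of the sample moments."""
    def gmm(W):
        A = X.T @ Z @ W @ Z.T @ X
        return np.linalg.solve(A, X.T @ Z @ W @ Z.T @ y)
    n = len(y)
    b1 = gmm(np.linalg.inv(Z.T @ Z / n))
    u = y - X @ b1
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / n   # moment covariance estimate
    return gmm(np.linalg.inv(S))

# toy usage: one endogenous regressor, two instruments
rng = np.random.default_rng(10)
n = 500
z = rng.standard_normal((n, 2))
u = rng.standard_normal(n)
x = z @ np.array([1.0, 0.5]) + 0.7 * u + rng.standard_normal(n)
y = 1.5 * x + u
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(linear_gmm_two_step(y, X, Z))    # roughly [0, 1.5]
```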

Keywords: board of directors, corporate governance, GMM estimation, Indian banking

Procedia PDF Downloads 234
408 Effects of Umbilical Cord Clamping on Puppies Neonatal Vitality

Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, Joao C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado

Abstract:

In veterinary medicine, the standard procedure during a cesarean section is to clamp the umbilical cord immediately after birth. In human neonates, when the umbilical cord is kept intact after birth, blood continues to flow from the cord to the newborn, but this procedure may prove difficult in dogs due to the shorter umbilical cord and the number of newborns in the litter. However, with a possible detachment of the placenta while the umbilical cord is kept intact, the residual blood may still flow to the neonate. This study compared the effects on neonatal vitality of clamping versus not clamping the umbilical cord of dogs born through cesarean section, assessed through Apgar and reflex scores. Fifty puppies delivered from 16 bitches were randomly allocated to receive clamping of the umbilical cord immediately (n=25) or not to receive clamping until the onset of breathing (n=25). The neonates were assessed during the first five min of life and once again 10 min after the first assessment. The differences observed between the two moments were significant (p < 0.01) for both the Apgar and reflex scores. The differences observed between the groups (clamped vs. not clamped) were not significant for the Apgar score at the 1st moment (p=0.1) but were significant (p < 0.01) at the 2nd moment, and were significant (p < 0.05) for the reflex score at both the 1st and 2nd moments, revealing higher neonatal vitality in the not-clamped group. The differences observed between the moments (1st vs. 2nd) within each group were significant (p < 0.01), revealing higher neonatal vitality at the 2nd moment. In the no-clamping group, after removing the neonates together with the umbilical cord and the placenta, we observed that the umbilical cords were full of blood at the time of birth and later became whitish and collapsed, demonstrating the blood transfer. The results suggest that keeping the umbilical cord intact for at least three minutes after the onset of breathing is not detrimental and may contribute to increased neonate vitality in puppies delivered by cesarean section.

Keywords: puppy vitality, newborn dog, cesarean section, Apgar score

Procedia PDF Downloads 119
407 Regionalization of IDF Curves with L-Moments for Storm Events

Authors: Noratiqah Mohd Ariff, Abdul Aziz Jemain, Mohd Aftar Abu Bakar

Abstract:

The construction of Intensity-Duration-Frequency (IDF) curves is one of the most common and useful tools for designing hydraulic structures and for providing a mathematical relationship between rainfall characteristics. IDF curves, especially those in Peninsular Malaysia, are often built using moving windows of rainfall. However, these windows do not represent actual rainfall events, since the duration of the rainfall is prefixed. Hence, instead of using moving windows, this study aims to find regionalized distributions for IDF curves of extreme rainfall based on storm events. A homogeneity test is performed on the annual maxima of storm intensities to identify homogeneous regions of storms in Peninsular Malaysia. The L-moment method is then used to regionalize the Generalized Extreme Value (GEV) distribution of these annual maxima, and subsequently IDF curves are constructed using the regional distributions. The differences between the IDF curves obtained and the IDF curves found using at-site GEV distributions are assessed through the coefficient of variation of the root mean square error, the mean percentage difference and the coefficient of determination. The small differences imply that the construction of IDF curves can be simplified by finding a general probability distribution for each region. This will also help in constructing IDF curves for sites with no rainfall station.
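
A minimal at-site sketch of fitting a GEV distribution by L-moments is given below, using the sample probability weighted moments and Hosking's (1990) approximation for the GEV parameters; the simulated annual maxima are an assumption for the example, and the regionalization step of the paper is not shown.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments (l1, l2, l3) via unbiased probability weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_from_lmoments(l1, l2, l3):
    """Hosking's (1990) approximation for the GEV parameters (location, scale, shape)."""
    t3 = l3 / l2
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c**2
    alpha = l2 * k / ((1 - 2.0**(-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return xi, alpha, k

def gev_quantile(F, xi, alpha, k):
    """Quantile function x(F) = xi + alpha * (1 - (-ln F)^k) / k."""
    return xi + alpha * (1 - (-np.log(F))**k) / k

# toy usage: 60 annual maxima of storm intensity (mm/h), 100-year quantile
rng = np.random.default_rng(11)
amax = 20 + 15 * rng.gumbel(size=60)
xi, alpha, k = gev_from_lmoments(*sample_lmoments(amax))
print(gev_quantile(0.99, xi, alpha, k))
```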

Keywords: IDF curves, L-moments, regionalization, storm events

Procedia PDF Downloads 496
406 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest, and they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and the latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and have obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
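
For reference, the mixture cure model described above can be written compactly as below; this is the standard textbook formulation (logistic incidence, Cox PH latency) rather than a result specific to this paper.

```latex
\[
S_{\mathrm{pop}}(t \mid \mathbf{x}, \mathbf{z})
  = 1 - \pi(\mathbf{z}) + \pi(\mathbf{z})\, S_u(t \mid \mathbf{x}),
\qquad
\pi(\mathbf{z}) = \frac{\exp(\boldsymbol{\gamma}^{\top}\mathbf{z})}{1 + \exp(\boldsymbol{\gamma}^{\top}\mathbf{z})},
\qquad
S_u(t \mid \mathbf{x}) = S_0(t)^{\exp(\boldsymbol{\beta}^{\top}\mathbf{x})},
\]
```

where pi(z) is the probability of being uncured (incidence), S_u is the conditional survival of the uncured subjects (latency), and S_0 is the baseline survival function.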

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 109
405 Contrasted Mean and Median Models in Egyptian Stock Markets

Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid

Abstract:

Emerging market return distributions show a significant departure from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis that constitute a significant departure from normality. Therefore, the classical Markowitz mean-variance model is not applicable to emerging markets, since it assumes normally-distributed returns (with zero skewness and kurtosis) and a quadratic utility function. Moreover, the Markowitz mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models have been introduced to account for the insufficiency of describing a portfolio by only its first two moments, while the median model has been introduced as a robust statistic which is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of the variance to capture the effect of risk. In this research, higher-moment models, including the Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK) models, are formulated as single-objective non-linear programming (NLP) problems, and median models, including the Median-Value-at-Risk (MedVaR) and Median-Mean Absolute Deviation (MedMAD) models, are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to some benchmark portfolios and tested on real financial data from the main Egyptian index, EGX30. The results show that all the median models outperform the higher-moment models, as they provide higher final wealth for the investor over the entire period of the study. In addition, the results confirm the inapplicability of the classical Markowitz mean-variance model to the Egyptian stock market, as it results in very low realized profits.
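
To make the higher-moment quantities concrete, the sketch below computes the first four moments of a portfolio return series and a simple scalarized MVS score; this is an illustrative simplification, not the exact NLP or MILP formulations used in the paper, and the weights and data are assumptions.

```python
import numpy as np

def portfolio_moments(returns, weights):
    """First four moments of the portfolio return series implied by `weights`.
    `returns` is T x N (periods by assets)."""
    r = returns @ weights
    mean = r.mean()
    var = r.var(ddof=1)
    skew = ((r - mean) ** 3).mean() / r.std(ddof=0) ** 3
    kurt = ((r - mean) ** 4).mean() / r.std(ddof=0) ** 4
    return mean, var, skew, kurt

def mvs_objective(weights, returns, lam_var=1.0, lam_skew=1.0):
    """Simple Mean-Variance-Skewness score to maximize: reward mean and skewness, penalize variance."""
    mean, var, skew, _ = portfolio_moments(returns, weights)
    return mean - lam_var * var + lam_skew * skew

# toy usage: equally weighted portfolio of 5 assets with heavy-tailed returns
rng = np.random.default_rng(12)
R = 0.001 + 0.02 * rng.standard_t(df=4, size=(500, 5))
w = np.full(5, 0.2)
print(portfolio_moments(R, w))
print(mvs_objective(w, R))
```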

Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming

Procedia PDF Downloads 284
404 Creating Moments and Memories: An Evaluation of the Starlight 'Moments' Program for Palliative Children, Adolescents and Their Families

Authors: C. Treadgold, S. Sivaraman

Abstract:

The Starlight Children's Foundation (Starlight) is an Australian non-profit organisation that delivers programs, in partnership with health professionals, to support children, adolescents, and their families who are living with a serious illness. While supporting children and adolescents with life-limiting conditions has always been a feature of Starlight's work, providing a dedicated program, specifically targeting and meeting the needs of the paediatric palliative population, is a recent area of focus. Recognising the challenges in providing children’s palliative services, Starlight initiated a research and development project to better understand and meet the needs of this group. The aim was to create a program which enhances the wellbeing of children, adolescents, and their families receiving paediatric palliative care in their community through the provision of on-going, tailored, positive experiences or 'moments'. This paper will present the results of the formative evaluation of this unique program, highlighting the development processes and outcomes of the pilot. The pilot was designed using an innovation methodology, which included a number of research components. There was a strong belief that it needed to be delivered in partnership with a dedicated palliative care team, helping to ensure the best interests of the family were always represented. This resulted in Starlight collaborating with both the Victorian Paediatric Palliative Care Program (VPPCP) at the Royal Children's Hospital, Melbourne, and the Sydney Children's Hospital Network (SCHN) to pilot the 'Moments' program. As experts in 'positive disruption', with a long history of collaborating with health professionals, Starlight was well placed to deliver a program which helps children, adolescents, and their families to experience moments of joy and connection and to achieve their own sense of accomplishment. Building on Starlight’s evidence-based approach and experience in creative service delivery, the program aims to use the power of 'positive disruption' to brighten the lives of this group and create important memories. The clinical and Starlight team members collaborate to ensure that the child and family are at the centre of the program. The design of each experience is specific to their needs and ensures the creation of positive memories and family connection. It aims for each moment to enhance quality of life. The partnership with the VPPCP and SCHN has allowed the program to reach families across metropolitan and regional locations. In late 2019 a formative evaluation of the pilot was conducted utilising both quantitative and qualitative methodologies to document both the delivery and outcomes of the program. Central to the evaluation were the interviews conducted with both clinical teams and families in order to gain a comprehensive understanding of the impact of and satisfaction with the program. The findings, which will be shared in this presentation, provide practical insight into the delivery of the program, the key elements for its success with families, and areas which could benefit from additional research and focus. It will use stories and case studies from the pilot to highlight the impact of the program and discuss what opportunities, challenges, and learnings emerged.

Keywords: children, families, memory making, pediatric palliative care, support

Procedia PDF Downloads 80
403 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative, which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations, is employed. The alternative includes the use of the PEM, and its applicability is shown by assessing the reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM and another classic reliability method (the first-order reliability method, FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or if numerical schemes, the FEM or computational mechanics are employed.
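
A minimal sketch of Rosenblueth's two-point estimate method is given below for the simplified case of independent, symmetric random variables; the limit state function and its parameters are assumed values, and the reliability index printed at the end is the simple moment-based index, not the full PEM-FORM combination of the paper.

```python
import numpy as np
from itertools import product

def point_estimate_moments(g, means, stds):
    """Rosenblueth's two-point estimate method: evaluate g at the 2^n combinations of
    (mean ± std), each with weight 1/2^n, and recover the approximate mean and
    standard deviation of g."""
    n = len(means)
    values = []
    for signs in product([-1.0, 1.0], repeat=n):
        x = np.array(means) + np.array(signs) * np.array(stds)
        values.append(g(x))
    values = np.array(values)
    mean_g = values.mean()
    std_g = np.sqrt(np.mean(values**2) - mean_g**2)
    return mean_g, std_g

# toy usage: limit state g = R - S (resistance minus load effect)
g = lambda x: x[0] - x[1]
mean_g, std_g = point_estimate_moments(g, means=[500.0, 300.0], stds=[50.0, 60.0])
beta = mean_g / std_g     # moment-based reliability index from the PEM moments
print(mean_g, std_g, beta)
```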

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 326
402 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a more powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, insertion of a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference leads to a reduction in spectral efficiency. The design of CFO estimation for the cooperative OFDM system without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without a CP over the multipath fading channel. Specifically, using a block-type pilot, the CFO estimate is first derived according to the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the cost of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys the advantage of substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
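
For background, the sketch below shows the classic correlation-based CFO estimate obtained from a pilot block transmitted twice in succession; it is a generic repeated-pilot estimator, not the least-squares or iterative scheme proposed in the paper, and the pilot, offset and noise level are assumptions.

```python
import numpy as np

def cfo_estimate(rx, N):
    """Correlation-based CFO estimate from two identical pilot blocks of length N sent
    back to back: the phase of sum(r[n+N] * conj(r[n])) over the first block gives the
    frequency offset normalized to 1/(N*T)."""
    corr = np.sum(rx[N:2 * N] * np.conj(rx[:N]))
    return np.angle(corr) / (2 * np.pi)

# toy usage: pilot repeated twice, true normalized CFO of 0.13, AWGN
rng = np.random.default_rng(13)
N = 64
pilot = (rng.integers(0, 2, N) * 2 - 1).astype(complex)       # BPSK pilot block
tx = np.concatenate([pilot, pilot])
eps = 0.13                                                     # CFO in units of 1/(N*T)
n = np.arange(2 * N)
rx = tx * np.exp(1j * 2 * np.pi * eps * n / N) + 0.05 * (rng.standard_normal(2 * N)
                                                         + 1j * rng.standard_normal(2 * N))
print(cfo_estimate(rx, N))   # close to 0.13
```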

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 242
401 Frequency Analysis Using Multiple Parameter Probability Distributions for Rainfall to Determine Suitable Probability Distribution in Pakistan

Authors: Tasir Khan, Yejuan Wang

Abstract:

The study of extreme rainfall events is very important for flood management in river basins and for the design of water conservancy infrastructure. Evaluation of the quantiles of the annual maximum rainfall (AMRF) is required in different environmental fields, agricultural operations, renewable energy sources, climatology, and the design of different structures. Therefore, frequency analysis of the annual maximum rainfall (AMRF) was performed at different stations in Pakistan. Multiple probability distributions, namely log-normal (LN), generalized extreme value (GEV), Gumbel (max), and Pearson type 3 (P3), were used to find the most appropriate distributions at the different stations. The L-moments method was used to evaluate the distribution parameters. The Anderson-Darling test, the Kolmogorov-Smirnov test, and the chi-square test showed that two distributions, namely Gumbel (max) and LN, were the most appropriate. The quantile estimate of a multi-parameter probability distribution characterizes extreme rainfall at a specific location and is therefore important for decision-makers and planners who design and construct different structures. This result provides an indication of the consequences of these multi-parameter distributions for the study sites, for peak flow prediction, and for the design of hydrological maps. Therefore, this finding can support hydraulic structure design and flood management.
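
A minimal goodness-of-fit sketch for the candidate distributions named above is given below; the annual maxima are simulated, and the parameters are fitted by maximum likelihood as a stand-in, since scipy does not provide an L-moments fit out of the box.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
amrf = 60 + 20 * rng.gumbel(size=80)    # toy annual maximum rainfall series (mm)

candidates = {
    "log-normal": stats.lognorm,
    "GEV": stats.genextreme,
    "Gumbel (max)": stats.gumbel_r,
    "Pearson type 3": stats.pearson3,
}

for name, dist in candidates.items():
    params = dist.fit(amrf)                                  # maximum likelihood fit
    ks_stat, p_value = stats.kstest(amrf, dist.cdf, args=params)
    print(f"{name:15s}  KS = {ks_stat:.3f}  p = {p_value:.3f}")
```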

Keywords: RAMSE, multiple frequency analysis, annual maximum rainfall, L-moments

Procedia PDF Downloads 52
400 Crack Propagation Effect at the Interface of a Composite Beam

Authors: Mezidi Amar

Abstract:

In this research work, crack propagation at the interface of a composite beam is considered. The behavior of composite beams (CB) depends upon a law relating the tangential or normal forces to the inelastic propagation. Throughout this study, composite beams are classified either as composite beams with partial connection or as sandwich beams of three layers. These structural systems are governed by the same type of differential equations for their in-plane as well as their out-of-plane behavior. Multi-layer elements with partial connection are typically encountered in the field of timber construction, where the elements are assembled by joining. The formalism for the in-plane and out-of-plane behavior of these composite beams is obtained, and results that are simple to interpret from an engineering point of view are proposed for the case of composite beams of rectangular, simply supported section. An apparent analytical peculiarity, or paradox, exists in the bending behavior of elastic composite beams with interlayer slip, sandwich beams or other similar problems subjected to boundary moments. For a fully composite beam subjected to end moments, the partial composite model will render a non-vanishing uniform value for the normal force in the individual subelement. The results obtained are similar to those for the case of in-plane vibrations, for the composite beams as well as for the sandwich beams, where the eigenfrequencies increase with the related rigidity.

Keywords: composite beam, behaviour, interface, deflection, propagation

Procedia PDF Downloads 266
399 Relevance Feedback within CBIR Systems

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

We present here the results of a comparative study of some techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here belongs to the data mining field, namely the K-Nearest Neighbours algorithm (KNN), while the rest of the methods are related purely to the information retrieval field and fall under the purview of the following three major axes: query shifting, feature weighting and the optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in the sense that, besides the influence of the seeds, the rating of the actual target image is also influenced by the images already rated. The results presented here have been obtained from experiments conducted on the Wang database for one iteration, utilizing colour moments in the RGB space. This compact descriptor, colour moments, is adequate for the efficiency required by interactive systems. The results obtained allow us to claim that the proposed algorithm yields good results; it even outperforms a wide range of techniques available in the literature.
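
A minimal sketch of the colour-moments descriptor used as the image feature is shown below (mean, standard deviation and skewness per RGB channel, giving nine values per image); the random test image is an assumption for the example.

```python
import numpy as np

def colour_moments(image_rgb):
    """Nine-dimensional colour-moments descriptor: mean, standard deviation and
    skewness of each RGB channel (a compact feature often used in CBIR)."""
    feats = []
    for c in range(3):
        channel = image_rgb[:, :, c].astype(float).ravel()
        mu = channel.mean()
        sigma = channel.std()
        skew = np.cbrt(((channel - mu) ** 3).mean())   # cube root keeps the original units
        feats.extend([mu, sigma, skew])
    return np.array(feats)

# toy usage with a random 64 x 64 RGB image
rng = np.random.default_rng(15)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(colour_moments(img))
```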

Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio’s formula, adaptive shifting query, feature weighting, original KNN, incremental KNN

Procedia PDF Downloads 253
398 Static and Dynamic Analysis of Hyperboloidal Helix Having Thin Walled Open and Close Sections

Authors: Merve Ermis, Murat Yılmaz, Nihal Eratlı, Mehmet H. Omurtag

Abstract:

The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via the mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is considered as the local coordinate system for the helix geometry. The helix domain is discretized with a two-noded curved element, and linear shape functions are used. Each node of the curved element has 12 degrees of freedom, namely, three translations, three rotations, two shear forces, one axial force, two bending moments and one torque. The finite element matrices are derived by using exact nodal values of the curvatures and the arc length, which are interpolated linearly over the element axial length. The torsional moments of inertia for the closed and open square box sections are obtained by a finite element solution of the St. Venant torsion formulation. With the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic analyses of the hyperboloidal helix is investigated. Benchmark problems are presented for the literature.

Keywords: hyperboloidal helix, squared cross section, thin walled cross section, torsional rigidity

Procedia PDF Downloads 336
397 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be an appropriate choice for the interpolation of the hydrological variables; moreover, the Wakeby distribution is particularly suitable for describing phenomena producing heavy tails. The estimation methods usually proposed for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather, convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in the more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
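
Since the Wakeby distribution is defined through its quantile function, a minimal sketch is given below; the quantile form and the unbiased sample probability weighted moments follow the standard Hosking-Wallis conventions, and the parameter values in the example are illustrative assumptions.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gam, delta):
    """Wakeby quantile function:
    x(F) = xi + (alpha/beta)*(1 - (1-F)**beta) - (gam/delta)*(1 - (1-F)**(-delta))."""
    F = np.asarray(F, dtype=float)
    return xi + alpha / beta * (1 - (1 - F) ** beta) - gam / delta * (1 - (1 - F) ** (-delta))

def sample_pwms(x, r_max=4):
    """Unbiased sample probability weighted moments b_0 ... b_{r_max}."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b = []
    for r in range(r_max + 1):
        w = np.ones(n)
        for j in range(1, r + 1):
            w *= (i - j) / (n - j)
        b.append(np.mean(w * x))
    return np.array(b)

# toy usage: simulate by inversion with illustrative parameters, then compute the PWMs
rng = np.random.default_rng(16)
u = rng.uniform(size=1000)
x = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=0.8, gam=1.0, delta=0.2)
print(sample_pwms(x))
```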

Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution

Procedia PDF Downloads 110