Search results for: cumulative variance.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1476

1476 Contrast Enhancement of Color Images with Color Morphing Approach

Authors: Javed Khan, Aamir Saeed Malik, Nidal Kamel, Sarat Chandra Dass, Azura Mohd Affandi

Abstract:

Low contrast images can result from incorrect image-acquisition settings or poor illumination conditions. Such images may not be visually appealing, and feature extraction from them can be difficult. Contrast enhancement of color images can be useful in the medical domain for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. An adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels of the original (low contrast) image and of the image enhanced with adaptive histogram equalization (AHE) are then morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results are analyzed using the cumulative variance and contrast improvement factor measures and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques.
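
A minimal sketch of the described pipeline, assuming scikit-image and NumPy are available; the use of CLAHE (equalize_adapthist) as the adaptive histogram equalization step and the blending weight alpha are illustrative choices rather than the authors' exact settings.

```python
import numpy as np
from skimage import exposure

def enhance_contrast(rgb, alpha=0.5):
    """Blend each normalized-RGB channel with its AHE-enhanced version.

    alpha is the (assumed) morphing proportion kept from the original channel.
    """
    rgb = rgb.astype(np.float64)
    # Normalized RGB: each channel divided by the pixel-wise channel sum.
    norm = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-12)
    out = np.empty_like(norm)
    for c in range(3):
        # Adaptive histogram equalization (CLAHE) applied per channel.
        ahe = exposure.equalize_adapthist(norm[..., c])
        # Morph the original and enhanced channels in the chosen proportion.
        out[..., c] = alpha * norm[..., c] + (1.0 - alpha) * ahe
    return np.clip(out, 0.0, 1.0)
```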

Keywords: contrast enhancement, normalized RGB, adaptive histogram equalization, cumulative variance.

Procedia PDF Downloads 340
1475 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application concerns the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (to decide whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. A table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are further strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous work. At present, the extension to the two-dimensional case has been completed, allowing up to five parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but provides more efficient alternatives in nonstandard problems and on large data sets.
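
A rough sketch of the comparison procedure under one reading of the abstract: the 'absolute difference in probabilities' is taken here as the integrated absolute difference of the two normal densities (twice the total variation distance), and the critical value is obtained by Monte Carlo simulation under the null of identical parameters. The grid, sample size, and number of replications are illustrative.

```python
import numpy as np
from scipy import stats

def density_difference(mu1, sd1, mu2, sd2, grid_size=4001):
    """Integrated absolute difference of two normal densities.

    The same quantity can be expressed through the CDFs evaluated at the
    points where the two densities cross.
    """
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, grid_size)
    diff = np.abs(stats.norm.pdf(x, mu1, sd1) - stats.norm.pdf(x, mu2, sd2))
    return np.trapz(diff, x)

# Illustrative 5%-level critical value under H0 (identical parameters),
# obtained by re-estimating the parameters from simulated samples.
rng = np.random.default_rng(0)
n, reps = 200, 1000
null_stats = [
    density_difference(a.mean(), a.std(ddof=1), b.mean(), b.std(ddof=1))
    for a, b in ((rng.normal(0, 1, n), rng.normal(0, 1, n)) for _ in range(reps))
]
critical_value = np.quantile(null_stats, 0.95)
```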

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 141
1474 Parameter Interactions in the Cumulative Prospect Theory: Fitting the Binary Choice Experiment Data

Authors: Elzbieta Babula, Juhyun Park

Abstract:

Tversky and Kahneman's cumulative prospect theory assumes symmetric probability cumulation with regard to the reference point within decision weights. Theoretically, this model should be invariant under a change in the direction of probability cumulation. In the present study, this phenomenon is investigated by creating a reference model that allows the parameter interactions in cumulative prospect theory specifications to be verified. The utility and weighting functions are fitted simultaneously to binary choice data from the experiment. The results show that the flexibility of the probability weighting function is a crucial characteristic for preventing parameter interactions when estimating cumulative prospect theory.
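
A compact sketch of the kind of simultaneous parametric fit described, using the standard Tversky-Kahneman value and weighting functional forms and a logit choice rule on binary prospects with a single nonzero outcome; the data layout, the shared curvature for gains and losses, and the logit link are assumptions rather than the authors' exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def weight(p, gamma):
    # Tversky-Kahneman probability weighting function.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

def value(x, alpha, lam):
    # Power value function with loss-aversion coefficient lam
    # (same curvature alpha assumed for gains and losses).
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

def cpt(x, p, alpha, lam, gamma):
    # CPT value of a prospect with one nonzero outcome x obtained with prob p.
    return weight(p, gamma) * value(x, alpha, lam)

def neg_log_lik(theta, xA, pA, xB, pB, choseA, temp=1.0):
    alpha, lam, gamma = theta
    dv = cpt(xA, pA, alpha, lam, gamma) - cpt(xB, pB, alpha, lam, gamma)
    prob_A = 1.0 / (1.0 + np.exp(-temp * dv))      # logit choice rule
    prob = np.clip(np.where(choseA, prob_A, 1.0 - prob_A), 1e-12, 1.0)
    return -np.log(prob).sum()

# Hypothetical binary-choice data: prospect A vs. prospect B per trial.
xA, pA = np.array([10.0, -5.0]), np.array([0.8, 0.5])
xB, pB = np.array([4.0, -2.0]), np.array([1.0, 1.0])
choseA = np.array([True, False])
fit = minimize(neg_log_lik, x0=[0.88, 2.25, 0.61],
               args=(xA, pA, xB, pB, choseA), method="Nelder-Mead")
```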

Keywords: binary choice experiment, cumulative prospect theory, decision weights, parameter interactions

Procedia PDF Downloads 181
1473 Efficient Frontier: Comparing Different Volatility Estimators

Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković

Abstract:

Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios that maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares three efficient frontiers on the Croatian stock market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.
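
A small sketch of the three risk measures being compared, assuming daily returns and daily high/low prices are available as NumPy arrays; the Parkinson estimator stands in for the range-based intraday volatility, which is an assumption on our part.

```python
import numpy as np

def variance_risk(returns):
    # Classical Markowitz risk: sample variance of portfolio or asset returns.
    return np.var(returns, ddof=1)

def lower_semivariance(returns, target=0.0):
    # Downside risk: mean squared shortfall below the target return.
    downside = np.minimum(returns - target, 0.0)
    return np.mean(downside**2)

def parkinson_variance(high, low):
    # Range-based (Parkinson) daily variance estimator from high/low prices.
    return np.mean(np.log(high / low) ** 2) / (4.0 * np.log(2.0))

def portfolio_variance(weights, cov):
    # w' * Sigma * w, where Sigma comes from any of the estimators above.
    return weights @ cov @ weights
```

Each choice of risk estimator yields its own covariance input, and re-running the same portfolio optimization with each input traces out one efficient frontier per estimator.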

Keywords: variance, lower semi-variance, range-based volatility, MPT

Procedia PDF Downloads 476
1472 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de Leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of eight 16x16 Sudoku designs, four randomized independently and four presented in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a linear mixed model with random tester and treatment effects and a fixed testing-order effect was assumed. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity the results are presented under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, evaluating a large number of samples can help to improve sample discrimination.
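
The original analysis was run in R (lme4, Bayesthresh, ordinal). Purely for illustration, a rough Python analog of the Gaussian mixed model, with a fixed testing-order effect and crossed random tester and treatment effects, might look like the sketch below; the column names and the data file are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per evaluation, with columns
# 'score' (nine-point hedonic scale), 'order', 'tester', and 'treatment'.
df = pd.read_csv("dulce_de_leche_scores.csv")

# Crossed random effects (tester and treatment) are specified as variance
# components within a single all-encompassing group.
model = smf.mixedlm(
    "score ~ C(order)",                            # fixed testing-order effect
    data=df,
    groups=np.ones(len(df)),                       # one group: effects are crossed
    vc_formula={"tester": "0 + C(tester)",         # random tester effects
                "treatment": "0 + C(treatment)"},  # random treatment effects
)
result = model.fit()
print(result.summary())
```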

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 290
1464 Bias in the Estimation of Covariance Matrices and Optimality Criteria

Authors: Juan M. Rodriguez-Diaz

Abstract:

The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to obtain the best designs for the actual variance structure; otherwise, the loss in efficiency of the designs obtained with the traditional approach may be substantial.

Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix

Procedia PDF Downloads 398
1470 The Analysis of Personalized Low-Dose Computed Tomography Protocol Based on Cumulative Effective Radiation Dose and Cumulative Organ Dose for Patients with Breast Cancer with Regular Chest Computed Tomography Follow up

Authors: Okhee Woo

Abstract:

Purpose: The aim of this study is to evaluate the 2-year cumulative effective radiation dose and cumulative organ dose from regular follow-up computed tomography (CT) scans in patients with breast cancer and to establish a personalized low-dose CT protocol. Methods and Materials: A retrospective study was performed on patients with breast cancer who were diagnosed and managed consistently on the basis of the routine breast cancer follow-up protocol between 2012-01 and 2016-06. Based on ICRP (International Commission on Radiological Protection) Publication 103, the cumulative effective radiation doses of each patient over the 2-year follow-up were analyzed using commercial radiation management software (Radimetrics, Bayer Healthcare). The personalized effective doses to each organ were analyzed in detail using the software's Monte Carlo simulation. Results: A total of 3822 CT scans on 490 patients were evaluated (age: 52.32±10.69 years). The mean number of scans per patient was 7.8±4.54. Each patient received 95.54±63.24 mSv of radiation over the 2 years. The cumulative CT radiation dose was significantly higher in patients with lymph node metastasis (p = 0.00). HER-2-positive patients were exposed to more radiation than estrogen or progesterone receptor-positive patients (p = 0.00). There was no difference in the cumulative effective radiation dose between age groups. Conclusion: Knowing how much radiation a patient has been exposed to is the starting point for managing radiation exposure in patients with long-term CT follow-up. A precise and personalized protocol, as well as iterative reconstruction, may reduce the hazard from unnecessary radiation exposure.

Keywords: computed tomography, breast cancer, effective radiation dose, cumulative organ dose

Procedia PDF Downloads 151
1469 Occupational Cumulative Effective Doses of Radiation Workers in Hamad Medical Corporation in Qatar

Authors: Omar Bobes, Abeer Al-Attar, Mohammad Hassan Kharita, Huda Al-Naemi

Abstract:

The number of radiological examinations has increased steadily in recent years. As a result, the risk of possible radiation-induced consequential damage also increases through continuous, lifelong, and growing exposure to ionizing radiation. Therefore, radiation dose monitoring in medicine has become an essential element of medical practice. In this study, the occupational cumulative doses of radiation workers at Hamad Medical Corporation in Qatar were assessed over a period of five years. The number of monitored workers selected for this study was 555 (out of a total of 1250 monitored workers) who had been working continuously, with no interruption, with ionizing radiation over the past five years, from 2015 to 2019. The aim of this work is to examine the occupational groups and the activities in which the higher radiation exposures occurred, and their order of magnitude. The most exposed group was the nuclear medicine technologist staff, with an average cumulative dose of 8.4 mSv. The highest individual cumulative dose was 9.8 mSv, recorded for the PET-CT technologist category.

Keywords: cumulative dose, effective dose, monitoring, occupational exposure, dosimetry

Procedia PDF Downloads 202
1468 Exploring the Energy Model of Cumulative Grief

Authors: Masica Jordan Alston, Angela N. Bullock, Angela S. Henderson, Stephanie Strianse, Sade Dunn, Joseph Hackett, Alaysia Black Hackett, Marcus Mason

Abstract:

The Energy Model of Cumulative Grief was created in 2018. The model builds on historic stage theories of grief and is additionally distinguished by its focus on cultural responsiveness. The Energy Model of Cumulative Grief helps to train practitioners who work with clients dealing with grief and loss. This paper introduces the model and explores how it positively impacted a convenience sample of 140 practitioners and individuals experiencing grief and loss. Respondents participated in webinars provided by the National Grief and Loss Center of America (NGLCA). Participants in this cross-sectional study completed one of three Grief and Loss Surveys created by the Grief and Loss Centers of America. Data analysis was conducted via SPSS and Survey Hero to examine the survey results. Results indicate that the Energy Model of Cumulative Grief was an effective resource for participants in addressing grief and loss. The majority of participants found the webinars helpful and a conduit to higher levels of hope. The findings suggest that using the Energy Model of Cumulative Grief is effective in providing culturally responsive grief and loss resources to practitioners and clients. The use of technology to provide hope through the Energy Model of Cumulative Grief has far-reaching implications for those suffering from grief and loss worldwide.

Keywords: grief, loss, grief energy, grieving brain

Procedia PDF Downloads 43
1467 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in univariate as well as bivariate setups. In this paper, we introduce the notion of CPI of order α (alpha) and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on the proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.
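
For reference, a commonly used definition of the (univariate) cumulative past inaccuracy between two past lifetimes with distribution functions F and G is given below; the order-α and conditional bivariate generalizations studied in the paper build on this baseline.

```latex
% Cumulative past inaccuracy (CPI) between CDFs F (true) and G (reference):
\[
  \mathrm{CPI}(F,G) \;=\; -\int_{0}^{\infty} F(x)\,\log G(x)\,\mathrm{d}x ,
\]
% which reduces to the cumulative past entropy (CPE) when G = F.
```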

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 215
1466 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling

Authors: Saba Riaz, Syed A. Hussain

Abstract:

This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and the asymptotic variances of the suggested class are derived under a large-sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator and the classical ratio, product, and linear regression estimators. It has also been found that the suggested class of estimators is more efficient than some recently published estimators.
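
A small simulation sketch contrasting the usual sample variance with a classical ratio-type estimator that exploits the known population variance of an auxiliary variable; the population model and sample sizes are illustrative, and the paper's generalized class itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population: auxiliary variable x and a correlated study variable y.
N, n = 10_000, 200
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 3.0 + 1.5 * x + rng.normal(0.0, 2.0, size=N)
Sy2 = np.var(y, ddof=1)          # target: finite population variance of y
Sx2 = np.var(x, ddof=1)          # known population variance of x

usual, ratio = [], []
for _ in range(5_000):
    idx = rng.choice(N, size=n, replace=False)       # simple random sampling
    sy2, sx2 = np.var(y[idx], ddof=1), np.var(x[idx], ddof=1)
    usual.append(sy2)                                # usual estimator
    ratio.append(sy2 * Sx2 / sx2)                    # classical ratio-type estimator

mse_usual = np.mean((np.array(usual) - Sy2) ** 2)
mse_ratio = np.mean((np.array(ratio) - Sy2) ** 2)
print(f"MSE usual: {mse_usual:.3f}  MSE ratio-type: {mse_ratio:.3f}")
```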

Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency

Procedia PDF Downloads 184
1465 Dividend Initiations and IPO Long-Run Performance

Authors: Nithi Sermsiriviboon, Somchai Supattarakul

Abstract:

Dividend initiations are an economically significant event with important implications for a firm's future financial capacity. Given the market's expectation of a consistent payout, managers of IPO firms must approach the initial dividend decision cautiously. We compare the long-run performance of IPO firms that initiated dividends with that of similarly matched non-payers. We find that firms which initiated dividends perform significantly better up to three years after the initiation date. Moreover, we measure investor reactions by the two-day cumulative abnormal return around the dividend announcement date. We find no statistically significant differences between the cumulative abnormal returns (CAR) of IPO firms and those of non-IPO firms, indicating that investors do not respond to dividend announcements of IPO firms any more than they do to dividend announcements of non-IPO firms.
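
A bare-bones sketch of the two-day cumulative abnormal return computation, assuming market-model abnormal returns estimated over a pre-event window; the variable names, window placement, and estimation-window length are illustrative.

```python
import numpy as np

def two_day_car(stock_ret, market_ret, event_idx, est_window=120):
    """CAR over a two-day window starting at the announcement index.

    Abnormal returns follow a simple market model fitted on the
    est_window trading days preceding the event.
    """
    est = slice(event_idx - est_window, event_idx)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)
    win = slice(event_idx, event_idx + 2)            # two-day event window
    abnormal = stock_ret[win] - (alpha + beta * market_ret[win])
    return abnormal.sum()
```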

Keywords: dividend, initial public offerings, long-run performance, finance

Procedia PDF Downloads 200
1464 Distributed Energy Storage as a Potential Solution to Electrical Network Variance

Authors: V. Rao, A. Bedford

Abstract:

As the efficient performance of the national grid becomes increasingly important for maintaining the stability of the electrical network, the balance between generation and demand must be effectively maintained. To do this, any losses that occur in the power network must be reduced by compensating for them. In this paper, one of the main causes of losses in the network is identified as variance, which hinders the grid's power-carrying capacity. The reason for the variance in the grid is investigated and identified as the rise in the integration of renewable energy sources (RES) such as wind and solar power. The intermittent nature of these RES, along with fluctuating demand, gives rise to variance in the electrical network. The losses that occur during this process are estimated by analyzing the network's power profiles. Whilst researchers have identified different ways to tackle this problem, little consideration has been given to energy storage. This paper seeks to redress this by considering the role of energy storage systems as a potential solution for reducing variance in the network. The implementation of suitable energy storage systems for different applications is presented as part of a variance-reduction method, thus contributing towards maintaining stable and efficient grid operation.

Keywords: energy storage, electrical losses, national grid, renewable energy, variance

Procedia PDF Downloads 279
1463 Application of Hyperbinomial Distribution in Developing a Modified p-Chart

Authors: Shourav Ahmed, M. Gulam Kibria, Kais Zaman

Abstract:

Control charts graphically verify variation in quality parameters. Attribute-type control charts deal with quality parameters that can only hold two states, e.g., good or bad, yes or no, etc. At present, the p-control chart is most commonly used for attribute-type data. In the construction of a p-control chart using the binomial distribution, the value of the proportion non-conforming must be known or estimated from limited sample information. Because the hyperbinomial distribution treats the fraction non-conforming (p) as a random variable, unlike the constant value assumed under the binomial distribution, it reduces the risk of false detection. In this study, a statistical control chart is proposed based on the hyperbinomial distribution for the case where a prior estimate of the proportion non-conforming is unavailable and must be estimated from limited sample information. We developed the control limits of the proposed modified p-chart using the mean and variance of the hyperbinomial distribution. The proposed modified p-chart can also utilize additional sample information when it is available. The study also validates the use of the modified p-chart by comparing it with the result obtained using the cumulative distribution function of the hyperbinomial distribution. The study clearly indicates that using the hyperbinomial distribution in the construction of a p-control chart yields much more accurate estimates of quality parameters than using the binomial distribution.
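
For context, the classical binomial-based 3-sigma p-chart limits are computed as below; the modification proposed in the paper replaces the binomial mean and variance of the fraction non-conforming with those of the hyperbinomial distribution, which are not reproduced here.

```python
import numpy as np

def binomial_p_chart_limits(defectives, sample_size):
    """Classical 3-sigma p-chart limits from subgroup counts of non-conforming items."""
    defectives = np.asarray(defectives, dtype=float)
    p_bar = defectives.sum() / (defectives.size * sample_size)
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3.0 * sigma)
    ucl = min(1.0, p_bar + 3.0 * sigma)
    return lcl, p_bar, ucl
```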

Keywords: binomial distribution, control charts, cumulative distribution function, hyperbinomial distribution

Procedia PDF Downloads 233
1462 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile of the time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that using direct or indirect priors affects the precision of the test.
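
For reference, the cumulative exposure assumption for a simple (two-step) step-stress test can be stated as below: the lifetime distribution after the stress change at time τ continues from an equivalent starting time s rather than restarting.

```latex
% Cumulative exposure model for a simple step-stress test with stress change at tau:
\[
  F(t) \;=\;
  \begin{cases}
    F_{1}(t), & 0 \le t < \tau, \\[4pt]
    F_{2}(t - \tau + s), & t \ge \tau,
  \end{cases}
  \qquad \text{where } s \text{ solves } F_{2}(s) = F_{1}(\tau).
\]
% The exposure accumulated under the first stress level is carried over as an
% equivalent exposure time s under the second stress level.
```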

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 469
1461 Optimal Replacement Period for a One-Unit System with Double Repair Cost Limits

Authors: Min-Tsai Lai, Taqwa Hariguna

Abstract:

This paper presents a periodic replacement model for a system, considering the concepts of single and cumulative repair cost limits simultaneously. Failures are divided into two types: a minor failure can be corrected by minimal repair, whereas a serious failure causes the system to break down completely. When a minor failure occurs, if the repair cost is less than a single repair cost limit L1 and the accumulated repair cost is less than a cumulative repair cost limit L2, then minimal repair is executed; otherwise, the system is preventively replaced. The system is also replaced at time T or at a serious failure. The optimal period T minimizing the long-run expected cost per unit time is verified to be finite and unique under some specific conditions.
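
A toy Monte Carlo sketch of the policy described above, useful for checking candidate replacement periods T numerically via the renewal-reward ratio; the failure and repair-cost distributions and all cost constants are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def cycle(T, L1, L2, c_prev=10.0, c_fail=25.0, rate=0.5, p_serious=0.1):
    """Simulate one renewal cycle; return (total cost, cycle length).

    Failures arrive as a Poisson process. A serious failure triggers a
    corrective replacement. A minor failure is minimally repaired only if
    its cost is below L1 and the accumulated repair cost stays below L2;
    otherwise the system is preventively replaced. Replacement also occurs
    at age T.
    """
    t, cum_repair = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= T:
            return cum_repair + c_prev, T
        if rng.random() < p_serious:
            return cum_repair + c_fail, t
        repair = rng.exponential(2.0)                # minor-failure repair cost
        if repair >= L1 or cum_repair + repair >= L2:
            return cum_repair + c_prev, t
        cum_repair += repair

def long_run_cost_rate(T, L1=4.0, L2=12.0, n_cycles=5_000):
    costs, lengths = zip(*(cycle(T, L1, L2) for _ in range(n_cycles)))
    return sum(costs) / sum(lengths)                 # expected cost per unit time

best_T = min(np.linspace(1.0, 20.0, 39), key=long_run_cost_rate)
```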

Keywords: repair-cost limit, cumulative repair-cost limit, minimal repair, periodical replacement policy

Procedia PDF Downloads 327
1460 Some Statistical Properties of Residual Sea Level along the Coast of Vietnam

Authors: Doan Van Chinh, Bui Thi Kien Trinh

Abstract:

This paper outlines some statistical properties of the residual sea level (RSL) at six representative tidal stations located along the coast of Vietnam. It was found that the positive RSL varied on average between 9.82 and 19.96 cm and the negative RSL varied on average between -16.62 and -9.02 cm. The maximum positive RSL varied on average between 102.8 and 265.5 cm, while the maximum negative RSL varied on average between -250.4 and -66.4 cm. The largest positive RSL values appeared in the summer months and the largest negative RSL values in the winter months. The cumulative frequency of RSL less than 50 cm was between 95 and 99% of the time, while the frequency of RSL higher than 100 cm accounted for between 0.01 and 0.2%. It was also found that the cumulative frequency of RSL durations shorter than 24 hours was between 90 and 99%, while the frequency of durations longer than 72 hours was of the order of 0.1 to 1%.

Keywords: coast of Vietnam, residual sea level, residual water, surge, cumulative frequency

Procedia PDF Downloads 250
1459 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The kernel distribution function estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE), and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator; the technique is based on the plug-in technique from density estimation. We evaluate the new estimator and the new bandwidth-selection technique using simulations and real-life data.
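
A short sketch of a KDFE whose kernel is a convex combination of two integrated kernels (Gaussian and Epanechnikov); the mixing weight and the bandwidth are placeholders rather than the data-driven, plug-in choices derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def gaussian_cdf_kernel(u):
    return norm.cdf(u)

def epanechnikov_cdf_kernel(u):
    # Integral of the Epanechnikov kernel 0.75 * (1 - u^2) on [-1, 1].
    u = np.clip(u, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * u - u**3)

def kdfe(x, data, h, lam=0.5):
    """Distribution function estimate at points x from a sample `data`,
    using the linear combination lam * Gaussian + (1 - lam) * Epanechnikov."""
    u = (np.asarray(x)[:, None] - np.asarray(data)[None, :]) / h
    k = lam * gaussian_cdf_kernel(u) + (1.0 - lam) * epanechnikov_cdf_kernel(u)
    return k.mean(axis=1)
```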

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 539
1458 Sales-Based Dynamic Investment and Leverage Decisions: A Longitudinal Study

Authors: Rihab Belguith, Fathi Abid

Abstract:

The paper develops a system-based approach to investigate the dynamic adjustment of debt structure and investment policies of firms in the Dow Jones index. This approach enables the assessment of relations among sales, debt, and investment opportunities by considering the simultaneous effect of changes in the market environment and future growth opportunities. We integrate firm-specific sales variance into the model to capture industry conditions. Empirical results were obtained from a panel data set of firms from different sectors. The analysis supports the view that environmental change does not affect all industries equally, since operating leverage differs among industries and so does the sensitivity to sales variance. Including the adjusted firm-specific variance, we find that there is no monotonic relation between leverage, sales, and investment. A firm may choose a low debt level in response to high sales variance, but high leverage to attenuate the negative relation between sales variance and the current level of investment. We further find that, while the overall effect of debt maturity on leverage is unaffected by the level of growth opportunities, the shorter the maturity of debt, the smaller the direct effect of sales variance on investment.

Keywords: dynamic panel, investment, leverage decision, sales uncertainty

Procedia PDF Downloads 200
1457 Behavior Loss Aversion Experimental Laboratory of Financial Investments

Authors: Jihene Jebeniani

Abstract:

We propose an approach combining the techniques of experimental economics with the flexibility of discrete choice models in order to test for loss aversion. Our main objective was to test the loss aversion assumption of cumulative prospect theory (CPT). We developed a laboratory experiment in the context of financial investments aimed at analyzing investors' attitudes towards risk. The study uses lotteries and is based on econometric modeling; the estimated model is an ordered probit.

Keywords: risk aversion, behavioral finance, experimental economics, lotteries, cumulative prospect theory

Procedia PDF Downloads 431
1456 Financial Market Reaction to Non-Financial Reports

Authors: Petra Dilling

Abstract:

This study examines the market reaction to the publication of integrated reports for a sample of 316 global companies for the reporting year 2018. Applying event study methodology, we find significant cumulative average abnormal returns (CAARs) after the publication date. To ensure robust estimation results, the Fama-French three-factor model is used, as well as a market-adjusted model, a CAPM, and a Fama-French model taking GARCH effects into account. We find a significant positive CAAR after the publication day of the integrated report. Our results suggest that investors react to the information provided in the integrated report and that this reaction differs from the reaction to the annual financial report. Furthermore, our cross-sectional analysis confirms that companies with a significant positive cumulative average abnormal return share certain characteristics. It was found that European companies are more likely to experience a stronger significant positive market reaction to the publication of their integrated report.

Keywords: integrated report, event methodology, cumulative abnormal return, sustainability, CAPM

Procedia PDF Downloads 110
1455 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and information is collected only on the auxiliary variable. In the second phase, a sample is selected, either from the first-phase sample or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easier and cheaper to collect than the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used at the stages of parameter estimation, stratification, sample selection, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for constructing confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are also studied in detail.
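
A minimal simulation of two-phase sampling with a ratio estimator of the population mean, illustrating how the first-phase auxiliary information enters the second-phase estimate and how its variance can be checked empirically; the population model and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

N, n1, n2 = 50_000, 2_000, 200                 # population and phase sample sizes
x = rng.gamma(5.0, 2.0, N)                     # cheap auxiliary variable
y = 2.0 + 1.2 * x + rng.normal(0.0, 3.0, N)    # expensive study variable

estimates = []
for _ in range(5_000):
    phase1 = rng.choice(N, n1, replace=False)        # only x is observed
    phase2 = rng.choice(phase1, n2, replace=False)   # both x and y are observed
    xbar1 = x[phase1].mean()
    xbar2, ybar2 = x[phase2].mean(), y[phase2].mean()
    estimates.append(ybar2 * xbar1 / xbar2)          # two-phase ratio estimator

estimates = np.array(estimates)
print("empirical bias:    ", estimates.mean() - y.mean())
print("empirical variance:", estimates.var(ddof=1))
```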

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 555
1454 Trajectories of Conduct Problems and Cumulative Risk from Early Childhood to Adolescence

Authors: Leslie M. Gutman

Abstract:

Conduct problems (CP) represent a major dilemma, with wide-ranging and long-lasting individual and societal impacts. Children experience heterogeneous patterns of conduct problems, based on the age of onset, developmental course, and related risk factors, from around age 3. Early childhood represents a potential window for intervention efforts aimed at changing the trajectory of early-starting conduct problems. Using the UK Millennium Cohort Study (n = 17,206 children), this study (a) identifies trajectories of conduct problems from ages 3 to 14 years and (b) assesses the cumulative and interactive effects of individual, family, and socioeconomic risk factors from 9 months to 14 years of age. The same factors were assessed across three domains: child (i.e., low verbal ability, hyperactivity/inattention, peer problems, emotional problems), family (i.e., single-parent families, parental poor physical and mental health, large family size), and socioeconomic (i.e., low family income, low parental education, unemployment, social housing). A cumulative risk score for the child, family, and socioeconomic domains at each age was calculated. It was then examined how the cumulative risk scores explain variation in the trajectories of conduct problems. Lastly, interactive effects among the different domains of cumulative risk were tested. Using group-based trajectory modeling, four distinct trajectories were found, including a 'low' problem group and three groups showing childhood-onset conduct problems: 'school-age onset'; 'early-onset, desisting'; and 'early-onset, persisting'. The 'low' group (57% of the sample) showed a low probability of conduct problems, close to zero, from 3 to 14 years. The 'early-onset, desisting' group (23% of the sample) demonstrated a moderate probability of CP in early childhood, with a decline from 3 to 5 years and a low probability thereafter. The 'early-onset, persisting' group (8%) followed a high probability of conduct problems, which declined from 11 years but was close to 70% at 14 years. In the 'school-age onset' group (12% of the sample), children showed a moderate probability of conduct problems at 3 and 5 years, with a sharp increase by 7 years, rising to 50% at 14 years. In terms of individual risk, all factors increased the likelihood of being in the childhood-onset groups compared to the 'low' group. For cumulative risk, the socioeconomic domain at 9 months and 3 years, the family domain at all ages except 14 years, and the child domain at all ages differentiated the childhood-onset groups from the 'low' group. Cumulative risk at 9 months and 3 years did not differentiate the 'school-age onset' group from the 'low' group. Significant interactions were found between the domains for the 'early-onset, desisting' group, suggesting that low levels of risk in one domain may buffer the effects of high risk in another domain. The implications of these findings for preventive interventions are highlighted.

Keywords: conduct problems, cumulative risk, developmental trajectories, early childhood, adolescence

Procedia PDF Downloads 220
1453 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, EKF, UKF, EKS, and the RTS smoother, are simulated on several trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The model probability under the different filters is then compared, and finally the effect of the noise variance on estimation is described with simulation results.
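
A compact constant-velocity Kalman filter for a one-dimensional tracking problem, which can be used to reproduce the kind of noise-variance sensitivity experiment described; the process- and measurement-noise levels are illustrative.

```python
import numpy as np

def kalman_track(z, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter for 1-D position measurements z.

    q is the process-noise intensity and r the measurement-noise variance;
    varying r shows the effect of noise variance on the estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process-noise covariance
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for zk in z:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```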

Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance

Procedia PDF Downloads 398
1452 A Mean–Variance–Skewness Portfolio Optimization Model

Authors: Kostas Metaxiotis

Abstract:

Portfolio optimization is one of the most important topics in finance. This paper proposes a mean-variance-skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean-variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem, in which the portfolio's expected return and skewness are maximized while the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model.
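
A sketch of the three objective functions that an NSGA-II-type solver would evaluate for each candidate weight vector; the solver itself and the FTSE-100 data handling are omitted, and the variable names are illustrative.

```python
import numpy as np

def portfolio_objectives(weights, returns):
    """Return (expected return, variance, skewness) of a portfolio.

    `returns` is a (T, n) matrix of asset returns and `weights` an n-vector
    summing to one. NSGA-II would maximize return and skewness while
    minimizing variance over such weight vectors.
    """
    port = returns @ weights              # time series of portfolio returns
    mean = port.mean()
    var = port.var(ddof=1)
    skew = np.mean((port - mean) ** 3) / var**1.5
    return mean, var, skew
```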

Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection

Procedia PDF Downloads 147
1451 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (additive white Gaussian noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the structure of the original signal. A MATLAB simulation and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represents the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with some further adjustments and improvements, it can be notably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 349
1450 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model

Authors: Jai Heui Kim, Sotheara Veng

Abstract:

This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using an asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations for the value function and the optimal strategy can be derived for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy recovers the value function not only up to the leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result is an extension of Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.

Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility

Procedia PDF Downloads 260
1449 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling

Authors: Pei-Chen Wu

Abstract:

Both existing cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignored that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between personality and depression. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. TSO modeling operates by partitioning state variance into time-invariant (trait) and time-variant (occasion) components. Within a TSO framework, it is possible to predict change in the part of the construct that really changes (i.e., the time-variant variance) while controlling for the trait variances. A total of 750 high school students were followed for 4 waves over six-month intervals. The baseline data (T1) were collected in senior high schools (students aged 14 to 15 years). Participants were given the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70~78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57~61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% of the total variance came from the autoregressive occasion factors. Additionally, results showed that, when controlling for initial symptom severity, the time-invariant components of all five dimensions of personality were predictive of change in depression (Extraversion: B = .32, Openness: B = -.21, Agreeableness: B = -.27, Conscientiousness: B = -.36, Neuroticism: B = .39). Because the five dimensions of personality share some variance, models in which all five dimensions of personality simultaneously predicted change in depression were also investigated. The time-invariant components of the five dimensions remained significant predictors of change in depression (Extraversion: B = .30, Openness: B = -.24, Agreeableness: B = -.28, Conscientiousness: B = -.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over 2 years. Individuals with a greater tendency toward Extraversion and Neuroticism have higher levels of depression; individuals with a greater tendency toward Openness, Agreeableness, and Conscientiousness have lower levels of depression.

Keywords: assessment, depression, personality, trait-state-occasion model

Procedia PDF Downloads 145
1448 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
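
Read as described, the algorithm amounts to an AdaGrad step applied to a loopless SVRG (L-SVRG) gradient estimator; a schematic NumPy version under that reading might look like the sketch below, where the oracle `grad_i`, the step size, and the refresh probability are placeholders.

```python
import numpy as np

def adalvr(grad_i, n, w0, steps=10_000, eta=0.1, p=None, eps=1e-8, seed=0):
    """Sketch: AdaGrad combined with the loopless SVRG (L-SVRG) estimator.

    grad_i(w, i) returns the gradient of the i-th component function at w;
    the objective is the average of the n component functions.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 / n if p is None else p              # reference refresh probability
    w, w_ref = w0.copy(), w0.copy()
    full_grad = np.mean([grad_i(w_ref, i) for i in range(n)], axis=0)
    G = np.zeros_like(w)                         # AdaGrad accumulator
    for _ in range(steps):
        i = rng.integers(n)
        # Loopless SVRG estimator: unbiased and variance-reduced.
        g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
        G += g * g
        w = w - eta * g / (np.sqrt(G) + eps)     # coordinate-wise AdaGrad step
        if rng.random() < p:                     # occasional reference refresh
            w_ref = w.copy()
            full_grad = np.mean([grad_i(w_ref, i) for i in range(n)], axis=0)
    return w
```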

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 15
1447 Characteristics of Cumulative Distribution Function of Grown Crack Size at Specified Fatigue Crack Propagation Life under Different Maximum Fatigue Loads in AZ31

Authors: Seon Soon Choi

Abstract:

Magnesium alloys have been widely used in structures such as automobiles. It is necessary to consider the probabilistic characteristics of a structural material because the fatigue behavior of a structure involves randomness and uncertainty. The purpose of this study is to find the characteristics of the cumulative distribution function (CDF) of the grown crack size at a specified fatigue crack propagation life and to investigate statistical crack propagation in magnesium alloys. The statistical fatigue data on the grown crack size are obtained through fatigue crack propagation (FCP) tests under different maximum fatigue load conditions conducted on replicated specimens of magnesium alloys. The 3-parameter Weibull distribution is used to find the CDF of the grown crack size. Under a larger maximum fatigue load, the CDF of the grown crack size has longer tails below the 10th percentile and above the 90th percentile. Fatigue failure occurs more easily as the tails of the CDF of the grown crack size become longer. Fatigue behavior under the larger maximum fatigue load condition shows more rapid crack propagation and failure.
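
A short sketch of fitting a three-parameter Weibull distribution to grown-crack-size data and evaluating its CDF; SciPy's weibull_min exposes shape, location, and scale, which together give the three parameters. The crack-size values below are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical grown crack sizes (mm) measured at a specified FCP life.
crack_sizes = np.array([5.1, 5.4, 5.6, 5.9, 6.0, 6.2, 6.5, 6.8, 7.1, 7.6])

# Fit the shape (Weibull modulus), location (threshold), and scale parameters.
shape, loc, scale = weibull_min.fit(crack_sizes)
cdf_at_7mm = weibull_min.cdf(7.0, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, loc={loc:.2f} mm, scale={scale:.2f} mm, "
      f"P(crack size <= 7 mm) = {cdf_at_7mm:.3f}")
```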

Keywords: cumulative distribution function, fatigue crack propagation, grown crack size, magnesium alloys, maximum fatigue load

Procedia PDF Downloads 256