Search results for: cumulative probabilities
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 553

553 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted in place of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The logistic model gives reasonably similar results to those obtained with the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Martingale residuals, which have been used to judge the goodness of fit of the additive model, are also shown to be useful for judging the goodness of fit of the logistic model.
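As a minimal illustration of the constraint issue the abstract refers to (the coefficient values and covariates below are hypothetical, not from the Salvador data), a logistic link keeps every fitted conditional probability inside (0, 1), whereas an additive (linear) predictor can escape that range:

```python
import math

def logistic_prob(beta, x):
    # linear predictor passed through the logistic link: always in (0, 1)
    eta = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    return 1.0 / (1.0 + math.exp(-eta))

def additive_prob(beta, x):
    # additive (linear) conditional "probability": not constrained to [0, 1]
    return beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
```

With beta = [-0.5, 2.0] and a covariate value of 3.0, the additive predictor returns 5.5, an invalid probability, while the logistic link stays strictly between 0 and 1.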

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 634
552 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

The logistic regression model is the model most often used to predict meteorological drought probabilities. When the dependent variable is extreme, however, the logistic model fails to adequately capture drought probabilities. In order to predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and the relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
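A sketch of a binary GLM with a GEV inverse link (the shape parameter, learning rate and synthetic data below are illustrative assumptions, not the paper's estimates; the fit uses crude numerical-gradient descent rather than a production optimizer):

```python
import math
import random

def gev_inverse_link(eta, xi=-0.3):
    # GEV CDF used as the inverse link: p = exp(-(1 + xi*eta)^(-1/xi));
    # xi is the shape parameter (fixed here purely for illustration)
    t = 1.0 + xi * eta
    if t <= 0.0:                      # outside the GEV support
        return 1.0 if xi < 0 else 0.0
    return math.exp(-t ** (-1.0 / xi))

def mean_nll(beta, X, y, xi=-0.3):
    # average Bernoulli negative log-likelihood under the GEV link
    total = 0.0
    for x, yi in zip(X, y):
        p = gev_inverse_link(beta[0] + beta[1] * x, xi)
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        total -= yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return total / len(y)

def fit_gev_regression(X, y, xi=-0.3, lr=0.1, steps=500, h=1e-5):
    # crude maximum likelihood: gradient descent on numerical gradients
    beta = [0.0, 0.0]
    for _ in range(steps):
        grad = []
        for j in range(2):
            up, dn = list(beta), list(beta)
            up[j] += h
            dn[j] -= h
            grad.append((mean_nll(up, X, y, xi) - mean_nll(dn, X, y, xi)) / (2 * h))
        beta = [b - lr * g for b, g in zip(beta, grad)]
    return beta
```

Replacing `gev_inverse_link` with the logistic CDF recovers ordinary logistic regression, which is the comparison the abstract describes.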

Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 199
551 Parameter Interactions in the Cumulative Prospect Theory: Fitting the Binary Choice Experiment Data

Authors: Elzbieta Babula, Juhyun Park

Abstract:

Tversky and Kahneman’s cumulative prospect theory assumes symmetric probability cumulation with regard to the reference point within decision weights. In theory, the model should be invariant under a change in the direction of probability cumulation. In the present study, this phenomenon is investigated by creating a reference model that allows the parameter interactions in cumulative prospect theory specifications to be verified. Utility and weighting functions are fitted simultaneously and parametrically to binary choice data from the experiment. The results show that the flexibility of the probability weighting function is the crucial characteristic that prevents parameter interactions when estimating cumulative prospect theory.
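A compact sketch of the ingredients discussed above, using the Tversky-Kahneman functional forms with their published 1992 parameter estimates (the prospects in the usage note are hypothetical):

```python
def weight(p, gamma=0.61):
    # Tversky-Kahneman probability weighting function for gains
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def value(x, alpha=0.88, lam=2.25):
    # S-shaped value function: concave for gains, steeper and convex for losses
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def cpt_value(outcomes, probs):
    # rank-dependent evaluation of a gains-only prospect: decision weights
    # are differences of the weighting function applied to probabilities
    # cumulated from the best outcome downward
    ranked = sorted(zip(outcomes, probs), key=lambda op: -op[0])
    total, cum = 0.0, 0.0
    for x, p in ranked:
        total += (weight(cum + p) - weight(cum)) * value(x)
        cum += p
    return total
```

Cumulating from the worst outcome upward instead would generally produce different decision weights unless the weighting function is flexible enough, which is the invariance question the abstract investigates.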

Keywords: binary choice experiment, cumulative prospect theory, decision weights, parameter interactions

Procedia PDF Downloads 214
550 A Study of Life Expectancy in an Urban Setup of North-Eastern India under Dynamic Consideration Incorporating Cause-Specific Mortality

Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia

Abstract:

Background: The period life table is entirely based on the assumption that the mortality patterns of the population existing in the given period will persist throughout their lives. However, it has been observed that mortality rates continue to decline. As such, if the rates of change of the probabilities of death are considered in a life table, we get a dynamic life table. Although mortality has been declining in all parts of India, one may be interested to know whether these declines have appeared more in an urban area of an underdeveloped region like North-Eastern India. So, an attempt has been made to study the mortality pattern and the life expectancy under a dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, there is a possibility that their different constituent probabilities will also change, and cardiovascular disease (CVD) is the leading cause of death in Guwahati. Therefore, an attempt has also been made to formulate the dynamic cause-specific death ratio and probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011 based on the rates of change of the probabilities of death over the previous 10 and 25 years (i.e., since 2001 and 1986), and to compute the corresponding dynamic cause-specific death ratio and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati. Data are taken from the Office of Birth and Death, Guwahati Municipal Corporation, for the years 1986, 2001 and 2011. The population-based data are taken from the 2001 and 2011 censuses (India); the population data for 1986 have been estimated. Also, the cause-of-death ratio and probabilities of death due to CVD are computed for the aforementioned years and then extended to the dynamic set-up for the year 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is found to be higher than the corresponding value in the period table by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the period of 10 (25) years. The life expectancies under dynamic consideration in all the other age groups are also seen to be higher than the usual life expectancies, which may be due to the gradual decline in the probabilities of death during 1986-2011. Further, a continuous decline has also been observed in the death ratio due to CVD, along with the cause-specific probabilities of death, for both sexes. As a consequence, the dynamic cause-of-death probability due to CVD is found to be lower than under the usual procedure. Conclusion: Since incorporating changing mortality rates in the period life table for Guwahati results in higher life expectancies and lower probabilities of death due to CVD, this would more plausibly bring out the real situation of mortality prevailing in the city.
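To make the dynamic idea concrete, here is a minimal sketch (not the actual Denton-Spencer formulas, and not the Guwahati data): each age x in the base year is reached x years in the future, so a sustained annual decline in death probabilities is applied cumulatively before the life expectancy is recomputed.

```python
def life_expectancy(qx):
    # simple period life table with radix 1; deaths assumed mid-interval
    survivors, person_years = 1.0, 0.0
    for q in qx:
        deaths = survivors * q
        person_years += survivors - deaths / 2.0
        survivors -= deaths
    return person_years

def dynamic_qx(qx_base, annual_decline):
    # age x is reached x years after the base year, so its death probability
    # is discounted by the observed annual rate of decline, compounded
    return [q * (1.0 - annual_decline) ** x for x, q in enumerate(qx_base)]
```

Whenever mortality declines, the dynamic table yields a higher life expectancy than the period table, mirroring the direction of the findings above.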

Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy

Procedia PDF Downloads 231
549 Optimal Continuous Scheduled Time for a Cumulative Damage System with Age-Dependent Imperfect Maintenance

Authors: Chin-Chih Chang

Abstract:

Many manufacturing systems suffer failures due to complex degradation processes and various environmental conditions such as random shocks. Consider an operating system that is subject to random shocks and works at random times on successive jobs. When successive jobs result in production losses and performance deterioration, it is better to perform maintenance or replacement at a planned time. A preventive replacement (PR) policy is presented that replaces the system at a continuous scheduled time T, before a failure occurs. In this policy, the failure characteristics of the system are modeled as follows. Each job causes a random amount of additive damage to the system, and the system fails when the cumulative damage exceeds a failure threshold. Suppose further that the deteriorating system suffers one of two types of shocks with age-dependent probabilities: a type-I (minor) shock is rectified by a minimal repair, whereas a type-II (catastrophic) shock causes the system to fail. A corrective replacement (CR) is performed immediately when the system fails. In summary, a generalized maintenance model for scheduling the replacement of an operating system is presented: PR is carried out at time T, whereas CR is carried out when a type-II shock occurs or the total damage exceeds the failure level. The main objective is to determine the optimal continuous scheduled time for preventive replacement by minimizing the mean cost rate function. The existence and uniqueness of the optimal replacement policy are derived analytically. The present model is a generalization of previous models, and the policy with preventive replacement outperforms the one without it.
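A Monte Carlo sketch of the policy (all distributions and cost values below are illustrative assumptions, not the paper's analytical model): by the renewal-reward theorem, the mean cost rate is the expected cycle cost divided by the expected cycle length, so the optimal T can be located by scanning.

```python
import random

def one_cycle(T, rng, K=10.0, cp=1.0, cf=5.0, job_rate=1.0, p2=0.1):
    # one renewal cycle: jobs arrive at exponential times, each adds an
    # exponential amount of damage, and each job is a catastrophic (type-II)
    # shock with probability p2; replace preventively at T, correctively at failure
    t, damage = 0.0, 0.0
    while True:
        t += rng.expovariate(job_rate)
        if t >= T:
            return cp, T                  # preventive replacement at cost cp
        damage += rng.expovariate(1.0)
        if damage > K or rng.random() < p2:
            return cf, t                  # corrective replacement at cost cf

def mean_cost_rate(T, n=20000, seed=1):
    # renewal-reward estimate: E[cycle cost] / E[cycle length]
    rng = random.Random(seed)
    cost = length = 0.0
    for _ in range(n):
        c, tau = one_cycle(T, rng)
        cost += c
        length += tau
    return cost / length
```

Under these illustrative parameters, the cost rate is high when T is very small (frequent cheap replacements) and when T is very large (failures dominate), with an interior optimum in between.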

Keywords: preventive replacement, working time, cumulative damage model, minimal repair, imperfect maintenance, optimization

Procedia PDF Downloads 361
548 Critical Appraisal of Different Drought Indices for Drought Prediction and Their Application in KBK Districts of Odisha

Authors: Bibhuti Bhusan Sahoo, Ramakar Jha

Abstract:

Mapping of extreme events (droughts) is one of the adaptation strategies for the consequences of increasing climatic inconsistency and climate alteration. There is no operational practice for forecasting drought; one suggestion is to update the mapping of drought-prone areas for developmental planning. Drought indices play a significant role in drought mitigation. Many scientists have worked on different statistical analyses of drought and other climatological hazards, and many researchers have studied droughts individually for different sub-divisions or for India as a whole, but very few have studied district-wise probabilities on a large scale. In the present study, district-wise drought probabilities over the KBK (Kalahandi-Balangir-Koraput) districts of Odisha, India, which are seriously prone to droughts, have been established using a hydrological drought index and a meteorological drought index along with remote sensing drought indices, to develop a multidirectional approach to drought mitigation. Moderate and severe drought probabilities for the KBK districts have been mapped, and regions belonging to different class intervals of drought probability have been demarcated. Such information would be a good tool for planning purposes and for input into modelling, from which more promising results can be achieved.
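One of the meteorological indices listed in the keywords, the SPI, can be sketched as follows (a method-of-moments gamma fit plus the Wilson-Hilferty normal approximation to the gamma CDF; the rainfall series in the usage is invented):

```python
from statistics import NormalDist, mean, variance

def spi(rainfall):
    # Standardized Precipitation Index sketch: fit a gamma distribution by
    # the method of moments, approximate its CDF (Wilson-Hilferty), then
    # map each observation to a standard normal quantile
    m, v = mean(rainfall), variance(rainfall)
    shape, scale = m * m / v, v / m
    nd = NormalDist()
    scores = []
    for x in rainfall:
        z = ((x / (shape * scale)) ** (1.0 / 3.0)
             - (1.0 - 1.0 / (9.0 * shape))) / (1.0 / (9.0 * shape)) ** 0.5
        cdf = min(max(nd.cdf(z), 1e-9), 1.0 - 1e-9)  # approximate gamma CDF
        scores.append(nd.inv_cdf(cdf))
    return scores
```

SPI values of -1.0 and below are commonly read as moderate drought and -1.5 and below as severe drought, which is one way class intervals such as those mapped above can be formed.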

Keywords: drought indices, KBK districts, proposed drought severity index, SPI

Procedia PDF Downloads 447
547 The Analysis of Personalized Low-Dose Computed Tomography Protocol Based on Cumulative Effective Radiation Dose and Cumulative Organ Dose for Patients with Breast Cancer with Regular Chest Computed Tomography Follow up

Authors: Okhee Woo

Abstract:

Purpose: The aim of this study is to evaluate the 2-year cumulative effective radiation dose and cumulative organ dose from regular follow-up computed tomography (CT) scans in patients with breast cancer, and to establish a personalized low-dose CT protocol. Methods and Materials: A retrospective study was performed on patients with breast cancer who were diagnosed and managed consistently on the basis of the routine breast cancer follow-up protocol between 2012-01 and 2016-06. Based on ICRP (International Commission on Radiological Protection) Publication 103, the cumulative effective radiation dose of each patient over the 2-year follow-up was analyzed using commercial radiation management software (Radimetrics, Bayer Healthcare). The personalized effective doses to each organ were analyzed in detail using the Monte Carlo simulation provided by the software. Results: A total of 3822 CT scans on 490 patients were evaluated (age: 52.32±10.69). The mean scan number for each patient was 7.8±4.54. Each patient was exposed to 95.54±63.24 mSv of radiation over 2 years. The cumulative CT radiation dose was significantly higher in patients with lymph node metastasis (p = 0.00). HER-2-positive patients were exposed to more radiation than estrogen- or progesterone-receptor-positive patients (p = 0.00). There was no difference in the cumulative effective radiation dose across age groups. Conclusion: Knowing how much radiation a patient has been exposed to is the starting point for managing radiation exposure in patients with long-term CT follow-up. A precise and personalized protocol, as well as iterative reconstruction, may reduce the hazard from unnecessary radiation exposure.

Keywords: computed tomography, breast cancer, effective radiation dose, cumulative organ dose

Procedia PDF Downloads 196
546 Calculating Collision Risk Exposures and Risk Probabilities at Container Terminals

Authors: Mohammad Ali Hasanzadeh, Thierry Vanelslander, Eddy Van De Voorde

Abstract:

Nowadays, maritime transport is a key element in international trade and the global supply chain, and economies of scale in transporting goods are one of the most attractive features of using ships. Without maritime transport, almost no globalization of economies can be imagined. Within maritime transport, ports are the interface between land and sea. Even though using ships helps cargo owners gain a competitive margin, an accident in port during loading, unloading, or even moving cargoes within the terminal can diminish that margin. Statistics show that, due to the high-speed nature of activities within ports, collisions are the most common type of accident. To mitigate such accidents, the appropriate risk exposures have to be defined and calculated; risk probabilities can then be determined for each class of accident, i.e., fatal, severe, moderate and minor. Such risk probabilities help managers assess the effectiveness of each collision risk control option. This research defined travelled distance as the main collision risk exposure in container terminals and, taking all related items into consideration, calculated it for the Shahid Rajae container terminals. Following this, collision risk probabilities were computed.
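A sketch of the exposure-based calculation described above (the accident counts and distances are invented for illustration; no Shahid Rajae figures are reproduced): rates per travelled kilometre by severity class, plus a Poisson-process conversion from a rate to the probability of at least one collision over a planned distance.

```python
import math

def risk_rates(accidents_by_severity, travelled_km):
    # collisions per travelled kilometre, broken down by severity class
    return {sev: n / travelled_km for sev, n in accidents_by_severity.items()}

def prob_at_least_one(rate_per_km, distance_km):
    # assuming collisions occur as a Poisson process over travelled distance
    return 1.0 - math.exp(-rate_per_km * distance_km)
```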

Keywords: collision accident, container terminal, maritime transport, risk exposure

Procedia PDF Downloads 384
545 Occupational Cumulative Effective Doses of Radiation Workers in Hamad Medical Corporation in Qatar

Authors: Omar Bobes, Abeer Al-Attar, Mohammad Hassan Kharita, Huda Al-Naemi

Abstract:

The number of radiological examinations has increased steadily in recent years. As a result, the risk of possible radiation-induced consequential damage also increases through continuous, lifelong, and growing exposure to ionizing radiation. Radiation dose monitoring in medicine has therefore become an essential element of medical practice. In this study, the occupational cumulative doses of radiation workers at Hamad Medical Corporation in Qatar were assessed over a period of five years. The number of monitored workers selected for this study was 555 (out of a total of 1250 monitored workers), all of whom had worked continuously, with no interruption, with ionizing radiation over the five years from 2015 to 2019. The aim of this work is to examine the occupational groups and the activities in which higher radiation exposure occurred, and at what order of magnitude. The most exposed group was the nuclear medicine technologist staff, with an average cumulative dose of 8.4 mSv. The highest individual cumulative dose was 9.8 mSv, recorded in the PET-CT technologist category.

Keywords: cumulative dose, effective dose, monitoring, occupational exposure, dosimetry

Procedia PDF Downloads 242
544 Exploring the Energy Model of Cumulative Grief

Authors: Masica Jordan Alston, Angela N. Bullock, Angela S. Henderson, Stephanie Strianse, Sade Dunn, Joseph Hackett, Alaysia Black Hackett, Marcus Mason

Abstract:

The Energy Model of Cumulative Grief was created in 2018. It builds on historic stage theories of grief and is additionally unique in its focus on cultural responsiveness. The model helps to train practitioners who work with clients dealing with grief and loss. This paper introduces the model and explores how it positively impacted a convenience sample of 140 practitioners and individuals experiencing grief and loss. Respondents participated in webinars provided by the National Grief and Loss Center of America (NGLCA). Participants in this cross-sectional study completed one of three Grief and Loss Surveys created by the Grief and Loss Centers of America. Data analysis was conducted via SPSS and Survey Hero to examine the survey results. Results indicate that the Energy Model of Cumulative Grief was an effective resource for participants in addressing grief and loss: the majority of participants found the webinars helpful and a conduit to higher levels of hope. The findings suggest that the model is effective in providing culturally responsive grief and loss resources to practitioners and clients, and that there are far-reaching implications in using technology to provide hope to those suffering from grief and loss worldwide.

Keywords: grief, loss, grief energy, grieving brain

Procedia PDF Downloads 82
543 Modeling Binomial Dependent Distribution of the Values: Synthesis Tables of Probabilities of Errors of the First and Second Kind of Biometrics-Neural Network Authentication System

Authors: B. S. Akhmetov, S. T. Akhmetova, D. N. Nadeyev, V. Yu. Yegorov, V. V. Smogoonov

Abstract:

We estimate the probabilities of errors of the first and second kind for nonideal biometrics-neural transducers with 256 outputs, and construct nomograms of the error probabilities for 'own' and 'alien' presentations based on the mathematical expectation and standard deviation of the normalized Hamming measures.
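A hedged sketch of the quantities involved (all distribution parameters below are invented): modeling the normalized Hamming measures of 'own' and 'alien' presentations as normal distributions, the errors of the first and second kind at a decision threshold follow directly from the two CDFs.

```python
from statistics import NormalDist

def error_probabilities(threshold, mu_own, sd_own, mu_alien, sd_alien):
    # normalized Hamming measure: small for 'own', large for 'alien'
    own = NormalDist(mu_own, sd_own)
    alien = NormalDist(mu_alien, sd_alien)
    p1 = 1.0 - own.cdf(threshold)    # first kind: genuine user rejected
    p2 = alien.cdf(threshold)        # second kind: impostor accepted
    return p1, p2
```

Sweeping the threshold traces the trade-off curve between the two error kinds, from which nomograms like those described can be read off.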

Keywords: modeling, errors, probability, biometrics, neural network, authentication

Procedia PDF Downloads 482
542 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values, with the table of critical values designed from simulations. The approach was compared with other techniques for the univariate case; it differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous work. At the moment, the extension to the 2-dimensional case is complete, allowing up to 5 parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
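A minimal sketch of the measure's core ingredient (the grid width and resolution are arbitrary choices here): the absolute difference between two normal CDFs, maximized over a common domain, which the paper's procedure then compares against simulated critical values.

```python
from statistics import NormalDist

def max_cdf_difference(mu1, sd1, mu2, sd2, n_grid=2001):
    # sup_x |F1(x) - F2(x)| approximated on a grid spanning both distributions
    lo = min(mu1 - 6.0 * sd1, mu2 - 6.0 * sd2)
    hi = max(mu1 + 6.0 * sd1, mu2 + 6.0 * sd2)
    f1, f2 = NormalDist(mu1, sd1), NormalDist(mu2, sd2)
    step = (hi - lo) / (n_grid - 1)
    return max(abs(f1.cdf(lo + i * step) - f2.cdf(lo + i * step))
               for i in range(n_grid))
```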

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 173
541 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in the univariate as well as the bivariate setup. In this paper, we introduce the notion of CPI of order α (alpha) and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on the proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 251
540 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network over a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) as a product of factors, each of which is easy to compute. More importantly, it is very useful to give an interpretation of each factor in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such a semantic interpretation is indeed interesting and relevant in the case of large Bayesian networks.
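For a toy chain A → B → C (the CPT numbers are invented), the factorization idea can be made concrete: summing A out of P(A)P(B|A) produces the factor P(B), which has exactly the marginal/conditional-probability interpretation the paper is after.

```python
def marginal_of_C(p_a, p_b_given_a, p_c_given_b):
    # P(C=c) = sum_b [ sum_a P(A=a) P(B=b|A=a) ] P(C=c|B=b)
    # the bracketed factor is itself a probability distribution: P(B)
    p_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}
    return {c: sum(p_b[b] * p_c_given_b[b][c] for b in p_b) for c in (0, 1)}
```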

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 528
539 Dividend Initiations and IPO Long-Run Performance

Authors: Nithi Sermsiriviboon, Somchai Supattarakul

Abstract:

Dividend initiations are an economically significant event with important implications for a firm’s future financial capacity. Given the market’s expectation of a consistent payout, managers of IPO firms must approach the initial dividend decision cautiously. We compare the long-run performance of IPO firms that initiated dividends with that of similarly matched non-payers. We found that firms which initiated dividends perform significantly better up to three years after the initiation date. Moreover, we measure investor reactions by the cumulative abnormal return over a 2-day window around the dividend announcement date. We find no statistically significant differences between the cumulative abnormal returns (CAR) of IPO firms and those of non-IPO firms, indicating that investors do not respond to dividend announcements of IPO firms any more than they do to dividend announcements of non-IPO firms.
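The event-study quantity used above can be sketched as follows (a market-adjusted abnormal-return model; the return series in the test are hypothetical):

```python
def cumulative_abnormal_return(stock_returns, market_returns, start, end):
    # abnormal return = stock return minus market return (market-adjusted
    # model); CAR sums it over the event window [start, end], inclusive,
    # e.g. the 2-day window around the dividend announcement date
    return sum(stock_returns[t] - market_returns[t] for t in range(start, end + 1))
```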

Keywords: dividend, initial public offerings, long-run performance, finance

Procedia PDF Downloads 235
538 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. For complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling, but their application to complex survival problems still needs improvement; in addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 88
537 Semi-Supervised Learning Using Pseudo F Measure

Authors: Mahesh Balan U, Rohith Srinivaas Mohanakrishnan, Venkat Subramanian

Abstract:

Positive and unlabeled (PU) learning has recently gained more attention in both academic and industry research literature because of its relevance to existing business problems. Yet there still seem to be challenges in validating the performance of PU learning, since the actual labels of unlabeled data points remain unknown, in contrast to binary classification, where the truth is known. In this study, we propose a novel PU learning technique based on the pseudo-F measure that addresses this research gap. In this approach, we train the PU model to discriminate the probability distributions of the positive and unlabeled points in the validation and spy data. The predicted probabilities of the PU model have a two-fold validation: (a) the predicted probabilities of reliable positives and predicted positives should come from the same distribution; (b) the predicted probabilities of predicted positives and predicted unlabeled points should come from different distributions. We experimented with this approach on a credit marketing case study on one of the world’s biggest fintech platforms, benchmarked its performance, and backtested it using historical data. This study contributes to the existing literature on semi-supervised learning.
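The two distributional checks in (a) and (b) can be operationalized with any two-sample distance; a common, assumption-light choice (not necessarily the authors') is the two-sample Kolmogorov-Smirnov statistic over the predicted probabilities:

```python
def ks_statistic(sample_a, sample_b):
    # two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    # two empirical CDFs, scanned at every observed value (ties advance both)
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    gap = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        gap = max(gap, abs(i / len(a) - j / len(b)))
    return gap
```

Check (a) then expects a small statistic between reliable positives and predicted positives, while check (b) expects a large one between predicted positives and predicted unlabeled points.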

Keywords: PU learning, semi-supervised learning, pseudo f measure, classification

Procedia PDF Downloads 234
536 Optimal Replacement Period for a One-Unit System with Double Repair Cost Limits

Authors: Min-Tsai Lai, Taqwa Hariguna

Abstract:

This paper presents a periodical replacement model for a system, considering the concepts of single and cumulative repair cost limits simultaneously. Failures are divided into two types: a minor failure can be corrected by minimal repair, while a serious failure breaks the system down completely. When a minor failure occurs, if the repair cost is less than a single repair cost limit L1 and the accumulated repair cost is less than a cumulative repair cost limit L2, then minimal repair is executed; otherwise, the system is preventively replaced. The system is also replaced at time T or at a serious failure. The optimal period T minimizing the long-run expected cost per unit time is verified to be finite and unique under some specific conditions.
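A simulation sketch of the decision rule (the failure-time and repair-cost distributions below are illustrative assumptions, not the paper's analytical setup):

```python
import random

def one_cycle(T, L1, L2, rng, minor_rate=1.0, serious_rate=0.1, mean_cost=1.0):
    # one cycle of the policy: minor and serious failures compete as
    # exponential clocks; at each minor failure a repair cost is drawn, and
    # minimal repair is done only while that cost stays under the single
    # limit L1 and the running total stays under the cumulative limit L2
    t, cum_cost = 0.0, 0.0
    while True:
        dt_minor = rng.expovariate(minor_rate)
        dt_serious = rng.expovariate(serious_rate)
        if t + min(dt_minor, dt_serious) >= T:
            return "scheduled", T            # planned replacement at T
        t += min(dt_minor, dt_serious)
        if dt_serious <= dt_minor:
            return "corrective", t           # serious failure: full breakdown
        cost = rng.expovariate(1.0 / mean_cost)
        if cost > L1 or cum_cost + cost > L2:
            return "preventive", t           # repair-cost limit exceeded
        cum_cost += cost                     # minimal repair, keep running
```

Averaging cycle costs and lengths over many simulated cycles gives the long-run expected cost per unit time whose minimizer in T the paper characterizes analytically.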

Keywords: repair-cost limit, cumulative repair-cost limit, minimal repair, periodical replacement policy

Procedia PDF Downloads 365
535 Some Statistical Properties of Residual Sea Level along the Coast of Vietnam

Authors: Doan Van Chinh, Bui Thi Kien Trinh

Abstract:

This paper outlines some statistical properties of the residual sea level (RSL) at six representative tidal stations located along the coast of Vietnam. It was found that the positive RSL varied on average between 9.82 and 19.96 cm, and the negative RSL between -16.62 and -9.02 cm. The maximum positive RSL varied on average between 102.8 and 265.5 cm, and the maximum negative RSL between -250.4 and -66.4 cm. The biggest positive RSL values appeared in the summer months and the biggest negative RSL values in the winter months. The cumulative frequency of RSL less than 50 cm was between 95 and 99% of the time, while the frequency of RSL higher than 100 cm accounted for between 0.01 and 0.2%. It was also found that the cumulative frequency of RSL durations of less than 24 hours was between 90 and 99%, while the frequency of durations longer than 72 hours was of the order of 0.1 to 1%.
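Cumulative frequencies of the kind quoted above are simple empirical-CDF computations; a sketch (the RSL series in the test is invented):

```python
def cumulative_frequency_percent(values, threshold):
    # percentage of observations whose magnitude is below the threshold
    return 100.0 * sum(1 for v in values if abs(v) < threshold) / len(values)
```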

Keywords: coast of Vietnam, residual sea level, residual water, surge, cumulative frequency

Procedia PDF Downloads 289
534 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function; the kernel and the bandwidth are its most important components. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator, based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
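A sketch of the estimator with the kernel replaced by a linear combination (here two Gaussian kernels of different scales with mixing weight a; the choice of component kernels is an illustrative assumption, not the paper's):

```python
from statistics import NormalDist

_PHI = NormalDist().cdf   # standard normal CDF

def kdfe(data, x, h, a=0.5):
    # F_hat(x) = (1/n) * sum_i W((x - X_i) / h), where the integrated
    # kernel W is a linear combination of two Gaussian integrated kernels
    def W(u):
        return a * _PHI(u) + (1.0 - a) * _PHI(2.0 * u)
    return sum(W((x - xi) / h) for xi in data) / len(data)
```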

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 580
533 The Influence of Gossip on the Absorption Probabilities in Moran Process

Authors: Jurica Hižak

Abstract:

Getting to know the agents, i.e., identifying the free riders in a population, can be considered one of the main challenges in establishing cooperation. An ordinary memory-one agent such as Tit-for-tat may learn “who is who” in the population through direct interactions: past experiences serve as a landmark indicating with whom to cooperate and against whom to retaliate in the next encounter. However, this kind of learning is risky and expensive. A cheaper and less painful way to detect free riders may be achieved by gossiping. For this reason, as part of this research, a special type of Tit-for-tat agent was designed: a “Gossip-Tit-for-tat” agent that can share data with other agents of its kind. The performances of both strategies, ordinary Tit-for-tat and Gossip-Tit-for-tat, against Always-defect have been compared in the finite-game framework of the Iterated Prisoner’s Dilemma via the Moran process. Agents were able to move in a random-walk fashion and were programmed to play Prisoner’s Dilemma each time they met. Moreover, at each step, one randomly selected individual was eliminated and one individual reproduced in accordance with the Moran process of selection, so the size of the population always remained the same. Agents were selected for reproduction via the roulette wheel rule, i.e., proportionally to the relative fitness of the strategy. The absorption probability was calculated once the population had been completely absorbed by cooperators, meaning that all states had been visited and all transition probabilities determined. It was shown that gossip increases the absorption probabilities and therefore enhances the evolution of cooperation in the population.
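The reported effect can be illustrated with the textbook Moran birth-death chain (this closed form assumes frequency-independent relative fitness r, a simplification of the spatial game above): higher effective fitness of cooperators, e.g. from cheap information obtained by gossip, raises the absorption (fixation) probability.

```python
def fixation_probability(N, i, r):
    # probability that i mutants with constant relative fitness r take over
    # a Moran population of size N (absorbing states: 0 and N mutants)
    if r == 1.0:
        return i / N                      # neutral drift
    return (1.0 - r ** -i) / (1.0 - r ** -N)
```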

Keywords: cooperation, gossip, indirect reciprocity, Moran process, prisoner’s dilemma, tit-for-tat

Procedia PDF Downloads 97
532 Loss Aversion Behavior in an Experimental Laboratory of Financial Investments

Authors: Jihene Jebeniani

Abstract:

We proposed an approach combining the techniques of experimental economics and the flexibility of discrete choice models in order to test loss aversion. Our main objective was to test the loss aversion of Cumulative Prospect Theory (CPT). We developed a laboratory experiment in the context of financial investments aimed at analyzing investors' attitudes towards risk. The study uses lotteries and is based on econometric modelling. The estimated model was an ordered probit.
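The loss-aversion component of CPT that such an experiment tests can be written down directly; a sketch using the Tversky-Kahneman (1992) functional form and their median parameter estimates, not the authors' estimated model:

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave over gains, convex over losses, and
    steeper for losses; lam > 1 is the loss-aversion coefficient, so a
    loss looms larger than an equal-sized gain."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)
```

With these reference parameters, a loss of 100 weighs roughly 2.25 times as much as a gain of 100, which is the asymmetry the lottery choices are designed to reveal.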

Keywords: risk aversion, behavioral finance, experimental economics, lotteries, cumulative prospect theory

Procedia PDF Downloads 470
531 Determinants of Probability Weighting and Probability Neglect: An Experimental Study of the Role of Emotions, Risk Perception, and Personality in Flood Insurance Demand

Authors: Peter J. Robinson, W. J. Wouter Botzen

Abstract:

Individuals often over-weight low probabilities and under-weight moderate to high probabilities; however, very low probabilities are either significantly over-weighted or neglected altogether. Little is known about the factors affecting probability weighting in Prospect Theory related to emotions specific to risk (anticipatory and anticipated emotions) and the threshold of concern, as well as personality traits like locus of control. This study provides these insights by examining factors that influence probability weighting in the context of flood insurance demand in an economic experiment. In particular, we focus on determinants of flood probability neglect to provide recommendations for improved risk management. In addition, results obtained using real incentives and no performance-based payments are compared in an experiment with high experimental outcomes. Based on data collected from 1,041 Dutch homeowners, we find that flood probability neglect is related to anticipated regret, worry and the threshold of concern. Moreover, locus of control and regret affect probabilistic pessimism. Nevertheless, we do not observe strong evidence that incentives influence flood probability neglect or probability weighting. The results show that low, moderate and high flood probabilities are under-weighted, which is related to framing in the flooding context and the degree of realism respondents attach to high-probability property damages. We suggest several policies to overcome psychological factors related to under-weighting flood probabilities and so improve flood preparations. These include policies that promote better risk communication to enhance insurance decisions for individuals with a high threshold of concern, and education and information provision to change the behaviour of internal-locus-of-control types as well as people who see insurance as an investment.
Multi-year flood insurance may also prevent short-sighted behaviour by people who tend to regret paying for insurance. Moreover, bundling low-probability/high-impact risks with more immediate risks may achieve an overall covered risk that is less likely to be judged as falling below thresholds of concern. These measures could aid the development of a flood insurance market in the Netherlands, for which we find there to be demand.
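The over-weighting/under-weighting pattern described above is conventionally captured by an inverse-S-shaped weighting function; a sketch with the Tversky-Kahneman form and an illustrative parameter, not the study's estimates:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting:
    w(p) = p^g / (p^g + (1 - p)^g)^(1/g).
    For g < 1 the curve is inverse-S-shaped: small probabilities are
    over-weighted and moderate-to-high probabilities under-weighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)
```

Probability neglect corresponds to the case where a small p is treated as zero rather than mapped through w(p), which is why the two behaviours call for different policy responses.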

Keywords: flood insurance demand, prospect theory, risk perceptions, risk preferences

Procedia PDF Downloads 273
530 Financial Market Reaction to Non-Financial Reports

Authors: Petra Dilling

Abstract:

This study examines the market reaction to the publication of integrated reports for a sample of 316 global companies for the reporting year 2018. Applying event study methodology, we find significant cumulative average abnormal returns (CAARs) after the publication date. To ensure robust estimation results, the Fama-French three-factor model is used, as well as a market-adjusted model, a CAPM and a Fama-French model taking GARCH effects into account. We find a significant positive CAAR after the publication day of the integrated report. Our results suggest that investors react to information provided in the integrated report and that they react to it differently than to the annual financial report. Furthermore, our cross-sectional analysis confirms that companies with a significant positive cumulative average abnormal return share certain characteristics. It was found that European companies have a higher likelihood of experiencing a stronger significant positive market reaction to the publication of their integrated reports.
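The CAAR computation behind such an event study can be sketched as follows (a market-model version on synthetic data; the Fama-French and GARCH-adjusted specifications the study also uses are omitted):

```python
import numpy as np

def caar(firm_returns, market, est_idx, evt_idx):
    """Market-model event study: fit alpha and beta per firm over the
    estimation window, compute abnormal returns over the event window,
    average across firms (AAR) and cumulate over event days (CAAR)."""
    ars = []
    for r in firm_returns:
        beta, alpha = np.polyfit(market[est_idx], r[est_idx], 1)
        ars.append(r[evt_idx] - (alpha + beta * market[evt_idx]))
    return np.cumsum(np.mean(ars, axis=0))

# synthetic illustration: a +2% abnormal jump on the publication day
rng = np.random.default_rng(7)
market = rng.normal(0.0, 0.01, 120)
firms = []
for _ in range(10):
    r = 0.0005 + 1.2 * market + rng.normal(0.0, 0.005, 120)
    r[100] += 0.02                       # event-day abnormal return
    firms.append(r)
est_idx, evt_idx = np.arange(100), np.arange(100, 110)
caar_path = caar(firms, market, est_idx, evt_idx)
```

The estimation window must end before the event window so the publication-day shock does not contaminate the fitted alpha and beta.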

Keywords: integrated report, event methodology, cumulative abnormal return, sustainability, CAPM

Procedia PDF Downloads 150
529 Trajectories of Conduct Problems and Cumulative Risk from Early Childhood to Adolescence

Authors: Leslie M. Gutman

Abstract:

Conduct problems (CP) represent a major dilemma, with wide-ranging and long-lasting individual and societal impacts. Children experience heterogeneous patterns of conduct problems, which differ in age of onset, developmental course and related risk factors from around age 3. Early childhood therefore represents a potential window for intervention efforts aimed at changing the trajectory of early-starting conduct problems. Using the UK Millennium Cohort Study (n = 17,206 children), this study (a) identifies trajectories of conduct problems from ages 3 to 14 years and (b) assesses the cumulative and interactive effects of individual, family and socioeconomic risk factors from ages 9 months to 14 years. Risk factors were assessed in three domains: child (i.e., low verbal ability, hyperactivity/inattention, peer problems, emotional problems), family (i.e., single-parent families, parental poor physical and mental health, large family size) and socioeconomic (i.e., low family income, low parental education, unemployment, social housing). A cumulative risk score was calculated for the child, family and socioeconomic domains at each age. It was then examined how the cumulative risk scores explain variation in the trajectories of conduct problems. Lastly, interactive effects among the different domains of cumulative risk were tested. Using group-based trajectory modeling, four distinct trajectories were found, including a ‘low’ problem group and three groups showing childhood-onset conduct problems: ‘school-age onset’; ‘early-onset, desisting’; and ‘early-onset, persisting’. The ‘low’ group (57% of the sample) showed a low probability of conduct problems, close to zero, from 3 to 14 years. The ‘early-onset, desisting’ group (23% of the sample) demonstrated a moderate probability of CP in early childhood, with a decline from 3 to 5 years and a low probability thereafter.
The ‘early-onset, persisting’ group (8%) followed a high probability of conduct problems, which declined from 11 years but was still close to 70% at 14 years. The ‘school-age onset’ group (12% of the sample) showed a moderate probability of conduct problems at 3 and 5 years, with a sharp increase by 7 years, rising to 50% at 14 years. In terms of individual risk, all factors increased the likelihood of being in the childhood-onset groups compared to the ‘low’ group. For cumulative risk, the socioeconomic domain at 9 months and 3 years, the family domain at all ages except 14 years, and the child domain at all ages differentiated the childhood-onset groups from the ‘low’ group. Cumulative risk at 9 months and 3 years did not differentiate between the ‘school-age onset’ group and the ‘low’ group. Significant interactions were found between the domains for the ‘early-onset, desisting’ group, suggesting that low levels of risk in one domain may buffer the effects of high risk in another domain. The implications of these findings for preventive interventions will be highlighted.
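The cumulative risk scoring described above reduces to counting which binary risk factors are present within a domain at a given age; a minimal sketch (the factor names follow the abstract, the data values are invented):

```python
def cumulative_risk(indicators):
    """Cumulative risk score for one domain at one age: the number of
    binary risk factors present (1 = present, 0 = absent)."""
    return sum(indicators.values())

# child domain at one sweep, using the abstract's four child-domain factors
child = {"low_verbal_ability": 1, "hyperactivity_inattention": 0,
         "peer_problems": 1, "emotional_problems": 0}
child_score = cumulative_risk(child)   # 2 of 4 risks present
```

The child, family and socioeconomic scores are computed the same way at each age and then used as predictors of trajectory-group membership.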

Keywords: conduct problems, cumulative risk, developmental trajectories, early childhood, adolescence

Procedia PDF Downloads 249
528 Decision-Making Under Uncertainty in Obsessive-Compulsive Disorder

Authors: Helen Pushkarskaya, David Tolin, Lital Ruderman, Ariel Kirshenbaum, J. MacLaren Kelly, Christopher Pittenger, Ifat Levy

Abstract:

Obsessive-Compulsive Disorder (OCD) produces profound morbidity. Difficulties with decision making and intolerance of uncertainty are prominent clinical features of OCD. The nature and etiology of these deficits are poorly understood. We used a well-validated choice task, grounded in behavioral economic theory, to investigate differences in valuation and value-based choice during decision making under uncertainty in 20 unmedicated participants with OCD and 20 matched healthy controls. Participants’ choices were used to assess individual decision-making characteristics. Compared to controls, individuals with OCD were less consistent in their choices and less able to identify options that were unambiguously preferable. These differences correlated with symptom severity. OCD participants did not differ from controls in how they valued uncertain options when outcome probabilities were known (risk) but were more likely than controls to avoid uncertain options when these probabilities were imprecisely specified (ambiguity). These results suggest that the underlying neural mechanisms of valuation and value-based choices during decision-making are abnormal in OCD. Individuals with OCD show elevated intolerance of uncertainty, but only when outcome probabilities are themselves uncertain. Future research focused on the neural valuation network, which is implicated in value-based computations, may provide new neurocognitive insights into the pathophysiology of OCD. Deficits in decision-making processes may represent a target for therapeutic intervention.

Keywords: obsessive compulsive disorder, decision-making, uncertainty intolerance, risk aversion, ambiguity aversion, valuation

Procedia PDF Downloads 614
527 Optical Emission Studies of Laser Produced Lead Plasma: Measurements of Transition Probabilities of the 6P7S → 6P2 Transition Array

Authors: Javed Iqbal, R. Ahmed, M. A. Baig

Abstract:

We present new data on the optical emission spectra of laser-produced lead plasma using a pulsed Nd:YAG laser at 1064 nm (pulse energy 400 mJ, pulse width 5 ns, 10 Hz repetition rate) in conjunction with a set of miniature spectrometers covering the spectral range from 200 nm to 720 nm. Well-resolved structure due to the 6p7s → 6p2 transition array of neutral lead and a few multiplets of singly ionized lead have been observed. The electron temperatures have been calculated in the range (9000 - 10800) ± 500 K using four methods: the two-line ratio, the Boltzmann plot, the Saha-Boltzmann plot and the Morrata method, whereas the electron number densities have been determined in the range (2.0 – 8.0) ± 0.6 × 10¹⁶ cm⁻³ using the Stark-broadened line profiles of neutral lead lines, singly ionized lead lines and the hydrogen Hα line. The full width at half maximum (FWHM) of a number of neutral and singly ionized lead lines has been extracted by Lorentzian fits to the experimentally observed line profiles. Furthermore, branching fractions have been deduced for eleven lines of the 6p7s → 6p2 transition array in lead, and the absolute values of the transition probabilities have been calculated by combining the experimental branching fractions with the lifetimes of the excited levels. The new results are compared with the existing data, showing good agreement.
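Of the four temperature methods listed, the Boltzmann plot is the simplest to sketch: for lines of one species, ln(Iλ/gA) is linear in the upper-level energy with slope -1/(k_B T_e). A minimal illustration on synthetic line data (the line parameters are invented, not the paper's measurements):

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def boltzmann_temperature(intensity, wavelength_nm, g, A, E_upper_eV):
    """Fit ln(I * lambda / (g * A)) against the upper-level energy;
    the slope of the fitted line is -1/(k_B * T_e)."""
    y = np.log(intensity * wavelength_nm / (g * A))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# synthetic LTE line intensities generated at T_e = 10000 K
E = np.array([4.0, 4.4, 4.9, 5.3])            # upper-level energies, eV
g = np.array([3.0, 5.0, 1.0, 3.0])            # statistical weights
A = np.array([1e8, 2e7, 5e7, 8e7])            # transition probabilities, s^-1
lam = np.array([405.8, 364.0, 368.3, 500.5])  # wavelengths, nm (illustrative)
I = g * A / lam * np.exp(-E / (K_B_EV * 1e4))
T_e = boltzmann_temperature(I, lam, g, A, E)
```

Conversely, once T_e is known from such a plot, measured relative intensities yield the branching fractions from which absolute transition probabilities follow.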

Keywords: LIBS, plasma parameters, transition probabilities, branching fractions, Stark width

Procedia PDF Downloads 283
526 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relations services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts churn probabilities at 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on the changes in bill amounts.
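The approach can be sketched as a minimal one-hidden-layer network trained on synthetic expenditure features; the data, architecture and hyperparameters below are illustrative, not the study's model:

```python
import numpy as np

def train_churn_ann(X, y, hidden=8, lr=0.5, epochs=3000, seed=0):
    """Minimal one-hidden-layer network (tanh hidden units, sigmoid output)
    trained by batch gradient descent on a cross-entropy loss; returns a
    function mapping feature rows to churn probabilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        p = sig(H @ W2 + b2)
        err = p - y                          # dLoss/dlogit for cross-entropy
        gW2 = H.T @ err / n; gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)
        gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: sig(np.tanh(Xn @ W1 + b1) @ W2 + b2)

# synthetic demo: flag churn when spending declined over three months
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (400, 3))        # three months of (scaled) spend
y = (X[:, 2] < X[:, 0]).astype(float)      # churn label: spend went down
predict = train_churn_ann(X, y)
accuracy = float(((predict(X) > 0.5) == (y == 1)).mean())
```

The abstract's finding that features describing changes in bill amounts help is visible even here: the synthetic label depends only on the difference between months, not the levels.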

Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks

Procedia PDF Downloads 143
525 Characteristics of Cumulative Distribution Function of Grown Crack Size at Specified Fatigue Crack Propagation Life under Different Maximum Fatigue Loads in AZ31

Authors: Seon Soon Choi

Abstract:

Magnesium alloys are widely used in structures such as automobiles. It is necessary to consider the probabilistic characteristics of a structural material because the fatigue behavior of a structure exhibits randomness and uncertainty. The purpose of this study is to find the characteristics of the cumulative distribution function (CDF) of the grown crack size at a specified fatigue crack propagation life and to investigate statistical crack propagation in magnesium alloys. The statistical fatigue data on grown crack size are obtained through fatigue crack propagation (FCP) tests under different maximum fatigue load conditions conducted on replicated specimens of magnesium alloys. The 3-parameter Weibull distribution is used to find the CDF of grown crack size. The CDF of grown crack size under the larger maximum fatigue load has longer tails below the 10th percentile and above the 90th percentile. Fatigue failure occurs more readily as the tails of the CDF of grown crack size lengthen. The fatigue behavior under the larger maximum fatigue load condition shows a more rapid propagation and failure mode.
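The 3-parameter Weibull CDF used for the grown crack size has a closed form; a minimal sketch (parameter values illustrative, not the fitted estimates):

```python
import math

def weibull3_cdf(x, shape, scale, location):
    """Three-parameter Weibull CDF:
    F(x) = 1 - exp(-((x - location) / scale) ** shape) for x >= location,
    and 0 below the location parameter (the minimum crack size)."""
    if x <= location:
        return 0.0
    return 1.0 - math.exp(-(((x - location) / scale) ** shape))
```

The location parameter shifts the support so that no crack size below it has positive probability, while the shape parameter controls how heavy the tails are, which is the feature the abstract compares across maximum fatigue loads.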

Keywords: cumulative distribution function, fatigue crack propagation, grown crack size, magnesium alloys, maximum fatigue load

Procedia PDF Downloads 287
524 Quality of the Ruin Probabilities Approximation Using the Regenerative Processes Approach with Respect to Large Claims

Authors: Safia Hocine, Djamil Aïssani

Abstract:

Risk models, recently studied in the literature, are becoming increasingly complex, and it is rare to find explicit analytical relations for calculating the ruin probability. Indeed, the stability issue occurs naturally in ruin theory, since the parameters of a risk model cannot be estimated without uncertainty. As in most cases there are no explicit formulas for the ruin probability, there is interest in obtaining explicit stability bounds for these probabilities in different risk models. In this paper, we consider the stability bounds of the univariate classical risk model established using the regenerative processes approach. By adopting an algorithmic approach, we implement this approximation and numerically determine the bounds of the ruin probability in the case of large claims (heavy-tailed distributions).
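Absent explicit formulas, ruin probabilities in the classical risk model are commonly approximated numerically; a crude Monte Carlo sketch with Pareto (heavy-tailed) claims follows (parameters illustrative; this finite-horizon stand-in is not the paper's regenerative-process bound):

```python
import random

def ruin_probability(u, c, lam, claim, horizon, trials=500, seed=1):
    """Crude Monte Carlo for the classical (Cramer-Lundberg) risk model:
    surplus u + c*t minus a compound Poisson claim process with rate lam;
    count the fraction of paths whose surplus goes negative before the
    horizon."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        t, surplus = 0.0, float(u)
        while True:
            dt = rng.expovariate(lam)      # time to the next claim
            if t + dt > horizon:
                break
            t += dt
            surplus += c * dt - claim(rng)
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / trials

# Pareto claims with tail index 2.5 (mean 5/3): a standard large-claim model
pareto = lambda rng: (1.0 - rng.random()) ** (-1.0 / 2.5)
p0 = ruin_probability(0.0, 2.0, 1.0, pareto, horizon=100.0)
p20 = ruin_probability(20.0, 2.0, 1.0, pareto, horizon=100.0)
```

With premium rate c exceeding the expected claim outflow (positive safety loading), the ruin probability decreases in the initial capital u, but heavy tails make the decay much slower than in the light-tailed case.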

Keywords: heavy-tailed distribution, large claims, regenerative process, risk model, ruin probability, stability

Procedia PDF Downloads 362