Search results for: elaboration likelihood model theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20705

20705 A Framework for Consumer Selection on Travel Destinations

Authors: J. Rhodes, V. Cheng, P. Lok

Abstract:

The aim of this study is to develop a parsimonious model that explains the effect of different stimuli on a tourist’s intention to visit a new destination. The model includes destination trust and interest as mediating variables. The model was tested using two different types of stimuli, and both studies empirically supported it. Furthermore, the first study revealed that advertising has a stronger effect than positive online reviews, and the second study found that, in this context, the peripheral route of the elaboration likelihood model exerts a stronger influence than the central route.

Keywords: advertising, electronic word-of-mouth, elaboration likelihood model, intention to visit, trust

Procedia PDF Downloads 459
20704 An Elaboration Likelihood Model to Evaluate Consumer Behavior on Facebook Marketplace: Trust on Seller as a Moderator

Authors: Sharmistha Chowdhury, Shuva Chowdhury

Abstract:

Buying and selling new as well as second-hand goods such as tools, furniture, household items, electronics, clothing, baby items, vehicles, and hobby gear through the Facebook Marketplace has become a new paradigm for c2c sellers. This phenomenon encourages and empowers decentralised, home-based sellers. This study adopts the Elaboration Likelihood Model (ELM) to explain consumer behaviour on the Facebook Marketplace (FM). ELM suggests that consumers process information through the central and peripheral routes, which eventually shape their attitudes towards posts. The central route focuses on information quality, and the peripheral route focuses on cues. Sellers’ FM posts usually include product features, price, condition, pictures, and pick-up location. This study uses information relevance and accuracy as central-route factors. A post’s attractiveness represents a cue and creates positive or negative associations with the product; a post with striking pictures is more attractive, so post aesthetics is used as the peripheral-route factor. People influenced via the central or peripheral route form an attitude that manifests in two outcomes: response and purchase intention. People respond to FM posts by saving, sharing, and chatting, while a favourable attitude towards the product is reflected in higher purchase intention. This study proposes trust in the seller as a moderator to test the strength of its influence on consumer attitudes and behaviour; trust is assessed by whether the seller holds a badge. A questionnaire will be developed and distributed among a random group of FM sellers listing vehicles on the platform, the product chosen for this study. A vehicle is a high-value purchase that requires consumers to form their attitude deliberately, without impulsiveness, making it a suitable choice for testing the strength of consumers’ attitudes and behaviour. The findings of the study add to the elaboration likelihood model and online second-hand marketplace literature.
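
For readers who want to see how such a moderated model might be estimated, the following is a minimal sketch in Python, assuming a simple moderated regression fitted by ordinary least squares; the variable names and simulated data are illustrative and are not the authors' instrument or measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "relevance": rng.normal(4, 1, n),     # central route: information relevance
    "accuracy": rng.normal(4, 1, n),      # central route: information accuracy
    "aesthetics": rng.normal(3.5, 1, n),  # peripheral route: post aesthetics
    "badge": rng.integers(0, 2, n),       # trust proxy: seller badge (1) or not (0)
})
# Simulated attitude with a stronger central-route effect for badged sellers
df["attitude"] = (0.4 * df["relevance"] + 0.3 * df["accuracy"]
                  + 0.2 * df["aesthetics"]
                  + 0.3 * df["badge"] * df["relevance"]
                  + rng.normal(0, 1, n))

# Interaction terms with the badge dummy test the moderating role of trust
model = smf.ols("attitude ~ (relevance + accuracy + aesthetics) * badge",
                data=df).fit()
print(model.summary())
```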

Keywords: consumer behaviour, elaboration likelihood model, facebook marketplace, c2c marketing

Procedia PDF Downloads 139
20703 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; such subjects are said to be cured. When this feature of survival data is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled by the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we show the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtain the explicit form of the variance estimator, which involves an implicit function in the profile likelihood. We also show that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution is to express the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (the smcure package) and from the profile likelihood score function (our approach) give similar and comparable results. The numerical performance of the proposed method is also demonstrated using the melanoma data from the smcure R package, and we compare the results with the output obtained from the smcure package.
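
As a rough illustration of the mixture cure structure described above, here is a minimal sketch that replaces the semiparametric Cox PH latency with a simple exponential latency and fits the model by direct maximum likelihood rather than the paper's profile-likelihood/EM approach; all data and parameter values are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
p_uncured = expit(0.5 + 1.0 * x)               # P(uncured | x), logistic incidence
uncured = rng.random(n) < p_uncured
event_time = rng.exponential(1 / 0.8, size=n)  # latency for uncured subjects
cens_time = rng.exponential(2.0, size=n)
t = np.where(uncured, np.minimum(event_time, cens_time), cens_time)
d = np.where(uncured, (event_time <= cens_time).astype(int), 0)  # 1 = event observed

def negloglik(theta):
    b0, b1, log_lam = theta
    pi = expit(b0 + b1 * x)          # probability of being uncured
    lam = np.exp(log_lam)
    f = lam * np.exp(-lam * t)       # exponential latency density
    S = np.exp(-lam * t)             # exponential latency survival
    # event: uncured and failed; censored: cured, or uncured but still surviving
    lik = np.where(d == 1, pi * f, (1 - pi) + pi * S)
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
print("estimates (b0, b1, log lambda):", fit.x)
# Rough standard errors from the inverse Hessian approximation returned by BFGS
print("approx. std. errors:", np.sqrt(np.diag(fit.hess_inv)))
```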

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 146
20702 Laser Powder Bed Fusion Awareness for Engineering Students in France and Qatar

Authors: Hiba Naccache, Rima Hleiss

Abstract:

Additive manufacturing (AM), or 3D printing, is one of the pillars of Industry 4.0. Compared to traditional manufacturing, AM provides a prototype before production in order to optimize the design and avoid excess inventory, and it uses only the strictly necessary material, which can be recyclable, thereby favouring local production and saving money, time and resources. Different types of AM exist, and the technology has a broad range of applications across several industries, such as aerospace, automotive, medicine and education. Laser Powder Bed Fusion (LPBF) is a metal AM technique that uses a laser to melt metal powder, layer by layer, to build a three-dimensional (3D) object. In Industry 4.0, and in line with Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Consumption and Production) of the Sustainable Development Goals of the UN 2030 Agenda, AM manufacturers are committed to minimizing environmental impact through sustainable production. LPBF has several environmental advantages, such as reduced waste production, lower energy consumption, and greater flexibility in creating lightweight components with complex geometries. However, LPBF also has environmental drawbacks, including energy consumption, gas consumption and emissions. It is critical to recognize the environmental impacts of LPBF in order to mitigate them. To increase awareness and promote sustainable practices regarding LPBF, the researchers apply the Elaboration Likelihood Model (ELM), under which people from multiple universities in France and Qatar process information in two ways: peripherally and centrally. Peripheral campaigns use superficial cues to attract attention, while central campaigns provide clear and concise information. The authors created a seminar, including a video showing LPBF production, and a website with educational resources. Data were collected using a questionnaire administered before and after the seminar to measure public attitudes and awareness. The results reflected a marked shift in awareness of LPBF and its impact on the environment. To the best of our knowledge, no similar research exists, so this study adds to the literature on the sustainability of the LPBF production technique.

Keywords: additive manufacturing, laser powder bed fusion, elaboration likelihood model theory, sustainable development goals, education-awareness, France, Qatar, specific energy consumption, environmental impact, lightweight components

Procedia PDF Downloads 90
20701 The Effects of Negative Electronic Word-of-Mouth and Webcare on Thai Online Consumer Behavior

Authors: Pongsatorn Tantrabundit, Lersak Phothong, Ong-art Chanprasitchai

Abstract:

The emergence of the Internet has extended traditional Word-of-Mouth (WOM) into a new form called Electronic Word-of-Mouth (eWOM). Unlike traditional WOM, eWOM can present information in various ways by combining different components, and each eWOM component generates different effects on online consumer behavior. This research investigates the effects of webcare (responding messages) from product/service providers on negative eWOM for two types of products (search and experience). The proposed conceptual model was developed by combining the stages of the consumer decision-making process, the theory of reasoned action (TRA), the theory of planned behavior (TPB), the technology acceptance model (TAM), information integration theory and the elaboration likelihood model. The analysis techniques used in this study included multivariate analysis of variance (MANOVA) and multiple regression analysis. The results suggest that webcare slightly increases Thai online consumers’ perceptions of eWOM trustworthiness, information diagnosticity and quality. For negative eWOM, we also found that perceived eWOM trustworthiness, diagnosticity and quality have a positive relationship with eWOM influence, whereas perceived valence has a negative relationship with eWOM influence among Thai online consumers.

Keywords: consumer behavior, electronic word-of-mouth, online review, online word-of-mouth, Thai online consumer, webcare

Procedia PDF Downloads 206
20700 Corpus-Based Model of Key Concepts Selection for the Master English Language Course "Government Relations"

Authors: Elena Pozdnyakova

Abstract:

“Government Relations” (GR) is a field of knowledge presently taught at the majority of universities around the globe. English, as the default language, can become the language of teaching, since the issues discussed are both global and national in character. However, for this field the key concepts and their English word representations often do not coincide with those in other languages. International master’s degree students abroad, as well as students taught the course in English at their national universities, face difficulties in correctly conceptualizing GR terminology within the British and American academic traditions. The study was carried out during the elaboration of the GR English language course (pilot research: 2013-2015) at Moscow State Institute of International Relations (University), Russian Federation. Within this period, English language instructors designed and elaborated a three-semester GR course. Methodologically, the course design was based on the elaboration model, with special focus on the conceptual elaboration sequence and the theoretical elaboration sequence. The course designers faced difficulties with concept selection and the theoretical elaboration sequence. To improve the results and eliminate the problems with concept selection, a new, corpus-based approach was worked out. The computer-based tool WordSmith 6.0 was used to build a model of key concept selection. The corpus of GR English texts consisted of 1 million words (the study corpus). The approach was based on measuring effect size, i.e. the percent difference of the frequency of a word in the study corpus compared to that in the reference corpus. The results obtained showed significant improvement in the process of concept selection. The corpus-based model also facilitated the theoretical elaboration of teaching materials.
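
A minimal sketch of the effect-size computation described above (the percent difference of normalized word frequencies between the study corpus and a reference corpus) is given below; the word counts, corpus sizes and terms are purely illustrative.

```python
def percent_diff(freq_study, size_study, freq_ref, size_ref, per=1_000_000):
    """%DIFF of normalized frequencies (per million words by default)."""
    nf_study = freq_study * per / size_study
    nf_ref = freq_ref * per / size_ref
    return 100.0 * (nf_study - nf_ref) / nf_ref

study_counts = {"lobbying": 420, "advocacy": 310, "stakeholder": 510}
study_size = 1_000_000                    # the 1-million-word GR study corpus
ref_counts = {"lobbying": 95, "advocacy": 180, "stakeholder": 240}
ref_size = 100_000_000                    # a large general reference corpus

# Rank candidate key concepts by how much more frequent they are in the study corpus
ranked = sorted(
    ((w, percent_diff(study_counts[w], study_size, ref_counts[w], ref_size))
     for w in study_counts),
    key=lambda item: item[1], reverse=True)
for word, diff in ranked:
    print(f"{word}: {diff:+.1f}% difference vs. reference corpus")
```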

Keywords: corpus-based study, English as the default language, key concepts, measuring effect size, model of key concept selection

Procedia PDF Downloads 306
20699 Negative Self-Awareness and Its Effect on Crime

Authors: Guinevere Servis

Abstract:

This paper hypothesizes that withdrawal from positive self-awareness, together with increased counterfactual thinking and self-handicapping, can provide ample justification for an individual before, during and/or after committing a crime. Understanding who someone is, their perspective on the world, and why they think the way they do is key to therapy aimed at preventing recidivism. Developing habits of escaping self-awareness, through self-handicapping and counterfactual ideologies, may supply the thinking patterns needed to decide that disobeying the law is an act worth pursuing. An increase in self-awareness is hypothesized to decrease the likelihood of recidivism, while ways of thinking that withdraw from self-awareness can increase it. Especially for those who have been disadvantaged in life and have disobeyed the law, self-handicapping and counterfactual thinking can also help to justify their wrongdoing. Understanding how offenders view their disadvantages in the world, and understanding their thinking patterns, is hypothesized to clarify the full scope of why a crime was committed and thus to reduce the likelihood of recidivism. Therapy that increases prisoners’ self-awareness of both thought and action can lead to a healthier, happier life and reduce the likelihood of reoffending. By discussing the terms associated with self-awareness theory and related psychological topics such as self-handicapping and counterfactual thinking, this paper argues that actions toward increasing positive self-awareness can help decrease the likelihood of recidivism; conversely, it hypothesizes that ways of thinking that withdraw one from self-awareness, through counterfactual thinking and self-handicapping, inherently increase the likelihood of recidivism. Evaluating these findings to better understand the changes needed in correctional institutions is fundamental to reducing crime, benefiting the offender, the victim(s) and their families, and the state.

Keywords: crime, self-awareness theory, correctional institutions, self-regulate, counterfactual thinking, recidivism

Procedia PDF Downloads 102
20698 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution

Authors: Al Omari Mohammed Ahmed

Abstract:

This paper focuses on maximum likelihood estimation with covariates, where the covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, the shape parameter, the survival function and the hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to assess the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
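
The following is a minimal sketch, under simplified assumptions, of how such a right-censored Weibull regression can be fitted by maximum likelihood: the scale parameter depends on a single covariate through a log link and the shape parameter is common. The data are simulated and the setup is illustrative rather than the authors' simulation design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=n)
shape_true, b0_true, b1_true = 1.5, 0.5, 0.8
scale = np.exp(b0_true + b1_true * x)
t_event = scale * rng.weibull(shape_true, size=n)
t_cens = rng.exponential(np.mean(scale) * 1.5, size=n)
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)   # 1 = event observed, 0 = right censored

def negloglik(theta):
    log_k, b0, b1 = theta
    k = np.exp(log_k)                       # Weibull shape
    lam = np.exp(b0 + b1 * x)               # Weibull scale via log link
    z = t / lam
    log_f = np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k   # log density
    log_S = -z**k                                                  # log survival
    return -np.sum(delta * log_f + (1 - delta) * log_S)

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
k_hat = np.exp(fit.x[0])
print("shape:", k_hat, "coefficients:", fit.x[1:])

# Estimated survival and hazard at time t0 for a subject with covariate x0
t0, x0 = 1.0, 0.0
lam0 = np.exp(fit.x[1] + fit.x[2] * x0)
print("S(t0):", np.exp(-(t0 / lam0) ** k_hat),
      "h(t0):", (k_hat / lam0) * (t0 / lam0) ** (k_hat - 1))
```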

Keywords: weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring

Procedia PDF Downloads 441
20697 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data

Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou

Abstract:

In this paper, we analyze maximum precipitation data recorded during a particular period of time at different stations in the Global Historical Climatology Network of the USA. One important point is that some stations are shut down on certain days for one reason or another, so the maximum values are recorded by excluding those readings. It is assumed that the number of operating stations follows a zero-truncated Poisson distribution and that the daily precipitation follows a lognormal distribution; we call the resulting model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters and can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm, and approximate maximum likelihood estimators are also derived. The associated confidence intervals can be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When the precipitation data set is analyzed with the proposed model, it provides a better fit than some of the existing models.
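
A minimal sketch of the model is given below: if F and f are the lognormal CDF and density, the block maximum has the closed-form density λ f(y) exp(λF(y)) / (exp(λ) − 1). The sketch fits the three parameters by direct numerical maximum likelihood on simulated data; the paper's EM algorithm and approximate MLEs are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
lam_true, mu_true, sigma_true = 5.0, 1.0, 0.5

def simulate(n):
    """Maximum of M lognormal amounts, M drawn from a zero-truncated Poisson."""
    y = np.empty(n)
    for i in range(n):
        m = 0
        while m == 0:
            m = rng.poisson(lam_true)
        y[i] = np.max(rng.lognormal(mu_true, sigma_true, size=m))
    return y

y = simulate(1000)

def negloglik(theta):
    log_lam, mu, log_sigma = theta
    lam, sigma = np.exp(log_lam), np.exp(log_sigma)
    z = (np.log(y) - mu) / sigma
    F = norm.cdf(z)                                 # lognormal CDF at y
    log_f = norm.logpdf(z) - np.log(sigma * y)      # lognormal log-density at y
    # log density of the maximum: log lam + log f(y) + lam*F(y) - log(exp(lam)-1)
    log_pdf = np.log(lam) + log_f + lam * F - np.log(np.expm1(lam))
    return -np.sum(log_pdf)

fit = minimize(negloglik, x0=np.array([1.0, 0.0, 0.0]), method="Nelder-Mead")
lam_hat, mu_hat, sigma_hat = np.exp(fit.x[0]), fit.x[1], np.exp(fit.x[2])
print("lambda, mu, sigma:", lam_hat, mu_hat, sigma_hat)
```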

Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution

Procedia PDF Downloads 109
20696 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many settings frequently have a hierarchical or nested structure, and multilevel modelling is a modern approach to handling such data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset, so checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to its use, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all combinations of MQL. The power of the test was adequate for most combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 143
20695 The Grand Unified Theory of Everything as a Generalization to the Standard Model Called as the General Standard Model

Authors: Amir Deljoo

Abstract:

The endeavor to comprehend existence has been a center of human thought across different disciplines, and in physics it now takes the form of the theory of everything. Here, after a brief review of the basic frameworks of thought and a history of thought from ancient times to the present, a logical methodology is presented based on a core axiom, after which a function, a proto-field and then coordinates are explained. Afterwards, a generalization of the Standard Model is proposed, called the General Standard Model, which is believed to be the basis of the unified theory of everything.

Keywords: general relativity, grand unified theory, quantum mechanics, standard model, theory of everything

Procedia PDF Downloads 101
20694 Influencer Endorsement: Consumer Purchase Intention in Social Media Marketing

Authors: Izian Idris, Melissa Ha, Mikkay Wong

Abstract:

Social media marketing, including influencer marketing, is an ongoing phenomenon, and most companies and industries find it crucial to include social media marketing in their marketing strategies. However, social media influencer marketing still needs to be explored, and further research in this area is required to fully understand its importance in shaping consumer purchase decisions. Influencer endorsement has become a popular way to capture users’ attention. The aim of this research is therefore to explore the attributes of social media influencers, as endorsers, that affect consumer purchase intentions. The attributes investigated are attitude homophily, physical attractiveness, and social attractiveness. The elaboration likelihood model from the theory of persuasion is applied in this research to further examine the influence of these attributes on consumer purchase intentions. This study will help marketers, businesses, and researchers understand which influencer attributes affect consumer purchase intentions and allow businesses to refine their strategies to better cater to their target markets.

Keywords: influencer, endorsement, consumer purchase, social media

Procedia PDF Downloads 86
20693 A Flexible Pareto Distribution Using α-Power Transformation

Authors: Shumaila Ehtisham

Abstract:

In statistical distribution theory, adding an extra parameter to a classical distribution is a usual practice. In this study, a new distribution referred to as the α-Power Pareto distribution is introduced by including an extra parameter. Several properties of the proposed distribution are obtained, including explicit expressions for the moment generating function, mode, quantiles, entropies and order statistics. The unknown parameters are estimated using the maximum likelihood estimation technique. Two real datasets are considered to examine the usefulness of the proposed distribution, and it is observed that the α-Power Pareto distribution outperforms several variants of the Pareto distribution on the basis of model selection criteria.
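
As a sketch of how the α-power transformation works, the following applies it to a Pareto baseline with CDF F(x) = 1 − (x_m/x)^θ, giving F_APT(x) = (α^F(x) − 1)/(α − 1), and fits α and θ by maximum likelihood with the scale x_m fixed at the sample minimum. The data are simulated and the parametrization is one plausible reading of the construction, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
xm = 1.0
data = xm * (1 - rng.random(500)) ** (-1 / 2.5)   # ordinary Pareto(shape 2.5) sample

def negloglik(params, x, xm):
    log_alpha, log_theta = params
    alpha, theta = np.exp(log_alpha), np.exp(log_theta)
    if abs(alpha - 1.0) < 1e-8:
        alpha += 1e-6                             # avoid the alpha = 1 singularity
    F = 1.0 - (xm / x) ** theta                   # baseline Pareto CDF
    log_f = np.log(theta) + theta * np.log(xm) - (theta + 1) * np.log(x)
    # alpha-power density: [log(alpha)/(alpha-1)] * alpha**F(x) * f(x)
    ratio = np.log(alpha) / (alpha - 1.0)         # positive for all alpha > 0, alpha != 1
    log_pdf = np.log(ratio) + log_f + F * np.log(alpha)
    return -np.sum(log_pdf)

xm_hat = data.min()
fit = minimize(negloglik, x0=np.array([0.5, 1.0]), args=(data, xm_hat),
               method="Nelder-Mead")
alpha_hat, theta_hat = np.exp(fit.x)
print("alpha:", alpha_hat, "theta (Pareto shape):", theta_hat)
```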

Keywords: α-power transformation, maximum likelihood estimation, moment generating function, Pareto distribution

Procedia PDF Downloads 216
20692 The Magic Bullet in Africa: Exploring an Alternative Theoretical Model

Authors: Daniel Nkrumah

Abstract:

The Magic Bullet theory was a popular media-effects theory that described the power of the mass media to alter the beliefs and perceptions of its audiences. However, following the People's Choice study, the theory was said to have been disproved and was supplanted by the Two-Step Flow theory. This paper examines the relevance of the Magic Bullet theory in Africa and establishes whether it is still applicable in Africa's media spaces and societies. Using selected cases on the continent, it adopts a grounded theory approach and explores a new theoretical model that advances the argument that the Two-Step Flow theory, though important and valid, was ill-conceived as a direct replacement for the Magic Bullet theory.

Keywords: magic bullet theory, two-step flow theory, media effects, african media

Procedia PDF Downloads 129
20691 A New Distribution and Application on the Lifetime Data

Authors: Gamze Ozel, Selen Cakmakyapan

Abstract:

We introduce a new model, called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution through the Marshall-Olkin transformation and has both increasing and decreasing shapes for the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a real-life data set.

Keywords: Marshall-Olkin distribution, Rayleigh distribution, estimation, maximum likelihood

Procedia PDF Downloads 502
20690 The Generalized Pareto Distribution as a Model for Sequential Order Statistics

Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian

Abstract:

In this article, type II censored samples of sequential order statistics (SOS) from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.

Keywords: bayesian estimation, generalized pareto distribution, maximum likelihood estimation, sequential order statistics

Procedia PDF Downloads 512
20689 Residual Life Estimation Based on Multi-Phase Nonlinear Wiener Process

Authors: Hao Chen, Bo Guo, Ping Jiang

Abstract:

Residual life (RL) estimation based on a multi-phase nonlinear Wiener process is studied in this paper, which is significant for complicated products with small samples. Firstly, a nonlinear Wiener model with random parameters is introduced, and a multi-phase nonlinear Wiener model is proposed to describe degradation processes that are nonlinear and separated into different phases. The multi-phase RL probability density function based on the presented model is then derived approximately in closed form, and parameter estimation is carried out by maximum likelihood estimation (MLE). Finally, the method is applied to estimate the RL of high-voltage pulse capacitors. Compared with three other models in terms of the log-likelihood function (Log-LF) and the Akaike information criterion (AIC), the results show that the proposed degradation model captures the degradation process of high-voltage pulse capacitors better and provides more reliable results.
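
To make the building blocks concrete, here is a minimal single-phase sketch, assuming a linear Wiener degradation process: drift and diffusion are estimated by maximum likelihood from increments, the log-likelihood feeds an AIC comparison, and the residual-life density is the inverse-Gaussian first-passage-time density to a failure threshold. The multi-phase nonlinear model of the paper is not reproduced; data and thresholds are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_true, sigma_true, dt = 0.5, 0.2, 0.1
t_grid = np.arange(0, 10 + dt, dt)
# Simulated degradation path X(t) = mu*t + sigma*B(t)
x = np.cumsum(np.r_[0, mu_true * dt + sigma_true * np.sqrt(dt)
                    * rng.standard_normal(len(t_grid) - 1)])

dx, dtv = np.diff(x), np.diff(t_grid)
mu_hat = dx.sum() / dtv.sum()                         # closed-form MLE of the drift
sigma2_hat = np.mean((dx - mu_hat * dtv) ** 2 / dtv)  # MLE of the diffusion parameter
loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma2_hat * dtv)
                - (dx - mu_hat * dtv) ** 2 / (2 * sigma2_hat * dtv))
aic = 2 * 2 - 2 * loglik                              # two free parameters
print("mu:", mu_hat, "sigma^2:", sigma2_hat, "AIC:", aic)

def rl_pdf(t, current_level, threshold, mu, sigma2):
    """Inverse-Gaussian density of the remaining time to reach the failure threshold."""
    d = threshold - current_level
    return d / np.sqrt(2 * np.pi * sigma2 * t**3) * np.exp(
        -(d - mu * t) ** 2 / (2 * sigma2 * t))

ts = np.linspace(0.01, 20, 5)
print(rl_pdf(ts, current_level=x[-1], threshold=x[-1] + 2.0,
             mu=mu_hat, sigma2=sigma2_hat))
```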

Keywords: multi-phase nonlinear wiener process, residual life estimation, maximum likelihood estimation, high voltage pulse capacitor

Procedia PDF Downloads 453
20688 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is rather computationally intensive and may fail to estimate the regression effects because of a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments, and it is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.

Keywords: longitudinal, com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 355
20687 Credit Risk Prediction Based on Bayesian Estimation of Logistic Regression Model with Random Effects

Authors: Sami Mestiri, Abdeljelil Farhat

Abstract:

The aim of this paper is to predict the credit risk of banks in Tunisia over the period 2000-2005. For this purpose, two methods for estimating the logistic regression model with random effects are applied: the Penalized Quasi-Likelihood (PQL) method and the Gibbs sampler algorithm. Using information on a sample of 528 Tunisian firms and 26 financial ratios, we show that the Bayesian approach improves the quality of model predictions in terms of correct classification as well as the ROC curve.

Keywords: forecasting, credit risk, Penalized Quasi Likelihood, Gibbs Sampler, logistic regression with random effects, ROC curve

Procedia PDF Downloads 542
20686 A Model of Teacher Leadership in History Instruction

Authors: Poramatdha Chutimant

Abstract:

The objective of the research was to propose a model of teacher leadership in history instruction for practical utilization. Everett M. Rogers’ Diffusion of Innovations theory is applied as the theoretical framework. A qualitative method is used in the study, with an interview protocol as the instrument for collecting primary data from best-practice teachers awarded by the Office of the National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather varied data. Information on the international context of history instruction serves as secondary data to support the summarizing process (content analysis). A dendrogram is the key to interpreting and synthesizing the primary data, with the secondary data providing supporting explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. The focal point, finally, is to validate a draft model in terms of future utilization.

Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership

Procedia PDF Downloads 281
20685 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and specifying an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour, so probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them, and the system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires evaluating the likelihood, which is intractable in most cases. Different statistical methods allow simulation from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach that relies on simulating from the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on sequential Monte Carlo estimates of the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the repressilator using both methods, and a detailed investigation is performed to quantify the difference between them in terms of efficiency and computational cost.
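
To illustrate the likelihood-free idea on something much simpler than the repressilator, here is a minimal ABC rejection sketch for a toy continuous-time Markov chain (a pure death process simulated with the Gillespie algorithm); the prior, summary statistics and tolerance are arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def gillespie_death(rate, x0=50, t_end=5.0):
    """Simulate a pure death process X -> X-1 at rate rate*X; return the final count."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (rate * x))
        if t > t_end:
            break
        x -= 1
    return x

true_rate = 0.3
observed = np.array([gillespie_death(true_rate) for _ in range(20)])  # "data"

def summary(sample):
    return np.array([sample.mean(), sample.std()])

s_obs, eps, accepted = summary(observed), 2.0, []
for _ in range(5000):                       # ABC rejection sampling
    rate = rng.uniform(0.01, 1.0)           # draw from a uniform prior
    sim = np.array([gillespie_death(rate) for _ in range(20)])
    if np.linalg.norm(summary(sim) - s_obs) < eps:   # accept if summaries are close
        accepted.append(rate)

accepted = np.array(accepted)
print("approx. posterior mean:", accepted.mean(), "acceptances:", len(accepted))
```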

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 206
20684 Disintegration of Deuterons by Photons Reaction Model for GEANT4 with Dibaryon Formalism

Authors: Jae Won Shin, Chang Ho Hyun

Abstract:

A reaction model for the disintegration of deuterons by photons (dγ → np) is developed for GEANT4 in this work. Introducing a dibaryon field, we can take into account the effective-range contribution to the propagator up to infinite order, which consequently makes the convergence of the theory better than that of the pionless effective field theory without dibaryon fields. We develop a hadronic model for GEANT4 that is specialized to the disintegration of the deuteron by photons, dγ → np. For the description of two-nucleon interactions, we employ an effective field theory, the so-called pionless theory with dibaryon fields (dEFT). In spite of its simplicity, the theory has proven very effective and useful in applications to various two-nucleon systems and processes at low energies. We apply the new GEANT4 model (G4dEFT) to the calculation of total and differential cross sections in dγ → np and obtain good agreement with experimental data over a wide range of incoming photon energies.

Keywords: dγ → np, dibaryon fields, effective field theory, GEANT4

Procedia PDF Downloads 380
20683 Extreme Value Modelling of Ghana Stock Exchange Indices

Authors: Kwabena Asare, Ezekiel N. N. Nortey, Felix O. Mettle

Abstract:

Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for such rare events have become essential in finance and risk management. This paper models the extreme values of the Ghana Stock Exchange All-Share Index (2000-2010) by applying Extreme Value Theory (EVT) to fit a model to the tails of the daily stock returns. A conditional approach to EVT was preferred, so an ARMA-GARCH model was fitted to the data to correct for autocorrelation and conditional heteroscedasticity in the returns series before the EVT method was applied. The Peak Over Threshold (POT) approach, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model’s goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the excesses. The size of extreme daily Ghanaian stock market movements was then computed using the Value at Risk (VaR) and Expected Shortfall (ES) risk measures at high quantiles, based on the fitted GPD model.
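
A minimal sketch of the POT step is shown below: a generalized Pareto distribution is fitted to exceedances over a high threshold, and the VaR and ES formulas implied by the GPD tail are evaluated at the 99% level. The ARMA-GARCH pre-filtering used in the paper is omitted and the return series is simulated, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(7)
# Heavy-tailed stand-in for daily returns; work with losses (negated returns)
losses = -student_t.rvs(df=4, scale=0.01, size=2500, random_state=rng)

u = np.quantile(losses, 0.95)                # threshold at the 95th percentile of losses
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0)  # GPD shape xi and scale beta

n, n_u, q = len(losses), len(exceed), 0.99
# Tail-quantile (VaR) and Expected Shortfall implied by the fitted GPD
var_q = u + beta / xi * (((n / n_u) * (1 - q)) ** (-xi) - 1)
es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)
print(f"GPD shape={xi:.3f}, scale={beta:.4f}")
print(f"{q:.0%} VaR={var_q:.4f}, ES={es_q:.4f}")
```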

Keywords: extreme value theory, expected shortfall, generalized pareto distribution, peak over threshold, value at risk

Procedia PDF Downloads 559
20682 A Learning-Based EM Mixture Regression Algorithm

Authors: Yi-Cheng Tian, Miin-Shen Yang

Abstract:

The mixture likelihood approach to clustering is popular, and the expectation-maximization (EM) algorithm is the most widely used mixture likelihood method. In the literature, the EM algorithm has been applied to mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values and require an a priori number of clusters. In this paper, to resolve these drawbacks, we construct a learning-based schema for the EM mixture regression algorithm so that it is free of initialization and can automatically obtain an approximately optimal number of clusters. Numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm.
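
For context, here is a minimal sketch of the standard EM algorithm for a two-component Gaussian mixture of linear regressions, the baseline on which such learning-based schemes build; it still requires initial values and a fixed number of components, which is exactly what the proposed method aims to avoid. Data are simulated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5
y = np.where(z, 1.0 + 2.0 * x, -1.0 - 1.5 * x) + rng.normal(0, 0.4, n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept

K = 2
betas = rng.normal(size=(K, 2))               # initial regression coefficients
sigmas = np.ones(K)
pis = np.full(K, 1.0 / K)

for _ in range(200):
    # E-step: responsibility of each component for each observation
    dens = np.stack([pis[k] * norm.pdf(y, X @ betas[k], sigmas[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least squares, residual scale and mixing weights per component
    for k in range(K):
        w = resp[k]
        betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigmas[k] = np.sqrt(np.sum(w * (y - X @ betas[k]) ** 2) / w.sum())
    pis = resp.mean(axis=1)

print("mixing weights:", pis)
print("component coefficients:\n", betas)
```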

Keywords: clustering, EM algorithm, Gaussian mixture model, mixture regression model

Procedia PDF Downloads 510
20681 Sensing to Respond & Recover in Emergency

Authors: Alok Kumar, Raviraj Patil

Abstract:

The ability to respond to a disastrous event in a vulnerable area is a crucial aspect of emergency management. Constantly predicting the likelihood of an event, along with its severity, and reacting to those significant events that are likely to have a high impact allows the authorities to respond by allocating resources optimally and in a timely manner. It provides measuring, monitoring, and modeling facilities that integrate the underlying systems into one solution to improve operational efficiency, planning, and coordination. We were particularly involved in this innovative incubation work on the current state of research and development in collaboration technologies and systems for disasters.

Keywords: predictive analytics, advanced analytics, area flood likelihood model, area flood severity model, level of impact model, mortality score, economic loss score, resource allocation, crew allocation

Procedia PDF Downloads 321
20680 Sustainable Enterprise Theory: A Starting Point for Reporting Sustainable Business Values

Authors: Arne Fagerstrom, Gary Cunningham, Fredrik Hartwig

Abstract:

In this paper, a theory of sustainable enterprises, sustainable enterprise theory (SET), is developed. The sustainable enterprise theory can only be a valid theory if knowledge about life and nature is complete. Knowledge limitations should not stop enterprises from doing business with a goal of better long-term life on earth. Life demands stewardship of the resources used during one’s lifetime. This paper develops a model influenced by (the classical) enterprise theory and resource theory that includes more than money in the business activities of an enterprise. The sustainable enterprise theory is then used in an analysis of accountability and in discussions about sustainable businesses.

Keywords: sustainable business, sustainability reporting, sustainable values, theory of the firm

Procedia PDF Downloads 582
20679 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director or when the director owns another firm. The number of companies owned by the CEO, the number of companies managed by the CEO, and the number of subsidiaries were calculated to measure these relationships, and a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were included because they capture motivating factors for fraud. To test the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information was collected on person-related relationships between companies, the existence of a parent company and subsidiaries, profitability and the level of debt. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies owned or managed by the firm’s CEO has a significant impact on the likelihood of financial fraud: the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, a model based on both relational and financial data gives a high level of forecast accuracy.
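
A minimal sketch of the probit specification is given below, assuming illustrative simulated data and hypothetical column names that mirror the paper's constructs (companies affiliated with the CEO, subsidiaries, parent-company dummy, leverage, ROA); it is not the authors' dataset or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 160
df = pd.DataFrame({
    "ceo_companies": rng.poisson(2, n),   # firms owned or managed by the CEO
    "subsidiaries": rng.poisson(1, n),
    "has_parent": rng.integers(0, 2, n),
    "leverage": rng.uniform(0, 1, n),
    "roa": rng.normal(0.05, 0.1, n),
})
# Simulated latent propensity and binary fraud indicator
latent = -0.5 + 0.6 * df["ceo_companies"] + 0.3 * df["leverage"] - 1.0 * df["roa"]
df["fraud"] = (latent + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(df.drop(columns="fraud"))
probit = sm.Probit(df["fraud"], X).fit(disp=False)
print(probit.summary())

# In-sample classification accuracy at the 0.5 cut-off (the paper reports about 80%)
accuracy = ((probit.predict(X) > 0.5).astype(int) == df["fraud"]).mean()
print("classification accuracy:", accuracy)
```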

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 121
20678 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than the normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, the method of modified maximum likelihood is preferred, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form; hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to the maximum likelihood estimates, and even in small samples they are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally and, hence, the well-known least squares method is considered suitable and preferred for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, compared to the widely used least squares estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power; the tests based on modified maximum likelihood estimates are found to be substantially more powerful than the tests based on least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation.

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
20677 A Kolmogorov-Smirnov Type Goodness-Of-Fit Test of Multinomial Logistic Regression Model in Case-Control Studies

Authors: Chen Li-Ching

Abstract:

The multinomial logistic regression model is widely used for inferring the relationship between risk factors and a disease with multiple categories. Based on the discrepancy between the nonparametric maximum likelihood estimator and the semiparametric maximum likelihood estimator of the cumulative distribution function, this study proposes a Kolmogorov-Smirnov type test statistic to assess the adequacy of the multinomial logistic regression model for case-control data. A bootstrap procedure is presented to calculate the critical value of the proposed test statistic. Empirical type I error rates and powers of the test are obtained by simulation studies. Some examples illustrate the implementation of the test.

Keywords: case-control studies, goodness-of-fit test, Kolmogorov-Smirnov test, multinomial logistic regression

Procedia PDF Downloads 457
20676 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

The logistic regression model is the most commonly used regression model for predicting meteorological drought probabilities. When the dependent variable is extreme, however, the logistic model fails to capture drought probabilities adequately. In order to predict drought probabilities adequately, we use a generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe, assessing them with the goodness-of-fit measures relative root mean square error (RRMSE) and relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
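
The following is a minimal sketch, under simplifying assumptions, of a binary regression in which the success probability is the GEV CDF evaluated at a linear predictor (so the link is the GEV quantile function), with the regression coefficients and the GEV shape estimated jointly by maximum likelihood; the covariate and data are simulated stand-ins, not the Zimbabwean rainfall records.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
n = 300
rainfall = rng.normal(0, 1, n)               # e.g., standardized mean annual rainfall
X = np.column_stack([np.ones(n), rainfall])

def gev_cdf(z, xi):
    """GEV CDF used as the inverse link function."""
    if abs(xi) < 1e-8:
        return np.exp(-np.exp(-z))           # Gumbel limit
    arg = np.maximum(1 + xi * z, 1e-12)
    return np.exp(-arg ** (-1 / xi))

beta_true, xi_true = np.array([-0.5, -1.0]), 0.2
drought = (rng.random(n) < gev_cdf(X @ beta_true, xi_true)).astype(int)

def negloglik(theta):
    beta, xi = theta[:2], theta[2]
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(drought * np.log(p) + (1 - drought) * np.log(1 - p))

fit = minimize(negloglik, x0=np.array([0.0, 0.0, 0.1]), method="Nelder-Mead")
print("beta:", fit.x[:2], "xi:", fit.x[2])
# The logistic benchmark replaces gev_cdf with the logistic CDF (scipy.special.expit).
```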

Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 201