Search results for: sum conditional variance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1338

1248 Reconsidering Taylor’s Law with Chaotic Population Dynamical Systems

Authors: Yuzuru Mitsui, Takashi Ikegami

Abstract:

The exponents of Taylor's law in deterministic chaotic systems are computed, and their meanings are discussed in depth. Taylor's law is the scaling relationship between the mean and variance (in both space and time) of population abundance, and this law is known to hold in a variety of ecological time series. The exponents found in the temporal Taylor's law differ from those of the spatial Taylor's law. The temporal Taylor's law is calculated on time series from the same locations (or the same initial states) across different temporal phases, whereas for the spatial Taylor's law, the mean and variance are calculated within the same temporal phase across samples from different places. Most previous studies were done with stochastic models, but we computed the temporal and spatial Taylor's law in deterministic systems. The temporal Taylor's law was evaluated using the same initial state, and the spatial Taylor's law was evaluated using the ensemble average and variance. There were two main discoveries from this work. First, it is often stated that deterministic systems tend to have the value two for Taylor's exponent; however, most of the exponents calculated here were not two. Second, we investigated the relationships between Taylor's exponents and chaotic features measured by the Lyapunov exponent, the correlation dimension, and other indexes. No strong correlations were found, although some relationship appears within the same model under different parameter values; the meaning of these results is discussed at the end of the paper.
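
As an illustration of how such exponents are obtained, the sketch below (not the authors' code; using the logistic map and a sweep of its growth parameter as stand-in populations is our assumption) estimates a temporal Taylor's law exponent by regressing log variance on log mean of deterministic time series:

```python
import numpy as np

def logistic_series(r, x0, n, burn=500):
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

# One deterministic series per growth parameter; mean and variance over time.
means, variances = [], []
for r in np.linspace(3.6, 3.99, 100):
    s = logistic_series(r, x0=0.3, n=2000)
    means.append(s.mean())
    variances.append(s.var())

# The temporal Taylor exponent b is the slope of log(variance) vs log(mean).
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated temporal Taylor exponent b = {b:.3f}")
```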

Keywords: chaos, density effect, population dynamics, Taylor’s law

Procedia PDF Downloads 152
1247 Dynamic Comovements between Exchange Rates, Stock Prices and Oil Prices: Evidence from Developed and Emerging Latin American Markets

Authors: Nini Johana Marin Rodriguez

Abstract:

This paper applies DCC, EWMA, and OGARCH models to examine the time-varying conditional correlations between daily oil prices, stock index returns, and the US dollar/local currency exchange rate for developed (Canada and Mexico) and emerging (Brazil, Chile, Colombia, and Peru) Latin American markets. Changes in correlation interactions are indicative of structural changes in market linkages, with implications for contagion and interdependence. For each pair of stock price-exchange rate and oil price-US dollar/local currency, empirical evidence confirms a strengthening negative correlation in the last decade. The methodologies suggest that only two events had a significant impact in the countries analyzed: the global financial crisis and the European crisis; both events are associated with shifts of correlations to a stronger negative level for most of the pairs analyzed. While the first event mainly shifted correlations in emerging members, the latter affected developed members. The identification of these relationships provides benefits in risk diversification and inflation targeting.
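
Of the three estimators compared, the EWMA correlation is the simplest to reproduce. A minimal sketch, assuming the conventional RiskMetrics decay λ = 0.94 and synthetic placeholder returns rather than the paper's data:

```python
import numpy as np

def ewma_corr(x, y, lam=0.94):
    """EWMA (RiskMetrics-style) conditional correlation path."""
    cov = np.cov(x, y)[0, 1]           # initialize with sample moments
    vx, vy = x.var(), y.var()
    corr = np.empty(len(x))
    for t in range(len(x)):
        cov = lam * cov + (1 - lam) * x[t] * y[t]
        vx  = lam * vx  + (1 - lam) * x[t] ** 2
        vy  = lam * vy  + (1 - lam) * y[t] ** 2
        corr[t] = cov / np.sqrt(vx * vy)
    return corr

rng = np.random.default_rng(1)
oil = rng.normal(0, 0.02, 1000)                 # placeholder daily returns
fx  = -0.4 * oil + rng.normal(0, 0.015, 1000)   # negatively linked series
print(ewma_corr(oil, fx)[-5:])                  # time-varying correlation
```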

Keywords: crude oil, dynamic conditional correlation, exchange rates, interdependence, stock prices

Procedia PDF Downloads 284
1246 Regulation of SHP-2 Activity by Small Molecules for the Treatment of T Cell-Mediated Diseases

Authors: Qiang Xu, Xingxin Wu, Wenjie Guo, Xingqi Wang, Yang Sun, Renxiang Tan

Abstract:

The phosphatase SHP-2 is known to exert regulatory activities on cytokine receptor signaling, and the dysregulation of SHP-2 has been implicated in the pathogenesis of a variety of diseases. Here we report several small-molecule regulators of SHP-2 for the treatment of T cell-mediated diseases. The new cyclodepsipeptide trichomide A, isolated from the fermentation products of Trichothecium roseum, increased the phosphorylation of SHP-2 in activated T cells and ameliorated contact dermatitis in mice. Trichomide A's effects were significantly reversed by using the SHP-2-specific inhibitor PHPS1 or T cell-conditional SHP-2 knockout mice. Another compound is the cerebroside fusaruside, isolated from the endophytic fungus Fusarium sp. IFB-121. Fusaruside also triggered the tyrosine phosphorylation of SHP-2, which provided a possible means of selectively targeting STAT1 for the treatment of Th1 cell-mediated inflammation and led to the discovery of a non-phosphatase-like function of SHP-2. Namely, the fusaruside-activated pY-SHP-2 selectively sequestered cytosolic STAT1 to prevent its recruitment to IFN-R, which contributed to the improvement of experimental colitis in mice. Blocking the pY-SHP-2-STAT1 interaction, with the SHP-2 inhibitor NSC-87877 or using T cells from conditional SHP-2 knockout mice, reversed the effects of fusaruside. Furthermore, fusaruside's effect is independent of the phosphatase activity of SHP-2, demonstrating a novel role for SHP-2 in regulating STAT1 signaling and Th1-type immune responses.

Keywords: SHP-2, small molecules, T cell, T cell-mediated diseases

Procedia PDF Downloads 291
1245 Integrating Data Envelopment Analysis and Variance Inflation Factor to Measure the Efficiency of Decision Making Units

Authors: Mostafa Kazemi, Zahra N. Farkhani

Abstract:

This paper proposes an integrated Data Envelopment Analysis (DEA) and Variance Inflation Factor (VIF) model for measuring the technical efficiency of decision making units. The model is validated using data on 69 sales representatives of dairy products. The analysis is done in two stages: in the first stage, the VIF technique is used to distinguish independent effective factors of resellers, and in the second stage DEA is used to measure efficiency under both constant and variable returns to scale. DEA is further used to examine the effect of environmental factors on efficiency. The results indicate an average managerial efficiency of 83% across the sales representatives as a whole. In addition, technical and scale efficiency were 96% and 80%, respectively. 38% of the sales representatives have a technical efficiency of 100%, and 72% of them are fully efficient in terms of managerial efficiency. These high levels of relative efficiency indicate a good overall condition for sales representative efficiency.
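
The first-stage VIF screening can be sketched with the standard statsmodels diagnostic; the factor names below are hypothetical stand-ins, not the paper's variables:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
X = pd.DataFrame({
    "sales_staff":   rng.normal(10, 2, 69),
    "delivery_cost": rng.normal(50, 8, 69),
    "shelf_space":   rng.normal(30, 5, 69),
})
X["redundant"] = 2 * X["sales_staff"] + rng.normal(0, 0.1, 69)  # collinear

vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns)}
print(vifs)  # factors with VIF above ~10 would be dropped before running DEA
```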

Keywords: data envelopment analysis (DEA), relative efficiency, sales representatives’ dairy products, variance inflation factor (VIF)

Procedia PDF Downloads 532
1244 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method

Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

Diabetes mellitus (DM) is a chronic disease in humans that occurs when the pancreas cannot produce enough insulin or the body uses insulin ineffectively, causing an elevated level of glucose in the blood, called hyperglycemia. In Indonesia, DM is a serious health problem because it can cause blindness, kidney disease, diabetic foot (gangrene), and stroke. Based on its main causes, DM is classified into type 1, type 2, and gestational diabetes. Diabetes type 1, previously known as insulin-dependent diabetes, is due to a lack of production of the insulin hormone. Diabetes type 2, previously known as non-insulin-dependent diabetes, is due to ineffective use of insulin, while gestational diabetes is hyperglycemia first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors in this disease are genetic (A) and life style (E). A disease with these two factors can be described by the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using the Bayesian method and a simulation of the inheritance of the trait from parent to offspring. The AE model has a response variable, predictor variables, and parameters representing the population under study. The population can be measured through a random sample; the response and predictor variables can be determined from the sample, while the parameters are unknown, so it is necessary to estimate them from the sample. Estimates of the AE model parameters were obtained from a joint posterior distribution. The simulation was conducted to obtain the values of the genetic variance and the life style variance. The results of the simulation are 0.3600 for the genetic variance and 0.0899 for the life style variance. Therefore, the variance of the genetic factor in DM type 2 is greater than that of life style.
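
The paper estimates the variance components from a joint posterior; as a simpler moment-based illustration of the same AE decomposition (the purely additive inheritance model and all parameter values are our assumptions, not the paper's), the parent-offspring covariance identifies the genetic variance:

```python
import numpy as np

# Under a purely additive model, cov(parent, offspring) = var_A / 2,
# so var_A = 2 * cov(parent, offspring) and var_E = var(trait) - var_A.
rng = np.random.default_rng(3)
n = 5000
var_A, var_E = 0.36, 0.09          # the simulated values from the abstract

g_parent = rng.normal(0, np.sqrt(var_A), n)
# Offspring inherit half the parental breeding value plus segregation noise.
g_child = 0.5 * g_parent + rng.normal(0, np.sqrt(0.75 * var_A), n)
parent = g_parent + rng.normal(0, np.sqrt(var_E), n)
child  = g_child  + rng.normal(0, np.sqrt(var_E), n)

est_A = 2 * np.cov(parent, child)[0, 1]
est_E = parent.var() - est_A
print(f"estimated genetic variance {est_A:.3f}, life-style variance {est_E:.3f}")
```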

Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style

Procedia PDF Downloads 250
1243 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from dynamical systems. We investigate different distributions of the output measurements of these systems. Using variance processing of the experimental data, we obtain the region of nonlinearity, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification, and the limitations of this approach, are explained.

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 488
1242 Extreme Value Modelling of Ghana Stock Exchange Indices

Authors: Kwabena Asare, Ezekiel N. N. Nortey, Felix O. Mettle

Abstract:

Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for such rare events, which can lead to these crises, have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana Stock Exchange All-Share indices (2000-2010) by applying extreme value theory (EVT) to fit a model to the tails of the daily stock returns. A conditional approach to EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the autocorrelation and conditional heteroscedasticity present in the returns series before the EVT method was applied. The Peak Over Threshold (POT) approach, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P, and density plots. The findings indicate that the GPD provides an adequate fit to the excesses. The sizes of extreme daily Ghanaian stock market movements were then computed using the Value at Risk (VaR) and Expected Shortfall (ES) risk measures at some high quantiles, based on the fitted GPD model.
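
The POT step translates directly into code. A minimal sketch, with synthetic Student-t returns standing in for the GARCH-filtered series and an assumed 95th-percentile threshold, fits the GPD and evaluates the standard VaR and ES formulas:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
losses = -rng.standard_t(df=4, size=5000) * 0.01   # daily negative returns

u = np.quantile(losses, 0.95)                      # threshold (assumed)
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0)      # GPD shape and scale

p = 0.99                                           # VaR confidence level
n, n_u = len(losses), len(excesses)
var_p = u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1)
es_p = (var_p + beta - xi * u) / (1 - xi)          # valid for xi < 1
print(f"VaR(99%) = {var_p:.4f}, ES(99%) = {es_p:.4f}")
```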

Keywords: extreme value theory, expected shortfall, generalized pareto distribution, peak over threshold, value at risk

Procedia PDF Downloads 519
1241 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
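
The core idea, recovering a distribution from its Fourier transform by a truncated cosine expansion, can be shown in a toy setting. In the sketch below, a standard normal with a known characteristic function stands in for the portfolio-loss distribution; the truncation range and number of terms are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm

phi = lambda u: np.exp(-0.5 * u ** 2)   # characteristic function (toy case)
a, b, N = -10.0, 10.0, 64               # truncation range and term count

k = np.arange(N)
u = k * np.pi / (b - a)
# Cosine coefficients; the k = 0 term gets weight 1/2 (Fang-Oosterlee COS).
A = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
A[0] *= 0.5

x = np.linspace(-4, 4, 9)
f = np.array([np.sum(A * np.cos(u * (xi - a))) for xi in x])
print(np.max(np.abs(f - norm.pdf(x))))  # recovery error is tiny
```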

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 130
1240 Secondary Traumatic Stress and Related Factors in Australian Social Workers and Psychologists

Authors: Cindy Davis, Samantha Rayner

Abstract:

Secondary traumatic stress (STS) is an indirect form of trauma affecting the psychological well-being of mental health workers, and it is a prevalent risk in mental health occupations. The literature identifies various factors that impact the development of STS, including the level of trauma individuals are exposed to and their level of empathy. Research on STS in mental health workers in Australia is limited; therefore, this study examined STS and the related factors of empathetic behavior and trauma caseload among mental health workers. The research utilized a quantitative online survey design with a purposive sample of 190 mental health workers (176 females) recruited via professional websites and unofficial social media groups. Participants completed an online questionnaire comprising demographics, the Secondary Traumatic Stress Scale, and the Empathy Scale for Social Workers. A standard hierarchical regression analysis was conducted to examine the significance of the covariates, number of traumatized clients, traumatic stress within workload, and empathy in predicting STS. The research found that 29.5% of participants met the criteria for a diagnosis of STS. Among the covariates, age and past trauma were significantly associated with STS. The number of traumatized clients significantly predicted 4.7% of the variance in STS, traumatic stress within workload significantly predicted 4.8%, and empathy significantly predicted 4.9%. These three independent variables and the covariates together accounted for 18.5% of the variance in STS. Practical implications include a focus on developing risk strategies and treatment methods that can diminish the impact of STS.
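
A blockwise (hierarchical) regression of this kind can be sketched as follows; the variable names and synthetic data are hypothetical stand-ins for the survey measures:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 190
df = pd.DataFrame({
    "age":         rng.normal(40, 10, n),
    "past_trauma": rng.integers(0, 2, n).astype(float),
    "caseload":    rng.normal(0, 1, n),
    "empathy":     rng.normal(0, 1, n),
})
df["sts"] = (0.2 * df.past_trauma + 0.25 * df.caseload
             + 0.25 * df.empathy + rng.normal(0, 1, n))

# Enter covariates first, then caseload, then empathy; track incremental R^2.
blocks = [["age", "past_trauma"], ["caseload"], ["empathy"]]
cols, r2_prev = [], 0.0
for block in blocks:
    cols += block
    r2 = sm.OLS(df["sts"], sm.add_constant(df[cols])).fit().rsquared
    print(f"+{block}: R^2 = {r2:.3f} (delta = {r2 - r2_prev:.3f})")
    r2_prev = r2
```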

Keywords: mental health, PTSD, social work, trauma

Procedia PDF Downloads 306
1239 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the gastrocnemius muscle of the left leg of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280×20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters, including gray level, variance, skewness, kurtosis, co-occurrence matrix, run length matrix, gradient, autoregressive (AR) model, and wavelet transform features, were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data, and the Wilcoxon-Mann-Whitney test was used for the non-normally distributed data. The gray level, variance, and run length matrix were significantly lowered when the depth increased; the other texture parameters showed similar values at different depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05), except for gray level, variance, and run length matrix (p < 0.05). This indicates that gray level, variance, and run length matrix are depth dependent.
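
The first-order part of such an analysis can be sketched as follows (random arrays stand in for the 280×20-pixel ROIs, and only gray level, variance, skewness, and kurtosis are computed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def first_order(roi):
    """First-order texture features of one region of interest."""
    g = roi.ravel().astype(float)
    return g.mean(), g.var(), stats.skew(g), stats.kurtosis(g)

# One shallow (A) and one deep (B) ROI per subject, 10 subjects.
feats_A = np.array([first_order(rng.normal(120, 20, (280, 20)))
                    for _ in range(10)])
feats_B = np.array([first_order(rng.normal(100, 15, (280, 20)))
                    for _ in range(10)])

for j, name in enumerate(["gray level", "variance", "skewness", "kurtosis"]):
    t, p = stats.ttest_rel(feats_A[:, j], feats_B[:, j])   # paired t-test
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```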

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 268
1238 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies

Authors: Yalda Zarnegarnia, Shari Messinger

Abstract:

Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate the performance of a biomarker in correctly distinguishing diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share genetic or environmental factors. Information about the correlation might help to identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop progression to disease. Approaches appropriate to a case-control design matched by family identification must accommodate both the correlation inherent in the design, in correctly estimating the biomarker's ability to differentiate between cases and controls, and estimation from a matched case-control design. This talk will review methods developed for ROC curve estimation in settings with correlated data from case-control designs and will discuss the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves will be demonstrated, providing appropriate ROC curves for correlated paired data. The proposed approach uses the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.

Keywords: biomarker, correlation, familial paired design, ROC curve

Procedia PDF Downloads 208
1237 Sensitivity Analysis of Movable Bed Roughness Formula in Sandy Rivers

Authors: Mehdi Fuladipanah

Abstract:

Sensitivity analysis is a technique applied to determine the influential input factors of a model output. The variance-based sensitivity analysis method has wider application than other methods because it covers both linear and non-linear models. In this paper, van Rijn's movable bed roughness formula was selected for evaluation because of its reasonable results in sandy rivers. This equation contains four variables: flow depth, sediment size, bed form height, and bed form length. The importance of these variables was determined using the first order of the Fourier Amplitude Sensitivity Test (FAST). A sensitivity index was applied to evaluate the importance of each factor. The first-order FAST sensitivity indices explain 90% of the total variance, which satisfies the acceptance criterion for applying FAST. A larger index value indicates a more influential variable. Results show that bed form height, bed form length, sediment size, and flow depth are the most influential factors, with sensitivity indices of 32%, 24%, 19%, and 15%, respectively.
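
A FAST analysis of this kind can be sketched with the SALib package; the response below is a simplified van Rijn-style roughness relation, and the coefficients and parameter bounds are assumptions rather than the paper's setup:

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 4,
    "names": ["flow_depth", "sediment_d90", "bedform_height", "bedform_length"],
    "bounds": [[1.0, 5.0], [0.0002, 0.002], [0.05, 0.5], [1.0, 10.0]],
}

X = fast_sampler.sample(problem, 1000)
h, d90, delta, lam = X.T
# Simplified van Rijn-style roughness and a Chezy-type response (assumed).
ks = 3 * d90 + 1.1 * delta * (1 - np.exp(-25 * delta / lam))
Y = 18 * np.log10(12 * h / ks)

Si = fast.analyze(problem, Y)
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name}: first-order sensitivity index = {s1:.3f}")
```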

Keywords: sensitivity analysis, variance, movable bed roughness formula, sandy rivers

Procedia PDF Downloads 233
1236 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
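
A minimal sketch of the multi-scale idea (recomputing the plane fit per scale for clarity, rather than aggregating the regression sums incrementally as the paper describes; the synthetic DEM is ours):

```python
import numpy as np

def plane_fit_stats(Z, w):
    """Per-window slope magnitude and residual variance at window size w."""
    H, W = Z.shape
    z = Z[:H - H % w, :W - W % w].reshape(H // w, w, W // w, w)
    # Centered local coordinates make the normal equations diagonal.
    c = np.arange(w) - (w - 1) / 2.0
    Sxx = w * np.sum(c ** 2)                  # same value for Syy
    Sxz = np.einsum("iajb,b->ij", z, c)       # sum of x * z per window
    Syz = np.einsum("iajb,a->ij", z, c)       # sum of y * z per window
    sx, sy = Sxz / Sxx, Syz / Sxx             # regression slopes
    n = w * w
    resid_var = z.var(axis=(1, 3)) - (sx ** 2 + sy ** 2) * Sxx / n
    return np.hypot(sx, sy), resid_var

rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:256, 0:256]
dem = 0.05 * xx + 2 * np.sin(yy / 40.0) + rng.normal(0, 0.2, (256, 256))

for w in (2, 4, 8, 16):                       # pick the scale with min variance
    slope, rv = plane_fit_stats(dem, w)
    print(f"window {w}x{w}: median slope {np.median(slope):.3f}, "
          f"median residual variance {np.median(rv):.4f}")
```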

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 104
1235 Applying Multivariate and Univariate Analysis of Variance on Socioeconomic, Health, and Security Variables in Jordan

Authors: Faisal G. Khamis, Ghaleb A. El-Refae

Abstract:

Many researchers have studied socioeconomic, health, and security variables in developed countries; however, very few studies have used multivariate analysis in developing countries. The current study contributes to the scarce literature on the determinants of the variance in socioeconomic, health, and security factors. The questions raised were whether the independent variables (IVs) of governorate and year impact the socioeconomic, health, and security dependent variables (DVs) in Jordan, whether the marginal mean of each DV in each governorate and in each year is significant, which governorates are similar in the mean differences of each DV, and whether these DVs vary. The main objectives were to determine the source of variance in the DVs, collectively and separately, testing which governorates are similar and which diverge for each DV. The research design was a time series and cross-sectional analysis. The main hypotheses were that the IVs affect the DVs collectively and separately. Multivariate and univariate analyses of variance were carried out to test these hypotheses. The population comprised the 12 governorates of Jordan, with the available data of 15 years (2000–2015) accrued from several Jordanian statistical yearbooks. We investigated the effect of the two factors of governorate and year on the four DVs of divorce rate, mortality rate, unemployment percentage, and crime rate. All DVs were transformed to multivariate normality, and we calculated descriptive statistics for each DV. Based on the multivariate analysis of variance, we found a significant effect of the IVs on the DVs with p < .001. Based on the univariate analysis, we found a significant effect of the IVs on each DV with p < .001, except that the effect of the year factor on unemployment was not significant, with p = .642. The grand and marginal means of each DV in each governorate and each year were significant based on a 95% confidence interval. Most governorates are not similar in the DVs, with p < .001. We concluded that the two factors produce significant effects on the DVs, collectively and separately. Based on these findings, the government can distribute its financial and physical resources to governorates more efficiently, and by identifying the sources of variance that contribute to the variation in the DVs, insights can help inform focused variation-prevention efforts.
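
The two-factor multivariate test can be sketched with statsmodels; the data below are random placeholders shaped like the study's panel of 12 governorates by 16 years:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(8)
govs = [f"G{i}" for i in range(1, 13)]
years = list(range(2000, 2016))
df = pd.DataFrame([(g, y) for g in govs for y in years],
                  columns=["governorate", "year"])
shift = np.repeat(rng.normal(0, 1, len(govs)), len(years))  # governorate effect
for dv in ["divorce", "mortality", "unemployment", "crime"]:
    df[dv] = shift + rng.normal(0, 1, len(df))

m = MANOVA.from_formula(
    "divorce + mortality + unemployment + crime ~ governorate + C(year)",
    data=df)
print(m.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each factor
```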

Keywords: ANOVA, crime, divorce, governorate, hypothesis test, Jordan, MANOVA, means, mortality, unemployment, year

Procedia PDF Downloads 250
1234 Secure Multiparty Computations for Privacy Preserving Classifiers

Authors: M. Sumana, K. S. Hareesha

Abstract:

Secure computations are essential when performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data with a third party without violating laws that protect individuals. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computation of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties is discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
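
As an illustration of the additive homomorphic property, the sketch below computes a secure sum and mean with the python-paillier library (an illustrative library choice; the site values are made up). A secure variance would additionally aggregate encrypted squared values in the same way:

```python
from phe import paillier  # python-paillier

public_key, private_key = paillier.generate_paillier_keypair()

site_sums = [120.5, 98.2, 143.7]   # each site's local sum (hypothetical)
site_counts = [40, 35, 50]

enc_total = public_key.encrypt(0)
for s in site_sums:
    enc_total = enc_total + public_key.encrypt(s)  # ciphertext addition

# Only the key holder can decrypt; no site reveals its raw data.
total = private_key.decrypt(enc_total)
print("secure sum:", total, "secure mean:", round(total / sum(site_counts), 3))
```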

Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data

Procedia PDF Downloads 393
1233 Multivariate Statistical Analysis of Heavy Metals Pollution of Dietary Vegetables in Swabi, Khyber Pakhtunkhwa, Pakistan

Authors: Fawad Ali

Abstract:

Toxic heavy metal contamination has a negative impact on soil quality, which ultimately pollutes the agricultural system. In the current work, we analyzed the uptake of various heavy metals by dietary vegetables grown in the wastewater-irrigated areas of Swabi city. The samples of soil and vegetables were analyzed for the heavy metals Cd, Cr, Mn, Fe, Ni, Cu, Zn, and Pb using an atomic absorption spectrophotometer. High levels of metals were found in the wastewater-irrigated soil and vegetables in the study area. In particular, the concentrations of Pb and Cd in the dietary vegetables crossed the permissible levels of the World Health Organization. A substantial positive correlation was found between soil and vegetable contamination. The transfer factor for some metals, including Cr, Zn, Mn, Ni, Cd, and Cu, was greater than 0.5, which shows enhanced accumulation of these metals due to contamination by domestic discharges and industrial effluents. Linear regression analysis indicated a significant correlation of the heavy metals Pb, Cr, Cd, Ni, Zn, Cu, Fe, and Mn in vegetables with their concentrations in soil (0.964 at P ≤ 0.001). Abelmoschus esculentus indicated a Health Risk Index (HRI) of Pb > 1 in adults and children. The source identification analysis, carried out by Principal Component Analysis (PCA) and Cluster Analysis (CA), showed that the ground water and soil were being polluted by trace metals coming from industries and domestic wastes. Hierarchical cluster analysis (HCA) divided the metals into two clusters for wastewater and soil, but into five clusters for the soil of the control area. PCA extracted two factors for wastewater, contributing 61.086% and 16.229%, respectively, of the total 77.315% variance. For soil samples, PCA extracted two factors with a total variance of 79.912%, factor 1 contributing 63.889% and factor 2 contributing 16.023%. PCA for subsoil extracted two factors with a total variance of 76.136%, factor 1 being 61.768% and factor 2 being 14.368%. The high pollution load index for vegetables in the study area, due to metal-polluted soil, calls for proper legislation to protect vegetables from further contamination. This work further reveals serious health risks to the human population of the study area.
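
The PCA step can be sketched with scikit-learn; the concentrations below are random placeholders labeled with the study's eight metals:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
metals = ["Cd", "Cr", "Mn", "Fe", "Ni", "Cu", "Zn", "Pb"]
X = rng.lognormal(0, 0.5, size=(60, len(metals)))  # 60 placeholder samples

Z = StandardScaler().fit_transform(X)              # standardize before PCA
pca = PCA(n_components=2).fit(Z)
for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"factor {i}: {100 * ratio:.2f}% of total variance")
print("loadings:\n", np.round(pca.components_, 2))
```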

Keywords: health risk, vegetables, wastewater, atomic absorption spectrophotometer

Procedia PDF Downloads 39
1232 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights, and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
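
A simulation sketch of a BINARMA(1,1)-type recursion, using binomial thinning and a common Poisson shock to induce the cross-correlation (parameter values are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(10)

def thin(x, p):
    """Binomial thinning operator p o x."""
    return rng.binomial(x, p)

T, a, b = 1000, (0.4, 0.5), (0.3, 0.2)
lam_common, lam_own = 1.0, (2.0, 1.5)   # common shock creates correlation

X = np.zeros((T, 2), dtype=int)
R_prev = np.zeros(2, dtype=int)
for t in range(1, T):
    z0 = rng.poisson(lam_common)
    R = np.array([rng.poisson(lam_own[0]) + z0,
                  rng.poisson(lam_own[1]) + z0])   # correlated innovations
    for i in range(2):
        X[t, i] = thin(X[t - 1, i], a[i]) + R[i] + thin(R_prev[i], b[i])
    R_prev = R

print("cross-correlation of the two count series:",
      round(float(np.corrcoef(X[100:, 0], X[100:, 1])[0, 1]), 3))
```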

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 103
1231 Economic Valuation of Forest Landscape Function Using a Conditional Logit Model

Authors: A. J. Julius, E. Imoagene, O. A. Ganiyu

Abstract:

The purpose of this study is to estimate the economic value of the services and functions rendered by the forest landscape using a conditional logit model. For this study, attributes and levels of the forest landscape were chosen; specifically, the attributes include topographical forest type, forest type, forest density, a recreational factor (side trips, accessibility of valleys), and willingness to pay (WTP). Based on these factors, 48 choice sets in balanced and orthogonal form were constructed using Statistical Analysis System (SAS) 9.1. The efficiency of the questionnaire was 6.02 (D-error 0.1), and the choice sets and socio-economic variables were analyzed. To reduce the cognitive load on respondents, the 48 choice sets were divided into 4 blocks in the questionnaire, so that each respondent answered 12 choice sets. The study population comprised citizens from seven metropolitan cities, including Ibadan, Ilorin, and Osogbo, and the annual WTP per household was elicited using an interview questionnaire; a total of 267 copies were recovered. As a result, Osogbo had 0.45, and statistical similarities could not be found except for urban forests, forest density, the recreational factor, and the level of WTP. The average annual WTP per household for the forest landscape was 104,758 Naira (the Nigerian currency); based on the outcome of this model, the total economic value of the services and functions enjoyed from the Nigerian forest landscape reaches approximately 1.6 trillion Naira.
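
The estimation behind such a study is the McFadden conditional logit; a self-contained maximum-likelihood sketch on hypothetical choice sets (all attributes and coefficients are made up):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n_sets, n_alts, n_attr = 500, 3, 4
X = rng.normal(size=(n_sets, n_alts, n_attr))     # attribute levels
beta_true = np.array([0.8, -0.5, 0.3, 1.0])       # hypothetical coefficients

# Simulate choices: utility plus Gumbel noise, pick the maximum per set.
U = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))
choice = U.argmax(axis=1)

def neg_loglik(beta):
    V = X @ beta
    logp = V - np.log(np.exp(V).sum(axis=1, keepdims=True))  # softmax in set
    return -logp[np.arange(n_sets), choice].sum()

fit = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 3))
```

When one attribute is a cost, the marginal WTP for another attribute is the negative ratio of its coefficient to the cost coefficient.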

Keywords: economic valuation, urban cities, services, forest landscape, logit model, Nigeria

Procedia PDF Downloads 100
1230 Mobile Traffic Management in Congested Cells using Fuzzy Logic

Authors: A. A. Balkhi, G. M. Mir, Javid A. Sheikh

Abstract:

To cater to the demands of increasing traffic and new applications, cellular mobile networks face changes in infrastructure deployment that make them heterogeneous. To reduce overhead processing, densely deployed cells require smart behavior, with self-organizing capabilities and high adaptation to their neighborhood. We propose the self-organization of unused resources, usually the excess unused channels of neighbouring cells, toward densely populated cells to reduce handover failure rates. Neighboring cells share unused channels after fulfilling a conditional candidature criterion based on threshold values, so that they do not themselves suffer channel starvation in case of an abrupt change in the traffic pattern. Cells are classified as 'red', 'yellow', or 'green' according to the available channels in the cell, which is governed by the traffic pattern and the thresholds. To combat the deficiency of channels in a red cell, the migration of unused channels from under-loaded cells, hierarchically from the qualified candidate neighboring cells, is explored. The resources are returned when the congested cell becomes capable of self-contained traffic management. In either case, conditional sharing of resources is executed for enhanced traffic management, so that User Equipment (UE) is provided uninterrupted services with high Quality of Service (QoS). The fuzzy logic-based simulation results show that the proposed algorithm is efficient and yields improved handoff success rates.
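
The red/yellow/green classification can be sketched with triangular fuzzy memberships over the fraction of free channels; the membership thresholds below are assumptions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def classify(free_ratio):
    mu = {
        "red":    tri(free_ratio, -0.01, 0.0, 0.25),  # starved of channels
        "yellow": tri(free_ratio, 0.15, 0.35, 0.55),
        "green":  tri(free_ratio, 0.45, 1.0, 1.01),   # surplus to donate
    }
    return max(mu, key=mu.get), mu

for ratio in (0.05, 0.30, 0.80):
    label, mu = classify(ratio)
    print(f"free ratio {ratio:.2f} -> {label}",
          {k: round(float(v), 2) for k, v in mu.items()})
```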

Keywords: candidate cell, channel sharing, fuzzy logic, handover, small cells

Procedia PDF Downloads 100
1229 Fault Tolerant (n,k)-star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems

Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj K. Biswas, Frank Ferrese

Abstract:

This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed through agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems, exhibiting fault tolerance properties in power restoration as well as efficiency when applied to power system route discovery.
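
The (n,k)-star structure itself is easy to construct and check; a sketch based on the standard definition (vertices are k-permutations of n symbols; edges swap the first symbol with another position or substitute it with an unused symbol), which yields an (n-1)-regular graph:

```python
from itertools import permutations

def nk_star(n, k):
    """Adjacency lists of the (n,k)-star graph S(n,k)."""
    symbols = range(1, n + 1)
    graph = {}
    for v in permutations(symbols, k):
        nbrs = []
        for i in range(1, k):                 # swap-type (star) edges
            u = list(v)
            u[0], u[i] = u[i], u[0]
            nbrs.append(tuple(u))
        for s in symbols:                     # substitute-type edges
            if s not in v:
                nbrs.append((s,) + v[1:])
        graph[v] = nbrs
    return graph

g = nk_star(n=4, k=2)
print(len(g), "vertices; degrees:", {len(nbrs) for nbrs in g.values()})
```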

Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system

Procedia PDF Downloads 486
1227 A Comparative Analysis of Global Minimum Variance and Naïve Portfolios: Performance across Stock Market Indices and Selected Economic Regimes Using Various Risk-Return Metrics

Authors: Lynmar M. Didal, Ramises G. Manzano Jr., Jacque Bon-Isaac C. Aboy

Abstract:

This study analyzes the performance of global minimum variance (GMV) and naive portfolios across different economic periods, using monthly stock returns from the Philippine Stock Exchange Index (PSEI), S&P 500, and Dow Jones Industrial Average (DOW). Performance is evaluated through the Sharpe ratio, Sortino ratio, Jensen's alpha, Treynor ratio, and information ratio. Additionally, the study investigates the impact of short selling on portfolio performance. Six time periods are defined for analysis, encompassing events such as the global financial crisis and the COVID-19 pandemic. The findings indicate that the naive portfolio generally outperforms the GMV portfolio in the S&P 500, signifying higher returns with increased volatility. Conversely, in the PSEI and DOW, the GMV portfolio shows more efficient risk-adjusted returns. Short selling significantly impacts the GMV portfolio during the mid-GFC and mid-COVID periods. The study offers insights for investors, suggesting the naive portfolio for higher risk tolerance and the GMV portfolio as a conservative alternative.
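
Both portfolios compared here are closed-form; a sketch on synthetic monthly returns (the unconstrained GMV weights may short assets, matching the short-selling variant of the study, and the risk-free rate is assumed):

```python
import numpy as np

rng = np.random.default_rng(12)
n_assets, n_months = 10, 120
R = rng.multivariate_normal(
    mean=np.full(n_assets, 0.008),
    cov=0.002 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets)),
    size=n_months)

S = np.cov(R, rowvar=False)
ones = np.ones(n_assets)
w_gmv = np.linalg.solve(S, ones)      # w = S^{-1} 1 / (1' S^{-1} 1)
w_gmv /= ones @ w_gmv
w_naive = ones / n_assets             # equal (1/N) weights

rf = 0.002                            # monthly risk-free rate (assumed)
for name, w in (("GMV", w_gmv), ("naive", w_naive)):
    p = R @ w
    print(f"{name}: Sharpe = {(p.mean() - rf) / p.std():.3f}")
```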

Keywords: portfolio performance, global minimum variance, naïve portfolio, risk-adjusted metrics, short-selling

Procedia PDF Downloads 64
1226 Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model

Authors: Davies Obaromi, Qin Yongsong, James Ndege, Azeez Adeboye, Akinwumi Odeyemi

Abstract:

The basic model usually used in disease mapping is the Besag, York and Mollie (BYM) model, which combines spatially structured and spatially unstructured priors as random effects. The Bayesian conditional autoregressive (CAR) model is commonly used in disease mapping for smoothing the relative risk of a disease, as in the BYM model. The CAR model, which is usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve estimates for individual sites. To our knowledge, the CAR prior has some unrealistic or counter-intuitive consequences for the posterior covariance matrix of the spatial random effects. In the conventional BYM model, the spatially structured and unstructured random components cannot be identified independently, which complicates the prior definitions for the hyperparameters of the two random effects. Therefore, the main objective of this study is to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and then compare its flexibility with some existing CAR models. The results revealed the flexibility and robustness of this alternative extended CAR model relative to the commonly used CAR models, based on a comparison using the deviance information criterion. The extended Bayesian spatial CAR model proved to be a useful and robust tool for disease modeling and as a prior for the structured spatial random effects because of the inclusion of an extra hyperparameter.
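
The proper CAR prior underlying such models can be sketched directly from its precision matrix; the chain adjacency and the values of tau and alpha below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(13)
m = 30
W = np.zeros((m, m))                    # adjacency: a chain of areas
for i in range(m - 1):
    W[i, i + 1] = W[i + 1, i] = 1
D = np.diag(W.sum(axis=1))              # diagonal of neighbor counts

tau, alpha = 2.0, 0.95
Q = tau * (D - alpha * W)               # proper CAR precision, PD for alpha < 1
cov = np.linalg.inv(Q)
phi = rng.multivariate_normal(np.zeros(m), cov, size=2000)  # spatial effects
print("correlation between adjacent areas:",
      round(float(np.corrcoef(phi[:, 10], phi[:, 11])[0, 1]), 3))
```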

Keywords: Besag2, CAR models, disease mapping, INLA, spatial models

Procedia PDF Downloads 250
1225 Financial Markets Integration between Morocco and France: Implications on International Portfolio Diversification

Authors: Abdelmounaim Lahrech, Hajar Bousfiha

Abstract:

This paper examines equity market integration between Morocco and France and its implications for international portfolio diversification. In the absence of stock market linkages, Morocco can act as a diversification destination for European investors, allowing higher returns at a level of risk comparable to developed markets. In contrast, this attractiveness is limited if both financial markets show significant linkage. The research empirically measures financial market integration by capturing the conditional correlation between the two markets within a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) framework, using the Dynamic Conditional Correlation (DCC) model of Engle (2002) to track the correlations. The findings show that there is no important increase over the years in the correlation between the Moroccan and French equity markets, even though France is considered Morocco's first trading partner. Failing to find evidence of stock index linkage between the two countries, the volatility series of each market were assumed to change over time separately. The study reveals that despite the important historical and economic linkages between Morocco and France, there is no evidence that the equity markets follow them. The small correlations and their stationarity over time show that, over the 10 years studied, correlations fluctuated around a stable mean with no significant change in level. Different explanations can be offered for the absence of market linkage between the two equity markets.

Keywords: equity market linkage, DCC GARCH, international portfolio diversification, Morocco, France

Procedia PDF Downloads 420
1224 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain

Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki

Abstract:

The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
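
A minimal simulation sketch of the BAS idea (our own parameterization, not the paper's: AR(1) demand, an exponential-smoothing forecast, an order-up-to rule, and an absorption weight theta applied to the retailer's outgoing orders; theta = 0 means no absorption):

```python
import numpy as np

rng = np.random.default_rng(14)
T, rho, mu, sigma, L = 20000, 0.7, 20.0, 2.0, 2   # L = lead time (assumed)

d = np.empty(T)                                   # AR(1) customer demand
d[0] = mu
for t in range(1, T):
    d[t] = mu + rho * (d[t - 1] - mu) + rng.normal(0, sigma)

def retailer_orders(theta):
    orders = np.empty(T)
    orders[0] = mu
    f_prev, target_prev = mu, (L + 1) * mu
    for t in range(1, T):
        f = 0.2 * d[t] + 0.8 * f_prev             # exponential-smoothing forecast
        target = (L + 1) * f                      # order-up-to level
        raw = d[t] + (target - target_prev)       # standard order-up-to order
        orders[t] = (1 - theta) * raw + theta * orders[t - 1]   # absorption
        f_prev, target_prev = f, target
    return orders

for theta in (0.0, 0.3, 0.6):
    o = retailer_orders(theta)
    print(f"theta = {theta}: order-variance ratio = {o.var() / d.var():.2f}")
```

A ratio above one signals the bullwhip effect; increasing theta pushes the retailer's order variance down, absorbing the amplification before it reaches the manufacturer.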

Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy

Procedia PDF Downloads 49
1223 Genetic Analysis of Iron, Phosphorus, Potassium and Zinc Concentration in Peanut

Authors: Ajay B. C., Meena H. N., Dagla M. C., Narendra Kumar, Makwana A. D., Bera S. K., Kalariya K. A., Singh A. L.

Abstract:

The high energy value, protein content, and minerals make peanut a rich source of nutrition at comparatively low cost. Basic information on the genetics and inheritance of these mineral elements is very scarce. Hence, in the present study, the inheritance (using an additive-dominance model) and association of mineral elements were studied in two peanut crosses. Dominance variance (H) played an important role in the inheritance of P, K, Fe, and Zn in peanut pods. The average degree of dominance for most of the traits was greater than unity, indicating over-dominance for these traits. Significant associations were also observed among mineral elements in both the F2 and F3 generations, but pod yield had no association with the mineral elements (with few exceptions). Diallel/biparental mating could be followed to identify high-yielding and mineral-dense segregants.

Keywords: correlation, dominance variance, mineral elements, peanut

Procedia PDF Downloads 389
1222 A Profile of an Exercise Addict: The Relationship between Exercise Addiction and Personality

Authors: Klary Geisler, Dalit Lev-Arey, Yael Hacohen

Abstract:

It is a well-known fact that exercise has favorable effects on people's physical health, as well as their mental well-being. Excessive exercise, however, may have negative consequences (e.g., physical injuries, neglect of everyday responsibilities such as work and family life). Lately, there has been growing interest in exercise addiction, sometimes referred to as exercise dependence, which is defined as a craving for physical activity that results in extreme workout sessions and generates negative physiological and psychological symptoms (e.g., withdrawal symptoms, tolerance, social conflict). Exercise addiction is considered a behavioral addiction, yet it was not included in the latest editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), due to a lack of significant research. Specifically, there is scarce research on the relationship between exercise addiction and personality dimensions. The purpose of the current research was to examine the relationship between primary exercise addiction symptoms and the Big Five dimensions, perfectionism (high performance expectations and self-critical performance evaluations), and subjective affect. Participants were 152 trainees in a variety of aerobic sports activities (running, cycling, swimming) recruited through sports groups and trainers. 88% of participants trained for at least 5 hours per week, and 24% trained above 10 hours per week. To test the predictive ability of the IVs, a hierarchical linear regression with forced block entry was performed. It was found that neuroticism significantly predicted exercise addiction symptoms (20% of the variance, p < 0.001), while conscientiousness was negatively correlated with exercise addiction symptoms (14% of the variance, p < 0.05); both had a unique contribution. The other Big Five dimensions (agreeableness, openness, and extraversion) did not contribute to the dependent variable. Moreover, maladaptive perfectionism (self-critical performance evaluations) significantly predicted exercise addiction symptoms as well (10% of the variance, p < 0.05). The overall regression model explained 54% of the variance.

Keywords: big five, conscientiousness, excessive exercise, exercise addiction, neuroticism, perfectionism, personality

Procedia PDF Downloads 198
1221 Combined Localization, Beamforming, and Interference Threshold Estimation in Underlay Cognitive System

Authors: Omar Nasr, Yasser Naguib, Mohamed Hafez

Abstract:

This paper aims to provide an innovative solution for blind interference threshold estimation in an underlay cognitive network, to be used in adaptive beamforming by the secondary-user transmitter and receiver. For the task of threshold estimation, blind detection of modulation and SNR is used. For beamforming, several localization algorithms are compared in order to settle on the best one for the cognitive environment. Beamforming algorithms such as LCMV (Linearly Constrained Minimum Variance) and MVDR (Minimum Variance Distortionless Response) are also proposed and compared. The idea of simply nulling the primary user once its location is known is discussed against the idea of working under an interference threshold.
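
The MVDR weights have the closed form w = R^{-1}a / (a^H R^{-1}a); a sketch for a uniform linear array with one assumed secondary-user direction and one strong interferer:

```python
import numpy as np

rng = np.random.default_rng(15)
M, d = 8, 0.5   # array elements and spacing in wavelengths (assumed)
steer = lambda th: np.exp(-2j * np.pi * d * np.arange(M)
                          * np.sin(np.radians(th)))

a_sig, a_intf = steer(10.0), steer(40.0)        # assumed directions
n_snap = 500
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
q = 3 * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap))
noise = 0.1 * (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap)))
X = np.outer(a_sig, s) + np.outer(a_intf, q) + noise

R = X @ X.conj().T / n_snap                     # sample covariance
w = np.linalg.solve(R, a_sig)
w /= a_sig.conj() @ w                           # distortionless toward the SU

for th in (10.0, 40.0):
    print(f"array gain toward {th} deg: {abs(w.conj() @ steer(th)):.3f}")
```

The gain stays at one toward the steered direction while the interferer direction is strongly suppressed, which is exactly the minimum-variance, distortionless behavior the abstract refers to.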

Keywords: cognitive radio, underlay, beamforming, MUSIC, MVDR, LCMV, threshold estimation

Procedia PDF Downloads 559
1220 The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions

Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani

Abstract:

To understand the relationships among characteristics and to determine the factors that explain the characteristics under consideration in rapeseed varieties, 10 rapeseed genotypes were grown in a completely randomized design with three replications under drought stress in 2009-2010 in the research field of the agriculture college, Islamic Azad University, Karaj branch. In this research, 11 characteristics related to the growth, production, and yield stages were considered. Analysis of variance showed that there are significant differences among the rapeseed varieties' characteristics. Simple correlation coefficients under both conditions, normal and drought stress, indicate that seed yield components per plant and pod number have a positive and significant correlation with seed yield at the 1% probability level, and selection on the basis of these characteristics is effective for improving yield. Under normal and drought stress, factor analysis retaining factors with eigenvalues greater than one identified five factors under normal conditions, together accounting for 82.72% of the total variance, while under drought stress four factors were identified, accounting for 76.78% of the total variance. Considering the overall results of this research, the effective characteristics identified by factor analysis, and the different components of these characteristics, can be used in breeding programs to select suitable, drought-tolerant genotypes under drought stress conditions.

Keywords: correlation, drought stress, factor analysis, rapeseed

Procedia PDF Downloads 156
1219 EWMA and MEWMA Control Charts for Monitoring Mean and Variance in Industrial Processes

Authors: L. A. Toro, N. Prieto, J. J. Vargas

Abstract:

There are many control charts for monitoring the mean and variance. Among these, the X̄-R, X̄-S, S², Hotelling, and Shewhart control charts, to mention some, are widely used for monitoring the mean and variance in industrial processes. In particular, Shewhart charts are based only on the information about the process contained in the current observation and ignore any information given by the entire sequence of points; in other words, the Shewhart chart is a control chart without memory. Consequently, Shewhart control charts are found to be less sensitive in detecting smaller shifts, particularly shifts smaller than 1.5 times the standard deviation. Such small shifts are important in many industrial applications. In this study, an effective alternative to the Shewhart control chart was implemented: for univariate processes, an Exponentially Weighted Moving Average (EWMA) control chart was developed, and for multivariate processes, a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. Both of these charts are based on memory and perform better than the Shewhart chart in detecting smaller shifts. In these charts, the information from past samples is accumulated up to the current sample, and then the decision about the state of process control is taken. This characteristic of the EWMA and MEWMA charts is of paramount importance when it is necessary to control industrial processes, because it makes it possible to correct or predict problems in the processes before they reach a dangerous limit.
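
An EWMA chart is a short recursion; a sketch with the textbook choices lambda = 0.2 and L = 2.86, and a small simulated shift of the kind a Shewhart chart tends to miss:

```python
import numpy as np

rng = np.random.default_rng(16)
mu0, sigma, lam, L = 10.0, 1.0, 0.2, 2.86   # textbook design choices

x = rng.normal(mu0, sigma, 100)
x[60:] += 0.75 * sigma   # small sustained shift, below the Shewhart radar

z = np.empty_like(x)
z_prev = mu0
for t in range(len(x)):
    z_prev = lam * x[t] + (1 - lam) * z_prev   # EWMA recursion
    z[t] = z_prev

# Time-varying limits: var(z_t) = sigma^2 * lam/(2-lam) * (1-(1-lam)^(2t)).
t_idx = np.arange(1, len(x) + 1)
half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t_idx)))
out = np.where(np.abs(z - mu0) > half)[0]
print("first out-of-control signal at sample:",
      int(out[0]) + 1 if out.size else None)
```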

Keywords: control charts, multivariate exponentially weighted moving average (MEWMA), exponentially weighted moving average (EWMA), industrial process control

Procedia PDF Downloads 329