Search results for: multivariate probit model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16730

16640 Some Generalized Multivariate Estimators for Population Mean under Multi Phase Stratified Systematic Sampling

Authors: Muqaddas Javed, Muhammad Hanif

Abstract:

Generalized multivariate ratio and regression type estimators for the population mean are suggested under multi-phase stratified systematic sampling (MPSSS) using multi-auxiliary information. The estimators are developed under two different situations regarding the availability of auxiliary information, and expressions for their bias and mean square error (MSE) are derived. Special cases of the suggested estimators are also discussed, and a simulation study is conducted to observe their performance.
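As a minimal illustration of the ratio-type idea the abstract builds on (a sketch with a synthetic population and simple random sampling, not the authors' multi-phase stratified systematic design; all values are hypothetical):

```python
import random

random.seed(42)

# Hypothetical finite population: study variable y strongly related to an
# auxiliary variable x whose population mean is assumed known.
N = 10_000
x = [random.uniform(10, 50) for _ in range(N)]
y = [2.0 * xi + random.gauss(0, 5) for xi in x]
X_bar = sum(x) / N          # known auxiliary population mean
Y_bar = sum(y) / N          # target quantity

n = 200
idx = random.sample(range(N), n)     # simple random sample stands in for MPSSS
y_bar = sum(y[i] for i in idx) / n
x_bar = sum(x[i] for i in idx) / n

y_ratio = y_bar * X_bar / x_bar      # classical ratio estimator
print(round(y_bar, 2), round(y_ratio, 2), round(Y_bar, 2))
```

With a strong linear relation between y and x, the ratio estimate typically lands much closer to the true mean than the plain sample mean, which is the gain the multi-auxiliary estimators generalize.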

Keywords: generalized estimators, multi-phase sampling, stratified random sampling, systematic sampling

Procedia PDF Downloads 701
16639 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

In actuarial analyses of lifetime, only models accounting for observable risk factors have typically been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on hazard rates. These covariates may fail to fully account for the true lifetime distribution, owing to the existence of another random variable (frailty) that is otherwise ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. Performance is investigated in terms of the bias of point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real data set studies showed differences between the classical Cox model and the shared frailty model.
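The shared-frailty mechanism described above can be sketched in a few lines: conditional on an unobserved gamma frailty shared within a group, lifetimes are exponential, and the frailty alone induces within-group dependence (all parameter values below are hypothetical):

```python
import random

random.seed(0)

# Sketch: pairs of lifetimes sharing a gamma frailty Z with E[Z] = 1.
# Conditional on Z, each lifetime is exponential with hazard Z * h0.
def simulate_pairs(n_pairs, h0=0.1, frailty_var=0.2):
    k = 1.0 / frailty_var                    # shape so that E[Z] = 1, Var[Z] = frailty_var
    pairs = []
    for _ in range(n_pairs):
        z = random.gammavariate(k, 1.0 / k)  # frailty shared within the pair
        pairs.append((random.expovariate(z * h0), random.expovariate(z * h0)))
    return pairs

def corr(pairs):
    a = [p[0] for p in pairs]; b = [p[1] for p in pairs]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a); vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

c_frail = corr(simulate_pairs(4000))                      # frailty induces dependence
c_indep = corr(simulate_pairs(4000, frailty_var=1e-6))    # ~no frailty, ~no dependence
print(round(c_frail, 3), round(c_indep, 3))
```

Ignoring the frailty term, as the classical CPH model does, means ignoring exactly this induced correlation.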

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 331
16638 Irrigation Water Quality Evaluation Based on Multivariate Statistical Analysis: A Case Study of Jiaokou Irrigation District

Authors: Panpan Xu, Qiying Zhang, Hui Qian

Abstract:

Groundwater is the main source of water supply in the Guanzhong Basin, China. To investigate the quality of groundwater for agricultural purposes in the Jiaokou Irrigation District, located in the east of the Guanzhong Basin, 141 groundwater samples were collected and analysed for major ions (K+, Na+, Mg2+, Ca2+, SO42-, Cl-, HCO3-, and CO32-), pH, and total dissolved solids (TDS). Sodium percentage (Na%), residual sodium carbonate (RSC), magnesium hazard (MH), and potential salinity (PS) were applied for irrigation water quality assessment. In addition, multivariate statistical techniques were used to identify the underlying hydrogeochemical processes. Results show that the TDS content depends mainly on Cl-, Na+, Mg2+, and SO42-, and that the HCO3- content is generally high except in the eastern sand area. These patterns reflect complex hydrogeochemical processes, such as dissolution of carbonate minerals (dolomite and calcite), gypsum, halite, and silicate minerals, cation exchange, and evaporation and concentration. The average evaluation levels of Na%, RSC, MH, and PS for irrigation water quality are doubtful, good, unsuitable, and injurious to unsatisfactory, respectively. It is therefore necessary for decision makers to consider the indicators jointly in order to evaluate irrigation water quality reasonably.
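The four irrigation indices named in the abstract have standard definitions in terms of ion concentrations expressed in meq/L; a sketch with hypothetical sample values:

```python
# Standard definitions of the four indices, with all ion concentrations in meq/L
# (the sample values below are hypothetical, not from the study).
def irrigation_indices(na, k, ca, mg, cl, so4, hco3, co3):
    na_pct = 100.0 * (na + k) / (ca + mg + na + k)   # sodium percentage (Na%)
    rsc = (hco3 + co3) - (ca + mg)                   # residual sodium carbonate
    mh = 100.0 * mg / (ca + mg)                      # magnesium hazard
    ps = cl + so4 / 2.0                              # potential salinity
    return na_pct, rsc, mh, ps

na_pct, rsc, mh, ps = irrigation_indices(na=6.0, k=0.2, ca=3.0, mg=2.5,
                                         cl=4.0, so4=3.0, hco3=4.5, co3=0.1)
print(round(na_pct, 1), round(rsc, 2), round(mh, 1), round(ps, 2))
```

Each index is then compared against its published suitability thresholds to assign the qualitative levels reported above.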

Keywords: irrigation water quality, multivariate statistical analysis, groundwater, hydrogeochemical process

Procedia PDF Downloads 116
16637 Ranking Effective Factors on Strategic Planning to Achieve Organization Objectives in Fuzzy Multivariate Decision-Making Technique

Authors: Elahe Memari, Ahmad Aslizadeh, Ahmad Memari

Abstract:

Strategic planning is today counted among the most important duties of senior directors in any organization. It allows organizations to implement compiled strategies and reach greater competitive advantage than their competitors. The present research work prepares and ranks strategies from the factors affecting strategic planning in the State Road Management and Transportation Organization, in order to indicate to organization managers the role of organizational factors in the efficiency of the process. Connections between six main factors were studied: improvement of strategic thinking in senior managers, improvement of the organization's business processes, rationalization of resource allocation across different parts of the organization, coordination and conformity of the strategic plan with organization needs, adjustment of organization activities to environmental changes, and reinforcement of organizational culture. All of these factors were validated by the implemented tests and then ranked using a fuzzy multivariate decision-making technique.
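A crisp (non-fuzzy) TOPSIS pass over hypothetical ratings illustrates the ranking step; the paper's fuzzy variant would replace these crisp scores with fuzzy numbers, but the closeness-to-ideal logic is the same:

```python
# Minimal crisp TOPSIS sketch; the factor names follow the abstract, while the
# ratings, criteria, and weights are hypothetical.
def topsis(matrix, weights):
    # vector-normalize each column, then weight it
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(len(weights))]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]        # all criteria treated as benefits
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))   # relative closeness to the ideal
    return scores

factors = ["strategic thinking", "business process", "resource allocation",
           "plan conformity", "environmental fit", "culture"]
ratings = [[9, 8, 7], [7, 8, 6], [6, 5, 7], [8, 7, 8], [5, 6, 6], [7, 9, 8]]
scores = topsis(ratings, weights=[0.5, 0.3, 0.2])
ranking = sorted(zip(factors, scores), key=lambda t: -t[1])
print(ranking[0][0])
```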

Keywords: Fuzzy TOPSIS, improvement of organization business process, multivariate decision-making, strategic planning

Procedia PDF Downloads 376
16636 Applying Multivariate and Univariate Analysis of Variance on Socioeconomic, Health, and Security Variables in Jordan

Authors: Faisal G. Khamis, Ghaleb A. El-Refae

Abstract:

Many researchers have studied socioeconomic, health, and security variables in developed countries; however, very few studies have used multivariate analysis in developing countries. The current study contributes to the scarce literature on the determinants of the variance in socioeconomic, health, and security factors. The questions raised were whether the independent variables (IVs) of governorate and year impact the socioeconomic, health, and security dependent variables (DVs) in Jordan, whether the marginal mean of each DV in each governorate and in each year is significant, which governorates are similar in the means of each DV, and how these DVs vary. The main objectives were to determine the sources of variance in the DVs, collectively and separately, and to test which governorates are similar and which diverge for each DV. The research design combined time series and cross-sectional analysis. The main hypotheses are that the IVs affect the DVs collectively and separately, and multivariate and univariate analyses of variance were carried out to test them. Data covering the 12 governorates of Jordan over 15 years (2000–2015) were drawn from several Jordanian statistical yearbooks. We investigated the effect of the two factors of governorate and year on the four DVs of divorce rate, mortality rate, unemployment percentage, and crime rate. All DVs were transformed to multivariate normality, and descriptive statistics were calculated for each DV. Based on the multivariate analysis of variance, we found a significant effect of the IVs on the DVs with p < .001. Based on the univariate analysis, we found a significant effect of the IVs on each DV with p < .001, except that the effect of the year factor on unemployment was not significant (p = .642). The grand and marginal means of each DV in each governorate and each year were significant based on a 95% confidence interval. Most governorates are not similar in the DVs with p < .001.
We conclude that the two factors produce significant effects on the DVs, collectively and separately. Based on these findings, the government can distribute its financial and physical resources to governorates more efficiently. Identifying the sources of variance that contribute to variation in the DVs can help inform focused prevention efforts.
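The univariate step can be illustrated with a one-way ANOVA F statistic for a single DV across hypothetical governorate groups (the study itself uses two factors and MANOVA; the data below are invented for illustration):

```python
# One-way ANOVA F statistic: between-group mean square over within-group mean square.
def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical unemployment percentages for three governorates over four years
g1 = [12.1, 13.0, 12.6, 12.8]
g2 = [14.9, 15.3, 15.1, 14.7]
g3 = [12.4, 12.9, 13.1, 12.5]
f = one_way_anova_f([g1, g2, g3])
print(round(f, 2))
```

A large F relative to the F(k-1, n-k) reference distribution indicates that the governorate means differ, which is the kind of effect the study reports at p < .001.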

Keywords: ANOVA, crime, divorce, governorate, hypothesis test, Jordan, MANOVA, means, mortality, unemployment, year

Procedia PDF Downloads 239
16635 Supervised-Component-Based Generalised Linear Regression with Multiple Explanatory Blocks: THEME-SCGLR

Authors: Bry X., Trottier C., Mortier F., Cornu G., Verron T.

Abstract:

We address component-based regularization of a Multivariate Generalized Linear Model (MGLM). A set of random responses Y is assumed to depend, through a GLM, on a set X of explanatory variables, as well as on a set T of additional covariates. X is partitioned into R conceptually homogeneous blocks X1, ..., XR, viewed as explanatory themes. Variables in each Xr are assumed to be many and redundant; thus, Generalised Linear Regression (GLR) demands regularization with respect to each Xr. By contrast, variables in T are assumed to have been selected so as to require no regularization. Regularization is performed by searching each Xr for an appropriate number of orthogonal components that both contribute to modelling Y and capture relevant structural information in Xr. We propose a very general criterion to measure the structural relevance (SR) of a component in a block, and show how to take SR into account within a Fisher-scoring-type algorithm in order to estimate the model. We also show how to deal with mixed-type explanatory variables. The method, named THEME-SCGLR, is tested on simulated data.

Keywords: Component-Model, Fisher Scoring Algorithm, GLM, PLS Regression, SCGLR, SEER, THEME

Procedia PDF Downloads 373
16634 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. Regarding the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties, and the forecasting equations of the BINARMA(1,1) model are also derived. In a simulation study, BINARMA(1,1) count data are generated using multivariate Poisson R code for the innovation terms; the performance of the BINARMA(1,1) model is then assessed, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, with covariates including policemen, daily patrols, speed cameras, traffic lights, and roundabouts. The CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius, and the forecasting equations provide reliable one-step-ahead forecasts.
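The binomial thinning operator at the core of the model is easy to sketch. The following simplified, stationary BINAR(1)-style simulation (not the full non-stationary BINARMA(1,1), and with hypothetical parameters) shows how a shared Poisson innovation induces cross-correlation between the two count series:

```python
import math
import random

random.seed(1)

def thin(alpha, x):
    # binomial thinning alpha ∘ x: each of the x counts survives w.p. alpha
    return sum(1 for _ in range(x) if random.random() < alpha)

def poisson(lam):
    # Knuth's method, adequate for small lambda
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def simulate(T=5000, a1=0.4, a2=0.3, lam=2.0, lam_common=1.0):
    x = y = 0
    xs, ys = [], []
    for _ in range(T):
        common = poisson(lam_common)        # shared innovation -> cross-correlation
        x = thin(a1, x) + poisson(lam) + common
        y = thin(a2, y) + poisson(lam) + common
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate()
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
print(round(mx, 2), round(my, 2), round(cov, 2))  # mean of x near (2+1)/(1-0.4) = 5
```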

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood (CML)

Procedia PDF Downloads 102
16633 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin

Authors: Jose Flores, Nadia Gamboa

Abstract:

Proper water quality management requires the establishment of a monitoring network, and evaluation of the efficiency of water quality monitoring networks is needed to ensure high-quality data collection for critical chemical parameters. Unfortunately, in some Latin American countries water quality monitoring programs are not sustainable in terms of recording historical data or covering environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal component analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites, as well as the critical water quality parameters, in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, shows socio-environmental conflicts due to the economic activities developed in the area. Water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization is caused by the extensive use of groundwater in the lower part of the basin. Since the 1980s, water quality in the basin has been assessed intermittently by public and private organizations, and the National Water Authority has recently established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, these instruments have never been applied for that purpose in Peru. For this reason, the main contribution of this study is to demonstrate that multivariate statistical techniques can serve as instruments for optimizing monitoring networks, using the smallest number of monitoring sites and the most significant water quality parameters, which would reduce costs and improve water quality management in Peru.
The main socio-economic activities developed in the basin and the principal stakeholders related to its water management are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.
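The HCA step can be sketched with a toy single-linkage clustering of monitoring sites described by a few standardized parameters (all site values below are hypothetical):

```python
# Toy single-linkage agglomerative clustering of monitoring sites.
def single_linkage(points, n_clusters):
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # single linkage: minimum pairwise Euclidean distance between clusters
        return min(sum((points[i][k] - points[j][k]) ** 2
                       for k in range(len(points[i]))) ** 0.5
                   for i in a for j in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters

# rows: sites; columns: e.g. standardized conductivity, trace metal, pH
sites = [(0.1, 0.2, 0.0), (0.2, 0.1, 0.1),   # lowland-like sites
         (2.1, 1.9, 1.8), (2.0, 2.2, 1.9),   # mining-impacted-like sites
         (0.0, 0.3, 0.2)]
clusters = single_linkage(sites, 2)
print(sorted(sorted(c) for c in clusters))
```

Sites that cluster together are statistically redundant, which is how a network can be thinned to fewer representative stations.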

Keywords: PCA, HCA, Jequetepeque, multivariate statistical

Procedia PDF Downloads 329
16632 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data, and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director or the director owns another firm. The number of companies owned or managed by the CEO and the number of subsidiaries were calculated to measure these relationships, and a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were included because they capture motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies owned or managed by the CEO has a significant impact on the likelihood of financial fraud: the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, the model based on both relational and financial data gives a high level of forecast accuracy.
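A probit fit itself is straightforward to sketch. Below, a two-parameter probit model is estimated by gradient ascent on the log-likelihood using synthetic data; this illustrates the estimation method only, not the paper's fraud model or data (the true coefficients here are beta0 = 0.2, beta1 = 1.0 by construction):

```python
import math
import random

random.seed(7)

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# synthetic data: P(y = 1 | x) = Phi(0.2 + 1.0 * x)
n = 2000
x = [random.uniform(-2, 2) for _ in range(n)]
y = [1 if random.random() < Phi(0.2 + 1.0 * xi) else 0 for xi in x]

b0, b1, lr = 0.0, 0.0, 0.5
for _ in range(500):
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        z = b0 + b1 * xi
        p = min(max(Phi(z), 1e-9), 1 - 1e-9)
        s = phi(z) * (yi - p) / (p * (1 - p))   # probit score contribution
        g0 += s
        g1 += s * xi
    b0 += lr * g0 / n     # averaged-gradient ascent on the log-likelihood
    b1 += lr * g1 / n

print(round(b0, 2), round(b1, 2))   # should recover roughly (0.2, 1.0)
```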

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 87
16631 Multivariate Output-Associative RVM for Multi-Dimensional Affect Predictions

Authors: Achut Manandhar, Kenneth D. Morton, Peter A. Torrione, Leslie M. Collins

Abstract:

The current trends in affect recognition research are to consider continuous observations from spontaneous natural interactions in people using multiple feature modalities, to represent affect in terms of continuous dimensions, to incorporate spatio-temporal correlation among affect dimensions, and to provide fast affect predictions. These research efforts have been propelled by a growing push to develop affect recognition systems that can be implemented to enable seamless real-time human-computer interaction in a wide variety of applications. Motivated by these desired attributes of an affect recognition system, in this work a multi-dimensional affect prediction approach is proposed by integrating the multivariate Relevance Vector Machine (MVRVM) with a recently developed Output-associative Relevance Vector Machine (OARVM) approach. The resulting approach can provide fast continuous affect predictions by jointly modeling the multiple affect dimensions and their correlations. Experiments on the RECOLA database show that the proposed approach performs competitively with the OARVM while providing faster predictions during testing.

Keywords: dimensional affect prediction, output-associative RVM, multivariate regression, fast testing

Procedia PDF Downloads 261
16630 Facility Anomaly Detection with Gaussian Mixture Model

Authors: Sunghoon Park, Hank Kim, Jinwon An, Sungzoon Cho

Abstract:

The Internet of Things allows one to collect data from facilities, which are then used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range for a sensor value, defined between a lower control limit and an upper control limit, and declaring as an anomaly anything falling outside it. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach which takes many sensor values into account at the same time. In particular, a Gaussian Mixture Model is used, trained to maximize the likelihood using the Expectation-Maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian Information Criterion, and the negative log-likelihood is used as an anomaly score. The actual usage scenario goes as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system were used to test the model.
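The pipeline above (EM-fitted mixture, negative log-likelihood as anomaly score, threshold alarm) can be sketched in one dimension with synthetic sensor readings; a real deployment would be multivariate and would choose the number of components by BIC:

```python
import math
import random

random.seed(3)

# Two-component 1-D Gaussian mixture fitted by EM (synthetic sensor values).
def em_gmm(data, iters=50):
    w, mu, sd = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) /
                    (sd[k] * math.sqrt(2 * math.pi)) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sd[k] = max((sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk) ** 0.5, 1e-3)
    return w, mu, sd

def anomaly_score(x, w, mu, sd):
    # negative log-likelihood under the fitted mixture
    like = sum(w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) /
               (sd[k] * math.sqrt(2 * math.pi)) for k in range(2))
    return -math.log(max(like, 1e-300))

data = ([random.gauss(20.0, 1.0) for _ in range(300)] +   # normal operating mode A
        [random.gauss(35.0, 2.0) for _ in range(300)])    # normal operating mode B
params = em_gmm(data)
normal_score = anomaly_score(21.0, *params)   # reading inside a normal mode
odd_score = anomaly_score(60.0, *params)      # reading far from both modes
print(round(normal_score, 2), round(odd_score, 2))
```

A reading between the modes or far outside them gets a high score even though a single control-limit band around all data might accept it, which is the abstract's argument against univariate limits.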

Keywords: facility anomaly detection, gaussian mixture model, anomaly score, expectation maximization algorithm

Procedia PDF Downloads 244
16629 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of an optimization, and this optimization is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model previously applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data, and we establish the relation between quantum statistical machines and quantum time series via random matrix theory. It is interesting to note that the primary application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; the present model may reveal further insight into quantum chaos.
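Among the estimation tools mentioned, the Kalman filter is simple to sketch in the scalar case: filtering a latent AR(1) state observed in noise, with all parameters assumed known (values below are hypothetical):

```python
import random

random.seed(11)

# Scalar Kalman filter for x_t = a*x_{t-1} + w_t (Var w = q), y_t = x_t + v_t (Var v = r).
def kalman_ar1(ys, a=0.9, q=0.5, r=1.0):
    x_est, p = 0.0, 1.0                     # state estimate and its variance
    out = []
    for y in ys:
        x_pred = a * x_est                  # predict step
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)           # Kalman gain
        x_est = x_pred + k * (y - x_pred)   # update with observation y
        p = (1 - k) * p_pred
        out.append(x_est)
    return out

# simulate a latent AR(1) state and noisy observations of it
truth, state = [], 0.0
for _ in range(500):
    state = 0.9 * state + random.gauss(0, 0.5 ** 0.5)
    truth.append(state)
obs = [s + random.gauss(0, 1.0) for s in truth]

est = kalman_ar1(obs)
mse_raw = sum((o - t) ** 2 for o, t in zip(obs, truth)) / len(truth)
mse_kf = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
print(round(mse_raw, 3), round(mse_kf, 3))   # filtering should cut the error roughly in half here
```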

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 434
16628 The Role of Institutional Quality and Institutional Quality Distance on Trade: The Case of Agricultural Trade within the Southern African Development Community Region

Authors: Kgolagano Mpejane

Abstract:

The study applies a New Institutional Economics (NIE) analytical framework to trade in developing economies by assessing the impacts of institutional quality and institutional quality distance on agricultural trade, using panel data for 15 Southern African Development Community (SADC) countries over the years 1991-2010. The role of institutions in agricultural trade has not been accorded the necessary attention in the literature, particularly for developing economies. The paper therefore empirically tests the gravity model of international trade by measuring the impact of political, economic, and legal institutions on intra-SADC agricultural trade. The gravity model is noted for its explanatory power and strong theoretical foundation; however, it has statistical shortcomings in dealing with zero trade values and heteroscedastic residuals, leading to biased results. This study therefore employs a two-stage Heckman selection model with a probit equation to estimate the influence of institutions on agricultural trade. The selection stages include the inverse Mills ratio to account for the selection bias of the gravity model; the Heckman model accommodates zero trade values and is robust in the presence of heteroscedasticity. The empirical results support the NIE premise that institutions matter in trade. They demonstrate that institutions determine bilateral agricultural trade on different margins, with political institutions having a positive and significant influence on bilateral agricultural trade flows within the SADC region, while legal and economic institutions have significant negative effects. Furthermore, the results confirm that institutional quality distance influences agricultural trade: legal and political institutional quality distance have a positive and significant influence on bilateral agricultural trade, while the influence of economic institutional quality distance is negative and insignificant.
The results imply that non-trade barriers, in the form of institutional quality and institutional quality distance, are significant factors limiting intra-SADC agricultural trade. Gains from intra-SADC agricultural trade can therefore be attained through the improvement of institutions within the region.
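The correction at the heart of the two-stage Heckman estimator is the inverse Mills ratio computed from the first-stage probit index; a minimal sketch of that term:

```python
import math

# Inverse Mills ratio lambda(z) = phi(z) / Phi(z): the second-stage regression
# includes this term, evaluated at each observation's first-stage probit index z,
# to correct for selection (here only the term itself is computed).
def inverse_mills(z):
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return pdf / cdf

print(round(inverse_mills(0.0), 4))   # -> 0.7979
```

Observations that barely clear the selection threshold (low z) carry a large correction, while near-certain trading pairs (high z) carry almost none.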

Keywords: agricultural trade, institutions, gravity model, SADC

Procedia PDF Downloads 128
16627 Prediction of Marine Ecosystem Changes Based on the Integrated Analysis of Multivariate Data Sets

Authors: Prozorkevitch D., Mishurov A., Sokolov K., Karsakov L., Pestrikova L.

Abstract:

The current body of knowledge about the marine environment and the dynamics of marine ecosystems includes a huge amount of heterogeneous data collected over decades, generally spanning a wide range of hydrological, biological, and fishery data. Marine researchers collect these data and analyze how and why the ecosystem has changed from past to present. Based on these historical records and the linkages between the processes, it is possible to predict future changes. Multivariate analysis of trends and their interconnections in the marine ecosystem may thus be used as an instrument for predicting further ecosystem evolution. A wide range of information about the components of the marine ecosystem, spanning more than 50 years, needs to be used to investigate how these data arrays can help to predict the future.

Keywords: Barents Sea ecosystem, abiotic, biotic, data sets, trends, prediction

Procedia PDF Downloads 84
16626 Effect of Micro Credit Access on Poverty Reduction among Small Scale Women Entrepreneurs in Ondo State, Nigeria

Authors: Adewale Oladapo, C. A. Afolami

Abstract:

The study analyzed the effect of micro credit access on poverty reduction among small-scale women entrepreneurs in Ondo State, Nigeria. Primary data were collected in a cross-sectional survey of 100 randomly selected women entrepreneurs, drawn through a multistage sampling process covering four local government areas (LGAs). Data collected included the socioeconomic characteristics of respondents, access to micro credit, sources of micro credit, and the constraints faced by entrepreneurs in sourcing micro credit. Data were analyzed using descriptive statistics, the Foster, Greer and Thorbecke (FGT) poverty measure, the Gini coefficient, and probit regression analysis. The study found that most respondents were within the age range of 31-40 years, with a mean age of 38.6 years. Most (56.0%) of the respondents were educated to primary school level, and the majority (87.0%) were married with a fairly large household size (4-5). The poverty index analysis revealed that most (67%) of the sampled respondents were poor. The probit regression analysis showed that income was a significant variable in micro credit access, while the Gini coefficient revealed a very high income inequality among the respondents. The study concluded that most of the respondents were poor and that return on investment (income) was an important variable increasing respondents' chances of obtaining a micro-credit loan, and recommended that the income realized by entrepreneurs be properly documented to facilitate loan accessibility.
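The FGT poverty measure and Gini coefficient used above have compact standard forms; a sketch with a hypothetical income list and poverty line:

```python
# Foster-Greer-Thorbecke index: average of ((z - y)/z)^alpha over the poor,
# divided by total population size; alpha = 0 gives the headcount ratio.
def fgt(incomes, z, alpha):
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / len(incomes)

# Gini coefficient via the rank-weighted (sorted) formula.
def gini(incomes):
    ys = sorted(incomes)
    n = len(ys)
    cum = sum((i + 1) * y for i, y in enumerate(ys))
    return (2.0 * cum) / (n * sum(ys)) - (n + 1.0) / n

incomes = [40, 55, 60, 80, 120, 150, 300, 500]   # hypothetical incomes
z = 100                                          # hypothetical poverty line
print(fgt(incomes, z, 0), round(fgt(incomes, z, 1), 3), round(gini(incomes), 3))
```

FGT(0) is the headcount (here half the sample is poor), FGT(1) the poverty gap, and a Gini near 0.46 signals the kind of pronounced inequality the study reports.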

Keywords: entrepreneurs, income, micro-credit, poverty

Procedia PDF Downloads 99
16625 Multivariate Analytical Insights into Spatial and Temporal Variation in Water Quality of a Major Drinking Water Reservoir

Authors: Azadeh Golshan, Craig Evans, Phillip Geary, Abigail Morrow, Zoe Rogers, Marcel Maeder

Abstract:

Twenty-two physicochemical variables were determined in water samples collected weekly from January to December 2013 at three sampling stations located within a major drinking water reservoir. Classical Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) analysis was used to investigate the environmental factors associated with the physicochemical variability of the water samples at each of the sampling stations. Matrix augmentation MCR-ALS (MA-MCR-ALS) was also applied, and the two sets of results were compared for interpretative clarity. Links between these factors, reservoir inflows, and catchment land uses were investigated and interpreted in relation to the chemical composition of the water and the resolved geographical distribution profiles. The results suggested that the major factors affecting reservoir water quality were those associated with agricultural runoff, with evidence of influence on algal photosynthesis within the water column. Water quality variability within the reservoir was also found to be strongly linked to physical parameters such as water temperature and the occurrence of thermal stratification. The two methods led to similar conclusions; however, MA-MCR-ALS appeared to provide results more amenable to the interpretation of temporal and geographical variation than those obtained through classical MCR-ALS.

Keywords: drinking water reservoir, multivariate analysis, physico-chemical parameters, water quality

Procedia PDF Downloads 255
16624 A Multivariate Statistical Approach for Water Quality Assessment of River Hindon, India

Authors: Nida Rizvi, Deeksha Katyal, Varun Joshi

Abstract:

River Hindon is an important river catering to the demands of the highly populated rural and industrial clusters of western Uttar Pradesh, India. The water quality of river Hindon is deteriorating at an alarming rate due to various industrial, municipal, and agricultural activities. The present study aimed at identifying the pollution sources and quantifying the degree to which each is responsible for the deteriorating water quality of the river. Various water quality parameters were assessed: pH, temperature, electrical conductivity, total dissolved solids, total hardness, calcium, chloride, nitrate, sulphate, biological oxygen demand, chemical oxygen demand, and total alkalinity. Water quality data obtained from eight study sites over one year were subjected to two multivariate techniques, namely principal component analysis and cluster analysis. Principal component analysis was applied with the aim of finding spatial variability and identifying the sources responsible for the river's water quality; three varifactors were obtained after varimax rotation of the initial principal components. Cluster analysis was carried out to classify sampling stations of certain similarity, grouping the eight sites into two clusters. The study reveals that anthropogenic influence (municipal, industrial, waste water and agricultural runoff) was the major source of river water pollution. Thus, this study illustrates the utility of multivariate statistical techniques for the analysis and elucidation of multifaceted data sets, the recognition of pollution sources and factors, and the understanding of temporal and spatial variations in water quality for effective river water quality management.

Keywords: cluster analysis, multivariate statistical techniques, river Hindon, water quality

Procedia PDF Downloads 430
16623 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are highly complex process plants with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages before the final product is obtained. To meet the desired product specifications, process parameters are strictly controlled. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. To maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. After separation, these products are classified by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of hydrocarbons containing six to ten carbons, and kerosene of hydrocarbons containing sixteen to twenty-two carbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
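
As a minimal illustration of the Savitzky-Golay preprocessing step, a sliding-window polynomial fit can be sketched in pure NumPy (the window size and polynomial order below are illustrative, not the values used in the study):

```python
import numpy as np

def savgol_smooth(y, window=5, polyorder=2):
    """Savitzky-Golay smoothing: fit a polynomial of degree `polyorder`
    to each window of `window` points and keep the value at the centre."""
    half = window // 2
    x = np.arange(-half, half + 1)
    ypad = np.pad(np.asarray(y, float), half, mode="edge")
    out = np.empty(len(y))
    for i in range(len(y)):
        coeffs = np.polyfit(x, ypad[i:i + window], polyorder)
        out[i] = np.polyval(coeffs, 0.0)  # local fit evaluated at the centre
    return out
```

Because the filter is a local polynomial fit, it suppresses high-frequency noise while preserving spectral band shapes up to the chosen polynomial order.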

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 89
16622 Risk of Androgen Deprivation Therapy-Induced Metabolic Syndrome-Related Complications for Prostate Cancer in Taiwan

Authors: Olivia Rachel Hwang, Yu-Hsuan Joni Shao

Abstract:

Androgen Deprivation Therapy (ADT) has been a primary treatment for patients with advanced prostate cancer. However, it is associated with numerous adverse effects related to Metabolic Syndrome (MetS), including hypertension, diabetes, hyperlipidemia, heart diseases, and ischemic strokes. Nevertheless, the complications associated with ADT for prostate cancer in Taiwan are not well documented. The purpose of this study is to utilize data from the NHIRD (National Health Insurance Research Database) to examine the trajectory changes of MetS-related complications in men receiving ADT. The risks of developing complications after the treatment were analyzed with a multivariate Cox regression model. Covariates included in the model were the complications before the diagnosis of prostate cancer, the age, and the year at cancer diagnosis. A total of 17,268 patients from 1997 to 2013 were included in this study. The exclusion criteria were patients with any other type of cancer or with existing MetS-related complications. Changes in MetS-related complications were observed in two treatment groups: 1) ADT (n=9042), and 2) non-ADT (n=8226). In the multivariate Cox regression analyses, the ADT group showed an increased risk of hypertension (hazard ratio 1.08, 95% confidence interval 1.03-1.13, P = 0.001) and hyperlipidemia (hazard ratio 1.09, 95% confidence interval 1.01-1.17, P = 0.02) compared with the non-ADT group. For the risk of diabetes, heart diseases, and ischemic strokes, the ADT group showed increased but non-significant hazard ratios. In conclusion, ADT was associated with an increased risk of hypertension and hyperlipidemia in prostate cancer patients in Taiwan. These risks should be considered when deciding on ADT, especially for patients with a known history of hypertension or hyperlipidemia.
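
For reference, hazard ratios and their confidence intervals follow directly from the Cox model coefficients; the sketch below uses hypothetical coefficient values, not numbers taken from the study:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% confidence interval from a Cox regression
    coefficient `beta` and its standard error `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),   # lower bound
            math.exp(beta + z * se))   # upper bound

# Hypothetical coefficient for an ADT indicator covariate:
hr, lo, hi = hazard_ratio_ci(beta=0.077, se=0.024)
```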

Keywords: androgen deprivation therapy, ADT, complications, metabolic syndrome, MetS, prostate cancer

Procedia PDF Downloads 262
16621 Importance of Health and Social Capital to Employment Status of Indigenous Peoples in Canada

Authors: Belayet Hossain, Laura Lamb

Abstract:

This study investigates the importance of health and social capital in determining the labour force status of Canada's Indigenous population, using data from the 2006 Aboriginal Peoples Survey. An instrumental variable ordered probit model is specified and estimated. The study finds that health status and social capital, along with other factors, are important in determining Indigenous peoples' employment status. The results imply that human resource development initiatives for Indigenous Peoples need to be broadened to include health status and social capital. Poor health and the low degree of social inclusion of Indigenous Peoples need to be addressed in order to improve the employment status of Canada's Indigenous Peoples.
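
For readers unfamiliar with the model, an ordered probit turns a linear index into category probabilities by slicing the standard normal CDF at estimated cutpoints; a minimal sketch with hypothetical cutpoints and index value:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordered_probit_probs(xb, cutpoints):
    """Category probabilities for an ordered probit with linear index `xb`
    and increasing cutpoints c1 < c2 < ... (K cutpoints -> K+1 categories)."""
    cdf = [phi(c - xb) for c in cutpoints]
    edges = [0.0] + cdf + [1.0]
    return [edges[k + 1] - edges[k] for k in range(len(edges) - 1)]

# Hypothetical: three labour force categories cut at -1 and 1
probs = ordered_probit_probs(xb=0.5, cutpoints=[-1.0, 1.0])
```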

Keywords: labour force, human capital, social capital, aboriginal people, Canada

Procedia PDF Downloads 269
16620 EWMA and MEWMA Control Charts for Monitoring Mean and Variance in Industrial Processes

Authors: L. A. Toro, N. Prieto, J. J. Vargas

Abstract:

There are many control charts for monitoring mean and variance. Among these, the X̄-R, X̄-S, S², Hotelling's T², and Shewhart control charts, to mention some, are widely used for monitoring mean and variance in industrial processes. In particular, Shewhart charts are based only on the information about the process contained in the current observation and ignore any information given by the entire sequence of points; in other words, the Shewhart chart is a control chart without memory. Consequently, Shewhart control charts are less sensitive in detecting smaller shifts, particularly those smaller than 1.5 times the standard deviation. Such small shifts are important in many industrial applications. In this study, an effective alternative to the Shewhart control chart was implemented: an Exponentially Weighted Moving Average (EWMA) control chart for univariate processes and a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart for multivariate processes. Both of these charts have memory and perform better than the Shewhart chart in detecting smaller shifts. In these charts, information from past samples is accumulated up to the current sample, and then the decision about the process control is taken. This characteristic of the EWMA and MEWMA charts is of paramount importance when controlling industrial processes, because it makes it possible to predict or correct problems in the processes before they reach a dangerous limit.
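
The EWMA recursion and its time-varying control limits can be sketched as follows (the smoothing constant λ and limit width L below are common textbook choices, not values from the study):

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1} with time-varying
    control limits mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i)))."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev  # accumulate past information
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - width, mu0 + width
```

A sustained small mean shift pushes the accumulated statistic across the limits after a few samples, which is exactly where the memoryless Shewhart chart is weakest.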

Keywords: control charts, multivariate exponentially weighted moving average (MEWMA), exponentially weighted moving average (EWMA), industrial process control

Procedia PDF Downloads 327
16619 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes

Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini

Abstract:

Geometric and mechanical properties all influence the resistance of RC structures and may, in certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data clustering analysis of the outcomes are then performed. Results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes seems to be mainly influenced by the combination of the mechanical properties of both the longitudinal reinforcement and the stirrups, and the tensile strength of concrete, of which the latter appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.
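
As a toy illustration of the probabilistic approach (not the paper's material nonlinear model), one can propagate random material and geometry properties through a simplified flexural resistance formula; all distributions and dimensions below are illustrative, not JCSS-calibrated values:

```python
import numpy as np

def simulate_resistance(n=100_000, seed=0):
    """Toy Monte Carlo of a simplified flexural resistance
    M = 0.9 * d * As * fy, with random fy and d (illustrative values)."""
    rng = np.random.default_rng(seed)
    fy = rng.normal(560e6, 30e6, n)        # steel yield strength [Pa]
    d = rng.normal(0.45, 0.005, n)         # effective depth [m]
    As = 4 * np.pi * (0.016 / 2) ** 2      # area of 4 bars of 16 mm [m^2]
    M = 0.9 * d * As * fy                  # simplified lever-arm model [N*m]
    return M, np.percentile(M, 5)          # samples and 5% fractile
```

The resulting sample of resistances can then feed the kind of multivariate statistical evaluation and clustering described above.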

Keywords: modelling, Monte Carlo simulations, probabilistic models, data clustering, reinforced concrete members, structural design

Procedia PDF Downloads 447
16618 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model achieves an F1-score of 83.60%, compared to 24.16% for the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error.
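
The final thresholding-and-scoring step can be sketched as follows (a fixed threshold stands in for the paper's feature extraction and random forest classifier; the values are illustrative):

```python
import numpy as np

def detect_anomalies(x, x_hat, threshold):
    """Flag samples whose mean absolute reconstruction error
    exceeds a threshold."""
    err = np.abs(np.asarray(x) - np.asarray(x_hat)).mean(axis=1)
    return err > threshold

def f1_score(y_true, y_pred):
    """F1 = 2*TP / (2*TP + FP + FN), comparing detections with history."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```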

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 160
16617 Learning Performance of Sports Education Model Based on Self-Regulated Learning Approach

Authors: Yi-Hsiang Pan, Ching-Hsiang Chen, Wei-Ting Hsu

Abstract:

The purpose of this study was to compare the learning effects of the sports education model (SEM) to those of the traditional teaching model (TTM) in physical education classes in terms of students' learning motivation, action control, learning strategies, and learning performance. A quasi-experimental design was utilized, and participants included two physical educators and four classes with a total of 94 students in grades 5 and 6 of elementary schools. Two classes implemented the SEM (n=47, male=24, female=23; age=11.89, SD=0.78) and two classes implemented the TTM (n=47, male=25, female=22; age=11.77, SD=0.66). Data were collected from these participants using a self-report questionnaire (including a learning motivation scale, an action control scale, and a learning strategy scale) and a game performance assessment instrument, and multivariate analysis of covariance was used for statistical analysis. The findings revealed that the SEM was significantly better than the TTM in promoting students' learning motivation, action control, learning strategies, and game performance. It was concluded that the SEM could promote the mechanics of students' self-regulated learning process and thereby improve students' movement performance.

Keywords: self-regulated learning theory, learning process, curriculum model, physical education

Procedia PDF Downloads 316
16616 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree structure information gathered during hierarchical clustering. The proposed method is an extension of proximity-based coloring, which suffers from a few undesired side effects if the hierarchical tree structure is not a balanced tree. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found our proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
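
As a toy illustration of the general idea, dissimilarity values (e.g. distances to a reference object from the clustering) can be mapped onto a hue scale; the function below is a hypothetical sketch, not the authors' algorithm:

```python
import colorsys

def dissimilarity_colors(dissim, hue_lo=0.0, hue_hi=0.66):
    """Map each object's dissimilarity onto a hue between `hue_lo` (red)
    and `hue_hi` (blue); returns a list of RGB tuples for plotting."""
    d_min, d_max = min(dissim), max(dissim)
    span = (d_max - d_min) or 1.0          # avoid division by zero
    hues = [hue_lo + (hue_hi - hue_lo) * (d - d_min) / span for d in dissim]
    return [colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues]
```

Objects with similar dissimilarities then receive similar colors on the parallel coordinate lines, regardless of how balanced the clustering tree is.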

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 137
16615 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis

Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu

Abstract:

In functional linear regression, one typical problem is dimension reduction. Compared with multivariate linear regression, functional linear regression is an infinite-dimensional problem, and the main task is to reduce the dimensions of the functional response and the functional predictors. One common approach is to apply functional principal component analysis (FPCA) to the functional predictors and then use a few leading functional principal components (FPCs) to fit the functional model. The leading FPCs estimated by typical FPCA explain a major part of the variation of the functional predictor, but they may not be strongly correlated with the functional response, so they may not be significant for predicting the response. In this paper, we propose a supervised functional principal component analysis method for a functional response model in which the FPCs are obtained by taking the correlation with the functional response into account. Our method is expected to have better prediction accuracy than the typical FPCA method.
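
As a simplified, finite-dimensional analogue of the supervised idea, one can rank ordinary principal component scores by their correlation with the response instead of by explained variance (a pure NumPy sketch; names and shapes are illustrative):

```python
import numpy as np

def supervised_pc_scores(X, y, k):
    """PCA scores ranked by |correlation| with the response `y`
    (a simplified, multivariate stand-in for supervised FPCA)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                         # principal component scores
    corr = np.array([abs(np.corrcoef(scores[:, j], y)[0, 1])
                     for j in range(scores.shape[1])])
    order = np.argsort(corr)[::-1]         # most response-correlated first
    return scores[:, order[:k]], order[:k]
```

The selected scores can then serve as regressors, trading a little explained predictor variance for components that actually matter for prediction.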

Keywords: supervised, functional principal component analysis, functional response, functional linear regression

Procedia PDF Downloads 40
16614 Automated Process Quality Monitoring and Diagnostics for Large-Scale Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Continuous monitoring of industrial plants is one of the necessary tasks for ensuring high-quality final products. In terms of monitoring and diagnosis, it is critical to detect incipient abnormal events in manufacturing processes in order to improve the safety and reliability of the operations involved and to reduce related losses. In this work, a new multivariate statistical online diagnostic method is presented using a case study. To build reference models, an empirical discriminant model is constructed from various past operation runs. When a fault is detected online, an online diagnostic module is initiated. Finally, the status of the current operating conditions is compared with the reference model to make a diagnostic decision. The performance of the presented framework is evaluated using a dataset from complex industrial processes. It is shown that the proposed diagnostic method outperforms other techniques, especially in terms of the early detection of incipient faults.
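
A minimal sketch of comparing current operating conditions against a reference model is a Hotelling-type T² distance (illustrative only; the paper's method uses an empirical discriminant model rather than this statistic):

```python
import numpy as np

def t2_statistic(X_ref, x_new):
    """Hotelling-type T^2 distance of a new sample from a reference model
    built from past normal operation data `X_ref` (rows = samples)."""
    mu = X_ref.mean(axis=0)                 # reference operating point
    S = np.cov(X_ref, rowvar=False)         # reference covariance
    d = x_new - mu
    return float(d @ np.linalg.solve(S, d))  # Mahalanobis-type distance
```

A sample whose T² exceeds a control limit derived from the reference data would trigger the online diagnostic module.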

Keywords: data mining, empirical model, on-line diagnostics, process fault, process monitoring

Procedia PDF Downloads 367
16613 An Approach for Pattern Recognition and Prediction of Information Diffusion Model on Twitter

Authors: Amartya Hatua, Trung Nguyen, Andrew Sung

Abstract:

In this paper, we study the information diffusion process on Twitter as a multivariate time series problem. Our model concerns three measures (volume, network influence, and sentiment of tweets) based on 10 features, and we collected 27 million tweets to build our information diffusion time series dataset for analysis. Then, different time series clustering techniques with the Dynamic Time Warping (DTW) distance were used to identify different patterns of information diffusion. Finally, we built information diffusion prediction models for new hashtags, which comprise two phases: the first phase recognizes the pattern using k-NN with the DTW distance; the second phase builds the forecasting model using the traditional Autoregressive Integrated Moving Average (ARIMA) model and the non-linear recurrent neural network of Long Short-Term Memory (LSTM). Preliminary results of the performance evaluation of the different forecasting models show that LSTM with clustering information notably outperforms the other models. Therefore, our approach can be applied in real-world applications to analyze and predict the information diffusion characteristics of selected topics or memes (hashtags) on Twitter.
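
The pattern recognition phase relies on the DTW distance; a minimal pure-Python implementation of the classic dynamic program, with absolute difference as the local cost:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences via the
    standard O(n*m) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Unlike the Euclidean distance, DTW aligns diffusion curves that share a shape but unfold at different speeds, which is why it suits clustering hashtag time series.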

Keywords: ARIMA, DTW, information diffusion, LSTM, RNN, time series clustering, time series forecasting, Twitter

Procedia PDF Downloads 361
16612 Spatial Interpolation Technique for the Optimisation of Geometric Programming Problems

Authors: Debjani Chakraborty, Abhijit Chatterjee, Aishwaryaprajna

Abstract:

Posynomials, a special type of polynomial-like function that may contain singularities, pose difficulties while solving geometric programming problems. In this paper, a methodology is proposed and used to obtain extreme values for geometric programming problems by an nth-degree polynomial interpolation technique. The main idea is to optimise the posynomial by fitting a best-approximating polynomial that has continuous gradient values throughout the range of the function. The approximating polynomial is smoothed to remove the discontinuities present in the feasible region and the objective function. This spatial interpolation method is capable of optimising univariate and multivariate geometric programming problems. An example of a bivariate nonlinear geometric programming problem is solved to demonstrate the robustness of the methodology. The method is also applicable to signomial programming problems.
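
A one-dimensional sketch of the surrogate idea: fit a smooth polynomial to the (possibly singular) objective on the region of interest and minimise the surrogate via its stationary points. The degree and sample count below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def poly_min(f, lo, hi, degree=6, samples=50):
    """Fit a polynomial surrogate to f on [lo, hi] and return the candidate
    (stationary points of the surrogate plus the endpoints) with the
    smallest surrogate value (1-D illustration)."""
    x = np.linspace(lo, hi, samples)
    coeffs = np.polyfit(x, f(x), degree)       # smooth surrogate
    crit = np.roots(np.polyder(coeffs))        # stationary points
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit >= lo) & (crit <= hi)]   # keep feasible ones
    cand = np.concatenate([crit, [lo, hi]])
    return float(cand[np.argmin(np.polyval(coeffs, cand))])
```

For the posynomial-like objective f(x) = x + 1/x on [0.5, 3], whose true minimiser is x = 1, the surrogate minimiser lands close to 1 because the fitted polynomial has a continuous gradient over the whole interval.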

Keywords: geometric programming problem, multivariate optimisation technique, posynomial, spatial interpolation

Procedia PDF Downloads 331
16611 Application of Deep Learning in Top Pair and Single Top Quark Production at the Large Hadron Collider

Authors: Ijaz Ahmed, Anwar Zada, Muhammad Waqas, M. U. Ashraf

Abstract:

We demonstrate the performance of a highly efficient tagger applied to hadronically decaying top quark pairs as signal, based on deep neural network algorithms, and compare it with QCD multi-jet background events. A significant enhancement of performance in boosted top quark events is observed with our limited computing resources. We also compare modern machine learning approaches and perform a multivariate analysis of boosted top-pair as well as single top quark production through the weak interaction at a √s = 14 TeV proton-proton collider. The most relevant known background processes are incorporated. Using Boosted Decision Tree (BDT), likelihood, and Multilayer Perceptron (MLP) techniques, the analysis is trained to assess the performance in comparison with the conventional cut-and-count approach.

Keywords: top tagger, multivariate, deep learning, LHC, single top

Procedia PDF Downloads 78