Search results for: poisson regression model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18276

18096 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to get its final shape. The DISAMATIC molding process is a way to construct these sand molds for casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mold. The sand flow is modelled with the discrete element method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, rolling friction coefficient, sliding friction coefficient and coefficient of restitution (COR). The Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments, and the rolling resistance is investigated by comparing observations of how the green sand interacts with the chamber wall during experiments with the DEM simulations, which will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the parameter values found are used in the overall discrete element model and compared to video footage of the DISAMATIC process.
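
As a rough illustration of how the bulk properties measured in the compression tests feed into the contact law, the following sketch computes the Hertz-Mindlin normal contact force between two identical grains; the material values are hypothetical placeholders, not parameters from the study.

```python
import math

def hertz_normal_force(E, nu, radius, overlap):
    """Hertzian normal force between two identical elastic spheres.

    E       -- Young's modulus of the grain material [Pa]
    nu      -- Poisson's ratio of the grain material [-]
    radius  -- grain radius [m]
    overlap -- normal overlap between the two grains [m]
    """
    E_star = E / (2.0 * (1.0 - nu ** 2))   # effective modulus for identical spheres
    R_star = radius / 2.0                  # effective radius for identical spheres
    # Hertz contact law used in the Hertz-Mindlin model: F_n = (4/3) E* sqrt(R*) delta^(3/2)
    return (4.0 / 3.0) * E_star * math.sqrt(R_star) * overlap ** 1.5

# Hypothetical green-sand values, for illustration only
print(hertz_normal_force(E=5.0e6, nu=0.3, radius=1.5e-4, overlap=1.0e-6))  # [N]
```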

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 457
18095 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson’s ratio behavior and tunable properties. The conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structure design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame, and the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. The finite element (FE) models of 2D lattice structures established with stainless steel material were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was carried out to investigate the effect of the structural parameters on Poisson’s ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for the maximum specific energy absorption. In addition, the optimized 2D lattice structure was correspondingly converted into a 3D geometric configuration by using the orthogonal splicing method. The numerical results of 2D and 3D structures under compressive quasi-static loading conditions were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable with different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical prediction in this study suggests that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a simultaneous requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 61
18094 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined by the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model. It seems the initial crystal size (L0) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the sugar crystallization process in a fed-batch vacuum evaporative crystallizer.
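
As a minimal sketch of the soft-sensor workflow, the snippet below fits a linear model on synthetic stand-in data (the study's factorial dataset is not reproduced here) and reports the two goodness-of-fit measures used in the abstract, R² and the maximum absolute relative error (MARE); all ranges and coefficients are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the 2-level factorial dataset: columns are T, P, Ff, Fs,
# S0 and t -- the six easy-to-measure inputs retained in the model.
X = rng.uniform(low=[60, 0.1, 1.0, 0.5, 1.05, 10],
                high=[75, 0.3, 3.0, 2.0, 1.25, 60],
                size=(128, 6))
# Hypothetical linear response plus noise, used only to exercise the workflow.
true_coef = np.array([0.004, -0.5, 0.02, 0.03, 0.8, 0.002])
y = 0.2 + X @ true_coef + rng.normal(scale=0.01, size=128)

model = LinearRegression().fit(X, y)
y_hat = model.predict(X)

r2 = r2_score(y, y_hat)
mare = np.max(np.abs((y - y_hat) / y)) * 100  # maximum absolute relative error, %
print(f"R^2 = {r2:.3f}, MARE = {mare:.2f}%")
```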

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 181
18093 Factors for Entry Timing Choices Using Principal Axis Factorial Analysis and Logistic Regression Model

Authors: C. M. Mat Isa, H. Mohd Saman, S. R. Mohd Nasir, A. Jaapar

Abstract:

International market expansion involves a strategic process of market entry decisions through which a firm expands its operation from the domestic to the international domain. Hence, entry timing choices require balancing the risks of early entry against the problem of losing opportunities as a result of late entry into a new market. Questionnaire surveys administered to 115 Malaysian construction firms operating in 51 countries worldwide resulted in a 39.1 percent response rate. Factor analysis was used to determine the most significant factors affecting the entry timing choices of the firms to penetrate the international market. A logistic regression analysis, used to examine the firms’ entry timing choices, indicates that the model correctly classified 89.5 percent of cases as late movers. The findings reveal that the most significant factor influencing the construction firms’ choices as late movers was the firm factor related to the firm’s international experience, resources, competencies and financing capacity. The study also offers valuable information to construction firms with the intention to internationalize their businesses.

Keywords: factors, early movers, entry timing choices, late movers, logistic regression model, principal axis factorial analysis, Malaysian construction firms

Procedia PDF Downloads 352
18092 Heart Attack Prediction Using Several Machine Learning Methods

Authors: Suzan Anwar, Utkarsh Goyal

Abstract:

Heart rate (HR) is a predictor of cardiovascular, cerebrovascular, and all-cause mortality in the general population, as well as in patients with cardiovascular and cerebrovascular diseases. Machine learning (ML) significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment while avoiding unnecessary treatment of others. This research examines the relationship between an individual's various heart health inputs, such as age, sex, cp, trestbps, thalach, and oldpeak, and the likelihood of developing heart disease. Machine learning techniques such as logistic regression and decision trees are used, implemented in Python. The results of testing and evaluating the model using the Heart Failure Prediction Dataset show the chance of a person having heart disease with variable accuracy. Logistic regression yielded an accuracy of 80.48% without data handling. With data handling (normalization with StandardScaler), logistic regression achieved an improved accuracy of 87.80%, decision tree 100%, random forest 100%, and SVM 100%.
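
A minimal sketch of the scaled logistic-regression step is shown below; the dataset is replaced by synthetic stand-in data using a few of the usual heart-disease column names, so the printed accuracy is illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for the Heart Failure Prediction Dataset; all values are fake.
n = 300
df = pd.DataFrame({
    "age": rng.integers(29, 77, n),
    "sex": rng.integers(0, 2, n),
    "cp": rng.integers(0, 4, n),
    "trestbps": rng.normal(131, 17, n),
    "thalach": rng.normal(150, 23, n),
    "oldpeak": rng.gamma(1.0, 1.0, n),
})
logit = 0.06 * (df.age - 54) - 0.03 * (df.thalach - 150) + 0.9 * df.oldpeak + 0.5 * df.cp - 1.5
df["target"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X, y = df.drop(columns="target"), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

# Scaling before logistic regression mirrors the "data handling" step in the abstract.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```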

Keywords: heart rate, machine learning, SVM, decision tree, logistic regression, random forest

Procedia PDF Downloads 113
18091 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network

Authors: Sajjad Baghernezhad

Abstract:

Meta-heuristic and hybrid algorithms have shown high capability in modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. Data for the applied method were collected from a target population of 550 high-risk and low-risk patients at the Kerman University of Medical Sciences medical center to predict the coronavirus. The memetic algorithm, which is a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. The initial study showed that the accuracy of the neural network was 88%. After updating the weights with the memetic algorithm, the accuracy increased to 93%. For the proposed model, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, 87.05, 92.07, 89.45, 97.30 and 0.967; and for the logistic regression model, 87.40, 95.20, 93.79, 0.87 and 0.916. Based on the findings of this study, neural network models have a lower error rate in the diagnosis of patients based on individual variables and vital signs compared to the regression model. The findings of this study can help planners and health care providers in designing programs for the early diagnosis of COVID-19.

Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm

Procedia PDF Downloads 48
18090 Implementation of a Non-Poissonian Model in a Low-Seismicity Area

Authors: Ludivine Saint-Mard, Masato Nakajima, Gloria Senfaute

Abstract:

In areas with low to moderate seismicity, probabilistic seismic hazard analysis frequently uses a Poisson approach, which assumes independence of events in time and space to determine the annual probability of earthquake occurrence. Nevertheless, in countries with a high seismic rate, such as Japan, non-Poissonian models, which assume that the next earthquake occurrence depends on the date of the previous one, are frequently used. The objective of this paper is to apply a non-Poissonian model in a region of low to moderate seismicity to get feedback on the following questions: can we overcome the lack of data to determine some key parameters, and can we deal with the uncertainties so as to apply this methodology widely in an industrial context? The Brownian Passage Time model was applied to a fault located in France, and we conclude that even if the lack of data can be overcome with some calculations, the amount of uncertainty and the number of scenarios lead to numerous branches in the PSHA, making this method difficult to apply on a large scale in low to moderate seismicity areas and in an industrial context.
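
For reference, a minimal sketch of the time-dependent occurrence probability under a Brownian Passage Time renewal model is given below; the recurrence interval, aperiodicity and elapsed time are hypothetical placeholders, not values for the French fault studied here.

```python
from scipy.stats import invgauss

def bpt_conditional_probability(mu, alpha, elapsed, window):
    """Probability of an event in the next `window` years, given that `elapsed`
    years have already passed without one, under a BPT renewal model.

    mu    -- mean recurrence interval of the fault [years]
    alpha -- aperiodicity (coefficient of variation) of the BPT model [-]
    """
    # The BPT distribution is an inverse Gaussian with mean mu and shape
    # lam = mu / alpha**2; scipy parameterises it as invgauss(mean/lam, scale=lam).
    lam = mu / alpha ** 2
    dist = invgauss(mu / lam, scale=lam)
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

# Hypothetical values, for illustration only
print(bpt_conditional_probability(mu=5000.0, alpha=0.5, elapsed=3000.0, window=50.0))
```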

Keywords: probabilistic seismic hazard, non-poissonian model, earthquake occurrence, low seismicity

Procedia PDF Downloads 27
18089 Multi-Linear Regression Based Prediction of Mass Transfer by Multiple Plunging Jets

Authors: S. Deswal, M. Pal

Abstract:

The paper aims to compare the performance of vertical and inclined multiple plunging jets and to model and predict their mass transfer capacity by a multi-linear regression based approach. The multiple vertical plunging jets have a jet impact angle of θ = 90°, whereas the multiple inclined plunging jets have a jet impact angle of θ = 60°. The results of the study suggest that mass transfer is higher for multiple jets, and inclined multiple plunging jets have up to 1.6 times higher mass transfer than vertical multiple plunging jets under similar conditions. The derived relationship, based on the multi-linear regression approach, has successfully predicted the volumetric mass transfer coefficient (KLa) from operational parameters of multiple plunging jets with a correlation coefficient of 0.973, a root mean square error of 0.002 and a coefficient of determination of 0.946. The results suggest that the predicted overall mass transfer coefficient is in good agreement with actual experimental values, thereby demonstrating the utility of the derived relationship based on the multi-linear regression approach, which can be successfully employed in modelling mass transfer by multiple plunging jets.
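
A minimal sketch of fitting and scoring such a multi-linear relationship is given below; the operational parameters and coefficients are synthetic placeholders, used only to show how the correlation coefficient, RMSE and R² reported above are computed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic stand-in data: three hypothetical operational parameters of the jets
# (e.g. jet velocity, number of jets, impact angle); KLa is the response.
X = rng.uniform(size=(60, 3)) * [5.0, 4.0, 90.0]
kla = 0.001 + 0.004 * X[:, 0] + 0.002 * X[:, 1] - 1e-5 * X[:, 2] + rng.normal(0, 0.002, 60)

model = sm.OLS(kla, sm.add_constant(X)).fit()
pred = model.predict(sm.add_constant(X))

r = np.corrcoef(kla, pred)[0, 1]            # correlation coefficient
rmse = np.sqrt(np.mean((kla - pred) ** 2))  # root mean square error
print(f"r = {r:.3f}, RMSE = {rmse:.4f}, R^2 = {model.rsquared:.3f}")
```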

Keywords: mass transfer, multiple plunging jets, multi-linear regression, earth sciences

Procedia PDF Downloads 431
18088 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have a multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
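
A minimal sketch comparing the three modelling approaches mentioned above on synthetic stand-in data is shown below; the predictors, coefficients and cross-validation scheme are illustrative assumptions, not the study's actual data or results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Synthetic stand-in for 62 annual observations (1960-2021): temperature, rainfall,
# fertilizer consumption, arable land, land under cereals, N2O emissions.
X = rng.normal(size=(62, 6))
y = (3.0 - 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.5 * X[:, 2]
     + 0.4 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(0, 0.3, 62))

models = [("linear regression", LinearRegression()),
          ("lasso", Lasso(alpha=0.05)),
          ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]
for name, model in models:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:17s} mean CV R^2 = {score:.3f}")
```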

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 58
18087 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Predication of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they provide a straightforward and fast way of managing all aspects relating to a patient, not only medical ones. What is more, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out by using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables, and forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences for a new product; these preference models provide a platform on which product developers can decide the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analysis shows that such fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model.

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 251
18086 Economic Analysis of Cowpea (Unguiculata spp) Production in Northern Nigeria: A Case Study of Kano Katsina and Jigawa States

Authors: Yakubu Suleiman, S. A. Musa

Abstract:

Nigeria is the largest cowpea producer in the world, accounting for about 45% of production, followed by Brazil with about 17%. Cowpea is grown in Kano, Bauchi, Katsina and Borno in the north, Oyo in the west, and to a lesser extent in Enugu in the east. This study was conducted to determine the input–output relationship of cowpea production in Kano, Katsina, and Jigawa states of Nigeria. The data were collected with the aid of 1000 structured questionnaires that were randomly distributed to cowpea farmers in the three states of the study area mentioned above. The data collected were analyzed using regression analysis (a Cobb–Douglas production function model). The result of the regression analysis revealed the coefficient of multiple determination, R², to be 72.5% and the F ratio to be 106.20, which was found to be significant (P < 0.01). The regression coefficient of the constant is 0.5382 and is significant (P < 0.01). The regression coefficients with respect to labor and seeds were 0.65554 and 0.4336, respectively, and they are highly significant (P < 0.01). The regression coefficient with respect to fertilizer is 0.26341, which is significant (P < 0.05). This implies that a unit increase in any one of the variable inputs used, while holding all other variable inputs constant, will significantly increase total cowpea output by the corresponding coefficient. This indicates that farmers in the study area are operating in stage II of the production function. The results revealed that cowpea farmers in Kano, Jigawa and Katsina states realized profits of N15,997, N34,016 and N19,788 per hectare, respectively. It is hereby recommended that more attention should be given to cowpea production by government and research institutions.
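
Because the Cobb–Douglas production function becomes linear in logarithms, its elasticities can be estimated by ordinary least squares on log-transformed inputs. The sketch below illustrates that step on synthetic stand-in data; the input variables, sample size and coefficients are hypothetical, chosen only to mirror the magnitudes reported above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic stand-in for the survey data: labor, seed and fertilizer inputs per farm.
labor = rng.uniform(20, 200, 300)
seed = rng.uniform(5, 50, 300)
fert = rng.uniform(10, 100, 300)
# Hypothetical Cobb-Douglas output Q = A * labor^b1 * seed^b2 * fert^b3, with noise.
output = np.exp(0.54 + 0.65 * np.log(labor) + 0.43 * np.log(seed)
                + 0.26 * np.log(fert) + rng.normal(0, 0.1, 300))

# Taking logs linearises the model, so the elasticities are estimated by OLS.
X = sm.add_constant(np.column_stack([np.log(labor), np.log(seed), np.log(fert)]))
result = sm.OLS(np.log(output), X).fit()
print(result.params)      # constant and elasticities of labor, seed, fertilizer
print(result.rsquared)    # coefficient of multiple determination
```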

Keywords: coefficient, constant, inputs, regression

Procedia PDF Downloads 387
18085 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 338
18084 Free Fatty Acid Assessment of Crude Palm Oil Using a Non-Destructive Approach

Authors: Siti Nurhidayah Naqiah Abdull Rani, Herlina Abdul Rahim, Rashidah Ghazali, Noramli Abdul Razak

Abstract:

Near infrared (NIR) spectroscopy has always been of great interest in the food and agriculture industries. The development of prediction models has facilitated the estimation process in recent years. In this study, 110 crude palm oil (CPO) samples were used to build a free fatty acid (FFA) prediction model. 60% of the collected data were used for training purposes and the remaining 40% were used for testing. The visible peaks on the NIR spectrum were at 1725 nm and 1760 nm, indicating the existence of the first overtone of C-H bands. Principal component regression (PCR) was applied to the data in order to build this mathematical prediction model. The optimal number of principal components was 10. The results showed R² = 0.7147 for the training set and R² = 0.6404 for the testing set.
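
A minimal sketch of the principal component regression workflow is shown below; the spectra are synthetic stand-ins (the study's 110 CPO spectra are not reproduced), so only the structure of the pipeline, not the reported R² values, carries over.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)

# Synthetic stand-in spectra: 110 samples x 700 wavelengths, with the FFA signal
# concentrated around two hypothetical absorption bands.
spectra = rng.normal(size=(110, 700))
ffa = 2.5 + 0.8 * spectra[:, 300] + 0.5 * spectra[:, 450] + rng.normal(0, 0.3, 110)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, ffa, train_size=0.6, random_state=0)

# Principal component regression: project the spectra onto 10 components, then fit OLS.
pcr = make_pipeline(PCA(n_components=10), LinearRegression())
pcr.fit(X_tr, y_tr)
print("train R^2:", r2_score(y_tr, pcr.predict(X_tr)))
print("test  R^2:", r2_score(y_te, pcr.predict(X_te)))
```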

Keywords: palm oil, fatty acid, NIRS, regression

Procedia PDF Downloads 478
18083 Electrohydrodynamic Study of Microwave Plasma PECVD Reactor

Authors: Keltoum Bouherine, Olivier Leroy

Abstract:

The present work is dedicated to the study of a three-dimensional (3D) self-consistent fluid simulation of microwave discharges of argon plasma in a PECVD reactor. The model solves the Maxwell’s equations, continuity equations for charged species and the electron energy balance equation, coupled with Poisson’s equation, and the Navier-Stokes equations by the finite element method, using COMSOL Multiphysics software. In this study, the simulations yield the profiles of plasma components as well as the charge densities and electron temperature, the electric field, the gas velocity, and gas temperature. The results show that the microwave plasma reactor is outside of local thermodynamic equilibrium.
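
As a highly simplified, standalone illustration of just the electrostatic piece of such a multiphysics model, the sketch below solves a 1D Poisson equation for the potential produced by an assumed net charge density between two grounded walls; the geometry and charge profile are hypothetical, and the full reactor model of course couples this self-consistently with the Maxwell, continuity and Navier-Stokes equations in 3D.

```python
import numpy as np

# 1D electrostatic Poisson equation d^2(phi)/dx^2 = -rho/eps0, second-order
# finite differences, grounded (phi = 0) walls at both ends.
eps0 = 8.854e-12            # vacuum permittivity [F/m]
n, L = 201, 0.05            # grid points, wall-to-wall gap [m]
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Hypothetical net charge density (ions minus electrons), for illustration only.
rho = 1e-6 * np.exp(-((x - L / 2) / (L / 10)) ** 2)   # [C/m^3]

# Tridiagonal Laplacian acting on the interior points.
A = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h ** 2
phi = np.zeros(n)
phi[1:-1] = np.linalg.solve(A, -rho[1:-1] / eps0)

E_field = -np.gradient(phi, x)   # electric field [V/m]
print("peak potential [V]:", phi.max())
```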

Keywords: electron density, electric field, microwave plasma reactor, gas velocity, non-equilibrium plasma

Procedia PDF Downloads 302
18082 Non-Methane Hydrocarbons Emission during the Photocopying Process

Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana

Abstract:

The proliferation of electronic equipment in photocopying environments has not only improved work efficiency but has also changed indoor air quality. Considering the amount of photocopying carried out, indoor air quality might be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to play an important role in air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in a photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days, during the 8-hr working time, in three time intervals, and three different sampling points were determined. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389, and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the F statistic indicated that there was no statistically significant dependence of the non-methane hydrocarbon concentration levels on the microclimate parameters. The results showed that the variables could be described by the general regression model: y = b0 + b1xi1 + b2xi2. The obtained regression equations make it possible to measure the quantitative agreement between the variations of the variables and thus obtain more accurate knowledge of their mutual relations.
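
A minimal sketch of fitting the stated two-predictor regression model and checking its overall significance is given below; temperature and relative humidity are assumed microclimate parameters and all values are synthetic, so the printed coefficients are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Synthetic stand-in data: temperature and relative humidity as assumed microclimate
# parameters; y is the non-methane hydrocarbon concentration at one sampling point.
temp = rng.uniform(20, 28, 40)
humidity = rng.uniform(30, 60, 40)
nmhc = 0.4 + 0.01 * temp + 0.002 * humidity + rng.normal(0, 0.15, 40)

# General regression model from the abstract: y = b0 + b1*x1 + b2*x2
X = sm.add_constant(np.column_stack([temp, humidity]))
fit = sm.OLS(nmhc, X).fit()
print(fit.params)                  # b0, b1, b2
print(fit.rsquared)                # multiple coefficient of determination
print(fit.fvalue, fit.f_pvalue)    # F statistic for overall significance
```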

Keywords: non-methane hydrocarbons, photocopying process, multiple regression analysis, indoor air quality, pollutant emission

Procedia PDF Downloads 350
18081 A Machine Learning Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

There has been a need in recent years to predict student academic achievement prior to graduation. This is to assist students in improving their grades, especially those who have struggled in the past. The purpose of this research is to use supervised learning techniques to create a model that predicts student academic progress. Many scholars have developed models that predict student academic achievement based on characteristics including smoking, demography, culture, social media, parental educational background, parental finances, and family background, to mention a few. These factors, as well as the models used, could have misclassified students in terms of their academic achievement. As a prerequisite to predicting whether a student will perform well in the future on related courses, this model is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester. With 96.7 percent accuracy, the model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost. This model is offered as a desktop application with user-friendly interfaces for forecasting student academic progress for both teachers and students. As a result, both students and professors are encouraged to use this technique to predict outcomes better.

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 85
18080 Minimizing the Impact of Covariate Detection Limit in Logistic Regression

Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque

Abstract:

In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left-truncation, because the measuring device used to measure the covariate fails to detect values falling below a certain threshold. In regression analyses, this causes inflated bias and inaccurate mean squared error (MSE) in the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact introduced by the covariate detection limit on the estimators of the parameters of the simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler, and hence easier to implement. It is robust to violation of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to the other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.

Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution

Procedia PDF Downloads 211
18079 Poverty Dynamics in Thailand: Evidence from Household Panel Data

Authors: Nattabhorn Leamcharaskul

Abstract:

This study aims to examine the determining factors of the dynamics of poverty in Thailand by using panel data on 3,567 households in 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-difference model. Households are categorized based on their experiences into 5 groups, namely chronically poor, falling into poverty, re-entering into poverty, exiting from poverty, and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. It is found that remittances have a positive impact on a household’s economic status in that they are likely to lower the probability of falling into poverty or being trapped in poverty, while they tend to increase the probability of exiting from poverty. In addition, not only does receiving a secondary source of household income raise the probability of being a never poor household, but it also significantly increases the household income per capita of the chronically poor and falling into poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and thus increase the chance for households to escape from poverty.

Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer

Procedia PDF Downloads 83
18078 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), which is a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one of the independent variables is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating factor is often highly correlated with the constitutive terms. Thus, great multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: inflated variances of the estimators may appear; there is a tendency to consider regressors non-significant when they probably are significant, together with a very high coefficient of determination; incorrect signs of the coefficients may appear; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one in the model under study. When these consequences carry over to the moderated analysis, they may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with some methodology that allows reliable results to be obtained. After a review of the works that applied MRA in the ten top journals of the field, it is clear that multicollinearity is mostly disregarded. Less than 15% of the reviewed works take into account potential multicollinearity problems. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the two variables, the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union for the most recent year available regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called “subgroup regression analysis.” The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. Particularly, the use of raise regression mitigates severe multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
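
To make the collinearity issue concrete, the sketch below sets up an MRA specification with a dummy moderator on synthetic stand-in data and reports variance inflation factors for the constitutive and interaction terms; raise regression itself is not part of standard packages, so only the model specification and the collinearity diagnosis are illustrated here, and all data values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(13)

# Synthetic stand-in for the EU data: per-capita GDP, a topography dummy, and
# greenhouse-gas emissions as the response (all values hypothetical).
gdp = rng.normal(30.0, 8.0, 27)
mount = rng.integers(0, 2, 27)
ghg = 5.0 + 0.20 * gdp - 1.0 * mount - 0.05 * gdp * mount + rng.normal(0, 0.8, 27)

# Moderated regression analysis: the cross-product term moderates the GDP effect.
X = pd.DataFrame({"gdp": gdp, "mount": mount, "gdp_x_mount": gdp * mount})
Xc = sm.add_constant(X)
print(sm.OLS(ghg, Xc).fit().params)

# Collinearity diagnosis: the interaction term is typically highly correlated
# with its constitutive terms, which is the problem raise regression addresses.
for i in range(1, Xc.shape[1]):
    print(Xc.columns[i], variance_inflation_factor(Xc.values, i))
```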

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 73
18077 Binary Logistic Regression Model in Predicting the Employability of Senior High School Graduates

Authors: Cromwell F. Gopo, Joy L. Picar

Abstract:

This study aimed to predict the employability of senior high school graduates for S.Y. 2018-2019 in the Davao del Norte Division through a quantitative research design using descriptive and predictive approaches on the indicated parameters, namely gender, school type, academics, academic award recipient, skills, values, and strand. The respondents of the study were the 33 secondary schools offering senior high school programs identified through simple random sampling, which resulted in 1,530 cases of graduates’ secondary data; these were analyzed using frequency, percentage, mean, standard deviation, and binary logistic regression. Results showed that the majority of the senior high school graduates who come from large schools were females. Further, less than half of these graduates received any academic award in any semester. In general, the graduates’ performance in academics, skills, and values was proficient. Moreover, less than half of the graduates were not employed. Those who were employed were either contractual, casual, or part-time workers, dominated by GAS graduates. Further, the predictors of employability were gender and the Information and Communications Technology (ICT) strand, while the remaining variables did not add significantly to the model. The null hypothesis was rejected, as the coefficients of the predictors in the binary logistic regression equation did not take the value of 0. After utilizing the model, it was concluded that Technical-Vocational-Livelihood (TVL) graduates, except ICT, had greater estimates of employability.

Keywords: employability, senior high school graduates, Davao del Norte, Philippines

Procedia PDF Downloads 110
18076 Using the Bootstrap for Problems Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different methods of bootstrapping, using the resampling technique in statistical inference to calculate the standard error of an estimator of the mean and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and the Pareto model, giving the best approximations.
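
A minimal sketch of the non-parametric bootstrap for the standard error and a percentile confidence interval of the sample mean is given below; the initial sample is a synthetic placeholder.

```python
import numpy as np

rng = np.random.default_rng(17)
sample = rng.exponential(scale=2.0, size=50)   # hypothetical initial sample

# Non-parametric bootstrap: resample with replacement and recompute the statistic.
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10_000)])

std_error = boot_means.std(ddof=1)                        # bootstrap standard error
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
print(f"mean = {sample.mean():.3f}, SE = {std_error:.3f}, "
      f"95% CI = ({ci_low:.3f}, {ci_high:.3f})")
```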

Keywords: bootstrap, error standard, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 355
18075 Determining the Factors Affecting Social Media Addiction (Virtual Tolerance, Virtual Communication), Phubbing, and Perception of Addiction in Nurses

Authors: Fatima Zehra Allahverdi, Nukhet Bayer

Abstract:

Objective: Three questions were formulated to examine stressful working units (intensive care units and emergency units) utilizing self-perception theory and social support theory. This study provides a distinctive input by inspecting the combination of variables regarding stressful working environments. Method: The descriptive research was conducted with the participation of 400 nurses working at Ankara City Hospital. The study used Multivariate Analysis of Variance (MANOVA), regression analysis, and a mediation model. Hypothesis one used MANOVA followed by a Scheffe post hoc test. Hypothesis two utilized regression analysis using a hierarchical linear regression model. Hypothesis three used a mediation model. Result: The study utilized mediation analyses. Findings supported the hypothesis that intensive care units have significantly higher scores in virtual communication and virtual tolerance. The number of years on the job, virtual communication, virtual tolerance, and phubbing significantly predicted 51% of the variance in perception of addiction. Interestingly, the number of years on the job, while significant, was negatively related to perception of addiction. Conclusion: The reasoning behind these findings and the lack of significance in the emergency unit is discussed. Around 7% of the variance in phubbing was accounted for by working in intensive care units. The model accounted for 26.80% of the differences in the perception of addiction.

Keywords: phubbing, social media, working units, years on the job, stress

Procedia PDF Downloads 20
18074 An Epsilon Hierarchical Fuzzy Twin Support Vector Regression

Authors: Arindam Chaudhuri

Abstract:

The research presents epsilon-hierarchical fuzzy twin support vector regression (epsilon-HFTSVR) based on epsilon-fuzzy twin support vector regression (epsilon-FTSVR) and epsilon-twin support vector regression (epsilon-TSVR). Epsilon-FTSVR is achieved by incorporating trapezoidal fuzzy numbers into epsilon-TSVR, which takes care of the uncertainty existing in forecasting problems. Epsilon-FTSVR determines a pair of epsilon-insensitive proximal functions by solving two related quadratic programming problems. The structural risk minimization principle is implemented by introducing a regularization term in the primal problems of epsilon-FTSVR. This yields dual stable positive definite problems, which improves regression performance. Epsilon-FTSVR is then reformulated as epsilon-HFTSVR, consisting of a set of hierarchical layers each containing epsilon-FTSVR. Experimental results on both synthetic and real datasets reveal that epsilon-HFTSVR has remarkable generalization performance with minimum training time.

Keywords: regression, epsilon-TSVR, epsilon-FTSVR, epsilon-HFTSVR

Procedia PDF Downloads 332
18073 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the National Geospatial-Intelligence Agency (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
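
A minimal sketch of ranking candidate predictors with the LASSO is shown below; the records are synthetic stand-ins for an NGA-style dataset and the functional form is a simplification, so only the selection mechanics (standardise, cross-validate the penalty, inspect which coefficients survive) carry over.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(19)

# Synthetic stand-in for ~600 records: magnitude, rupture distance, Vs30 and one
# irrelevant nuisance predictor; y mimics a log spectral acceleration.
n = 600
M = rng.uniform(4.5, 7.8, n)
Rrup = rng.uniform(1.0, 200.0, n)
Vs30 = rng.uniform(150.0, 1200.0, n)
nuisance = rng.normal(size=n)
y = 1.2 * M - 1.5 * np.log(Rrup) - 0.4 * np.log(Vs30) + rng.normal(0, 0.5, n)

# Standardise so the penalty treats all predictors on the same scale, then let
# cross-validation choose the shrinkage level.
X = StandardScaler().fit_transform(np.column_stack([M, np.log(Rrup), np.log(Vs30), nuisance]))
lasso = LassoCV(cv=5, random_state=0).fit(X, y)

for name, coef in zip(["M", "ln Rrup", "ln Vs30", "nuisance"], lasso.coef_):
    print(f"{name:9s} {coef:+.3f}")   # coefficients shrunk exactly to zero drop out
```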

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 303
18072 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline ‘First Come First Served with an m policy’, i.e. the FCFS-m policy. Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, and each is exponentially distributed with rate parameter μj (j = 1, 2). The primary condition for implementing the queue discipline ‘FCFS-m policy’ on these service rates μj (j = 1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be put into service if and only if the queue length exceeds the threshold value ‘m’. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 while the system size is strictly less than T = (m+2), with μ2 = 0, and to μ1 + μ2 with μ2 > 0 if the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues. To show that this investigation has a viable application, the results for M/M1+M2/1 queues have been used to model the processing of waiting messages at a single computer node and to measure the power consumption of the node.

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 341
18071 Econometric Analysis of West African Countries’ Container Terminal Throughput and Gross Domestic Products

Authors: Kehinde Peter Oyeduntan, Kayode Oshinubi

Abstract:

West African ports have been experiencing large inflows and outflows of containerized cargo in recent decades, and this has created a quest amongst the countries to attain the status of hub port for the sub-region. This study analyzed the relationship between container throughput and the Gross Domestic Products (GDP) of nine West African countries, using Simple Linear Regression (SLR), a Polynomial Regression Model (PRM) and Support Vector Machines (SVM) with a time series of 20 years. The results showed that there exists a high correlation between GDP and container throughput. The models also predicted the container throughput in West Africa for the next 20 years. The findings and recommendations presented in this research will guide policy makers and help improve the management of container ports and terminals in West Africa, thereby boosting the economy.

Keywords: container, ports, terminals, throughput

Procedia PDF Downloads 180
18070 Effects of Heat Treatment on the Elastic Constants of Cedar Wood

Authors: Tugba Yilmaz Aydin, Ergun Guntekin, Murat Aydin

Abstract:

The effects of heat treatment on the elastic constants of cedar wood (Cedrus libani) were investigated. Specimens were exposed to heat under atmospheric pressure at four different temperatures (120, 150, 180, 210 °C) and three different time levels (2, 5, 8 hours). Three Young’s moduli (EL, ER, ET) and six Poisson’s ratios (μLR, μLT, μRL, μRT, μTL, μTR) were determined from compression tests using a bi-axial extensometer at constant moisture content (12%). Three shear moduli were determined using ultrasound. Six shear wave velocities propagating along the principal axes of anisotropy were measured using an EPOCH 650 ultrasonic flaw detector with 1 MHz transverse transducers. The properties of the samples tested were significantly affected by heat treatment, to different degrees. As a result, softer treatments yielded some increase in Young’s modulus and shear modulus values, but increasing time and temperature resulted in a significant decrease in both. Poisson’s ratios seemed insensitive to heat treatment.
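
For reference, the dynamic shear modulus obtained from an ultrasonic measurement follows from the density and the shear wave velocity as G = ρv²; the sketch below evaluates it for hypothetical cedar values, not measurements from this study.

```python
def shear_modulus_from_ultrasound(density, shear_wave_velocity):
    """Dynamic shear modulus G = rho * v_s**2 from an ultrasound measurement.

    density             -- wood density [kg/m^3]
    shear_wave_velocity -- shear wave velocity in the measured plane [m/s]
    """
    return density * shear_wave_velocity ** 2

# Hypothetical values for illustration only (not measurements from the study)
rho = 520.0    # kg/m^3 at 12 % moisture content
v_lr = 1350.0  # m/s, shear wave associated with the LR plane
print(f"G_LR = {shear_modulus_from_ultrasound(rho, v_lr) / 1e6:.0f} MPa")
```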

Keywords: cedar wood, elastic constants, heat treatment, ultrasound

Procedia PDF Downloads 352
18069 BART Matching Method: Using Bayesian Additive Regression Tree for Data Matching

Authors: Gianna Zou

Abstract:

Propensity score matching (PSM), introduced by Paul R. Rosenbaum and Donald Rubin in 1983, is a popular statistical matching technique which tries to estimate treatment effects by taking into account covariates that could impact the efficacy of study medication in clinical trials. PSM can be used to reduce the bias due to confounding variables. However, PSM assumes that the response values are normally distributed. In some cases, this assumption may not hold. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used as a more robust method of matching. BART can work well when models are misspecified, since it can be used to model heterogeneous treatment effects. Moreover, it has the capability to handle non-linear main effects and multiway interactions. In this research, a BART Matching Method (BMM) is proposed to provide a more reliable matching method than PSM. A comparison of the analysis results from PSM and BMM shows that BMM can perform well and has better prediction capability when the response values are not normally distributed.

Keywords: BART, Bayesian, matching, regression

Procedia PDF Downloads 115
18068 The Relationship between Coping Styles and Internet Addiction among High School Students

Authors: Adil Kaval, Digdem Muge Siyez

Abstract:

With the negative effects of internet use on a person's life, internet use has become an issue of concern. This subject has mostly been investigated under the heading of internet addiction. In the literature, it is noteworthy that some theoretical models have been proposed to explain the reasons for internet addiction. In addition to these theoretical models, it may be thought that the coping style for stressful events can be a predictor of internet addiction. This study aimed to test, with logistic regression, the effect of high school students' coping styles on internet addiction levels. The sample of the study consisted of 770 Turkish adolescents (471 girls, 299 boys) selected from high schools in the 2017-2018 academic year in İzmir province. The Internet Addiction Test, the Coping Scale for Children and Adolescents, and a demographic information form were used in this study. The results of the logistic regression analysis indicated that the model of coping styles provides a statistically significant prediction of internet addiction. Gender does not predict whether or not a student is addicted to the internet. The active coping style has no effect on internet addiction levels, while the avoidance and negative coping styles do. With this model, 79.1% of internet addiction cases among high school students are correctly estimated. The Nagelkerke pseudo R² indicated that the model accounted for 35% of the total variance. The results of this study on Turkish adolescents are similar to the results of other studies in the literature. It can be argued that avoidance and negative coping styles are important risk factors in the development of internet addiction.

Keywords: adolescents, coping, internet addiction, regression analysis

Procedia PDF Downloads 147
18067 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution’s PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at the application of the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are sampled randomly and the distribution of PDs is estimated, while it is assumed that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that “a financial crisis” will occur, at least in terms of probability. This is indicated by the estimation of various quantiles in the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 424