Search results for: mixed effect logistic regression model

31572 Marketing Mixed Factors Affecting on Commercial Transactions Expectations through Social Networks

Authors: Ladaporn Pithuk

Abstract:

This study investigates the marketing mix factors affecting expectations about commercial transactions through social networks. A quantitative method was used: data were collected by questionnaire from 400 respondents with experience in trading over the internet, selected by purposive sampling. Data were analyzed with descriptive statistics, including percentage, mean, and standard deviation, and quality function deployment was used for hypothesis testing. The most significant interrelationships between marketing mix factors and commercial transaction expectations through social networks were found for product and place: the close tie between product and place (location) is involved in almost everything that makes a site meet the needs of visiting users. Price, promotion, privacy, personalization, and the provision of a technical process also matter; they make operations more efficient, reduce confusion, duplication, and delays in data transmission, and support the creation of differentiated products and services.

Keywords: commercial transactions expectations, marketing mixed factors, social networks, consumer behavior

Procedia PDF Downloads 239
31571 An Efficient Discrete Chaos in Generalized Logistic Maps with Applications in Image Encryption

Authors: Ashish Ashish

Abstract:

In the last few decades, the discrete chaos of difference equations has gained massive attention from academicians and scholars due to its tremendous applications in virtually every branch of science, such as cryptography, traffic control models, secure communications, weather forecasting, and engineering. In this article, a generalized logistic discrete map is established and discrete chaos is reported through period-doubling bifurcation, a period-three orbit, and the Lyapunov exponent. It is interesting to see that the generalized logistic map exhibits superior chaos due to the presence of an extra degree of freedom from an additional order parameter. The period-doubling bifurcation and Lyapunov exponent are demonstrated for some particular parameter values, and the discrete chaos is established in the sense of Devaney's definition of chaos both theoretically and numerically. Moreover, the study discusses an extended chaos-based image encryption and decryption scheme in cryptography using this novel system. A larger key space for coding and more sensitive dependence on initial conditions are demonstrated for the encryption and decryption of text messages, images, and videos, which strongly secures the system against external cyber attacks, coding attacks, statistical attacks, and differential attacks.
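
For illustration, here is a minimal numerical sketch of the kind of analysis described above, assuming one common generalization of the logistic map with an extra exponent (the paper's exact map is not reproduced here): iterate the map and estimate the Lyapunov exponent from the derivative along the orbit, with a positive value signalling chaos.

```python
import numpy as np

def gen_logistic(x, r, a):
    """One common generalized logistic map: x -> r * x**a * (1 - x**a).
    The exponent a is the extra degree of freedom; a = 1 recovers the classic map."""
    return r * x**a * (1.0 - x**a)

def gen_logistic_deriv(x, r, a):
    """Analytic derivative of the map above, needed for the Lyapunov exponent."""
    return r * (a * x**(a - 1) - 2.0 * a * x**(2 * a - 1))

def lyapunov_exponent(r, a, x0=0.4, n_transient=500, n_iter=5000):
    """Average of log|f'(x_n)| along the orbit; positive values indicate chaos."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = gen_logistic(x, r, a)
    total = 0.0
    for _ in range(n_iter):
        total += np.log(abs(gen_logistic_deriv(x, r, a)) + 1e-16)
        x = gen_logistic(x, r, a)
    return total / n_iter

if __name__ == "__main__":
    for r in (2.8, 3.5, 3.9):             # periodic, period-doubled, and chaotic regimes
        print(f"r = {r}: lambda ~ {lyapunov_exponent(r, a=1.0):.3f}")
```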

Keywords: chaos, period-doubling, logistic map, Lyapunov exponent, image encryption

Procedia PDF Downloads 155
31570 Prevalence of Cerebral Microbleeds in Apparently Healthy, Elderly Population: A Meta-Analysis

Authors: Vidishaa Jali, Amit Sinha, Kameshwar Prasad

Abstract:

Background and Objective: Cerebral microbleeds are frequently found in healthy elderly individuals. We performed a meta-analysis to determine the prevalence of cerebral microbleeds in the apparently healthy, elderly population and to determine the effect of age, smoking, and hypertension on the occurrence of cerebral microbleeds. Methods: Relevant literature was searched using electronic databases such as MEDLINE, EMBASE, PubMed, the Cochrane database, and Google Scholar to identify studies on the prevalence of cerebral microbleeds in the general elderly population up to March 2016. STATA version 13 software was used for analysis. A fixed-effects model was used if heterogeneity was less than 50%; otherwise, a random-effects model was used. Meta-regression analysis was performed to check for effects of important variables such as age, smoking, and hypertension. Selection Criteria: We included cross-sectional studies performed in the apparently healthy elderly population aged more than 50 years. Results: The pooled proportion of cerebral microbleeds in the healthy population is 12% (95% CI, 0.11 to 0.13). No significant effect of age was found on the prevalence of cerebral microbleeds (p = 0.99). A linear relationship between increasing hypertension and the prevalence of cerebral microbleeds was found; however, this relationship was not statistically significant (p = 0.16). Similarly, a linear relationship between increasing smoking and the prevalence of cerebral microbleeds was found, but it was also not statistically significant (p = 0.21). Conclusion: Cerebral microbleeds are present in the apparently healthy, elderly population, in more than 10% of individuals.
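
For illustration, a minimal sketch of the pooling logic described above (inverse-variance fixed-effects pooling of proportions, switching to a DerSimonian-Laird random-effects model when heterogeneity exceeds 50%); the per-study counts below are made-up numbers, not the studies included in the meta-analysis.

```python
import numpy as np

# Hypothetical per-study data: number with microbleeds (events) and sample size.
events = np.array([12, 30, 25, 40, 18])
n      = np.array([100, 250, 200, 400, 150])

p   = events / n
var = p * (1 - p) / n            # binomial variance of each study's proportion
w   = 1.0 / var                  # fixed-effects (inverse-variance) weights

p_fixed = np.sum(w * p) / np.sum(w)

# Cochran's Q and I^2 to quantify heterogeneity
Q  = np.sum(w * (p - p_fixed) ** 2)
df = len(p) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

if I2 < 50:                      # low heterogeneity: keep the fixed-effects estimate
    pooled, se = p_fixed, np.sqrt(1.0 / np.sum(w))
else:                            # otherwise DerSimonian-Laird random-effects
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * p) / np.sum(w_re)
    se     = np.sqrt(1.0 / np.sum(w_re))

ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"I^2 = {I2:.1f}%, pooled prevalence = {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```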

Keywords: apparently healthy, elderly, prevalence, cerebral microbleeds

Procedia PDF Downloads 296
31569 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling; they describe crop growth in interaction with the environment as dynamical systems. However, calibrating such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
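
For illustration, a minimal sketch of the data-driven evaluation protocol described above (Random Forest regression with 5-fold cross-validation, scored by RMSEP and MAEP); the features and yields below are synthetic stand-ins, not the USDA county records.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-in for 720 county-level records with a few climate features.
X = rng.normal(size=(720, 6))                                        # e.g. temperature, rainfall summaries
y = 9.0 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=720)   # yield (t/ha)

rmsep, maep = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmsep.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    maep.append(mean_absolute_error(y[test_idx], pred))

print(f"RMSEP = {np.mean(rmsep):.3f}, MAEP = {np.mean(maep):.3f}")
```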

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 232
31568 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting the activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) analysis method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into training and test sets and were evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was explored, and the best-fit model was obtained using the donor, partition coefficient, and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r²) and a cross-validated coefficient (q²) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
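
For illustration, a minimal sketch of the PLS workflow described above (choose the number of components by cross-validation, then report r² and the cross-validated q²); the descriptor matrix below is simulated, not CoMSIA field values from Sybyl.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Synthetic stand-in for descriptor fields of 46 compounds and their activities.
X = rng.normal(size=(46, 120))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=46)

best_q2, best_ncomp = -np.inf, 1
for ncomp in range(1, 11):                          # choose components by cross-validation
    pls = PLSRegression(n_components=ncomp)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    q2 = r2_score(y, y_cv)
    if q2 > best_q2:
        best_q2, best_ncomp = q2, ncomp

pls = PLSRegression(n_components=best_ncomp).fit(X, y)
r2 = r2_score(y, pls.predict(X).ravel())
print(f"components = {best_ncomp}, r2 = {r2:.3f}, q2 (LOO) = {best_q2:.3f}")
```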

Keywords: 3D QSAR, CoMSIA, triazoles, novel heterocycles

Procedia PDF Downloads 444
31567 Investigating the Influence of the Ferro Alloys Consumption on the Slab Product Standard Cost with Different Grades Using Regression Analysis (A Case Study of Iran's Iron and Steel Industry)

Authors: Iman Fakhrian, Ali Salehi Manzari

Abstract:

Consistent profitability is one of the most important priorities in manufacturing companies, and one of the fundamental factors for increasing a company's profitability is cost management. Isfahan's Mobarakeh Steel Company is one of the largest producers of slab product grades in the Middle East. Raw material cost constitutes about 70% of the company's expenditures, and the costs of the ferro alloys make a remarkable contribution to the raw material costs. This research aims to determine the ferro alloys that have a significant effect on the variability of the standard cost of the slab product grades. The data used in this study were collected from the standard costing system of Isfahan's Mobarakeh Steel Company in 2022. The results of the regression analysis show that expense items 03020, 03045, 03125, 03130, and 03150 play a dominant role in the variability of the standard cost of the slab product grades. In other words, the mentioned ferro alloys have a noticeable and significant role in the variability of the standard cost of the slab product grades.
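
For illustration, a minimal sketch of the regression screening described above (ordinary least squares of standard cost on expense-item consumption, flagging items with significant t-tests); the item columns and figures below are hypothetical, not the company's costing data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Hypothetical monthly consumption of ferro alloy expense items (codes as in the abstract)
# and the resulting slab standard cost; the real figures are company data, not shown here.
items = ["item_03020", "item_03045", "item_03125", "item_03130", "item_03150", "item_03210"]
X = pd.DataFrame(rng.normal(size=(60, len(items))), columns=items)
cost = 100 + 3 * X["item_03020"] + 2 * X["item_03125"] + rng.normal(scale=1.0, size=60)

model = sm.OLS(cost, sm.add_constant(X)).fit()
print(model.summary())                       # t-tests flag the items driving cost variability
significant = model.pvalues[model.pvalues < 0.05].index.tolist()
print("Significant expense items:", significant)
```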

Keywords: consistent profitability, ferro alloys, slab product grades, regression analysis

Procedia PDF Downloads 72
31566 Competitiveness of African Countries through Open Quintuple Helix Model

Authors: B. G. C. Ahodode, S. Fekkaklouhail

Abstract:

Following the triple helix theory, this study aims to evaluate the effect of the innovation system on African countries' competitiveness while taking external contributions into account, given that developing countries (especially African countries) are characterized by weak innovation systems whose synergy operates more at the foreign level than at the domestic and global levels. To do this, we used correlation tests, parsimonious regression techniques, and panel estimation between 2013 and 2016. Results show that the degree of innovation synergy has a significant effect on competitiveness in Africa. Specifically, while the opening system (OPESYS) and social system (SOCSYS) contribute, in order of importance, 0.634 and 0.284 significant points of increase in the GCI (at the 1% level), the political system (POLSYS) and educational system (EDUSYS) increase it by only 0.322 and 0.169 at the 5% significance level, and the effect of the economic system (ECOSYS) on the Global Competitiveness Index is not significant.

Keywords: innovation system, innovation, competitiveness, Africa

Procedia PDF Downloads 71
31565 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units

Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz

Abstract:

Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection that carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and data from a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction in patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop a mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable importance methodologies were used to select the set of variables that make up the developed score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; if the patient is in the group with higher mortality, a one is assigned to the particular variable, otherwise a zero. These binary variables were used in a logistic regression (LR) model, and its coefficients were rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in a LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) also increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately identify a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
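
For illustration, a minimal sketch of the scoring workflow described above — dichotomize predictors, fit a logistic regression, round the coefficients to integer points, and calibrate the summed score with a second logistic regression — using simulated data (the variables and cut-offs here are made up, not the MIMIC-III covariates).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5650
# Synthetic stand-in for first-24-hour ICU variables (e.g. lactate, age, creatinine, ...).
X_raw = rng.normal(size=(n, 6))
logit = -1.0 + X_raw @ np.array([0.8, 0.6, 0.5, 0.3, 0.2, 0.1])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))          # one-year mortality

# 1) dichotomize each variable at a cut-off (here the median) so that 1 marks the
#    higher-mortality group, as the abstract describes.
X_bin = (X_raw > np.median(X_raw, axis=0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X_bin, y, test_size=0.3, random_state=0)

# 2) logistic regression on the binary variables; round coefficients to integer points.
lr = LogisticRegression().fit(X_tr, y_tr)
points = np.rint(lr.coef_.ravel()).astype(int)

# 3) the score is the sum of points for the variables the patient triggers.
score_tr = X_tr @ points
score_te = X_te @ points

# 4) a second logistic regression maps the score to a mortality probability.
calib = LogisticRegression().fit(score_tr.reshape(-1, 1), y_tr)
prob_te = calib.predict_proba(score_te.reshape(-1, 1))[:, 1]
print("validation AUC:", round(roc_auc_score(y_te, prob_te), 3))
```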

Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting

Procedia PDF Downloads 223
31564 The Effect of Artificial Intelligence on Construction Development

Authors: Shady Gamal Aziz Shehata

Abstract:

Difficulty in defining construction quality arises because perceptions differ according to the nature and requirements of the market, the different partners themselves, and the results they want. Quantitative methods were used in this constructivist research. A case-based study was conducted to assess the structures of positive attitudes and expectations in the context of quality improvement. A survey based on expert opinions was analyzed among construction organizations/companies operating in the construction industry in Pakistan. The financial strength, management structure, and construction experience of the construction companies formed the basis of their selection. A good quality concept is visible at the project level and is seen as the most valuable part of the construction project. Each quality improvement technique was expected to increase the user's profits by improving the efficiency of the construction project. The survey is useful for construction professionals to evaluate current construction concepts and expectations for the application of quality improvement techniques in construction projects.

Keywords: correlation analysis, lean construction tools, lean construction, logistic regression analysis, risk management, safety construction quality, expectation, improvement, perception

Procedia PDF Downloads 61
31563 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done with numerical models or experimental measurements, but the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has proven useful for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model was modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change per hour (ACH) conditions. The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period); then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, whereas the deposition rate from the mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
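
For illustration, a minimal sketch of a well-mixed room balance extended with a first-order deposition loss term, integrated over an emission period followed by a decay period; the source strength and deposition rate below are hypothetical values, not the chamber measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Chamber / exposure parameters (illustrative values in the spirit of the abstract).
V    = 0.512          # chamber volume, m^3
ACH  = 3.0            # air changes per hour
Q    = ACH * V        # ventilation flow, m^3/h
S    = 5.0e4          # particle emission rate, particles/h (hypothetical)
beta = 0.8            # first-order deposition loss rate, 1/h (settling + diffusion, assumed)

def wmr(t, C, emitting):
    """Well-mixed room balance: dC/dt = S/V - (Q/V) C - beta C, with S switched off in decay."""
    source = S / V if emitting else 0.0
    return source - (Q / V) * C - beta * C

# Emission period (0-2 h), then decay period (2-4 h) starting from the reached concentration.
emission = solve_ivp(wmr, (0, 2), [0.0], args=(True,), dense_output=True)
decay    = solve_ivp(wmr, (0, 2), [emission.y[0, -1]], args=(False,), dense_output=True)

t = np.linspace(0, 2, 5)
print("emission period C(t):", np.round(emission.sol(t)[0], 1))
print("decay period    C(t):", np.round(decay.sol(t)[0], 1))
```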

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
31562 Modelling the Indonesian Government Securities Yield Curve Using Nelson-Siegel-Svensson and Support Vector Regression

Authors: Jamilatuzzahro, Rezzy Eko Caraka

Abstract:

The yield curve is the plot of the yield to maturity of zero-coupon bonds against maturity. In practice, the yield curve is not observed directly but must be extracted from observed bond prices for a (usually incomplete) set of maturities. Many methodologies and theories exist for analyzing the yield curve. We use the Nelson-Siegel method, the Svensson extension, and the SVR method in order to construct and compare our zero-coupon yield curves. The objectives of this research were: (i) to study the adequacy of the NSS model and SVR for Indonesian government bond data, and (ii) to choose the best optimization or estimation method for the NSS model and SVR. To achieve these objectives, the research proceeded through the following steps: data preparation, data cleaning or filtering, modeling, and model evaluation.
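
For illustration, a minimal sketch of fitting the Nelson-Siegel-Svensson yield function to a handful of observed yields by nonlinear least squares; the maturities and yields below are hypothetical, not the Indonesian government bond data.

```python
import numpy as np
from scipy.optimize import curve_fit

def nss(t, b0, b1, b2, b3, tau1, tau2):
    """Nelson-Siegel-Svensson zero-coupon yield as a function of maturity t."""
    f1 = (1 - np.exp(-t / tau1)) / (t / tau1)
    f2 = f1 - np.exp(-t / tau1)
    f3 = (1 - np.exp(-t / tau2)) / (t / tau2) - np.exp(-t / tau2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

# Hypothetical observed zero-coupon yields (percent) at a few maturities (years).
maturity = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15, 20])
yields   = np.array([5.9, 6.0, 6.2, 6.5, 6.7, 7.0, 7.2, 7.4, 7.6, 7.7])

p0 = [7.5, -1.5, 1.0, 1.0, 1.5, 6.0]                 # starting values for the optimizer
bounds = ([-30, -30, -30, -30, 0.05, 0.05], [30, 30, 30, 30, 30, 30])
params, _ = curve_fit(nss, maturity, yields, p0=p0, bounds=bounds, maxfev=20000)
print("beta0..beta3, tau1, tau2 =", np.round(params, 3))
print("fitted 10y yield:", round(nss(10, *params), 3))
```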

Keywords: support vector regression, Nelson-Siegel-Svensson, yield curve, Indonesian government

Procedia PDF Downloads 246
31561 Assessment of Level of Sedation and Associated Factors Among Intubated Critically Ill Children in Pediatric Intensive Care Unit of Jimma University Medical Center: A Fourteen Months Prospective Observation Study, 2023

Authors: Habtamu Wolde Engudai

Abstract:

Background: Sedation can be provided to facilitate a procedure or to stabilize patients admitted to the pediatric intensive care unit (PICU). Sedation is often necessary to maintain optimal care for critically ill children requiring mechanical ventilation. However, sedation that is too deep or too light has its own adverse effects, and hence it is important to monitor the level of sedation and maintain an optimal level. Objectives: To assess the level of sedation and associated factors among intubated critically ill children admitted to the PICU of JUMC, Jimma. Methods: A prospective observational study was conducted in the PICU of JUMC, starting in September 2021, in 105 patients admitted to the PICU who were aged less than 14 years and had a GCS >8. Data were collected by residents and nurses working in the PICU, and data entry was done with EpiData Manager (version 4.6.0.2). Statistical analysis and the creation of charts were performed using SPSS version 26. Data are presented as means, percentages, and standard deviations. The assumptions of logistic regression were checked. To find potential predictors, bivariable logistic regression was used for each predictor and the outcome variable. A p value of <0.05 was considered statistically significant. Findings are presented using figures, AORs, percentages, and a summary table. Result: 105 critically ill children started on continuous or intermittent sedative drugs were included. Sedation level was assessed using a comfort scale three times per day. Based on these assessments, suboptimal sedation was found in 44.8% of patients at baseline, 36.2% at eight hours, and 24.8% at sixteen hours. There is a significant association between suboptimal sedation and the duration of stay on mechanical ventilation and the rate of unplanned extubation (p < 0.05); the Hosmer-Lemeshow test indicated adequate goodness of fit (p > 0.44).

Keywords: level of sedation, critically ill children, Pediatric intensive care unit, Jimma university

Procedia PDF Downloads 61
31560 Long-Term Indoor Air Monitoring for Students with Emphasis on Particulate Matter (PM2.5) Exposure

Authors: Seyedtaghi Mirmohammadi, Jamshid Yazdani, Syavash Etemadi Nejad

Abstract:

One of the main indoor air parameters in classrooms is dust pollution, and its impact depends on particle size and exposure duration. However, there is a lack of data about exposure to PM2.5 concentrations in rural-area classrooms. The objective of the current study was exposure assessment of PM2.5 for students in classrooms. One year of monitoring was carried out in fifteen schools by time-series sampling to evaluate indoor air PM2.5 in the rural district of Sari city, Iran. A hygrometer and thermometer were used to measure psychrometric parameters (temperature, relative humidity, and wind speed), and a real-time dust monitor (MicroDust Pro, Casella, UK) was used to monitor particulate matter (PM2.5) concentration. The results show that the mean indoor PM2.5 concentration in the studied classrooms was 135 µg/m³. The regression model indicated a positive correlation between indoor PM2.5 concentration and relative humidity, as well as with distance from the city center and classroom size. Meanwhile, the regression model revealed that indoor PM2.5 concentration, relative humidity, and dry bulb temperature were significant at the 0.05, 0.035, and 0.05 levels, respectively. A statistical predictive model for indoor PM2.5 concentration as a function of indoor psychrometric conditions was obtained from multiple regression modeling.

Keywords: classrooms, concentration, humidity, particulate matters, regression

Procedia PDF Downloads 337
31559 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', training on data to test specific hypotheses and predict an outcome from a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem to be tackled. Supervised learning involves a dataset whose outputs are already known; supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict outcomes for discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction of gender, age, grade, type of school, and self-esteem sentiments. The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.

Keywords: cyberbullying, evidence based practice, machine learning, social work research

Procedia PDF Downloads 169
31558 Neighborhood Linking Social Capital as a Predictor of Drug Abuse: A Swedish National Cohort Study

Authors: X. Li, J. Sundquist, C. Sjöstedt, M. Winkleby, K. S. Kendler, K. Sundquist

Abstract:

Aims: This study examines the association between the incidence of drug abuse (DA) and linking (communal) social capital, a theoretical concept describing the amount of trust between individuals and societal institutions. Methods: We present results from an 8-year population-based cohort study that followed all residents in Sweden aged 15-44 from 2003 through 2010, for a total of 1,700,896 men and 1,642,798 women. Social capital was conceptualized as the proportion of people in a geographically defined neighborhood who voted in local government elections. Multilevel logistic regression was used to estimate odds ratios (ORs) and the between-neighborhood variance. Results: We found robust associations between linking social capital (scored as a three-level variable) and DA in men and women. For men, the OR for DA in the crude model was 2.11 [95% confidence interval (CI) 2.02-2.21] for those living in areas with the lowest vs. highest level of social capital. After accounting for neighborhood-level deprivation, the OR fell to 1.59 (1.51-1.68), indicating that neighborhood deprivation lies in the pathway between linking social capital and DA. The ORs remained significant after accounting for age, sex, family income, marital status, country of birth, education level, and region of residence, and after further accounting for comorbidities and family history of comorbidities and of DA. For women, the OR decreased from 2.15 (2.03-2.27) in the crude model to 1.31 (1.22-1.40) in the final model, adjusted for multiple neighborhood-level and individual-level variables. Conclusions: Our study suggests that low linking social capital may have important independent effects on DA.
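
For illustration, a minimal sketch of estimating adjusted odds ratios with confidence intervals from a logistic regression on simulated data (the variables and effect sizes below are made up, not the Swedish register data); the multilevel part of the analysis is noted in the final comment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 20000
# Simulated stand-in: a low/medium/high indicator of low linking social capital,
# a neighborhood deprivation index, age, and a drug-abuse outcome.
df = pd.DataFrame({
    "low_social_capital": rng.integers(0, 3, n),     # 0 = highest capital, 2 = lowest
    "deprivation": rng.normal(size=n),
    "age": rng.integers(15, 45, n),
})
logit = -4 + 0.35 * df["low_social_capital"] + 0.4 * df["deprivation"]
df["drug_abuse"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["low_social_capital", "deprivation", "age"]])
res = sm.Logit(df["drug_abuse"], X).fit(disp=0)

or_table = pd.DataFrame({"OR": np.exp(res.params),
                         "CI low": np.exp(res.conf_int()[0]),
                         "CI high": np.exp(res.conf_int()[1])})
print(or_table.round(2))
# The paper's multilevel structure (random intercepts per neighborhood, which give the
# between-neighborhood variance) needs a dedicated mixed-effects logistic routine,
# e.g. statsmodels' BinomialBayesMixedGLM or lme4::glmer in R.
```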

Keywords: drug abuse, social linking capital, environment, family

Procedia PDF Downloads 473
31557 Using Predictive Analytics to Identify First-Year Engineering Students at Risk of Failing

Authors: Beng Yew Low, Cher Liang Cha, Cheng Yong Teoh

Abstract:

Due to a lack of continual assessment or grade-related data, identifying first-year engineering students in a polytechnic education who are at risk of failing is challenging. Our experience over the years tells us that there is no strong correlation between having good entry grades in Mathematics and the Sciences and excelling in hardcore engineering subjects; hence, identifying students at risk of failure cannot be based on entry grades in Mathematics and the Sciences alone. These factors compound the difficulty of early identification and intervention. This paper describes the development of a predictive analytics model for the early detection of students at risk of failing and evaluates its effectiveness. Data from continual assessments conducted in term one, supplemented by data on student psychological profiles such as interests and study habits, were used. Three classification techniques, namely Logistic Regression, k-Nearest Neighbour, and Random Forest, were used in our predictive model. Based on our findings, Random Forest was determined to be the strongest predictor, with an Area Under the Curve (AUC) value of 0.994; correspondingly, its Accuracy, Precision, Recall, and F-Score were also the highest among the three classifiers. Using this Random Forest classification technique, students at risk of failure can be identified at the end of term one and then assigned to a Learning Support Programme at the beginning of term two. This paper presents our findings and proposes further improvements that can be made to the model.
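
For illustration, a minimal sketch of comparing the three classifiers named above by cross-validated AUC; the features and at-risk labels below are synthetic stand-ins, not the polytechnic's assessment and psychological-profile data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for term-one assessment marks plus psychological-profile features,
# with the binary target "at risk of failing".
X, y = make_classification(n_samples=800, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "k-Nearest Neighbour": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```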

Keywords: continual assessment, predictive analytics, random forest, student psychological profile

Procedia PDF Downloads 136
31556 An Alternative Richards’ Growth Model Based on Hyperbolic Sine Function

Authors: Samuel Oluwafemi Oyamakin, Angela Unna Chukwu

Abstract:

The Richards growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution of this equation is called the hyperbolic Richards growth model, which transforms the solution from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model, an approach that mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R²), Mean Absolute Error (MAE), and Mean Square Error (MSE). The Kolmogorov-Smirnov test and the Shapiro-Wilk test were also used to examine the behavior of the error term for possible violations. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Richards nonlinear growth model than under the classical Richards growth model.
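
For illustration, a minimal sketch of fitting a classical Richards curve and a hyperbolic-sine-warped variant to height-age data and comparing their errors; the hyperbolic form and the observations below are assumed for demonstration only and are not the paper's exact formulation or Pinus caribaea data.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, m, t0):
    """Classical Richards growth curve (a generalized logistic)."""
    return A * (1 + m * np.exp(-k * (t - t0))) ** (-1 / m)

def hyperbolic_richards(t, A, k, m, t0, theta):
    """Illustrative hyperbolic variant: the time axis is warped by sinh(theta*t)/theta.
    This is an assumed form for demonstration, not the paper's exact model."""
    return richards(np.sinh(theta * t) / theta, A, k, m, t0)

# Hypothetical tree height (m) versus age (years) observations.
age    = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
height = np.array([1.8, 4.0, 7.1, 10.2, 13.0, 15.1, 16.8, 17.9, 18.6, 19.0])

p_r, _ = curve_fit(richards, age, height, p0=[20, 0.3, 1.0, 8.0],
                   bounds=([1, 0.01, 0.05, 0], [100, 5, 10, 40]), maxfev=20000)
p_h, _ = curve_fit(hyperbolic_richards, age, height, p0=[20, 0.3, 1.0, 8.0, 0.05],
                   bounds=([1, 0.01, 0.05, 0, 1e-3], [100, 5, 10, 40, 1.0]), maxfev=20000)

for name, f, p in [("Richards", richards, p_r), ("hyperbolic", hyperbolic_richards, p_h)]:
    resid = height - f(age, *p)
    print(f"{name:11s} RMSE = {np.sqrt(np.mean(resid**2)):.3f}")
```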

Keywords: height, diameter at breast height, DBH, hyperbolic sine function, Pinus caribaea, Richards' growth model

Procedia PDF Downloads 395
31555 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods of forecasting the tourism market pay more attention to the accuracy of the forecasts while ignoring the feasibility and operability of the forecasting process, which makes it difficult to test the results scientifically. Using a linear regression model, this paper attempts to construct a scientific evaluation system for predicted values that both ensures the accuracy and stability of the predicted values and ensures the feasibility and operability of the forecasting process. The findings show that such a scientific evaluation system can implement the scientific concept of development and the harmonious, coordinated development of man and nature.

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 333
31554 Determining Antecedents of Employee Turnover: A Study on Blue Collar vs White Collar Workers on Macro Level

Authors: Evy Rombaut, Marie-Anne Guerry

Abstract:

Predicting voluntary turnover of employees is an important topic of study, both in academia and in industry. Researchers try to uncover determinants of turnover for a broader understanding and possible prevention. In the current study, we use a data-set-based approach to reveal determinants of turnover that differ for blue- and white-collar workers. This approach made it possible to study actual turnover for more than 500,000 employees in 15,692 Belgian corporations. We use logistic regression to calculate individual turnover probabilities and test the goodness of our model with the AUC (area under the ROC curve) method. The results of the study confirm the relationship of known determinants to employee turnover, such as age, seniority, pay, and work distance. In addition, the study uncovers unknown and verifies known differences between blue- and white-collar workers, showing opposite relationships to turnover for gender, marital status, number of children, nationality, and pay.

Keywords: employee turnover, blue collar, white collar, dataset analysis

Procedia PDF Downloads 294
31553 Using Plant Oils in Total Mixed Ration on Voluntary Feed Intake and Blood Metabolites of Crossbred Thai Native X American Brahman Cattle

Authors: Wantanee Polviset, N. Prakobsaeng, N. Wetchakama, C. Yuangklang

Abstract:

The aim of this study was to evaluate the effect of soybean oil, palm oil, and sunflower oil supplementation in a total mixed ration on voluntary feed intake, dry matter (DM) digestibility, and blood metabolites in crossbred Thai native x American Brahman cattle. Three Thai native x American Brahman cattle, one year old with a liveweight of 116±22.59 kg, were randomly assigned according to a 3 x 3 Latin square design. Each feeding period lasted 21 days, and the three dietary treatments were soybean oil, palm oil, and sunflower oil supplementation at 5%. During the experimental periods, all cattle were fed a total mixed ration with a roughage-to-concentrate ratio of 40:60, with rice straw used as the roughage source. The results revealed that voluntary feed intake (kg DM/head/day) and DM intake as %BW were not affected (P>0.05), whereas the percentage of dry matter digestibility was greater with soybean oil supplementation (P<0.01). Blood glucose, blood urea nitrogen, cholesterol, triglyceride, high-density lipoprotein, and low-density lipoprotein in plasma were similar among treatments. Based on this study, supplementing 5% soybean oil in total mixed ration (TMR) diets is suitable for beef cattle without any adverse effect on dry matter digestibility or blood metabolites.

Keywords: plant oils, feed intake, blood metabolites, crossbred Thai native x Brahman cattle

Procedia PDF Downloads 322
31552 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified "friend controls" and the other using a random sample of non-drug users (controls) who then identified "friend cases." Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrap method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
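
For illustration, a minimal sketch of the bootstrap precision check described above, using ordinary (unconditional) logistic regression on simulated data for simplicity — the paper itself uses conditional logistic regression on matched cases and controls, and its survey data are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
# Synthetic case-control style data: two risk factors and a drug-abuse indicator.
X = rng.normal(size=(n, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * X[:, 0] + 0.5 * X[:, 1]))))

boot_betas = []
for _ in range(100):                                   # 100 bootstrap resamples
    idx = rng.integers(0, n, n)                        # sample rows with replacement
    res = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
    boot_betas.append(res.params)

boot_betas = np.asarray(boot_betas)
print("bootstrap SD of each coefficient:", np.round(boot_betas.std(axis=0), 3))
# Wide, unstable bootstrap SDs (as in the snowball sample) point to coefficients that
# cannot be trusted; narrow SDs (as in the random sample) indicate a stable model.
```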

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 493
31551 Mixed Hydrotropic Zaleplon Oral Tablets: Formulation and Neuropharmacological Effect on Plasma GABA Level

Authors: Ghada A. Abdelbary, Maha M. Amin, Mostafa Abdelmoteleb

Abstract:

Zaleplon (ZP) is a poorly soluble non-benzodiazepine hypnotic drug indicated for the short-term treatment of insomnia, with a bioavailability of about 30%. The aim of the present study is to enhance the solubility, and consequently the bioavailability, of ZP using hydrotropic agents (HA). Phase solubility diagrams of ZP in the presence of different molar concentrations of HA (sodium benzoate, urea, ascorbic acid, resorcinol, nicotinamide, and piperazine) were constructed. ZP/sodium benzoate and ZP/resorcinol microparticles were prepared by melt, solvent evaporation, and melt-evaporation techniques, followed by XRD. Directly compressed mixed hydrotropic ZP tablets of sodium benzoate and resorcinol in different weight ratios were prepared and evaluated against the commercially available tablets (Sleep aid® 5 mg). The effect of shelf and accelerated stability storage (40°C ± 2°C/75% RH ± 5% RH) on the optimum tablet formula (F5) over six months was studied. The enhancement of ZP solubility follows the order: resorcinol > sodium benzoate > ascorbic acid > piperazine > urea > nicotinamide, with about 350- and 2000-fold increases using 1 M sodium benzoate and resorcinol, respectively. ZP/HA microparticles follow the order: solvent evaporation > melt-solvent evaporation > melt > physical mixture, which was further confirmed by the complete conversion of ZP into the amorphous form. The mixed hydrotropic tablet formula (F5), composed of ZP/(resorcinol:sodium benzoate 4:1 w/w) microparticles prepared by solvent evaporation, exhibits in-vitro dissolution of 31.7±0.11% after five minutes (Q5min) compared to 10.0±0.10% for Sleep aid® (5 mg). F5 showed a significantly higher plasma GABA concentration of 122.5±5.5 mg/mL compared to 118±1.00 and 27.8±1.5 mg/mL for Sleep aid® (5 mg) and a control receiving only saline, respectively, suggesting a higher neuropharmacological effect of ZP following hydrotropic solubilization.

Keywords: zaleplon, hydrotropic solubilization, plasma GABA level, mixed hydrotropy

Procedia PDF Downloads 446
31550 Self-Image of Police Officers

Authors: Leo Carlo B. Rondina

Abstract:

Self-image is an important factor in improving personnel self-esteem. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen by random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates a statistically significant effect for ages 21-40, meaning that the respondent's age statistically improves self-image.

Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect

Procedia PDF Downloads 289
31549 Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations

Authors: Joseph Thomas Eghwerido, Julian I. Mbegbu

Abstract:

In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore, and predict the Earth's response to natural and human-induced environmental changes. In the analysis of spatio-temporal ecological and environmental studies, the spatial parameters of interest are always heterogeneous, which often negates the assumption of stationarity. Hence, the dispersion and transport of atmospheric pollutants, landscape or topographic effects, and weather patterns all depend on a good estimate of the spatial covariance. The generalized linear mixed model, although linear in the expected-value parameters, has a likelihood that varies nonlinearly as a function of the covariance parameters. As a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need good estimates at the sampled locations. Geostatistical methods for this spatial problem assume covariance stationarity (a locally defined covariance that is uniform in space), which is not always valid because spatial processes often exhibit nonstationary covariance and hence require a globally defined covariance. We consider different existing methods for estimating the spatial covariance of space-time processes at unsampled locations, where the stationary covariance changes with location across multiple time sets, together with some asymptotic properties, and discuss their limitations.
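
For illustration, a minimal sketch of the stationary-covariance workflow the survey starts from: an exponential covariance function and simple kriging weights for prediction at an unsampled location (the coordinates, values, and covariance parameters below are hypothetical; nonstationary approaches generalize this setup).

```python
import numpy as np

def exp_cov(h, sill=1.0, rng_param=10.0):
    """Stationary exponential covariance C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng_param)

# Hypothetical sampled locations (2-D coordinates) and observed values.
coords = np.array([[0, 0], [5, 2], [2, 8], [9, 9], [4, 4]], dtype=float)
values = np.array([1.2, 0.8, 1.5, 0.6, 1.1])
target = np.array([3.0, 3.0])                         # unsampled prediction location
mean   = values.mean()                                # assume a known (simple-kriging) mean

d_ij = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
d_i0 = np.linalg.norm(coords - target, axis=-1)

C  = exp_cov(d_ij)                                    # covariances between samples
c0 = exp_cov(d_i0)                                    # covariances between samples and target
w  = np.linalg.solve(C, c0)                           # simple kriging weights

prediction = mean + w @ (values - mean)
variance   = exp_cov(0.0) - w @ c0                    # kriging variance
print(f"prediction = {prediction:.3f}, kriging variance = {variance:.3f}")
```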

Keywords: parametric, nonstationary, Kernel, Kriging

Procedia PDF Downloads 256
31548 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked; then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability, using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single linear models, with the highest coefficient of determination (R², 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 88
31547 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies; it also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 84
31546 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars on the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, which include Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The forecasts provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models can guide an investor in a market setting who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
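
For illustration, a minimal sketch of the recursive expanding-window evaluation described above, comparing Ridge and LASSO forecasts of the monthly equity premium against the historical-average benchmark via the out-of-sample R²; the predictors and premia below are simulated, not the study's data.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(6)
T, k = 360, 10                                        # 30 years of monthly data, 10 predictors
X = rng.normal(size=(T, k))                           # stand-ins for dividend yield, T-bill rate, ...
premium = 0.004 + X @ (0.002 * rng.normal(size=k)) + rng.normal(scale=0.04, size=T)

def expanding_window_forecasts(model, X, y, start=120):
    """Refit the model each month on all data up to t-1 and predict month t."""
    preds = []
    for t in range(start, len(y)):
        model.fit(X[:t], y[:t])
        preds.append(model.predict(X[t:t + 1])[0])
    return np.array(preds)

y_out = premium[120:]
bench = np.array([premium[:t].mean() for t in range(120, T)])   # historical-average benchmark

for name, model in [("Ridge", Ridge(alpha=1.0)), ("LASSO", Lasso(alpha=1e-4, max_iter=50000))]:
    pred = expanding_window_forecasts(model, X, premium)
    r2_oos = 1 - np.sum((y_out - pred) ** 2) / np.sum((y_out - bench) ** 2)
    print(f"{name:6s} out-of-sample R^2 vs historical average: {r2_oos:.3f}")
```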

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 108
31545 Spectral Mixture Model Applied to Cannabis Parcel Determination

Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara

Abstract:

Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for the identification of specific plants like cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different successional stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier; most of the time, classification distinguishes only between trees and annual crops or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixture distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five different land use types, including sunflower, maize, bare soil, and cannabis, were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue-Green-Yellow-Red-RedEdge-NIR1-NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall accuracies of the classification, the Gaussian mixture distribution model was found to be very successful at the image classification task. This approach is sensitive enough to capture illegal cannabis planting areas in a large plain and can also be used for monitoring and detection of illegal cannabis planting areas from their spectral reflectance.
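
For illustration, a minimal sketch of the Gaussian-mixture discriminant idea (fit a small Gaussian mixture per land-cover class and assign pixels to the class with the highest likelihood plus log prior); the synthetic band features below stand in for the Worldview-2 reflectance ratios, and the paper's constrained GMDA is more elaborate than this plain version.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic spectral features (e.g. band ratios) for three classes: cannabis, maize, bare soil.
n_per, n_feat = 200, 8
X = np.vstack([rng.normal(loc=mu, scale=0.7, size=(n_per, n_feat))
               for mu in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], n_per)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

# Fit one small Gaussian mixture per class (the generative, discriminant-analysis view).
gmms, priors = [], []
for c in np.unique(y_tr):
    gmms.append(GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == c]))
    priors.append(np.mean(y_tr == c))

# Classify test pixels by the largest class-conditional log-likelihood + log prior.
scores = np.column_stack([gmm.score_samples(X_te) + np.log(p)
                          for gmm, p in zip(gmms, priors)])
accuracy = np.mean(scores.argmax(axis=1) == y_te)
print(f"overall accuracy = {accuracy:.3f}")
```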

Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels

Procedia PDF Downloads 197
31544 Effects of Cash Transfers Mitigation Impacts in the Face of Socioeconomic External Shocks: Evidence from Egypt

Authors: Basma Yassa

Abstract:

Evidence on the effectiveness of cash transfers in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been used continually, especially in Egypt, as the main social protection tool in response to the recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. The results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to a significant decline in several household-level welfare outcomes and that Takaful cash transfers have a significant positive impact in mitigating negative shock impacts, especially on households' debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate large, significant effects: the incidence of household debt decreased by up to 12.4 percent and debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership: the RDD model shows significant positive effects on total and productive asset ownership, but it failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with difference-in-differences (DID) estimates using the nationally representative ELMPS panel data rounds (2018/2024). Our initial analysis suggests that conditional cash transfers are effective in buffering negative shock impacts on certain welfare indicators, even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
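
For illustration, a minimal sketch of a sharp regression discontinuity estimate of the kind described above: a local linear regression on either side of an eligibility cutoff, estimating the jump in debt incidence at the threshold. The running variable, cutoff, bandwidth, and effect size below are all hypothetical, not the Takaful targeting rule or survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 4000
# Synthetic stand-in: a proxy-means-test score determines eligibility below a cutoff of 0.
score   = rng.uniform(-1, 1, n)                      # centred running variable
treated = (score < 0).astype(int)                    # eligible (beneficiary) households
# Outcome: debt incidence, with a hypothetical drop of 0.12 at the cutoff for beneficiaries.
p_debt  = 0.5 + 0.1 * score - 0.12 * treated
debt    = rng.binomial(1, np.clip(p_debt, 0, 1))

h = 0.25                                             # bandwidth around the cutoff
mask = np.abs(score) < h
X = np.column_stack([treated, score, treated * score])[mask]   # local linear, separate slopes
res = sm.OLS(debt[mask], sm.add_constant(X)).fit(cov_type="HC1")
print(f"estimated effect on debt incidence at the cutoff: {res.params[1]:.3f} "
      f"(robust SE {res.bse[1]:.3f})")
```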

Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design

Procedia PDF Downloads 47
31543 Determination of Small Shear Modulus of Clayey Sand Using Bender Element Test

Authors: R. Sadeghzadegan, S. A. Naeini, A. Mirzaii

Abstract:

In this article, the results of a carefully conducted laboratory test program are presented to determine the small-strain shear modulus of sand mixed with kaolinite contents ranging from zero to 30%. This was achieved experimentally using a triaxial cell equipped with bender elements. Results indicate that the small-strain shear modulus increases as clay content decreases and as effective confining pressure increases. The stress exponent in the power model regression analysis was not sensitive to the clay content for any of the sand-clay mixtures, while coefficient A was directly affected by changes in clay content.
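
For illustration, a minimal sketch of fitting the power model referred to above, G0 = A * (p')^n, to shear modulus versus effective confining pressure data; the pressures and moduli below are hypothetical values for one mixture, not the bender element test results.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(p, A, n):
    """Power model for small-strain shear modulus: G0 = A * (p')**n."""
    return A * p ** n

# Hypothetical bender-element results: effective confining pressure p' (kPa) vs G0 (MPa).
p_eff = np.array([50.0, 100.0, 200.0, 400.0])
G0    = np.array([45.0, 62.0, 88.0, 123.0])

(A, n), _ = curve_fit(power_model, p_eff, G0, p0=[5.0, 0.5])
print(f"coefficient A = {A:.2f}, stress exponent n = {n:.3f}")
```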

Keywords: small shear modulus, bender element test, plastic fines, sand

Procedia PDF Downloads 474