Search results for: the linear regression model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20127

19737 Liquefaction Susceptibility of Tailing Storage Facility-Comparison of National Centre for Earthquake Engineering Research and Finite Element Methods

Authors: Mehdi Ghatei, Masoomeh Lorestani

Abstract:

Upstream Tailings Storage Facilities (TSFs) may experience slope instabilities due to soil liquefaction, especially in regions known to be seismically active. In this study, the liquefaction susceptibility of an upstream-raised TSF in Western Australia was assessed using two different approaches. The first approach assessed liquefaction susceptibility using Cone Penetration Tests with pore pressure measurement (CPTu) as described by the National Centre for Earthquake Engineering Research (NCEER). This assessment was based on the four CPTu tests that were conducted on the perimeter embankment of the TSF. The second approach used the Finite Element (FE) method with an equivalent linear model to predict the undrained cyclic behavior, the pore water pressure and the liquefaction of the materials. The tailings parameters were estimated from the CPTu profiles and from laboratory tests, and the cyclic parameters were estimated from the literature where test results of similar material were available. The results showed good agreement in the predicted liquefaction susceptibility of the tailings material between the NCEER approach and the FE method with the equivalent linear model.

Keywords: liquefaction, CPTu, NCEER, finite element method, equivalent linear model

Procedia PDF Downloads 266
19736 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Predication of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more pathologies whose diagnosis and treatment can only be carried out using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables, and a forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; such preference models provide a platform on which product developers can decide the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analyses show that these fuzzy regression methods are commonly used to model client preferences. We propose to test the strength of an exponential regression model against a linear regression model for the prediction of future data.
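As a rough illustration of the proposed comparison between an exponential regression model and a linear one, the following Python sketch fits both forms to a synthetic, exponentially growing indicator and compares their R²; the data and growth rate are invented, not the authors' health-care records.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for an exponentially growing health-care indicator
rng = np.random.default_rng(0)
t = np.arange(1, 25)                                    # months
y = 50 * np.exp(0.08 * t) * rng.lognormal(0, 0.05, t.size)

# Linear model: y = a + b*t
lin = stats.linregress(t, y)
y_lin = lin.intercept + lin.slope * t

# Exponential model fitted in log space: ln(y) = ln(a) + b*t
expfit = stats.linregress(t, np.log(y))
y_exp = np.exp(expfit.intercept + expfit.slope * t)

def r2(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(f"linear R^2      : {r2(y, y_lin):.3f}")
print(f"exponential R^2 : {r2(y, y_exp):.3f}")
```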

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 273
19735 Use of Transportation Networks to Optimize The Profit Dynamics of the Product Distribution

Authors: S. Jayasinghe, R. B. N. Dissanayake

Abstract:

Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing transportation cost. The model covers transportation from decentralized production plants to a centralized distribution centre and then distribution to island-wide agencies, with customer satisfaction treated as a requirement. The company produces 9 types of food items in 82 varieties and 4 types of non-food items in 34 varieties. Among the 6 production plants, 4 are located near the city of Mawanella and the other 2 in Galewala and Anuradhapura, which are 80 km and 150 km away from Mawanella respectively. The warehouse located in Mawanella was the main production plant and also the only distribution plant; it distributes manufactured products to 39 agencies island-wide. Average demand values were calculated from the average quantities of goods over 6 consecutive months, from May 2013 to October 2013. The following constraints were used as the necessary requirements of the model: there is one source, there are 39 destinations, and total supply equals total demand across the agencies. Using the transport cost per kilometre, the total transport cost was calculated, and the model was formulated in terms of distance and distribution flow. Network optimization and linear programming techniques were used to formulate the model, and Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil customer requirements for a month, which is considerably less than the cost observed without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers. The company currently satisfies customer requirements by 85%; this satisfaction can be increased to 97% by using the model. The model can therefore be used by other similar companies to reduce their transportation costs.
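A toy version of the transportation formulation described above can be solved with scipy's linear programming interface; the plants, agencies, costs, supplies and demands below are invented and far smaller than the 39-agency network in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy balanced transportation problem: 2 plants supply 3 agencies (all figures assumed).
cost = np.array([[4.0, 6.0, 9.0],      # Rs per unit, plant 0 -> agencies
                 [5.0, 3.0, 7.0]])     # Rs per unit, plant 1 -> agencies
supply = np.array([220.0, 180.0])
demand = np.array([120.0, 150.0, 130.0])

m, n = cost.shape
c = cost.ravel()                       # decision vector x[i*n + j] = units from plant i to agency j

# Row sums equal supply, column sums equal demand (balanced problem).
A_eq, b_eq = [], []
for i in range(m):                     # supply constraints
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                     # demand constraints
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n), method="highs")
print("minimum transport cost:", res.fun)
print("shipment plan:\n", res.x.reshape(m, n))
```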

Keywords: mathematical model, network optimization, linear programming

Procedia PDF Downloads 342
19734 Risk Factors for High Resistance of Ciprofloxacin Against Escherichia coli in Complicated Urinary Tract Infection

Authors: Liaqat Ali, Khalid Farooq, Shafieullah Khan, Nasir Orakzai, Qudratullah

Abstract:

Objectives: To determine the risk factors for high resistance to ciprofloxacin in complicated urinary tract infections. Materials and Methods: This analytical study was conducted in the Department of Urology (Team 'C') at the Institute of Kidney Diseases, Hayatabad, Peshawar, from 1st June 2012 to 31st December 2012. A total of 100 patients with complicated UTI were included in the study. Multivariate analysis and linear regression were performed to detect risk factors. All data were recorded on a structured proforma and analyzed in SPSS version 17. Results: The mean age of the patients was 55.6 years (range 3-82 years); 62 patients were male and 38 were female. 66 E. coli isolates were sensitive to ciprofloxacin, while 34 isolates were resistant. Using multivariate analysis and linear regression, increasing age above 50 (p=0.002), history of urinary catheterization especially for bladder outflow obstruction (p=0.001), previous multiple use of ciprofloxacin (p=0.001) and poor brand of ciprofloxacin were found to be independent risk factors for high ciprofloxacin resistance. Conclusion: UTI is a common illness across the globe, with an increasing trend of antimicrobial resistance to ciprofloxacin in E. coli in complicated UTI. The risk factors for emerging resistance are increasing age, urinary catheterization, multiple use of ciprofloxacin and poor brand of ciprofloxacin.

Keywords: urinary tract infection, ciprofloxacin, urethral catheterization, antimicrobial resistance

Procedia PDF Downloads 345
19733 The Influence of Interest, Beliefs, and Identity with Mathematics on Achievement

Authors: Asma Alzahrani, Elizabeth Stojanovski

Abstract:

This study investigated factors that influence mathematics achievement based on a sample of ninth-grade students (N = 21,444) from the High School Longitudinal Study of 2009 (HSLS09). Key aspects studied included efficacy in mathematics, interest in and enjoyment of mathematics, identity with mathematics, and future utility beliefs, and how these influence mathematics achievement. The predictability of mathematics achievement from these factors was assessed using correlation coefficients and multiple linear regression. Spearman rank correlations and multiple regression analyses indicated positive and statistically significant relationships between the explanatory variables (mathematics efficacy, identity with mathematics, interest, and future utility beliefs) and the response variable, achievement in mathematics.
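A minimal sketch of the analysis pipeline described above (Spearman rank correlations followed by multiple linear regression) is shown below using scipy and statsmodels; the records are randomly generated stand-ins, since the HSLS09 data are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Synthetic stand-in for HSLS09-style records
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "efficacy": rng.normal(0, 1, n),
    "interest": rng.normal(0, 1, n),
    "identity": rng.normal(0, 1, n),
    "utility":  rng.normal(0, 1, n),
})
df["achievement"] = (50 + 4 * df["efficacy"] + 3 * df["identity"]
                     + 2 * df["interest"] + 1.5 * df["utility"] + rng.normal(0, 5, n))

# Spearman rank correlation of each predictor with achievement
for col in ["efficacy", "interest", "identity", "utility"]:
    rho, p = spearmanr(df[col], df["achievement"])
    print(f"{col:10s} rho={rho:.2f}  p={p:.3g}")

# Multiple linear regression of achievement on all four predictors
X = sm.add_constant(df[["efficacy", "interest", "identity", "utility"]])
model = sm.OLS(df["achievement"], X).fit()
print(model.summary())
```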

Keywords: mathematics achievement, math efficacy, mathematics interest, influencing factors

Procedia PDF Downloads 141
19732 Analysis of Spatial Heterogeneity of Residential Prices in Guangzhou: An Actual Study Based on Point of Interest Geographically Weighted Regression Model

Authors: Zichun Guo

Abstract:

Guangzhou's house prices have long been lower than those of the other three major cities. With the gradual increase in Guangzhou's house prices, the factors influencing them have gradually attracted attention. This paper uses house price data and POI (Point of Interest) data to explore the distribution of house prices and their influencing factors by applying the Kriging spatial interpolation method and a geographically weighted regression model in ArcGIS. The results show that the interpolated house prices have a significant relationship with the economic development and development potential of a region, and that different POI types have different impacts on house price growth in different regions.
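The geographically weighted regression step can be sketched outside ArcGIS as a locally weighted least-squares fit; the following Python example uses invented coordinates, prices and a single assumed POI covariate, so it only illustrates the mechanics, not the paper's results.

```python
import numpy as np

# GWR sketch: at each location, a weighted least-squares fit is computed,
# with Gaussian kernel weights based on distance to that location.
rng = np.random.default_rng(2)
n = 200
coords = rng.uniform(0, 10, (n, 2))                  # assumed x/y locations
poi_density = rng.poisson(5, n).astype(float)        # assumed POI covariate
price = 20 + 1.5 * poi_density + 0.8 * coords[:, 0] + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), poi_density])       # intercept + POI term
bandwidth = 2.0                                      # assumed kernel bandwidth

local_coefs = np.empty((n, 2))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    Wsqrt = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(Wsqrt * X, Wsqrt[:, 0] * price, rcond=None)
    local_coefs[i] = beta

print("POI coefficient varies across space:",
      local_coefs[:, 1].min().round(2), "to", local_coefs[:, 1].max().round(2))
```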

Keywords: POI, house price, spatial heterogeneity, Guangzhou

Procedia PDF Downloads 41
19731 Developing Performance Model for Road Side Elements Receiving Periodic Maintenance

Authors: Ayman M. Othman, Hassan Y. Ahmed, Tallat A. Ali

Abstract:

Inadequate maintenance programs and funds allocated for highway networks in the developed countries have led to fast deterioration of road side elements. Therefore, this research focuses on developing a performance model for periodic maintenance activities on road side elements. Road side elements that receive periodic maintenance include earthen shoulders, road signs and traffic markings. Using the level-of-service concept, the developed model can determine the optimal periodic maintenance intervals for these elements based on a selected level of service compatible with the available periodic maintenance budget. Data on the time periods for the progressive deterioration stages of the chosen elements were collected by interviewing ten maintenance experts in Aswan, Sohag and Assiut cities. The time in months corresponding to 10%, 25%, 40%, 50%, 75%, 90% and 100% deterioration of each road side element was estimated based on the experts' opinions. Least-squares regression analysis showed that a power function best fits earthen shoulder edge drop-off and road sign damage over time. It was also evident that the progressive dirtiness of road signs could be represented by a quadratic function, and that a linear function could represent the paint degradation of both traffic markings and road signs. Actual measurements of earthen shoulder edge drop-off agree considerably with the developed model.

Keywords: deterioration, level of service, periodic maintenance, performance model, road side element

Procedia PDF Downloads 565
19730 Factorial Design Analysis for Quality of Video on MANET

Authors: Hyoup-Sang Yoon

Abstract:

The quality of video transmitted over mobile ad hoc networks (MANETs) can be influenced by several factors, including the protocol layers and the parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate how a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2k factorial design, we quantify the main and interactive effects of 7 factors on a response metric (the mean opinion score (MOS), calculated from PSNR with the Evalvid package). We then develop a first-order linear regression model between the influential factors and the performance metric.
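A small sketch of the 2^k factorial workflow follows: a full 2^7 design in coded units, a simulated response standing in for the Evalvid MOS values, and a first-order model with two-factor interactions fitted by least squares. The factor names and effect sizes are assumptions for illustration only.

```python
import itertools
import numpy as np

# Full 2^7 factorial design (coded levels -1/+1)
factors = ["pkt_rate", "pkt_size", "speed", "pause", "routing", "mac", "queue"]  # assumed names
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))         # 128 runs

# Simulated MOS response (not Evalvid output)
rng = np.random.default_rng(3)
mos = (3.0 + 0.4 * design[:, 0] - 0.3 * design[:, 2]
       + 0.2 * design[:, 0] * design[:, 2] + rng.normal(0, 0.1, len(design)))

# Model matrix: intercept, main effects, all two-factor interactions
cols = [np.ones(len(design))] + [design[:, i] for i in range(7)]
names = ["intercept"] + factors
for i, j in itertools.combinations(range(7), 2):
    cols.append(design[:, i] * design[:, j])
    names.append(f"{factors[i]}*{factors[j]}")
Xm = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(Xm, mos, rcond=None)
for name, b in sorted(zip(names, beta), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:20s} effect estimate {b:+.3f}")
```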

Keywords: evalvid, full factorial design, mobile ad hoc networks, ns-2

Procedia PDF Downloads 408
19729 Quantitative Structure-Activity Relationship Modeling of Detoxication Properties of Some 1,2-Dithiole-3-Thione Derivatives

Authors: Nadjib Melkemi, Salah Belaidi

Abstract:

Quantitative Structure-Activity Relationship (QSAR) studies have been performed on nineteen molecules of 1,2-dithiole-3-thione analogues. The compounds used are potent inducers of enzymes involved in the maintenance of reduced glutathione pools as well as of phase-2 enzymes important to electrophile detoxication. A multiple linear regression (MLR) procedure was used to model the relationships between molecular descriptors and the detoxication properties of the 1,2-dithiole-3-thione derivatives. The predictivity of the model was estimated by cross-validation with the leave-one-out method. Our results suggest QSAR models based on the following descriptors: qS2, qC3, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the specific activity of quinone reductase; and qS1, qS2, qC3, qC4, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the production of growth hormone. To confirm the predictive power of the models, an external set of molecules was used. High correlation between experimental and predicted activity values was observed, indicating the validity and good quality of the derived QSAR models.
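The MLR-with-leave-one-out workflow can be sketched with scikit-learn as follows; the descriptor matrix and activities are random placeholders, since the nineteen compounds' descriptor values are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Synthetic stand-in for 19 compounds x 11 descriptors (qS2, qC3, ..., EHOMO)
rng = np.random.default_rng(4)
X = rng.normal(size=(19, 11))
activity = X @ rng.normal(size=11) + rng.normal(0, 0.3, 19)

# Fit the multiple linear regression on all compounds
mlr = LinearRegression().fit(X, activity)
print("fitted R^2:", round(r2_score(activity, mlr.predict(X)), 3))

# Leave-one-out cross-validation, as used to estimate predictivity (Q^2)
loo_pred = cross_val_predict(LinearRegression(), X, activity, cv=LeaveOneOut())
print("LOO Q^2   :", round(r2_score(activity, loo_pred), 3))
```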

Keywords: QSAR, quinone reductase activity, production of growth hormone, MLR

Procedia PDF Downloads 344
19728 Plackett-Burman Design to Evaluate the Influence of Operating Parameters on Anaerobic Orthophosphate Release from Enhanced Biological Phosphorus Removal Sludge

Authors: Reza Salehi, Peter L. Dold, Yves Comeau

Abstract:

The aim of the present study was to investigate the effect of a total of 6 operating parameters, namely pH (X1), temperature (X2), stirring speed (X3), chemical oxygen demand (COD) (X4), volatile suspended solids (VSS) (X5) and time (X6), on anaerobic orthophosphate release from enhanced biological phosphorus removal (EBPR) sludge. An 8-run Plackett-Burman design was applied, and statistical analysis of the experimental data was performed using the Minitab 16.2.4 software package. The analysis of variance (ANOVA) results revealed that temperature, COD, VSS and time had significant effects, with p-values of less than 0.05, whereas pH and stirring speed were identified as non-significant parameters that nevertheless influenced orthophosphate release from the EBPR sludge. The first-order multiple linear regression model relating orthophosphate release from the EBPR sludge (Y) to the operating parameters (X1-X6) was Y = 18.59 + 1.16X1 - 3.11X2 - 0.81X3 + 3.79X4 + 9.89X5 + 4.01X6. The model p-value and coefficient of determination (R2) were 0.026 and 99.87%, respectively, which indicates that the model is significant and that the predicted values of orthophosphate release from the EBPR sludge correlate very well with the observed values.
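A compact sketch of an 8-run, two-level screening design and its first-order fit is given below; the design is built from a Hadamard matrix (equivalent to the 8-run Plackett-Burman layout up to row and column signs), and the responses are simulated rather than the published orthophosphate measurements.

```python
import numpy as np
from scipy.linalg import hadamard

# 8-run two-level design for up to 7 factors from a Hadamard matrix
H = hadamard(8)
design = H[:, 1:7]                      # 6 factors in coded -1/+1 units
factor_names = ["pH", "temperature", "stirring", "COD", "VSS", "time"]

# Simulated orthophosphate release (mg P/L); the published responses are not reproduced
rng = np.random.default_rng(5)
y = (18.6 - 3.1 * design[:, 1] + 3.8 * design[:, 3] + 9.9 * design[:, 4]
     + 4.0 * design[:, 5] + rng.normal(0, 1.0, 8))

# First-order (main effects) regression model
Xm = np.column_stack([np.ones(8), design])
beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
print("intercept:", round(beta[0], 2))
for name, b in zip(factor_names, beta[1:]):
    print(f"{name:12s} coded effect {b:+.2f}")
```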

Keywords: anaerobic, operating parameters, orthophosphate release, Plackett-Burman design

Procedia PDF Downloads 275
19727 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate by finding the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate by finding the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). Modeling the error behavior using explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (at most 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and achieved a precision error of less than 1x10^-3 for both outputs in under one minute. To summarize, two processes have been modeled and calibrated in the present work. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
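A stripped-down sketch of the train-then-calibrate loop described above is given below, using scikit-learn's SVR and Gaussian random sampling of candidate inputs; the input names, ranges and target tension are assumptions, not the industrial data.

```python
import numpy as np
from sklearn.svm import SVR

# Step 1: train a model mapping machine inputs to an output (e.g., web tension).
# Step 2: calibrate by sampling Gaussian candidate inputs and keeping the one
#         whose predicted output is closest to a target value.
rng = np.random.default_rng(6)
X = rng.normal([50.0, 1.2, 300.0], [5.0, 0.1, 20.0], size=(500, 3))  # assumed speed, thickness, torque
tension = 0.8 * X[:, 2] - 30 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 2, 500)

model = SVR(kernel="rbf", C=10.0).fit(X, tension)

target = 260.0                                   # desired tension (assumed)
candidates = rng.normal(X.mean(axis=0), X.std(axis=0), size=(5000, 3))
pred = model.predict(candidates)
best = candidates[np.argmin(np.abs(pred - target))]
print("inputs predicted to give tension closest to target:", best.round(2))
```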

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 238
19726 Sustainability of Green Supply Chain for a Steel Industry Using Mixed Linear Programing Model

Authors: Ameen Alawneh

Abstract:

The cost of material management across the supply chain is a major contributor to the overall cost of goods in many companies, in both the manufacturing and service sectors. This fact, combined with fierce competition, requires supply chains to be more efficient and cost effective. It also requires companies to improve the quality of their products and services, increase the effectiveness of supply chain operations, focus on customer needs, and reduce waste and costs across the supply chain. As a heavy industry, steel manufacturing companies in particular are nowadays required to be more environmentally conscious due to their contribution to air, soil and water pollution from emissions and wastes across their supply chains. Steel companies are increasingly looking for methods to reduce or cut costs in their operations and to provide extra value to their customers in order to stay competitive under the current low margins. In this research we develop a green framework model for the sustainability of a steel company supply chain using mixed integer linear programming.

Keywords: supply chain, mixed integer linear programming, heavy industry, water pollution

Procedia PDF Downloads 442
19725 Free Fatty Acid Assessment of Crude Palm Oil Using a Non-Destructive Approach

Authors: Siti Nurhidayah Naqiah Abdull Rani, Herlina Abdul Rahim, Rashidah Ghazali, Noramli Abdul Razak

Abstract:

Near infrared (NIR) spectroscopy has always been of great interest in the food and agriculture industries. The development of prediction models has facilitated the estimation process in recent years. In this study, 110 crude palm oil (CPO) samples were used to build a free fatty acid (FFA) prediction model. 60% of the collected data were used for training purposes and the remaining 40% used for testing. The visible peaks on the NIR spectrum were at 1725 nm and 1760 nm, indicating the existence of the first overtone of C-H bands. Principal component regression (PCR) was applied to the data in order to build this mathematical prediction model. The optimal number of principal components was 10. The results showed R2=0.7147 for the training set and R2=0.6404 for the testing set.
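The principal component regression step can be sketched with scikit-learn as a PCA-plus-linear-regression pipeline using the same 60/40 split and 10 components; the spectra below are random stand-ins for the NIR measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Synthetic stand-in for 110 CPO spectra and their FFA contents
rng = np.random.default_rng(7)
n_samples, n_wavelengths = 110, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))
ffa = spectra[:, :5].sum(axis=1) * 0.3 + rng.normal(0, 0.2, n_samples)   # % FFA stand-in

# 60% training, 40% testing, principal component regression with 10 components
n_train = int(0.6 * n_samples)
pcr = make_pipeline(PCA(n_components=10), LinearRegression())
pcr.fit(spectra[:n_train], ffa[:n_train])

print("train R^2:", round(r2_score(ffa[:n_train], pcr.predict(spectra[:n_train])), 3))
print("test  R^2:", round(r2_score(ffa[n_train:], pcr.predict(spectra[n_train:])), 3))
```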

Keywords: palm oil, fatty acid, NIRS, regression

Procedia PDF Downloads 499
19724 Assessment of Forest Resource Exploitation in the Rural Communities of District Jhelum

Authors: Rubab Zafar Kahlon, Ibtisam Butt

Abstract:

Forest resources are deteriorating and declining around the globe due to unsustainable use and over-exploitation. The present study was an attempt to determine the relationship between human activities, forest resource utilization, extraction methods and practices of forest resource exploitation in the Jhelum district of Pakistan. For this purpose, primary data were collected from 8 villages through a structured questionnaire, tabulated in Microsoft Excel 365, and analyzed by multiple linear regression in SPSS 22. The results revealed that farming, wood cutting, animal husbandry and agro-forestry were the major occupations in the study area. The most commonly used resources included timber (26%), fuelwood (25%) and fodder (19%). Methods used for resource extraction included gathering (49%), plucking (34%), trapping (11%) and cutting (6%). Population growth, increased demand for fuelwood and land conversion were the main reasons behind forest degradation. The multiple linear regression results revealed that forest-based activities, sources of energy production, methods used for wood harvesting and resource extraction, and the use of fuelwood for energy production contributed significantly to extensive forest resource exploitation, with p-value < 0.5, within the study area. The study suggests that effective measures should be taken by the forest department to control the unsustainable use of forest resources through stringent management interventions and awareness campaigns in Jhelum district.

Keywords: forest resource, biodiversity, exploitation, human activities

Procedia PDF Downloads 84
19723 Minimizing the Impact of Covariate Detection Limit in Logistic Regression

Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque

Abstract:

In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left-truncation, because the measuring device used to measure the covariate fails to detect values falling below a certain threshold. In regression analyses, this causes inflated bias and inaccurate mean squared error (MSE) in the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact of the covariate detection limit on the estimators of the parameters of the simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler, and hence easier to implement. It is robust to violation of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.

Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution

Procedia PDF Downloads 229
19722 Airy Wave Packet for a Particle in a Time-Dependant Linear Potential

Authors: M. Berrehail, F. Benamira

Abstract:

We study the quantum motion of a particle in the presence of a time-dependent linear potential using an operator invariant that is quadratic in p and linear in q, within the framework of the Lewis-Riesenfeld invariant. The special invariant operator proposed in this work is shown to be a Hermitian operator which has an Airy wave packet as its eigenfunction.

Keywords: airy wave packet, invariant, time-dependent linear potential, unitary transformation

Procedia PDF Downloads 485
19721 Work Ability Index (WAI) and Its Health-Related Detriments among Iranian Farmers Working in the Small Farm Enterprises

Authors: Akbar Rostamabadi, Adel Mazloumi, Abbas Rahimi Foroushani

Abstract:

This study aimed to determine the Work Ability Index (WAI) and examine the influence of health dimensions and demographic variables on the work ability of Iranian farmers working in small farm enterprises. A cross-sectional study was conducted among 294 male farmers. The WAI and SF-36 questionnaires were used to determine work ability and health status. The effect of demographic variables on the work ability index was investigated with the independent-samples t-test and one-way ANOVA, and multiple linear regression analysis was used to test the association between the mean WAI score and the SF-36 scales. The mean WAI score was 35.1 (SD=10.6). One-way ANOVA revealed a significant relationship between the mean WAI and age. Multiple linear regression analysis showed that work ability was influenced more by the physical scales of the health dimensions, such as physical function, role-physical, and general health, whereas a lower association was found for mental scales such as mental health. The average WAI was at a moderate work ability level for the sample population of farmers in this study. Based on the WAI guidelines, improvement of work ability and identification of the factors affecting it should be considered a priority in interventional programs. Given the influence of the health dimensions on WAI, any intervention program for the preservation and promotion of work ability among the studied farmers should be based on balancing and optimizing the physical and psychosocial work environments, with a special focus on reducing physical workload.

Keywords: farmers, SF-36, Work Ability Index (WAI), Iran

Procedia PDF Downloads 434
19720 A Cheap Mesoporous Silica from Fly Ash as an Adsorbent for Sulfate in Water

Authors: Ximena Castillo, Jaime Pizarro

Abstract:

This research describes the development of a very cheap mesoporous silica material, similar to hexagonal mesoporous silica (HMS), using a silicate extract as precursor. This precursor is obtained from cheap fly ash by an easy calcination process at 850 °C followed by a green extraction with water. The obtained mesoporous fly ash material had a surface area of 282 m2 g-1 and a pore size of 5.7 nm. It was functionalized with ethylenediamine moieties via the well-known SAMMS method, and DRIFT analysis clearly showed the successful functionalization. An excellent adsorbent for sulfate anions was obtained by modifying the solid with copper to form a copper-ethylenediamine complex. The adsorption of sulfate was studied in a batch system (experimental conditions: pH = 8.0; 5 min). The kinetics data were fitted by a pseudo-second order model with a high coefficient of linear regression at different initial concentrations. The adsorption isotherm that best fitted the experimental data was the Freundlich model. The maximum sulfate adsorption capacity of this very cheap fly-ash-based adsorbent was 146.1 mg g-1, 3 times greater than the values reported in the literature and for commercial adsorbent materials.
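The two fits named in the abstract, the pseudo-second-order kinetic model and the Freundlich isotherm, are commonly estimated from their linearised forms; the short sketch below does exactly that on invented data points, not the measured sulfate values.

```python
import numpy as np

# Freundlich isotherm: qe = Kf * Ce^(1/n)  ->  log(qe) = log(Kf) + (1/n) log(Ce)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])      # assumed equilibrium conc. (mg/L)
qe = np.array([22.0, 35.0, 55.0, 88.0, 140.0])    # assumed adsorbed amount (mg/g)
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
print(f"Freundlich: Kf = {10**intercept:.1f}, 1/n = {slope:.2f}")

# Pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe  (linear in t)
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # assumed contact time (min)
qt = np.array([60.0, 90.0, 108.0, 118.0, 125.0])  # assumed uptake (mg/g)
m, b = np.polyfit(t, t / qt, 1)                   # slope = 1/qe, intercept = 1/(k2*qe^2)
qe_fit, k2 = 1 / m, m**2 / b
print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```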

Keywords: fly ash, mesoporous materials, SAMMS, sulfate

Procedia PDF Downloads 169
19719 An Epsilon Hierarchical Fuzzy Twin Support Vector Regression

Authors: Arindam Chaudhuri

Abstract:

The research presents epsilon- hierarchical fuzzy twin support vector regression (epsilon-HFTSVR) based on epsilon-fuzzy twin support vector regression (epsilon-FTSVR) and epsilon-twin support vector regression (epsilon-TSVR). Epsilon-FTSVR is achieved by incorporating trapezoidal fuzzy numbers to epsilon-TSVR which takes care of uncertainty existing in forecasting problems. Epsilon-FTSVR determines a pair of epsilon-insensitive proximal functions by solving two related quadratic programming problems. The structural risk minimization principle is implemented by introducing regularization term in primal problems of epsilon-FTSVR. This yields dual stable positive definite problems which improves regression performance. Epsilon-FTSVR is then reformulated as epsilon-HFTSVR consisting of a set of hierarchical layers each containing epsilon-FTSVR. Experimental results on both synthetic and real datasets reveal that epsilon-HFTSVR has remarkable generalization performance with minimum training time.

Keywords: regression, epsilon-TSVR, epsilon-FTSVR, epsilon-HFTSVR

Procedia PDF Downloads 363
19718 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations

Authors: Nanine Fouche

Abstract:

The Cape Flats is a low-lying sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1m to 1.5m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave tests (CSW), are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60 respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenburg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of inversion routines used in the analysis of the CSW results, showcasing significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure.
When comparing settlement prediction methods, both Stroud’s method (considering strain level indirectly) and the small strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb’s method, which takes no account of strain level in the determination of soil stiffness. Webb’s method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
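A sketch of the regression with a separate overburden term, of the form Vs = a * N60^b * (Pa/σv')^c fitted in log space, is given below; the data pairs are simulated, not the 80 Cape Flats measurements, so the coefficients are illustrative only.

```python
import numpy as np

# Simulated stand-in for Vs-SPT N data pairs with vertical effective stress
rng = np.random.default_rng(8)
n60 = rng.uniform(5, 40, 80)
sigma_v = rng.uniform(30, 200, 80)                      # vertical effective stress, kPa
Pa = 101.3                                              # atmospheric pressure, kPa
vs = 90 * n60**0.25 * (Pa / sigma_v)**-0.15 * rng.lognormal(0, 0.08, 80)

# Log-linear regression: ln(Vs) = ln(a) + b*ln(N60) + c*ln(Pa/sigma_v')
X = np.column_stack([np.ones(80), np.log(n60), np.log(Pa / sigma_v)])
coef, *_ = np.linalg.lstsq(X, np.log(vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"Vs = {a:.1f} * N60^{b:.2f} * (Pa/sigma_v')^{c:.2f}")

resid = np.log(vs) - X @ coef
print("R^2 =", round(1 - resid.var() / np.log(vs).var(), 3))
```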

Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance

Procedia PDF Downloads 171
19717 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 371
19716 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in manufacturing these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (the tundish) acts as a buffer of molten steel for the solidification operation in the mold. The tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold: it facilitates removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers such as dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of the molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of molten steel within the tundish are obtained using OpenFOAM. Subsequently, residence time distribution analysis is performed to obtain the percentage of dead volume in the tundish. The design of experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-term interactions of the independent variables, predicting the dead volume in the tundish as a function of the positions and heights/depths of the dam and weir, is computed using a multiple linear regression model. This polynomial is then used in an optimization framework to obtain the optimal tundish geometry that minimizes dead volume, using sequential quadratic programming.
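The surrogate-plus-SQP idea can be sketched as follows: fit a second-degree polynomial (with an interaction term) to dead-volume values from a small design, then minimise it with scipy's SLSQP solver. The dead-volume numbers here are simulated, not the OpenFOAM/RTD results.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated dead-volume "experiments" over a 5x5 grid of coded design variables
rng = np.random.default_rng(9)
dam_pos, weir_h = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
x1, x2 = dam_pos.ravel(), weir_h.ravel()
dead_vol = (20 - 15 * x1 + 12 * x1**2 - 8 * x2 + 10 * x2**2
            + 4 * x1 * x2 + rng.normal(0, 0.3, x1.size))

# Second-degree polynomial response surface: 1, x1, x2, x1^2, x2^2, x1*x2
Xm = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(Xm, dead_vol, rcond=None)

def surrogate(x):
    u, v = x
    return beta @ np.array([1, u, v, u**2, v**2, u * v])

# Sequential quadratic programming minimisation of the surrogate
res = minimize(surrogate, x0=[0.5, 0.5], method="SLSQP", bounds=[(0, 1), (0, 1)])
print("optimal (dam position, weir height):", res.x.round(3))
print("predicted minimum dead volume (%):", round(res.fun, 2))
```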

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 197
19715 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing the risks related to climate conditions in the agricultural sector is presented. It determines the yearly optimum premium to be paid by a producer in order to reach his required turnover. The model is based both on climatic stability and on the 'soft' responses of usually grown species to average climate variations at the same place, within a safety ball which can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit to the expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using 3 state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the parameters of the state variables and allows pricing of the commodity. Application to beet sugar underlines the precision of the pricer: the agreement between computed results and real-world data is 99.5%. The optimal premium is then deduced, giving the producer a useful bound for negotiating an offer from insurance companies to effectively protect the harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 368
19714 A Machine Learning Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

There has been a need in recent years to predict student academic achievement prior to graduation, in order to assist students in improving their grades, especially those who have struggled in the past. The purpose of this research is to use supervised learning techniques to create a model that predicts student academic progress. Many scholars have developed models that predict student academic achievement based on characteristics including smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to mention a few. These features, as well as the models used, could have misclassified students in terms of their academic achievement. As a prerequisite to predicting whether a student will perform well in the future on related courses, this model is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester. With 96.7 percent accuracy, the model outperformed other classifiers such as Naive Bayes, Support Vector Machine (SVM), Decision Tree, Random Forest, and AdaBoost. The model is offered as a desktop application with user-friendly interfaces for forecasting student academic progress for both teachers and students. As a result, both students and professors are encouraged to use this technique to better predict outcomes.
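A minimal logistic-regression sketch using the feature types named above (previous semester score, attendance, participation, materials covered) is shown below; the records and pass/fail labels are randomly generated, so the accuracy printed has no relation to the reported 96.7 percent.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Simulated student records (stand-ins for the study data)
rng = np.random.default_rng(10)
n = 400
df = pd.DataFrame({
    "prev_score":    rng.uniform(40, 100, n),
    "attendance":    rng.uniform(0.4, 1.0, n),
    "participation": rng.uniform(0, 10, n),
    "materials":     rng.integers(1, 30, n),
})
logit = -12 + 0.12 * df["prev_score"] + 4 * df["attendance"] + 0.2 * df["participation"]
df["pass"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train a logistic regression classifier and report held-out accuracy
X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="pass"), df["pass"],
                                          test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```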

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 101
19713 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article we present a theoretical study of the different bootstrap methods and use the resampling technique in statistical inference to calculate the standard error of the mean of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and to the Pareto model, giving the best approximations.
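A basic resampling sketch of the two quantities mentioned, the bootstrap standard error and a confidence interval for an estimated parameter, is shown below on a sample drawn from a Pareto model; the sample itself is simulated.

```python
import numpy as np

# Example data drawn from a Pareto model (simulated)
rng = np.random.default_rng(11)
sample = rng.pareto(3.0, size=100) + 1

# Bootstrap: resample with replacement from the initial sample many times
B = 5000
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(B)])

print("sample mean       :", round(sample.mean(), 3))
print("bootstrap SE      :", round(boot_means.std(ddof=1), 3))
print("95% percentile CI :", np.percentile(boot_means, [2.5, 97.5]).round(3))
```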

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 376
19712 A Fuzzy Programming Approach for Solving Intuitionistic Fuzzy Linear Fractional Programming Problem

Authors: Sujeet Kumar Singh, Shiv Prasad Yadav

Abstract:

This paper develops an approach for solving intuitionistic fuzzy linear fractional programming (IFLFP) problem where the cost of the objective function, the resources, and the technological coefficients are triangular intuitionistic fuzzy numbers. Here, the IFLFP problem is transformed into an equivalent crisp multi-objective linear fractional programming (MOLFP) problem. By using fuzzy mathematical programming approach the transformed MOLFP problem is reduced into a single objective linear programming (LP) problem. The proposed procedure is illustrated through a numerical example.

Keywords: triangular intuitionistic fuzzy number, linear programming problem, multi objective linear programming problem, fuzzy mathematical programming, membership function

Procedia PDF Downloads 558
19711 Poverty Dynamics in Thailand: Evidence from Household Panel Data

Authors: Nattabhorn Leamcharaskul

Abstract:

This study aims to examine the determining factors of the dynamics of poverty in Thailand by using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized based on their experiences into 5 groups, namely chronically poor, falling into poverty, re-entering poverty, exiting poverty and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. It is found that remittances have a positive impact on a household's economic status, in that they are likely to lower the probability of falling into poverty or remaining trapped in poverty, while they tend to increase the probability of exiting poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household, but also significantly increases the household income per capita of the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and thus increase the chance for households to escape from poverty.
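One of the four techniques listed, quantile regression, can be sketched with statsmodels as follows; the household records and variable names are simulated stand-ins for the Thai panel data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated household records: income per capita, remittances, household size
rng = np.random.default_rng(12)
n = 1000
remit = rng.exponential(2.0, n)                       # assumed remittance index
hh_size = rng.integers(1, 8, n)
income = 5 + 1.2 * remit - 0.4 * hh_size + rng.gumbel(0, 1.5, n)

# Quantile regression at the 25th, 50th and 75th percentiles of income
X = sm.add_constant(pd.DataFrame({"remit": remit, "hh_size": hh_size}))
for q in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(income, X).fit(q=q)
    print(f"q={q}: remittance effect = {fit.params['remit']:.3f}")
```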

Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer

Procedia PDF Downloads 107
19710 Binary Logistic Regression Model in Predicting the Employability of Senior High School Graduates

Authors: Cromwell F. Gopo, Joy L. Picar

Abstract:

This study aimed to predict the employability of senior high school graduates for S.Y. 2018-2019 in the Davao del Norte Division through a quantitative research design using descriptive and predictive approaches on the indicated parameters, namely gender, school type, academics, academic award recipient, skills, values, and strand. The respondents of the study were 33 secondary schools offering senior high school programs, identified through simple random sampling, which yielded 1,530 cases of graduates' secondary data; these were analyzed using frequency, percentage, mean, standard deviation, and binary logistic regression. Results showed that the majority of the senior high school graduates who come from large schools were females. Further, less than half of these graduates received any academic award in any semester. In general, the graduates' performance in academics, skills, and values was proficient. Moreover, less than half of the graduates were unemployed. Those who were employed were either contractual, casual, or part-time workers, dominated by GAS graduates. Further, the predictors of employability were gender and the Information and Communications Technology (ICT) strand, while the remaining variables did not add significantly to the model. The null hypothesis was rejected, as the coefficients of the predictors in the binary logistic regression equation did not take the value of 0. After utilizing the model, it was concluded that Technical-Vocational-Livelihood (TVL) graduates, except ICT, had greater estimates of employability.

Keywords: employability, senior high school graduates, Davao del Norte, Philippines

Procedia PDF Downloads 144
19709 Predictors of Glycaemic Variability and Its Association with Mortality in Critically Ill Patients with or without Diabetes

Authors: Haoming Ma, Guo Yu, Peiru Zhou

Abstract:

Background: Previous studies show that dysglycemia, mostly hyperglycemia, hypoglycemia and glycemic variability (GV), is associated with excess mortality in critically ill patients, especially those without diabetes. Glycemic variability is an increasingly important measure of glucose control in the intensive care unit (ICU) due to this association. However, there are limited data on the relationship between different clinical factors, glycemic variability and clinical outcomes when patients are categorized by their diabetes (DM) status. This retrospective study of 958 ICU patients was conducted to investigate the relationship between GV and outcome in critically ill patients, and further to determine the significant factors that contribute to glycemic variability. Aim: We hypothesize that the factors contributing to mortality and to glycemic variability differ between critically ill patients with and without diabetes. The primary aim of this study was to determine which form of dysglycemia (hyperglycemia, hypoglycemia or glycemic variability) is independently associated with an increase in mortality among critically ill patients in the DM and non-DM groups. Secondary objectives were to further investigate the factors affecting glycemic variability in the two groups. Method: A total of 958 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. Glycemic variability was defined as the coefficient of variation (CV) of blood glucose. The main outcome was death during hospitalization; the secondary outcome was GV. A logistic regression model was used to identify factors associated with mortality, and the relationships between GV and other variables were investigated using linear regression analysis. Results: Information on age, APACHE II score, GV, gender, in-ICU treatment and nutrition was available for 958 subjects. The predictors remaining in the final logistic regression model for mortality differed significantly between the DM and non-DM groups. Glycemic variability was associated with an increase in mortality in both the DM group (odds ratio 1.05; 95% CI 1.03-1.08, p<0.001) and the non-DM group (odds ratio 1.07; 95% CI 1.03-1.11, p=0.002). For critically ill patients without diabetes, factors associated with glycemic variability included APACHE II score (regression coefficient, 95% CI: 0.29, 0.22-0.36, p<0.001), mean BG (0.73, 0.46-1.01, p<0.001), total parenteral nutrition (2.87, 1.57-4.17, p<0.001), serum albumin (-0.18, -0.271 to -0.082, p<0.001), insulin treatment (2.18, 0.81-3.55, p=0.002) and duration of ventilation (0.006, 0.002-1.010, p=0.003). For diabetic patients, APACHE II score (0.203, 0.096-0.310, p<0.001), mean BG (0.503, 0.138-0.869, p=0.007) and duration of diabetes (0.167, 0.033-0.301, p=0.015) remained as independent risk factors for GV. Conclusion: We found that the relation between dysglycemia and mortality differs between the diabetes and non-diabetes groups, and we confirm that GV was associated with excess mortality in both DM and non-DM patients. Furthermore, APACHE II score, mean BG, total parenteral nutrition, serum albumin, insulin treatment and duration of ventilation were significantly associated with an increase in GV in non-DM patients, while APACHE II score, mean BG and duration of diabetes (years) remained as independent risk factors for increased GV in DM patients. These findings provide important context for further prospective trials investigating the effect of different clinical factors in critically ill patients with or without diabetes.

Keywords: diabetes, glycemic variability, predictors, severe disease

Procedia PDF Downloads 183
19708 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method

Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary

Abstract:

Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method involves convolution of the emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 and D2) were obtained first, and these curves were then convoluted to obtain the first and second derivative under Fourier function curves (D1/FF and D2/FF). This new application was used for the resolution of the overlapped emission bands of the degradation products of both drugs, to allow their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines and yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg mL-1 for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. The work combines the advantages of derivative and convolution using discrete Fourier functions with the reliability and efficacy of non-parametric data analysis. The achieved sensitivity, along with the low values of LOD (0.01 and 0.06 µg mL-1) and LOQ (0.04 and 0.2 µg mL-1) for MTX and ASP respectively, obtained by the second derivative under Fourier functions (D2/FF), is promising and guarantees its application for monitoring the two drugs in patients' urine samples.
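Theil's non-parametric regression step can be sketched with scipy's Theil-Sen estimator; the calibration points below are invented values in roughly the MTX linearity range, not the validated data.

```python
import numpy as np
from scipy.stats import theilslopes

# Invented calibration points: convoluted-signal amplitude vs concentration
conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])    # µg/mL (MTX-like range)
signal = np.array([0.9, 2.8, 5.7, 8.4, 11.6, 14.2])      # assumed D2/FF amplitude, arbitrary units

# Theil (Theil-Sen) slope and intercept with a 95% confidence band on the slope
slope, intercept, lo, hi = theilslopes(signal, conc, alpha=0.95)
print(f"Theil slope    : {slope:.2f} (95% CI {lo:.2f} to {hi:.2f})")
print(f"Theil intercept: {intercept:.2f}")

# Predicted concentration of an unknown sample from its signal
unknown_signal = 7.0
print("estimated concentration:", round((unknown_signal - intercept) / slope, 3))
```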

Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil’s method

Procedia PDF Downloads 426