Search results for: stochastic regression
3502 A Hybrid Model Tree and Logistic Regression Model for Prediction of Soil Shear Strength in Clay
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Without a doubt, soil shear strength is the most important property of the soil. The majority of fatal and catastrophic geological accidents are related to shear strength failure of the soil, so its prediction is a matter of high importance. However, acquiring the shear strength is usually a cumbersome task that may require complicated laboratory testing. Predicting it from common, easy-to-obtain soil properties can therefore simplify projects substantially. In this paper, a hybrid model based on the classification and regression tree algorithm and logistic regression is proposed, where each leaf of the tree is an independent regression model. A database of 189 points for clay soil, including moisture content, liquid limit, plastic limit, clay content, and shear strength, is collected. The performance of the developed model is compared with existing models and equations using the root mean squared error and the coefficient of correlation.
Keywords: model tree, CART, logistic regression, soil shear strength
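As an illustrative sketch of the general approach described in this abstract (not the authors' implementation), the following snippet grows a CART partition and fits an independent regression model in each leaf; scikit-learn, the synthetic data, and the use of plain linear leaf models in place of the paper's logistic form are all assumptions.

```python
# Model-tree sketch: CART partition + one regression model per leaf (assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

def fit_model_tree(X, y, max_leaf_nodes=8, min_samples_leaf=15):
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                 min_samples_leaf=min_samples_leaf).fit(X, y)
    leaf_ids = tree.apply(X)                      # leaf index of each training sample
    leaf_models = {}
    for leaf in np.unique(leaf_ids):
        mask = leaf_ids == leaf
        leaf_models[leaf] = LinearRegression().fit(X[mask], y[mask])  # independent leaf model
    return tree, leaf_models

def predict_model_tree(tree, leaf_models, X):
    leaf_ids = tree.apply(X)
    return np.array([leaf_models[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(leaf_ids, X)])

# Toy stand-in for the 189-point clay database (moisture, LL, PL, clay content -> shear strength).
rng = np.random.default_rng(0)
X = rng.uniform(size=(189, 4))
y = 20 + 30 * X[:, 3] - 10 * X[:, 0] + rng.normal(0, 2, 189)
tree, leaf_models = fit_model_tree(X, y)
rmse = np.sqrt(np.mean((y - predict_model_tree(tree, leaf_models, X)) ** 2))
```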
Procedia PDF Downloads 194
3501 A Regression Model for Residual-State Creep Failure
Authors: Deepak Raj Bhat, Ryuichi Yatabe
Abstract:
In this study, a residual-state creep failure model was developed based on the residual-state creep test results of clayey soils. To develop the proposed model, regression analyses were carried out in R. The model results for the failure time (tf) and critical displacement (δc) were compared with experimental results and found to be in close agreement with each other. It is expected that the proposed regression model for residual-state creep failure will be useful for predicting the displacement of different clayey soils in the future.
Keywords: regression model, residual-state creep failure, displacement prediction, clayey soils
Procedia PDF Downloads 405
3500 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets
Authors: O. Poleshchuk, E. Komarov
Abstract:
This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for type-1 fuzzy sets whose membership functions are the lower and upper membership functions of the interval type-2 fuzzy set. These aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets for the developed regression models are considered as piecewise linear functions.
Keywords: interval type-2 fuzzy sets, fuzzy regression, weighted interval
Procedia PDF Downloads 372
3499 Stochastic Response of an Airfoil and Its Effects on Limit Cycle Oscillations’ Behavior under Stall Flutter Regime
Authors: Ketseas Dimitris
Abstract:
In this work, we investigate the effect of noise on a classical two-degree-of-freedom pitch-plunge aeroelastic system. The inlet velocity of the flow is modelled as a stochastically varying parameter using the Ornstein-Uhlenbeck (OU) stochastic process. The system is a 2D airfoil, and the elastic problem is simulated using linear springs. We study the manifestation of Limit Cycle Oscillations (LCO) that correspond to the varying fluid velocity under the dynamic stall regime. We aim to delve into the unexplored facets of the classical pitch-plunge aeroelastic system, seeking a comprehensive understanding of how parametric noise influences the occurrence of LCO and expands the boundaries of its known behavior.
Keywords: aerodynamics, aeroelasticity, computational fluid mechanics, stall flutter, stochastic processes, limit cycle oscillation
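A minimal sketch of the stochastic inlet-velocity model mentioned above: an Ornstein-Uhlenbeck path simulated with the Euler-Maruyama scheme. The parameters (mean velocity, reversion rate, noise intensity) are placeholders rather than values from the study, and the aeroelastic coupling itself is omitted.

```python
# Ornstein-Uhlenbeck inlet velocity: dU = theta*(u_mean - U) dt + sigma dW (Euler-Maruyama).
import numpy as np

def ou_velocity(u_mean=10.0, theta=0.5, sigma=1.0, dt=1e-3, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    u = np.empty(n_steps)
    u[0] = u_mean
    for k in range(1, n_steps):
        du = theta * (u_mean - u[k - 1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        u[k] = u[k - 1] + du
    return u

velocity = ou_velocity()   # stochastic inlet-velocity time series fed to the aeroelastic solver
```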
Procedia PDF Downloads 60
3498 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index
Authors: Shih-Pin Chen, Shih-Syuan You
Abstract:
This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with the situation in which existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted to determine more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with other models, three different criteria are adopted. The results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also has satisfactory and stable performance in terms of the total estimation error based on these three criteria.
Keywords: dissemblance index, forecasting, fuzzy sets, linear regression
Procedia PDF Downloads 360
3497 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid
Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni
Abstract:
In Zambia, recent initiatives by various power operators like ZESCO and CEC, and by consumers like the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies to enable the integration of renewable energy sources, local and bulk generation, and demand response. Thus, for the reliable operation of smart grids, the information infrastructure must be secure and reliable in the face of both failures and cyberattacks. Due to the nature of the systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks exist internationally, such as the NIST framework; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption, so there is a concern about securing the ICS controlling key infrastructure critical to the Zambian economy, as there are few known facts about the true posture. In this paper, we introduce a Stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics like system availability, maintainability, and reliability.
Keywords: anomaly, availability, detection, edge, maintainability, reliability, stochastic
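To illustrate the steady-state-probability step that such metrics build on, here is a hedged sketch using a small discrete Markov model of an edge node; the states and transition matrix are invented for illustration only and are not taken from the SEADS framework.

```python
# Steady-state probabilities of a toy 3-state edge node (Up, Degraded, Down) and a
# simple availability metric derived from them.
import numpy as np

P = np.array([[0.95, 0.04, 0.01],    # Up
              [0.30, 0.60, 0.10],    # Degraded
              [0.50, 0.00, 0.50]])   # Down

# Solve pi P = pi subject to sum(pi) = 1 (least-squares formulation).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]          # node usable while Up or Degraded
print(pi, availability)
```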
Procedia PDF Downloads 108
3496 Image Compression Based on Regression SVM and Biorthogonal Wavelets
Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane
Abstract:
In this paper, we propose an effective method for image compression based on SVM regression (SVR), with three different kernels, and the biorthogonal 2D Discrete Wavelet Transform. SVM regression can learn the dependency from training data and, using fewer training points (support vectors) to represent the original data, eliminate redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients obtained are then trained with different SVM kernels (Gaussian, Polynomial, and Linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratio (PSNR) and compression ratios of several test images, compressed with our algorithm using different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM kernels, run-length, arithmetic coding
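The following sketch shows one way the "wavelet transform + SVR on coefficients" idea could look in code, assuming PyWavelets and scikit-learn; the run-length and arithmetic coding stages are omitted, and fitting each subband against coefficient position is a simplification, not the paper's exact scheme.

```python
# Biorthogonal DWT of an image, then an SVR per subband so that only support vectors
# (and their weights) would need to be stored; toy illustration only.
import numpy as np
import pywt
from sklearn.svm import SVR

def compress_band(band, kernel="rbf", C=10.0, epsilon=0.05):
    idx = np.arange(band.size).reshape(-1, 1)          # coefficient positions
    return SVR(kernel=kernel, C=C, epsilon=epsilon).fit(idx, band.ravel())

image = np.random.rand(64, 64)                          # stand-in for a test image
cA, (cH, cV, cD) = pywt.dwt2(image, "bior2.2")          # biorthogonal wavelet transform
bands = (cA, cH, cV, cD)
models = [compress_band(b) for b in bands]              # "compressed" representation
recon = [m.predict(np.arange(b.size).reshape(-1, 1)).reshape(b.shape)
         for m, b in zip(models, bands)]
restored = pywt.idwt2((recon[0], tuple(recon[1:])), "bior2.2")
```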
Procedia PDF Downloads 380
3495 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the Loess is compared to estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion
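A hedged sketch of a backward-elimination step driven by AIC follows; it uses ordinary least squares from statsmodels as a stand-in, whereas the study computes the criterion from either the additive or the fully nonparametric fit.

```python
# Backward elimination: repeatedly drop the covariate whose removal lowers the AIC,
# stopping when no single removal improves the criterion.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, names):
    selected = list(range(X.shape[1]))
    current_aic = sm.OLS(y, sm.add_constant(X[:, selected])).fit().aic
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for j in list(selected):
            trial = [k for k in selected if k != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < current_aic:                 # dropping j improves the criterion
                current_aic, selected, improved = aic, trial, True
                break
    return [names[k] for k in selected]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + np.sin(X[:, 2]) + rng.normal(0, 0.5, 200)       # only x1 and x3 matter
print(backward_eliminate(X, y, [f"x{j+1}" for j in range(5)]))
```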
Procedia PDF Downloads 263
3494 Application and Verification of Regression Model to Landslide Susceptibility Mapping
Authors: Masood Beheshtirad
Abstract:
Identification of regions with potential for landslide occurrence is one of the basic measures in natural resources management. Different landslide hazard mapping models have been proposed depending on the environmental conditions and goals. In this research, a landslide hazard map was produced using a multiple regression model, and the applicability of this model was investigated in the Baghdasht watershed. The dependent variable is the landslide inventory map, and the independent variables consist of information layers such as geology, slope, aspect, distance from rivers, distance from roads, faults, and land use. To this end, existing landslides were identified and an inventory map was made. The landslide hazard map was then produced based on the multiple regression model. The agreement between the potential hazard classes of this model and the landslide inventory map was assessed in the SPSS environment. The results showed that there is a significant correlation between the potential hazard classes and the area of the landslides. The multiple regression model is therefore suitable for application in the Baghdasht watershed.
Keywords: landslide, mapping, multiple model, regression
Procedia PDF Downloads 322
3493 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, ..., In at different periods (t = 1, 2, ..., n). Firstly, a preprocessing of the multi-temporal satellite images is applied (e.g., radiometric, atmospheric, and geometric corrections). The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil, and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos-Madrid in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.
Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model
Procedia PDF Downloads 426
3492 Forecasting Silver Commodity Prices Using Geometric Brownian Motion: A Stochastic Approach
Authors: Sina Dehghani, Zhikang Rong
Abstract:
Historically, a variety of approaches have been taken to forecast commodity prices due to the significant implications of these values for the global economy. An accurate forecasting tool for a valuable commodity would significantly benefit investors and governmental agencies. Silver, in particular, has grown significantly as a commodity in recent years due to its use in healthcare and technology. This manuscript uses the Geometric Brownian Motion predictive model to forecast silver commodity prices over multiple 3-year periods. The results of the study indicate that the model has several limitations, particularly its inability to work effectively over longer periods of time, but that it remained highly effective over shorter time frames. This study sets a baseline for silver commodity forecasting with GBM, and the model could be further strengthened with refinement.
Keywords: geometric Brownian motion, commodity, risk management, volatility, stochastic behavior, price forecasting
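A minimal Geometric Brownian Motion sketch for silver-price paths is given below; the drift, volatility, and starting price are placeholders rather than the manuscript's estimates.

```python
# Monte Carlo GBM paths: S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t).
import numpy as np

def gbm_paths(s0=25.0, mu=0.05, sigma=0.25, years=3, steps_per_year=252,
              n_paths=1000, seed=1):
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n = years * steps_per_year
    z = rng.standard_normal((n_paths, n))
    increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(increments, axis=1))

paths = gbm_paths()
print(paths[:, -1].mean())        # Monte Carlo estimate of the price after 3 years
```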
Procedia PDF Downloads 20
3491 Predicting Bridge Pier Scour Depth with SVM
Authors: Arun Goel
Abstract:
Prediction of maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data; most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around bridge piers in dimensional and non-dimensional form using linear regression, simple regression, and SVM (Poly and Rbf) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM (Poly and Rbf) based modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study, based on the non-dimensional form of bridge pier scour, indicate an improvement in the performance of SVM (Poly and Rbf) compared with the dimensional form of scour.
Keywords: modeling, pier scour, regression, prediction, SVM (Poly and Rbf kernels)
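The sketch below contrasts SVR with polynomial and RBF kernels against plain linear regression for scour-depth prediction; the feature set and the toy scour relation are assumptions made for illustration, not the study's data.

```python
# Compare linear regression with poly/RBF SVR on a synthetic scour-depth dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: flow depth, velocity, pier width, median grain size (all hypothetical).
X = rng.uniform([0.1, 0.2, 0.05, 0.2], [0.5, 1.5, 0.3, 2.0], size=(150, 4))
y = 1.2 * X[:, 2] ** 0.65 * X[:, 1] ** 0.4 + 0.02 * rng.standard_normal(150)  # toy scour relation

models = {
    "linear": LinearRegression(),
    "svr_poly": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2, C=10.0)),
    "svr_rbf": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale")),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
    print(name, rmse)
```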
Procedia PDF Downloads 450
3490 Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
Authors: Abdullah A. AlShaher
Abstract:
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients that describe the variations for each shape class; hence, a least-squares method is used to estimate them. We then proceed by training these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least landmark displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Keywords: character recognition, regression curves, handwritten Arabic letters, expectation maximization algorithm
Procedia PDF Downloads 143
3489 Reminiscence Therapy for Alzheimer’s Disease Restrained on Logistic Regression Based Linear Bootstrap Aggregating
Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Xianpei Li, Yanmin Yuan, Tracy Lin Huan
Abstract:
Researchers are conducting extensive research into the inherited features of Alzheimer’s disease and probable consistent therapies. In Alzheimer’s, memories are lost in reverse order: memories formed recently are more transitory than older ones. Reminiscence therapy involves the discussion of past activities, events, and experiences with another individual or group of people, frequently with the help of tangible prompts such as photographs, household and other familiar items from the past, music, and archived recordings. In this manuscript, the competence of reminiscence therapy for Alzheimer’s disease is measured using logistic regression based linear bootstrap aggregating. Logistic regression is used to predict the experiential features of the patient’s memory through various therapies. Linear bootstrap aggregating shows better stability and accuracy of reminiscence therapy used in statistical classification and regression of memories related to validation therapy, supportive psychotherapy, sensory integration, and simulated presence therapy.
Keywords: Alzheimer’s disease, linear bootstrap aggregating, logistic regression, reminiscence therapy
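A hedged sketch of bootstrap aggregating around a logistic regression base learner follows, using scikit-learn's BaggingClassifier; the feature matrix and labels are synthetic stand-ins, not the clinical data described above.

```python
# Bagged logistic regression: bootstrap resamples, one logistic model per resample,
# majority vote at prediction time.
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=12, random_state=0)  # stand-in data

# Note: scikit-learn >= 1.2 uses `estimator=`; older versions use `base_estimator=`.
bagged_lr = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                              n_estimators=50, max_samples=0.8, random_state=0)
print(cross_val_score(bagged_lr, X, y, cv=5).mean())
```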
Procedia PDF Downloads 307
3488 Designing Emergency Response Network for Rail Hazmat Shipments
Authors: Ali Vaezi, Jyotirmoy Dalal, Manish Verma
Abstract:
The railroad is one of the primary transportation modes for hazardous materials (hazmat) shipments in North America. Installing an emergency response network capable of providing a commensurate response is one of the primary levers to contain (or mitigate) the adverse consequences of rail hazmat incidents. To this end, we propose a two-stage stochastic program to determine the location of response facilities and the equipment packages to be stockpiled at each facility. The raw input data collected from publicly available reports were processed, fed into the proposed optimization program, and then tested on a realistic railroad network in Ontario (Canada). From the resulting analyses, we conclude that decisions based only on empirical datasets would undermine the effectiveness of the resulting network; coverage can be improved by redistributing equipment in the network, purchasing equipment with higher containment capacity, and making use of a disutility multiplier factor.
Keywords: hazmat, rail network, stochastic programming, emergency response
Procedia PDF Downloads 180
3487 Predicting Survival in Cancer: How Cox Regression Model Compares to Artificial Neural Networks?
Authors: Dalia Rimawi, Walid Salameh, Amal Al-Omari, Hadeel AbdelKhaleq
Abstract:
Prediction of the survival time of patients with cancer is a core factor that influences oncologists’ decisions in different aspects, such as offered treatment plans, patients’ quality of life, and medication development. For a long time, proportional hazards Cox regression (ph. Cox) was, and still is, the best-known statistical method to predict survival outcomes. However, due to the revolution in data science, new prediction models have been employed and have proved to be more flexible, providing higher accuracy in this type of study. The artificial neural network is one of those models that is suitable for handling time-to-event prediction. In this study, we aim to compare ph. Cox regression with the artificial neural network method with respect to data handling and the accuracy of each model.
Keywords: Cox regression, neural networks, survival, cancer
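For orientation, a minimal Cox proportional-hazards fit is sketched below, assuming the lifelines package; the column names and toy data are hypothetical, and an ANN-based alternative would model the same time-to-event data with a survival network instead.

```python
# Cox proportional-hazards fit on a toy right-censored dataset.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration": [5, 12, 30, 44, 60, 8, 21, 37],   # months of follow-up
    "event":    [1, 1, 0, 1, 0, 1, 0, 1],         # 1 = death observed, 0 = censored
    "age":      [62, 70, 55, 66, 48, 73, 59, 68],
    "stage":    [2, 3, 1, 3, 1, 4, 2, 3],
})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()                                # hazard ratios for age and stage
```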
Procedia PDF Downloads 198
3486 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with a covariate. Covariates are incorporated into the Weibull model. Under this regression model, the parameters of the covariate, the shape parameter, the survival function, and the hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to compare the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
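A sketch of the right-censored Weibull regression likelihood, maximised numerically with SciPy, is given below; the parameterisation (scale = exp(x'β)) is one common choice and may differ from the paper's.

```python
# Negative log-likelihood for right-censored Weibull regression:
# events contribute log f(t), censored observations contribute log S(t).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, delta, X):
    k = np.exp(params[0])                 # shape parameter (kept positive)
    beta = params[1:]
    lam = np.exp(X @ beta)                # covariate-dependent scale
    z = (t / lam) ** k
    log_f = np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam)) - z
    log_S = -z
    return -np.sum(delta * log_f + (1 - delta) * log_S)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])        # intercept + one covariate
t_true = rng.weibull(1.5, 200) * np.exp(X @ np.array([1.0, 0.5]))
c = rng.uniform(0, 6, 200)                                        # censoring times
t = np.minimum(t_true, c)
delta = (t_true <= c).astype(float)                               # 1 = event observed
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(t, delta, X), method="Nelder-Mead")
shape_hat, beta_hat = np.exp(fit.x[0]), fit.x[1:]
```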
Procedia PDF Downloads 439
3485 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins
Authors: Navab Karimi, Tohid Alizadeh
Abstract:
An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the co-occurrence matrix, and the local binary pattern (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). As a result, the GLCM feature set was found to have the highest accuracy (92.4%) among the other sets. Subsequently, multiple feature combinations from the previous stage were fed into a second regression (linear regression) to increase accuracy, wherein a combination of 16 features was found to be optimal. Finally, a Support Vector Machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively.
Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan
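The following is an illustrative sketch of the GLCM-feature-to-classifier part of such a pipeline, assuming scikit-image and scikit-learn; the patch size, GLCM settings, and labels are assumptions, and the genetic-algorithm and regression ranking stages are omitted.

```python
# Grey-level co-occurrence features per image patch, then an SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # spelled "greyco..." in older scikit-image
from sklearn.svm import SVC

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)   # stand-in raisin patches
labels = rng.integers(0, 2, size=40)                                 # pure vs. impure (toy labels)
features = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf", C=10.0).fit(features, labels)
```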
Procedia PDF Downloads 72
3484 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method
Authors: Luh Eka Suryani, Purhadi
Abstract:
Poisson regression is a non-linear regression model with a response variable in the form of count data that follows a Poisson distribution. Modeling a pair of count data that show high correlation can be done with bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed in this way. Poisson regression assumes equidispersion, where the mean and variance are equal. However, actual count data may have a variance greater or smaller than the mean (overdispersion and underdispersion). Violations of this assumption can be overcome by applying Generalized Poisson Regression. The characteristics of each regency can affect the number of cases that occur; this is addressed by a spatial analysis called geographically weighted regression. This study analyzes the number of infant deaths and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the number of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean households with healthy behavior, and married women whose first marriage was under the age of 18.
Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion
Procedia PDF Downloads 158
3483 Determining the Causality Variables in Female Genital Mutilation: A Factor Screening Approach
Authors: Ekele Alih, Enejo Jalija
Abstract:
Female Genital Mutilation (FGM) comprises three types, namely clitoridectomy, excision, and infibulation. In this study, we examine the factors responsible for FGM in order to identify the causality variables in a logistic regression approach. Using the results of the survey conducted by the Public Health Division, Nigeria Institute of Medical Research, Yaba, Lagos State, the tau statistic, τ, was used to screen 9 factors that cause FGM in order to select a few of the predictors before a multiple regression equation is obtained. The need for this may be that the sample size cannot sustain a regression with all the predictors, or to avoid multicollinearity. A total of 300 respondents, comprising 150 adult males and 150 adult females, were selected for the household survey based on the multi-stage sampling procedure.
Keywords: female genital mutilation, logistic regression, tau statistic, African society
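Assuming the tau statistic refers to a Kendall-type rank association measure, a sketch of the screening step followed by logistic regression on the retained factors could look as follows; the significance threshold and the synthetic data are illustrative, not the survey's.

```python
# Screen candidate factors by Kendall's tau against the binary outcome, then fit
# a logistic regression on the factors that survive screening.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import LogisticRegression

def screen_by_tau(X, y, alpha=0.05):
    keep = []
    for j in range(X.shape[1]):
        tau, p_value = kendalltau(X[:, j], y)
        if p_value < alpha:                      # factor is associated with the outcome
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(300, 9)).astype(float)                   # 9 candidate factors
y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 300) > 3).astype(int)       # toy outcome
kept = screen_by_tau(X, y)
model = LogisticRegression(max_iter=1000).fit(X[:, kept], y)
```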
Procedia PDF Downloads 260
3482 A Monte Carlo Fuzzy Logistic Regression Framework against Imbalance and Separation
Authors: Georgios Charizanos, Haydar Demirhan, Duygu Icen
Abstract:
Two of the most impactful issues in classical logistic regression are class imbalance and complete separation. These can result in model predictions heavily leaning towards the imbalanced class on the binary response variable or over-fitting issues. Fuzzy methodology offers key solutions for handling these problems. However, most studies propose the transformation of the binary responses into a continuous format limited within [0,1]. This is called the possibilistic approach within fuzzy logistic regression. Following this approach is more aligned with straightforward regression since a logit-link function is not utilized, and fuzzy probabilities are not generated. In contrast, we propose a method of fuzzifying binary response variables that allows for the use of the logit-link function; hence, a probabilistic fuzzy logistic regression model with the Monte Carlo method. The fuzzy probabilities are then classified by selecting a fuzzy threshold. Different combinations of fuzzy and crisp input, output, and coefficients are explored, aiming to understand which of these perform better under different conditions of imbalance and separation. We conduct numerical experiments using both synthetic and real datasets to demonstrate the performance of the fuzzy logistic regression framework against seven crisp machine learning methods. The proposed framework shows better performance irrespective of the degree of imbalance and presence of separation in the data, while the considered machine learning methods are significantly impacted.
Keywords: fuzzy logistic regression, fuzzy, logistic, machine learning
Procedia PDF Downloads 71
3481 Global Direct Search Optimization of a Tuned Liquid Column Damper Subject to Stochastic Load
Authors: Mansour H. Alkmim, Adriano T. Fabro, Marcus V. G. De Morais
Abstract:
In this paper, a global direct search optimization algorithm to reduce the vibration of a tuned liquid column damper (TLCD), a class of passive structural control device, is presented. The objective is to find optimized parameters for the TLCD under stochastic load from different wind power spectral densities. A verification is made considering the analytical solution of an undamped primary system under white noise excitation. Finally, a numerical example considering a simplified wind turbine model is given to illustrate the efficacy of the TLCD. Results from the random vibration analysis are shown for four types of random wind excitation models, where the response PSDs obtained showed good vibration attenuation.
Keywords: generalized pattern search, parameter optimization, random vibration analysis, vibration suppression
Procedia PDF Downloads 274
3480 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis
Authors: Tefera Kebede Leyu
Abstract:
The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by IRLI between February and April 2014 for the year 2013 (retroactive). Data on 1,905 sample households (754 in the intervention group and 1,151 in the control group) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not efficient and operate below their potential frontiers, i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of a selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it more or less remained the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to the intervention counterparts. Among the explanatory variables, the study found that farmer’s age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer’s education, and row planting are important contributing factors for participation decisions and hence the technical efficiency of farmers in the study areas. We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, the provision of credit services, the establishment of farmers’ training centers, and the adoption of modern farm technologies. Finally, we recommend further research to deal with this kind of methodological framework using a panel data set to test whether technical efficiency starts to increase or decrease with the length of time that farmers participate in development programs.
Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step
Procedia PDF Downloads 74
3479 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco
Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui
Abstract:
Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for the analysis of landslide susceptibility mapping in Oudka, Morocco, using a geographic information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology, and lithology, the eight factors related to landslides, namely elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology map, and the Normalized Difference Vegetation Index (NDVI), were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two mentioned methods. Before the calculation, the database was divided into two parts, the first for the formation of the model and the second for its validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
Keywords: landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate
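A sketch of the success-rate/prediction-rate check for a logistic-regression susceptibility model is shown below, with synthetic stand-ins for the raster-derived factors; the MarSpline comparison would require a separate MARS implementation and is not included.

```python
# Fit LR on a training split, then report AUC on the training set (success rate)
# and on the held-out set (prediction rate).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                    # elevation, slope, aspect, distances, NDVI, ...
y = (X @ rng.normal(size=8) + rng.normal(0, 1, 2000) > 0).astype(int)   # landslide / no landslide

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
success_auc = roc_auc_score(y_train, lr.predict_proba(X_train)[:, 1])   # success rate
prediction_auc = roc_auc_score(y_test, lr.predict_proba(X_test)[:, 1])  # prediction rate
print(success_auc, prediction_auc)
```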
Procedia PDF Downloads 186
3478 Efficiency of Secondary Schools by ICT Intervention in Sylhet Division of Bangladesh
Authors: Azizul Baten, Kamrul Hossain, Abdullah-Al-Zabir
Abstract:
The objective of this study is to develop an appropriate stochastic frontier model of secondary school efficiency under ICT intervention and to examine the impact of ICT challenges on secondary school efficiency in the Sylhet division of Bangladesh using stochastic frontier analysis. The Translog stochastic frontier model was found to be more appropriate than the Cobb-Douglas model for secondary school efficiency under ICT intervention. Based on the results of the Cobb-Douglas model, the coefficients of the number of teachers, the number of students, and teaching ability had a positive effect on increasing the level of efficiency, indicating that these are related to technical efficiency. Regarding the inefficiency effects, for both the Cobb-Douglas and Translog models, the coefficient of the ICT lab decreased secondary school inefficiency, but online classes in school were found to increase the level of inefficiency. The coefficient of teachers' preference for ICT tools like multimedia projectors played a contributing role in decreasing secondary school inefficiency in the Sylhet division of Bangladesh. The interaction effects of the number of teachers and the number of classrooms, the number of students and the number of classrooms, the number of students and teaching ability, and the number of classrooms and the teaching ability of the teachers were positive, and these have a positive impact on increasing secondary school efficiency. The overall mean efficiency of urban secondary schools was found to be 84.66% for the Translog model and 83.63% for the Cobb-Douglas model. The overall mean efficiency of rural secondary schools was 80.98% for the Translog model and 81.24% for the Cobb-Douglas model. So, urban secondary schools performed better than rural secondary schools in the Sylhet division. It is observed from the results of the Tobit model that the teacher-student ratio had a positive influence on secondary school efficiency. Teaching experience of 1 to 5 years and above 10 years, MPO-type schools, and the conventional teaching method had a negative and significant influence on secondary school efficiency. The estimated value of σ-square (0.0625) was different from zero, indicating a good fit. The value of γ (0.9872) was positive and can be interpreted as follows: 98.72 percent of the random variation in secondary school outcomes is due to inefficiency.
Keywords: efficiency, secondary schools, ICT, stochastic frontier analysis
Procedia PDF Downloads 149
3477 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models
Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini
Abstract:
The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using an unbounded estimator, namely the least squares (LS) estimator, which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion
Procedia PDF Downloads 138
3476 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The propensity score matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms of the covariates is the most commonly used technique in many studies. The logistic regression model is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been a question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and compare its performance with that of the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units) to examine which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balance, generalized additive model, logistic regression, non-linearity, propensity score matching
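A hedged sketch of a GAM-based propensity score next to a plain logistic model follows, assuming the pygam package; the smoothing terms and the simulated treatment-assignment mechanism are illustrative only.

```python
# Propensity scores from a logistic GAM (smooth terms per confounder) versus
# a plain linear logistic regression, on simulated non-linear assignment data.
import numpy as np
from pygam import LogisticGAM, s
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                                   # two confounders
true_logit = 0.8 * np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.5    # non-linear assignment
treat = (rng.uniform(size=1000) < 1 / (1 + np.exp(-true_logit))).astype(int)

gam = LogisticGAM(s(0) + s(1)).fit(X, treat)
ps_gam = gam.predict_proba(X)                                    # smoothed propensity scores
ps_logit = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
```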
Procedia PDF Downloads 365
3475 A Comparison of Neural Network and DOE-Regression Analysis for Predicting Resource Consumption of Manufacturing Processes
Authors: Frank Kuebler, Rolf Steinhilper
Abstract:
Artificial neural networks (ANN) as well as Design of Experiments (DOE) based regression analysis (RA) are mainly used for modeling complex systems. Both methodologies are commonly applied in process and quality control of manufacturing processes. Because resource efficiency has become a critical concern for manufacturing companies, these models need to be extended to predict the resource consumption of manufacturing processes. This paper describes an approach that uses neural networks as well as DOE-based regression analysis for predicting the resource consumption of manufacturing processes and gives a comparison of the achievable results based on an industrial case study of a turning process.
Keywords: artificial neural network, design of experiments, regression analysis, resource efficiency, manufacturing process
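The sketch below contrasts a small neural network with a DOE-style second-order (response-surface) regression for predicting resource consumption of a turning process; the process features and response are synthetic placeholders, not the case-study data.

```python
# ANN versus second-order polynomial (response-surface) regression on toy turning data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform([50, 0.1, 0.5], [300, 0.4, 3.0], size=(120, 3))   # cutting speed, feed, depth of cut
y = 0.8 * X[:, 0] * X[:, 1] * X[:, 2] + 5 * X[:, 1] ** 2 + rng.normal(0, 2, 120)  # energy per part

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
doe_rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())         # DOE-style model
for name, model in [("ANN", ann), ("DOE regression", doe_rsm)]:
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
    print(name, rmse)
```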
Procedia PDF Downloads 523
3474 Logistic Regression Model versus Additive Model for Recurrent Event Data
Authors: Entisar A. Elgmati
Abstract:
Recurrent infant diarrhea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives results reasonably similar to those of the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Also, martingale residuals, which have been used to judge the goodness of fit of the additive model, are shown to be useful for judging the goodness of fit of the logistic model.
Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event
Procedia PDF Downloads 633
3473 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing the policy conditions used to estimate the resulting losses. The events module comprises events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (Steel-Reinforced Concrete), and high-rise buildings.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
Procedia PDF Downloads 268