Search results for: generalized regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3774

3564 Efficient Model Selection in Linear and Non-Linear Quantile Regression by Cross-Validation

Authors: Yoonsuh Jung, Steven N. MacEachern

Abstract:

The check loss function is used to define quantile regression. In cross-validation, it is also employed as the validation function when the underlying truth is unknown. However, our empirical study indicates that validation with the check loss often leads to choosing over-fitted models. In this work, we suggest a modified, L2-adjusted check loss which rounds off the sharp corner at the middle of the check loss and thereby guards against over-fitting to some extent. Through various simulation settings of linear and non-linear regression, the improvement of the check loss by the L2 adjustment is empirically examined. The adjustment is devised to shrink to zero as the sample size grows.
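As an illustration, below is a minimal sketch of the idea, assuming a quadratic rounding of the check loss inside a small window h around zero; the exact form of the authors' L2 adjustment is not specified in the abstract, so the smoothing used here is only an assumption.

```python
import numpy as np

def check_loss(r, tau):
    """Standard quantile-regression check loss rho_tau(r)."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def l2_adjusted_check_loss(r, tau, h=0.1):
    """Hypothetical L2-adjusted check loss: quadratic near zero (|r| <= h)
    to round the sharp corner, linear elsewhere; continuous at |r| = h.
    The window h would shrink to zero as the sample size grows."""
    w = np.where(r >= 0, tau, 1 - tau)          # slope of the check loss on each side
    quad = w * r**2 / (2 * h)                   # rounded part near the corner
    lin = check_loss(r, tau) - w * h / 2        # shifted linear part, matching at |r| = h
    return np.where(np.abs(r) <= h, quad, lin)

# Example: compare validation scores of residuals from a candidate fit
residuals = np.random.default_rng(0).normal(size=100)
print(check_loss(residuals, 0.5).mean(), l2_adjusted_check_loss(residuals, 0.5).mean())
```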

Keywords: cross-validation, model selection, quantile regression, tuning parameter selection

Procedia PDF Downloads 407
3563 Applying Serious Game Design Frameworks to Existing Games for Integration of Custom Learning Objectives

Authors: Jonathan D. Moore, Mark G. Reith, David S. Long

Abstract:

Serious games (SGs) have been shown to be an effective teaching tool in many contexts. Because of the success of SGs, several design frameworks have been created to expedite the process of making original serious games to teach specific learning objectives (LOs). Even with these frameworks, the time required to create a custom SG from conception to implementation can range from months to years. Furthermore, it is even more difficult to design a game framework that allows an instructor to create customized game variants supporting multiple LOs within the same field. This paper proposes a refactoring methodology to apply the theoretical principles from well-established design frameworks to a pre-existing serious game. The expected result is a generalized game that can be quickly customized to teach LOs not originally targeted by the game. This methodology begins by describing the general components in a game, then uses a combination of two SG design frameworks to extract the teaching elements present in the game. The identified teaching elements are then used as the theoretical basis to determine the range of LOs that can be taught by the game. This paper evaluates the proposed methodology by presenting a case study of refactoring the serious game Battlespace Next (BSN) to teach joint military capabilities. The range of LOs that can be taught by the generalized BSN is identified, and examples of creating custom LOs are given. Survey results from users of the generalized game are also provided. Lastly, the expected impact of this work is discussed and a road map for future work and evaluation is presented.

Keywords: serious games, learning objectives, game design, learning theory, game framework

Procedia PDF Downloads 76
3562 Bivariate Generalization of q-α-Bernstein Polynomials

Authors: Tarul Garg, P. N. Agrawal

Abstract:

We propose to define the q-analogue of the α-Bernstein Kantorovich operators and then introduce the q-bivariate generalization of these operators to study the approximation of functions of two variables. We obtain the rate of convergence of these bivariate operators by means of the total modulus of continuity, the partial modulus of continuity, and Peetre's K-functional for continuous functions. Further, in order to study the approximation of functions of two variables in a space larger than the space of continuous functions, i.e., the Bögel space, the GBS (Generalized Boolean Sum) of the q-bivariate operators is considered, and the degree of approximation is discussed for Bögel-continuous and Bögel-differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.

Keywords: Bögel continuous, Bögel differentiable, generalized Boolean sum, K-functional, mixed modulus of smoothness

Procedia PDF Downloads 354
3561 Instability Index Method and Logistic Regression to Assess Landslide Susceptibility in County Route 89, Taiwan

Authors: Y. H. Wu, Ji-Yuan Lin, Yu-Ming Liou

Abstract:

This study aims to construct the landslide susceptibility map of County Route 89 in Ren-Ai Township, Nantou County, using the Instability Index Method and logistic regression. Seven susceptibility factors, namely slope angle, aspect, elevation, distance to fold, distance to river, distance to road, and accumulated rainfall, were obtained from GIS based on the Typhoon Toraji landslide areas identified by the Industrial Technology Research Institute in 2001. The landslide percentage of each factor was calculated to derive its weight, and each grid cell was graded by means of the Instability Index Method. Landslide susceptibility was classified into four grades: high, medium-high, medium-low, and low, in order to determine the advantages and disadvantages of the two models. The precision of the models is verified by the classification error matrix and the SRC curve. The results suggest that the logistic regression model is preferable to the Instability Index Method for assessing landslide susceptibility, and it is suitable for future landslide prediction and precaution in this area.
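As a rough illustration of the logistic regression step, the sketch below fits a susceptibility model on hypothetical grid-cell data with the seven factors named above; the variable names and simulated values are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical grid-cell data: one row per cell, seven susceptibility factors
# and a binary label (1 = landslide observed, 0 = stable).
rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "slope_angle": rng.uniform(0, 60, n),
    "aspect": rng.uniform(0, 360, n),
    "elevation": rng.uniform(500, 3000, n),
    "dist_to_fold": rng.uniform(0, 5000, n),
    "dist_to_river": rng.uniform(0, 2000, n),
    "dist_to_road": rng.uniform(0, 2000, n),
    "accum_rainfall": rng.uniform(100, 900, n),
})
y = (rng.random(n) < 1 / (1 + np.exp(-(0.05 * X["slope_angle"] - 0.001 * X["dist_to_river"])))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]                   # susceptibility score per cell

print(confusion_matrix(y_test, model.predict(X_test)))     # classification error matrix
print("AUC:", roc_auc_score(y_test, prob))
# Scores can then be binned into four grades: high, medium-high, medium-low, low.
```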

Keywords: instability index method, logistic regression, landslide susceptibility, SRC curve

Procedia PDF Downloads 259
3560 Financial Inclusion and Modernization: Secure Energy Performance in Shanghai Cooperation Organization

Authors: Shama Urooj

Abstract:

The present work investigates the relationship among financial inclusion, modernization, and energy performance in SCO member countries during the years 2011–2021. PCA is used to create composite indexes of financial inclusion, modernization, and energy performance. Heteroscedasticity-consistent panel regression models are used to examine the relationship among the variables. The findings indicate that financial inclusion (FI) and modernization, along with increased FDI, all appear to contribute to energy performance in the SCO member countries, whereas per capita GDP has a negative impact on energy performance. These results are consistent with the robust results obtained by applying different econometric models. Feasible Generalized Least Squares (FGLS) estimation is also used to check the consistency of the main model results. This research concludes that there has been no policy coherence in SCO member countries regarding the coordination of growing financial inclusion and modernization for energy sustainability in recent years. In order to improve energy performance alongside modern development, policies regarding financial inclusion and modernization need to be integrated at both the national and international levels.
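For the composite-index step, a minimal sketch of a PCA-based index is shown below, assuming a small set of hypothetical financial-inclusion indicators; the indicator names and data are illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical country-year panel of financial-inclusion indicators
# (e.g., ATMs per 100k adults, bank branches, account ownership, borrowers).
rng = np.random.default_rng(1)
panel = pd.DataFrame(rng.normal(size=(80, 4)),
                     columns=["atms", "branches", "accounts", "borrowers"])

# Composite index = first principal component of the standardized indicators.
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(panel))
panel["fi_index"] = scores.ravel()      # financial inclusion index, one value per country-year
print(panel["fi_index"].describe())
```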

Keywords: financial inclusion, energy performance, modernization, technological development, SCO

Procedia PDF Downloads 45
3559 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is formulated as a statistical decision model of an agent trying to optimize its decisions while improving its information at the same time. Several different algorithms and models, together with their applications, have been proposed for this problem. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
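For orientation, a minimal bandit simulation is sketched below; it uses a simple epsilon-greedy agent and tracks cumulative regret. The regret-regression and Q-learning methods compared in the paper are not reproduced here.

```python
import numpy as np

def epsilon_greedy_bandit(true_means, n_steps=5000, eps=0.1, seed=0):
    """Simulate an epsilon-greedy agent and track its cumulative regret
    (gap between the best arm's expected reward and the arms actually pulled)."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    estimates = np.zeros(k)
    regret = np.zeros(n_steps)
    best = max(true_means)
    for t in range(n_steps):
        arm = rng.integers(k) if rng.random() < eps else int(np.argmax(estimates))
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean update
        regret[t] = (regret[t - 1] if t else 0.0) + best - true_means[arm]
    return regret

print(epsilon_greedy_bandit([0.1, 0.5, 0.9])[-1])   # final cumulative regret
```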

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 423
3558 The Strengths and Limitations of the Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis, or Multiple Regression Models

Authors: Jihye Jeon

Abstract:

This paper analyzes the conceptual frameworks of three statistical methods: multiple regression, path analysis, and structural equation modeling. When establishing a research model for the statistical modeling of complex social phenomena, it is important to know the strengths and limitations of these three statistical models. This study explores the characteristics, strengths, and limitations of each approach and suggests some strategies for accurately explaining or predicting the causal relationships among variables. In particular, common mistakes in research modeling in studies of depression and mental health are discussed.

Keywords: multiple regression, path analysis, structural equation models, statistical modeling, social and psychological phenomena

Procedia PDF Downloads 601
3557 QSRR Analysis of 17-Picolyl and 17-Picolinylidene Androstane Derivatives Based on Partial Least Squares and Principal Component Regression

Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković

Abstract:

There are several methods for determining the lipophilicity of biologically active compounds; however, chromatography has been shown to be a very suitable method for this purpose. Chromatographic (C18-RP-HPLC) analysis of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives was carried out. The obtained retention indices (logk, methanol (90%) / water (10%)) were correlated with calculated physicochemical and lipophilicity descriptors. The QSRR analysis was carried out applying principal component regression (PCR) and partial least squares regression (PLS). The PCR and PLS models were selected on the basis of the highest explained variance and the lowest root mean square error of cross-validation. The obtained PCR and PLS models successfully correlate the calculated molecular descriptors with the logk parameter, indicating the significance of the lipophilicity of the compounds in the chromatographic process. On the basis of the obtained results, it can be concluded that the obtained logk parameters of the analyzed androstane derivatives can be considered as their chromatographic lipophilicity. These results are part of project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1105.
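A minimal sketch of the PCR and PLS workflow is shown below, assuming a hypothetical descriptor matrix and logk response; the number of components would be chosen by minimizing the cross-validated RMSE, as described in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical data: 24 compounds x a handful of molecular descriptors, response = logk.
rng = np.random.default_rng(7)
X = rng.normal(size=(24, 6))                       # calculated descriptors
logk = X @ np.array([0.8, -0.3, 0.1, 0.0, 0.0, 0.0]) + rng.normal(0, 0.1, 24)

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pls = make_pipeline(StandardScaler(), PLSRegression(n_components=2))

# Compare cross-validated RMSE (RMSECV) of the two latent-variable regressions.
for name, model in [("PCR", pcr), ("PLS", pls)]:
    rmse_cv = -cross_val_score(model, X, logk, cv=5,
                               scoring="neg_root_mean_squared_error").mean()
    print(name, "RMSECV:", round(rmse_cv, 3))
```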

Keywords: androstane derivatives, chromatography, molecular structure, principal component regression, partial least squares regression

Procedia PDF Downloads 240
3556 Minimizing the Impact of Covariate Detection Limit in Logistic Regression

Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque

Abstract:

In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left-truncation, because the measuring device used to measure the covariate fails to detect values falling below a certain threshold. In regression analyses, this inflates the bias and mean squared error (MSE) of the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact introduced by the covariate detection limit on the estimators of the parameters of the simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler and hence easier to implement. It is robust to violation of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to the other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.
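To illustrate the problem (not the proposed calibration method), the sketch below simulates a covariate with a lower detection limit and shows how the common ad-hoc DL/2 substitution distorts the logistic regression slope; the data-generating values are assumptions chosen for demonstration.

```python
import numpy as np
import statsmodels.api as sm

# Simulated example: covariate x has a lower detection limit (DL); values below
# DL are unobserved and replaced by DL/2 (a common ad-hoc substitution) before fitting.
rng = np.random.default_rng(3)
n, beta0, beta1, dl = 5000, -2.0, 0.8, 0.5
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x))))

x_sub = np.where(x < dl, dl / 2.0, x)          # ad-hoc substitution below the DL

full = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
subbed = sm.Logit(y, sm.add_constant(x_sub)).fit(disp=0)
print("true slope:", beta1)
print("full-data estimate:    ", round(full.params[1], 3))
print("DL/2 substitution est.:", round(subbed.params[1], 3))
```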

Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution

Procedia PDF Downloads 211
3555 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
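A small Monte Carlo sketch of the upper-quantile computation is given below, assuming a hypothetical mean vector and covariance matrix for the jointly Gaussian GIC statistics; the paper's exact multivariate-Gaussian-integration implementation (via the R package "mvtnorm") is not reproduced here.

```python
import numpy as np

# Monte Carlo sketch: upper quantile of the minimum of jointly Gaussian GIC
# statistics across three candidate models (means and covariance are hypothetical).
rng = np.random.default_rng(0)
mean = np.array([0.0, 0.2, 0.5])
cov = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.5],
                [0.4, 0.5, 1.0]])
draws = rng.multivariate_normal(mean, cov, size=200_000)
min_gic = draws.min(axis=1)                      # minimum GIC over candidate models
print("95% upper quantile of the minimum:", np.quantile(min_gic, 0.95))
```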

Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory

Procedia PDF Downloads 61
3554 Comparative Study of Three Artificial Intelligence Techniques for Rain Domain in Precipitation Forecast

Authors: Nabilah Filzah Mohd Radzuan, Andi Putra, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Precipitation forecasting is important for avoiding natural disaster incidents, which can cause losses in the affected area. This paper reviews three techniques used in precipitation forecasting: logistic regression, decision trees, and random forests. Combining these techniques through the vector auto-regression (VAR) model helps in identifying the advantages and strengths of each technique in the forecasting process. The dataset contains variables from the rain domain. Adapting artificial intelligence techniques to the rain domain makes the precipitation forecasting process easier and more systematic.

Keywords: logistic regression, decision tree, random forest, VAR model

Procedia PDF Downloads 417
3553 Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes

Authors: Septimia Sarbu

Abstract:

The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory. The aim is to design more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates, by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate, for different values of alpha and beta, which are the parameters of the Sharma-Mittal entropy. In the end, we compare it with Shannon and Rényi's entropy rates for Gaussian processes.
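For reference, a standard discrete form of the Sharma-Mittal entropy with parameters α and β is shown below; the paper works with the corresponding entropy rate of a Gaussian process, i.e., the continuous analogue, so this block is only for orientation.

```latex
H_{\alpha,\beta}(p) \;=\; \frac{1}{1-\beta}\left[\Big(\sum_{i} p_i^{\alpha}\Big)^{\frac{1-\beta}{1-\alpha}} - 1\right],
\qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1,
```

which recovers the Rényi entropy as β → 1 and the Shannon entropy when both α and β tend to 1.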

Keywords: generalized entropies, Sharma-Mittal entropy rate, Gaussian processes, eigenvalues of the covariance matrix, squeeze theorem

Procedia PDF Downloads 476
3552 A Study of User Awareness and Attitudes Towards Civil-ID Authentication in Oman’s Electronic Services

Authors: Raya Al Khayari, Rasha Al Jassim, Muna Al Balushi, Fatma Al Moqbali, Said El Hajjar

Abstract:

This study utilizes linear regression analysis to investigate the correlation between user account passwords and the probability of civil ID exposure, offering statistical insights into civil ID security. The study employs multiple linear regression (MLR) analysis to further investigate the elements that influence consumers’ views of civil ID security. This aims to increase awareness and improve preventive measures. The results obtained from the MLR analysis provide a thorough understanding and can guide specific educational and awareness campaigns aimed at promoting improved security procedures. In summary, the study’s results offer significant insights for improving existing security measures and developing more efficient tactics to reduce risks related to civil ID security in Oman. By identifying key factors that impact consumers’ perceptions, organizations can tailor their strategies to address vulnerabilities effectively. Additionally, the findings can inform policymakers on potential regulatory changes to enhance civil ID security in the country.

Keywords: civil-id disclosure, awareness, linear regression, multiple regression

Procedia PDF Downloads 5
3551 A Research on Inference from Multiple Distance Variables in Hedonic Regression Focus on Three Variables

Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro

Abstract:

In an urban context, urban nodes such as amenities or hazards certainly affect house prices, and classic hedonic analysis employs distance variables measured from each urban node. However, the estimated effects of distances to facilities on house prices generally do not represent their true contribution to the price of the property. Distance variables measured on the same surface suffer from multicollinearity, which usually manifests in regression as inflated variance and unstable coefficient estimates. In this paper, we provide a theoretical framework to identify and gather data with less bias, and also provide a specific sampling method for locating the sample region so as to avoid the spatial multicollinearity problem in the three-distance-variable case.
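A minimal diagnostic sketch is shown below: it computes variance inflation factors (VIFs) for three hypothetical distance variables that share a common spatial trend, which is one simple way to make the multicollinearity visible; the variable names and data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical sample of properties: distance variables measured on the same
# surface tend to be correlated through a shared spatial trend.
rng = np.random.default_rng(5)
base = rng.uniform(0, 3000, 500)                     # shared spatial trend (m)
dist = pd.DataFrame({
    "dist_station": base + rng.normal(0, 200, 500),
    "dist_park": base + rng.normal(0, 300, 500),
    "dist_highway": rng.uniform(0, 3000, 500),       # less related to the others
})
X = sm.add_constant(dist)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, "VIF:", round(variance_inflation_factor(X.values, i), 2))
```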

Keywords: hedonic regression, urban node, distance variables, multicollinearity, collinearity

Procedia PDF Downloads 440
3550 A Problem on Homogeneous Isotropic Microstretch Thermoelastic Half Space with Mass Diffusion Medium under Different Theories

Authors: Devinder Singh, Rajneesh Kumar, Arvind Kumar

Abstract:

The present investigation deals with a generalized model of the equations for a homogeneous isotropic microstretch thermoelastic half-space with a mass diffusion medium. The Lord-Shulman (LS), Green-Lindsay (GL), and coupled theory (CT) theories of generalized thermoelasticity are applied to investigate the problem. The stresses in the considered medium have been studied due to normal and tangential forces. The normal mode analysis technique is used to calculate the normal stress, shear stress, couple stresses, and microstress. A numerical computation has been performed on the resulting quantities. The computed numerical results are shown graphically.

Keywords: microstretch, thermoelastic, normal mode analysis, normal and tangential force, microstress force

Procedia PDF Downloads 508
3549 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous nature of the volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance Coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated using three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. The findings reveal that Binance Coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that Binance Coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error innovation distributions. For Binance Coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
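A minimal sketch of this kind of fit using the Python arch package is shown below; it fits an EGARCH(1,1) model with skewed Student-t innovations to synthetic returns standing in for the daily closing-price data. The model orders and data values here are assumptions for illustration, not the study's specification.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic daily prices stand in for the Yahoo Finance closing prices.
rng = np.random.default_rng(11)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.03, 1500))))
returns = 100 * np.log(prices).diff().dropna()          # percent log-returns

# EGARCH(1,1) volatility with skewed Student-t innovations.
model = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist="skewt")
res = model.fit(disp="off")
print(res.summary())
# A skewness parameter significantly different from zero would support using a
# skewed, fat-tailed innovation distribution, as argued in the abstract.
```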

Keywords: skewed generalized error distribution, skewed normal distribution, skewed student t- distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 67
3548 Urban Energy Demand Modelling: Spatial Analysis Approach

Authors: Hung-Chu Chen, Han Qi, Bauke de Vries

Abstract:

Energy consumption in the urban environment has attracted numerous studies in recent decades. However, it is comparatively rare to find studies that investigate 3D spatial analysis in urban energy demand modelling. In order to analyze the spatial correlation between urban morphology and energy demand comprehensively, this paper investigates their relation by using spatial regression tools, namely the ordinary least squares regression (OLS) and geographically weighted regression (GWR) models. The Normalized Difference Built-up Index (NDBI), Normalized Difference Vegetation Index (NDVI), and building volume are descriptors of urban morphology, which act as independent variables of the Energy-Land use (E-L) model. NDBI and NDVI are used as indices to describe five types of land use: urban area (U), open space (O), artificial green area (G), natural green area (V), and water body (W). Accordingly, annual electricity demand, gas demand, and total energy demand are the dependent variables of the E-L model. The analytical results of the E-L model reveal that energy demand and urban morphology are closely connected, and the possible causes and practical uses are discussed. In addition, the OLS and GWR spatial analysis methods are compared.

Keywords: energy demand model, geographically weighted regression, normalized difference built-up index, normalized difference vegetation index, spatial statistics

Procedia PDF Downloads 118
3547 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling

Authors: Muhammad Nouman Qureshi, Muhammad Hanif

Abstract:

Adaptive cluster sampling (ACS) was specifically developed for the estimation of highly clumped populations and has been applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, HIV patients, and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. The expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator have been made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.

Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation

Procedia PDF Downloads 201
3546 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such inverse problems results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10^9 to 10^18. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where using interior-point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
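For context, the classical Kaczmarz iteration, one of the baselines mentioned above, is sketched below on a small Hilbert-type system; the authors' stochastic preconditioning and Lp-norm generalizations are not reproduced here.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, x0=None):
    """Classical Kaczmarz iteration: cyclically project the current iterate
    onto the hyperplane defined by each row a_i^T x = b_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Example on a (mildly) ill-conditioned Hilbert-type system.
n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])   # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true
print("residual norm:", np.linalg.norm(A @ kaczmarz(A, b, n_sweeps=2000) - b))
```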

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 109
3545 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir by using regression-based modelling. Empirical equations, support vector machine (polynomial and radial basis function) models, and linear regression techniques were applied to the triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. It was observed that there exists good agreement between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models, and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models also provided acceptable predictions of the measured values with reasonable accuracy, alongside the empirical equations and linear regression techniques, in modelling the air entrainment rate and the aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir. Further, a sensitivity analysis has been performed to study the impact of each input parameter on the output in terms of air entrainment rate and aeration efficiency.
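A minimal sketch of fitting polynomial- and RBF-kernel support vector regression to such data is given below, using hypothetical values for drop height, discharge, and vertex angle; the data-generating relation is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical weir data: inputs are drop height (m), discharge (l/s) and
# vertex angle (deg); the target is aeration efficiency.
rng = np.random.default_rng(9)
X = np.column_stack([rng.uniform(0.2, 1.5, 120),     # drop height
                     rng.uniform(1.0, 10.0, 120),    # discharge
                     rng.uniform(30, 120, 120)])     # vertex angle
y = 0.3 * X[:, 0] + 0.02 * X[:, 1] + 0.001 * X[:, 2] + rng.normal(0, 0.02, 120)

for kernel in ("poly", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0, epsilon=0.01))
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(kernel, "mean CV R^2:", round(r2, 3))
```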

Keywords: air entrainment rate, dissolved oxygen, weir, SVM, regression

Procedia PDF Downloads 404
3544 Use of Regression Analysis in Determining the Length of Plastic Hinge in Reinforced Concrete Columns

Authors: Mehmet Alpaslan Köroğlu, Musa Hakan Arslan, Muslu Kazım Körez

Abstract:

The basic objective of this study is to create a regression analysis method that can estimate the length of a plastic hinge, which is an important design parameter, by making use of the outcomes (lateral load-lateral displacement hysteretic curves) of experimental studies conducted on square reinforced concrete columns. For this aim, 170 different square reinforced concrete column test results have been collected from the existing literature. The parameters thought to affect the plastic hinge length, such as cross-section properties, features of the materials used, axial loading level, confinement of the column, longitudinal reinforcement bars in the column, etc., have been obtained from these 170 different square reinforced concrete column tests. In the study, regression analyses based on the experimental test results have been separately tested and compared with each other for determining the length of the plastic hinge. In addition, the outcomes of the mentioned methods in determining the plastic hinge length of reinforced concrete columns have been compared to other methods available in the literature.

Keywords: columns, plastic hinge length, regression analysis, reinforced concrete

Procedia PDF Downloads 448
3543 Predict Suspended Sediment Concentration Using Artificial Neural Networks Technique: Case Study Oued El Abiod Watershed, Algeria

Authors: Adel Bougamouza, Boualam Remini, Abd El Hadi Ammari, Feteh Sakhraoui

Abstract:

The assessment of the sediment being carried by a river is important for the planning and design of various water resources projects. In this study, artificial neural network techniques are used to estimate the daily suspended sediment concentration from the corresponding daily discharge upstream of the Foum El Gherza dam, Biskra, Algeria. The FFNN, GRNN, and RBNN models are established for estimating current suspended sediment values. Statistics involving RMSE and R² were used to evaluate the performance of the applied models. The comparison of the three AI models showed that the RBNN model performed better than the FFNN and GRNN models, with R² = 0.967 and RMSE = 5.313 mg/l. Therefore, the ANN models have the capability to capture the nonlinear relationship between discharge and suspended sediment with reasonable precision.

Keywords: artificial neural network, Oued Abiod watershed, feedforward network, generalized regression network, radial basis network, sediment concentration

Procedia PDF Downloads 379
3542 A Boundary Backstepping Control Design for 2-D, 3-D and N-D Heat Equation

Authors: Aziz Sezgin

Abstract:

We consider the problem of stabilization of an unstable heat equation in a 2-D, 3-D and, generally, n-D domain by deriving a generalized backstepping boundary control design methodology. To stabilize the systems, we design boundary backstepping controllers inspired by the 1-D unstable heat equation stabilization procedure. We assume that one side of the boundary is hinged and the other side is controlled for each direction of the domain. Thus, controllers act on two boundaries for the 2-D domain, three boundaries for the 3-D domain and n boundaries for the n-D domain. The main idea of the design is to derive n controllers, one for each of the dimensions, by using n kernel functions. Thus, we obtain n controllers for the n-dimensional case. We use a transformation to change the system into an exponentially stable n-dimensional heat equation. The transformation used in this paper is of a generalized Volterra/Fredholm type with n kernel functions for the n-D domain instead of the one kernel function of the 1-D design.

Keywords: backstepping, boundary control, 2-D, 3-D, n-D heat equation, distributed parameter systems

Procedia PDF Downloads 375
3541 Quantitative Structure-Activity Relationship Study of Some Quinoline Derivatives as Antimalarial Agents

Authors: M. Ouassaf, S. Belaid

Abstract:

A series of quinoline derivatives with antimalarial activity was subjected to two-dimensional quantitative structure-activity relationship (2D-QSAR) studies. Three models were implemented using multiple linear regression (MLR), partial least squares regression (PLS), and multiple nonlinear regression (MNLR) to see which descriptors are closely related to the biological activity. We also relied on a principal component analysis (PCA). Based on our results, a comparison of the quality of the MLR, PLS, and MNLR models shows that the MNLR model (R = 0.914, R² = 0.835, Rcv = 0.853) has substantially better predictive capability, as the MNLR approach gives better results than MLR (R = 0.835, R² = 0.752, Rcv = 0.601) and PLS (R = 0.742, R² = 0.552, Rcv = 0.550). The MNLR model gave statistically significant results and showed good stability to data variation in leave-one-out cross-validation. The obtained results suggest that our proposed MNLR model may be useful for predicting the biological activity of quinoline derivatives.

Keywords: antimalarial, quinoline, QSAR, PCA, MLR, PLS, MNLR

Procedia PDF Downloads 123
3540 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes. An accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development. Researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set to 80% and the testing set to 20%. The two regression algorithms (Elastic Net and LASSO) are trained in two different phases following the train-test split. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (used to tune and select the optimum parameters) and 5-fold cross-validation are applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to the agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two proposed algorithms, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, MMRE of 0.0491, MMER of 0.0551, MdMRE of 0.0593, MdMER of 0.063, and MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and offers higher estimation performance than that reported in the literature.
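A minimal sketch of the second phase of the described pipeline (normalization, 80/20 split, then grid search with 5-fold cross-validation) is shown below on hypothetical story-point data; the feature layout, parameter grids, and data values are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler

# Hypothetical story-point data: project/story features and an effort target.
rng = np.random.default_rng(2)
X = rng.uniform(size=(210, 5))
y = X @ np.array([3.0, 1.5, 0.5, 0.0, 0.0]) + rng.normal(0, 0.2, 210)   # story points

X = MinMaxScaler().fit_transform(X)             # preprocessing: normalization
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

grids = {
    "LASSO": (Lasso(max_iter=10_000), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    "ElasticNet": (ElasticNet(max_iter=10_000),
                   {"alpha": [0.001, 0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
}
for name, (est, grid) in grids.items():
    search = GridSearchCV(est, grid, cv=5, scoring="neg_mean_squared_error").fit(X_tr, y_tr)
    mse = np.mean((search.predict(X_te) - y_te) ** 2)
    print(name, search.best_params_, "test MSE:", round(mse, 4))
```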

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 24
3539 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. Such a phenomenon has a number of practical applications, for example, for secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increasing the privacy of data transmission, which raises the question of observing such a phenomenon in systems with a complex topology of the chaotic attractor possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems being in chaotic (with one positive Lyapunov exponent) and hyperchaotic (with two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of Lyapunov exponents and the phase tube approach, whereas, due to the complex topology of the attractor, the nearest neighbor method is misleading. Moreover, the auxiliary system approach, being the standard method for observing the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying great attention to the region where one positive Lyapunov exponent has already become negative whereas the second one is still positive. We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band chaos regime. We have revealed the main characteristics of intermittency, i.e., the distribution of the laminar phase lengths and the dependence of the mean length of the laminar phases on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 242
3538 On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes

Authors: Amit Ghosh, Chanchal Kundu

Abstract:

Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in the univariate as well as the bivariate setup. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components failed at different time instants, called generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on this proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.

Keywords: cumulative past inaccuracy, marginal and conditional past lifetimes, conditional proportional reversed hazard rate model, usual stochastic order

Procedia PDF Downloads 222
3537 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

A branch of modern mathematics, graph theory provides instruments for optimization and for solving practical problems in various fields such as economic networks, engineering, network optimization, the geometry of social action and, generally, complex systems, including contemporary urban problems (path or transport efficiencies, biourbanism, etc.). In this paper, the interconnection of urban networks is studied, which leads to the problem of simulating one digraph through another digraph. The simulation can be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, which is proposed for the first time in this paper, contributes significantly to improving the evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in this paper.

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 210
3536 Robustified Asymmetric Logistic Regression Model for Global Fish Stock Assessment

Authors: Osamu Komori, Shinto Eguchi, Hiroshi Okamura, Momoko Ichinokawa

Abstract:

Long time-series data on population assessments are essential for global ecosystem assessment because the temporal change of biomass in such a database properly reflects the status of the global ecosystem. However, the available assessment data usually have limited sample sizes, and the ratio of populations with low abundance of biomass (collapsed) to those with high abundance (non-collapsed) is highly imbalanced. To allow for the imbalance and uncertainty involved in the ecological data, we propose a binary regression model with mixed effects for inferring ecosystem status through an asymmetric logistic model. In the estimation equation, we observe that the weights for the non-collapsed populations are relatively reduced, which in turn puts more importance on the small number of observations of collapsed populations. Moreover, we extend the asymmetric logistic regression model using propensity scores to allow for the sample biases observed in the labeled and unlabeled datasets. This robustifies the estimation procedure and improves the model fitting.

Keywords: double robust estimation, ecological binary data, mixed effect logistic regression model, propensity score

Procedia PDF Downloads 235
3535 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.

Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution

Procedia PDF Downloads 346