Search results for: multinomial logistic regression.
Paper Count: 822

762 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for describing data. When the model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model. The Bayesian method is used to estimate the parameters; unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is therefore proposed to solve this problem. The algorithm generates a Markov chain that converges to the posterior distribution of the model parameters, and the resulting chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.

Keywords: Piecewise, Bayesian, reversible jump MCMC, segmentation.

761 Early Warning System of Financial Distress Based On Credit Cycle Index

Authors: Bi-Huei Tsai

Abstract:

Previous studies on financial distress prediction adopt a conventional failing/non-failing dichotomy; however, the extent of distress differs substantially among financial distress events. To address this, the categories "non-distressed", "slightly distressed" and "reorganization and bankruptcy" are used in our article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, this investigation adopts firm-specific financial ratios, corporate governance and market factors to measure the probability of the various financial distress events based on multinomial logit models. Specifically, a bootstrapping simulation is performed to examine differences in the estimated misclassification cost (EMC). Second, this work applies macroeconomic factors to establish a credit cycle index and uses that index to determine the distress cut-off indicator of the two-stage models. Two models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and the results of the two models are compared with each other and with the collected data. The findings show that the one-stage model has a lower misclassification error rate than the two-stage model and is therefore the more accurate of the two.

Keywords: Multinomial logit model, corporate governance, company failure, reorganization, bankruptcy.
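
To make the modeling step concrete, here is a minimal sketch of a multinomial logit fit in the spirit of the abstract, using scikit-learn on synthetic data; the three features standing in for financial ratios, governance and market factors, and all numbers, are invented for illustration.

```python
# Minimal sketch: a three-class multinomial logit on synthetic firm data.
# Features standing in for financial ratios, governance and market factors
# are hypothetical; the class labels mirror the abstract's three categories.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                      # e.g. debt ratio, board size, return
logits = X @ rng.normal(size=(3, 3))             # latent class scores
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

# 0 = non-distressed, 1 = slightly distressed, 2 = reorganization/bankruptcy
model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
print(model.predict_proba(X[:3]).round(3))       # distress-event probabilities per firm
```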

760 The Relative Efficiency of Parameter Estimation in Linear Weighted Regression

Authors: Baoguang Tian, Nan Chen

Abstract:

A relative efficiency defined for the linear model in the literature is introduced into linear weighted regression, and its upper and lower bounds are proposed. In the linear weighted regression model, two new relative efficiencies are given for the best linear unbiased estimator of the mean matrix with respect to the least-squares estimator, and their upper and lower bounds are also studied.

Keywords: Linear weighted regression, Relative efficiency, Mean matrix, Trace.
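
As a worked illustration of a trace-based relative efficiency of the kind studied above (the paper treats the mean matrix; this sketch uses the single-response version as a simplifying assumption), one can compare the covariance traces of the least-squares estimator and the best linear unbiased (weighted) estimator:

```python
# A minimal numeric sketch of one trace-based relative efficiency:
# eff = tr(Cov(BLUE)) / tr(Cov(OLS)) under a known error covariance Sigma.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
Sigma = np.diag(rng.uniform(0.5, 4.0, size=n))   # heteroscedastic error variances

XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv    # covariance of the LS estimator
cov_blue = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)  # covariance of the BLUE

eff = np.trace(cov_blue) / np.trace(cov_ols)
print(f"relative efficiency: {eff:.3f}")         # <= 1; equals 1 iff OLS is already BLUE
```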

759 A Discrete-Event-Simulation Approach for Logistic Systems with Real Time Resource Routing and VR Integration

Authors: Gerrit Alves, Jürgen Roßmann, Roland Wischnewski

Abstract:

Today, transport and logistic systems are often tightly integrated into production. Lean production and just-in-time delivery create multiple constraints that have to be fulfilled. As transport networks have often evolved over time, they are very expensive to change. This paper describes a discrete-event simulation system which simulates transportation models using real-time resource routing and collision avoidance. It allows for the specification of custom control algorithms and the validation of new strategies. The simulation is integrated into a virtual reality (VR) environment and can be displayed in 3-D to show its progress. Simulation elements can be selected through VR metaphors. All data gathered during the simulation can be presented as a detailed summary afterwards. The included cost-benefit calculation can help to optimize the financial outcome. The operation of this approach is shown by the example of a timber harvest simulation.

Keywords: Discrete-Event-Simulation, Logistic, Simulation, Virtual Reality.
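
A discrete-event simulator of this kind is ultimately driven by a time-ordered event queue. The following minimal sketch shows that core mechanism only; the jobs, trip times and single shared transporter are illustrative assumptions, not the authors' system.

```python
# A minimal discrete-event core: a time-ordered event queue processed in order.
# Jobs, trip duration and the single "transporter" resource are assumptions.
import heapq

events = []  # (time, sequence, description)
seq = 0

def schedule(t, what):
    global seq
    heapq.heappush(events, (t, seq, what)); seq += 1

# three transport jobs arriving over time
for i, arrival in enumerate([0.0, 1.5, 2.0]):
    schedule(arrival, f"job {i} requests transport")

clock, busy_until = 0.0, 0.0
while events:
    clock, _, what = heapq.heappop(events)
    if "requests" in what:
        start = max(clock, busy_until)       # wait if the transporter is busy
        busy_until = start + 1.0             # fixed 1.0 time-unit trip (assumed)
        schedule(busy_until, what.replace("requests transport", "delivered"))
    print(f"t={clock:4.1f}  {what}")
```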

758 Optimal Measures in Production: Developing a Universal Decision Supporter for Evaluating Measures in a Production

Authors: Michael Grigutsch, Marco Kennemann, Peter Nyhuis

Abstract:

Due to the recovering global economy, enterprises are increasingly focusing on logistics. Investing in logistic measures for a production generates large potential for achieving a good starting position in a competitive field. Unlike during the global economic crisis, enterprises are now challenged with investing available capital to maximize profits. In order to create an informed and quantifiably comprehensible basis for a decision, enterprises need an adequate model for logistically and monetarily evaluating measures in production. The Collaborative Research Centre 489 (SFB 489) at the Institute for Production Systems (IFA) developed a Logistic Information System which provides support in making decisions and is designed specifically for the forging industry. A follow-up project now aims to transfer this approach in order to develop a universal method for logistically and monetarily evaluating measures in production.

Keywords: Measures in Production, Logistic Operating Curves, Transfer Functions, Production Logistics

757 Internet Purchases in European Union Countries: Multiple Linear Regression Approach

Authors: Ksenija Dumičić, Anita Čeh Časni, Irena Palić

Abstract:

This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recently increasing Internet purchases by individuals across European Union member states. After a growing trend in Internet purchases in the EU27 was observed, an all-possible-regressions analysis was applied using nine independent variables for 2011. Finally, two linear regression models were studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchases in the analyzed EU countries are positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). The multiple linear regression model with four regressors describing the ICT development level likewise indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.

Keywords: European Union, Internet purchases, multiple linear regression model, outlier
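
The two regressions described can be sketched as follows with statsmodels; the data, variable names and coefficients are invented stand-ins for the Eurostat figures used in the paper.

```python
# Sketch of the two models: a simple regression on GDP per capita and a
# multiple regression with four ICT regressors. All data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 27                                   # EU27 member states
gdp_pc = rng.uniform(10, 90, n)          # GDP per capita (assumed units)
ict = rng.normal(size=(n, 4))            # four ICT-development indicators
internet_purchases = 5 + 0.4 * gdp_pc + ict @ np.array([3.0, 2.0, 1.0, 1.0]) \
                     + rng.normal(0, 4, n)

simple = sm.OLS(internet_purchases, sm.add_constant(gdp_pc)).fit()
multiple = sm.OLS(internet_purchases, sm.add_constant(ict)).fit()
print(simple.params, simple.pvalues)     # slope on GDPpc and its significance
print(multiple.rsquared)                 # fit of the four-regressor ICT model
```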

756 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium to longer term liabilities. The relative strengths and weaknesses of the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.

Keywords: Actuarial loss reserving techniques, logistic regression, parametric function, volatility.
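
A minimal sketch of the parametric idea, assuming a three-parameter logistic development curve: fit the sigmoid to a cohort's cumulative reported losses and read the ultimate loss off its asymptote. The cohort data below are invented.

```python
# Fit a logistic (sigmoidal) loss development curve to one cohort and take
# its asymptote as the ultimate loss. The cohort data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ultimate, k, t0):
    # cumulative reported loss approaching `ultimate` along a sigmoid
    return ultimate / (1.0 + np.exp(-k * (t - t0)))

dev_periods = np.arange(1, 13)                       # development age (quarters)
reported = logistic(dev_periods, 100.0, 0.6, 5.0) + \
           np.random.default_rng(3).normal(0, 1.5, 12)

params, _ = curve_fit(logistic, dev_periods, reported, p0=[80.0, 0.5, 6.0])
ultimate, k, t0 = params
print(f"estimated ultimate loss: {ultimate:.1f}")    # reserve = ultimate - paid to date
```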

755 Extended Least Squares LS–SVM

Authors: József Valyon, Gábor Horváth

Abstract:

Among neural models, Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity; therefore a newer technique, the Least Squares SVM (LS-SVM), has been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS-SVR), which enables us to develop new formulations and algorithms for this regression technique. Based on manipulating the linear equation set, which embodies all information about the regression in the learning process, some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results.

Keywords: Function estimation, Least-Squares Support Vector Machines, Regression, System Modeling
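
For reference, the standard LS-SVM regression training step that this extension builds on reduces to a single linear system in the bias b and the support values alpha; a compact sketch with an RBF kernel follows.

```python
# Standard LS-SVM regression: solve one linear system in (b, alpha).
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)

gamma, n = 10.0, len(y)
K = rbf(X, X)
# [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
b, *alpha = np.linalg.solve(A, np.concatenate(([0.0], y)))
alpha = np.array(alpha)

x_test = np.array([[0.5]])
print(rbf(x_test, X) @ alpha + b, np.sin(0.5))   # prediction vs. truth
```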

754 Optimization of Slider Crank Mechanism Using Design of Experiments and Multi-Linear Regression

Authors: Galal Elkobrosy, Amr M. Abdelrazek, Bassuny M. Elsouhily, Mohamed E. Khidr

Abstract:

Crankshaft length, connecting rod length, crank angle, engine rpm, cylinder bore, piston mass and compression ratio are the inputs that control the performance of the slider crank mechanism and hence its efficiency. Several combinations of these seven inputs are used and compared. The engine torque predicted by the simulation is analyzed through two different regression models, with and without interaction terms, developed according to multi-linear regression using LU decomposition to solve the system of algebraic equations. These models are validated. A regression model in the seven inputs including their interaction terms lowered the required polynomial degree from 3rd to 1st and yielded valid predictions and stable explanations.

Keywords: Design of experiments, regression analysis, SI Engine, statistical modeling.
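
A sketch of the described fitting route: build a design matrix with the seven main effects and all pairwise interaction terms, then solve the normal equations by LU decomposition. The inputs are random stand-ins for the engine parameters.

```python
# Regression with main effects + pairwise interactions, solved via LU.
import numpy as np
from itertools import combinations
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(5)
n, p = 100, 7                                  # runs x inputs (crank length, rpm, ...)
X = rng.uniform(-1, 1, size=(n, p))            # coded DOE levels
torque = 3 + X @ rng.normal(size=p) + 0.5 * X[:, 0] * X[:, 3] + rng.normal(0, 0.1, n)

cols = [np.ones(n)] + [X[:, j] for j in range(p)] + \
       [X[:, i] * X[:, j] for i, j in combinations(range(p), 2)]
D = np.column_stack(cols)                      # 1 + 7 + 21 = 29 columns

lu, piv = lu_factor(D.T @ D)                   # LU of the normal-equation matrix
beta = lu_solve((lu, piv), D.T @ torque)
print(beta[:8])                                # intercept + main effects
```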

753 Churn Prediction: Does Technology Matter?

Authors: John Hadden, Ashutosh Tiwari, Rajkumar Roy, Dymitr Ruta

Abstract:

The aim of this paper is to identify the most suitable model for churn prediction based on three different techniques. The paper identifies the variables that affect churn with reference to customer complaints data and provides a comparative analysis of neural networks, regression trees and regression in their capabilities of predicting customer churn.

Keywords: Churn, Decision Trees, Neural Networks, Regression.

752 Research on the Problems of Housing Prices in Qingdao from a Macro Perspective

Authors: Liu Zhiyuan, Sun Zongdi

Abstract:

Qingdao is a seaside city. Taking its characteristics into account, this article establishes a multiple linear regression model to analyze the impact of macroeconomic factors on housing prices. We used the stepwise regression method for the multiple linear regression analysis and performed statistical tests on the F-test and t-test values. The model was continuously optimized according to the analysis results. Finally, this article obtains the multiple linear regression equation and the influencing factors, and the reliability of the model is verified by the F-test and t-test.

Keywords: Housing prices, multiple linear regression model, macroeconomic factors, Qingdao City.
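
A minimal forward-stepwise loop of the kind described, adding the regressor with the smallest t-test p-value until none remains significant; the macroeconomic factor names and data are hypothetical stand-ins for the Qingdao series.

```python
# Forward stepwise selection driven by t-test p-values (5% threshold).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 120
factors = {"gdp": rng.normal(size=n), "population": rng.normal(size=n),
           "interest_rate": rng.normal(size=n), "land_price": rng.normal(size=n)}
price = 10 + 3 * factors["gdp"] + 2 * factors["land_price"] + rng.normal(0, 1, n)

selected, remaining = [], set(factors)
while remaining:
    pvals = {}
    for name in remaining:
        cols = np.column_stack([factors[f] for f in selected + [name]])
        fit = sm.OLS(price, sm.add_constant(cols)).fit()
        pvals[name] = fit.pvalues[-1]            # t-test on the newly added regressor
    best = min(pvals, key=pvals.get)
    if pvals[best] > 0.05:                       # stop: nothing significant left
        break
    selected.append(best)
    remaining.remove(best)
print(selected)                                  # e.g. ['gdp', 'land_price']
```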

751 Estimating Regression Parameters in Linear Regression Model with a Censored Response Variable

Authors: Jesus Orbe, Vicente Nunez-Anton

Abstract:

In this work we study the effect of several covariates X on a censored response variable T with unknown probability distribution. In this context, most studies in the literature can be located in two general classes of regression models: models that study the effect the covariates have on the hazard function, and models that study the effect the covariates have on the censored response variable. The proposals in this paper belong to the second class of models and, more specifically, to the least-squares-based model approach. Thus, using the bootstrap estimate of the bias, we try to improve the estimation of the regression parameters by reducing their bias for small sample sizes. Simulation results presented in the paper show that, for reasonable sample sizes and censoring levels, the bias is always smaller for the new proposals.

Keywords: Censored response variable, regression, bias.
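
The bootstrap bias-correction idea can be sketched as follows; for brevity the estimator is plain least squares on uncensored data, which is a simplification of the censored-response setting studied in the paper.

```python
# Bootstrap bias correction: estimate the estimator's bias by resampling,
# then subtract it. Least squares stands in for the censored-data estimator.
import numpy as np

rng = np.random.default_rng(7)
n = 40
x = rng.uniform(0, 2, n)
t = 1.0 + 0.8 * x + rng.normal(0, 0.5, n)        # response

def fit_slope(xs, ys):
    X = np.column_stack([np.ones_like(xs), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][1]

beta_hat = fit_slope(x, t)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                  # resample pairs with replacement
    boot.append(fit_slope(x[idx], t[idx]))
bias = np.mean(boot) - beta_hat                  # bootstrap estimate of the bias
print(beta_hat, beta_hat - bias)                 # raw vs. bias-corrected slope
```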

750 Adjusted Ratio and Regression Type Estimators for Estimation of Population Mean When Some Observations Are Missing

Authors: Nuanpan Nangsue

Abstract:

Ratio and regression type estimators have been used by previous authors to estimate a population mean for the principal variable from samples in which both auxiliary x and principal y variable data are available. However, missing data are a common problem in statistical analyses with real data. Ratio and regression type estimators have also been used for imputing values of missing y data. In this paper, six new ratio and regression type estimators are proposed for imputing values for any missing y data and estimating a population mean for y from samples with missing x and/or y data. A simulation study has been conducted to compare the six ratio and regression type estimators with a previous estimator of Rueda. Two population sizes N = 1,000 and 5,000 have been considered, with sample sizes of 10% and 30% and with correlation coefficients between the population variables X and Y of 0.5 and 0.8. In the simulations, 10 and 40 percent of sample y values and 10 and 40 percent of sample x values were randomly designated as missing. The new ratio and regression type estimators give similar mean absolute percentage errors that are smaller than those of the Rueda estimator in all cases. The new estimators give a large reduction in errors for the case of 40% missing y values and a sampling fraction of 30%.

Keywords: Auxiliary variable, missing data, ratio and regression type estimators.
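
A minimal sketch of ratio-type imputation for missing y values, one ingredient of the estimators above: impute each missing y from the respondents' ratio ybar/xbar times the unit's x, then average the completed sample.

```python
# Ratio-type imputation for missing y, using the fully observed auxiliary x.
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(5, 15, n)                        # auxiliary variable, fully observed
y = 2.0 * x + rng.normal(0, 2, n)                # principal variable
observed = rng.random(n) > 0.4                   # ~40% of y randomly missing

ratio = y[observed].mean() / x[observed].mean()  # classical ratio from respondents
y_completed = np.where(observed, y, ratio * x)   # impute missing y via the ratio
print(y_completed.mean(), y.mean())              # estimator vs. full-data mean
```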

749 The Technological Problem of Simulation of the Logistics Center

Authors: Juraj Camaj, Anna Dolinayova, Jana Lalinska, Miroslav Bariak

Abstract:

Planning the infrastructure and processes of a logistics center, within the frame of various kinds of logistic hubs and the technological activities in them, represents a quite complex problem. The main goal is to design an appropriate layout which enables the expected operations to be realized at the desired levels. Simulation software represents a progressive contemporary experimental technique that can support the complex processes of infrastructure planning and all activities within it. Simulation experiments reflecting various planned infrastructure variants investigate and verify their suitability in relation to the corresponding expected operation. This approach enables qualified decisions to be made about infrastructure investments or measures that benefit from simulation-based verification. The paper presents simulation software for simulating the infrastructural layout and technological activities in a marshalling yard, an intermodal terminal, a warehouse, and combinations of them as parts of a logistics center.

Keywords: Marshalling yard, intermodal terminal, warehouse, transport technology, simulation.

748 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. Three primary problems arose during the modeling process: the incomparability of customer values, multicollinearity between the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that combines all three methods is the best for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.

Keywords: Customer value, Huff's Gravity Model, POS, retailer.
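
For readers unfamiliar with Huff's gravity model, this small sketch shows the patronage probabilities it assigns: attraction grows with store attractiveness and decays with distance. The attractiveness values, distances and decay exponent are assumptions.

```python
# Huff's gravity model: probability a customer patronizes store j.
import numpy as np

attractiveness = np.array([3.0, 1.5, 2.0])        # e.g. floor space of 3 stores
dist = np.array([1.2, 0.5, 2.5])                  # customer-to-store distances (km)
decay = 2.0                                       # distance-decay exponent (assumed)

utility = attractiveness / dist ** decay
prob = utility / utility.sum()                    # Huff patronage probabilities
print(prob.round(3))                              # the nearby small store competes well
```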

747 Delivery System Design of the Local Part to Reduce the Logistic Costs in an Automotive Industry

Authors: Inaki Maulida Hakim, Alesandro Romero

Abstract:

This research was conducted in an automotive company in Indonesia to overcome the problem of high logistics cost, which causes a high number of additional truck deliveries. From the breakdown of the problem, the route with the highest gap value, RE-04, was chosen. The research methodology starts from calculating the ideal condition, building a simulation, calculating the ideal logistic cost, and proposing an improvement. From the calculation of the ideal condition, the box arrangement on the truck reaches the target efficiency with three truck deliveries per day. The route simulation uses Tecnomatix Plant Simulation software to visualize for the company how the system operates on route RE-04 under the ideal condition. The last step is proposing improvements in the area of route RE-04. The route arrangement is done with the Savings Method, and the sequence of suppliers is determined with the Nearest Neighbor heuristic. The results of the proposed improvements are three new route groups, which are expected to decrease the logistics cost and increase the average truck efficiency per day.

Keywords: Logistic cost, milkrun, simulation, efficiency.

746 Quality of Service Evaluation using a Combination of Fuzzy C-Means and Regression Model

Authors: Aboagela Dogman, Reza Saatchi, Samir Al-Khayatt

Abstract:

In this study, a network quality of service (QoS) evaluation system was proposed. The system used a combination of fuzzy C-means (FCM) and a regression model to analyse and assess the QoS in a simulated network. Network QoS parameters of multimedia applications were intelligently analysed by the FCM clustering algorithm. The QoS parameters for each FCM cluster centre were then inputted to a regression model in order to quantify the overall QoS. The proposed QoS evaluation system provided valuable information about the network's QoS patterns and, based on this information, the overall network QoS was effectively quantified.

Keywords: Fuzzy C-means, regression model, network quality of service.
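
A compact plain-NumPy version of the pipeline's two stages, under invented data: a fuzzy C-means loop produces cluster centres, and a regression on those centres recovers an (assumed) overall-QoS weighting.

```python
# Fuzzy C-means clustering followed by a linear fit on the cluster centres.
# The three features stand in for QoS parameters (e.g. delay, jitter, loss).
import numpy as np

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(mu, 0.3, size=(50, 3)) for mu in (0.5, 2.0, 4.0)])

n_clusters, fuzz = 3, 2.0                         # clusters, fuzzifier m
U = rng.random((len(X), n_clusters))
U /= U.sum(1, keepdims=True)
for _ in range(100):
    W = U ** fuzz
    centers = (W.T @ X) / W.sum(0)[:, None]       # weighted cluster centres
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
    inv = d ** (-2.0 / (fuzz - 1.0))              # standard FCM membership update
    U = inv / inv.sum(1, keepdims=True)

# regress an (assumed) overall-QoS score on the centre coordinates;
# with three centres and three features the fit is exact
qos = centers @ np.array([0.5, 0.3, 0.2])         # hypothetical QoS weighting
coef = np.linalg.lstsq(centers, qos, rcond=None)[0]
print(centers.round(2), coef.round(2))
```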

745 Performance Analysis of Adaptive LMS Filter through Regression Analysis using SystemC

Authors: Hyeong-Geon Lee, Jae-Young Park, Suk-ki Lee, Jong-Tae Kim

Abstract:

The LMS adaptive filter has several parameters that affect its performance. Of these, most papers handle only the step-size parameter for controlling performance. In this paper, we examine three parameters: step size, filter tap-size and filter form. Regression analysis is used to define the relation between the parameters and the performance of the LMS adaptive filter, using system-level simulation results. The results show that each parameter has its own particular performance trend, which can be estimated from the equations derived by regression analysis.

Keywords: System level model, adaptive LMS FIR filter, regression analysis, systemC.
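
The object under study, a standard LMS adaptive FIR filter, can be sketched in a few lines; here it identifies an unknown 4-tap system, with the step size and tap count that the paper varies shown explicitly.

```python
# Standard LMS adaptive FIR filter identifying an unknown 4-tap system.
import numpy as np

rng = np.random.default_rng(10)
unknown = np.array([0.6, -0.3, 0.2, 0.1])        # system to identify
x = rng.normal(size=2000)                        # input signal
d = np.convolve(x, unknown)[: len(x)]            # desired (reference) output

mu, taps = 0.01, 4                               # step size and filter tap-size
w = np.zeros(taps)
for n in range(taps, len(x)):
    u = x[n - taps + 1 : n + 1][::-1]            # most recent samples, newest first
    e = d[n] - w @ u                             # instantaneous error
    w = w + mu * e * u                           # LMS weight update
print(w.round(3))                                # converges toward `unknown`
```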

744 Cost Sensitive Analysis of Production Logistics Measures: A Decision Making Support System for Evaluating Measures in the Production

Authors: Michael Grigutsch, Peter Nyhuis

Abstract:

Due to the volatile global economy, enterprises are increasingly focusing on logistics. By investing in suitable measures, a company can increase its logistic performance and assert itself over the competition. However, enterprises are also faced with the challenge of investing available capital for maximum profit. In order to create an informed and quantifiably comprehensible basis for a decision, enterprises need a suitable model for logistically and monetarily evaluating measures in production. Previously, within the frame of the Collaborative Research Centre 489 (SFB 489), the Institute for Production Systems and Logistics (IFA) developed a Logistic Information System that supports decision making, designed specifically for enterprises in the forging industry. Based on this research, a new initiative referred to as 'Transfer Project T7' aims to develop a universal approach for logistically and monetarily evaluating production measures. This paper focuses on the structural measure of echelon storage and its impact on the entire production system.

Keywords: Logistic Operating Curves, Transfer Functions, Production Logistics, Echelon Storage.

743 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronics world demand novel, compact, simple, low-cost and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 agree with the experimental data, with a MARD within ±1.81%.

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant.
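
Of the listed networks, the generalized regression ANN (GRNN) is particularly compact: it is a Gaussian-kernel weighted average of the training targets, controlled by the spread constant. A sketch on invented (filling ratio, heat input) data follows; only the spread value 4.8 comes from the abstract.

```python
# Minimal GRNN: a Gaussian-kernel weighted average of training targets.
# The (filling ratio, heat input) -> thermal resistance data are invented.
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * spread ** 2))          # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)         # normalized weighted output

rng = np.random.default_rng(11)
X = rng.uniform([20, 10], [80, 100], size=(60, 2))   # filling ratio %, heat input W
y = 2.0 + 30.0 / X[:, 1] - 0.01 * X[:, 0] + rng.normal(0, 0.05, 60)

query = np.array([[50.0, 40.0]])
print(grnn_predict(X, y, query, spread=4.8))     # spread constant from the abstract
```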

742 Modeling the Influence of Socioeconomic and Land-Use Factors on Mode Choice: A Comparison of Riyadh, Saudi Arabia, and Melbourne, Australia

Authors: M. Alqhatani, S. Bajwa, S. Setunge

Abstract:

Metropolitan areas have suffered from traffic problems, which have steadily increased in many monocentric cities. Urban expansion, population growth, and road network development have resulted in a structural shift toward urban sprawl, increasing commuters' dependence on private modes of transport. This paper aims to model the influence of socioeconomic and land-use factors on mode choice using multinomial and nested logit models. Land-use patterns, such as residential, commercial, retail, educational and employment-related uses, affect the choice of mode and destination in the short and medium term. Socioeconomic factors, such as age, gender, income, household size, and house type, also affect choice, while residential location is affected in the long term. Riyadh in Saudi Arabia and Melbourne in Australia were chosen as case studies. Riyadh is a car-dependent city with limited public transport, whereas Melbourne has good public transport but increasing car dependence. Aggregate-level land-use data and disaggregate-level individual, household, and journey-to-work data are used to determine the effects of land use and socioeconomic factors on mode choice. The model results indicate that urban sprawl is the main factor affecting mode choice, together with income and house type.

Keywords: Socioeconomic, land use, mode choice, multinomial logit and nested logit.

741 Multiple Regression based Graphical Modeling for Images

Authors: Pavan S., Sridhar G., Sridhar V.

Abstract:

Super resolution is one of the commonly encountered inference problems in computer vision. In the case of images, this problem is generally addressed using a graphical model framework wherein each node represents a portion of the image and the edges between the nodes represent the statistical dependencies. However, the large dimensionality of images, along with the large number of possible states for a node, makes the inference problem computationally intractable. In this paper, we propose a representation wherein each node can be represented as a combination of multiple regression functions. The proposed approach achieves a tradeoff between computational complexity and inference accuracy by varying the number of regression functions for a node.

Keywords: Belief propagation, Graphical model, Regression, Super resolution.

740 Empirical Statistical Modeling of Rainfall Prediction over Myanmar

Authors: Wint Thida Zaw, Thinn Thu Naing

Abstract:

One of the essential sectors of Myanmar's economy is agriculture, which is sensitive to climate variation. The most important climatic element impacting the agriculture sector is rainfall, so rainfall prediction is an important issue in an agricultural country. Multivariable polynomial regression (MPR) provides an effective way to describe complex nonlinear input-output relationships so that an outcome variable can be predicted from one or more other variables. In this paper, the modeling of monthly rainfall prediction over Myanmar is described in detail by applying the polynomial regression equation. The proposed model's results are compared to the results produced by a multiple linear regression (MLR) model. Experiments indicate that the prediction model based on MPR has higher accuracy than MLR.

Keywords: Polynomial Regression, Rainfall Forecasting, Statistical forecasting.
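
The MPR-versus-MLR comparison can be sketched with scikit-learn as follows; the two predictors and the rainfall response are invented, and the degree-2 polynomial expansion stands in for the paper's multivariable polynomial model.

```python
# Multiple linear regression vs. polynomial regression on the same data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(12)
X = rng.uniform(0, 1, size=(200, 2))             # e.g. humidity, temperature indices
rain = 50 * X[:, 0] * X[:, 1] + 20 * X[:, 0] ** 2 + rng.normal(0, 1, 200)

mlr = LinearRegression().fit(X, rain)
poly = PolynomialFeatures(degree=2)
mpr = LinearRegression().fit(poly.fit_transform(X), rain)

for name, pred in [("MLR", mlr.predict(X)), ("MPR", mpr.predict(poly.transform(X)))]:
    print(name, mean_squared_error(rain, pred) ** 0.5)   # MPR captures the nonlinearity
```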

739 Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

Authors: Elena Ezhova, Vadim Mottl, Olga Krasotkina

Abstract:

The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by the factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.

Keywords: Time-varying regression, time-volatility of regression coefficients, Akaike Information Criterion (AIC), Kullback information maximization principle.

738 Bioprocess Optimization Based On Relevance Vector Regression Models and Evolutionary Programming Technique

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte

Abstract:

This paper proposes a bioprocess optimization procedure based on Relevance Vector Regression models and an evolutionary programming technique. The Relevance Vector Regression scheme allows developing a compact and stable data-based process model, avoiding time-consuming modeling expenses. The model building and process optimization procedure can be carried out in a semi-automated way and repeated after every new cultivation run. The proposed technique was tested on a simulated mammalian cell cultivation process. The obtained results are promising and could be attractive for the optimization of industrial bioprocesses.

Keywords: Bioprocess optimization, Evolutionary programming, Relevance Vector Regression.

737 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm

Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian

Abstract:

The present paper addresses research in the area of regression testing with emphasis on automated tools as well as the prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization; a case study is then discussed to illustrate this methodology. An industrial case study is also described in the paper, where the number of test cases is so large that they have to be grouped into Test Suites. In such situations, a genetic algorithm proposed by us can be used to reconfigure these Test Suites in each cycle of regression testing. The comparison is made between a proprietary tool and an open source tool using the above-mentioned metrics. Our approach is clarified through several tables.

Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, selenium tool.
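
The APFD metric listed in the keywords can be computed directly from a fault-detection matrix; the test cases and faults below are illustrative.

```python
# APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is the
# position of the first test case that reveals fault i.
def apfd(order, detects):
    """order: test cases in executed order; detects[t] = set of faults test t finds."""
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    tf = {f: next(i + 1 for i, t in enumerate(order) if f in detects[t]) for f in faults}
    return 1 - sum(tf.values()) / (n * m) + 1 / (2 * n)

detects = {"t1": {1}, "t2": {2, 3}, "t3": set(), "t4": {4}}
print(apfd(["t1", "t2", "t3", "t4"], detects))   # original ordering: 0.5625
print(apfd(["t2", "t4", "t1", "t3"], detects))   # prioritized ordering scores higher
```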

736 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets

Authors: O. Poleshchuk, E. Komarov

Abstract:

This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for the type-1 fuzzy sets whose membership functions are the lower and upper membership functions of an interval type-2 fuzzy set. These aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets for the developed regression models are considered as piecewise linear functions.

Keywords: Interval type-2 fuzzy sets, fuzzy regression, weighted interval.

735 Logistic Model Tree and Expectation-Maximization for Pollen Recognition and Grouping

Authors: Endrick Barnacin, Jean-Luc Henry, Jack Molinié, Jimmy Nagau, Hélène Delatte, Gérard Lebreton

Abstract:

Palynology is a field of interest for many disciplines. It has multiple applications such as chronological dating, climatology, allergy treatment, and even honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. So, the automation of this task is a necessity. Pollen slide analysis is mainly a visual process, as it is carried out with the naked eye. That is the reason why a primary method to automate palynology is the use of digital image processing. This method presents the lowest cost and has relatively good accuracy in pollen retrieval. In this work, we propose a system combining recognition and grouping of pollen. It consists of using a Logistic Model Tree to classify pollen already known by the proposed system while detecting any unknown species. Then, the unknown pollen species are divided into groups using a cluster-based approach. Good success rates for the recognition of known species have been achieved, and the automated clustering appears to be a promising approach.

Keywords: Pollen recognition, logistic model tree, expectation-maximization, local binary pattern.
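
The grouping stage can be sketched with a Gaussian mixture fitted by expectation-maximization; the two-dimensional feature vectors below are invented stand-ins for real descriptors such as local binary patterns.

```python
# EM-based grouping: a Gaussian mixture clusters unknown pollen features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(13)
unknown_pollen = np.vstack([rng.normal(0, 0.4, (30, 2)),     # species A features
                            rng.normal(3, 0.4, (30, 2))])    # species B features

gmm = GaussianMixture(n_components=2, random_state=0).fit(unknown_pollen)
labels = gmm.predict(unknown_pollen)             # proposed groups of unknown species
print(np.bincount(labels))                       # roughly 30 / 30
```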

734 Predicting Bridge Pier Scour Depth with SVM

Authors: Arun Goel

Abstract:

Prediction of the maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around a bridge pier in dimensional and non-dimensional form by using linear regression, simple regression and SVM (polynomial and RBF kernel) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM-based modeling can be employed as an alternative to linear regression, simple regression and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study, on the basis of the non-dimensional form of bridge pier scour, indicate an improvement in the performance of the SVM models compared with the dimensional form.

Keywords: Modeling, pier scour, regression, prediction, SVM (Poly & Rbf kernels).
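
A sketch of the SVM regression comparison named above, fitting polynomial- and RBF-kernel SVR to an invented non-dimensional scour relation:

```python
# Polynomial- vs. RBF-kernel support vector regression on invented scour data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(14)
froude = rng.uniform(0.1, 0.8, 80)                       # non-dimensional input
scour = 2.0 * froude ** 0.65 + rng.normal(0, 0.03, 80)   # relative scour depth

X = froude.reshape(-1, 1)
for kernel in ("poly", "rbf"):
    model = SVR(kernel=kernel, C=10.0).fit(X, scour)
    rmse = np.sqrt(np.mean((model.predict(X) - scour) ** 2))
    print(kernel, round(rmse, 4))
```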

733 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as Loess is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.

Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.
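
The proposed backward elimination can be sketched as an AIC-driven loop; for brevity the candidate models here are linear OLS fits rather than the additive or nonparametric estimators used in the paper.

```python
# Backward elimination by AIC: drop the variable whose removal lowers AIC
# the most; stop when no removal helps. Only x0 and x2 matter by construction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(15)
n = 150
X = rng.normal(size=(n, 5))
y = 2 * X[:, 0] + np.sin(2 * X[:, 2]) + rng.normal(0, 0.5, n)

active = list(range(5))
aic = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
while len(active) > 1:
    trials = {j: sm.OLS(y, sm.add_constant(X[:, [k for k in active if k != j]]))
                 .fit().aic for j in active}
    j_best = min(trials, key=trials.get)
    if trials[j_best] >= aic:                    # no removal improves AIC
        break
    aic, active = trials[j_best], [k for k in active if k != j_best]
print(active)                                    # ideally [0, 2]
```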
