Search results for: linear regression estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7422

7212 A Mathematical Model of Power System State Estimation for Power Flow Solution

Authors: F. Benhamida, A. Graa, L. Benameur, I. Ziane

Abstract:

The state estimation of the electrical power system operating state is very important for the supervision task. With the nonlinearity of the AC power flow model, the state estimation problem (SEP) is a nonlinear mathematical problem with many local optima. This paper treats the mathematical model for the SEP and the monitoring of large-scale nonlinear systems, with an application to the electrical power system: the modelling, the analysis and the state estimation synthesis needed to supervise the power system behavior. In fact, it is very difficult, if not impossible (for reasons of accessibility, technique and/or cost), to measure the very large number of state variables in a large-sized system. It is thus important to develop software sensors able to produce a reliable estimate of the variables necessary for diagnosis and also for control.
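
The abstract does not spell out the estimator, but the SEP is commonly posed as a weighted least-squares fit of a nonlinear measurement model h(x) to redundant measurements. The sketch below is a minimal illustration under that assumption, using a hypothetical two-state toy measurement function in place of the AC power flow equations:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy nonlinear measurement model h(x): a stand-in for the AC power flow
# equations. x = [voltage magnitude, voltage angle] at one bus (hypothetical).
def h(x):
    v, theta = x
    return np.array([
        v * np.cos(theta),   # "active power"-style measurement
        v * np.sin(theta),   # "reactive power"-style measurement
        v,                   # direct voltage magnitude measurement
    ])

# Simulated measurements z = h(x_true) + noise, with per-sensor accuracies.
x_true = np.array([1.02, 0.12])
sigma = np.array([0.01, 0.01, 0.005])
rng = np.random.default_rng(0)
z = h(x_true) + rng.normal(0.0, sigma)

# Weighted least-squares state estimation: minimise sum ((z - h(x)) / sigma)^2.
def residuals(x):
    return (z - h(x)) / sigma

x0 = np.array([1.0, 0.0])           # flat start
sol = least_squares(residuals, x0)  # Gauss-Newton / trust-region iterations
print("estimated state:", sol.x)
print("true state:     ", x_true)
```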

Keywords: power system, state estimation, robustness, observability

Procedia PDF Downloads 516
7211 Estimation of the Acute Toxicity of Halogenated Phenols Using Quantum Chemistry Descriptors

Authors: Khadidja Bellifa, Sidi Mohamed Mekelleche

Abstract:

Phenols, and especially halogenated phenols, represent a substantial part of the chemicals produced worldwide and are known aquatic pollutants. Quantitative structure–toxicity relationship (QSTR) models are useful for understanding how chemical structure relates to the toxicity of chemicals. In the present study, the acute toxicities of 45 halogenated phenols to Tetrahymena pyriformis are estimated using low-cost semi-empirical quantum chemistry methods. QSTR models were established using the multiple linear regression technique, and the predictive ability of the models was evaluated by internal cross-validation, Y-randomization and external validation. Their structural chemical domain has been defined by the leverage approach. The results show that the best model is obtained with the AM1 method (R² = 0.91, R²CV = 0.90, SD = 0.20 for the training set and R² = 0.96, SD = 0.11 for the test set). Moreover, all of Tropsha's criteria for a predictive QSTR model are satisfied.
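
As a concrete illustration of the validation workflow described above (multiple linear regression, internal leave-one-out cross-validation, Y-randomization and an external test set), here is a minimal sketch; the descriptor values and coefficients are placeholders, not the authors' AM1 descriptors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Hypothetical descriptor matrix, e.g. hydrophobicity (logP) and an
# electrophilicity index; the response mimics a pTox = -log(IGC50) value.
X = rng.normal(size=(45, 2))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.2, 45)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print("R2 (training):     ", r2_score(y_tr, model.predict(X_tr)))
print("R2 (external test):", r2_score(y_te, model.predict(X_te)))

# Internal leave-one-out cross-validation (R2_cv).
y_cv = cross_val_predict(LinearRegression(), X_tr, y_tr, cv=LeaveOneOut())
print("R2_cv (LOO):", r2_score(y_tr, y_cv))

# Y-randomization: refitting on shuffled responses should give poor R2,
# otherwise the original correlation may be due to chance.
r2_rand = []
for _ in range(10):
    y_perm = rng.permutation(y_tr)
    fit = LinearRegression().fit(X_tr, y_perm)
    r2_rand.append(r2_score(y_perm, fit.predict(X_tr)))
print("mean R2 after Y-randomization:", np.mean(r2_rand))
```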

Keywords: halogenated phenols, toxicity mechanism, hydrophobicity, electrophilicity index, quantitative structure-toxicity relationships

Procedia PDF Downloads 294
7210 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
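
A minimal sketch of the two ingredients named above, the Fenton-Wilkinson log-normal approximation of a sum of leg times and a German-tank-style estimate of the team count, is shown below; the leg-time parameters and counts are hypothetical, not the Jukola 2019 fit:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical log-normal parameters for the leg times (minutes) of the first
# three relay legs; in the paper these would be fitted to training data.
mu = np.array([4.30, 4.45, 4.40])      # log-scale means
sig = np.array([0.15, 0.20, 0.18])     # log-scale standard deviations

# Fenton-Wilkinson: approximate the sum of independent log-normals by a single
# log-normal whose first two moments match the moments of the sum.
mean_sum = np.sum(np.exp(mu + sig**2 / 2))
var_sum = np.sum((np.exp(sig**2) - 1) * np.exp(2 * mu + sig**2))
sig_fw2 = np.log(1 + var_sum / mean_sum**2)
mu_fw = np.log(mean_sum) - sig_fw2 / 2

# German-tank-style estimate of the total number of teams from the largest
# observed team number m among k sampled teams: N_hat = m * (1 + 1/k) - 1.
m_max, k = 1590, 80
n_teams = m_max * (1 + 1 / k) - 1

# Sigmoidal place regression: predicted place at changeover time t (minutes)
# is the estimated team count times the log-normal CDF of the summed legs.
def predicted_place(t):
    return n_teams * norm.cdf((np.log(t) - mu_fw) / np.sqrt(sig_fw2))

print(predicted_place(np.array([200.0, 260.0, 320.0])))
```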

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 121
7209 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the commonly used method to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity-enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, the effect of HLPs needs to be reduced by using a generalized M (GM) estimator. The existing GM6 estimator is based on the Minimum Volume Ellipsoid (MVE), which tends to swamp some low leverage points. Hence an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient method compared to the existing methods considered.
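
To make the GM idea concrete, here is a rough sketch of a generalized-M-type fit that downweights high-leverage points and then applies Huber reweighting to residuals. It is not the GM6 or MGM estimator; for simplicity it uses the MCD covariance estimator (sklearn's MinCovDet) in place of the MVE, and all data are simulated:

```python
import numpy as np
from sklearn.covariance import MinCovDet
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Collinear design plus a few high leverage collinearity-enhancing points.
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)        # near collinearity
X = np.column_stack([x1, x2])
X[:3] += np.array([8.0, 10.0])                  # HLCEO-style outliers in X
y = 2 * x1 + 3 * x2 + rng.normal(scale=0.5, size=n)

# Step 1: robust distances in X-space (MCD used here in place of the MVE).
rd2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)
cut = chi2.ppf(0.975, df=X.shape[1])
lev_w = np.minimum(1.0, cut / rd2)              # downweight high leverage points

# Step 2: iteratively reweighted least squares with Huber weights on residuals.
Xd = np.column_stack([np.ones(n), X])
w = lev_w.copy()
beta = np.zeros(Xd.shape[1])
for _ in range(50):
    W = np.diag(w)
    beta_new = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    r = y - Xd @ beta_new
    s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust residual scale
    huber = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))
    w = lev_w * huber
    if np.allclose(beta, beta_new, atol=1e-8):
        break
    beta = beta_new
print("GM-type estimate (intercept, b1, b2):", beta)
```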

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 259
7208 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers

Authors: Oluwatosin M. A. Jesuyon

Abstract:

In order to propose a simpler tool that eliminates the age-long problems associated with the traditional index method for the selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. One hundred day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep-litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in South-west tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded from 20 randomly selected birds within each strain, at a fixed time on the first day of each week, using a 5-kg capacity Camry scale. These records were analyzed and compared in a completely randomized design (CRD) using SPSS analytical software, with the means procedure and Factor Scores (FS) in a stepwise Multiple Linear Regression (MLR) procedure for the initial live weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed the Body-length and Thigh-meatiness factors for AA, and the Breast-size and Height factors for AN. These derived orthogonal factors assisted in deducing and comparing the trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformational traits for the two strains, thus indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. The final Bartlett factor regression equations for the prediction of body weight were highly significant with P < 0.0001, R² of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak respectively. These FSR equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and to evaluate consumer preference traits in broilers.
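
The computation behind such an equation, extracting factors from the zoometric measurements, forming Bartlett (weighted least-squares) factor scores, and regressing live weight on those scores, can be sketched as below; the data are simulated stand-ins, not the AA/AN records:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Hypothetical zoometric data: 20 birds x 7 body measurements (cm); live
# weight (g) is the response to be predicted from the factor scores.
n, p = 20, 7
latent = rng.normal(size=(n, 2))                       # e.g. "body length", "meatiness"
load_true = rng.normal(size=(2, p))
Xm = latent @ load_true + rng.normal(scale=0.3, size=(n, p))
weight = 900 + 150 * latent[:, 0] + 80 * latent[:, 1] + rng.normal(scale=20, size=n)

# Fit a two-factor model and compute Bartlett (weighted least squares) scores:
# F = (L' Psi^-1 L)^-1 L' Psi^-1 (x - mean), with L the loadings and Psi the
# diagonal matrix of unique (noise) variances.
fa = FactorAnalysis(n_components=2, random_state=0).fit(Xm)
L = fa.components_.T                                   # p x k loadings
psi_inv = 1.0 / fa.noise_variance_                     # diagonal of Psi^-1
A = L.T * psi_inv                                      # k x p  = L' Psi^-1
scores = np.linalg.solve(A @ L, A @ (Xm - fa.mean_).T).T   # n x k Bartlett scores

# Regress live body weight on the Bartlett factor scores.
reg = LinearRegression().fit(scores, weight)
print("R2 of factor-score regression:", reg.score(scores, weight))
```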

Keywords: alternative selection tool, Bartlett factor regression model, consumer preference trait, linear and body measurements, live body weight

Procedia PDF Downloads 200
7207 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health

Authors: Irfan Ahmad Afip

Abstract:

The main research objective is to accurately identify the possible severity of a leptospirosis outbreak in a certain area based on the input features of a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by these features, such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is governed by these features, then the epidemic severity for an area will differ depending on its environmental setting, because the features will influence the possibility and severity of an outbreak. Specifically, the research objectives are three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their likely relevance to determining the outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven to have more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between the social demographic and hydrometeorological features and the Leptospira bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can be beneficial for the health department or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.

Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression

Procedia PDF Downloads 112
7206 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies are proposed.

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 547
7205 Poverty Dynamics in Thailand: Evidence from Household Panel Data

Authors: Nattabhorn Leamcharaskul

Abstract:

This study aims to examine the determining factors of the dynamics of poverty in Thailand by using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized based on their experiences into 5 groups, namely chronically poor, falling into poverty, re-entering poverty, exiting from poverty and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. It is found that remittances have a positive impact on a household's economic status, in that they are likely to lower the probability of falling into poverty or being trapped in poverty, while they tend to increase the probability of exiting from poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household, but it also significantly increases the household income per capita of the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and thus consequently increase the chance for households to escape from poverty.
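
Of the four techniques listed, the multinomial logit is the most direct way to model which of the five poverty-dynamics categories a household falls into. Below is a minimal sketch with simulated household covariates; the variable names, category coding and data-generating process are hypothetical, not the Thai panel:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical household extract: the response is the poverty-dynamics
# category (0 = never poor, 1 = exiting poverty, 2 = falling into poverty,
# 3 = re-entering poverty, 4 = chronically poor).
n = 500
df = pd.DataFrame({
    "hh_size": rng.integers(1, 9, n),
    "educ_years": rng.integers(0, 17, n),
    "remittance": rng.integers(0, 2, n),
    "shock": rng.integers(0, 2, n),
})
lin = 0.3 * df.hh_size - 0.2 * df.educ_years - 0.8 * df.remittance + 0.9 * df.shock
probs = np.exp(np.outer(lin, [0.0, 0.3, 0.6, 0.9, 1.2]))
probs /= probs.sum(axis=1, keepdims=True)
df["category"] = [rng.choice(5, p=p) for p in probs]

# Multinomial logit of the poverty-dynamics category on household covariates.
X = sm.add_constant(df[["hh_size", "educ_years", "remittance", "shock"]])
mnl = sm.MNLogit(df["category"], X).fit(disp=False)
print(mnl.summary())
```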

Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer

Procedia PDF Downloads 110
7204 Hybrid Subspace Approach for Time Delay Estimation in MIMO Systems

Authors: Mojtaba Saeedinezhad, Sarah Yousefi

Abstract:

In this paper, we present a hybrid subspace approach for Time Delay Estimation (TDE) in multivariable systems. While several methods have been proposed for time delay estimation in SISO systems, delay estimation in MIMO systems has always been a big challenge. In these systems, the existing TDE methods have significant limitations because most procedures are based only on system response estimation or correlation analysis. We introduce a new hybrid method for TDE in MIMO systems based on subspace identification and the explicit output error method, and compare its performance with previously introduced procedures in the presence of different noise levels and in a statistical manner. Then the best method is selected with a multi-objective decision-making technique. It is shown that the performance of the new approach is much better than that of the existing methods, even in low signal-to-noise conditions.

Keywords: system identification, time delay estimation, ARX, OE, merit ratio, multi variable decision making

Procedia PDF Downloads 344
7203 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the precise distribution model for the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, the Chi-square statistic, the root mean square error (RMSE), AIC and BIC, were combined to determine the best-fitted wind speed distribution. The suggested method considers all criteria collectively. This method was used to fit 14 distribution models statistically to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research presents three estimation methods that can be used to fit the different distributions used to estimate the wind energy. In the suggested MLM, MOM, and MLE, the third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimate of wind energy. In order to assess the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds measured in different time periods. The results obtained show that MOM provides a more precise estimation than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
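
As a small illustration of the comparison between MLE and a moment-based fit, the sketch below fits a Weibull distribution (one of the usual candidate wind speed models) to simulated data by both methods and evaluates the Kolmogorov-Smirnov fit and the third-moment term E[v³] that enters the wind power density; the data are synthetic, not the Pakistani site measurements:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq
from scipy.special import gamma

rng = np.random.default_rng(5)
wind = stats.weibull_min.rvs(2.0, scale=6.0, size=1000, random_state=rng)  # m/s

# Maximum likelihood fit of a Weibull distribution (location fixed at zero).
k_mle, _, c_mle = stats.weibull_min.fit(wind, floc=0)

# Method of moments: solve for the shape k from the coefficient of variation,
# CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1, then back out the scale.
cv2 = wind.var() / wind.mean() ** 2
k_mom = brentq(lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2, 0.2, 20)
c_mom = wind.mean() / gamma(1 + 1 / k_mom)

# Goodness of fit (Kolmogorov-Smirnov) and the third moment E[v^3], which
# drives the mean wind power density of the fitted distribution.
for name, (k, c) in {"MLE": (k_mle, c_mle), "MOM": (k_mom, c_mom)}.items():
    ks = stats.kstest(wind, "weibull_min", args=(k, 0, c))
    e_v3 = c ** 3 * gamma(1 + 3 / k)
    print(f"{name}: k={k:.3f} c={c:.3f} KS p-value={ks.pvalue:.3f} E[v^3]={e_v3:.1f}")
```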

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 82
7202 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used for modeling data on the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable in cases where the independent variables consist of numerical as well as nominal values while the resultant variable is binary (dichotomous). Arguments and findings of many researchers have focused on the overview of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data from an SPSS exercise data set on breast cancer with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (e.g., on lymph node status), and SPSS software was used to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that they are both applicable to the prediction of the outcome of a categorical variable, i.e. a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other studies since they are suitable methods for analyzing data, but the Cox regression model is the more recommended.
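
The contrast can be made concrete in a few lines: a Cox model fitted to (time, event) data reports hazard ratios, while a logistic model fitted to the event indicator alone reports odds ratios. The sketch below uses simulated breast-cancer-style data (the lymph node counts and follow-up times are hypothetical) and assumes the lifelines and statsmodels packages:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)

# Hypothetical data: follow-up time (months), death indicator, and a risk
# factor (number of positive lymph nodes), with administrative censoring.
n = 1121
nodes = rng.poisson(3, n)
time = rng.exponential(60 / (1 + 0.15 * nodes))     # shorter survival with more nodes
event = (time < 120).astype(int)                    # deaths observed within follow-up
time = np.minimum(time, 120)
df = pd.DataFrame({"time": time, "event": event, "nodes": nodes})

# Cox proportional hazards model: uses the follow-up time, reports hazard ratios.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_))                          # hazard ratio for 'nodes'

# Logistic regression: ignores the follow-up time, models the binary outcome
# directly and reports odds ratios.
logit = sm.Logit(df["event"], sm.add_constant(df[["nodes"]])).fit(disp=False)
print(np.exp(logit.params))                         # odds ratio for 'nodes'
```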

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 447
7201 Tenants Use Less Input on Rented Plots: Evidence from Northern Ethiopia

Authors: Desta Brhanu Gebrehiwot

Abstract:

The study aims to investigate the impact of land tenure arrangements on fertilizer use per hectare in Northern Ethiopia. Household- and plot-level data are used for the analysis. Land tenure contracts such as sharecropping and fixed-rent arrangements are subject to endogeneity: different unobservable characteristics may affect renting-out decisions. Thus, the appropriate method of analysis was the instrumental variable estimation technique. Therefore, the family of instrumental variable estimation methods, namely two-stage least squares (2SLS), the generalized method of moments (GMM), limited information maximum likelihood (LIML), and instrumental variable Tobit (IV-Tobit), was used. In addition, a two-step method is applied to handle the binary endogenous variable: in the first step, a probit model includes the instruments, and in the second step, maximum likelihood estimation is used (the “etregress” command in Stata 14). Fertilizer use per hectare was lower on sharecropped and fixed-rented plots relative to owner-operated plots. The result supports the Marshallian inefficiency principle in sharecropping. The difference in fertilizer use per hectare could be explained by a lack of incentivized, detailed contract forms, such as giving a larger proportion of the output to the tenant under sharecropping contracts, which would motivate the use of more fertilizer on rented plots to maximize production, because most sharecropping arrangements share output equally between tenants and landlords.
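
The two-step logic described above (a first-stage probit for the binary tenure contract, then an outcome equation that corrects for its endogeneity) can be sketched as a simple control-function estimator. This is a simplified stand-in for Stata's etregress, which fits the model by full maximum likelihood, and all variables below are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)

# Hypothetical plot-level data: fertilizer use (kg/ha), a binary tenure
# indicator (1 = rented out under sharecropping/fixed rent) and an instrument
# assumed to shift the renting decision but not fertilizer use directly.
n = 800
z = rng.normal(size=n)                     # instrument
ability = rng.normal(size=n)               # unobserved, drives both outcomes
rented = (0.8 * z + 0.5 * ability + rng.normal(size=n) > 0).astype(int)
fert = 120 - 25 * rented + 10 * ability + rng.normal(scale=8, size=n)
df = pd.DataFrame({"fert": fert, "rented": rented, "z": z})

# Step 1: probit of the binary tenure contract on the instrument.
Xz = sm.add_constant(df[["z"]])
probit = sm.Probit(df["rented"], Xz).fit(disp=False)
xb = np.asarray(Xz) @ probit.params        # linear index

# Control-function (inverse Mills ratio) term for the endogenous dummy.
mills = np.where(df["rented"] == 1,
                 norm.pdf(xb) / norm.cdf(xb),
                 -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Step 2: OLS of fertilizer use on tenure plus the control-function term.
X2 = sm.add_constant(pd.DataFrame({"rented": df["rented"], "mills": mills}))
ols = sm.OLS(df["fert"], X2).fit()
print(ols.params["rented"])                # estimated effect of renting out
```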

Keywords: tenure-contracts, endogeneity, plot-level data, Ethiopia, fertilizer

Procedia PDF Downloads 83
7200 How Do Crises Affect Economic Policy?

Authors: Eva Kotlánová

Abstract:

After the recession that began in 2007 in the United States and subsequently spilled over into Europe, a recovery of economic growth could be expected. According to the latest estimates of the economic progress of European countries, this recovery is not strong enough. Among other factors, it will depend on economic policy where, and in which way, the economic indicators will proceed. Economic theories postulate that economic agents prefer a stable, continuous economic policy without repeated and strong fluctuations; such a policy is perceived as supporting economic growth. Especially in crisis periods, when the government must cope with the consequences of recession, economic policy becomes unpredictable for many agents and economic policy uncertainty grows, which has a negative influence on economic growth. The aim of this paper is to use panel regression to prove or disprove this hypothesis on the example of the five largest European economies in the period 2008–2012.

Keywords: economic crises in Europe, economic policy, uncertainty, panel analysis regression

Procedia PDF Downloads 381
7199 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of the block size and testing order for the efficiency and precision of preference estimation for dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed testing-order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, and for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.
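
For readers working in Python rather than R, the Gaussian mixed model described above (random tester effect, fixed sample and testing-order effects) has a direct analogue in statsmodels' MixedLM; the sketch below uses simulated hedonic scores rather than the actual dulce de leche data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Hypothetical sensory data: 112 testers score 16 dulce de leche samples on a
# nine-point hedonic scale; 'order' is the position of the sample in the session.
n_testers, n_samples = 112, 16
sample_eff = rng.normal(scale=0.5, size=n_samples)
rows = []
for t in range(n_testers):
    tester_eff = rng.normal(scale=0.6)            # random tester effect
    order = rng.permutation(n_samples) + 1
    for s in range(n_samples):
        score = 6 + sample_eff[s] - 0.05 * order[s] + tester_eff + rng.normal(scale=0.8)
        rows.append({"tester": t, "sample": f"S{s:02d}",
                     "order": order[s], "score": float(np.clip(round(score), 1, 9))})
df = pd.DataFrame(rows)

# Gaussian linear mixed model (analogous to lme4::lmer in R): fixed sample and
# testing-order effects, random intercept for each tester.
mm = smf.mixedlm("score ~ C(sample) + order", df, groups=df["tester"]).fit()
print(mm.summary())
```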

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 319
7198 Age Estimation Using Destructive and Non-Destructive Dental Methods on an Archeological Human Sample from the Poor Clare Nunnery in Brussels, Belgium

Authors: Pilar Cornejo Ulloa, Guy Willems, Steffen Fieuws, Kim Quintelier, Wim Van Neer, Patrick Thevissen

Abstract:

Dental age estimation can be performed both in living and deceased individuals. In anthropology, few studies have tested the reliability of dental age estimation methods complementary to the usually applied osteological methods. Objectives: In this study, destructive and non-destructive dental age estimation methods were applied on an archeological sample in order to compare them with the previously obtained anthropological age estimates. Materials and Methods: One hundred and thirty-four teeth from 24 individuals were analyzed using Kvaal, Kvaal and Solheim, Bang and Ramm, Lamendin, Gustafson, Maples, Dalitz and Johanson’s methods. Results: A high variability and wider age ranges than the ones previously obtained by the anthropologist could be observed. Destructive methods had a slightly higher agreement than the non-destructive. Discussion: Due to the heterogeneity of the sample and the lack of the real age at death, the obtained results were not representative, and it was not possible to suggest one dental age estimation method over another.

Keywords: archeology, dental age estimation, forensic anthropology, forensic dentistry

Procedia PDF Downloads 355
7197 Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling

Authors: M. Khalid, G. N. Singh

Abstract:

In this paper, we consider the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response. Some modified exponential-type estimators are proposed and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to random non-response. The performances of the proposed estimators are compared with a linear combination of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. Results are demonstrated through empirical studies which show the effectiveness of the proposed estimators. Suitable recommendations have been made to survey practitioners.

Keywords: modified exponential estimator, successive sampling, random non-response, auxiliary variable, bias, mean square error

Procedia PDF Downloads 346
7196 State Estimation Method Based on Unscented Kalman Filter for Vehicle Nonlinear Dynamics

Authors: Wataru Nakamura, Tomoaki Hashimoto, Liang-Kuang Chen

Abstract:

This paper provides a state estimation method for automatic control systems of nonlinear vehicle dynamics. A nonlinear tire model is employed to represent the realistic behavior of a vehicle. In general, not all the state variables of control systems are precisely known, because those variables are observed through output sensors and only a limited subset of them may be measurable. Hence, automatic control systems must incorporate some type of state estimation, and a state estimation method is needed for nonlinear vehicle dynamics with restricted measurable state variables. For this purpose, the unscented Kalman filter is applied in this study for estimating the state variables of nonlinear vehicle dynamics. The objective of this paper is to propose a state estimation method using the unscented Kalman filter for nonlinear vehicle dynamics. The effectiveness of the proposed method is verified by numerical simulations.
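
A minimal unscented Kalman filter loop of the kind described, predicting with a nonlinear process model and updating with a partial measurement, is sketched below using the filterpy package; the two-state "vehicle-like" model is a hypothetical stand-in for the paper's nonlinear tire model:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.01

# Toy nonlinear vehicle-like model (stand-in for the nonlinear tire dynamics):
# state x = [sideslip angle, yaw rate]; only the yaw rate is measured.
def fx(x, dt):
    beta, r = x
    beta_dot = -2.0 * np.tanh(beta) - 0.8 * r     # nonlinear lateral dynamics
    r_dot = 1.5 * np.tanh(beta) - 0.5 * r
    return np.array([beta + beta_dot * dt, r + r_dot * dt])

def hx(x):
    return np.array([x[1]])                        # yaw-rate sensor only

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.05, 0.0])
ukf.P *= 0.1
ukf.Q = np.eye(2) * 1e-5
ukf.R = np.array([[1e-3]])

# Simulate the true trajectory and filter the noisy yaw-rate measurements.
rng = np.random.default_rng(9)
x_true = np.array([0.08, 0.0])
for _ in range(500):
    x_true = fx(x_true, dt)
    z = hx(x_true) + rng.normal(0, 0.03, 1)
    ukf.predict()
    ukf.update(z)
print("true state:", x_true, "UKF estimate:", ukf.x)
```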

Keywords: state estimation, control systems, observer systems, nonlinear systems

Procedia PDF Downloads 130
7195 Stock Market Prediction by Regression Model with Social Moods

Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome

Abstract:

This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
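
The key ingredient named in the abstract is a regression with autocorrelated errors; statsmodels' GLSAR estimates such a model by iterating between OLS and an AR fit of the residuals. The sketch below uses a simulated mood series and AR(1) errors; the data and coefficients are hypothetical, not the Twitter/DJIA series:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Hypothetical daily series: a "social mood" score and a DJIA-like return,
# linked by a regression whose errors follow an AR(1) process.
n = 250
mood = rng.normal(size=n)
err = np.zeros(n)
for t in range(1, n):
    err[t] = 0.6 * err[t - 1] + rng.normal(scale=0.4)
djia_ret = 0.05 + 0.3 * mood + err

# Regression with autocorrelated (AR(1)) errors, estimated iteratively.
X = sm.add_constant(mood)
model = sm.GLSAR(djia_ret, X, rho=1)
res = model.iterative_fit(maxiter=10)
print("estimated AR coefficient rho:", model.rho)
print(res.params)
```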

Keywords: stock market prediction, social moods, regression model, DJIA

Procedia PDF Downloads 542
7194 Channel Estimation for LTE Downlink

Authors: Rashi Jain

Abstract:

LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms, such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS), can be employed for this purpose. This paper proposes a channel estimation algorithm based on the Kalman filter for the LTE downlink. Using the frequency-domain pilots, the initial channel response is obtained using the LS criterion. Then a Kalman filter is employed to track the channel variations in the time domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws a comparison between LS, MMSE, RLS and the Kalman filter for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run-time.
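
The pipeline described, an LS estimate at the pilots followed by Kalman tracking of the channel over time with thresholding of small estimates, can be sketched for a single pilot subcarrier as follows; the AR(1) fading model and all parameter values are illustrative assumptions, not the paper's system settings:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy setup: one pilot subcarrier tracked over OFDM symbols; the channel tap
# follows a first-order Gauss-Markov (AR(1)) model, a common fading model.
n_sym, a = 200, 0.98
h_true = np.zeros(n_sym, dtype=complex)
h_true[0] = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
for k in range(1, n_sym):
    w = (rng.normal() + 1j * rng.normal()) * np.sqrt((1 - a**2) / 2)
    h_true[k] = a * h_true[k - 1] + w

pilot = 1 + 0j                                   # known pilot symbol
noise = 0.1 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)
y = h_true * pilot + noise

# Initial least-squares estimate at the pilot, then a scalar Kalman filter to
# track the channel variation across symbols; small estimates are thresholded.
q, r = 1 - a**2, 0.1**2
h_hat, p = y[0] / pilot, 1.0                     # LS initialisation
est = np.zeros(n_sym, dtype=complex)
for k in range(n_sym):
    h_pred, p_pred = a * h_hat, a**2 * p + q     # predict
    g = p_pred / (p_pred + r)                    # Kalman gain (unit pilot power)
    h_hat = h_pred + g * (y[k] / pilot - h_pred) # update with the LS observation
    p = (1 - g) * p_pred
    est[k] = h_hat if abs(h_hat) > 0.05 else 0   # threshold processing
print("tracking MSE:", np.mean(np.abs(est - h_true) ** 2))
```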

Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold

Procedia PDF Downloads 350
7193 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system used to meet production target quantities and production deadlines is the major production management system; it controls the production order of each facility and the distribution of WIP (work in process). In the dispatching system, delivery time is a key factor determining when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors and to build a forecasting model for delivery time. Among the survival analysis techniques for selecting important explanatory variables, the Cox proportional hazards model is used. To build a prediction model, the Accelerated Failure Time (AFT) model was used. Performance comparisons were conducted with two other models: a technical statistics model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. In terms of the Mean Square Error (MSE) criterion, the AFT model decreased the error by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing, and it can contribute to improving the productivity and reliability of the production management system.
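
An AFT model of the kind described treats delivery time as the "survival" time and lets covariates accelerate or decelerate it; lots still in transit enter as right-censored observations. A minimal sketch using the lifelines package with simulated lot data (the covariates and values are hypothetical, not the fab's transfer history) could look like this:

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(12)

# Hypothetical lot-transfer data: delivery time of a lot to a facility, with
# covariates such as queue length and transfer distance; some lots are still
# in transit when the snapshot is taken (right-censored).
n = 1000
queue = rng.integers(0, 20, n)
dist = rng.uniform(10, 200, n)
t_true = rng.weibull(1.5, n) * (30 + 2 * queue + 0.1 * dist)
censor = rng.uniform(50, 400, n)
df = pd.DataFrame({
    "duration": np.minimum(t_true, censor),
    "delivered": (t_true <= censor).astype(int),
    "queue": queue,
    "dist": dist,
})

# Weibull accelerated failure time model: covariates scale the delivery time.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration", event_col="delivered")
print(aft.params_)
print(aft.predict_median(df.head()))   # predicted median delivery times
```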

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 536
7192 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression and use an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data in which the event time is known only to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by supporting the deployment of extra resources when available.
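
A stripped-down version of the E- and M-steps for interval-censored counts is sketched below: each event is spread over the hours of its censoring interval in proportion to the current intensity, and the intensity is then re-estimated. The smoothness penalties of the paper are omitted and all data are simulated, so this is only an illustration of the EM mechanics, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(13)
H, weeks = 168, 52                       # hours of the week, observation length

# Hypothetical true intensity: a daily cycle plus a weekend bump.
hours = np.arange(H)
lam_true = 0.2 + 0.15 * np.sin(2 * np.pi * (hours % 24) / 24) + 0.1 * (hours >= 120)

# Simulate interval-censored events: the true hour is hidden; we only observe
# an interval [a, b] of hours of the week known to contain the event.
events = []
for h in hours:
    for _ in range(rng.poisson(lam_true[h] * weeks)):
        a = max(0, h - rng.integers(0, 6))
        b = min(H - 1, h + rng.integers(0, 6))
        events.append((a, b))

# EM for the hour-of-week intensity (no smoothness penalty in this sketch).
lam = np.full(H, len(events) / (H * weeks))
for _ in range(200):
    expected = np.zeros(H)
    for a, b in events:                          # E-step: spread each event
        w = lam[a:b + 1]
        expected[a:b + 1] += w / w.sum()         # over its interval by lam
    lam_new = np.maximum(expected / weeks, 1e-12)  # M-step: Poisson MLE per hour
    if np.max(np.abs(lam_new - lam)) < 1e-6:
        lam = lam_new
        break
    lam = lam_new
print("peak estimated hour of week:", int(np.argmax(lam)))
```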

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 63
7191 Comparing Skill, Employment, and Productivity of Industrial City Case Study: Bekasi Industrial Area and Special Economic Zone Sei Mangkei

Authors: Auliya Adzillatin Uzhma, M. Adrian Rizky, Puri Diah Santyarini

Abstract:

The Bekasi Industrial Area in Kab. Bekasi and the SEZ (Special Economic Zone) Sei Mangkei in Kab. Simalungun are two areas that have the same main economic activity, namely manufacturing industry. Manufacturing industry in the Bekasi Industrial Area contributes more than 70% of Kab. Bekasi's GDP, while manufacturing industry in SEZ Sei Mangkei contributes less than 20% of Kab. Simalungun's GDP. The dependent variable in the research is labor productivity, while the independent variables are the amount of labor, the level of labor education, the length of work and salary. This research used the linear regression method to find a model representing the actual productivity conditions in the two industrial areas, with the level-of-education variable entered as dummy variables. The initial hypothesis (Ho) in this research is that labor productivity in the Bekasi Industrial Area will be higher than labor productivity in SEZ Sei Mangkei. The variables supporting the accepted hypothesis are more labor, higher education, longer work experience and higher salary in the Bekasi Industrial Area.

Keywords: labor, industrial city, linear regression, productivity

Procedia PDF Downloads 174
7190 Artificial Neural Network and Statistical Method

Authors: Tomas Berhanu Bekele

Abstract:

Traffic congestion is one of the main transportation problems in developed as well as developing countries. Traffic control systems are based on the idea of avoiding traffic instabilities and homogenizing traffic flow in such a way that the risk of accidents is minimized and traffic flow is maximized. Lately, Intelligent Transport Systems (ITS) have become an important area of research for solving such road traffic issues and making smart decisions. ITS links people, roads and vehicles together using communication technologies to increase safety and mobility. Moreover, accurate prediction of road traffic is important for managing traffic congestion. The aim of this study is to develop an ANN model for the prediction of traffic flow and to compare the ANN model with a linear regression model for traffic flow prediction. Data extraction was carried out at 15-minute intervals from the video player. Video of mixed traffic flow was recorded and vehicles were counted during office hours in order to determine the traffic volume. Vehicles were classified into six categories, namely car, motorcycle, minibus, mid-bus, bus and truck. The average time taken by each vehicle type to travel the trap length was measured from the time displayed on the video screen.
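
A minimal head-to-head of the two models on a lagged traffic-count series could look like the following; the counts are synthetic (a daily cycle plus noise), not the video-derived volumes, and the network size is an arbitrary choice:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(14)

# Hypothetical 15-minute traffic counts with a daily pattern plus noise.
t = np.arange(960)                                   # ten days of 15-min slots
flow = 300 + 200 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 30, t.size)

# Predict the next 15-minute count from the previous four counts.
lags = 4
X = np.column_stack([flow[i:-(lags - i)] for i in range(lags)])
y = flow[lags:]
split = 768                                          # first eight days for training
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("linear regression MAE:", mean_absolute_error(y_te, lin.predict(X_te)))
print("ANN MAE:              ", mean_absolute_error(y_te, ann.predict(X_te)))
```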

Keywords: intelligent transport system (ITS), traffic flow prediction, artificial neural network (ANN), linear regression

Procedia PDF Downloads 61
7189 Single Carrier Frequency Domain Equalization Design to Cope with Narrow Band Jammer

Authors: So-Young Ju, Sung-Mi Jo, Eui-Rim Jeong

Abstract:

In this paper, based on the conventional single carrier frequency domain equalization (SC-FDE) structure, we propose a new SC-FDE structure to cope with a narrowband jammer. In the conventional SC-FDE structure, channel estimation is performed in the time domain. When a narrowband jammer exists, time-domain channel estimation is very difficult due to the high-power jamming interference, which degrades receiver performance. To relieve this problem, a new SC-FDE frame is proposed to enable channel estimation in narrowband jamming environments. We propose a modified SC-FDE structure that can perform channel estimation in the frequency domain and verify its performance via computer simulation.
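
The benefit of estimating the channel in the frequency domain is that a narrowband jammer corrupts only a few subcarriers, whose estimates can be discarded and interpolated from their neighbours. The sketch below illustrates that idea with a toy pilot block; the block size, channel and jammer model are all assumptions, not the paper's frame design:

```python
import numpy as np

rng = np.random.default_rng(15)
N = 64                                            # FDE block size

# Frequency-domain pilot block (unit-magnitude symbols) and a short multipath
# channel; a narrowband jammer hits a few adjacent subcarriers.
pilot = np.exp(1j * 2 * np.pi * rng.random(N))
h = np.array([0.9, 0.4 + 0.2j, 0.1j])
H = np.fft.fft(h, N)
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H * pilot + noise
jam_bins = np.arange(20, 24)
Y[jam_bins] += 5 * (rng.normal(size=4) + 1j * rng.normal(size=4))   # jammer

# Frequency-domain LS channel estimate from the pilot block.
H_ls = Y / pilot

# Bins hit by the jammer show abnormal power; discard them and interpolate
# from the neighbouring bins instead of using the corrupted estimates.
power = np.abs(H_ls) ** 2
bad = power > 4 * np.median(power)
good = ~bad
H_est = H_ls.copy()
H_est[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), H_ls[good].real) \
           + 1j * np.interp(np.flatnonzero(bad), np.flatnonzero(good), H_ls[good].imag)
print("channel estimation MSE:", np.mean(np.abs(H_est - H) ** 2))
```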

Keywords: channel estimation, jammer, pilot, SC-FDE

Procedia PDF Downloads 468
7188 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study suggests an estimation method for the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data so as to satisfy suitable conditions, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or of the whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, which include the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), can be categorized as contact-type sensors that must be installed on the structural members; they also have various limitations, such as the need for separate space for the network cables and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS form of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied in the field of structural health monitoring. The important characteristic of TLS measurement is the formation of point clouds, which contain many points with local coordinates. Point clouds are not linearly distributed but dispersed, so interpolation is vital for their analysis. Through the formation of averaged lattices and CSSI on the raw data, a method was developed that can estimate the displacement of a simple beam. The developed method can also be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
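
The chain from scanned deflections to stress can be illustrated with a cubic smoothing spline: fit the spline to the lattice of deflection points, take its second derivative as the curvature, and apply the Euler-Bernoulli relations strain = -c·w''(x) and stress = E·strain. The sketch below uses a simulated simply supported beam, with the span, section depth and noise level chosen arbitrarily rather than taken from the loading test:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(16)

# Hypothetical lattice of averaged TLS points along a simply supported beam:
# x positions (m) and measured vertical deflections (m) with scanner noise.
L, E, c = 6.0, 2.1e11, 0.15            # span (m), Young's modulus (Pa), half-depth (m)
x = np.linspace(0, L, 60)
w0 = 0.004                              # midspan deflection (m)
w_true = w0 * np.sin(np.pi * x / L)     # deflected shape
w_meas = w_true + rng.normal(scale=5e-5, size=x.size)

# Cubic smoothing spline fitted to the lattice of deflections (the CSSI step).
spline = UnivariateSpline(x, w_meas, k=3, s=len(x) * (5e-5) ** 2)

# Euler-Bernoulli relations: strain = -c * w''(x), stress = E * strain.
curvature = spline.derivative(2)(x)
stress = -E * c * curvature             # Pa, at the extreme fibre
print("estimated midspan stress (MPa):  ", stress[x.size // 2] / 1e6)
print("theoretical midspan stress (MPa):",
      E * c * w0 * (np.pi / L) ** 2 / 1e6)
```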

Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation

Procedia PDF Downloads 431
7187 A Spectral Decomposition Method for Ordinary Differential Equation Systems with Constant or Linear Right Hand Sides

Authors: R. B. Ogunrinde, C. C. Jibunoh

Abstract:

In this paper, a spectral decomposition method is developed for the direct integration of stiff and nonstiff homogeneous linear ODE systems with linear, constant, or zero right hand sides (RHSs). The method does not require iteration but obtains solutions at arbitrary points of t in the interval of integration by direct evaluation. All the numerical solutions obtained for this class of systems coincide with the exact theoretical solutions. In particular, solutions of homogeneous linear systems, i.e. with zero RHS, conform to the exact analytical solutions of the systems in terms of t.
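
For the constant-RHS case the idea reduces to a few lines: diagonalize the coefficient matrix and evaluate the closed-form solution at any requested t, with no stepping. A minimal sketch (assuming A is diagonalizable and invertible) is:

```python
import numpy as np

# Stiff linear system x' = A x + b with a constant right hand side.
A = np.array([[-1000.0, 1.0],
              [0.0, -1.0]])
b = np.array([1.0, 2.0])
x0 = np.array([0.0, 0.0])

# Spectral decomposition A = V diag(lam) V^-1; with a constant RHS the exact
# solution is x(t) = V exp(lam t) V^-1 (x0 - x_p) + x_p, where x_p = -A^-1 b.
lam, V = np.linalg.eig(A)
x_p = -np.linalg.solve(A, b)
coef = np.linalg.solve(V, x0 - x_p)

def x(t):
    """Direct evaluation at any t; no step-by-step integration is required."""
    return (V @ (np.exp(lam * t) * coef)).real + x_p

for t in (0.001, 0.1, 10.0):
    print(t, x(t))
```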

Keywords: spectral decomposition, linear RHS, homogeneous linear systems, eigenvalues of the Jacobian

Procedia PDF Downloads 326
7186 Model-Based Software Regression Test Suite Reduction

Authors: Shiwei Deng, Yang Bao

Abstract:

In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and then the probability-driven greedy algorithm is adopted to select the minimum set of test cases from the reduced regression test suite that covers all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
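
The selection step can be pictured as a greedy set-cover over interaction patterns, with a probability used for tie-breaking. The sketch below is a simplified illustration of that mechanism under invented coverage sets and probabilities, not the paper's algorithm:

```python
from typing import Dict, List, Set

def reduce_suite(coverage: Dict[str, Set[str]],
                 prob: Dict[str, float]) -> List[str]:
    """Greedy reduction: repeatedly pick the test case that covers the most
    still-uncovered interaction patterns, breaking ties by an assumed
    fault-detection probability, until every pattern is covered."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage,
                   key=lambda t: (len(coverage[t] & uncovered), prob.get(t, 0.0)))
        gain = coverage[best] & uncovered
        if not gain:                       # remaining patterns are uncoverable
            break
        selected.append(best)
        uncovered -= gain
    return selected

# Hypothetical interaction patterns affected by an elementary EFSM modification.
coverage = {
    "tc1": {"p1", "p2", "p3"},
    "tc2": {"p3", "p4"},
    "tc3": {"p4", "p5", "p6"},
    "tc4": {"p1", "p6"},
}
prob = {"tc1": 0.7, "tc2": 0.4, "tc3": 0.8, "tc4": 0.5}
print(reduce_suite(coverage, prob))        # two test cases cover p1..p6
```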

Keywords: dependence analysis, EFSM model, greedy algorithm, regression test

Procedia PDF Downloads 422
7185 Choosing between the Regression Correlation, the Rank Correlation, and the Correlation Curve

Authors: Roger L. Goodwin

Abstract:

This paper presents a rank correlation curve. The traditional correlation coefficient is valid for both continuous variables and for integer variables using rank statistics. Since the correlation coefficient has already been established in rank statistics by Spearman, such a calculation can be extended to the correlation curve. This paper presents two survey questions. The survey collected non-continuous variables. We will show weak to moderate correlation. Obviously, one question has a negative effect on the other. A review of the qualitative literature can answer which question and why. The rank correlation curve shows which collection of responses has a positive slope and which collection of responses has a negative slope. Such information is unavailable from the flat, "first-glance" correlation statistics.

Keywords: Bayesian estimation, regression model, rank statistics, correlation, correlation curve

Procedia PDF Downloads 469
7184 Growth Curves Genetic Analysis of Native South Caspian Sea Poultry Using Bayesian Statistics

Authors: Jamal Fayazi, Farhad Anoosheh, Mohammad R. Ghorbani, Ali R. Paydar

Abstract:

In this study, to determine the best non-linear regression model describing the growth curve of native poultry, 9657 chicks of generations 18, 19, and 20 raised in the Mazandaran breeding center were used. Hens and roosters from this center are distributed in the southern Caspian Sea region. To estimate the genetic variability of the non-linear regression parameters of the growth traits, Gibbs sampling within a Bayesian analysis was used. The average body weights on the first day (BW1), in the eighth week (BW8) and in the twelfth week (BW12) were estimated as 36.05, 763.03, and 1194.98 grams, respectively. Based on the coefficient of determination, the mean square error and the Akaike information criterion, the Gompertz model was selected as the best growth-descriptive function. In the Gompertz model, the parameters maturity weight (A), integration constant (B) and maturity rate (K) were estimated to be 1734.4, 3.986, and 0.282, respectively. The direct heritability estimates for BW1, BW8 and BW12 were reported as 0.378, 0.3709, 0.316, 0.389, 0.43, 0.09 and 0.07. With regard to the estimated parameters, the results of this study indicate that it is possible to improve some properties of the growth curve using appropriate selection programs.
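
The Gompertz curve named above, W(t) = A·exp(-B·exp(-K·t)), can be fitted by non-linear least squares in a few lines. The sketch below simulates weekly weights from the parameter values reported in the abstract and recovers them; the data themselves are synthetic, not the Mazandaran records, and the fit is frequentist rather than the Gibbs-sampling Bayesian analysis used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth curve: W(t) = A * exp(-B * exp(-K * t)), with A the maturity
# weight, B the integration constant, and K the maturity rate.
def gompertz(t, A, B, K):
    return A * np.exp(-B * np.exp(-K * t))

# Hypothetical weekly body-weight records (grams), weeks 0-12.
t = np.arange(0, 13)
rng = np.random.default_rng(17)
w = gompertz(t, 1734.4, 3.986, 0.282) + rng.normal(scale=25, size=t.size)

params, _ = curve_fit(gompertz, t, w, p0=[1500, 3, 0.3])
A, B, K = params
print(f"A={A:.1f} g, B={B:.3f}, K={K:.3f}")

# Coefficient of determination, one of the criteria used to compare curves.
resid = w - gompertz(t, *params)
r2 = 1 - np.sum(resid**2) / np.sum((w - w.mean())**2)
print("R2:", round(r2, 4))
```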

Keywords: direct heritability, Gompertz, growth traits, maturity weight, native poultry

Procedia PDF Downloads 260
7183 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first associated n moments contain precisely the same amount of information. However, it is efficient to make use of a limited number of initial moments as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments, the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to model ‘big data’ as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate as will be shown in several illustrative examples.
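
The first of the two techniques, a base density times a polynomial whose coefficients are found by equating moments, can be sketched directly. The sketch below uses a normal base density and a degree-4 adjustment on simulated gamma data; the base choice, degree and data are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(18)
data = gamma.rvs(3.0, scale=2.0, size=2000, random_state=rng)   # skewed sample

d = 4                                          # degree of the polynomial adjustment
sample_moments = np.array([np.mean(data**i) for i in range(d + 1)])

# Base density: a normal matched to the sample mean and standard deviation.
mu, s = data.mean(), data.std(ddof=0)

# Raw moments of N(mu, s^2) via the recursion M_k = mu*M_{k-1} + (k-1)*s^2*M_{k-2}.
base_moments = [1.0, mu]
for k in range(2, 2 * d + 1):
    base_moments.append(mu * base_moments[k - 1] + (k - 1) * s**2 * base_moments[k - 2])

# Solve for the coefficients c so that the moments of
# f(x) = phi(x; mu, s) * sum_j c_j x^j match the sample moments up to order d.
M = np.array([[base_moments[i + j] for j in range(d + 1)] for i in range(d + 1)])
c = np.linalg.solve(M, sample_moments)

def density_estimate(x):
    # Note: the polynomial adjustment can dip slightly negative in the tails.
    return norm.pdf(x, mu, s) * np.polyval(c[::-1], x)

xs = np.linspace(0, 20, 5)
print(np.column_stack([xs, density_estimate(xs), gamma.pdf(xs, 3.0, scale=2.0)]))
```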

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 161