Search results for: logistics regression model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18495

18195 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially supermarkets. Point of sale (POS) systems make it possible to record customers' daily purchasing behaviors in an identification point of sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. Customer value is an indicator, computed from the ID-POS database, for detecting customer loyalty to a store. In general, a city contains many supermarkets, and nearby competitor supermarkets significantly affect the customer value of a given supermarket's customers. However, detailed ID-POS databases of competitor supermarkets are impossible to obtain. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. Three primary problems arose during the modeling process: the incomparability of customer values, multicollinearity among customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model incorporating all three methods is the best for evaluating the influence of other nearby supermarkets on customers' purchasing from the supermarket chain, in terms of both valid partial regression coefficients and accuracy.
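
The core of Huff's gravity model mentioned above can be sketched in a few lines: a customer's probability of patronising a store is proportional to the store's attractiveness divided by a power of the distance to it. This is a minimal illustration with made-up attractiveness and distance values, not the paper's calibrated model:

```python
def huff_probabilities(attractiveness, distances, lam=2.0):
    """Huff's gravity model: P(customer patronises store j) is
    proportional to attractiveness_j / distance_j**lam."""
    utilities = [a / d ** lam for a, d in zip(attractiveness, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# A customer living 1 km from store A and 2 km from store B,
# with both stores equally attractive (illustrative values):
probs = huff_probabilities([1.0, 1.0], [1.0, 2.0])
```

With a distance-decay exponent of 2, the nearer store captures 80% of this customer's patronage probability.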

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 99
18194 Understanding the Effect of Fall Armyworm and Integrated Pest Management Practices on the Farm Productivity and Food Security in Malawi

Authors: Innocent Pangapanga, Eric Mungatana

Abstract:

Fall armyworm (FAW) (Spodoptera frugiperda), an invasive lepidopteran pest, has caused substantial yield loss since its first detection in September 2016, thereby threatening farm productivity, food security, and poverty reduction initiatives in Malawi. Several stakeholders, including households, have adopted chemical pesticides to control FAW without accounting for their costs to welfare, health, and the environment. This study therefore used a panel-data endogenous switching regression model to investigate the impact of FAW and integrated pest management (IPM) related practices on farm productivity and food security. The study finds that FAW reduces farm productivity by seven (7) percent and increases the adoption of IPM-related practices, namely intercropping, mulching, and agroforestry, by 6 percent, ceteris paribus. Interestingly, adopting multiple IPM-related practices noticeably increases farm productivity by 21 percent. After accounting for potential endogeneity through the endogenous switching regression model, the IPM practices further demonstrate a tenfold improvement in food security, underscoring the role of IPM-related practices in containing the effect of FAW at the household level.

Keywords: hunger, invasive fall army worms, integrated pest management practices, farm productivity, endogenous switching regression

Procedia PDF Downloads 111
18193 Parameter Estimation via Metamodeling

Authors: Sergio Haram Sarmiento, Arcady Ponosov

Abstract:

Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, the classical linear regression strategy is refined into a nonlinear regression by a locally linear modelling technique (known as metamodelling). The approach identifies those latent variables of the given model that accumulate the most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular to the so-called "power-law systems", non-linear differential equations typically used in Biochemical Systems Theory.

Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels

Procedia PDF Downloads 483
18192 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model

Authors: Xiang Zhang, David Rey, S. Travis Waller

Abstract:

The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning, regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining an appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, a parameter calibration method is developed based on the logit-based route choice model. The calibration is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to show that the SUE model calibrated by the proposed method outperforms both SUE models under other parameter values and the DUE model.
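
The logit-based route choice model underlying the calibration can be sketched as follows; here `theta` plays the role of the error term (dispersion) parameter being calibrated, and the route costs are illustrative:

```python
import math

def logit_route_probabilities(costs, theta):
    """Logit route choice: P_k = exp(-theta * c_k) / sum_j exp(-theta * c_j).
    theta is the dispersion (error term) parameter of the SUE model."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Two routes with travel costs 10 and 12: a larger theta concentrates
# flow on the cheaper route; a smaller theta spreads flow out.
low_dispersion = logit_route_probabilities([10.0, 12.0], theta=1.0)
high_dispersion = logit_route_probabilities([10.0, 12.0], theta=0.1)
```

This illustrates why the parameter matters: the assignment it produces changes substantially with `theta`, which is what motivates calibrating it rather than assuming a value.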

Keywords: parameter calibration, sequential quadratic programming, stochastic user equilibrium, traffic assignment, transportation planning

Procedia PDF Downloads 266
18191 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning

Authors: Jean Berger, Mohamed Barkaoui

Abstract:

Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute near real-time efficient path plans are mainly limited to providing few-move solutions. A new information-theoretic, open-loop decision model that explicitly incorporates false alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists of minimizing expected entropy over a given time horizon, considering anticipated possible observation outcomes. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target's location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes on a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
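
The entropy objective and a belief update allowing false positives can be sketched as below. This is a hedged illustration of the general Bayesian mechanics, not the paper's compact formulation; the sensor probabilities are assumed values:

```python
import math

def entropy(beliefs):
    """Shannon entropy of the target-location belief distribution."""
    return -sum(p * math.log(p) for p in beliefs if p > 0)

def update_beliefs(beliefs, observed_cell, detection, false_alarm, hit):
    """Bayesian update of cell-occupancy beliefs after one observation.
    `detection` is P(sensor reports | target present); `false_alarm` is
    P(sensor reports | target absent); `hit` is the sensor reading."""
    likelihoods = []
    for i, _ in enumerate(beliefs):
        if i == observed_cell:
            likelihoods.append(detection if hit else 1 - detection)
        else:
            likelihoods.append(false_alarm if hit else 1 - false_alarm)
    posterior = [lk * p for lk, p in zip(likelihoods, beliefs)]
    total = sum(posterior)
    return [p / total for p in posterior]

# A detection in cell 0 sharpens a uniform 4-cell belief and lowers entropy:
uniform = [0.25] * 4
after_hit = update_beliefs(uniform, 0, detection=0.9, false_alarm=0.1, hit=True)
```

Minimizing the expected value of `entropy` over anticipated observation outcomes, as described above, is what steers the path toward informative cells.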

Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm

Procedia PDF Downloads 338
18190 Integrated Nested Laplace Approximations for Quantile Regression

Authors: Kajingulu Malandala, Ranganai Edmore

Abstract:

The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function in Bayesian quantile regression, and it offers a family of likelihood methods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes its likelihood difficult to maximize. Furthermore, Bayesian inference is time-consuming, and the choice of likelihood may mislead the inference, as Bayes' theorem does not automatically establish the posterior inference. The ALD also does not account for greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse of the cumulative distribution function of the Poisson, binomial, and Delaporte distributions, using integrated nested Laplace approximations (INLA). Our results validate the benefit of using integrated nested Laplace approximations and support the approach for count data.
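
The inverse-CDF construction for the Poisson case can be sketched as follows; `poisson_quantile` is an illustrative helper name, and the example parameters are arbitrary:

```python
import math

def poisson_quantile(q, lam):
    """Inverse CDF of Poisson(lam): the smallest integer k with
    P(X <= k) >= q, built by accumulating the pmf recursively
    (pmf(k) = pmf(k-1) * lam / k, starting from pmf(0) = e^{-lam})."""
    cdf, k, pmf = 0.0, 0, math.exp(-lam)
    while True:
        cdf += pmf
        if cdf >= q:
            return k
        k += 1
        pmf *= lam / k

median = poisson_quantile(0.5, 3.0)  # median of a Poisson(3) count
upper = poisson_quantile(0.9, 3.0)   # its 90th percentile
```

Mapping quantile levels through the count distribution's inverse CDF in this way is what lets quantiles be defined sensibly for discrete data.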

Keywords: quantile regression, Delaporte distribution, count data, integrated nested Laplace approximation

Procedia PDF Downloads 137
18189 Combined Analysis of m⁶A and m⁵C Modulators on the Prognosis of Hepatocellular Carcinoma

Authors: Hongmeng Su, Luyu Zhao, Yanyan Qian, Hong Fan

Abstract:

Aim: Hepatocellular carcinoma (HCC) is one of the most common malignant tumors and a serious threat to human health. RNA methylation, especially N6-methyladenosine (m⁶A) and 5-methylcytosine (m⁵C), is a crucial epigenetic transcriptional regulatory mechanism that plays an important role in tumorigenesis, progression, and prognosis. This research aims to systematically evaluate the prognostic value of m⁶A and m⁵C modulators in HCC patients. Methods: Twenty-four m⁶A and m⁵C modulators were analyzed for their expression levels and their contribution to predicting the prognosis of HCC. Consensus clustering analysis was applied to classify HCC patients. Cox and LASSO regression were used to construct the risk model. According to the risk score, HCC patients were divided into high-risk and low/medium-risk groups. Clinicopathological factors of HCC patients were analyzed by univariate and multivariate Cox regression analysis. Results: The HCC patients were classified into two clusters with significant differences in overall survival and clinical characteristics. A nine-gene risk model was constructed comprising METTL3, VIRMA, YTHDF1, YTHDF2, NOP2, NSUN4, NSUN5, DNMT3A and ALYREF. The risk score was shown to serve as an independent prognostic factor for patients with HCC. Conclusion: This study constructed a nine-gene risk model from m⁶A and m⁵C modulators and investigated its effect on the clinical prognosis of HCC. This model may inform therapeutic strategy and prognosis evaluation for patients with HCC.
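
A risk score of this kind is typically the coefficient-weighted sum of the signature genes' expression values, with patients then dichotomised at a cutoff. The sketch below uses hypothetical coefficients and expression values, not the paper's fitted model:

```python
from statistics import median

def risk_scores(expressions, coefficients):
    """Risk score = sum of (regression coefficient * expression value)
    over the signature genes, the usual form of a Cox/LASSO signature."""
    return [sum(c * x for c, x in zip(coefficients, row))
            for row in expressions]

def split_by_median(scores):
    """Dichotomise patients into high-risk vs low/medium-risk groups
    at the median score (one common cutoff choice)."""
    cutoff = median(scores)
    return ["high" if s > cutoff else "low/medium" for s in scores]

# Four hypothetical patients, two hypothetical genes:
coefs = [0.8, -0.5]
patients = [[2.0, 1.0], [0.5, 2.0], [3.0, 0.5], [1.0, 1.0]]
groups = split_by_median(risk_scores(patients, coefs))
```

The resulting group label is then tested as a covariate in the univariate and multivariate Cox analyses described above.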

Keywords: hepatocellular carcinoma, m⁶A, m⁵C, prognosis, RNA methylation

Procedia PDF Downloads 38
18188 Student Loan Debt among Students with Disabilities

Authors: Kaycee Bills

Abstract:

This study determines whether students with disabilities have higher student loan debt payments than other student populations. The hypothesis was that students with disabilities would have significantly higher student loan debt payments than other students due to the length of time they spend in school. Quantitative methods were applied to the Baccalaureate and Beyond Study Wave 2015/017 dataset; the analysis included linear regression and a correlation matrix. Due to the exploratory nature of the study, the significance level for the overall model and for each variable was set at .05. The correlation matrix demonstrated that students with certain types of disabilities are more likely to fall into higher student loan payment brackets than students without disabilities. These results also varied among the different types of disabilities. The overall linear regression model was statistically significant (p = .04). Despite this, most of the coefficients for the different disability types were not statistically significant. However, several other variables had statistically significant results, such as veteran status, minority race, and private school attendance. Implications for the economy, capitalism, and the financial wellbeing of various students are discussed.

Keywords: disability, student loan debt, higher education, social work

Procedia PDF Downloads 149
18187 Analysis of Active Compounds in Thai Herbs by Near Infrared Spectroscopy

Authors: Chaluntorn Vichasilp, Sutee Wangtueai

Abstract:

This study aims to develop a new method to detect active compounds in Thai herbs (1-deoxynojirimycin (DNJ) in mulberry leaves, anthocyanin in Mao, and curcumin in turmeric) using near infrared spectroscopy (NIRS). NIRS is a non-destructive technique offering rapid, chemical-free, low-cost determination. Using NIRS and chemometric techniques, the DNJ prediction equation, built with partial least squares regression with cross-validation, had low accuracy (R² = 0.42, SEP = 31.87 mg/100 g). On the other hand, the anthocyanin prediction equation showed moderately good results (R² = 0.78, SEP = 0.51 mg/g) with multiplicative scattering correction over the 2000-2200 nm range. High absorption was observed at 2047 nm, and this model could be used at a screening level. For curcumin prediction, a good result was obtained when the original spectra were smoothed: a regression model over the 1400-2500 nm range achieved R² = 0.68 and SEP = 0.17 mg/g, with high NIR absorption at wavelengths of 1476, 1665, 1986, and 2395 nm. NIRS is thus a promising technique for detecting some active compounds in Thai herbs.
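
The R² and SEP figures quoted above can be computed as sketched below; the SEP definition used here (standard deviation of the bias-corrected residuals) is one common chemometrics convention, and the data are illustrative:

```python
import math
from statistics import mean

def r_squared(actual, predicted):
    """Coefficient of determination of a calibration model."""
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean(actual)) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def sep(actual, predicted):
    """Standard Error of Prediction: standard deviation of the
    bias-corrected residuals (n - 1 denominator)."""
    residuals = [a - p for a, p in zip(actual, predicted)]
    bias = mean(residuals)
    n = len(residuals)
    return math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))

# Illustrative reference vs predicted concentrations (mg/g):
actual = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]
r2 = r_squared(actual, predicted)
```

Together the two statistics describe how much variance the calibration explains and how far, on average, an individual prediction can be expected to deviate.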

Keywords: anthocyanin, curcumin, 1-deoxynojirimycin (DNJ), near infrared spectroscopy (NIRS)

Procedia PDF Downloads 355
18186 Modeling Intention to Use 3PL Services: An Application of the Theory of Planned Behavior

Authors: Nasrin Akter, Prem Chhetri, Shams Rahman

Abstract:

The present study tested Ajzen's Theory of Planned Behavior (TPB) model to explain the formation of business customers' intention to use 3PL services in Bangladesh. The findings show that the TPB model has a good fit to the data. Based on theoretical support and suggested modification indices, a refined TPB model was then developed, which provides better predictive power for intention. Consistent with the theory, the results of a structural equation analysis revealed that the intention to use 3PL services is predicted by attitude and subjective norms but not by perceived behavioral control. Further investigation indicated that the paths between attitude and intention, and between subjective norms and intention, did not statistically differ between 3PL users and non-users. The findings of this research provide an evidence base for formulating business strategies to increase the use of 3PL services in Bangladesh, to enhance productivity and gain economic efficiency.

Keywords: Bangladesh, intention, third-party logistics, Theory of Planned Behavior

Procedia PDF Downloads 561
18185 Applying Quadrant Analysis in Identifying Business-to-Business Customer-Driven Improvement Opportunities in Third Party Logistics Industry

Authors: Luay Jum'a

Abstract:

Third-party logistics (3PL) providers face many challenges in domestic and global markets, which create a volatile decision-making environment. Challenges such as managing changes in consumer behaviour, demanding customer expectations, and time compression have turned into complex problems for 3PL providers. Since the movement towards increased outsourcing outpaces the movement towards insourcing, the need to achieve a competitive advantage in the 3PL market increases. This trend continues to grow over the years, and as a result, identifying areas of strength and improvement through analysis of the logistics service quality (LSQ) factors that drive business-to-business (B2B) customer satisfaction has become a priority for 3PL companies. Consequently, 3PL companies increasingly focus on the issues most important from their customers' perspective and rely more on this information in making managerial decisions. This study is therefore concerned with providing guidance for improving LSQ levels in the 3PL industry in Jordan. It focuses on the most important LSQ factors and offers a managerial tool that guides 3PL companies in making LSQ improvements based on a quadrant analysis of two dimensions: declared LSQ importance and inferred LSQ importance. Although a considerable amount of research has investigated the relationship between LSQ and customer satisfaction, there remains a lack of managerial tools to aid LSQ improvement decision-making. Moreover, the main reason companies use 3PL service providers is the realised cost reduction in total logistics operations and the incremental improvement in customer service.
In this regard, a managerial tool that helps 3PL service providers manage the LSQ factor portfolio effectively and efficiently would be a worthwhile investment. One way of suggesting LSQ improvement actions for 3PL service providers is to adopt analysis tools that perform attribute categorisation, such as the importance-performance matrix. With the above in mind, quadrant analysis provides a valuable opportunity for 3PL service providers to identify improvement opportunities, as the importance of customer service attributes is identified through two different techniques that complement each other. Data were collected through a survey, and 293 questionnaires were returned from business-to-business (B2B) customers of 3PL companies in Jordan. The results showed that LSQ factors vary in their importance, and 3PL companies should focus on some factors more than others. In particular, the ordering procedures and timeliness/responsiveness factors are considered crucial in 3PL businesses and therefore need more focus and development by 3PL service providers in the Jordanian market.
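
The quadrant classification itself can be sketched as below: each factor is placed relative to the mean of each importance dimension. The factor names echo the abstract, but the scores and quadrant labels are illustrative assumptions:

```python
from statistics import mean

def quadrant_analysis(factors):
    """Classify LSQ factors on two dimensions (declared importance,
    inferred importance), splitting each axis at its mean."""
    d_cut = mean(d for _, d, _ in factors)
    i_cut = mean(i for _, _, i in factors)
    result = {}
    for name, declared, inferred in factors:
        if declared >= d_cut and inferred >= i_cut:
            result[name] = "high on both"
        elif declared >= d_cut:
            result[name] = "declared-high only"
        elif inferred >= i_cut:
            result[name] = "inferred-high only"
        else:
            result[name] = "low on both"
    return result

# (factor, declared importance, inferred importance) -- illustrative scores:
factors = [("ordering procedures", 4.6, 4.4),
           ("timeliness", 4.5, 4.7),
           ("personnel contact", 3.2, 3.0),
           ("order discrepancy handling", 3.8, 4.1)]
quadrants = quadrant_analysis(factors)
```

Factors landing "high on both" are the ones the abstract flags as priorities; the off-diagonal quadrants reveal where the two importance measures disagree and therefore complement each other.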

Keywords: logistics service quality, managerial decisions, quadrant analysis, third party logistics service provider

Procedia PDF Downloads 112
18184 Analysis of Spatial Heterogeneity of Residential Prices in Guangzhou: An Actual Study Based on POI Geographically Weighted Regression Model

Authors: Zichun Guo

Abstract:

Guangzhou's housing prices have long lagged behind those of the other three major cities. As Guangzhou's housing price ladder grows, the factors influencing housing prices have gradually attracted attention. This article uses housing price data and point-of-interest (POI) data, together with the Kriging spatial interpolation method and a geographically weighted regression model, to explore the distribution of housing prices and the influence of different factors. Spatial heterogeneity in these effects is evident, especially in Huadu District and the city center, and the response is mainly obvious in surrounding areas, which may be related to housing positioning; economic POIs close to the city center show stronger responses. Identifying the factors affecting housing prices in this way supports management and macro-control by the relevant departments, better meets the demand for home purchases, and helps realize financing-side reforms.

Keywords: housing prices, spatial heterogeneity, Guangzhou, POI

Procedia PDF Downloads 17
18183 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the method commonly used to rectify this problem. However, ridge regression cannot handle multicollinearity caused by high leverage collinearity-enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, their effect needs to be reduced by using a Generalized M (GM) estimator. The existing GM6 estimator is based on the Minimum Volume Ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient of the methods compared.

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 228
18182 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data is lacking and conventional surveys are impractical. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the Kankesanturai site using WorldView-2 satellite imagery. The Levenberg-Marquardt method was used to calibrate the parameters of the non-linear inversion model, and multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of model residuals was mapped. Spatial autocorrelation was calculated to compare the performance of the bathymetric models, and the results show the geographic errors of both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model generates more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating it into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692). The spatial error models improved the bathymetric estimates derived from the linear and non-linear models to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges.
The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. Overall RMSE for log-linear and the non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
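
The RMSE reported above is the standard residual summary over the reference points; a minimal sketch with hypothetical depths:

```python
import math

def rmse(observed, estimated):
    """Root Mean Square Error between reference (echo-sounder) depths
    and satellite-derived depth estimates."""
    n = len(observed)
    return math.sqrt(sum((o - e) ** 2
                         for o, e in zip(observed, estimated)) / n)

# Hypothetical SBES reference depths (m) vs model estimates (m):
sbes = [2.0, 5.0, 10.0, 20.0]
model = [2.3, 4.5, 11.0, 18.5]
overall_error = rmse(sbes, model)
```

Computing this per depth range, as the study does, makes the depth-dependence of the error visible: in the illustrative data above, the individual residuals grow with depth just as the abstract describes.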

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 275
18181 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain

Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang

Abstract:

Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute significant variation in air temperature. In homogeneous terrain, on the other hand, direct interpolation of discrete air temperature points works well to estimate air temperature in un-sampled areas; in this process, the estimation is based solely on the discrete points. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. This study compared two datasets: observed mean monthly air temperature T, and the estimation error T - T', where T' is the value estimated from a multiple regression model. The multiple regression model considered eight independent variables (elevation, latitude, longitude, distance to coastline, and four land use types: water bodies, forest, agriculture, and built-up areas) to represent the role of air temperature controls. Cross-validation analysis was conducted to review the accuracy of the estimated values. Final results show that interpolating T - T' produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
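
The IDW step can be sketched as below: each sample is weighted by the inverse of its distance to the target location raised to a power. Coordinates and temperatures are illustrative:

```python
def idw(sample_points, target, power=2.0):
    """Inverse Distance Weighting: estimate the value at `target` from
    (x, y, temperature) samples; nearer stations get larger weights."""
    num = den = 0.0
    for x, y, t in sample_points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return t  # exactly at a station: return its value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * t
        den += w
    return num / den

# Two hypothetical stations at 26 and 28 degrees C, 2 units apart:
stations = [(0.0, 0.0, 26.0), (2.0, 0.0, 28.0)]
midpoint = idw(stations, (1.0, 0.0))  # equidistant, so a plain average
```

In the study's setup the same interpolator is applied either to T directly or to the regression residual T - T', with the regression model supplying the air-temperature-control component.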

Keywords: air temperature control, interpolation analysis, peninsular Malaysia, regression model, air temperature

Procedia PDF Downloads 353
18180 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex-systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them whose location is generally determined by specific company policies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
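
A plain k-means pass over the two-dimensional customer space can be sketched as follows; the customer coordinates and fixed initial centroids are illustrative, not the case-study data:

```python
def kmeans_2d(points, centroids, iterations=20):
    """Plain k-means on the 2-D customer space (distance to facility,
    demand frequency); initial centroids are fixed for determinism."""
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                     for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)]
    return centroids, clusters

# Hypothetical customers: "near, frequent" (direct strategy) vs
# "far, infrequent" (indirect strategy).
pts = [(1, 9), (2, 8), (1.5, 9.5), (9, 2), (8, 1), (9.5, 1.5)]
centers, groups = kmeans_2d(pts, [(0.0, 10.0), (10.0, 0.0)])
```

The resulting cluster centroids can then be read as candidate warehouse locations, which is how the study proposes new facilities beyond the existing network.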

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 112
18179 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, expresses consistency at a macro spatial level. The study of stable resting-state connectivity patterns hence provides opportunities for identifying diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections is useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation; the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size: an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate systemic drivers from idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module in other mathematical models.

Keywords: functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity

Procedia PDF Downloads 130
18178 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency monthly indicators in a mixed-frequency factor model, using both quarterly and monthly data. Nowcasting quarterly GDP is particularly relevant for the central bank of Indonesia, which sets the policy rate in its monthly Board of Governors Meeting; an important step in that meeting is the assessment of the current state of the economy. Thus, having an accurate and up-to-date quarterly GDP nowcast every time new monthly information becomes available is clearly of interest to the central bank, since the initial assessment of the current state of the economy, including the nowcast, is used as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare its nowcasts with two other approaches: an autoregressive model (which often performs poorly when forecasting output growth) and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and MIDAS nowcasts are produced from the same set of monthly indicators, so we compare the nowcast performance of the two approaches directly. To preview the results, we find that exploiting monthly indicators through the mixed-frequency factor model and MIDAS regression improves nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether MIDAS or the mixed-frequency factor model is better. Neither set of nowcasts encompasses the other, suggesting that both are valuable in nowcasting GDP but neither is sufficient.
By combining the two individual nowcasts, we find that the nowcast combination not only increases the accuracy - relative to individual nowcasts- but also lowers the risk of the worst performance of the individual nowcasts.
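The combination step can be sketched with inverse-MSE weights, one common weighting scheme (the abstract does not specify which scheme the authors use, and the nowcast series below are simulated for illustration):

```python
import numpy as np

# Hypothetical past nowcast errors of two models (factor model and MIDAS)
rng = np.random.default_rng(0)
actual = rng.normal(5.0, 0.5, 40)             # past year-on-year GDP growth
nowcast_a = actual + rng.normal(0, 0.30, 40)  # factor-model nowcasts
nowcast_b = actual + rng.normal(0, 0.45, 40)  # MIDAS nowcasts

# Inverse-MSE weights: the more accurate model receives more weight
mse_a = np.mean((nowcast_a - actual) ** 2)
mse_b = np.mean((nowcast_b - actual) ** 2)
w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)

combined = w_a * nowcast_a + (1 - w_a) * nowcast_b
mse_comb = np.mean((combined - actual) ** 2)
print(mse_a, mse_b, mse_comb)
```

By convexity of squared error, the combined MSE can never exceed the worse of the two individual MSEs, which is one reason combinations hedge against the worst individual performance.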

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 312
18177 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited

Authors: Kuhorfah Emmanuel Mawuli

Abstract:

Generally, for firms to enhance the operational efficiency of their logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in firms' pricing decisions, which suggests that cost efficiency in logistics can go a long way toward improving margins. Warehousing, a key part of logistics operations, can influence both operational efficiency in logistics management and customer value, but this potential has often not been recognized. There is a paucity of research evaluating the efficiency of warehouses, and limited research has examined potential barriers to effective warehousing management. As a result, little is known about how to address the obstacles associated with warehousing management. For warehousing management to become profitable, there is a need to integrate, balance, and manage the economic inputs and outputs of the entire warehouse operation, something many firms tend to ignore. Warehousing management is not solely about storage. Effective warehousing management requires practices such as the maximum possible mechanization and automation of operations, optimal use of the space and capacity of storage facilities, organization through a "continuous flow" of goods, a planned system of storage operations, and the safety of goods. For example, utilization of warehouse floor space is a good way to evaluate the storage operation and the number of items picked per hour. In the setting of Mantrac Ghana, little knowledge regarding the management of the warehouses exists. The researcher has personally observed many gaps in the management of the warehouse facilities in the case organization, Mantrac Ghana. It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as the mode of data collection. Respondents mainly comprised warehouse facility managers in the studied company; a total of 10 participants were selected using a purposive sampling strategy. Results demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management are poor layout, poor picking optimization, labour costs, and inaccurate orders. Policy implications of the study findings are finally outlined.

Keywords: assessing, warehousing, facility, management

Procedia PDF Downloads 37
18176 Weighted Rank Regression with Adaptive Penalty Function

Authors: Kang-Mo Jung

Abstract:

The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function regulating the tuning parameter for each variable. Rank regression is resistant to regression outliers but not to leverage points. By adopting a weighted loss function, the proposed method is robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function yields good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in R, with the tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing both real and contaminated data sets.
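The core idea, a pairwise-difference (rank-type) loss with observation weights that downweight high-leverage points, plus an L1 penalty, can be sketched as follows. This is an illustrative one-predictor criterion minimized by grid search, not the authors' estimator or algorithm; the weight function and penalty value are assumptions:

```python
import numpy as np
from itertools import combinations

# Toy data with one leverage point (hypothetical illustration)
rng = np.random.default_rng(1)
n = 30
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.5, n)
x[0], y[0] = 8.0, -10.0  # leverage point with an outlying response

# Downweight high-leverage observations via a robust distance in x
med = np.median(x)
mad = np.median(np.abs(x - med))
w = np.minimum(1.0, 2.0 / np.abs((x - med) / mad))

def objective(beta, lam=0.1):
    """Weighted pairwise-difference loss plus an L1 penalty: a sketch of a
    weighted rank-regression criterion, not the paper's exact form."""
    r = y - beta * x
    loss = sum(w[i] * w[j] * abs(r[i] - r[j])
               for i, j in combinations(range(n), 2))
    return loss / n + lam * abs(beta)

# Grid search over the slope for illustration
grid = np.linspace(-1, 4, 501)
beta_hat = grid[np.argmin([objective(b) for b in grid])]
print(beta_hat)
```

Because pairs involving the leverage point carry small weights, the minimizer stays near the true slope of 2 instead of being pulled toward the contaminated observation.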

Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression

Procedia PDF Downloads 435
18175 Assessing the Impacts of Urbanization on Urban Precincts: A Case of Golconda Precinct, Hyderabad

Authors: Sai Akhila Budaraju

Abstract:

Heritage sites are an integral part of cities and give cities and towns a sense of identity, but the process of urbanization poses a potential threat of losing these heritage sites and monuments. The Central and State Governments have listed the historic Golconda Fort as a Monument of National Importance and designated the heritage precinct, comprising eight heritage-listed buildings and two historical sites, for conservation and preservation. With the IT corridor 6 km away drawing more people into the precinct, the area is under constant pressure. The heritage precinct possesses high property values, being a prime location connecting the IT corridor and the central business district (CBD). The primary objective of the study was to assess and identify the factors affecting the heritage precinct through mapping and documentation, and to identify and assess those factors through empirical analysis, ordinal regression analysis, and a hedonic pricing model. Ordinal regression analysis was used to identify the factors contributing to changes in the precinct due to urbanization. The hedonic pricing model was used to establish whether the presence of historical monuments contributes to property values and to what extent. These methods and field visits indicate that the physical and socio-economic factors and the neighborhood characteristics of the precinct contribute to property values. The outcomes and the potential elements derived from the analysis of the Development Control Rules are presented as recommendations to integrate the old and newly built environments.
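A hedonic pricing model regresses (log) property price on structural and locational attributes, so the coefficient on a monument-proximity indicator measures the premium attached to the heritage setting. The sketch below uses simulated data with invented variable names and coefficients, purely to show the mechanics, not the study's data:

```python
import numpy as np

# Simulated hedonic data (all coefficients hypothetical):
# log(price) = b0 + b1*area + b2*dist_cbd + b3*near_monument + noise
rng = np.random.default_rng(2)
n = 500
area = rng.uniform(50, 200, n)          # built-up area (sq. m)
dist_cbd = rng.uniform(1, 15, n)        # distance to CBD (km)
near_monument = rng.integers(0, 2, n)   # 1 if inside the heritage precinct
log_price = (10 + 0.008 * area - 0.03 * dist_cbd
             + 0.15 * near_monument + rng.normal(0, 0.1, n))

# Ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), area, dist_cbd, near_monument])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[3] estimates the heritage premium in log points
print(beta)
```

Here the fitted `beta[3]` recovers the simulated 0.15 log-point premium, which is how the study can quantify the monuments' contribution to property value.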

Keywords: heritage planning, heritage conservation, hedonic pricing model, ordinal regression analysis

Procedia PDF Downloads 167
18174 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we show the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtain the explicit form of the variance estimator with an implicit function in the profile likelihood. We also show that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution is to express the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (smcure package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical performance of the proposed method is also demonstrated using the melanoma data from the smcure R package, and we compare the results with the output obtained from the smcure package.
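The mixture-cure structure itself is easy to write down: the population survival function is S_pop(t|x) = 1 − π(x) + π(x)·S_u(t|x), with a logistic incidence π(x) and a PH latency S_u. The sketch below uses an exponential baseline hazard and invented parameter values purely to illustrate the decomposition, not the paper's estimation procedure:

```python
import numpy as np

def incidence(x, gamma=(-0.5, 1.2)):
    """P(uncured | x): the logistic regression part of the mixture cure model."""
    eta = gamma[0] + gamma[1] * x
    return 1.0 / (1.0 + np.exp(-eta))

def latency_surv(t, x, beta=0.8, base_rate=0.3):
    """S_u(t | x): survival of the uncured, here an exponential baseline with a
    Cox-type PH covariate effect (an illustrative choice of baseline)."""
    return np.exp(-base_rate * t * np.exp(beta * x))

def population_surv(t, x):
    p = incidence(x)
    return (1 - p) + p * latency_surv(t, x)

t = np.linspace(0, 50, 6)
print(population_surv(t, x=1.0))
```

The defining feature of a cure model is visible here: S_pop starts at 1 and plateaus at 1 − π(x) rather than decaying to zero, since the cured fraction never experiences the event.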

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 116
18173 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study of IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experiments analyzed historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we trained and evaluated the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling to provide valuable insights for cricket enthusiasts, team management, and betting agencies. This study also serves as a benchmark for future research aimed at enhancing the accuracy and interpretability of IPL score prediction models.
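The "within ±10 runs" accuracy metric can be illustrated with a plain least-squares multiple regression on synthetic innings data. The feature names and coefficients below are invented for illustration and are not the paper's dataset or model:

```python
import numpy as np

# Synthetic innings features (hypothetical): powerplay runs, powerplay
# wickets, and the venue's average first-innings score
rng = np.random.default_rng(3)
n = 400
runs6 = rng.normal(48, 10, n)
wkts6 = rng.integers(0, 4, n)
venue_avg = rng.normal(165, 8, n)
final = 40 + 1.4 * runs6 - 6.0 * wkts6 + 0.35 * venue_avg \
        + rng.normal(0, 9, n)

# Multiple regression fit on a training split, evaluated on a held-out split
X = np.column_stack([np.ones(n), runs6, wkts6, venue_avg])
train, test = slice(0, 300), slice(300, n)
beta, *_ = np.linalg.lstsq(X[train], final[train], rcond=None)
pred = X[test] @ beta

# "Accuracy" in the paper's sense: share of predictions within +/- 10 runs
within10 = np.mean(np.abs(pred - final[test]) <= 10)
print(within10)
```

Reporting a tolerance-based accuracy like this is a sensible design choice for score prediction, since an exact run total is rarely attainable but "close enough for strategy decisions" is measurable.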

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 24
18172 Estimation of Missing Values in Aggregate Level Spatial Data

Authors: Amitha Puranik, V. S. Binu, Seena Biju

Abstract:

Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in the covariates, in the response variable, or in both at a given location. Many missing data techniques are available to estimate missing values, but not all of them can be applied to spatial data, since the data are autocorrelated. Hence there is a need for a method that estimates the missing values of both the response variable and the covariates in spatial data while taking account of spatial autocorrelation. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) correlation between the covariates and the response variable. Estimating missing values in spatial data requires a model that explicitly accounts for spatial autocorrelation. The proposed model does so and also utilizes the correlations that exist between covariates, within covariates, and between the response variable and covariates. Precise estimation of missing data points in spatial data will increase the precision of the estimated effects of independent variables on the response variable in spatial regression analysis.
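The simplest way autocorrelation enters imputation is through spatial weights: a missing value is predicted from observed neighbours, with nearer neighbours counting more. The inverse-distance sketch below illustrates that principle only; it is far simpler than the joint model the abstract describes, and all coordinates and values are made up:

```python
import numpy as np

def idw_impute(coords, values, power=2.0):
    """Fill missing entries (NaN) with an inverse-distance-weighted average of
    observed neighbours: a minimal sketch of autocorrelation-aware imputation,
    not the full joint model proposed in the study."""
    values = values.astype(float).copy()
    missing = np.isnan(values)
    obs = ~missing
    for i in np.where(missing)[0]:
        d = np.linalg.norm(coords[obs] - coords[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power   # nearer neighbours weigh more
        values[i] = np.sum(w * values[obs]) / np.sum(w)
    return values

# A centre cell missing among four observed corner cells
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]])
vals = np.array([10.0, 12.0, 11.0, 13.0, np.nan])
filled = idw_impute(coords, vals)
print(filled[-1])
```

Because the centre point is equidistant from all four observed cells, the imputed value is simply their mean, 11.5; with unequal distances the weighting would shift it toward the closest neighbours.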

Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis

Procedia PDF Downloads 349
18171 The Role of Logistics Services in Influencing Customer Satisfaction and Reviews in an Online Marketplace

Authors: Nafees Mahbub, Blake Tindol, Utkarsh Shrivastava, Kuanchin Chen

Abstract:

Online shopping has become an integral part of business today. Big players such as Amazon are setting the bar for delivery services, and many businesses are working toward meeting it. However, what happens if a seller underestimates or overestimates the delivery time? Does it translate into consumer comments, ratings, or lost sales? Although several prior studies have investigated the impact of poor logistics on customer satisfaction, the impact of underestimating delivery times has rarely been considered. The study uses real-time customer online purchase data to study the impact of missed delivery times on satisfaction.

Keywords: lost sales, delivery time, customer satisfaction, customer reviews

Procedia PDF Downloads 176
18170 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation in the context of fitting a PDRM for banks' audit fees. Estimation of the model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effect.
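The Monte Carlo assessment loop, simulating under a linear heteroscedasticity function, estimating repeatedly, and summarizing with ABIAS, MSE, and RMSE, can be sketched as follows. The design (a single OLS slope, 1000 replications, illustrative parameter values) is a simplification of the paper's 81-variation panel setup:

```python
import numpy as np

# Monte Carlo sketch of an estimator's ABIAS/MSE/RMSE under a linear
# heteroscedasticity function (all parameter values are illustrative)
rng = np.random.default_rng(4)
n, reps, beta_true = 50, 1000, 2.0
z = rng.uniform(0, 1, n)          # variable driving the error variance
sigma2 = 1.0 + 3.0 * z            # linear heteroscedasticity function
x = rng.normal(0, 1, n)

est = np.empty(reps)
for r in range(reps):
    e = rng.normal(0, np.sqrt(sigma2))   # heteroscedastic errors
    y = beta_true * x + e
    est[r] = (x @ y) / (x @ x)           # simple OLS slope estimate

abias = np.abs(np.mean(est) - beta_true)
mse = np.mean((est - beta_true) ** 2)
rmse = np.sqrt(mse)
print(abias, mse, rmse)
```

Note that OLS remains unbiased under heteroscedasticity (small ABIAS here); what the simulation exposes is the dispersion of the estimates, which is why MSE and RMSE are the informative criteria for comparing estimators in this setting.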

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 121
18169 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social sciences. In this paper, we address the problem of estimating the parameters of a logistic regression based on the MapReduce framework with RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. There are three learning algorithms for logistic regression: the gradient descent method, the cost minimization method, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need one to be picked manually. The experimental results demonstrated that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that the Newton-Raphson method was the most robust across all data tested.
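The learning-rate-free property of Newton-Raphson comes from using the Hessian of the log-likelihood directly. A single-machine sketch of the update (the sequential analogue of the paper's MapReduce version; the data here are simulated):

```python
import numpy as np

def logistic_newton(X, y, iters=20):
    """Fit logistic regression by Newton-Raphson: each step solves
    (X'WX) delta = X'(y - p), so no learning rate is needed."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # current predicted probabilities
        W = p * (1 - p)                       # diagonal of the weight matrix
        grad = X.T @ (y - p)                  # score vector
        hess = X.T @ (X * W[:, None])         # observed information X'WX
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated data with true coefficients (-0.5, 1.5)
rng = np.random.default_rng(5)
n = 1000
x1 = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x1])
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * x1)))
y = (rng.uniform(size=n) < p_true).astype(float)

beta_hat = logistic_newton(X, y)
print(beta_hat)
```

In a MapReduce setting, the sums X'(y − p) and X'WX decompose over data chunks, so mappers compute partial sums and a reducer assembles the Newton step; the update rule itself is unchanged.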

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 252
18168 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables; a collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship, or curve, over time. An assumption of independent error terms is commonly made in existing profile monitoring studies. However, in many applications the profile data show correlations over time. We therefore focus on a general linear regression model with first-order autocorrelation between profiles. We propose an exponentially weighted moving average (EWMA) charting scheme to monitor this type of profile. A simulation study shows that the proposed methods outperform existing schemes based on the average run length criterion.
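The EWMA statistic underlying such a chart is z_t = λx_t + (1 − λ)z_{t−1}, compared against control limits derived from the smoothing constant. The sketch below monitors a generic sequence (for instance, an estimated profile coefficient per time period) with an asymptotic 3-sigma limit; the λ value and shift size are illustrative, and no autocorrelation adjustment is included:

```python
import numpy as np

def ewma(x, lam=0.2, z0=0.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}."""
    z = np.empty(len(x))
    prev = z0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    return z

# Monitored sequence: 30 in-control points, then a sustained 3-sigma shift
rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(3, 1, 20)])
z = ewma(x)

lam, sigma = 0.2, 1.0
ucl = 3 * sigma * np.sqrt(lam / (2 - lam))  # asymptotic upper control limit
signals = np.where(z > ucl)[0]
print(signals[0] if signals.size else None)
```

The small λ gives the chart memory: each point contributes only 20% of the statistic, so isolated noise is damped while a sustained shift accumulates and crosses the limit within a few periods, which is exactly the behaviour the average run length criterion rewards.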

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 436
18167 Breast Cancer Detection Using Machine Learning Algorithms

Authors: Jiwan Kumar, Pooja, Sandeep Negi, Anjum Rouf, Amit Kumar, Naveen Lakra

Abstract:

Health issues are increasing day by day in modern times, and breast cancer is among them; it is crucial to detect it in the early stages. Doctors can use this model to tell their patients whether a tumor is not harmful (benign) or harmful (malignant). We applied machine learning to produce the model, using algorithms such as logistic regression, random forest, support vector classifier, Bayesian network, and radial basis function. We visualize the results for the most informative features in pictures to make them easier for doctors to interpret. In doing so, we aim to make machine learning better at finding breast cancer, which can lead to more lives saved and better health care.
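A benign/malignant comparison of two of the named classifiers can be sketched on the well-known Wisconsin breast cancer dataset that ships with scikit-learn. This is a minimal illustration of the workflow, not the authors' code, data, or full model set:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Benign vs malignant classification on the scikit-learn breast cancer data
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(),
                              LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200,
                                            random_state=0),
}
# Held-out accuracy for each classifier
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```

Standardizing features before logistic regression is the key preprocessing step here; tree ensembles such as random forest are scale-invariant and need no scaler.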

Keywords: Bayesian network, radial basis function, ensemble learning, understandable, data making better, random forest, logistic regression, breast cancer

Procedia PDF Downloads 22
18166 Nurse-Patient Assignment: Case of Pediatrics Department

Authors: Jihene Jlassi, Ahmed Frikha, Wazna Kortli

Abstract:

The objectives of nurse-patient assignment are to minimize the overall hospital cost and to maximize nurses' preferences. This paper aims to assess nurses' satisfaction related to the implementation of patient acuity tool-based assignments. We use an integer linear program that assigns patients to nurses while balancing nurse workloads. The proposed model is applied to the Pediatrics Department at Kasserine Hospital, Tunisia, where patients need special acuities and high-level nursing skills and care. Numerical results suggest that the proposed nurse-patient assignment models can achieve a balanced assignment.
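An integer linear program of this kind can be sketched with binary assignment variables, a cover constraint (each patient gets exactly one nurse), and per-nurse acuity workload caps. All numbers below are hypothetical, and the objective is a generic mismatch cost rather than the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Tiny instance: 2 nurses, 4 patients. x[i, j] = 1 if nurse i gets patient j.
n_nurses, n_patients = 2, 4
cost = np.array([[2.0, 1.0, 3.0, 2.0],
                 [1.0, 2.0, 1.0, 3.0]])   # nurse-patient "mismatch" cost
acuity = np.array([3.0, 2.0, 4.0, 1.0])   # patient acuity workload
max_load = 6.0                            # acuity cap per nurse

c = cost.ravel()  # decision vector indexed as x[i * n_patients + j]

# Each patient is assigned to exactly one nurse
A_assign = np.zeros((n_patients, n_nurses * n_patients))
for j in range(n_patients):
    for i in range(n_nurses):
        A_assign[j, i * n_patients + j] = 1.0

# Each nurse's total acuity workload stays under the cap
A_load = np.zeros((n_nurses, n_nurses * n_patients))
for i in range(n_nurses):
    A_load[i, i * n_patients:(i + 1) * n_patients] = acuity

res = milp(c=c,
           constraints=[LinearConstraint(A_assign, 1, 1),
                        LinearConstraint(A_load, 0, max_load)],
           integrality=np.ones(c.size),
           bounds=Bounds(0, 1))
assignment = res.x.reshape(n_nurses, n_patients).round().astype(int)
print(assignment)
```

The workload cap is what does the balancing: the globally cheapest assignment here would overload one nurse, so the solver accepts a slightly costlier allocation that keeps both nurses within their acuity limits.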

Keywords: nurse-patient assignment, mathematical model, logistics, pediatrics department, balanced assignment

Procedia PDF Downloads 120