Search results for: forecast accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3851

3821 Improved Accuracy of Ratio Multiple Valuation

Authors: Julianto Agung Saputro, Jogiyanto Hartono

Abstract:

Multiple valuation is widely used by investors and practitioners, but its accuracy is questionable. Inaccuracies in multiple valuation stem from the unreliability of the information used in the valuation, poor selection of the comparison group, and reliance on individual multiples. This study investigated valuation accuracy to examine factors that can increase the accuracy of ratio multiple valuation: discretionary accruals, the comparison group, and composites of individual multiples. The results indicate that adjusting multiples for discretionary accruals provides better accuracy, and that selecting the industry comparison group jointly by company size and growth also provides better accuracy. A composite of individual multiples gives the best accuracy, and combining all of these factors yields the most accurate ratio multiple valuations.

Keywords: multiple, valuation, composite, accuracy

Procedia PDF Downloads 252
3820 The Impact of Corporate Social Responsibility Information Disclosure on the Accuracy of Analysts' Earnings Forecasts

Authors: Xin-Hua Zhao

Abstract:

In recent years, the number of social responsibility reports disclosed by Chinese corporations has grown rapidly, and the economic effects of this growing disclosure have become a hot topic. Taking listed Chinese chemical engineering corporations that disclose social responsibility reports as a sample, and drawing on information asymmetry theory, this article examines the economic effect of corporate social responsibility disclosure using ordinary least squares. The research is conducted from the perspective of analysts' earnings forecasts and studies the impact of corporate social responsibility information disclosure on the accuracy of analysts' earnings forecasts. The results show a statistically significant negative correlation between the corporate social responsibility disclosure index and analysts' earnings forecast error. The conclusions confirm that enterprises can reduce the asymmetry of social and environmental information by disclosing social responsibility reports and thus improve the accuracy of analysts' earnings forecasts, which promotes the effective allocation of resources in the market.

Keywords: analysts' earnings forecasts, corporate social responsibility disclosure, economic effect, information asymmetry

Procedia PDF Downloads 122
3819 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships here mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships arise when firms share a director or when the director owns another firm. To measure the relationships, the number of companies owned by the CEO, the number of companies managed by the CEO, and the number of subsidiaries were calculated; a dummy variable indicating the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were included as well, because they describe motivating factors for fraud. To test the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information was collected on person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies owned or managed by the firm's CEO has a significant impact on the likelihood of financial fraud: the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, a model based on both relational and financial data gives a high level of forecast accuracy.
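
A minimal sketch of a probit specification along the lines described above, using statsmodels; the file and column names are hypothetical stand-ins for the study's variables:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("firms.csv")  # 160 firms: 80 fraudsters, 80 non-fraudsters

# Relational and financial predictors described in the abstract
X = df[["ceo_owned_firms",    # companies belonging to the CEO
        "ceo_managed_firms",  # companies managed by the CEO
        "n_subsidiaries",     # number of subsidiaries
        "has_parent",         # dummy: parent company exists
        "leverage",           # financial leverage (control)
        "roa"]]               # return on assets (control)
X = sm.add_constant(X)
y = df["fraud"]               # 1 if engaged in financial crime, else 0

model = sm.Probit(y, X).fit()
print(model.summary())

# In-sample classification accuracy (the paper reports about 80%)
pred = (model.predict(X) >= 0.5).astype(int)
print("accuracy:", (pred == y).mean())
```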

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 87
3818 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm, and accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature: for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks; for instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast by combining several instances of Bayesian Model Averaging (BMA), and ensemble dressing methods identify the best member forecast and use it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so we must identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models from the literature. We then investigate whether developing a complex ensemble model is indeed needed, using the simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used in this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
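
As an illustration of the correlation- and standard-deviation-weighted ensembles described above, here is a sketch in Python; the weighting formulas are plausible choices, not necessarily the authors' exact ones, and the simple average serves as the benchmark:

```python
import numpy as np

def ensemble_weights(forecasts, observed, scheme="corr"):
    """Combine member forecasts (n_models x n_times) into one ensemble.

    Weights come from each member's agreement with observations over a
    training window -- correlation or inverse error standard deviation."""
    if scheme == "corr":
        raw = np.array([np.corrcoef(f, observed)[0, 1] for f in forecasts])
    else:  # inverse standard deviation of the error
        raw = np.array([1.0 / np.std(f - observed) for f in forecasts])
    return raw / raw.sum()

# Toy data: three member surge forecasts and the observed water level
rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 6, 200))
members = np.array([obs + rng.normal(0, s, 200) for s in (0.1, 0.2, 0.4)])

w = ensemble_weights(members, obs)
weighted = w @ members          # weighted ensemble forecast
simple = members.mean(axis=0)   # simple-average benchmark

rmse = lambda f: np.sqrt(np.mean((f - obs) ** 2))
print("weighted RMSE:", rmse(weighted), " simple RMSE:", rmse(simple))
```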

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 285
3817 IPO Valuation and Profitability Expectations: Evidence from the Italian Exchange

Authors: Matteo Bonaventura, Giancarlo Giudici

Abstract:

This paper analyses the valuation process of companies listed on the Italian Exchange in the period 2000-2009 at their Initial Public Offering (IPO). One of the most common valuation techniques declared in IPO prospectuses to determine the offer price is the Discounted Cash Flow (DCF) method. We develop a 'reverse engineering' model to discover the short-term profitability implied in the offer prices. We show that there is a significant optimistic bias in the estimation of future profitability compared to ex-post actual realizations, and the mean forecast error is substantially large. Yet we show that a similar error also characterizes the estimations carried out by analysts evaluating non-IPO companies. The forecast error is larger the faster the company's recent growth has been, the higher the leverage of the IPO firm, and the more companies have issued equity on the market. IPO companies generally exhibit better operating performance before the listing than comparable listed companies, while after the flotation they do not perform significantly differently in terms of return on invested capital. Pre-IPO book building activity plays a significant role in partially reducing the forecast error and revising expectations, while the market price on the first day of trading does not contain information that further reduces forecast errors.

Keywords: initial public offerings, DCF, book building, post-IPO profitability drop

Procedia PDF Downloads 319
3816 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models

Authors: Ramin Vafadary, Maryam Khanbaghi

Abstract:

Forecasting electricity load is important for purposes such as planning, operation, and control. Good forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, an LSTM network, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using two forecasting accuracy criteria: the mean absolute error and the root mean square error. The National Renewable Energy Laboratory (NREL) residential energy consumption data is used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust; the results show that by bagging multiple time-series forecasts, we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
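
A sketch of a 24-hour-ahead SARIMA forecast with a simple residual-bootstrap flavor of bagging, using statsmodels; the model order, file name, and number of bootstrap replicates are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hourly load series; the (1,1,1)x(1,1,1,24) order is a placeholder.
load = pd.read_csv("hourly_load.csv", index_col=0, parse_dates=True)["kw"]
train, test = load[:-24], load[-24:]

fit = SARIMAX(train, order=(1, 1, 1),
              seasonal_order=(1, 1, 1, 24)).fit(disp=False)

# Bagging: refit on residual-bootstrapped copies of the series and
# average the resulting 24-hour forecasts.
rng = np.random.default_rng(1)
resid = fit.resid.to_numpy()
bagged = []
for _ in range(10):
    noisy = train + rng.choice(resid, size=len(train))
    f = SARIMAX(noisy, order=(1, 1, 1),
                seasonal_order=(1, 1, 1, 24)).fit(disp=False)
    bagged.append(f.forecast(steps=24).to_numpy())
bagged = np.mean(bagged, axis=0)

mae = np.mean(np.abs(test.to_numpy() - bagged))
rmse = np.sqrt(np.mean((test.to_numpy() - bagged) ** 2))
print(f"bagged MAE={mae:.2f}  RMSE={rmse:.2f}")
```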

Keywords: bagging, Fbprophet, Holt-Winters, LSTM, load forecast, SARIMA, TensorFlow probability, time series

Procedia PDF Downloads 62
3815 Wind Power Forecast Error Simulation Model

Authors: Josip Vasilj, Petar Sarajcev, Damir Jakus

Abstract:

One of the major difficulties introduced by wind power penetration is the inherent uncertainty in production originating from uncertain wind conditions. This uncertainty impacts many aspects of power system operation, especially the balancing power requirements. For this reason, in power system development planning, it is necessary to evaluate the potential uncertainty in future wind power generation, which requires simulation models that reproduce the performance of wind power forecasts. This paper presents wind power forecast error simulation models based on stochastic process simulation. The proposed models capture the most important statistical parameters recognized in wind power forecast error time series. Two distinct models are presented based on data availability: the first uses wind speed measurements at potential or existing wind power plant locations, while the second uses the statistical distribution of wind speeds.
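
A minimal sketch of the kind of stochastic-process simulation described above: a stationary AR(1) forecast-error series matching a target standard deviation and lag-1 autocorrelation (the real models capture more statistical parameters than these two):

```python
import numpy as np

def simulate_forecast_error(n_steps, sigma, rho, rng=None):
    """Simulate a wind power forecast error series as a stationary AR(1)
    process with marginal standard deviation `sigma` and lag-1
    autocorrelation `rho` -- a Monte Carlo sketch, not the paper's model."""
    rng = rng or np.random.default_rng()
    e = np.empty(n_steps)
    e[0] = rng.normal(0, sigma)
    innov_sd = sigma * np.sqrt(1 - rho**2)  # keeps the marginal std at sigma
    for t in range(1, n_steps):
        e[t] = rho * e[t - 1] + rng.normal(0, innov_sd)
    return e

errors = simulate_forecast_error(n_steps=8760, sigma=0.12, rho=0.85)
print("std:", errors.std(),
      "lag-1 corr:", np.corrcoef(errors[:-1], errors[1:])[0, 1])
```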

Keywords: wind power, uncertainty, stochastic process, Monte Carlo simulation

Procedia PDF Downloads 451
3814 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for forecast horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on these evaluation indices, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
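
For reference, the evaluation metrics named above can be computed with a few lines of NumPy; this is a generic sketch, not the study's code:

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2))

def mae(obs, sim):
    return np.mean(np.abs(np.asarray(obs) - np.asarray(sim)))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; <= 0 means the model
    is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy observed vs. simulated water levels (m) at one forecast horizon
obs = np.array([1.02, 1.10, 1.15, 1.12, 1.08])
sim = np.array([1.00, 1.08, 1.17, 1.10, 1.05])
print(f"RMSE={rmse(obs, sim):.3f}  MAE={mae(obs, sim):.3f}  NSE={nse(obs, sim):.3f}")
```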

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 32
3813 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. The potential of storage is highest when it is connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and to the regulatory advantages enjoyed by fossil fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study deliberately excludes aggregated control of storage: the storage units are assumed to operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop peak-shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge: the aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage layers, and voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances by the occupants and are therefore impossible to predict accurately. A performant PS strategy must therefore include a smart charge recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State Of Charge (SOC) as a decision variable; and finally algorithms using a load forecast, for which the impact of forecast accuracy is discussed. A set of performance metrics was defined to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that constant charging thresholds or powers are far from optimal: a fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance, but they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
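
The SOC-based category above can be illustrated with a small control-loop sketch; the threshold, charge-recovery rate, and SOC target below are invented placeholders, not the paper's tuned values:

```python
def peak_shave_step(load_kw, soc_kwh, threshold_kw, capacity_kwh,
                    max_rate_kw, recovery_soc=0.5, dt_h=1 / 60):
    """One step of a SOC-based peak-shaving strategy (a sketch of the
    second algorithm category, not the authors' exact rules).

    Discharge whenever demand exceeds the threshold; recharge at a modest
    rate only while SOC is below a target, so that charging itself does
    not create new peaks. Returns (grid power, updated SOC)."""
    if load_kw > threshold_kw:                     # shave the peak
        discharge = min(load_kw - threshold_kw, max_rate_kw, soc_kwh / dt_h)
        return load_kw - discharge, soc_kwh - discharge * dt_h
    if soc_kwh / capacity_kwh < recovery_soc:      # gentle charge recovery
        headroom = threshold_kw - load_kw
        charge = min(0.3 * max_rate_kw, headroom,
                     (capacity_kwh - soc_kwh) / dt_h)
        return load_kw + charge, soc_kwh + charge * dt_h
    return load_kw, soc_kwh                        # idle

# Run over a short 1-minute load profile (kW)
soc, grid = 2.0, []
for load in [0.4, 0.5, 3.2, 4.1, 0.6, 0.3, 0.2]:
    g, soc = peak_shave_step(load, soc, threshold_kw=2.5,
                             capacity_kwh=4.0, max_rate_kw=3.0)
    grid.append(round(g, 2))
print(grid)  # grid demand never exceeds the 2.5 kW threshold
```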

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 95
3812 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
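
A sketch of the core averaging idea, soft-voting the three model families named above on synthetic data with scikit-learn; the real framework selects the three top performers among multiple specifications and retrains over time:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in features: timing variables, weather forecasts, past pollutant
# readings. Synthetic here; the study used RP4 weather and EPA data.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Average (soft-vote) the three model families named in the abstract
ensemble = VotingClassifier(
    estimators=[
        ("logit", make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nn", make_pipeline(StandardScaler(),
                             MLPClassifier(max_iter=2000, random_state=0))),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("combined accuracy:", ensemble.score(X_te, y_te))
```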

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 93
3811 Forecasting Model to Predict Dengue Incidence in Malaysia

Authors: W. H. Wan Zakiyatussariroh, A. A. Nasuhar, W. Y. Wan Fairos, Z. A. Nazatul Shahreen

Abstract:

Forecasting dengue incidence in a population can provide useful information to facilitate the planning of public health interventions. Many studies on dengue cases in Malaysia have been conducted, but few model the outbreak and forecast incidence. This article proposes the most appropriate time series model to explain the behavior of dengue incidence in Malaysia for the purpose of forecasting future dengue outbreaks. Several seasonal autoregressive integrated moving average (SARIMA) models were developed to model Malaysia's weekly dengue incidence, using data collected from January 2001 to December 2011. The SARIMA(2,1,1)(1,1,1)₅₂ model was found to be the most suitable for Malaysia's dengue incidence, with the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) for in-sample fitting. The models were further evaluated for out-of-sample forecast accuracy using four different accuracy measures. The results indicate that SARIMA(2,1,1)(1,1,1)₅₂ performed well in both in-sample fitting and out-of-sample evaluation.
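
The AIC/BIC-driven model selection described above can be sketched as a small grid search with statsmodels; the candidate grid and file name are illustrative assumptions:

```python
import itertools
import warnings
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Weekly dengue counts; season length 52 as in the paper. The candidate
# grid is illustrative -- the study's search may have been wider.
y = pd.read_csv("dengue_weekly.csv", index_col=0, parse_dates=True)["cases"]

warnings.filterwarnings("ignore")
best = None
for p, q, P, Q in itertools.product(range(3), range(2), range(2), range(2)):
    try:
        res = SARIMAX(y, order=(p, 1, q),
                      seasonal_order=(P, 1, Q, 52)).fit(disp=False)
        if best is None or res.aic < best[0]:
            best = (res.aic, res.bic, (p, 1, q), (P, 1, Q, 52))
    except Exception:
        continue  # skip non-converging candidates

print("best by AIC:", best)  # paper: SARIMA(2,1,1)(1,1,1)_52
```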

Keywords: time series modeling, Box-Jenkins, SARIMA, forecasting

Procedia PDF Downloads 447
3810 Load Forecast of the Peak Demand Based on Both the Peak Demand and Its Location

Authors: Qais H. Alsafasfeh

Abstract:

The aim of this paper is to provide a forecast of peak demand for the next 15 years for electrical distribution companies. The proposed methodology provides both the peak demand and its location over that horizon. The paper describes the spatial load forecasting model used, the information provided by an electrical distribution company in Jordan, the workflow followed, the parameters used, and the assumptions made to run the model.

Keywords: load forecast, peak demand, spatial load, electrical distribution

Procedia PDF Downloads 464
3809 Impact of Climate on Sugarcane Yield Over Belagavi District, Karnataka Using Statistical Model

Authors: Girish Chavadappanavar

Abstract:

The impact of climate on agriculture could result in food security problems and may threaten the livelihood activities upon which much of the population depends. In the present study, a statistical yield forecast model was developed for sugarcane production over Belagavi district, Karnataka, using weather variables of the crop growing season and past observed yield data for the period 1971 to 2010. The study shows that this type of statistical yield forecast model can efficiently forecast sugarcane yield 5 weeks and even 10 weeks in advance of the harvest within an acceptable limit of error. The performance of the model in predicting yields at the district level is found to be quite satisfactory for both validation (2007 and 2008) and forecasting (2009 and 2010). In addition, the climate variability of the area was studied, and the data series was tested with the Mann-Kendall rank statistical test. The maximum and minimum temperatures were found to be significant with opposite trends (a decreasing trend in maximum and an increasing trend in minimum temperature), while the other three variables were found insignificant, with different trends (rainfall and evening relative humidity increasing, morning relative humidity decreasing).
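
A minimal, tie-uncorrected implementation of the Mann-Kendall trend test applied in the climate-variability analysis above (the study's exact test procedure may differ):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    """Mann-Kendall rank test for a monotonic trend (no tie correction).

    Returns (trend direction, significant at alpha, two-sided p-value).
    A production version should add the tie correction to var_s."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    trend = "increasing" if z > 0 else "decreasing" if z < 0 else "no trend"
    return trend, p < alpha, p

# Toy annual maximum-temperature series (deg C)
tmax = [31.2, 31.0, 30.8, 30.9, 30.5, 30.4, 30.6, 30.1]
print(mann_kendall(tmax))
```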

Keywords: climate impact, regression analysis, yield and forecast model, sugar models

Procedia PDF Downloads 34
3808 Time Series Modelling for Forecasting Wheat Production and Consumption of South Africa in Time of War

Authors: Yiseyon Hosu, Joseph Akande

Abstract:

Wheat has been one of the most important staple food grains for humans for centuries and is widely consumed in South Africa. It has a special place in the South African economy because of its significance in food security, trade, and industry. This paper models and forecasts the production and consumption of wheat in South Africa in the time of COVID-19 and the ongoing Russia-Ukraine war, using annual time series data from 1940-2021 and ARIMA models. Both the averaged forecast and the selected model's forecast indicate that production is likely to increase, with minimum and maximum growth in production projected at between 3 million and 10 million tons, respectively. However, the models also forecast a possible decline in consumption in South Africa. Although COVID-19 and the war between Ukraine and Russia, two major producers and exporters of global wheat, are currently contributing to price volatility, wheat production in South Africa is expected to increase, meet consumption demand, and provide an opportunity for increased exports relative to domestic consumption. Forecasting the production and consumption behaviour of major crops plays an important role in food and nutrition security; these findings can assist policymakers and provide insight into the production and pricing policy of wheat in South Africa.

Keywords: ARIMA, food security, price volatility, staple food, South Africa

Procedia PDF Downloads 70
3807 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm

Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu

Abstract:

Forecasting models have a great impact in terms of prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on different fuzzy-time-series-based methods to solve forecasting problems. A forecasting model's accuracy depends largely on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Various hybrid forecasting models have combined fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model that deals with first-order as well as high-order fuzzy time series and uses particle swarm optimization to improve forecast accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. First, an automatic clustering algorithm is used to calculate appropriate intervals for the historical enrollments; then particle swarm optimization and fuzzy time series are combined, which yields better forecasting accuracy than other existing forecasting models.

Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model

Procedia PDF Downloads 219
3806 Objective-Based System Dynamics Modeling to Forecast the Number of Health Professionals in Pudong New Area of Shanghai

Authors: Jie Ji, Jing Xu, Yuehong Zhuang, Xiangqing Kang, Ying Qian, Ping Zhou, Di Xue

Abstract:

Background: In 2014, there were 28,341 health professionals in Pudong new area of Shanghai, and the number per 1000 population was 5.199, 55.55% higher than in 2006 but consistently lower than the Shanghai-wide average number of health professionals per 1000 population from 2006 to 2014. Allocation planning for health professionals in Pudong new area has therefore become a high-priority task in order to meet future demands for health care. In this study, we constructed an objective-based system dynamics model to forecast the number of health professionals in Pudong new area of Shanghai in 2020. Methods: We collected data from health statistics reports and a previous survey of human resources in Pudong new area of Shanghai. Nine experts from health administrative departments, public hospitals, and community health service centers were consulted to estimate the current and future status of nine variables used in the system dynamics model. Based on the objective for the number of health professionals per 1000 population in Shanghai for 2020 (8.0), the system dynamics model for health professionals in Pudong new area was constructed to forecast the number of health professionals needed in 2020. Results: The model forecasts that there will be 37,330 health professionals (6.433 per 1000 population) in 2020. If the success rate of health professional recruitment changed from 20% to 70%, the number of health professionals per 1000 population would change from 5.269 to 6.919; if this rate changed from 20% to 70% and the success rate of building new beds changed from 5% to 30% at the same time, the number per 1000 population would change from 5.269 to 6.923. Conclusions: The system dynamics model can be used to simulate and forecast health professionals, but without significant changes in health policies and the management system, the number of health professionals per 1000 population will not reach the objective for Pudong new area in 2020.
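
A toy stock-flow sketch of the kind of system dynamics model described above; every rate below is an invented placeholder rather than the experts' estimates:

```python
# Minimal stock-flow simulation: the stock of health professionals
# integrates recruitment inflow minus attrition outflow each year.
stock = 28_341          # health professionals in 2014 (from the abstract)
population = 5_450_000  # assumed Pudong population, for the ratio only
attrition_rate = 0.04   # assumed annual outflow (retirement, turnover)
recruit_target = 2_800  # assumed annual recruitment demand
recruit_success = 0.20  # success rate of health professional recruitment

for year in range(2015, 2021):
    inflow = recruit_target * recruit_success
    outflow = stock * attrition_rate
    stock += inflow - outflow        # stock integrates the net flow
    population *= 1.01               # assumed population growth

print(f"2020 stock: {stock:,.0f}  per 1000: {1000 * stock / population:.3f}")
```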

Keywords: allocation planning, forecast, health professional, system dynamics

Procedia PDF Downloads 355
3805 Loan Supply and Asset Price Volatility: An Experimental Study

Authors: Gabriele Iannotta

Abstract:

This paper investigates credit cycles by means of an experiment based on a Kiyotaki & Moore (1997) model with heterogeneous expectations. The aim is to examine how a credit squeeze caused by high lender-level risk perceptions affects the real prices of a collateralised asset, with a special focus on the macroeconomic implications of rising price volatility in terms of total welfare and the number of bankruptcies that occur. To do that, a learning-to-forecast experiment (LtFE) has been run where participants are asked to predict the future price of land and then rewarded based on the accuracy of their forecasts. The setting includes one lender and five borrowers in each of the twelve sessions split between six control groups (G1) and six treatment groups (G2). The only difference is that while in G1 the lender always satisfies borrowers’ loan demand (bankruptcies permitting), in G2 he/she closes the entire credit market in case three or more bankruptcies occur in the previous round. Experimental results show that negative risk-driven supply shocks amplify the volatility of collateral prices. This uncertainty worsens the agents’ ability to predict the future value of land and, as a consequence, the number of defaults increases and the total welfare deteriorates.

Keywords: behavioural macroeconomics, credit cycle, experimental economics, heterogeneous expectations, learning-to-forecast experiment

Procedia PDF Downloads 107
3804 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model have generally used panel data and single-equation models to forecast stock prices. In contrast, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models, namely the vector autoregressive (VAR) model and the vector error correction model (VECM), and conducts out-of-sample forecasting, assessing forecasting performance with two instruments. Consistent with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information helps to improve forecasting accuracy. Moreover, the composite pattern shows that at longer horizons, the VECM produces a greater reduction in errors and performs substantially better than the VAR.
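
A sketch of VECM estimation and out-of-sample forecasting with statsmodels; the file, columns, lag order, and forecast horizon are illustrative assumptions, not the paper's specification:

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Columns: stock price plus RIV fundamentals (e.g., book value, residual
# income). File and column layout are placeholders for the Taiwan data.
data = pd.read_csv("sector_riv.csv", index_col=0, parse_dates=True)
train, test = data[:-12], data[-12:]

# Cointegration rank via the trace test, then fit the VECM
rank = select_coint_rank(train, det_order=0, k_ar_diff=2).rank
res = VECM(train, k_ar_diff=2, coint_rank=rank, deterministic="ci").fit()

# Out-of-sample forecast of all variables, 12 steps ahead
forecast = res.predict(steps=12)
rmse = ((forecast[:, 0] - test.iloc[:, 0].to_numpy()) ** 2).mean() ** 0.5
print("price RMSE:", rmse)
```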

Keywords: residual income valuation model, vector error correction model, out of sample forecasting, forecasting accuracy

Procedia PDF Downloads 287
3803 Propagation of DEM Varying Accuracy into Terrain-Based Analysis

Authors: Wassim Katerji, Mercedes Farjas, Carmen Morillo

Abstract:

Terrain-based analysis derives products from an input DEM, and these products are needed to perform various analyses. To use these products efficiently in decision-making, their accuracies must be estimated systematically. This paper proposes a procedure to assess the accuracy of such derived products by calculating the accuracy of the slope dataset and its significance, taking the accuracy of the DEM as input. Building on previously published research on modeling the relative accuracy of a DEM, specifically the ASTER and SRTM DEMs covering Lebanon as the study area, the analysis showed that ASTER has low significance over the majority of the area, with only 2% of the modeled terrain reaching 50% or more significance. SRTM, on the other hand, showed better significance, with 37% of the modeled terrain reaching 50% or more significance. Statistical analysis showed that the accuracy of the slope dataset, calculated on a cell-by-cell basis, is highly correlated with the accuracy of the input DEM. This correlation is lower between the slope accuracy and the slope significance, whereas it is much higher between the modeled slope and the slope significance.
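
One common way to propagate DEM error into slope accuracy is Monte Carlo perturbation; the sketch below illustrates the idea on a toy grid and is not the paper's cell-by-cell procedure:

```python
import numpy as np

def slope_deg(dem, cell=30.0):
    """Slope in degrees from a DEM grid via central differences."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def slope_accuracy(dem, dem_rmse, cell=30.0, n_sim=200, seed=0):
    """Monte Carlo propagation of DEM error into the slope dataset:
    perturb the DEM with noise of the stated vertical RMSE and return
    the per-cell standard deviation of the resulting slopes."""
    rng = np.random.default_rng(seed)
    sims = [slope_deg(dem + rng.normal(0, dem_rmse, dem.shape), cell)
            for _ in range(n_sim)]
    return np.std(sims, axis=0)

# Toy sloping terrain; 8 m vertical RMSE is an illustrative DEM accuracy
dem = np.outer(np.linspace(800, 900, 50), np.linspace(1.0, 1.1, 50))
print("mean slope accuracy (deg):", slope_accuracy(dem, dem_rmse=8.0).mean())
```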

Keywords: terrain-based analysis, slope, accuracy assessment, Digital Elevation Model (DEM)

Procedia PDF Downloads 420
3802 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

In recent years, there has been a desire to forecast students' academic achievement prior to graduation, in order to help them improve their grades, particularly individuals with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model for student academic achievement. Many academics have already constructed models that predict student academic achievement from factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few; these features and the models employed may not have correctly classified students in terms of their academic performance. The model in this study is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student covers per semester, in order to predict whether the student will perform well in future related courses. The model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning 96.7% accuracy. It is available as a desktop application, allowing both instructors and students to benefit from user-friendly interfaces for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.
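
A minimal scikit-learn sketch of a logistic regression classifier with the features named above; the CSV and column names are hypothetical stand-ins for the study's dataset:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Features mirroring the abstract: prior score, attendance, participation,
# and materials covered per semester.
df = pd.read_csv("students.csv")
X = df[["prev_course_score", "attendance",
        "participation", "materials_covered"]]
y = df["performs_well"]  # 1 = performs well on related courses

clf = make_pipeline(StandardScaler(), LogisticRegression())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # paper: 96.7%
```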

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 64
3801 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. The study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new daily cases, total deaths registered, and daily deaths due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data was split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support vector machines (SVM), random forests, and linear regression algorithms were chosen to study model performance in predicting new COVID-19 cases, and statistical performance was evaluated with metrics such as the r-squared value and the mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm can predict new COVID cases more effectively and efficiently, which could help the health sector take relevant control measures against the spread of the virus.

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 87
3800 Short Life Cycle Time Series Forecasting

Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar

Abstract:

The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times, and increased product diversity. Short life cycles are normal in the retail industry, the style business, entertainment media, and the telecom and semiconductor industries. Accurate demand forecasting for short-life-cycle products is of special interest to many researchers and organizations. Because of the short life cycle, the amount of historical data available for forecasting is minimal, or even absent when new or modified products are launched. Companies dealing with such products want to increase forecasting accuracy so that they can exploit the full potential of the market without oversupplying. The challenge is therefore to develop a forecasting model that forecasts accurately while handling large variations in the data and considering the complex relationships between its various parameters. Many statistical models have been proposed in the literature for forecasting time series data, but traditional time series forecasting models do not work well for short life cycles due to the lack of historical data, and artificial neural network (ANN) models are very time-consuming for forecasting. We have studied the existing forecasting models and their limitations, and this work proposes an effective and powerful approach for short-life-cycle time series forecasting. The proposed approach takes into consideration different scenarios of data availability for short-life-cycle products, and we suggest a methodology that combines statistical analysis with structured judgement; the approach can also be applied across domains. We then describe the method of creating a profile from analogous products, which can be used to forecast products using the historical data of those analogous products. We have designed an application that combines data, analytics, and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE, and RMSE error scores. Conclusion: based on the results, no single approach is sufficient for short-life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.

Keywords: forecast, short life cycle product, structured judgement, time series

Procedia PDF Downloads 323
3799 Forecasting the Influences of Information and Communication Technology on the Structural Changes of Japanese Industrial Sectors: A Study Using Statistical Analysis

Authors: Ubaidillah Zuhdi, Shunsuke Mori, Kazuhisa Kamegai

Abstract:

The purpose of this study is to forecast the influences of Information and Communication Technology (ICT) on structural changes in the Japanese economy based on Leontief input-output (IO) coefficients. This study establishes a statistical analysis to predict future interrelationships among industries. We employ the Constrained Multivariate Regression (CMR) model to analyze the historical changes of input-output coefficients, and the statistical significance of the model is tested with the likelihood ratio test (LRT). In our model, ICT is represented by two explanatory variables: computers (including main parts and accessories) and telecommunications equipment. A previous study, which analyzed the influences of these variables on the structural changes of Japanese industrial sectors from 1985 to 2005, concluded that they had significant influences on the changes in the business circumstances of the Japanese commerce, business services and office supplies, and personal services sectors. The projected future Japanese economic structure based on the above forecast generates differentiated direct and indirect outcomes of ICT penetration.

Keywords: forecast, ICT, industrial structural changes, statistical analysis

Procedia PDF Downloads 348
3798 In- and Out-of-Sample Performance of Nonsymmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated by calculating mean absolute error measures on the basis of 253-month rolling-window forecasts, and the evaluation is extended to three additional models: a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR), and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have quite poor relative forecasting performance, especially when compared to alternative nonlinear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner unit root behaviour in five of the sixteen countries analyzed.

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 254
3797 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, over space, or over time and space. A GP is an underlying process, formed by unobserved operators, used to solve a problem. The purpose of this paper is to present how to forecast wind power using a GP: the Gaussian process method for forecasting is presented, and to validate the approach, a simulation in the MATLAB environment is given.
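
A short scikit-learn sketch of GP regression for wind power forecasting with uncertainty bands, standing in for the paper's MATLAB simulation; the data and kernel choice are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in: learn a wind speed -> normalized power mapping and
# predict with uncertainty.
rng = np.random.default_rng(0)
speed = rng.uniform(3, 25, 80).reshape(-1, 1)                        # m/s
power = np.tanh((speed.ravel() - 3) / 8) + rng.normal(0, 0.05, 80)   # p.u.

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel,
                              normalize_y=True).fit(speed, power)

grid = np.linspace(3, 25, 50).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)  # forecast +/- uncertainty
print(mean[:3], std[:3])
```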

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 374
3796 The Role of Inventory Classification in Supply Chain Responsiveness in a Build-to-Order and Build-To-Forecast Manufacturing Environment: A Comparative Analysis

Authors: Qamar Iqbal

Abstract:

Companies strive to improve their forecasting methods to predict fluctuations in customer demand. These fluctuations and variations in demand affect manufacturing operations and can limit a company's ability to fulfill customer demand on time, so companies keep inventory buffers and maintain stocking levels to reduce the impact of demand variation. A mid-size company deals with thousands of stock keeping units (SKUs), and it is neither easy nor efficient to control and manage each SKU individually. Inventory classification provides a tool for management to increase its ability to support customer demand. The paper presents a framework showing how inventory classification can increase supply chain responsiveness. A case study is presented to further elaborate the method for both build-to-order and build-to-forecast manufacturing environments, and the results are compared to show which manufacturing setting has the advantage under different circumstances. The outcome of this study is useful to management because it provides insight into how inventory classification can be used to increase the ability to respond to changing customer needs.
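
Inventory classification of the kind discussed above is often done with an ABC scheme; below is a generic pandas sketch using the conventional 80/15/5 cumulative-value thresholds, not necessarily the paper's scheme:

```python
import pandas as pd

def abc_classify(df, value_col="annual_usage_value"):
    """Classic ABC inventory classification: rank SKUs by annual usage
    value and split on cumulative share (A: top 80%, B: next 15%, C: rest)."""
    out = df.sort_values(value_col, ascending=False).copy()
    share = out[value_col].cumsum() / out[value_col].sum()
    out["class"] = pd.cut(share, bins=[0, 0.80, 0.95, 1.0],
                          labels=["A", "B", "C"], include_lowest=True)
    return out

skus = pd.DataFrame({
    "sku": ["P1", "P2", "P3", "P4", "P5"],
    "annual_usage_value": [120_000, 45_000, 9_000, 4_000, 1_500],
})
print(abc_classify(skus))
```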

Keywords: inventory classification, supply chain responsiveness, forecast, manufacturing environment

Procedia PDF Downloads 571
3795 The Reliability of Management Earnings Forecasts in IPO Prospectuses: A Study of Managers’ Forecasting Preferences

Authors: Maha Hammami, Olfa Benouda Sioud

Abstract:

This study investigates the reliability of management earnings forecasts with reference to two ingredients: verifiability and neutrality. Specifically, we examine the biasedness (or accuracy) of management earnings forecasts and the company-specific characteristics that can be associated with accuracy. Based on a sample of 102 IPO prospectuses published for admission on NYSE Euronext Paris from 2002 to 2010, we find that these forecasts are on average optimistic and that two of the five test variables, earnings variability and financial leverage, are significant in explaining ex-post bias. Acknowledging the possibility that the bias results from managers' forecasting behavior, we then examine whether managers decide to under-predict, over-predict, or forecast accurately for self-serving purposes, looking at the role of financial distress, operating performance, insider ownership, and the state of the economy in influencing managers' forecasting preferences. We find that managers of distressed firms seem to over-predict future earnings, and that when managers are given more stock options, they tend to under-predict future earnings. Finally, we conclude that management earnings forecasts are affected by an intentional bias due to managers' forecasting preferences.

Keywords: intentional bias, management earnings forecasts, neutrality, verifiability

Procedia PDF Downloads 204
3794 Natural Gas Production Forecasts Using Diffusion Models

Authors: Md. Abud Darda

Abstract:

Different options for natural gas production over wide geographic areas may be described through diffusion-of-innovation models. This modeling approach provides an indirect estimate of the ultimately recoverable resource (URR), captures the quantitative effects of observed strategic interventions, and allows ex-ante assessments of future scenarios over time. To ensure a sustainable energy policy, it is important to forecast the availability of this natural resource. Considering a finite life cycle, in this paper we investigate the natural gas production of Myanmar and Algeria, two important natural gas providers in the world energy market. A number of homogeneous and heterogeneous diffusion models, with convenient extensions, have been used, and model validation has been performed in terms of prediction capability.
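
A sketch of fitting one homogeneous diffusion model, the Bass model, to a cumulative production series with SciPy; the data here are synthetic, and the study also fits heterogeneous variants:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative Bass-diffusion production: m is the ultimately
    recoverable resource (URR), p the innovation and q the imitation rate."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Toy cumulative production series (e.g., bcm, summed over years)
t = np.arange(30, dtype=float)
cum = bass_cumulative(t, m=500.0, p=0.01, q=0.25)
cum_obs = cum + np.random.default_rng(0).normal(0, 3, t.size)  # noisy data

(m, p, q), _ = curve_fit(bass_cumulative, t, cum_obs, p0=[600, 0.02, 0.2])
print(f"URR estimate: {m:.0f}, p={p:.3f}, q={q:.3f}")
```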

Keywords: diffusion models, energy forecast, natural gas, nonlinear production

Procedia PDF Downloads 201
3793 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research, and a number of studies have attempted to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the breast cancer tumor process; however, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the prediction accuracy of breast cancer progression, using an original model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows different growth periods of the primary tumor and primary metastases to be calculated: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, while the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period in which primary metastases appear; c) have higher average prediction accuracy than other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The CoM-IV calculates the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, as well as the tumor volume doubling time (in days) for those periods. It enables, for the first time, the whole natural history of primary tumor and primary metastases growth to be predicted at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of the IV (T1-4N0-3M1) stage, with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates understanding of the appearance period and manifestation of primary metastases.
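
The doubling arithmetic underlying an exponential growth model like CoM-IV reduces to a couple of lines; this is a generic illustration with toy volumes, not the CoM-IV equations:

```python
import math

def doublings(v_start_mm3, v_end_mm3):
    """Number of volume doublings under exponential growth."""
    return math.log2(v_end_mm3 / v_start_mm3)

def doubling_time_days(v_start_mm3, v_end_mm3, elapsed_days):
    """Tumor volume doubling time implied by two volume measurements."""
    return elapsed_days / doublings(v_start_mm3, v_end_mm3)

# Toy numbers: growth from a 1 mm^3 focus to a 1 cm^3 (1000 mm^3) tumor
print("doublings:", doublings(1, 1000))                # ~9.97
print("DT (days):", doubling_time_days(1, 1000, 730))  # ~73 days
```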

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 312
3792 Load Forecasting Using Neural Network Integrated with Economic Dispatch Problem

Authors: Mariyam Arif, Ye Liu, Israr Ul Haq, Ahsan Ashfaq

Abstract:

The high cost of fossil fuels and the intensifying installation of alternative energy generation sources are among the main challenges in power systems, making accurate load forecasting an important and challenging task for optimal energy planning and management at both the distribution and generation sides. There are many techniques to forecast load, but each comes with its own limitations and requires suitable data to predict the load accurately. The Artificial Neural Network (ANN) is one such technique for efficiently forecasting load. A comparison between two different ranges of input datasets has been applied to a dynamic ANN technique using the MATLAB Neural Network Toolbox. It was observed that the selection of input data for training a network has significant effects on the forecasted results: day-wise input data forecasted the load accurately compared to year-wise input data. The forecasted load is then distributed among six generators using linear programming to find the optimal generation point, and the algorithm is verified by comparing the results for each generator against its generation limits.
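
The generator-allocation step described above can be sketched as a linear program with SciPy; the costs, limits, and forecasted load below are invented placeholders:

```python
import numpy as np
from scipy.optimize import linprog

# Dispatch a forecasted load across six generators at minimum linear cost.
cost = np.array([12.0, 14.5, 10.0, 20.0, 16.0, 11.5])    # $/MWh
p_min = np.array([ 50,  40,  80,  20,  30,  60], float)  # MW lower limits
p_max = np.array([300, 250, 400, 150, 200, 350], float)  # MW upper limits
forecast_load = 900.0                                     # MW from the ANN

res = linprog(c=cost,
              A_eq=np.ones((1, 6)), b_eq=[forecast_load],  # generation = load
              bounds=list(zip(p_min, p_max)),
              method="highs")
print("dispatch (MW):", res.x.round(1), " cost ($/h):", res.fun)
```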

Keywords: artificial neural networks, demand-side management, economic dispatch, linear programming, power generation dispatch

Procedia PDF Downloads 162