Search results for: price prediction
3107 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model
Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong
Abstract:
In recent years, real estate prediction or valuation has been a topic of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, making it harder for many consumers to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network that has been popular in recent years together with the orthogonal array of the Taguchi method, this study aimed to find the optimal parameter combination across the levels of the orthogonal array after the system presented different parameter combinations, so that the artificial neural network obtained the most accurate results. The experimental results also demonstrated that the method presented in the study performed better than traditional machine learning. Finally, the model proposed in this study had the best predictive effect and could significantly reduce the time cost of the simulation; the best predictive results could be found more efficiently with fewer experiments. Thus, users could predict a real estate transaction price that is not far from current actual prices. Keywords: artificial neural network, Taguchi method, real estate valuation model, investors
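As a rough illustration of screening neural-network hyperparameters with a Taguchi orthogonal array, the sketch below runs a standard L9 array over three assumed factors (hidden units, learning rate, momentum) of an MLP regressor on synthetic appraisal-like data; the factor choices, levels, and data are illustrative assumptions, not the authors' actual design.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# First three columns of the standard L9 orthogonal array: 9 runs, 3 factors at 3 levels.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
hidden_units  = [8, 16, 32]            # factor A levels (assumed)
learning_rate = [0.001, 0.005, 0.01]   # factor B levels (assumed)
momentum      = [0.5, 0.7, 0.9]        # factor C levels (assumed)

# Synthetic stand-in for real estate data (features -> standardized transaction price).
X, y = make_regression(n_samples=300, n_features=6, noise=15.0, random_state=0)
y = (y - y.mean()) / y.std()

results = []
for a, b, c in L9:
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(hidden_units[a],), solver="sgd",
                     learning_rate_init=learning_rate[b], momentum=momentum[c],
                     max_iter=2000, random_state=0),
    )
    mae = -cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error").mean()
    results.append(((hidden_units[a], learning_rate[b], momentum[c]), mae))

best = min(results, key=lambda r: r[1])
print("best (hidden units, learning rate, momentum):", best[0], "MAE:", round(best[1], 3))
```

Nine runs cover the 3x3x3 design space instead of the 27 runs a full grid would need, which is the time saving the abstract refers to.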
3106 Machine Learning Techniques to Develop Traffic Accident Frequency Prediction Models
Authors: Rodrigo Aguiar, Adelino Ferreira
Abstract:
Road traffic accidents are the leading cause of unnatural death and injury worldwide, representing a significant road safety problem. In this context, the use of artificial intelligence with advanced machine learning techniques has gained prominence as a promising approach to predicting traffic accidents. This article investigates the application of machine learning algorithms to develop traffic accident frequency prediction models. Models are evaluated based on performance metrics, making it possible to carry out a comparative analysis with traditional prediction approaches. The results suggest that machine learning can provide a powerful tool for accident prediction, which will contribute to making more informed decisions regarding road safety. Keywords: machine learning, artificial intelligence, frequency of accidents, road safety
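The abstract does not name specific algorithms, so the sketch below simply contrasts a Poisson regression baseline (a traditional count-data approach) with a gradient-boosting model on synthetic accident counts, scoring both with mean absolute error; the features, data-generating process, and model choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Assumed road-segment features: traffic volume, speed limit, curvature, lanes.
X = np.column_stack([
    rng.uniform(500, 20000, n),      # annual average daily traffic
    rng.uniform(40, 120, n),         # speed limit (km/h)
    rng.uniform(0, 1, n),            # curvature index
    rng.integers(1, 5, n),           # number of lanes
])
rate = np.exp(-8 + 0.0002 * X[:, 0] + 0.02 * X[:, 1] + 1.5 * X[:, 2])
y = rng.poisson(rate)                # accident counts per segment-year

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "Poisson GLM (traditional baseline)": make_pipeline(StandardScaler(),
                                                        PoissonRegressor(max_iter=1000)),
    "Gradient boosting (machine learning)": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE = {mean_absolute_error(y_te, pred):.3f}")
```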
3105 Potentials and Influencing Factors of Dynamic Pricing in Business: Empirical Insights of European Experts
Authors: Christopher Reichstein, Ralf-Christian Härting, Martina Häußler
Abstract:
With the continuously increasing speed of information exchange on the World Wide Web, retailers in the E-Commerce sector are faced with immense possibilities regarding different online purchase processes such as dynamic price setting. By using Dynamic Pricing, retailers are able to make short-term price changes in order to optimize producer surplus. The empirical research illustrates the basics of Dynamic Pricing and identifies six influencing factors of Dynamic Pricing. The results of a structural equation modeling approach show five main drivers increasing the potential of dynamic price setting in E-Commerce. Influencing factors are the knowledge of customers’ individual willingness to pay, rising sales, the possibility of customization, the data volume, and information about competitors’ pricing strategy. Keywords: e-commerce, empirical research, experts, dynamic pricing (DP), influencing factors, potentials
3104 Optimal Policies in a Two-Level Supply Chain with Defective Product and Price Dependent Demand
Authors: Samira Mohabbatdar, Abbas Ahmadi, Mohsen S. Sajadieh
Abstract:
This paper deals with a two-level supply chain consisting of one manufacturer and one retailer for a single-type product. The demand function of the customers depends on price. We consider an integrated production-inventory system where the manufacturer processes raw materials in order to deliver a finished product of imperfect quality to the retailer. The retailer then inspects the products and delivers the perfect products to customers. The proposed model is based on the joint total profit of both the manufacturer and the retailer, and it determines the optimal ordering lot-size, number of shipments, and selling price of the retailer. A numerical example is provided to analyse and illustrate the behaviour and application of the model. Finally, a sensitivity analysis of the key parameters is presented to test the feasibility of the model. Keywords: supply chain, pricing policy, defective quality, joint economic lot sizing
3103 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences
Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur
Abstract:
In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory’s value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should only emerge under certain boundary conditions. In Study 1, a published dataset was reanalyzed, in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed, in which 84 participants were asked to provide selling and buying prices for monetary lotteries in three deliberation time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. Post-hoc tests revealed main effects of perspective both in the condition with 5s deliberation time and in the condition with 10s deliberation time, but not in the 15s condition. Thus, sellers’ EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers’ EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one’s perspective in a trading negotiation may improve price accuracy. Keywords: decision making, endowment effect, pricing, loss aversion, loss attention
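The per-participant EV-sensitivity measure and the group comparison described above can be sketched in a few lines; here simulated prices stand in for the published datasets, and the seller/buyer noise levels are arbitrary assumptions used only to show the computation.

```python
import numpy as np
from scipy.stats import spearmanr, f_oneway

rng = np.random.default_rng(0)
ev = np.array([2.0, 3.5, 5.0, 7.5, 10.0, 12.5, 15.0, 20.0])   # lottery expected values

def simulate_prices(n_participants, noise_sd):
    # Each participant prices every lottery; noisier pricing -> lower EV sensitivity.
    return ev + rng.normal(0, noise_sd, size=(n_participants, ev.size))

sellers = simulate_prices(76, noise_sd=2.0)    # assumed: sellers track EV more closely
buyers  = simulate_prices(76, noise_sd=6.0)

def ev_sensitivity(prices):
    # Spearman rank correlation between a participant's prices and the lotteries' EVs.
    return np.array([spearmanr(row, ev)[0] for row in prices])

sens_sellers = ev_sensitivity(sellers)
sens_buyers = ev_sensitivity(buyers)
F, p = f_oneway(sens_sellers, sens_buyers)     # one-way ANOVA on perspective
print(f"mean sensitivity: sellers={sens_sellers.mean():.2f}, buyers={sens_buyers.mean():.2f}")
print(f"F(1,{len(sens_sellers) + len(sens_buyers) - 2}) = {F:.2f}, p = {p:.4f}")
```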
3102 Performance Analysis of Bluetooth Low Energy Mesh Routing Algorithm in Case of Disaster Prediction
Authors: Asmir Gogic, Aljo Mujcic, Sandra Ibric, Nermin Suljanovic
Abstract:
The ubiquity of natural disasters during the last few decades has raised serious questions about the prediction of such events and human safety. Every disaster, regardless of its scale, has a precursor that is manifested as a disruption of some environmental parameter such as temperature, humidity, pressure, or vibration. In order to anticipate and monitor those changes, in this paper we propose an overall system for disaster prediction and monitoring based on a wireless sensor network (WSN). Furthermore, we introduce a modified and simplified WSN routing protocol built on top of the trickle routing algorithm. The routing algorithm was deployed using the Bluetooth Low Energy protocol in order to achieve low power consumption. Performance of the WSN was analyzed using a real-life system implementation. Estimates of the WSN parameters such as battery lifetime, network size, and packet delay are determined. Based on the performance of the WSN, the proposed system can be utilized for disaster monitoring and prediction due to its low power profile and mesh routing feature. Keywords: bluetooth low energy, disaster prediction, mesh routing protocols, wireless sensor networks
3101 Intelligent Earthquake Prediction System Based On Neural Network
Authors: Emad Amar, Tawfik Khattab, Fatma Zada
Abstract:
Predicting earthquakes is an important issue in the study of geography. Accurate prediction of earthquakes can help people to take effective measures to minimize personal and economic losses, such as large numbers of casualties, destruction of buildings, and disruption of traffic, which can occur within a few seconds. The United States Geological Survey (USGS) provides reliable scientific information on earthquakes throughout history, and the preliminary database from the National Earthquake Information Center (NEIC) shows some useful factors for predicting an earthquake in a seismic area such as the Aleutian Arc in the U.S. state of Alaska. The main advantage of this prediction method is that it does not require any assumptions; it makes predictions according to the future evolution of the object's time series. The article compares simulation results from trained BP and RBF neural networks with the actual outputs from the system calculations. Therefore, this article focuses on the analysis of data relating to real earthquakes. Evaluation results show better accuracy and higher speed when using the radial basis function (RBF) neural network. Keywords: BP neural network, prediction, RBF neural network, earthquake
3100 Hybrid Wavelet-Adaptive Neuro-Fuzzy Inference System Model for a Greenhouse Energy Demand Prediction
Authors: Azzedine Hamza, Chouaib Chakour, Messaoud Ramdani
Abstract:
Energy demand prediction plays a crucial role in achieving next-generation power systems for agricultural greenhouses. As a result, high prediction quality is required for efficient smart grid management and therefore low-cost energy consumption. The aim of this paper is to investigate the effectiveness of a hybrid data-driven model in day-ahead energy demand prediction. The proposed model consists of the Discrete Wavelet Transform (DWT) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). The DWT is employed to decompose the original signal into a set of subseries, and an ANFIS is then used to generate the forecast for each subseries. The proposed hybrid method (DWT-ANFIS) was evaluated using greenhouse energy demand data for one week and compared with ANFIS alone. The performances of the different models were evaluated by comparing the corresponding values of the Mean Absolute Percentage Error (MAPE). It was demonstrated that the discrete wavelet transform can improve agricultural greenhouse energy demand modeling. Keywords: wavelet transform, ANFIS, energy consumption prediction, greenhouse
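A minimal sketch of the decompose-then-forecast idea follows: the demand series is split into wavelet subseries, each subseries is forecast separately, and the sub-forecasts are summed and scored with MAPE. Since no standard ANFIS implementation is assumed here, a small MLP regressor stands in for the ANFIS component, and the demand series is synthetic.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(24 * 21)                       # three weeks of hourly demand (synthetic)
y = 50 + 20 * np.sin(2 * np.pi * t / 24) + 5 * rng.standard_normal(t.size)

def wavelet_subseries(signal, wavelet="db4", level=2):
    """Split a series into equal-length subseries that sum back (approximately) to the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    subs = []
    for k in range(len(coeffs)):
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        subs.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return subs

def lag_matrix(series, n_lags):
    """Rows of n_lags consecutive values paired with the value that follows them."""
    X = np.column_stack([series[i: len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

horizon, n_lags = 24, 24                     # score one-step-ahead forecasts on the last day
forecast = np.zeros(horizon)
for sub in wavelet_subseries(y):
    X, target = lag_matrix(sub, n_lags)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    model.fit(X[:-horizon], target[:-horizon])            # ANFIS stand-in per subseries
    forecast += model.predict(X[-horizon:])               # sum the sub-forecasts

actual = y[-horizon:]
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"MAPE on the held-out day: {mape:.2f}%")
```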
3099 Predicting Destination Station Based on Public Transit Passenger Profiling
Authors: Xuyang Song, Jun Yin
Abstract:
The smart card has become a ubiquitous tool in public transit. It collects a large amount of data on buses, urban rail transit, and ferries, and provides possibilities for passenger profiling. This paper combines offline analysis of passenger profiling with real-time prediction to propose a method that can accurately predict the destination station in real time when passengers tag on. First, this article constructs a static database of user travel characteristics after identifying passenger travel patterns with Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Two kinds of passenger travel habits are identified: OD travel habits and destination (D) station travel habits. Then a rapid real-time prediction algorithm based on transit passenger profiling is proposed, which can predict the destination of on-board passengers. This article combines offline learning with online prediction, providing a technical foundation for real-time passenger flow prediction, monitoring and simulation, and short-term passenger behavior and demand prediction. This technology facilitates the efficient and real-time acquisition of passengers' travel destinations and demand. Finally, an actual case was simulated, demonstrating feasibility and efficiency. Keywords: travel behavior, destination prediction, public transit, passenger profiling
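The sketch below illustrates the general offline-then-online pattern with DBSCAN: one card holder's historical trips are clustered offline on (origin, tag-on hour), each cluster is mapped to its most frequent destination, and a new tag-on is matched to the nearest cluster online. The toy trip history, feature choice, and DBSCAN parameters are assumptions, not the authors' algorithm.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical trip history for one smart card: (origin station id, tag-on hour, destination station id)
trips = np.array([
    [12, 8, 45], [12, 8, 45], [12, 9, 45], [12, 8, 45],   # morning commute pattern
    [45, 18, 12], [45, 18, 12], [45, 19, 12],             # evening return pattern
    [12, 14, 77], [12, 15, 77],                           # occasional afternoon trip
])

features = trips[:, :2].astype(float)                     # profile trips on (origin, hour)
scaler = StandardScaler().fit(features)
X = scaler.transform(features)
labels = DBSCAN(eps=0.6, min_samples=2).fit_predict(X)    # offline travel-pattern clustering

# Modal destination for each travel-pattern cluster (DBSCAN noise, label -1, is ignored).
modal_dest = {
    lab: Counter(trips[labels == lab, 2]).most_common(1)[0][0]
    for lab in set(labels) if lab != -1
}

def predict_destination(origin, hour):
    """Assign a new tag-on to the nearest clustered trip and return that cluster's modal destination."""
    q = scaler.transform([[origin, hour]])
    mask = labels != -1
    if not mask.any():
        return None
    nearest = np.argmin(np.linalg.norm(X[mask] - q, axis=1))
    return modal_dest[labels[mask][nearest]]

print(predict_destination(origin=12, hour=8))   # expected: station 45 (commute habit)
```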
3098 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting
Authors: Yiannis G. Smirlis
Abstract:
The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the under-assessment set. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by Interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models. Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction
3097 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, a complexity illustrated by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have been used as a standard method to predict protein secondary structure. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method uses the same training and testing process that Huang used to validate his method, which comprises the use of the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can be made by directly comparing the statistical results of each method. Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
3096 Money and Inflation in Cambodia
Authors: Siphat Lim
Abstract:
The results of the study revealed that the interaction between money, the exchange rate, and the price level was mainly driven by the policy induced by the central bank. Furthermore, the variation of inflation was only weakly explained by the exchange rate and money supply. Over a twelve-month horizon, the variation of inflation caused by the exchange rate and money supply was not more than 1.78 percent and 9.77 percent, respectively. Keywords: money supply, exchange rate, price level, VAR model
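The twelve-month variance shares quoted above are the kind of output a forecast error variance decomposition of a VAR produces. The sketch below shows the mechanics on synthetic monthly series standing in for the Cambodian data; the simulated dynamics and lag settings are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 180                                             # 15 years of monthly observations (synthetic)
e_m, e_x, e_p = rng.standard_normal((3, n)) * 0.01
log_m = np.cumsum(0.005 + e_m)                      # log money supply
log_x = np.cumsum(e_x)                              # log exchange rate
log_p = np.zeros(n)                                 # log price level, weakly driven by m and x
for t in range(2, n):
    log_p[t] = log_p[t - 1] + 0.05 * (log_m[t - 1] - log_m[t - 2]) \
               + 0.03 * (log_x[t - 1] - log_x[t - 2]) + e_p[t]

df = pd.DataFrame({
    "d_log_money": np.diff(log_m),
    "d_log_fx": np.diff(log_x),
    "inflation": np.diff(log_p),
})

res = VAR(df).fit(maxlags=6, ic="aic")              # lag order selected by AIC
fevd = res.fevd(12)                                 # 12-month forecast error variance decomposition
fevd.summary()                                      # variance shares for each equation

# Share of 12-month inflation forecast-error variance attributed to each shock
infl_idx = df.columns.get_loc("inflation")
print(dict(zip(df.columns, fevd.decomp[infl_idx, -1, :].round(3))))
```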
3095 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have been facing a significant challenge in predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. With restrictions in the economy, rail transport authorities are in pursuit of improved modern methods which can provide precise prediction of rail maintenance time and location. The expectation of such a method is to develop models that minimize the human error strongly related to manual prediction. Such models will help them understand how track degradation occurs over time under changes in different conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time that rail tracks fail in order to minimize the maintenance cost/time and secure the vehicles. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, it is possible for them to contain errors. Sometimes these errors make it impossible to use the data in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks. Accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is discussed before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model. Keywords: ANFIS, MGT, prediction modeling, rail track degradation
3094 Mathematical Modeling for Diabetes Prediction: A Neuro-Fuzzy Approach
Authors: Vijay Kr. Yadav, Nilam Rathi
Abstract:
Accurate prediction of glucose levels in diabetes mellitus is required to avoid affecting the functioning of major organs of the human body. This study describes the fundamental assumptions and two different methodologies of blood glucose prediction. The first is based on the back-propagation algorithm of an Artificial Neural Network (ANN), and the second is based on a neuro-fuzzy technique, the Fuzzy Inference System (FIS). Errors of the proposed methods are further discussed through various statistical measures such as the mean square error (MSE) and the normalised mean absolute error (NMAE). The main objective of the present study is to develop a mathematical model for blood glucose prediction 12 hours in advance, using a data set of three patients over 60 days. Comparative studies of the accuracy with other existing models are also made with the same data set. Keywords: back-propagation, diabetes mellitus, fuzzy inference system, neuro-fuzzy
3093 The Promotion Effects for a Supply Chain System with a Dominant Retailer
Authors: Tai-Yue Wang, Yi-Ho Chen
Abstract:
In this study, we investigate a two-echelon supply chain with two suppliers and three retailers, among which one retailer dominates the other retailers. A price-competition demand function is used to model this dominant retailer, which leads the market. Promotion strategies and negotiation schemes are integrated to form decision-making models under different scenarios. These models are then formulated into different mathematical programming models. Decision variables such as promotional costs, retailer prices, wholesale price, and order quantity are included in these models. The distributions of promotion costs under different cost allocation strategies are then discussed. Finally, an empirical example is used to validate our models. The results from this empirical example show that the profit model creates the largest profit for the supply chain but with different profit-sharing results. At the same time, the more risk a member can take, the more profit is distributed to that member in the utility model. Keywords: supply chain, price promotion, mathematical models, dominant retailer
3092 Clinical Feature Analysis and Prediction on Recurrence in Cervical Cancer
Authors: Ravinder Bahl, Jamini Sharma
Abstract:
The paper demonstrates an analysis of cervical cancer based on a probabilistic model. It involves a technique for classification and prediction by recognizing typical and diagnostically most important test features relating to cervical cancer. The main contributions of the research include predicting the probability of recurrence in no-recurrence (first-time detection) cases. A combination of conventional statistical and machine learning tools is applied for the analysis. An experimental study with real data demonstrates the feasibility and potential of the proposed approach for the said cause. Keywords: cervical cancer, recurrence, no recurrence, probabilistic, classification, prediction, machine learning
3091 Dynamic vs. Static Bankruptcy Prediction Models: A Dynamic Performance Evaluation Framework
Authors: Mohammad Mahdi Mousavi
Abstract:
Bankruptcy prediction models have been implemented for the continuous evaluation and monitoring of firms. Given the huge number of bankruptcy models, an extensive number of studies have focused on answering the question of which of these models is superior in performance. In practice, one of the drawbacks of existing comparative studies is that the relative assessment of alternative bankruptcy models remains an exercise that is mono-criterion in nature. Further, a very restricted number of criteria and measures have been applied to compare the performance of competing bankruptcy prediction models. In this research, we overcome these methodological gaps by implementing an extensive range of criteria and measures for comparison between dynamic and static bankruptcy models, and by proposing a multi-criteria framework to compare the relative performance of bankruptcy models in forecasting firm distress for UK firms. Keywords: bankruptcy prediction, data envelopment analysis, performance criteria, performance measures
3090 Prediction of Extreme Precipitation in East Asia Using Complex Network
Authors: Feng Guolin, Gong Zhiqiang
Abstract:
In order to study the spatial structure and dynamical mechanism of extreme precipitation in East Asia, a corresponding climate network is constructed by employing the method of event synchronization. It is found that the area of East Asian summer extreme precipitation can be separated into two regions: one with high area-weighted connectivity receiving heavy precipitation mostly during the active phase of the East Asian Summer Monsoon (EASM), and another with low area-weighted connectivity receiving heavy precipitation during both the active and the retreat phases of the EASM. In addition, a way to predict extreme precipitation is developed by constructing directed climate networks. The simulation accuracy in East Asia is 58% with a 0-day lead, and the prediction accuracy is 21% with a 1-day lead and 12% on average with an n-day (2≤n≤10) lead. Compared to a normal EASM year, the prediction accuracy is lower in a weak year and higher in a strong year, which is relevant to the differences in correlations and extreme precipitation rates in different EASM situations. Recognizing and identifying these effects is good for understanding and predicting extreme precipitation in East Asia. Keywords: synchronization, climate network, prediction, rainfall
3089 External Validation of Risk Prediction Score for Candidemia in Critically Ill Patients: A Retrospective Observational Study
Authors: Nurul Mazni Abdullah, Saw Kian Cheah, Raha Abdul Rahman, Qurratu 'Aini Musthafa
Abstract:
Purpose: Candidemia is associated with high mortality in critically ill patients. Early candidemia prediction is imperative for preemptive antifungal treatment. This study aimed to externally validate the candidemia risk prediction score by Jameran et al. (2021), which is based on the risk factors of acute kidney injury, renal replacement therapy, parenteral nutrition, and multifocal candida colonization. Methods: This single-center, retrospective observational study included all critically ill patients admitted to the intensive care unit (ICU) of a tertiary referral center from January 2018 to December 2023. The study evaluated the performance of the candidemia risk prediction score by analyzing the occurrence of candidemia within the study period. Patients’ demographic characteristics, comorbidities, SOFA scores, and ICU outcomes were analyzed. Patients who were diagnosed with candidemia before ICU admission were excluded. Results: A total of 500 patients were analyzed, with 2 dropouts due to incomplete data. Validation analysis showed that the candidemia risk prediction score has a sensitivity of 75.00% (95% CI: 59.66-86.81), a specificity of 65.35% (95% CI: 60.78-69.72), a positive predictive value of 17.28%, and a negative predictive value of 96.44%. The incidence of candidemia was 8.86%, with no significant differences in demographics and comorbidities except for higher SOFA scores in the candidemia group. The candidemia group showed significantly longer ICU and hospital length of stay and higher ICU and in-hospital mortality. Conclusion: This study concluded that the candidemia risk prediction score by Jameran et al. (2021) had good sensitivity and a high negative predictive value. Keywords: candidemia, intensive care, clinical prediction rule, incidence
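The four validation metrics reported above all come from a standard 2x2 table. The sketch below shows the computation; the counts are reconstructed only approximately from the reported incidence (8.86% of 498) and sensitivity/specificity, so they are illustrative assumptions rather than the study's actual table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics, returned as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv":         100 * tp / (tp + fp),
        "npv":         100 * tn / (tn + fn),
    }

# Approximate counts implied by the published figures -- for illustration only.
tp, fn = 33, 11          # candidemia cases flagged / missed by the score
fp, tn = 157, 297        # non-cases flagged / correctly ruled out

for name, value in diagnostic_metrics(tp, fp, fn, tn).items():
    print(f"{name}: {value:.2f}%")
```

Run as written, this reproduces values close to the reported 75.00%, 65.35%, 17.28%, and 96.44%.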
3088 Representation Data without Lost Compression Properties in Time Series: A Review
Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Uncertain data is believed to be an important issue in building up a prediction model. The main objective in time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques in dealing with uncertain data, specifically those which handle the uncertain data condition by minimizing the loss of compression properties. Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction
3087 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set with information on the changes in the bill amounts. Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks
3086 Aggregating Buyers and Sellers for E-Commerce: How Demand and Supply Meet in Fairs
Authors: Pierluigi Gallo, Francesco Randazzo, Ignazio Gallo
Abstract:
In recent years, many new and interesting models of successful online business have been developed. Many of these are based on competition between users, such as online auctions, where the product price is not fixed and tends to rise. Other models, including group buying, are based on cooperation between users, characterized by a dynamic product price that tends to go down. There is not yet a business model in which both sellers and buyers are grouped in order to negotiate on a specific product or service. The present study investigates a new extension of the group-buying model, called a fair, which allows aggregation of demand and supply for price optimization in a cooperative manner. Additionally, our system also aggregates products and destinations for shipping optimization. We introduced the following new relevant input parameters in order to implement a double-sided aggregation: (a) price-quantity curves provided by the seller; (b) waiting time, that is, the longer buyers wait, the greater discount they get; (c) payment time, which determines whether the buyer pays before, during or after receiving the product; (d) the distance between the place where products are available and the place of shipment, provided in advance by the buyer or dynamically suggested by the system. To analyze the proposed model, we implemented a system prototype and a simulator that allow studying the effects of changing input parameters. We analyzed the dynamic price model in fairs having a single seller and in fairs with a combination of selected sellers. The results are very encouraging and motivate further investigation of this topic. Keywords: auction, aggregation, fair, group buying, social buying
3085 Co-Integration and Error Correction Mechanism of Supply Response of Sugarcane in Pakistan (1980-2012)
Authors: Himayatullah Khan
Abstract:
This study estimates the supply response function of sugarcane in Pakistan from 1980-81 to 2012-13. The study uses a co-integration approach and an error correction mechanism. The sugarcane production, area, and price series were tested for unit roots using the Augmented Dickey-Fuller (ADF) test. The study found that these series were stationary at their first differences. Using the Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson (CRDW) test, the study found that “production and price” and “area and price” were co-integrated, suggesting that the two sets of time series had a long-run or equilibrium relationship. The results of the error correction models for the two sets of series showed that there may be disequilibrium in the short run. The Engle-Granger residual may be thought of as the equilibrium error, which can be used to tie the short-run behavior of the dependent variable to its long-run value. The Granger-causality test results showed that the log of price Granger-caused both the log of production and the log of area, whereas the log of production and the log of area Granger-caused each other. Keywords: co-integration, error correction mechanism, Granger-causality, sugarcane, supply response
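The ADF, Engle-Granger, and error-correction steps described above follow a standard two-step recipe, sketched below on synthetic annual series standing in for the Pakistani data; the simulated long-run elasticity and sample layout are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(42)
n = 33                                      # crop years 1980-81 .. 2012-13 (synthetic stand-ins)
log_price = np.cumsum(0.03 + 0.05 * rng.standard_normal(n))
log_prod = 2.0 + 0.8 * log_price + 0.05 * rng.standard_normal(n)   # assumed long-run relation

# Step 0: unit-root tests on levels and first differences
for name, s in [("log price", log_price), ("log production", log_prod)]:
    print(name, "| levels ADF p =", round(adfuller(s)[1], 3),
          "| 1st diff ADF p =", round(adfuller(np.diff(s))[1], 3))

# Step 1: Engle-Granger cointegration test and static long-run regression
print("Engle-Granger p-value:", round(coint(log_prod, log_price)[1], 3))
longrun = sm.OLS(log_prod, sm.add_constant(log_price)).fit()
ec_term = longrun.resid                     # equilibrium error

# Step 2: error correction model on first differences with the lagged EC term
d_prod, d_price = np.diff(log_prod), np.diff(log_price)
X = sm.add_constant(np.column_stack([d_price, ec_term[:-1]]))
ecm = sm.OLS(d_prod, X).fit()
print(ecm.params)   # constant, short-run price response, speed of adjustment
```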
3084 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model
Authors: S. Channgam, A. Sae-Tang, T. Termsaithong
Abstract:
In this research, the occurrences of large-size events in various system sizes of the Bak-Tang-Wiesenfeld sandpile model are considered. The system sizes (square lattices) of the model considered here are 25×25, 50×50, 75×75 and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the large-size event time series for these 4 system sizes is also analyzed. Moreover, a prediction method for large-size events in the 50×50 system is introduced. Lastly, it can be shown that this prediction method provides slightly higher efficiency than random predictions. Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method
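A compact simulation of the ingredients named above is sketched below: a small Bak-Tang-Wiesenfeld lattice is driven one grain at a time, avalanche sizes and the ratio of 3-grain sites are recorded, and their lagged cross-correlation is computed. The run length, lattice size, and 95th-percentile definition of a "large" event are assumptions chosen only to keep the example quick.

```python
import numpy as np

rng = np.random.default_rng(1)
L, steps = 25, 5000                      # 25x25 lattice; a short run, for illustration only
grid = rng.integers(0, 4, size=(L, L))

ratio3, aval_size = [], []
for _ in range(steps):
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1                      # drive: drop one grain at a random site
    size = 0
    while True:                          # relax: topple every site holding 4 or more grains
        unstable = np.argwhere(grid > 3)
        if unstable.size == 0:
            break
        for a, b in unstable:
            grid[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    grid[na, nb] += 1    # grains toppling off the edge are lost
    ratio3.append(np.mean(grid == 3))    # fraction of near-critical (3-grain) sites
    aval_size.append(size)

aval = np.array(aval_size)
ratio = np.array(ratio3)
large = (aval > np.quantile(aval, 0.95)).astype(float)   # assumed "large-size" threshold

# Lagged cross-correlation: does the 3-grain ratio anticipate large events a few drops ahead?
for lag in range(6):
    a = ratio[: len(ratio) - lag] if lag else ratio
    print(f"lag {lag}: corr = {np.corrcoef(a, large[lag:])[0, 1]:.3f}")
```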
3083 Prediction of Bodyweight of Cattle by Artificial Neural Networks Using Digital Images
Authors: Yalçın Bozkurt
Abstract:
Prediction models were developed for accurate prediction of bodyweight (BW) from digital images of beef cattle body dimensions using Artificial Neural Networks (ANN). For this purpose, the animal data were collected at a private slaughterhouse; the digital images and the weights of each live animal were taken just before slaughter, and body dimensions such as digital wither height (DJWH), digital body length (DJBL), digital body depth (DJBD), digital hip width (DJHW), digital hip height (DJHH) and digital pin bone length (DJPL) were determined from the images, using data with 1069 observations for each trait. Prediction models were then developed by ANN. Digital body measurements were analysed by ANN for bodyweight prediction, and the R2 values for DJBL, DJWH, DJHW, DJBD, DJHH and DJPL were approximately 94.32, 91.31, 80.70, 83.61, 89.45 and 70.56%, respectively. It can be concluded that in management situations where BW cannot be measured, it can be predicted accurately by measuring DJBL and DJWH alone or together with DJBD and even DJHH, and that different models may be needed to predict BW under different feeding and environmental conditions and breeds. Keywords: artificial neural networks, bodyweight, cattle, digital body measurements
3082 An Empirical Analysis of the Effects of Corporate Derivatives Use on the Underlying Stock Price Exposure: South African Evidence
Authors: Edson Vengesai
Abstract:
Derivative products have become essential instruments in portfolio diversification, price discovery, and, most importantly, risk hedging. Derivatives are complex instruments; their valuation, volatility implications, and real impact on the underlying assets' behaviour are not well understood. Little is documented empirically, with conflicting conclusions on how these instruments affect firm risk exposures. Given the growing interest in using derivatives in risk management and portfolio engineering, this study examines the practical impact of derivative usage on the underlying stock price exposure and systematic risk. The paper uses data from South African listed firms. The study employs GARCH models to understand the effect of derivative uses on conditional stock volatility. The GMM models are used to estimate the effect of derivatives use on stocks' systematic risk as measured by Beta and on the total risk of stocks as measured by the standard deviation of returns. The results provide evidence on whether derivatives use is instrumental in reducing stock returns' systematic and total risk. The results are subjected to numerous controls for robustness, including financial leverage, firm size, growth opportunities, and macroeconomic effects. Keywords: derivatives use, hedging, volatility, stock price exposure
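The conditional-volatility and beta ingredients mentioned above can be sketched as follows, using the `arch` package for a GARCH(1,1) fit and a simple sample beta; the two synthetic return series standing in for derivative users and non-users, and the use of a covariance-based beta rather than the paper's GMM estimator, are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(7)
n = 1500                                     # roughly six years of daily returns (synthetic)
market = 0.0003 + 0.01 * rng.standard_normal(n)

def simulate_stock(beta, idio_scale):
    return beta * market + idio_scale * rng.standard_normal(n)

returns = {"derivative_user": simulate_stock(0.8, 0.008),   # assumed lower exposure
           "non_user": simulate_stock(1.2, 0.015)}

for name, r in returns.items():
    r_pct = pd.Series(100 * r)               # percent-scaled returns are friendlier for GARCH
    res = arch_model(r_pct, vol="Garch", p=1, q=1, mean="Constant").fit(disp="off")
    beta = np.cov(r, market)[0, 1] / np.var(market, ddof=1)   # CAPM-style systematic risk
    print(f"{name}: beta={beta:.2f}, total sd={np.std(r):.4f}, "
          f"avg conditional vol={res.conditional_volatility.mean():.2f}%")
```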
3081 Evaluation of the Execution Effect of the Minimum Grain Purchase Price in Rural Areas
Authors: Zhaojun Wang, Zongdi Sun, Yongjie Chen, Manman Chen, Linghui Wang
Abstract:
This paper uses the analytic hierarchy process to study the execution effect of the minimum purchase price of grain in different regions and for various grain crops. Firstly, for the different regions, five indicators including grain yield, grain sown area, gross agricultural production, the grain consumption price index, and the disposable income of rural residents were selected to construct an evaluation index system. We collected data for six provinces, including Hebei Province, Heilongjiang Province and Shandong Province, from 2006 to 2017. Then, the judgment matrix was constructed, and hierarchical single ordering and a consistency test were carried out to determine the scoring standard for the minimum purchase price of grain. The ranking of the execution effect from high to low is: Heilongjiang Province, Shandong Province, Hebei Province, Guizhou Province, Shaanxi Province, and Guangdong Province. Secondly, taking Shandong Province as an example, we collected the relevant data on the sown area and yield of cereals, beans, potatoes and other crops from 2006 to 2017. The weights of the area and yield indices were determined by an expert scoring method, and the average sown area and yield of cereals, beans and potatoes in 2006-2017 were calculated, respectively. On this basis, according to the sum of the products of weights and mean values, the execution effects for different grain crops are determined. It turns out that among the cereals, the minimum purchase price had the best execution effect on paddy, followed by wheat and finally maize. Moreover, among the major categories of crops, cereals perform best, followed by beans and finally potatoes. Lastly, countermeasures are proposed for different regions, various categories of crops, and different crops of the same category. Keywords: analytic hierarchy process, grain yield, grain sown area, minimum grain purchase price
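The judgment-matrix and consistency-test steps mentioned above are the core of the analytic hierarchy process; the sketch below shows them on a hypothetical pairwise comparison matrix for the five indicators (the Saaty-scale entries are assumptions, not the paper's actual judgments).

```python
import numpy as np

# Hypothetical pairwise judgment matrix (Saaty 1-9 scale) over the five indicators:
# grain yield, sown area, gross agricultural production, grain CPI, rural disposable income.
A = np.array([
    [1,   2,   3,   5,   4  ],
    [1/2, 1,   2,   4,   3  ],
    [1/3, 1/2, 1,   3,   2  ],
    [1/5, 1/4, 1/3, 1,   1/2],
    [1/4, 1/3, 1/2, 2,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector (hierarchical single ordering)

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)               # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
cr = ci / ri                                  # consistency ratio (commonly accepted if < 0.10)

print("weights:", np.round(weights, 3))
print(f"lambda_max={lambda_max:.3f}, CI={ci:.3f}, CR={cr:.3f}")
```

The execution-effect score for a region or crop is then the weighted sum of its (normalized) indicator values using these weights.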
3080 Engagement Analysis Using DAiSEE Dataset
Authors: Naman Solanki, Souraj Mondal
Abstract:
With the world moving towards online communication, the video data store has exploded in the past few years. Consequently, it has become crucial to analyse participants’ engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, like education, client meetings, dating, etc. Video-level or frame-level prediction of engagement for a user involves the development of robust models that can capture facial micro-emotions efficiently. For the development of an engagement prediction model, it is necessary to have a widely accepted standard dataset for engagement analysis. DAiSEE is one of the datasets which consists of in-the-wild data and has gold-standard annotations for engagement prediction. Earlier research using the DAiSEE dataset involved training and testing standard models like CNN-based models, but the results were not satisfactory according to industry standards. In this paper, a multi-level classification approach has been introduced to create a more robust model for engagement analysis using the DAiSEE dataset. This approach has recorded testing accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom level, engagement level, confusion level, and frustration level, respectively. Keywords: computer vision, engagement prediction, deep learning, multi-level classification
3079 Performance Evaluation of Arrival Time Prediction Models
Abstract:
Arrival time information is a crucial component of advanced public transport systems (APTS). The announcement of arrival times at stops can help reduce the waiting time and anxiety of passengers and improve the quality of service. In this research, an experiment was conducted to compare the prediction accuracy and precision of the link-based and the path-based historical travel time models, using automatic vehicle location (AVL) data collected from an actual bus route. The results show that the path-based model is superior to the link-based model and achieves the greatest improvement during peak hours. Keywords: bus transit, arrival time prediction, link-based, path-based
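One plausible reading of the link-based versus path-based distinction is sketched below: the link-based predictor sums per-link historical means pooled over the whole day, while the path-based predictor averages complete stop-to-stop times from the same period. Both the synthetic AVL-like data and this particular interpretation are assumptions made only to show why the path-based model can win during peak hours, and may differ from the authors' exact formulations.

```python
import numpy as np

rng = np.random.default_rng(3)
n_links = 5
base = np.array([120., 90., 150., 110., 80.])      # nominal link travel times in seconds (assumed)

def simulate(n_trips, peak):
    factor = 1.35 if peak else 1.0                  # congestion inflates every link in peak hours
    return base * factor + rng.normal(0, 12, size=(n_trips, n_links))

hist_peak, hist_off = simulate(120, True), simulate(120, False)
hist_all = np.vstack([hist_peak, hist_off])

# Link-based: sum of per-link historical means, pooled over all periods of the day.
link_pred = hist_all.mean(axis=0).sum()

# Path-based: historical mean of the complete stop-to-stop travel time for the same period.
path_pred_peak = hist_peak.sum(axis=1).mean()

test_peak = simulate(60, True).sum(axis=1)          # held-out peak-hour trips
print(f"peak-hour MAE, link-based: {np.abs(test_peak - link_pred).mean():.1f} s")
print(f"peak-hour MAE, path-based: {np.abs(test_peak - path_pred_peak).mean():.1f} s")
```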
3078 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20 or 50 SNPs were tested, respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, using haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the smallest number of alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performances. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction. Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
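A toy sketch of the SNP-versus-haplotype-allele comparison is given below. It uses ridge regression on marker effects as a stand-in for (G)BLUP, approximates haplotype alleles by the distinct genotype patterns in 20-SNP windows (real haplotypes require phased data), and simulates a small population with limited within-window diversity as an LD proxy; the scale, simulation, and encoding are all assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n_animals, n_windows, block, n_hap = 500, 20, 20, 8   # toy scale (study: 2,506 cattle, 770K SNPs)

geno_blocks, hap_labels = [], []
for _ in range(n_windows):
    founders = rng.integers(0, 3, size=(n_hap, block))   # few distinct patterns per window (LD proxy)
    idx = rng.integers(0, n_hap, size=n_animals)
    geno_blocks.append(founders[idx])
    hap_labels.append(idx)

geno = np.hstack(geno_blocks)                 # 500 x 400 matrix of 0/1/2 genotype codes
labels = np.column_stack(hap_labels)          # haplotype-allele id per animal and window

qtl = rng.choice(geno.shape[1], 20, replace=False)
y = geno[:, qtl] @ rng.normal(0, 1, 20) + rng.normal(0, 3, n_animals)   # carcass-weight-like trait

H = OneHotEncoder().fit_transform(labels).toarray()   # one predictor column per haplotype allele

ridge = Ridge(alpha=100.0)                    # ridge on marker effects, a stand-in for (G)BLUP
for name, X in [("individual SNPs", geno), ("haplotype alleles", H)]:
    r2 = cross_val_score(ridge, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```

Cross-validated R^2 is used here as a rough proxy for the reliability the study reports.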