Search results for: cost prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7910

7760 EDM for Prediction of Academic Trends and Patterns

Authors: Trupti Diwan

Abstract:

Predicting student failure at school has become a difficult challenge due to both the large number of factors that can affect students' performance and the imbalanced nature of such data sets. This paper surveys the two elements needed to predict students' academic performance: parameters and methods. It also proposes a framework for predicting the performance of engineering students. Genetic programming can be used to predict student failure or success, and a ranking algorithm is used to rank students according to their credit points. The framework can serve as a basis for system implementation and for the prediction of students' academic performance in higher learning institutes.

Keywords: classification, educational data mining, student failure, grammar-based genetic programming

Procedia PDF Downloads 403
7759 Discrete State Prediction Algorithm Design with Self Performance Enhancement Capacity

Authors: Smail Tigani, Mohamed Ouzzif

Abstract:

This work presents a discrete quantitative state prediction algorithm with intelligent behavior that enables it to improve some aspects of its own performance. The specificity of this algorithm is its capacity to rectify its prediction strategy before the final decision. The auto-rectification mechanism is based on two parallel mathematical models. On the one hand, the algorithm predicts the next state based on an event transition matrix updated after each observation. On the other hand, the algorithm extracts the trend of its residuals with a linear regression over historical residual data points in order to rectify the first decision if needed. For a normal distribution, the interaction between the two models allows the algorithm to optimize its own performance and thus make better predictions. A purpose-designed key performance indicator, computed during a Monte Carlo simulation, shows the advantages of the proposed approach compared with the traditional one.
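
As a rough illustration of the two interacting models described above, the following sketch (not taken from the paper; the states and data are synthetic) maintains an event transition matrix updated after each observation and rectifies the matrix-based prediction with a linear regression fitted to the historical residuals:

```python
# Illustrative sketch only: transition-matrix prediction plus residual-trend correction.
import numpy as np

n_states = 4
counts = np.ones((n_states, n_states))          # Laplace-smoothed transition counts
residuals = []                                   # history of (step, residual) pairs

def update(prev_state, new_state):
    """Update the event transition matrix after each observation."""
    counts[prev_state, new_state] += 1

def predict(current_state, step):
    """First guess from the transition matrix, then rectify with the residual trend."""
    probs = counts[current_state] / counts[current_state].sum()
    first_guess = float(np.argmax(probs))        # most likely next state
    if len(residuals) >= 2:                      # fit a line to historical residuals
        steps, res = zip(*residuals)
        slope, intercept = np.polyfit(steps, res, 1)
        first_guess += slope * step + intercept  # rectify the first decision if needed
    return int(round(np.clip(first_guess, 0, n_states - 1)))

# toy usage on a synthetic state sequence
rng = np.random.default_rng(0)
sequence = rng.integers(0, n_states, size=200)
for t in range(1, len(sequence) - 1):
    pred = predict(sequence[t], t)
    residuals.append((t, sequence[t + 1] - pred))
    update(sequence[t], sequence[t + 1])
```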

Keywords: discrete state, Markov Chains, linear regression, auto-adaptive systems, decision making, Monte Carlo Simulation

Procedia PDF Downloads 480
7758 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks, Croatia being one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false public impression was created that heat cost allocators are devices that save energy. Since this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. One machine learning method in particular, the decision tree, has achieved an accuracy of over 92% in predicting overall building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of installing heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
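
As a hedged illustration of the decision tree approach mentioned above, the sketch below trains a decision tree regressor on synthetic unit-level data with hypothetical features (heating degree days, floor area, allocator installed) and isolates the allocator's predicted effect by toggling that flag; none of the values come from the study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 500
degree_days = rng.uniform(1500, 3500, n)        # hypothetical heating degree days
floor_area = rng.uniform(40, 120, n)            # hypothetical unit floor area [m2]
allocator = rng.integers(0, 2, n)               # heat cost allocator installed (0/1)
X = np.column_stack([degree_days, floor_area, allocator])
# synthetic consumption [kWh]: allocators assumed to reduce consumption slightly
y = 0.05 * degree_days * floor_area / 100 * (1 - 0.1 * allocator) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeRegressor(max_depth=5).fit(X_tr, y_tr)
print("R2 on held-out data:", round(model.score(X_te, y_te), 3))

# isolate the allocator's predicted effect for one unit by toggling that feature
without_hca = model.predict([[2500, 80, 0]])[0]
with_hca = model.predict([[2500, 80, 1]])[0]
print("predicted saving from installing an allocator:", round(without_hca - with_hca, 1), "kWh")
```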

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest

Procedia PDF Downloads 225
7757 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different multi-criteria decision-making methods such as ANN, GA, fuzzy logic, and AHP have previously been used to predict supplier performance, but the "black-box" character of these methods remains a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then applied to the ANN and the GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the correlation coefficient (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, judged by the respective RMSE and R-squared values, gene expression programming significantly outperforms the ANN in predicting supplier performance. Moreover, using GEP, an explicit mathematical function was also derived, addressing the black-box structure of the ANN in modeling the performance prediction.
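
The two comparison measures named above can be computed as follows; this is a minimal sketch with hypothetical test values, not the case-study data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination (the R-squared value quoted above)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = [0.72, 0.81, 0.65, 0.90]   # hypothetical measured supplier performance scores
y_pred = [0.70, 0.84, 0.63, 0.88]   # hypothetical GEP or ANN output on the test set
print("RMSE:", rmse(y_true, y_pred), "R2:", r_squared(y_true, y_pred))
```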

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 400
7756 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction

Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé

Abstract:

One of the main applications of machine learning is the prediction of time series. But more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is determined, i.e., twenty-four. Each of the dispositions is used to perform the prediction, with the main criteria being the training and prediction performances. The results obtained from a static and a dynamic neural network architecture show that these performances are a function of the input variables' disposition, and in a different way for the two architectures. This analysis revealed that it is necessary to take the input variables' disposition into account when developing a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variables' disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, which shows that the proposed optimization approach can be useful in improving the accuracy of machine learning-based time series prediction.
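
A simplified sketch of the disposition search described above: with four inputs there are 4! = 24 orderings, and each ordering is used to train and score a small network. The wind data and network size are synthetic stand-ins, and scikit-learn's MLP replaces the paper's own back-propagation implementation:

```python
from itertools import permutations
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))                     # four lagged wind-speed inputs (synthetic)
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 0.1, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best_score, best_order = -np.inf, None
for order in permutations(range(4)):              # the twenty-four possible dispositions
    cols = list(order)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X_tr[:, cols], y_tr)
    score = net.score(X_te[:, cols], y_te)        # prediction performance (R2)
    if score > best_score:
        best_score, best_order = score, order
print("best disposition:", best_order, "R2:", round(best_score, 3))
```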

Keywords: input variable disposition, machine learning, optimization, performance, time series prediction

Procedia PDF Downloads 76
7755 Seismic Hazard Prediction Using Seismic Bumps: Artificial Neural Network Technique

Authors: Belkacem Selma, Boumediene Selma, Tourkia Guerzou, Abbes Labdelli

Abstract:

Natural disasters have occurred and will continue to cause human and material damage; "preventing" them will therefore never be possible. However, their prediction is possible with the advancement of technology, and even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment necessary for effective disaster reduction. Earthquake prediction is important for preventing the loss of human lives and property damage, which is why it is crucial to develop techniques for predicting this natural disaster. The present study analyzes the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data used describe the problem of forecasting high-energy (above 10^4 J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines have been analyzed. The results show that the ANN was able to predict earthquake parameters with high accuracy; the classification accuracy of the neural networks is more than 94%, and the models developed are efficient and robust and depend only weakly on the initial database.
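
A hedged sketch of the ANN classification stage is given below; it assumes a local CSV copy of seismic-bumps data with a binary class column (the file name and columns are placeholders), and uses a standard multilayer perceptron rather than the authors' exact network:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("seismic_bumps.csv")                  # hypothetical local copy of the data
X = df.drop(columns=["class"]).select_dtypes("number") # numeric seismic/energy attributes
y = df["class"]                                        # 1 = high-energy bump followed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("classification accuracy:", clf.score(scaler.transform(X_te), y_te))
```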

Keywords: earthquake prediction, ANN, seismic bumps

Procedia PDF Downloads 106
7754 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia

Authors: The Danh Phan

Abstract:

House price forecasting is a main topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models that could be deployed as an application for house buyers and sellers. Data analytics shows a high discrepancy between house prices in the most expensive suburbs and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of stepwise selection and a Support Vector Machine (SVM), evaluated by the Mean Squared Error (MSE), consistently outperforms other models in terms of prediction accuracy.
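
A hedged sketch of the winning combination reported above, with SelectKBest standing in for the stepwise variable selection and with a hypothetical Melbourne sales file and column names:

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("melbourne_sales.csv")                       # hypothetical file name
X = df[["Rooms", "Distance", "Landsize", "BuildingArea", "YearBuilt"]].fillna(0)
y = df["Price"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# SelectKBest is used here as a simple stand-in for the stepwise variable selection
model = make_pipeline(StandardScaler(), SelectKBest(f_regression, k=3), SVR(C=100))
model.fit(X_tr, y_tr)
print("MSE:", mean_squared_error(y_te, model.predict(X_te)))
```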

Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise

Procedia PDF Downloads 199
7753 Factors Contributing to Building Construction Project’s Cost Overrun in Jordan

Authors: Ghaleb Y. Abbasi, Sufyan Al-Mrayat

Abstract:

This study examined the contribution of thirty-six factors to building construction projects' cost overruns in Jordan. A questionnaire was distributed to a random sample of 350 stakeholders comprising owners, consultants, and contractors, of which 285 responded. SPSS analysis was conducted to identify the top five causes of cost overrun, which were a large number of variation orders, inadequate quantities provided in the contract, misunderstanding of the project plan, incomplete bid documents, and choosing the lowest price in contract bidding. There was agreement among the study participants in ranking the factors contributing to cost overrun, which indicated that these factors are very commonly encountered in most construction projects in Jordan. Thus, it is crucial to enhance collaboration among the different project stakeholders to understand the project's objectives and set a realistic plan that takes into consideration all the factors that might influence the project cost, which might eventually prevent cost overrun.

Keywords: cost, overrun, building construction projects, Jordan

Procedia PDF Downloads 78
7752 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular diseases caused by hypertension are extremely threatening to human health, and early diagnosis of hypertension can save a large number of lives. Traditional hypertension detection methods require special equipment and have difficulty detecting continuous blood pressure changes. In this regard, this paper first analyzes the principle of heart rate variability (HRV) and introduces a sliding window and power spectral density (PSD) to analyze the time-domain and frequency-domain features of HRV. It then designs an HRV-based hypertension prediction network by combining ResNet, an attention mechanism, and a multilayer perceptron: frequency-domain features are extracted through a modified ResNet18, fused with time-domain features through an attention mechanism, and used for the auxiliary prediction of hypertension through a multilayer perceptron. Finally, the network was trained and tested using the publicly available SHAREE dataset on PhysioNet, and the test results showed that this network achieved 92.06% prediction accuracy for hypertension and outperformed k-nearest neighbor (KNN), Bayes, logistic regression, and traditional convolutional neural network (CNN) models in prediction performance.
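
The feature-extraction stage described above (sliding window plus PSD) can be sketched as follows; the window lengths, frequency bands, and the assumption of a uniformly resampled RR series are illustrative simplifications, and the ResNet/attention network itself is not reproduced here:

```python
import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, win=300, step=150, fs=4.0):
    """Sliding-window HRV features; rr_ms is treated as uniformly sampled at fs Hz (a simplification)."""
    feats = []
    for start in range(0, len(rr_ms) - win + 1, step):      # sliding window
        seg = np.asarray(rr_ms[start:start + win], dtype=float)
        sdnn = seg.std()                                     # time-domain feature
        rmssd = np.sqrt(np.mean(np.diff(seg) ** 2))          # time-domain feature
        f, pxx = welch(seg - seg.mean(), fs=fs, nperseg=min(256, win))
        lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
        hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
        feats.append([sdnn, rmssd, lf, hf, lf / (hf + 1e-9)])  # frequency-domain features
    return np.array(feats)

rr = 800 + 40 * np.random.default_rng(3).standard_normal(2000)   # synthetic RR intervals [ms]
print(hrv_features(rr).shape)
```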

Keywords: feature extraction, heart rate variability, hypertension, residual networks

Procedia PDF Downloads 73
7751 The Cardiac Diagnostic Prediction Applied to a Designed Holter

Authors: Leonardo Juan Ramírez López, Javier Oswaldo Rodriguez Velasquez

Abstract:

We have designed a Holter that measures the heart's activity for over 24 hours, implemented a prediction methodology, and generated alarms as well as indicators for patients and treating physicians. Various diagnostic advances have been developed in clinical cardiology thanks to Holter implementation; however, their interpretation has largely been conditioned on clinical analysis and on measurements adjusted to diverse population characteristics, thus turning it into a subjective examination. This, in turn, requires vast population studies for validation, which have not achieved the ultimate goal: mortality prediction. Given this context, our Insight Research Group developed a mathematical methodology that assesses cardiac dynamics through entropy and probability, creating a numerical and geometrical attractor that allows quantifying the normalcy of chronic and acute disease as well as the evolution between such states, and our Tigum Research Group developed a Holter device with 12 channels and advanced computer software. This has been shown in different contexts with 100% sensitivity and specificity.

Keywords: attractor, cardiac, entropy, Holter, mathematical, prediction

Procedia PDF Downloads 149
7750 Life Prediction of Cutting Tool by the Workpiece Cutting Condition

Authors: Noemia Gomes de Mattos de Mesquita, José Eduardo Ferreira de Oliveira, Arimatea Quaresma Ferraz

Abstract:

Stops to exchange the cutting tool, to set up the tool again in a CNC turning operation, or to measure the workpiece dimensions have a direct influence on production. Premature removal of the cutting tool results in a high cost of machining, since the share of cost relating to the cutting tool increases. On the other hand, exchanging the cutting tool too late also increases the cost of production, because parts that fall outside the preset tolerances may require rework, when it does not cause bigger problems such as broken cutting tools or the loss of the part. Therefore, the right time to exchange the tool should be well defined when production costs are to be minimized. When flank wear is the factor limiting tool life, predetermining the time for which a cutting tool can be used so that machining remains within the tolerance limits can be done without difficulty. This paper aims to show how the life of the cutting tool can be calculated taking into account the cutting parameters (cutting speed, feed, and depth of cut), the workpiece material, the power of the machine, the dimensional tolerance of the part, the surface finish, the geometry of the cutting tool, and the operating conditions of the machine tool, once the parameters of the Taylor equation are known. These parameters were determined for ABNT 1038 steel machined with carbide cutting tools.
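
The Taylor equation referred to above is, in its extended form, vc * T^n * f^a * ap^b = C. A minimal sketch of solving it for tool life follows; the exponents and the constant are placeholders, not the values determined for ABNT 1038 steel in the paper:

```python
# Extended Taylor tool-life relation: vc * T**n * f**a * ap**b = C, solved for T.
# The exponents and constant below are illustrative placeholders only.
def tool_life_minutes(vc, f, ap, n=0.25, a=0.30, b=0.15, C=300.0):
    """vc: cutting speed [m/min], f: feed [mm/rev], ap: depth of cut [mm]."""
    return (C / (vc * f**a * ap**b)) ** (1.0 / n)

# example: turning at 180 m/min, 0.2 mm/rev feed, 1.5 mm depth of cut
print(round(tool_life_minutes(180, 0.2, 1.5), 1), "min of tool life")
```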

Keywords: machining, production, cutting condition, design, manufacturing, measurement

Procedia PDF Downloads 615
7749 Development of Work Breakdown Structure for EVMS in South Korea

Authors: Dong-Ho Kim, Su-Sang Lim, Sang-Won Han, Chang-Taek Hyun

Abstract:

On a construction site, cost and schedule are the most important management elements. Despite efforts to manage cost and schedule in an integrated manner, their WBS classifications still differ from each other. In the case of Korea, cost and schedule can be integrated and managed owing to the characteristics of the detailed breakdown system built around the standard estimating and official fixture systems. In this research, a Work Breakdown Structure (WBS) that integrates cost and schedule for government office construction, and that can be used in common, is presented in order to analyze and improve the detailed breakdown system of public institution construction. With this method, efficient administration is expected not only of the linked application of cost and schedule but also of the construction project as a whole.

Keywords: WBS, EVMS, integrated cost and schedule, Korea case

Procedia PDF Downloads 351
7748 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph

Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images, has been widely applied to classification and prediction problems in various fields. In this study, we apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to apply a CNN as a binary classifier that predicts stock market direction (up or down) by using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at the graph and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into intervals of 5, 10, 15, and 20 days. It then creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyperparameters of the trained model by using the validation dataset. To validate our model, we will apply it to the prediction of the KOSPI200 over 1,986 days in eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators, such as CCI, Momentum, and ROC, as well as the daily closing price of the KOSPI200 in the Korean stock market.
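
Step 3 of the model (training a CNN classifier on the rendered charts) might look roughly like the sketch below; the image size, layer sizes, and data pipeline are assumptions, and the KOSPI200 chart images themselves are not included:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),            # grayscale chart image (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # up (1) vs. down (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: (n, 64, 64, 1) chart images, y_train: 0/1 next-period direction labels
# model.fit(x_train, y_train, validation_split=0.2, epochs=20)
```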

Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction

Procedia PDF Downloads 410
7747 Using Neural Networks for Click Prediction of Sponsored Search

Authors: Afroze Ibrahim Baqapuri, Ilya Trofimov

Abstract:

Sponsored search is a multi-billion dollar industry and makes up a major source of revenue for search engines (SE). Click-through-rate (CTR) estimation plays a crucial role in ad selection and greatly affects SE revenue, advertiser traffic, and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANN) with decision trees. First, we compare the ANN against other popular machine learning models used for this task. Then we combine the ANN with MatrixNet (a proprietary implementation of boosted trees) and evaluate the performance of the system as a whole. The results show that our approach provides a significant improvement over existing models.

Keywords: neural networks, sponsored search, web advertisement, click prediction, click-through rate

Procedia PDF Downloads 552
7746 Residual Life Prediction for a System Subject to Condition Monitoring and Two Failure Modes

Authors: Akram Khaleghei, Ghosheh Balagh, Viliam Makis

Abstract:

In this paper, we investigate the residual life prediction problem for a partially observable system subject to two failure modes, namely a catastrophic failure and a failure due to system degradation. The system is subject to condition monitoring, and the degradation process is described by a hidden Markov model with unknown parameters. A parameter estimation procedure based on an EM algorithm is developed, and the formulas for the conditional reliability function and the mean residual life are derived and illustrated by a numerical example.
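
A greatly simplified sketch of the EM-based estimation is given below using hmmlearn; it fits a Gaussian HMM to synthetic condition-monitoring data and uses the expected sojourn time of the current hidden state as a crude residual-life proxy, which is not the paper's competing-risks formulation:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(6)
# synthetic vibration-like observations from a slowly degrading system
obs = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.5, 200)]).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
model.fit(obs)                                    # EM (Baum-Welch) parameter estimation

# crude residual-life proxy: expected sojourn time of the current hidden state
state = model.predict(obs)[-1]
p_stay = model.transmat_[state, state]
print("expected remaining sojourn in current state:", 1.0 / (1.0 - p_stay), "steps")
```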

Keywords: partially observable system, hidden Markov model, competing risks, residual life prediction

Procedia PDF Downloads 391
7745 Loan Repayment Prediction Using Machine Learning: Model Development, Django Web Integration and Cloud Deployment

Authors: Seun Mayowa Sunday

Abstract:

Loan prediction is one of the most significant and recognised fields of research in the banking, insurance, and financial security industries. Some prediction systems on the market are built as static software; however, because static software operates only on strictly regulated rules, it cannot aid customers beyond these limitations. Applying machine learning (ML) techniques is therefore required for loan prediction. Four separate machine learning models, random forest (RF), decision tree (DT), k-nearest neighbour (KNN), and logistic regression, are used to create the loan prediction model. Using the Anaconda Navigator and the required ML libraries, the models are created and evaluated using appropriate measuring metrics. From the findings, the random forest performs with the highest accuracy of 80.17%, and it was later implemented in the Django framework. For real-time testing, the web application is deployed on Alibaba Cloud, which is among the four biggest cloud computing providers. Hence, to the best of our knowledge, this research serves as the first academic paper that combines the model development and the Django framework with deployment to the Alibaba cloud computing platform.
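
A hedged sketch of the four-model comparison described above, using a hypothetical loan CSV and column names:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("loan_data.csv")                      # hypothetical dataset
X = pd.get_dummies(df.drop(columns=["loan_repaid"]))   # "loan_repaid" is an assumed target column
y = df["loan_repaid"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.4f}")
```

The best-scoring model would then typically be serialized (for example with joblib) and loaded inside a Django view to serve predictions, which is roughly the integration path the abstract describes.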

Keywords: k-nearest neighbor, random forest, logistic regression, decision tree, Django, cloud computing, Alibaba Cloud

Procedia PDF Downloads 108
7744 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges of this exploratory study are also reported, and recommendations for future studies are proposed.

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 530
7743 Deadline Missing Prediction for Mobile Robots through the Use of Historical Data

Authors: Edwaldo R. B. Monteiro, Patricia D. M. Plentz, Edson R. De Pieri

Abstract:

Mobile robotics is gaining an increasingly important role in modern society. Several tasks that are potentially dangerous or laborious for humans are assigned to mobile robots, which are increasingly capable. Many of these tasks need to be performed within a specified period, i.e., meet a deadline. Missing the deadline can result in financial and/or material losses. Mechanisms for predicting deadline misses are fundamental because corrective actions can be taken to avoid or minimize the resulting losses. In this work, we propose a simple but reliable deadline-miss prediction mechanism for mobile robots based on historical data, and we use the Pioneer 3-DX robot, one of the most popular robots in academia, for experiments and simulations.

Keywords: deadline missing, historical data, mobile robots, prediction mechanism

Procedia PDF Downloads 381
7742 Useful Lifetime Prediction of Rail Pads for High Speed Trains

Authors: Chang Su Woo, Hyun Sung Park

Abstract:

Useful lifetime evaluation of rail pads is very important in the design procedure to assure safety and reliability. It is, therefore, necessary to establish a suitable criterion for the replacement period of rail pads. In this study, we performed property and accelerated heat-aging tests of rail pads, considering degradation factors and all environmental conditions including operation, and then derived a lifetime prediction equation from the changes in hardness, thickness, and static spring constant on an Arrhenius plot to establish how to estimate the aging of rail pads. With the useful lifetime prediction equation, the lifetime of e-clip pads was 2.5 years and that of f-clip pads 1.7 years when the change in hardness was 10% at 25°C. When the change in thickness was 10%, the lifetime of both e-clip and f-clip pads was 2.6 years. The results obtained in this study for estimating the useful lifetime of rail pads for high-speed trains can be used to determine the maintenance and replacement schedule for rail pads.
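
The Arrhenius-plot extrapolation mentioned above can be sketched as follows; the accelerated-aging temperatures and times are made-up numbers used only to show the fitting and extrapolation steps, not the paper's measurements:

```python
import numpy as np

R = 8.314                                             # gas constant [J/(mol*K)]
T_aging = np.array([70.0, 85.0, 100.0]) + 273.15      # accelerated test temperatures [K]
t_aging = np.array([120.0, 40.0, 15.0])               # days to reach a 10% hardness change (illustrative)

# Arrhenius plot: ln(t) is linear in 1/T; fit and extrapolate to service temperature
slope, intercept = np.polyfit(1.0 / T_aging, np.log(t_aging), 1)
Ea = slope * R                                        # apparent activation energy
t_service_days = np.exp(intercept + slope / (25.0 + 273.15))
print(f"Ea ~ {Ea/1000:.1f} kJ/mol, predicted lifetime ~ {t_service_days/365:.1f} years at 25 C")
```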

Keywords: rail pads, accelerated test, Arrhenius plot, useful lifetime prediction, mechanical engineering design

Procedia PDF Downloads 300
7741 Final Costs of Civil Claims

Authors: Behnam Habibi Dargah

Abstract:

The economics of cost-benefit theory seeks to monitor claims and determine their final price. The cost of litigation is important because it is a measure of the efficiency of the justice system. From an economic point of view, the cost of litigation is considered the equilibrium point of litigation, whereby litigation is regarded as a high-risk investment and is initiated when the costs are less than the probable and expected benefits. Costs are economically separated into private and social costs. Private cost includes material (direct and indirect) and non-material costs. The social costs of litigation are subsidy-centric, due to the public and governmental nature of litigation, and cover both bureaucratic costs and the costs of judicial misconduct. Macroeconomic policy in the economics of justice is the reverse engineering of controlling the social costs of litigation by employing selective litigation and working on the judicial culture to achieve rationality in the monopoly system. Procedures for controlling and managing court costs also follow two economic patterns in the field: the rational cost allocation model and the cost transfer model. The rational allocation model deals with cost-tolerance systems, while the transfer model considers three models of transferability, namely legal, judicial, and contractual transferability, which are described and explored in the present article in a comparative manner.

Keywords: cost of litigation, economics of litigation, private cost, social cost

Procedia PDF Downloads 106
7740 Economical Working Hours per Workday for a Production Worker under Hazardous Environment

Authors: Mohammed Darwish

Abstract:

Workplace injuries cost organizations a significant amount of money. The causes of injuries at the workplace are very well documented in the literature and are attributed to a variety of reasons. One important reason is long working hours. The purpose of this paper is to develop a mathematical model that finds the optimal working hours per workday. The developed model minimizes the expected total cost, which consists of the expected cost incurred due to unsafe workplace conditions, the cost of lost production due to work incidents, and the production cost.
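
Since the abstract does not give the model's actual cost functions, the sketch below uses assumed functional forms (incident probability growing with hours worked, a fixed daily cost, an hourly wage, and a constant production rate) purely to illustrate how an economical workday length could be found by numerical minimization:

```python
from scipy.optimize import minimize_scalar

def expected_cost_per_unit(h, fixed=200.0, wage=30.0, injury_cost=5000.0,
                           base_risk=1e-3, rate=10.0):
    """Assumed (not the paper's) cost structure, expressed per unit produced."""
    p_incident = base_risk * h ** 2                     # assumed: risk grows with long hours
    total_cost = fixed + wage * h + injury_cost * p_incident
    expected_output = rate * h * (1.0 - p_incident)     # production lost to incidents
    return total_cost / expected_output

res = minimize_scalar(expected_cost_per_unit, bounds=(4.0, 16.0), method="bounded")
print(f"economical workday length ~ {res.x:.1f} hours (under the assumed parameters)")
```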

Keywords: 8-hour workday, mathematical model, optimal working hours, workplace injuries

Procedia PDF Downloads 134
7739 Design Criteria for an Internal Information Technology Cost Allocation to Support Business Information Technology Alignment

Authors: Andrea Schnabl, Mario Bernhart

Abstract:

The controlling instrument of an internal cost allocation (IT chargeback) is commonly used to make IT costs transparent and controllable. Information technology (IT) has become a central competitive factor, especially for information industries. Consequently, the focus is not on minimizing IT costs but on the strategically aligned application of IT. Hence, an internal IT cost allocation should be designed to enhance business-IT alignment (the strategic alignment of IT) in order to support the effective application of IT from a company's point of view. To identify design criteria for an internal cost allocation that supports business-IT alignment, a case study analysis is performed at a typical medium-sized firm in the information industry. Documents, key performance indicators, and cost accounting data over a period of 10 years are analyzed, and interviews are conducted. The derived design criteria are evaluated by 6 heads of IT departments from 6 different companies that have an internal IT cost allocation in use. By applying these design criteria, an internal cost allocation serves not only for cost controlling but also as an instrument of strategic IT management.

Keywords: accounting for IT services, Business IT Alignment, internal cost allocation, IT controlling, IT governance, strategic IT management

Procedia PDF Downloads 142
7738 Stochastic Frontier Application for Evaluating Cost Inefficiencies in Organic Saffron

Authors: Pawan Kumar Sharma, Sudhakar Dwivedi, R. K. Arora

Abstract:

Saffron is one of the most precious spices grown on earth and is cultivated in a very limited area in a few countries of the world. It has also been grown as a niche crop in the Kishtwar district of the Jammu region of the Jammu and Kashmir State of India. This paper examines the presence of cost inefficiencies in saffron production and the associated socio-economic characteristics of saffron growers in the mentioned area. Although the number of inputs used in the cultivation of saffron was limited, cost inefficiencies were still present in its production. The net present value (NPV), internal rate of return (IRR), and profitability index (PI) of investment over five years of saffron production were INR 1,120,803, 95.67%, and 3.52, respectively. The estimated coefficients of the stochastic cost function for saffron bulbs, human labour, animal labour, manure, and saffron output were positive. Saffron growers having non-farm income were more cost inefficient, by 0.04%, than farmers who did not have sources of income other than farming. The maximum value of cost efficiency for a saffron grower was 1.69, with a mean value of 1.12. The majority of farmers have low cost inefficiencies, as the highest frequency of occurrence of the predicted cost efficiency was below 1.06.
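
The three investment measures quoted above can be reproduced on hypothetical cash flows as follows (the paper's actual saffron cost and revenue streams are not shown in the abstract):

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative), then yearly net returns."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection on the NPV sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-500000, 150000, 350000, 450000, 500000, 550000]   # INR, illustrative five-year stream
rate = 0.10                                                  # assumed discount rate
print("NPV:", round(npv(rate, flows)))
print("IRR:", round(irr(flows) * 100, 2), "%")
print("PI :", round(npv(rate, [0] + flows[1:]) / -flows[0], 2))   # PV of returns / initial outlay
```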

Keywords: saffron, internal rate of return, cost efficiency, stochastic frontier model

Procedia PDF Downloads 131
7737 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions have been reported when using spectra collected from flour samples rather than whole-grain spectra. However, the feasibility of determining the critical biochemicals related to the classification for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Based on the NIR spectral data and the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
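
A hedged sketch of the PLSR calibration step follows, with synthetic spectra standing in for the sorghum scans and cross-validation used to choose the number of latent variables:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 700))            # 80 samples x 700 NIR wavelengths (synthetic)
y = X[:, 100] * 0.5 + X[:, 400] * 0.3 + rng.normal(0, 0.1, 80)   # e.g. a tannin reference value

best_r2, best_k = -np.inf, None
for k in range(1, 11):                    # number of PLS latent variables
    r2 = cross_val_score(PLSRegression(n_components=k), X, y, cv=5, scoring="r2").mean()
    if r2 > best_r2:
        best_r2, best_k = r2, k
print(f"best model: {best_k} latent variables, cross-validated R2 = {best_r2:.3f}")
```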

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 44
7736 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Although it is impossible to predict composite mechanical properties directly using theoretical methods, various studies have been conducted in the literature on the accurate simulation of CFRP structures' energy-absorbing behavior. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing costly experimental effort. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably reduced the numerical and experimental effort required in the simulation and calibration of CFRP composite tubes subjected to compressive loading.

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 123
7735 Using Water Erosion Prediction Project Simulation Model for Studying Some Soil Properties in Egypt

Authors: H. A. Mansour

Abstract:

The objective of this research work is to study water erosion prediction technology for use by action agencies and others involved in conservation, planning, and environmental assessment, through the Water Erosion Prediction Project (WEPP) simulation model. The model represents the important physical processes governing erosion in Egypt (climate, infiltration, runoff, evapotranspiration, detachment by raindrops, detachment by flowing water, deposition, etc.). It simulates non-uniform slopes, soils, and cropping/management, and uses Egyptian databases for climate, soils, and crops. The study included important parameters under Egyptian conditions as follows: water balance and percolation, the soil component (tillage impacts), plant growth and residue decomposition, and overland flow hydraulics. It can be concluded that the WEPP simulation model can be adapted to determine these important parameters under Egyptian conditions.

Keywords: WEPP, adaptation, soil properties, tillage impacts, water balance, soil percolation

Procedia PDF Downloads 277
7734 Physically Informed Kernels for Wave Loading Prediction

Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross

Abstract:

Wave loading is a primary cause of fatigue within offshore structures, and its quantification presents a challenging and important subtask within the SHM framework. The accurate representation of the physics in such environments is difficult, however, which has driven the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper aims to develop an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison's Equation. The aim here is to incorporate physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviours whilst still allowing enough flexibility to account for phenomena such as vortex shedding, which may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently and interpretable hyperparameters.
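
A greatly simplified sketch of the idea is shown below; instead of deriving a bespoke covariance function as the paper does, it injects Morison-style drag (u|u|) and inertia (du/dt) terms as additional inputs to a Gaussian process with a standard composite kernel, so the GP mainly has to capture the residual behaviour:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
t = np.linspace(0, 60, 600)
u = np.sin(0.5 * t) + 0.2 * rng.standard_normal(t.size)     # synthetic water-particle velocity
du = np.gradient(u, t)                                       # acceleration term
force = 0.8 * u * np.abs(u) + 1.5 * du + 0.1 * rng.standard_normal(t.size)  # synthetic wave load

X = np.column_stack([u, u * np.abs(u), du])                  # raw input plus physics-derived features
kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:400], force[:400])
print("test R2:", round(gp.score(X[400:], force[400:]), 3))
```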

Keywords: offshore structures, Gaussian processes, physics-informed machine learning, kernel design

Procedia PDF Downloads 167
7733 Development of Fuzzy Logic and Neuro-Fuzzy Surface Roughness Prediction Systems Coupled with Cutting Current in Milling Operation

Authors: Joseph C. Chen, Venkata Mohan Kudapa

Abstract:

The development of two real-time surface roughness (Ra) prediction systems for milling operations was attempted. The systems used not only cutting parameters, such as feed rate and spindle speed, but also the cutting current, measured by a clamp-type energy sensor. Two different approaches were developed. First, a fuzzy inference system (FIS), in which the fuzzy logic rules are generated by experts in milling processes, was used to conduct prediction modeling using cutting current data. Second, a neuro-fuzzy system (ANFIS) was explored. Neuro-fuzzy systems are adaptive techniques in which data are collected on the network and processed, and rules are generated by the system. The inference system then uses these rules to predict Ra as the output. Experimental results showed that the parameters of spindle speed, feed rate, depth of cut, and input current variation could predict Ra. The two systems enable the prediction of Ra during the milling operation with an average accuracy of 91.83% and 94.48% for the FIS and ANFIS systems, respectively. Statistically, the ANFIS system provided better prediction accuracy than the FIS system.
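
A purely illustrative zero-order Sugeno-style fuzzy inference sketch is shown below; the membership functions, rules, and Ra consequents are placeholders, not the expert rule base or ANFIS model used in the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def predict_ra(speed_rpm, feed_mm_min):
    """Zero-order Sugeno inference: rule firing strengths weight constant Ra consequents."""
    slow, fast = tri(speed_rpm, 1000, 2000, 3000), tri(speed_rpm, 2000, 3000, 4000)
    low, high = tri(feed_mm_min, 100, 200, 300), tri(feed_mm_min, 200, 300, 400)
    # rules: (firing strength, constant Ra consequent in micrometres) -- placeholder values
    rules = [(min(slow, low), 0.8), (min(slow, high), 1.6),
             (min(fast, low), 0.6), (min(fast, high), 1.2)]
    w = sum(s for s, _ in rules)
    return sum(s * ra for s, ra in rules) / (w + 1e-9)

print(round(predict_ra(2500, 250), 3), "um Ra (illustrative)")
```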

Keywords: surface roughness, input current, fuzzy logic, neuro-fuzzy, milling operations

Procedia PDF Downloads 120
7732 Neural Network Based Approach of Software Maintenance Prediction for Laboratory Information System

Authors: Vuk M. Popovic, Dunja D. Popovic

Abstract:

The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it corresponds to maintenance. Software maintenance involves modifications to keep a software product usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance and the management of software maintenance are recognized as two of the most important and most expensive processes in the life of a software product. This research bases the prediction of maintenance on risk and time evaluation and uses them as data sets for working with neural networks. The aim of this paper is to provide support to project maintenance managers. They will be able to pass the issues planned for the next software service patch to the experts for risk and working-time evaluation, and afterwards feed all the data to neural networks in order to obtain a software maintenance prediction. This process will lead to a more accurate prediction of the working hours needed for the software service patch, which will eventually lead to better budget planning for software maintenance projects.

Keywords: laboratory information system, maintenance engineering, neural networks, software maintenance, software maintenance costs

Procedia PDF Downloads 329
7731 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on a living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
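
The four preprocessing steps can be sketched as follows on a hypothetical bioassay table with numeric descriptors and a binary activity label; the file and column names are illustrative:

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler

df = pd.read_csv("bioassay.csv")                       # hypothetical dataset
X, y = df.drop(columns=["active"]), df["active"]       # "active" is an assumed binary label

# step 1: instance selection -- training / validation / testing split
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# step 2: discretization (bin count trades accuracy against precision)
disc = KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="quantile").fit(X_train)
X_train_d = disc.transform(X_train)

# step 3: normalization of the discretized values to [0, 1]
scaler = MinMaxScaler().fit(X_train_d)
X_train_n = scaler.transform(X_train_d)

# step 4: feature selection of the most informative descriptors (k=20 assumes >= 20 columns)
selector = SelectKBest(chi2, k=20).fit(X_train_n, y_train)
X_train_ready = selector.transform(X_train_n)
# the fitted disc/scaler/selector are then applied unchanged to X_val and X_test
```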

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 254