Search results for: prediction error
3582 Prediction of Oil Recovery Factor Using Artificial Neural Network
Authors: O. P. Oladipo, O. A. Falode
Abstract:
The determination of the Recovery Factor is of great importance to the reservoir engineer since it relates reserves to the initial oil in place. Reserves are the producible portion of reservoirs and give an indication of the profitability of a field development. The core objective of this project is to develop an artificial neural network model using selected reservoir data to predict Recovery Factors (RF) of hydrocarbon reservoirs and to compare the model with a couple of the existing correlations. The type of Artificial Neural Network model developed was the Single Layer Feed Forward Network. MATLAB was used as the network simulator, and the network was trained using the supervised learning method. Afterwards, the network was tested with input data never seen by the network. The predicted recovery factors from the Artificial Neural Network Model, the API Correlation for water drive reservoirs (Sands and Sandstones), and the Guthrie and Greenberger Correlation Equation were obtained and compared. It was noted that the coefficient of correlation of the Artificial Neural Network Model was higher than the coefficients of correlation of the other two correlation equations, thus making it a more accurate prediction tool. The Artificial Neural Network, because of its accurate prediction ability, is helpful in the correct prediction of hydrocarbon reservoir factors. Artificial Neural Networks could be applied in the prediction of other Petroleum Engineering parameters because they are able to recognise complex patterns in a data set and establish relationships between them. Keywords: recovery factor, reservoir, reserves, artificial neural network, hydrocarbon, MATLAB, API, Guthrie, Greenberger
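A minimal sketch of the kind of single-hidden-layer feed-forward regressor described above can be written in Python with scikit-learn (the original work used MATLAB); the file name, feature columns, and network size below are hypothetical placeholders, not the authors' dataset or configuration.

```python
# Sketch of a single-hidden-layer feed-forward regressor for recovery factor (RF).
# Assumes a CSV with hypothetical reservoir columns; not the authors' MATLAB workflow.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("reservoir_data.csv")            # hypothetical file
X = df[["porosity", "permeability", "oil_viscosity", "initial_water_saturation"]]
y = df["recovery_factor"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Evaluate on data never seen by the network, as in the abstract.
print("R^2 on unseen data:", r2_score(y_test, model.predict(scaler.transform(X_test))))
```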
Procedia PDF Downloads 441
3581 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a "retrospective analysis" of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft observations, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. Global solar radiation (GSR) was estimated over six selected locations in North Africa for the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors across seasons; the performance of the dataset also changes with the temporal resolution of the data used for comparison. Monthly mean values of the data show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to provide a good representation of global solar radiation to support solar energy applications in all fields, which can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results. Keywords: solar energy, solar radiation, ERA-5, potential energy
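The RMSE, MBE, and MAE scores quoted above can be reproduced for any pair of ground and reanalysis series with a few lines of NumPy; the arrays below are placeholder values standing in for the measured and ERA-5 GHI series.

```python
# Validation metrics used to compare ERA-5 GHI against ground measurements.
import numpy as np

ground = np.array([5.1, 6.3, 7.0, 6.8])      # measured GHI (placeholder values)
era5   = np.array([5.0, 6.6, 6.8, 7.1])      # reanalysis GHI at the same times

diff = era5 - ground
rmse = np.sqrt(np.mean(diff ** 2))            # root mean square error
mbe  = np.mean(diff)                          # mean bias error (sign shows over/under-estimation)
mae  = np.mean(np.abs(diff))                  # mean absolute error
r2   = np.corrcoef(ground, era5)[0, 1] ** 2   # squared correlation coefficient

print(rmse, mbe, mae, r2)
```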
Procedia PDF Downloads 211
3580 Assessment of Intern Students' Attitudes towards Medical Errors
Authors: Nilgün Katrancı, Pınar Göv
Abstract:
With the acceleration and assessment of quality and patient safety works in healthcare services in the 21st century, activities to reduce errors have gained importance. The prevention and reduction of unintended consequences related to healthcare services and errors made during the delivery of healthcare services can be achieved by understanding the causes of the errors. Communication is the basic reason most frequently seen in such cases. Nurses who communicate with patients more closely and for longer time play a more critical role in ensuring patient safety compared to other healthcare professionals. To reduce the risk of medical errors and increase the quality of care, it is important to raise the awareness of nurses about patient safety in training period. This descriptive study was conducted between February 2017 and May 2017 to assess intern students' attitudes towards and knowledge of patient safety and medical errors. The target population of the study consists of intern students at the Faculty of Nursing in Gaziantep University (N=180). The study did not apply any sample selection method, and the research group consisted of 90 female and 37 male senior students who were available and accepted to take part in the study (N=127). The study used personal information form and medical error attitude scale to collect data. The medical error attitude scale consists of 16 items and 3 sub-dimensions. The most frequently seen medical error in the clinics the interns worked at was found as ‘Failure to comply with asepsis rules’ with a rate of 67,7%. The most frequent case among reasons for not disclosing an error is ‘noticing and correcting the error before affecting the patient’ with the rate of 70,9%. The most frequently expressed implications of disclosing a serious error for the intern students participating in the study are ‘harming patient trust (78%)’ and ‘possibility of overreaction by patient (62,2%)’. According to the results of the study, the awareness of the students about the importance of medical errors and error reporting was found high (3,48 ± 0,49). Consequently, it is important to assess and positively improve the attitudes of nurses and other healthcare professionals towards medical errors for the determination of causes of medical errors and their prevention.Keywords: healthcare service, intern student, medical error, patient safety
Procedia PDF Downloads 203
3579 Dynamic Compensation for Environmental Temperature Variation in the Coolant Refrigeration Cycle as a Means of Increasing Machine-Tool Precision
Authors: Robbie C. Murchison, Ibrahim Küçükdemiral, Andrew Cowell
Abstract:
Thermal effects are the largest source of dimensional error in precision machining, and a major proportion is caused by ambient temperature variation. The use of coolant is a primary means of mitigating these effects, but there has been limited work on coolant temperature control. This research critically explored whether CNC-machine coolant refrigeration systems adapted to actively compensate for ambient temperature variation could increase machining accuracy. Accuracy data were collected from operators’ checklists for a CNC 5-axis mill and statistically reduced to bias and precision metrics for observations of one day over a sample period of 27 days. Temperature data were collected using three USB dataloggers in ambient air, the chiller inflow, and the chiller outflow. The accuracy and temperature data were analysed using Pearson correlation, then the thermodynamics of the system were described using system identification with MATLAB. It was found that 75% of thermal error is reflected in the hot coolant temperature but that this is negligibly dependent on ambient temperature. The effect of the coolant refrigeration process on hot coolant outflow temperature was also found to be negligible. Therefore, the evidence indicated that it would not be beneficial to adapt coolant chillers to compensate for ambient temperature variation. However, it is concluded that hot coolant outflow temperature is a robust and accessible source of thermal error data which could be used for prevention strategy evaluation or as the basis of other thermal error strategies.Keywords: CNC manufacturing, machine-tool, precision machining, thermal error
Procedia PDF Downloads 89
3578 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the outrageous nature of the volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated using three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that the Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that the Binance coin, Tether, and USD coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions, which have fatter tails than the normal, Student-t, and generalized error innovation distributions. For the Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t and skewed generalized error distributions over the skewed normal distribution. Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
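As an illustration, one of the winning specifications above (an EGARCH model with skewed Student-t innovations) could be fitted in Python with the arch package roughly as follows; the input file, ticker, and model orders are assumptions made for the sketch, not the authors' exact setup.

```python
# Sketch: EGARCH(1,1) with skewed Student-t innovations on daily log returns.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.read_csv("binance_coin_close.csv", index_col=0, parse_dates=True)["Close"]
returns = 100 * np.log(prices).diff().dropna()       # percentage log returns

model = arch_model(returns, mean="Constant", vol="EGARCH",
                   p=1, o=1, q=1, dist="skewt")       # skewed Student-t innovations
result = model.fit(disp="off")

print(result.summary())
print(result.forecast(horizon=5).variance.iloc[-1])   # 5-day-ahead variance forecast
```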
Procedia PDF Downloads 119
3577 Stress Recovery and Durability Prediction of a Vehicular Structure with Random Road Dynamic Simulation
Authors: Jia-Shiun Chen, Quoc-Viet Huynh
Abstract:
This work develops a flexible-body dynamic model of an all-terrain vehicle (ATV), capable of recovering dynamic stresses while the ATV travels on random bumpy roads. The fatigue life of components is forecast as well. By considering the interaction between dynamic forces and structural deformation, the proposed model achieves highly accurate structural stress prediction and fatigue life prediction. During the simulation, the stress time history of the ATV structure is retrieved for life prediction. Finally, the hot spots of the ATV frame are located, and the frame life for combined road conditions is forecast as 25833.6 hr. If the vehicle is used eight hours daily, the total vehicle frame life is 8.847 years. Moreover, the reaction force and deformation due to dynamic motion can be described more accurately by using flexible-body dynamics than by using rigid-body dynamics. Based on recommendations made in the product design stage before mass production, the proposed model can significantly lower development and testing costs. Keywords: flexible-body dynamics, vehicle, dynamics, fatigue, durability
Procedia PDF Downloads 394
3576 On Constructing a Cubically Convergent Numerical Method for Multiple Roots
Authors: Young Hee Geum
Abstract:
We propose the numerical method defined by x_{n+1} = x_n − λ f(x_n − μh(x_n)) / f'(x_n), n ∈ ℕ, and determine the control parameters λ and μ so that the iteration converges cubically. In addition, we derive the asymptotic error constant. Applying the proposed scheme to various test functions, numerical results show good agreement with the theory analyzed in this paper and are verified using Mathematica with its high-precision computability. Keywords: asymptotic error constant, iterative method, multiple root, root-finding
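A direct transcription of the iteration makes the scheme concrete; the choices of f, h, λ, and μ below are illustrative only (the paper determines λ and μ analytically to achieve cubic convergence; with μ = 0 and λ equal to the root multiplicity the sketch reduces to the classical modified Newton method).

```python
# Sketch of the iteration x_{n+1} = x_n - lam * f(x_n - mu*h(x_n)) / f'(x_n).
def iterate(f, fprime, h, x0, lam, mu, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:                                   # landed on the root (or a stationary point)
            return x
        x_new = x - lam * f(x - mu * h(x)) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f has a double root at x = 1; h = f/f' is one common choice (an assumption here).
f  = lambda x: (x - 1.0) ** 2 * (x + 2.0)
fp = lambda x: 2 * (x - 1.0) * (x + 2.0) + (x - 1.0) ** 2
h  = lambda x: f(x) / fp(x)

print(iterate(f, fp, h, x0=2.0, lam=2.0, mu=0.0))    # lam, mu chosen only for illustration
```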
Procedia PDF Downloads 220
3575 SSRUIC Students' Attitude and Preference toward Error Corrections
Authors: Papitchaya Papangkorn
Abstract:
Matching the expectations of teachers and learners is significant for successful language learning. Moreover, teachers should discover what their learners think and feel about what and how they want to learn. Therefore, this study investigates International College, Suan Sunandha Rajabhat University students’ preferences toward error corrections in order to help SSRUIC teachers match their expectations and their learners because it is important for successful language learning. This study examined the learners’ attitude and preference toward error correction through 50 first year SSRUIC students both male (25) and female (25) in Bangkok, Thailand. The data were collected from a questionnaire and interviews to investigate the necessity and frequency, timing, type of errors, method of corrective feedback, and person who gives error correction in order to answer the overall research question and sub-questions. The findings indicate five suggestions regarding the overall research question. Firstly, errors should be treated, and always be treated. Secondly, treating errors after finish speaking is the most appropriate time. Thirdly, “errors that may cause problems in an understanding of listener” and “frequent spoken errors” should be treated. Fourthly, repetition and explicit feedback were the most popular types of feedback among males, whereas metalinguistic feedback was the most favoured types amongst females. Finally, teachers were the most preferred person to deliver corrective feedback for the learners. Although the results of the study are difficult to generalize to a larger population, which are Thai EFL learners because of the small sample, the findings provide useful information that may contribute to understanding of SSRUIC learners’ preferences toward error corrections and it might reduce the gap between what teachers employ and what students expect when receiving corrective feedback. The reduction of this gap may be useful for the learning process and could enhance the efforts of both teachers and learners in a Thai context.Keywords: attitude, corrective feedback, error, preference
Procedia PDF Downloads 357
3574 Free Fatty Acid Assessment of Crude Palm Oil Using a Non-Destructive Approach
Authors: Siti Nurhidayah Naqiah Abdull Rani, Herlina Abdul Rahim, Rashidah Ghazali, Noramli Abdul Razak
Abstract:
Near infrared (NIR) spectroscopy has always been of great interest in the food and agriculture industries. The development of prediction models has facilitated the estimation process in recent years. In this study, 110 crude palm oil (CPO) samples were used to build a free fatty acid (FFA) prediction model. 60% of the collected data were used for training purposes and the remaining 40% used for testing. The visible peaks on the NIR spectrum were at 1725 nm and 1760 nm, indicating the existence of the first overtone of C-H bands. Principal component regression (PCR) was applied to the data in order to build this mathematical prediction model. The optimal number of principal components was 10. The results showed R2=0.7147 for the training set and R2=0.6404 for the testing set.Keywords: palm oil, fatty acid, NIRS, regression
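Principal component regression with 10 components, as used above, can be sketched in Python as a PCA step followed by ordinary least squares; the spectra files and split below are assumed for illustration, not the authors' data.

```python
# Sketch of principal component regression (PCR) for FFA prediction from NIR spectra.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X = np.load("cpo_spectra.npy")      # absorbance spectra, shape (110, n_wavelengths) -- placeholder
y = np.load("cpo_ffa.npy")          # reference FFA values -- placeholder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

pcr = Pipeline([("scale", StandardScaler()),
                ("pca", PCA(n_components=10)),        # optimal number reported in the abstract
                ("ols", LinearRegression())])
pcr.fit(X_tr, y_tr)

print("train R2:", r2_score(y_tr, pcr.predict(X_tr)))
print("test  R2:", r2_score(y_te, pcr.predict(X_te)))
```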
Procedia PDF Downloads 507
3573 Channel Estimation for LTE Downlink
Authors: Rashi Jain
Abstract:
The LTE systems employ Orthogonal Frequency Division Multiplexing (OFDM) as the multiple access technology for the Downlink channels. For enhanced performance, accurate channel estimation is required. Various algorithms such as Least Squares (LS), Minimum Mean Square Error (MMSE) and Recursive Least Squares (RLS) can be employed for the purpose. The paper proposes channel estimation algorithm based on Kalman Filter for LTE-Downlink system. Using the frequency domain pilots, the initial channel response is obtained using the LS criterion. Then Kalman Filter is employed to track the channel variations in time-domain. To suppress the noise within a symbol, threshold processing is employed. The paper draws comparison between the LS, MMSE, RLS and Kalman filter for channel estimation. The parameters for evaluation are Bit Error Rate (BER), Mean Square Error (MSE) and run-time.Keywords: LTE, channel estimation, OFDM, RLS, Kalman filter, threshold
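The idea of initialising with a least-squares estimate at the pilots and then tracking the channel in time with a Kalman filter can be sketched as below; the random-walk channel model and the noise variances are assumptions made for the illustration, not parameters from the paper.

```python
# Sketch: per-subcarrier LS estimate at pilot symbols, then Kalman tracking across OFDM symbols.
import numpy as np

def ls_estimate(received_pilot, transmitted_pilot):
    return received_pilot / transmitted_pilot           # least-squares estimate H_LS = Y / X

def kalman_track(h_ls_sequence, q=1e-3, r=1e-2):
    """Track one subcarrier's channel gain over time.
    Assumed state model: h_k = h_{k-1} + w_k, observation: h_LS,k = h_k + v_k."""
    h_est, p = h_ls_sequence[0], 1.0
    tracked = [h_est]
    for z in h_ls_sequence[1:]:
        p_pred = p + q                                   # predict
        k_gain = p_pred / (p_pred + r)                   # Kalman gain
        h_est = h_est + k_gain * (z - h_est)             # update with the noisy LS observation
        p = (1 - k_gain) * p_pred
        tracked.append(h_est)
    return np.array(tracked)

noisy_ls = 1.0 + 0.1 * np.random.randn(50) + 1j * 0.1 * np.random.randn(50)
print(kalman_track(noisy_ls)[-5:])
```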
Procedia PDF Downloads 356
3572 Analyzing Tools and Techniques for Classification In Educational Data Mining: A Survey
Authors: D. I. George Amalarethinam, A. Emima
Abstract:
Educational Data Mining (EDM) is one of the newest topics to emerge in recent years, and it is concerned with developing methods for analyzing various types of data gathered from the educational domain. EDM methods and techniques with machine learning algorithms are used to extract meaningful and usable information from huge databases. For scientists and researchers, realistic applications of machine learning in the EDM sector offer new frontiers and present new problems. One of the most important research areas in EDM is predicting student success. Prediction algorithms and techniques must be developed to forecast students' performance, which aids tutors and institutions in boosting the level of student performance. This paper examines various classification techniques used in prediction methods and the data mining tools used in EDM. Keywords: classification technique, data mining, EDM methods, prediction methods
Procedia PDF Downloads 117
3571 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the uttermost requirement for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by the physical and statistical model. Ground data is collected by solar radiation measurement stations. The ground data is of high quality. However, they are limited to distributed point locations with the high cost of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data is continuous and available throughout geographical locations, but they are relatively less accurate than ground data. To utilize the advantage of both data, a product has been developed here which provides spatial continuity and higher accuracy than any of the data alone. The popular satellite databases: National Solar radiation Data Base, NSRDB (PSM V3 model, spatial resolution: 4 km) is chosen here for merging with ground-measured solar radiation measurement in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. The normalized root means square error (NRMSE) values of 3.31%, 6.53%, and 6.63% for October, November, and December 2019 were observed respectively when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on the combination of regression and kriging methods. A regression model (OLS, ordinary least square) is fitted between the ground and NSBRD data points. A semi-variogram is fitted into the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to NSRBD data predicted values obtained from the regression model to obtain the final predicted values. The NRMSE values obtained after merging are respectively 1.84%, 1.28%, and 1.81% for October, November, and December 2019. One more explanatory variable, that is the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven as a highly accurate method. An additional method is also proposed here to generate calibrated maps by using regression and kriging model and further to use the calibrated model to generate solar radiation maps from the explanatory variable only when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after the comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019 month respectively.Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
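The regression-plus-residual-kriging workflow described above (fit a regression between ground and NSRDB values, krige the regression residuals, then add the kriged residuals back to the regression prediction) can be sketched as follows. The study used the Empirical Bayesian Kriging Regression Prediction tool in ArcGIS; the PyKrige call and the toy coordinates below are assumed stand-ins for illustration only.

```python
# Sketch of regression + residual kriging for merging ground GHI with NSRDB estimates.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging   # assumed stand-in for the ArcGIS EBK Regression tool

# Station data (placeholders): longitude, latitude, NSRDB GHI and measured GHI at 13 stations.
lon, lat = np.random.uniform(50.7, 51.6, 13), np.random.uniform(24.5, 26.2, 13)
ghi_nsrdb = np.random.uniform(4.5, 6.5, 13)
ghi_ground = ghi_nsrdb + np.random.normal(0, 0.2, 13)

ols = LinearRegression().fit(ghi_nsrdb.reshape(-1, 1), ghi_ground)
residuals = ghi_ground - ols.predict(ghi_nsrdb.reshape(-1, 1))

krig = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
res_new, _ = krig.execute("points", np.array([51.0]), np.array([25.3]))  # kriged residual at a new point

nsrdb_at_point = 5.8                                   # NSRDB value at the new location (placeholder)
merged = ols.predict([[nsrdb_at_point]])[0] + res_new[0]
print("merged GHI estimate:", merged)
```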
Procedia PDF Downloads 89
3570 Reservoir Inflow Prediction for Pump Station Using Upstream Sewer Depth Data
Authors: Osung Im, Neha Yadav, Eui Hoon Lee, Joong Hoon Kim
Abstract:
Artificial Neural Network (ANN) approach is commonly used in lots of fields for forecasting. In water resources engineering, forecast of water level or inflow of reservoir is useful for various kind of purposes. Due to advantages of ANN, many papers were written for inflow prediction in river networks, but in this study, ANN is used in urban sewer networks. The growth of severe rain storm in Korea has increased flood damage severely, and the precipitation distribution is getting more erratic. Therefore, effective pump operation in pump station is an essential task for the reduction in urban area. If real time inflow of pump station reservoir can be predicted, it is possible to operate pump effectively for reducing the flood damage. This study used ANN model for pump station reservoir inflow prediction using upstream sewer depth data. For this study, rainfall events, sewer depth, and inflow into Banpo pump station reservoir between years of 2013-2014 were considered. Feed – Forward Back Propagation (FFBF), Cascade – Forward Back Propagation (CFBP), Elman Back Propagation (EBP) and Nonlinear Autoregressive Exogenous (NARX) were used as ANN model for prediction. A comparison of results with ANN model suggests that ANN is a powerful tool for inflow prediction using the sewer depth data.Keywords: artificial neural network, forecasting, reservoir inflow, sewer depth
Procedia PDF Downloads 317
3569 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection
Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary
Abstract:
Goal: The purpose of the project was to make a plastic surgery prediction using pre-operative images of plastic surgery patients and to show this prediction on a screen so that the current case and the expected post-surgical appearance can be compared. Methods: To this aim, we implemented software that used data from the internet on facial skin diseases, skin burns, and pre- and post-operative images of plastic surgeries; the post-surgical prediction is then made using K-nearest neighbor (KNN). We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearance can be shown at the same time. Results: We worked on skin conditions such as vitiligo, skin burns, and wrinkles. We classified the three degrees of burns using a KNN classifier with 60% accuracy, and we succeeded in segmenting the vitiligo area. Our future work will include working on more skin diseases, classifying them, and predicting the post-surgical appearance; we will also go deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project will give a prediction that relates strongly to the real post-surgical appearance and will reduce differences in diagnosis among doctors. Significance: The mirror may have broad societal appeal as it will narrow the distance between patient satisfaction and medical standards. Keywords: k-nearest neighbor (knn), face detection, vitiligo, bone deformity
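The burn-degree classification step mentioned above (KNN, reported at about 60% accuracy) can be sketched with scikit-learn; the image feature extraction is omitted and the arrays below are placeholders, not the project's data.

```python
# Sketch of a k-nearest-neighbour classifier for burn-degree labels (1st/2nd/3rd degree).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X = np.load("burn_image_features.npy")   # e.g. colour/texture features per image (placeholder)
y = np.load("burn_degree_labels.npy")    # labels 1, 2 or 3 (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5)   # k chosen only for illustration
knn.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```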
Procedia PDF Downloads 164
3568 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers developed various ensemble modeling techniques to combine several individual forecasts to produce an overall presumably better forecast. There exist some simple ensemble modeling techniques in literature. For instance, Model Output Statistics (MOS), and running mean-bias removal are widely used techniques in storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and it needs a long period of training data. To overcome the shortcomings of these simple methods, researchers propose some advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast. This application creates a better forecast of sea level using a combination of several instances of the Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. Therefore, we need to identify the single best forecast. We present a methodology based on a simple Bayesian selection method to select the best single forecast. Second, we present several new and simple ways to construct ensemble models. We use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare with several existing models in literature to forecast storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and widely used ensemble model) as benchmark. Predicting the peak level of Surge during a storm as well as the precise time at which this peak level takes place is crucial, thus we develop a statistical platform to compare the performance of various ensemble methods. This statistical analysis is based on root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual time and peak. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally have better performance compared to the simple average ensemble technique.Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
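The weighting schemes described above (a plain-average benchmark versus weights derived from each member's skill) can be sketched as follows. One plausible reading of "correlation and standard deviation as weights" is correlation-proportional weights and inverse error-standard-deviation weights, which is the assumption made here; the toy arrays stand in for the NYHOPS member forecasts and observed surge.

```python
# Sketch: combining several surge forecasts with a simple average vs. skill-based weights.
import numpy as np

obs = np.array([0.4, 0.7, 1.2, 1.6, 1.1])                    # observed surge (placeholder)
members = np.array([[0.5, 0.8, 1.1, 1.5, 1.0],               # forecast model A
                    [0.3, 0.6, 1.4, 1.8, 1.3],               # forecast model B
                    [0.6, 0.9, 1.0, 1.4, 0.9]])              # forecast model C

simple_avg = members.mean(axis=0)                            # benchmark ensemble

corr = np.array([np.corrcoef(m, obs)[0, 1] for m in members])
w_corr = corr / corr.sum()                                   # correlation-based weights
ens_corr = w_corr @ members

inv_std = 1.0 / np.array([np.std(m - obs) for m in members])
w_std = inv_std / inv_std.sum()                              # inverse error-std weights
ens_std = w_std @ members

rmse = lambda f: np.sqrt(np.mean((f - obs) ** 2))
print(rmse(simple_avg), rmse(ens_corr), rmse(ens_std))
```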
Procedia PDF Downloads 309
3567 Artificial Neural Networks and Geographic Information Systems for Coastal Erosion Prediction
Authors: Angeliki Peponi, Paulo Morgado, Jorge Trindade
Abstract:
Artificial Neural Networks (ANNs) and Geographic Information Systems (GIS) are applied as a robust tool for modeling and forecasting the erosion changes in Costa Caparica, Lisbon, Portugal, for 2021. ANNs present noteworthy advantages compared with other methods used for prediction and decision making in urban coastal areas. Multilayer perceptron type of ANNs was used. Sensitivity analysis was conducted on natural and social forces and dynamic relations in the dune-beach system of the study area. Variations in network’s parameters were performed in order to select the optimum topology of the network. The developed methodology appears fitted to reality; however further steps would make it better suited.Keywords: artificial neural networks, backpropagation, coastal urban zones, erosion prediction
Procedia PDF Downloads 392
3566 A Novel Approach to Design of EDDR Architecture for High Speed Motion Estimation Testing Applications
Authors: T. Gangadhararao, K. Krishna Kishore
Abstract:
Motion Estimation (ME) plays a critical role in a video coder, testing such a module is of priority concern. While focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to embed into ME for video coding testing applications. An error in processing Elements (PEs), i.e. key components of a ME, can be detected and recovered effectively by using the proposed EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty.Keywords: area overhead, data recovery, error detection, motion estimation, reliability, residue-and-quotient (RQ) code
Procedia PDF Downloads 432
3565 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e. individual anomalies) or as clusters (i.e. a colony of corrosion anomalies). Although the ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due limitations of the tools and associated sizing algorithms, and detection threshold of the tools (i.e. the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and developing maintenance strategies that satisfy the safety and economic constraints. Studies on the measurement error associated with the length of the corrosion anomalies (in the longitudinal direction of the pipeline) has been scarcely reported in the literature and will be investigated in the present study. Limitations in the ILI tool and clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors for relatively high uncertainties associated with ILI reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of the anomalies without clustering error, denoted as Type I anomalies is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify the Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
Procedia PDF Downloads 309
3564 Stock Price Prediction Using Time Series Algorithms
Authors: Sumit Sen, Sohan Khedekar, Umang Shinde, Shivam Bhargava
Abstract:
This study has been undertaken to investigate whether deep learning models are able to predict future stock prices when trained on historical stock price data. Since this work requires time series analysis, several models available today were considered, such as the Recurrent Neural Network LSTM, ARIMA, and Facebook Prophet. Applying these models, the movement of a stock's price is predicted and a forecast of its future price is provided. The final product is a stock price prediction web application developed to give the user easy analysis of stocks and to provide the predicted stock price for the next seven days. Keywords: Autoregressive Integrated Moving Average, Deep Learning, Long Short Term Memory, Time-series
Procedia PDF Downloads 142
3563 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is identified to be dependent on the process parameters. Moreover, a certain variation of friction and material properties may have a substantially worse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing sensitivity to noise parameter variations. This can be attained by a reliability-based robust optimization (RRO) technique to obtain the optimal process setting of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses are then carried out in the commercial software ABAQUS to obtain the correlation between the control process setting and noise variation with regard to the product defects. The best prediction models are chosen from the family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal process settings of the control parameters. Monte Carlo Analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental result show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product. Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
Procedia PDF Downloads 254
3562 An Error Analysis of English Communication of Suan Sunandha Rajabhat University Students
Authors: Chantima Wangsomchok
Abstract:
The main purposes of this study are (1) to test the students’ communicative competence within six main functions: greeting, parting, thanking, offering, requesting and suggesting, (2) to employ error analysis in the students’ communicative competence within those functions, and (3) to compare the characteristics of the error found from the investigation. The subjects of the study is 328 first-year undergraduates taking the Foundation English course in the first semester of the 2008 academic year at Suan Sunandha Rajabhat University. This study found that while the subjects showed high communicative competence in the use of the following three functions: greeting, thanking, and offering, they seemed to show poor communicative competence in suggesting, requesting and parting instead. In addition, this study found that the grammatical errors were likely to be most frequently found in the parting function. In the same way, the type of errors which were less frequently found was in the functions of thanking and requesting respectively. Instead, the students tended to have high pragmatic failure in the use of greeting and suggesting functions.Keywords: error analysis, functions of English language, communicative competence, cognitive science
Procedia PDF Downloads 431
3561 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction
Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar
Abstract:
In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through the ARCH/GARCH (autoregressive conditional heteroskedasticity, generalized autoregressive conditional heteroskedasticity) test. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast future EEG values. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of the selected ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we suggest a procedure to predict the oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes. Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG
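The two-stage procedure described above (fit an ARIMA model to the EEG series, then fit a GARCH model to its residuals and monitor the conditional variance) can be sketched in Python; the model orders and the synthetic series are assumptions made for the illustration, not the paper's AIC-selected orders or data.

```python
# Sketch of ARIMA-GARCH: ARIMA on the EEG series, GARCH on the ARIMA residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

eeg = np.cumsum(np.random.randn(2000))        # placeholder for a pre-processed EEG segment

arima_res = ARIMA(eeg, order=(2, 1, 1)).fit() # order chosen only for illustration
residuals = arima_res.resid

garch = arch_model(residuals, mean="Zero", vol="GARCH", p=1, q=1)
garch_res = garch.fit(disp="off")

cond_vol = garch_res.conditional_volatility    # rising volatility is the proposed seizure precursor
print(cond_vol[-5:])
```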
Procedia PDF Downloads 406
3560 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue for planning wind power generation facilities. The accuracy in the wind speed prediction allows a good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data was downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered for three months, with time intervals of ten minutes. This dataset was used to develop an iterative algorithm to create 1,110 ANNs, with different configurations, starting from one to three hidden layers and every hidden layer with a number of neurons from 1 to 10. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers and 9, 6, and 5 neurons, respectively; and the coefficient of determination obtained was r²=0.9414, and the Root Mean Squared Error is 1.0559. In summary, the ANN approach is suitable to predict the wind speed in Pachuca City because the r² value denotes a good fitting of gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
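The configuration sweep described above (one to three hidden layers with 1 to 10 neurons per layer, which is 10 + 100 + 1000 = 1,110 architectures) can be approximated in Python as below; scikit-learn does not provide Levenberg-Marquardt training, so the sketch uses its default solver, and the data files are placeholders.

```python
# Sketch of the 1,110-configuration sweep over hidden-layer architectures (slow; shown for structure).
from itertools import product
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X = np.load("weather_features.npy")   # temperature, pressure, wind direction, ... (placeholder)
y = np.load("wind_speed.npy")         # target wind speed (placeholder)

best = (None, -np.inf)
for depth in (1, 2, 3):
    for sizes in product(range(1, 11), repeat=depth):
        mlp = MLPRegressor(hidden_layer_sizes=sizes, max_iter=2000, random_state=0)
        score = cross_val_score(mlp, X, y, cv=3, scoring="r2").mean()
        if score > best[1]:
            best = (sizes, score)

print("best architecture:", best[0], "cv R2:", best[1])
```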
Procedia PDF Downloads 124
3559 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, the localized heavy rainfall and typhoons are frequently occurred due to the climate change and the damage is becoming bigger. Therefore, we may need a more accurate prediction of the rainfall and runoff. However, the gauge rainfall has the limited accuracy in space. Radar rainfall is better than gauge rainfall for the explanation of the spatial variability of rainfall but it is mostly underestimated with the uncertainty involved. Therefore, the ensemble of radar rainfall was simulated using error structure to overcome the uncertainty and gauge rainfall. The simulated ensemble was used as the input data of the rainfall-runoff models for obtaining the ensemble of runoff hydrographs. The previous studies discussed about the accuracy of the rainfall-runoff model. Even if the same input data such as rainfall is used for the runoff analysis using the models in the same basin, the models can have different results because of the uncertainty involved in the models. Therefore, we used two models of the SSARR model which is the lumped model, and the Vflo model which is a distributed model and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in Han river basin and we obtained one integrated runoff hydrograph which is an optimum runoff hydrograph using the blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), Mean Square Error (MSE). From this study, we could confirm the accuracy of rainfall and rainfall-runoff model using ensemble scenario and various rainfall-runoff model and we can use this result to study flood control measure due to climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement(KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
Procedia PDF Downloads 280
3558 Exploring Bidirectional Encoder Representations from the Transformers' Capabilities to Detect English Preposition Errors
Authors: Dylan Elliott, Katya Pertsova
Abstract:
Preposition errors are some of the most common errors created by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the bidirectional encoder representations from the transformers model (BERT) have the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT’s ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT’s performance. The suggestions were further analyzed to determine if BERT more often agreed with the judgements of the Wikipedia editors. Both the untrained and fined-tuned models were compared. Finetuning led to a greater rate of error-detection which significantly improved recall, but lowered precision due to an increase in false positives or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT’s ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT’s false positives were grammatical suggestions, we plan to do a further crowd-sourcing study to test the grammaticality of BERT’s suggested sentence corrections against native speakers’ judgments.Keywords: BERT, grammatical error correction, preposition error detection, prepositions
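The masking idea described above can be illustrated with the Hugging Face fill-mask pipeline: mask the preposition slot and compare BERT's top suggestions with the writer's original word. This is a simplified single-mask sketch; the study used multi-level masking over every prepositional environment, and the example sentence is invented.

```python
# Sketch: use BERT's masked-language-model head to suggest prepositions for a slot.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "She has been living [MASK] London since 2010."   # preposition slot masked
for suggestion in fill(sentence, top_k=5):
    print(f"{suggestion['token_str']:>8}  {suggestion['score']:.3f}")

# If the writer's original preposition is absent from (or ranked low in) the suggestions,
# the slot can be flagged as a possible preposition error.
```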
Procedia PDF Downloads 147
3557 Prediction of Energy Storage Areas for Static Photovoltaic System Using Irradiation and Regression Modelling
Authors: Kisan Sarda, Bhavika Shingote
Abstract:
This paper aims to evaluate regression modelling for prediction of Energy storage of solar photovoltaic (PV) system using Semi parametric regression techniques because there are some parameters which are known while there are some unknown parameters like humidity, dust etc. Here irradiation of solar energy is different for different places on the basis of Latitudes, so by finding out areas which give more storage we can implement PV systems at those places and our need of energy will be fulfilled. This regression modelling is done for daily, monthly and seasonal prediction of solar energy storage. In this, we have used R modules for designing the algorithm. This algorithm will give the best comparative results than other regression models for the solar PV cell energy storage.Keywords: semi parametric regression, photovoltaic (PV) system, regression modelling, irradiation
Procedia PDF Downloads 382
3556 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP), Malaysia, reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system could be considered not well-posed in terms of power tracking performance; however, there is still potential to improve current performance by developing a next-generation, novel design of nuclear core power control. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which deals with summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses related to MPC. This paper shows the ability of OMPC to deal with tracking, the regulation problem, and disturbance rejection, and to cater for parameter uncertainty. Tracking and regulating performance are compared between the conventional controller and OMPC by numerical simulations. In conclusion, the proposed OMPC has shown significant performance in load tracking and in regulating the core power of the nuclear reactor, with guaranteed stability in the closed loop. Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
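The terminal-mode ingredient of the dual-mode scheme, an infinite-horizon LQR feedback law, can be computed for a small state-space model as follows; the matrices below are placeholders, not the reactor's neutronic or thermal-hydraulic model.

```python
# Sketch: discrete-time LQR gain used as the terminal (mode-2) control law in dual-mode MPC.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])      # placeholder state matrix
B = np.array([[0.0], [0.1]])                # placeholder input matrix
Q = np.diag([10.0, 1.0])                    # state weights (tracking error penalised)
R = np.array([[0.5]])                       # control-effort weight

P = solve_discrete_are(A, B, Q, R)          # solves the discrete algebraic Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k beyond the prediction horizon

print("LQR gain K:", K)
```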
Procedia PDF Downloads 162
3555 Knowledge Required for Avoiding Lexical Errors at Machine Translation
Authors: Yukiko Sasaki Alam
Abstract:
This research aims at finding out the causes that led to wrong lexical selections in machine translation (MT) rather than categorizing lexical errors, which has been a main practice in error analysis. By manually examining and analyzing lexical errors outputted by a MT system, it suggests what knowledge would help the system reduce lexical errors.Keywords: machine translation, error analysis, lexical errors, evaluation
Procedia PDF Downloads 338
3554 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh
Authors: Husain Rajib, K. S. Kishor, D. G. Jewel
Abstract:
Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group and can lead to blindness, but early detection of visual impairment can reduce the problem, with good effects on education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, the patient must exhibit good compliance with spectacle wear; the patient's attitude towards and perception of glasses and eye health could affect compliance. Material and method: A prospective, community-based, cross-sectional study was designed to evaluate the knowledge, attitude, and practices regarding refractive errors and eye health amongst primary school children. Result: Among 140 respondents, 72 were male and 68 were female. We found that 50 children were myopic (26 male, 24 female), 27 children were hyperopic (14 male, 13 female), and about 63 children were astigmatic (32 male, 31 female). The level of knowledge and attitude was satisfactory. The attitude of the students, teachers, and parents was cooperative, which helped in performing cycloplegic refraction. Practice was not satisfactory due to social stigma and an information gap. Conclusion: Knowledge of refractive error and acceptance of glasses are key to the correction of uncorrected refractive error. Public awareness programs such as vision screening programs, eye camps, and teacher training programs would be beneficial for the wearing and prescribing of spectacles. Keywords: refractive error, stigma, knowledge, attitude, practice
Procedia PDF Downloads 264
3553 Legal Judgment Prediction through Indictments via Data Visualization in Chinese
Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun
Abstract:
Legal Judgment Prediction (LJP) is a subtask for legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of the case, they will decide whether to prosecute the suspect and which article of criminal law should be used based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn the prediction model for law source, article, and charge simultaneously based on the pre-trained Bert model. For single article cases where the frequency of the charge and article are greater than 50, the prediction performance of law sources, articles, and charges reach 97.66, 92.22, and 60.52 macro-f1, respectively. To understand the big performance gap between articles and charges, we used a bipartite graph to visualize the relationship between the articles and charges, and found that the reason for the poor prediction performance was actually due to the wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charges more specific. For example, Article 284 of the Criminal Law may be indicted as “negligent injury”, "negligent death”, "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as “Drug Control Regulations” or “Drug Hazard Control Regulations”. In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input, and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization
Procedia PDF Downloads 121