Search results for: discretization error

1664 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks

Authors: Emad A. Mohammed

Abstract:

The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. The concept is closely tied to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on the porosity and permeability of core samples, and core samples with similar FZI values are assumed to belong to the same HFU. Thus, after dividing the porosity-permeability data by HFU, transformations can be applied to estimate permeability from porosity. The conventional practice is a power-law transformation on conventional HFUs, for which the percentage of error is considerably high. In this paper, a neural network technique is employed as a soft-computing transformation method to predict permeability instead of the power-law method, in order to avoid this higher percentage of error. The technique is based on HFU identification following the method of Amaefule et al. (1993). In this regard, the Kozeny-Carman (K-C) model and the modified K-C model of Hasan and Hossain (2011) are employed, and a comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model yields better results, with a lower percentage of error in predicting permeability. The results also show that artificial intelligence techniques give more accurate predictions than the power-law method. The study was conducted on a heterogeneous, complex carbonate reservoir in Oman; data were collected from seven wells to obtain permeability correlations for the whole field. The findings of this study will help in obtaining better permeability estimates for a complex reservoir.
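
As a concrete illustration of the indices above, the following minimal sketch (synthetic core data with hypothetical values, not the paper's measurements) computes RQI, normalized porosity, and FZI following Amaefule et al. (1993), groups the samples into HFUs, and fits the conventional power-law transform within each unit:

```python
import numpy as np

# Synthetic core data (hypothetical values): porosity (fraction) and permeability (mD)
phi = np.array([0.08, 0.12, 0.15, 0.18, 0.22, 0.25])
k   = np.array([0.5,  3.0,  12.0, 40.0, 150.0, 420.0])

# Reservoir quality index: RQI = 0.0314 * sqrt(k / phi), with k in mD, phi as a fraction
rqi = 0.0314 * np.sqrt(k / phi)

# Normalized porosity (pore-to-matrix ratio): phi_z = phi / (1 - phi)
phi_z = phi / (1.0 - phi)

# Flow zone indicator: FZI = RQI / phi_z; samples with similar FZI share an HFU
fzi = rqi / phi_z
hfu = np.digitize(fzi, bins=np.quantile(fzi, [0.33, 0.66]))  # crude 3-unit grouping

# Within each HFU, the conventional power-law transform k = a * phi**b is
# fitted as a straight line in log-log space
for u in np.unique(hfu):
    m = hfu == u
    b, log_a = np.polyfit(np.log(phi[m]), np.log(k[m]), 1)
    print(f"HFU {u}: k ~ {np.exp(log_a):.3g} * phi^{b:.2f}")
```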

Keywords: permeability, hydraulic flow units, artificial intelligence, correlation

Procedia PDF Downloads 140
1663 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction

Authors: Kudzanayi Chiteka, Wellington Makondo

Abstract:

The whole world is on a drive to go green owing to the negative effects of burning fossil fuels, so there is an immediate need to identify and utilise alternative renewable energy sources. Among these energy sources, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity are entirely dependent on solar radiation. For planning purposes, solar radiation values should be known in advance so that arrangements can be made to minimise the negative effects of its absence due to cloud cover and other naturally occurring phenomena. This research focused on predicting Global Horizontal Irradiation (GHI) values for the sixth day given values for the past five days. Artificial intelligence techniques were used: three models were developed, based on Support Vector Machines, Radial Basis Function networks, and a Feed-Forward Back-Propagation artificial neural network. Results revealed that Support Vector Machines give the best results of the three, with a mean absolute percentage error (MAPE) of 2%, a Mean Absolute Error (MAE) of 0.05 kWh/m²/day, a root mean square error (RMSE) of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other two predictive models had MAPEs of 4.5% and 6%, respectively, for the Radial Basis Function and Feed-Forward Back-Propagation networks, with coefficients of determination of 0.975 and 0.970, respectively. It was found that prediction of GHI values for future days is possible using artificial-intelligence-based predictive models.
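
A minimal sketch of the best-performing model family reported above — a support-vector regressor predicting the sixth day of GHI from the previous five — is shown below. The data and the library choice (scikit-learn) are assumptions for illustration, not the authors' code:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
# Synthetic daily GHI series (kWh/m2/day) with a seasonal cycle plus noise
ghi = 5.0 + 1.5 * np.sin(np.arange(400) * 2 * np.pi / 365) + rng.normal(0, 0.3, 400)

# Sliding windows: five past days as features, the sixth day as target
X = np.stack([ghi[i:i + 5] for i in range(len(ghi) - 5)])
y = ghi[5:]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("MAPE:", 100 * mean_absolute_percentage_error(y_te, pred), "%")
print("RMSE:", np.sqrt(np.mean((y_te - pred) ** 2)))
```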

Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models

Procedia PDF Downloads 276
1662 Comparison between Some Robust Regression Methods and the OLS Method, with an Application

Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq

Abstract:

The classic least squares (OLS) method for estimating linear regression parameters has good properties, such as unbiasedness, minimum variance, and consistency, when its assumptions hold. When the data are contaminated with outliers, alternative statistical techniques, known as robust (or resistant) methods, have been developed to estimate the parameters. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator), and the Least Trimmed Squares estimator (LTS-estimator); their results are compared with the OLS method. The methods were applied to real data taken from the Duhok company for manufacturing furniture, and the results were compared using three criteria: Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Sum of Absolute Error (MSAE). Important conclusions of this study are: the atypical values detected by the four methods in the furniture line are close to the rest of the data, indicating that the error distribution there is close to normal; in the doors-line data, however, OLS detects fewer atypical values than the robust methods, meaning that the error distribution departs far from normality. Another important conclusion is that the parameter estimates obtained by OLS are very far from those obtained by the robust methods for the doors line: the LTS-estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and the MM-estimator was better under the MSAE criterion. The packages S-Plus (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data.
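
The OLS-versus-robust comparison can be sketched as follows for the M-estimator case (statsmodels' RLM with a Huber norm, on outlier-contaminated synthetic data; the MM- and LTS-estimators used in the paper are available elsewhere, e.g., in R's robustbase, and are not shown):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 60)
y[:6] += 8.0                                   # contaminate six points with outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                 # classic least squares
m_est = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimator

for name, fit in [("OLS", ols), ("M-estimator", m_est)]:
    resid = y - fit.predict(X)
    print(f"{name:12s} params={np.round(fit.params, 3)} "
          f"MSE={np.mean(resid**2):.3f} MAE={np.mean(np.abs(resid)):.3f}")
```

The M-estimator downweights the six contaminated points, so its parameter estimates stay near the true (2.0, 0.5) while OLS is pulled toward the outliers.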

Keywords: robust regression, LTS, M-estimator, MSE

Procedia PDF Downloads 236
1661 Performance Analysis of the Pressure and Production of an Oil Zone by Simulation of the Flow of a Fluid through Porous Media

Authors: Makhlouf Mourad, Medkour Mihoub, Bouchher Omar, Messabih Sidi Mohamed, Benrachedi Khaled

Abstract:

This work concerns the modeling and simulation of the flow of a fluid (liquid) through porous media. This type of flow occurs in many situations of interest in applied science and engineering. The fluid (oil) consists of several pure substances; the flow is single-phase, incompressible, and isothermal. The porous medium is isotropic and homogeneous, with a rectangular geometry, and the flow is two-dimensional. The modeling of the hydrodynamic phenomena incorporates Darcy's law and the mass conservation equation; correlations are used to model the density and viscosity of the fluid. A finite volume code is used for the discretization of the differential equations, and the nonlinearity is treated by Newton's method with a relaxation coefficient. The simulated results for the pressure and the mobility of the liquid flowing through the porous media are presented, analyzed, and illustrated.
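
The building blocks named above — Darcy's law plus mass conservation, discretized with finite volumes — reduce, for a constant-property incompressible fluid, to a linear pressure equation. The sketch below solves that reduced problem on a rectangular 2D grid (the paper's density/viscosity correlations and Newton iteration are omitted; all values are hypothetical):

```python
import numpy as np

# Grid and rock/fluid properties (hypothetical values)
nx, ny, h = 20, 20, 1.0          # cells and cell size (m)
k_perm = 1e-13                   # permeability (m^2), homogeneous and isotropic
mu = 1e-3                        # viscosity (Pa.s)
lam = k_perm / mu                # mobility in Darcy's law: u = -lam * grad(p)

# Mass conservation div(u) = 0 over each cell yields a 5-point linear system.
p = np.zeros((ny, nx))

# Jacobi-style finite-volume iteration (equal transmissibilities on a uniform grid)
for _ in range(5000):
    p[0, :], p[-1, :] = p[1, :], p[-2, :]      # no-flow (Neumann) top/bottom
    p[:, 0], p[:, -1] = 2.0e7, 1.0e7           # fixed pressures left/right (Pa)
    p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:] + p[:-2, 1:-1] + p[2:, 1:-1])

# Darcy velocity at cell faces (x-direction)
ux = -lam * (p[:, 1:] - p[:, :-1]) / h
print("max |ux| =", np.abs(ux).max(), "m/s")
```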

Keywords: Darcy equation, porous medium, continuity equation, Peng-Robinson equation, mobility

Procedia PDF Downloads 222
1660 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa

Authors: Maria Bonolo Mathevula

Abstract:

Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual Perceptual Skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Children should therefore undergo visual screening before commencing school, for early diagnosis of VPS anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPS anomalies amongst school children in the Foundation Phase, to determine the relationship between children's academic performance and VPS anomalies, and to investigate the relationship between VPS anomalies and refractive error. Methodology: This was a mixed-methods study triangulating qualitative (interviews) and quantitative (questionnaire and clinical data) approaches, and was therefore descriptive by nature. The target population was school children in the Foundation Phase, selected by purposive sampling; children formed part of the study provided their parents had given signed consent. Data were collected using standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, the preliminary outcomes based on data collected from one of the Foundation Phase schools suggest the following: while VPS anomalies are not prevalent, they nevertheless have an indirect relationship with children's academic performance in the Foundation Phase. Notably, VPS anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests. Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have much to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before they commence their grades in the Foundation Phase.

Keywords: foundation phase, visual perceptual skills, school children, refractive error

Procedia PDF Downloads 105
1659 Discretization of Cuckoo Optimization Algorithm for Solving Quadratic Assignment Problems

Authors: Elham Kazemi

Abstract:

The Quadratic Assignment Problem (QAP) is one of the combinatorial optimization problems, studied in many organizations, of allocating a set of facilities to a set of locations. The issue of particular importance in this process is the cost of the allocation, and the aim is to minimize this group of costs. Since QAPs are NP-hard, they cannot be solved by exact solution methods at realistic sizes. The Cuckoo Optimization Algorithm is a meta-heuristic method with a high capability of finding globally optimal points, but it was originally devised to search a continuous space. Because the QAP is solved in a discrete space, the standard arithmetic operators of the Cuckoo Optimization Algorithm need to be redefined on the discrete space in order to apply the algorithm to a discrete search space. This paper presents a way of discretizing the Cuckoo Optimization Algorithm for solving the quadratic assignment problem.
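
The redefinition the abstract calls for can be sketched with one common choice of discrete operator: a "step toward" a target permutation, implemented as a fraction of the swaps separating two assignments. This is a generic discretization pattern, not necessarily the authors' exact operator:

```python
import numpy as np

rng = np.random.default_rng(2)

def qap_cost(perm, F, D):
    # QAP objective: sum_ij F[i, j] * D[perm[i], perm[j]]
    return np.sum(F * D[np.ix_(perm, perm)])

def move_toward(perm, goal, frac=0.5):
    """Discrete analogue of a continuous step: apply a fraction of the swaps
    that transform `perm` into `goal` (one common way to discretize moves)."""
    p = perm.copy()
    mismatches = [i for i in range(len(p)) if p[i] != goal[i]]
    for i in mismatches[: max(1, int(frac * len(mismatches)))]:
        j = np.where(p == goal[i])[0][0]
        p[i], p[j] = p[j], p[i]
    return p

n = 8
F = rng.integers(0, 10, (n, n))
D = rng.integers(0, 10, (n, n))
best = rng.permutation(n)
for _ in range(500):                  # simplified single-nest search loop
    cand = move_toward(rng.permutation(n), best)
    if qap_cost(cand, F, D) < qap_cost(best, F, D):
        best = cand
print("best cost:", qap_cost(best, F, D), "assignment:", best)
```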

Keywords: Quadratic Assignment Problem (QAP), Discrete Cuckoo Optimization Algorithm (DCOA), meta-heuristic algorithms, optimization algorithms

Procedia PDF Downloads 524
1658 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller

Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland

Abstract:

This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. Efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely a haptic hand controller (master/local robot) can interpret the kinematic indices of motion, i.e., position and orientation, from the surgeon's upper limb to the slave/remote robot. A test rig was designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: the central workspace, extreme forward, far left, and far right. The test rig was metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ±5 µm). Only the serial linkage of the haptic device is examined, due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium™ 3.0 and is more accurate/precise than the PHANToM Premium™ 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy, or δz) for movements on horizontal, vertical, and inclined surfaces. The average error magnitude over three attempts is recorded, considering all three error components. This research is the first promising step towards quantifying the kinematic performance of Excalibur I.

Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace

Procedia PDF Downloads 340
1657 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network

Authors: Biruhi Tesfaye, Avinash M. Potdar

Abstract:

The laboratory test process for determining the California bearing ratio (CBR) of black cotton soils is not only costly but also time-consuming; hence, advance prediction of the CBR plays a significant role in pavement design. In this study, the CBR of treated soil was predicted using Artificial Neural Networks (ANNs), a computational tool based on the properties of the biological neural system. To observe CBR values, combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples, and the corresponding laboratory tests were conducted to obtain the data for the model. The maximum CBR value of 5.8 was found at 8% eggshell/waste-glass powder addition. The model was developed with CBR as the output-layer variable, considered as a function of the joint effect of the liquid limit, plastic limit, plasticity index, optimum moisture content, and maximum dry density. The best model found was an ANN with 5, 6, and 1 neurons in the input, hidden, and output layers, respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353, and 0.0067 for the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE), and root mean square error (RMSE), respectively. The research summarized above points to future work on stabilization with waste glass combined with different percentages of eggshell, leading to economical designs with CBR values acceptable for pavement sub-base or base courses.
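
A minimal sketch of the 5-6-1 network reported above, with scikit-learn's MLPRegressor standing in for the authors' tool and placeholder data in place of the laboratory measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Five inputs per sample: liquid limit, plastic limit, plasticity index, OMC, MDD
rng = np.random.default_rng(3)
X = rng.uniform([30, 15, 10, 12, 1.4], [70, 35, 40, 25, 1.9], size=(40, 5))
y = 2.0 + 5.0 * X[:, 4] - 0.02 * X[:, 2] + rng.normal(0, 0.1, 40)  # placeholder CBR

# 5 input features -> one hidden layer of 6 neurons -> 1 output (CBR)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
rmse = np.sqrt(np.mean((y - pred) ** 2))
print("R =", np.corrcoef(y, pred)[0, 1].round(5), " RMSE =", rmse.round(4))
```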

Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC

Procedia PDF Downloads 196
1656 The Study of Formal and Semantic Errors of Lexis by Persian EFL Learners

Authors: Mohammad J. Rezai, Fereshteh Davarpanah

Abstract:

Producing a text in a language which is not one's mother tongue can be a demanding task for language learners. Examining lexical errors committed by EFL learners is a challenging area of investigation which can shed light on the process of second language acquisition. Despite the considerable number of investigations into grammatical errors, few studies have tackled formal and semantic errors of lexis committed by EFL learners. The current study aimed at examining Persian learners' formal and semantic errors of lexis in English. To this end, 60 students at three different proficiency levels were asked to write on 10 different topics in 10 separate sessions, and 600 essays written by Persian EFL learners were collected, acting as the corpus of the study. An error taxonomy comprising formal and semantic errors was selected to analyze the corpus: the formal category covered misselection and misformation errors, while the semantic errors were classified into lexical, collocational, and lexicogrammatical categories. Each category was further classified into subcategories depending on the identified errors. The results showed that there were 2583 errors in the corpus of 9600 words, among which 2030 formal errors and 553 semantic errors were identified. Formal errors were the most frequent in the corpus (78.6%) and were more prevalent at the advanced level (42.4%); the semantic errors (21.4%) were more frequent at the low-intermediate level (40.5%). Among formal errors of lexis, the highest number of errors was devoted to misformation (98%), while misselection errors constituted 2%. Additionally, no significant differences were observed among the three semantic error subcategories, namely collocational, lexical choice, and lexicogrammatical. The results of the study can shed light on the challenges faced by EFL learners in the second language acquisition process.

Keywords: collocational errors, lexical errors, Persian EFL learners, semantic errors

Procedia PDF Downloads 146
1655 Continuous Wave Interference Effects on Global Position System Signal Quality

Authors: Fang Ye, Han Yu, Yibing Li

Abstract:

Radio interference is one of the major concerns in using the global positioning system (GPS) for civilian and military applications. Interference signals are produced not only by electronic systems of all kinds but also by illegal jammers. Among the different types of interference, continuous wave (CW) interference has a strong adverse impact on the quality of the received signal. In this paper, we present a more detailed analysis of CW interference effects on GPS signal quality. Based on the C/A code spectrum lines, the influence of CW interference on the acquisition performance of GPS receivers is analysed, and this influence is supported by simulation results using a GPS software receiver. As the most important user-facing parameter of GPS receivers, the mathematical expression for the bit error probability is also derived in the presence of CW interference, and the expression is consistent with Monte Carlo simulation results. This research on CW interference provides a theoretical basis and new ideas for monitoring the radio noise environment and improving the anti-jamming ability of GPS receivers.
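
The Monte Carlo comparison mentioned above can be reproduced in outline for a BPSK-modulated signal corrupted by AWGN plus a CW tone (a simplified baseband model, not the full GPS C/A receiver chain; the Eb/N0 and jammer-to-signal values are assumptions):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)
n_bits = 200_000
ebn0_db, jsr_db = 6.0, 3.0                  # Eb/N0 and jammer-to-signal ratio (assumed)

bits = rng.integers(0, 2, n_bits)
s = 2.0 * bits - 1.0                        # BPSK symbols, unit energy per bit

ebn0 = 10 ** (ebn0_db / 10)
noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)

# CW interferer: a tone at a small frequency offset, sampled once per bit
a_j = np.sqrt(10 ** (jsr_db / 10))
cw = a_j * np.cos(2 * np.pi * 0.01 * np.arange(n_bits) + rng.uniform(0, 2 * np.pi))

r = s + noise + cw
ber = np.mean((r > 0).astype(int) != bits)
print(f"Monte Carlo BER with CW interference: {ber:.4e}")

# AWGN-only theoretical reference: Pb = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))
print(f"AWGN-only reference:                  {0.5 * erfc(sqrt(ebn0)):.4e}")
```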

Keywords: GPS, CW interference, acquisition performance, bit error probability, Monte Carlo

Procedia PDF Downloads 265
1654 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques becomes challenging in the presence of too many explanatory variables or features. Too many features in machine learning not only slow algorithms down but can also decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forests, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data-partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (R²) and the root mean square error (RMSE).
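
The wrapper selection described above is available in Python as BorutaPy (the package choice here is an assumption; the study may equally have used R's Boruta). A minimal sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
from boruta import BorutaPy   # pip install Boruta

# Synthetic stand-in for the housing data: 20 features, only 5 informative
X, y = make_regression(n_samples=300, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)   # compares real features against shuffled "shadow" copies

print("confirmed features:", np.where(selector.support_)[0])

# Refit a random forest on the confirmed subset only
rf_sel = RandomForestRegressor(n_estimators=200, random_state=0)
rf_sel.fit(X[:, selector.support_], y)
print("R^2 on training data:", rf_sel.score(X[:, selector.support_], y).round(3))
```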

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 327
1653 The Link between Money Market and Economic Growth in Nigeria: Vector Error Correction Model Approach

Authors: Uyi Kizito Ehigiamusoe

Abstract:

The paper examines the impact of the money market on economic growth in Nigeria using data for the period 1980-2012. Econometric techniques such as the Ordinary Least Squares method, Johansen's co-integration test, and a Vector Error Correction Model were used to examine both the long-run and short-run relationships. Evidence from the study suggests that although a long-run relationship exists between the money market and economic growth, the present state of the Nigerian money market is significantly and negatively related to economic growth. The link between the money market and the real sector of the economy remains very weak, implying that the market is not yet developed enough to produce the growth needed to propel the Nigerian economy, owing to several challenges. It is therefore recommended that the government put in place appropriate macroeconomic policies and a legal framework, and sustain the present reforms, with a view to developing the market so as to promote productive activities, investment, and ultimately economic growth.
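
The estimation pipeline named above — a Johansen co-integration test followed by a VECM — maps directly onto statsmodels. A minimal sketch on synthetic co-integrated series standing in for the Nigerian money market and growth data:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(5)
n = 120
# Two synthetic I(1) series sharing a common stochastic trend (co-integrated)
trend = np.cumsum(rng.normal(0, 1, n))
y1 = trend + rng.normal(0, 0.5, n)            # stand-in: money market aggregate
y2 = 0.8 * trend + rng.normal(0, 0.5, n)      # stand-in: economic growth proxy
data = np.column_stack([y1, y2])

# Johansen test: trace statistics vs. critical values select the co-integration rank
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1.round(2), " 5% critical:", jres.cvt[:, 1].round(2))

# VECM with rank 1: alpha holds the error-correction (speed of adjustment) terms
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("error-correction coefficients (alpha):", vecm.alpha.ravel().round(3))
```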

Keywords: economic growth, investments, money market, money market challenges, money market instruments

Procedia PDF Downloads 350
1652 Modernization of the Economic Price Adjustment Software

Authors: Roger L. Goodwin

Abstract:

The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy, and many social programs and government benefits are indexed to the CPIs. In the mid-to-late 1990s, much research went into changes to the CPI by a Congressional Advisory Committee. One conclusion that can be drawn from that research is that, aside from the existence of alternative estimators for the CPI, any fundamental change to the CPI will affect many government programs. The purpose of this project is to modernize an existing process. This paper shows the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month differ, nor does it provide automated error checking. The small, visual software product provides this additional flexibility and error checking. The paper concludes with the feedback received on the project.
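
The flexibility gap described — a base month differing from the option month — reduces to a simple CPI ratio. The sketch below shows one common form of the adjustment with basic error checking; the formula and CPI values are illustrative assumptions, not the project's documented method:

```python
# Hypothetical CPI table keyed by (year, month); real values come from BLS series
cpi = {(2023, 1): 299.2, (2023, 7): 305.7, (2024, 1): 308.4}

def economic_price_adjustment(base_price, base_month, option_month, table):
    """Adjust a contract price by the CPI ratio between two arbitrary months,
    with the basic error checking the legacy workbook lacked."""
    for key in (base_month, option_month):
        if key not in table:
            raise KeyError(f"no CPI value on file for {key}")
    return base_price * table[option_month] / table[base_month]

print(economic_price_adjustment(1_000_000.0, (2023, 1), (2024, 1), cpi))
```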

Keywords: Consumer Price Index, Economic Price Adjustment, contracts, visualization tools, database, reports, forms, event procedures

Procedia PDF Downloads 321
1651 Soil Stress State under Tractive Tire and Compaction Model

Authors: Prathuang Usaborisut, Dithaporn Thungsotanon

Abstract:

Soil compaction induced by a tractor towing a trailer has become a major problem for sugarcane productivity. Soil beneath the tractor's tire is under not only compressive stress but also shear stress. Therefore, in order to understand these effects on soil, this research aimed to determine the stress state in soil and to predict the compaction of soil under a tractive tire. The octahedral stress ratios under the tires were higher than one, and much higher under higher draft forces; moreover, the ratio increased with the number of tire passes. A soil compaction model was developed using data acquired from triaxial tests and then used to predict soil bulk density under a tractive tire. The maximum error was about 4% at a depth of 15 cm under the lower draft force and tended to increase with depth and draft force; at a depth of 30 cm and under the higher draft force, the maximum error was about 16%.
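
The octahedral stresses behind the ratio quoted above follow from the standard invariant expressions; a short worked sketch from principal stresses (illustrative numbers, not the paper's measurements):

```python
import numpy as np

def octahedral(sig1, sig2, sig3):
    # Octahedral normal stress: mean of the principal stresses
    s_oct = (sig1 + sig2 + sig3) / 3.0
    # Octahedral shear stress from the principal-stress differences
    t_oct = np.sqrt((sig1 - sig2) ** 2 + (sig2 - sig3) ** 2
                    + (sig3 - sig1) ** 2) / 3.0
    return s_oct, t_oct

# Illustrative soil stresses under a tractive tire (kPa): compression plus shear
s_oct, t_oct = octahedral(180.0, 90.0, 60.0)
print(f"sigma_oct = {s_oct:.1f} kPa, tau_oct = {t_oct:.1f} kPa, "
      f"ratio tau/sigma = {t_oct / s_oct:.2f}")
```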

Keywords: draft force, soil compaction model, stress state, tractive tire

Procedia PDF Downloads 356
1650 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the nonlinear behavior of the wave height and the resulting complexity of its prediction. This study presents a machine learning model to forecast the significant wave height at the oceanographic wave-measuring buoys anchored at Mooloolaba, using the Queensland Government data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performing wrapper feature selection for the window width. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the optimization of the five-step-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with the ARIMA forecasting model and outperformed it on all performance criteria, validating the potential of this algorithm.
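
A compact sketch of the GA-over-MLP idea follows, reduced to truncation selection with Gaussian mutation over a (window width, neurons, learning rate) triple; the data are synthetic, not the Mooloolaba buoy records, and a full GA library (e.g., DEAP) would normally be used:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
t = np.arange(600)
hs = 1.5 + 0.5 * np.sin(t / 20) + rng.normal(0, 0.1, t.size)   # synthetic Hs series

def windows(series, width):
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    return X, series[width:]

def fitness(genes):
    width, neurons, lr = int(genes[0]), int(genes[1]), genes[2]
    X, y = windows(hs, width)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
    mlp = MLPRegressor((neurons,), learning_rate_init=lr, max_iter=800,
                       random_state=0).fit(X_tr, y_tr)
    return np.mean((mlp.predict(X_te) - y_te) ** 2)             # MSE to minimize

# Tiny GA: population of candidate (window, neurons, learning-rate) triples
pop = [np.array([rng.integers(3, 15), rng.integers(4, 32), rng.uniform(1e-4, 1e-1)])
       for _ in range(8)]
for gen in range(8):
    pop.sort(key=fitness)
    parents = pop[:4]                                           # truncation selection
    children = [p * rng.normal(1.0, 0.15, 3) for p in parents]  # Gaussian mutation
    pop = parents + [np.clip(c, [3, 4, 1e-4], [15, 32, 1e-1]) for c in children]
print("best genes (window, neurons, lr):", pop[0].round(4))
```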

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 111
1649 A Comparative Evaluation of Finite Difference Methods for the Extended Boussinesq Equations and Application to Tsunamis Modelling

Authors: Aurore Cauquis, Philippe Heinrich, Mario Ricchiuto, Audrey Gailler

Abstract:

In this talk, we look for an accurate time scheme to model the propagation of waves. Several numerical schemes have been developed to solve the extended weakly nonlinear, weakly dispersive Boussinesq equations. The temporal schemes considered are two Lax-Wendroff schemes, second- or third-order accurate, two Runge-Kutta schemes of second and third order, and a simplified third-order-accurate Lax-Wendroff scheme; spatial derivatives are evaluated with fourth-order accuracy. The numerical model is applied to two one-dimensional benchmarks on a flat bottom. It is also applied to the simulation of the Algerian tsunami generated by an Mw = 6.0 earthquake on 18 March 2021; the tsunami was highly dispersive and propagated across the Mediterranean Sea. We study the effects of the order of the temporal discretization on the accuracy of the results and on the computation time.
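
The question posed above — how the order of the time scheme affects accuracy — can be illustrated on a toy problem: second-order Lax-Wendroff versus third-order SSP Runge-Kutta on 1D linear advection, a far simpler stand-in for the dispersive Boussinesq system:

```python
import numpy as np

nx, c, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u0 = np.sin(2 * np.pi * x)

def ddx(u):                                    # periodic central first derivative
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def d2dx2(u):                                  # periodic central second derivative
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def lax_wendroff(u):                           # 2nd order in time
    return u - c * dt * ddx(u) + 0.5 * (c * dt) ** 2 * d2dx2(u)

def rk3(u):                                    # SSP-RK3, 3rd order in time
    rhs = lambda v: -c * ddx(v)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3 + 2 / 3 * (u2 + dt * rhs(u2))

steps = int(1.0 / dt)                          # one full advection period
u_lw, u_rk = u0.copy(), u0.copy()
for _ in range(steps):
    u_lw, u_rk = lax_wendroff(u_lw), rk3(u_rk)
for name, u in [("Lax-Wendroff", u_lw), ("SSP-RK3", u_rk)]:
    print(name, "L2 error:", np.sqrt(np.mean((u - u0) ** 2)))
```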

Keywords: numerical analysis, tsunami propagation, water wave, Boussinesq equations

Procedia PDF Downloads 246
1648 Parametric Optimization of High-Performance Electric Vehicle E-Gear Drive for Radiated Noise Using 1-D System Simulation

Authors: Sanjai Sureshkumar, Sathish G. Kumar, P. V. V. Sathyanarayana

Abstract:

For an e-gear drivetrain, the transmission error and the resulting variation in mesh stiffness are among the main sources of excitation in a high-performance electric vehicle. These vibrations are transferred through the shaft to the bearings and then to the e-gear drive housing, eventually radiating noise. A parametric model is developed in 1-D system simulation, optimizing the micro- and macro-geometry along with the bearing properties and oil filtration to achieve the least transmission error and a high contact ratio. Histogram analysis is performed to condense the actual road load data into a condensed duty cycle from which the bearing forces are found. The structural vibration generated by these forces is then simulated in a nonlinear solver to obtain the normal surface velocity of the housing, and the results are carried forward to acoustic software, wherein a virtual environment of the surroundings (the actual testing scenario) with accurate microphone positions is maintained to predict the sound pressure level of the radiated noise and the directivity plot of the e-gear drive. Order analysis is carried out to find the root cause of the vibration and whine noise, and the broadband spectrum is checked to find the rattle noise source. With the available results, the design is optimized and the next simulation loop is performed to arrive at the best e-gear drive from an NVH standpoint. Structural analysis is also carried out to check the robustness of the e-gear drive.

Keywords: 1-D system simulation, contact ratio, e-Gear, mesh stiffness, micro and macro geometry, transmission error, radiated noise, NVH

Procedia PDF Downloads 152
1647 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

Alongside ground measurements of solar radiation, satellite data have routinely been utilized to estimate solar energy; however, the temporal coverage of satellite data has its limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations; the result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa across the ten years from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed considerable differences in errors throughout the year. Altering the temporal resolution of the data used for comparison also alters the performance of the dataset: monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation using the ERA-5 solar radiation data are discussed. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period, and the correlation coefficient (R²) varies from 93% to 99%. The objective of this research is to provide a reliable representation of the world's solar radiation to aid the use of solar energy in all sectors.
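
The validation statistics quoted above are straightforward to reproduce for paired reanalysis-versus-ground series; a sketch with synthetic arrays standing in for the ERA-5 and station data:

```python
import numpy as np

rng = np.random.default_rng(7)
ground = 20 + 5 * np.sin(np.linspace(0, 12 * np.pi, 3650))     # station GSR, MJ/m2
era5 = ground * 1.02 + rng.normal(0, 0.8, ground.size)         # reanalysis stand-in

mbe = np.mean(era5 - ground)                       # mean bias error
rmse = np.sqrt(np.mean((era5 - ground) ** 2))      # root mean square error
mae = np.mean(np.abs(era5 - ground))               # mean absolute error
r2 = np.corrcoef(era5, ground)[0, 1] ** 2          # squared correlation coefficient
print(f"MBE={mbe:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}")
```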

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 104
1646 Forecasting Free Cash Flow of an Industrial Enterprise Using Fuzzy Set Tools

Authors: Elena Tkachenko, Elena Rogova, Daria Koval

Abstract:

The paper examines ways of forecasting cash flows in a dynamic external environment. The so-called new reality in the economy lowers the predictability of companies' performance indicators, owing to the lack of long-term steady trends in the external conditions of development and fast changes in the markets; traditional methods based on trend analysis lead to a very high error of approximation. The macroeconomic situation of the last 10 years has been defined by the continuing consequences of one financial crisis and the emergence of another. In these conditions, forecasting instruments based on fuzzy sets show good results: fuzzy-set-based models turn out to lower the error of approximation to an acceptable level and to provide companies with reliable cash flow estimates that help them reach financial stability. In the paper, the applicability of a model of cash flow forecasting based on fuzzy logic is analyzed.
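
One simple form the fuzzy-set approach can take is forecasting with triangular fuzzy numbers, carrying a (pessimistic, most likely, optimistic) triple through the cash-flow arithmetic. The sketch below is a generic illustration of that idea, not the authors' model; all figures are hypothetical:

```python
import numpy as np

# A triangular fuzzy number represented as (low, mode, high)
def tfn_add(a, b):
    return tuple(np.add(a, b))

def tfn_scale(a, k):          # valid for k > 0
    return tuple(k * np.array(a))

# Fuzzy quarterly estimates for operating cash flow and capex (hypothetical, in $M)
ocf = (10.0, 14.0, 16.0)
capex = (-6.0, -5.0, -4.5)

# Free cash flow forecast as a fuzzy number, grown 3% per quarter for a year
fcf = tfn_add(ocf, capex)
forecast = [tfn_scale(fcf, 1.03 ** q) for q in range(4)]
for q, f in enumerate(forecast, 1):
    print(f"Q{q}: FCF in [{f[0]:.2f}, {f[2]:.2f}], most likely {f[1]:.2f}")

# Crisp point estimate by centroid defuzzification of the last quarter
low, mode, high = forecast[-1]
print("defuzzified Q4 FCF:", (low + mode + high) / 3)
```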

Keywords: cash flow, industrial enterprise, forecasting, fuzzy sets

Procedia PDF Downloads 211
1645 Improving Human Hand Localization in Indoor Environment by Using Frequency Domain Analysis

Authors: Wipassorn Vinicchayakul, Pichaya Supanakoon, Sathaporn Promwong

Abstract:

Human hand localization is revisited using radar cross section (RCS) measurements with a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements were carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz, to cover the Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad: the initial distance of 19.50 cm is identical to the height of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effects of the Rx angle relative to the hand are considered. The RCS input parameters are compared with the power loss parameters at each frequency. From the results, the performance of the RCS input parameters at the second distance, 29.50 cm, at 3 GHz is better than the others.
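
The minimum-RMS-error matching step described above amounts to a nearest-neighbour search over a fingerprint database; a sketch with made-up RCS fingerprints:

```python
import numpy as np

rng = np.random.default_rng(8)
n_keys, n_freqs = 12, 50                 # 12 keypad positions, 50 frequency bins

# Offline phase: one stored RCS-vs-frequency fingerprint per keypad position
fingerprints = rng.normal(0, 1, (n_keys, n_freqs))

# Online phase: a noisy measurement taken with the hand over key 7
measured = fingerprints[7] + rng.normal(0, 0.3, n_freqs)

# Minimum RMS error matching: pick the fingerprint closest to the measurement
rms = np.sqrt(np.mean((fingerprints - measured) ** 2, axis=1))
print("estimated key:", int(np.argmin(rms)), " RMS errors:", rms.round(2))
```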

Keywords: radar cross section, fingerprint-based localization, minimum root mean square (RMS) error matching algorithm, touchless keypad model

Procedia PDF Downloads 344
1644 Real-Time Radar Tracking Based on Nonlinear Kalman Filter

Authors: Milca F. Coelho, K. Bousson, Kawser Ahmed

Abstract:

Accurately tracking an aerospace vehicle in a time-critical situation and in a highly nonlinear environment is one of the strongest interests within the aerospace community. The tracking is achieved by accurately estimating the state of a moving target, which is composed of a set of variables that provide a complete description of the system at a given time. One of the main ingredients of good estimation performance is the use of efficient estimation algorithms. A well-known framework is Kalman filtering, designed for prediction and estimation problems. The success of the Kalman Filter (KF) in engineering applications is mostly due to the Extended Kalman Filter (EKF), which is based on local linearization; despite its popularity, the EKF has several limitations. To address these limitations, and as a possible solution to tracking problems, this paper proposes the use of the Ensemble Kalman Filter (EnKF). Although the EnKF is extensively used in the context of weather forecasting and is recognized for producing accurate and computationally effective estimates for systems of very high dimension, it is almost unknown to the tracking community. The EnKF was initially proposed as an attempt to improve the error covariance calculation, which is difficult to implement in the classic Kalman Filter. In the EnKF method, the prediction and analysis error covariances have ensemble representations; the ensemble sizes limit the number of degrees of freedom in such a way that the filter error covariance calculations become far more practical for modest ensemble sizes. In this paper, a realistic simulation of radar tracking was performed, in which the EnKF was applied and compared with the Extended Kalman Filter. The results suggest that the EnKF is a promising tool for tracking applications, offering advantages in terms of performance.
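
The ensemble representation of the error covariance is the heart of the method; the sketch below shows a minimal stochastic EnKF update for a 1D position/velocity target (a didactic stand-in, not the paper's radar model):

```python
import numpy as np

rng = np.random.default_rng(9)
dt, n_ens, n_steps = 1.0, 50, 40
F = np.array([[1, dt], [0, 1]])            # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # radar measures position only
Q, R = np.diag([0.05, 0.05]), np.array([[4.0]])

truth = np.array([0.0, 1.0])
ens = rng.normal([0, 1], [5, 1], (n_ens, 2))    # initial ensemble

for _ in range(n_steps):
    truth = F @ truth + rng.multivariate_normal([0, 0], Q)
    z = H @ truth + rng.normal(0, np.sqrt(R[0, 0]))

    # Forecast: propagate every ensemble member through the motion model
    ens = ens @ F.T + rng.multivariate_normal([0, 0], Q, n_ens)

    # Analysis: the covariance is estimated from the ensemble itself
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # Kalman gain
    z_pert = z + rng.normal(0, np.sqrt(R[0, 0]), (n_ens, 1))  # perturbed observations
    ens = ens + (z_pert - ens @ H.T) @ K.T

print("true state:", truth.round(2), " EnKF estimate:", ens.mean(axis=0).round(2))
```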

Keywords: Kalman filter, nonlinear state estimation, optimal tracking, stochastic environment

Procedia PDF Downloads 156
1643 Error Analysis in English Essays Writing of Thai Students with Different English Language Experiences

Authors: Sirirat Choophan Atthaphonphiphat

Abstract:

The objective of the study is to analyze errors in the English essay writing of Thai students (at Suratthani Rajabhat University) with different English language experience. Sixteen subjects were divided into two groups depending on their English language experience. The data were collected from English essays on the topic 'My daily life'. The findings show that 275 error tokens were found in 240 English sentences. The errors were categorized into four types, ordered by frequency counts: grammatical errors, mechanical errors, lexical errors, and structural errors. The findings support all of the researcher's hypotheses: 1) the students with little English language experience made more errors than those with extensive English language experience; 2) among the errors in the English essay writing of Suratthani Rajabhat University's students, interlingual errors outnumber intralingual ones; and 3) systemic and structural differences between English (the target language) and Thai (the mother tongue) lead to the errors in the English essay writing of Suratthani Rajabhat University's students.

Keywords: applied linguistics, error analysis, interference, language transfer

Procedia PDF Downloads 623
1642 Dynamic Analysis of Composite Doubly Curved Panels with Variable Thickness

Authors: I. Algul, G. Akgun, H. Kurtaran

Abstract:

A dynamic analysis of composite doubly curved panels with variable thickness, subjected to different pulse types, using the Generalized Differential Quadrature (GDQ) method is presented in this study. Panels with variable thickness are used in the aerospace and marine industries; giving panels variable thickness allows the designer to obtain optimum structural efficiency. For this reason, estimating the response of variable-thickness panels is very important for designing more reliable structures under dynamic loads. The dynamic equations for composite panels with variable thickness are obtained using the virtual work principle. Partial derivatives in the equation of motion are expressed with GDQ, and the Newmark average acceleration scheme is used for temporal discretization. Several examples are used to highlight the effectiveness of the proposed method, and the results are compared with the finite element method. The effects of taper ratios, boundary conditions, and loading type on the response of the composite panel are investigated.
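
The GDQ idea — writing a derivative at a grid point as a weighted sum of function values at all grid points — has closed-form first-derivative weights (Shu's formulation). A sketch computing the weight matrix and checking it on a test function:

```python
import numpy as np

def gdq_weights(x):
    """First-order GDQ weighting coefficients a[i, j] for arbitrary points x,
    using the Lagrange-polynomial closed form (Shu's formulation)."""
    n = len(x)
    # M'(x_i) = prod_{k != i} (x_i - x_k)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    Mp = np.prod(diff, axis=1)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = Mp[i] / ((x[i] - x[j]) * Mp[j])
    np.fill_diagonal(a, -a.sum(axis=1))   # each row of the full matrix sums to zero
    return a

# Chebyshev-Gauss-Lobatto points, the usual choice for well-conditioned GDQ
n = 15
x = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))   # mapped to [0, 1]
A = gdq_weights(x)
print("max error on d/dx sin(2 pi x):",
      np.abs(A @ np.sin(2 * np.pi * x) - 2 * np.pi * np.cos(2 * np.pi * x)).max())
```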

Keywords: differential quadrature method, doubly curved panels, laminated composite materials, small displacement

Procedia PDF Downloads 362
1641 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated by marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as changes in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator, and low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable Logic Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: an RPM-flow mode and a pressure mode. The RPM-flow control mode is an open-loop system: it influences PF by providing to the centrifugal pump, and maintaining, the desired speed entered through the HMI, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer; a PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range), tested in random order. The performance of the system's PID algorithm in pressure control was assessed during 10 EVHP experiments using porcine hearts: precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg and median stabilizing times of 15 and 19.5 seconds for MAP changes of 15 and 20 mmHg, respectively. The novel PLC-based control system was thus three times faster, with 60% less error, than the EO for RPM-flow control, and in pressure control mode it demonstrated high precision and a fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability, and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
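
The pressure mode described above is, at its core, a PID loop around MAP; a minimal discrete-time sketch with a first-order plant standing in for the coronary circulation (gains and plant constants are illustrative, not the system's tuned values):

```python
kp, ki, kd, dt = 40.0, 8.0, 2.0, 0.1       # illustrative PID gains, 10 Hz loop
target_map = 65.0                           # mmHg setpoint entered through the HMI
base_rpm = 1000.0

map_now = 40.0
integral, prev_err = 0.0, target_map - map_now

for _ in range(600):                        # 60 s of simulated perfusion
    err = target_map - map_now
    integral += err * dt
    derivative = (err - prev_err) / dt
    pump_rpm = base_rpm + kp * err + ki * integral + kd * derivative
    prev_err = err

    # Stand-in plant: MAP relaxes toward a value proportional to pump speed
    map_now += dt * (0.04 * pump_rpm - map_now) / 2.0

print(f"final MAP = {map_now:.2f} mmHg (target {target_map} mmHg)")
```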

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 178
1640 Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy

Authors: Guoliang Fan, Aiping Li, Xuemei Liu, Liyun Xu

Abstract:

The performance of tightening equipment declines over the course of the working process in a manufacturing system; the main manifestations are increases in the randomness and in the degree of discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are divided, to calibrate the degree of discretization. Then the complexity measurement model based on Kolmogorov entropy is built; the model describes the performance degradation tendency of tightening equipment quantitatively. Finally, a case study is used to verify the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of tightening equipment, and it can provide a theoretical basis for preventive maintenance and equipment life prediction.
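
Kolmogorov entropy of a measured performance signal is commonly approximated by the growth rate of the block entropy of its discretized states; the sketch below uses that standard approximation, which may differ from the authors' exact estimator:

```python
import numpy as np
from collections import Counter

def block_entropy(symbols, m):
    """Shannon entropy (nats) of length-m blocks of a symbol sequence."""
    blocks = [tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def kolmogorov_entropy_estimate(signal, n_states=8, m=3):
    # Step 1: divide the performance index into discrete states
    edges = np.quantile(signal, np.linspace(0, 1, n_states + 1)[1:-1])
    symbols = np.digitize(signal, edges)
    # Step 2: K ~ H(m+1) - H(m), the growth rate of the block entropy
    return block_entropy(symbols, m + 1) - block_entropy(symbols, m)

rng = np.random.default_rng(10)
torque_new = 100 + rng.normal(0, 0.5, 2000)        # tight, regular process
torque_worn = (100 + rng.normal(0, 0.5, 2000).cumsum() * 0.02
               + rng.normal(0, 2, 2000))           # drifting, noisier process
print("new equipment :", kolmogorov_entropy_estimate(torque_new).round(3))
print("worn equipment:", kolmogorov_entropy_estimate(torque_worn).round(3))
```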

Keywords: complexity measurement, Kolmogorov entropy, manufacturing system, performance evaluation, tightening equipment

Procedia PDF Downloads 264
1639 An Investigation of Crop Diversity’s Impact on Income Risk of Selected Crops

Authors: Saeed Yazdani, Sima Mohamadi Amidabadi, Amir Mohamadi Nejad, Farahnaz Nekoofar

Abstract:

As a result of uncertainty and doubts about the quantity of agricultural output, greater significance has been attached to risk management in the agricultural sector. Farmers normally seek to minimize risk, and crop diversity has always been a means of reducing it. The study at hand explores the long-term impact of crop diversity on income risk reduction over the timeframe 1998 to 2018. Initially, the Herfindahl index was used to estimate crop diversity in different periods; next, the Hodrick-Prescott filter was applied to estimate income risk in both nominal and real terms; and finally, using a Vector Error Correction Model (VECM), the long-term impact of crop diversity on the two measures of income risk was estimated. Given the long-term model's results, it is evident that in the long run, crop diversity can reduce income fluctuations in both nominal and real terms. Moreover, the results showed that if a fluctuation shock affects agricultural income in the short run, balancing out the shock requires 4 cycles in nominal terms and 3 in real terms; in other words, 25% and 33% of the shock's impact, respectively, can be removed in each cycle. Thus, as the error correction coefficients showed, policies need to be put in place to prevent income shocks; in case of a shock, it needs to be balanced out over a four-year period when inflation is taken into account, and over a three-year period irrespective of inflation, and remedial policies such as insurance services should be developed.
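
The two preprocessing steps named above are short in code: a Herfindahl index over crop-area shares and a Hodrick-Prescott decomposition of income, with the cycle's variability serving as the risk measure (statsmodels provides the filter; the data here are synthetic):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Herfindahl index of crop concentration: sum of squared area shares.
# Lower H means more diversity (H = 1 when a single crop is planted).
areas = np.array([40.0, 25.0, 20.0, 15.0])      # hypothetical hectares per crop
shares = areas / areas.sum()
H = np.sum(shares ** 2)
print("Herfindahl index:", H.round(3))

# Hodrick-Prescott filter: split farm income into trend and cycle; the
# cycle's variability is the income-risk measure (lambda = 100 for annual data)
rng = np.random.default_rng(11)
income = 100 * 1.03 ** np.arange(21) + rng.normal(0, 8, 21)   # 1998-2018, synthetic
cycle, trend = hpfilter(income, lamb=100)
print("income risk (std of HP cycle):", cycle.std().round(2))
```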

Keywords: risk, long-term model, Herfindahl index, time series model, vector error correction model

Procedia PDF Downloads 30
1638 Far-Field Acoustic Prediction of a Supersonic Expanding Jet Using Large Eddy Simulation

Authors: Jesus Ruano, Asensi Oliva

Abstract:

The hydrodynamic field generated by a jet expansion is computed via three-dimensional compressible Large Eddy Simulation (LES). The Finite Volume Method (FVM) is the discretization used for this simulation, together with hybrid schemes based on Kinetic Energy Preserving (KEP) schemes and upwinded Godunov-based schemes with instability detectors. Velocity and pressure fields are stored on several surfaces near the jet, far enough out to enclose all the fluctuations, in order to use them as input for the acoustic solver. The acoustic field is obtained at several locations in the far-field region by means of a hybrid method based on the Ffowcs Williams and Hawkings (FWH) equation. This equation is formulated in the spectral domain, via a Fourier transform of the acoustic sources, which are modeled from the results of the initial simulation. The results obtained allow the study of the broadband noise generated as well as the sound directivities.

Keywords: far-field noise, Ffowcs-Williams and Hawkings, finite volume method, large eddy simulation, jet noise

Procedia PDF Downloads 302
1637 A Modified Decoupled Semi-Analytical Approach Based On SBFEM for Solving 2D Elastodynamic Problems

Authors: M. Fakharian, M. I. Khodakarami

Abstract:

In this paper, an improvement of the semi-analytical method based on scaled boundaries for solving 2D elastodynamic problems is provided. In this approach, only the boundaries of the problem domain are discretized, by specific sub-parametric elements. Using a class of higher-order Lagrange polynomials as mapping functions, together with special shape functions, Gauss-Lobatto-Legendre numerical integration, and the integral form of the weighted residual method, the coefficient matrices in the equations of the elastodynamic problems become diagonal. The differences between the study conducted here and prior research lie in the procedure for producing the geometry of the interpolation functions and in the integration selected. The validity and accuracy of the present method are fully demonstrated through two benchmark problems, which are successfully modeled using a small number of DOFs. The numerical results agree very well with the analytical solutions and with the results from other numerical methods.

Keywords: 2D elastodynamic problems, Lagrange polynomials, Gauss-Lobatto-Legendre quadrature, decoupled SBFEM

Procedia PDF Downloads 446
1636 Energy Consumption Modeling for Strawberry Greenhouse Crop by Adaptive Neuro-Fuzzy Inference System Technique: A Case Study in Iran

Authors: Azar Khodabakhshi, Elham Bolandnazar

Abstract:

Agriculture, as the most important food-producing sector, is not only an energy consumer but is also known as an energy supplier, and energy use is considered a helpful parameter for analyzing and evaluating agricultural sustainability. In this study, the pattern of energy consumption of the strawberry greenhouses of Jiroft, in the Kerman province of Iran, was surveyed. The total input energy required for strawberry production was calculated as 113314.71 MJ/ha. Electricity, contributing 38.34% of the total energy, was the largest energy input in strawberry production. Neuro-fuzzy networks were used to model strawberry yield. Results showed that the best model for predicting the strawberry yield had a correlation coefficient, root mean square error (RMSE), and mean absolute percentage error (MAPE) of 0.9849, 0.0154 kg/ha, and 0.11%, respectively. Given these results, it can be said that the neuro-fuzzy method can predict and model the strawberry crop yield well.

Keywords: crop yield, energy, neuro-fuzzy method, strawberry

Procedia PDF Downloads 385
1635 Numerical Simulation of Kangimi Reservoir Sedimentation, Kaduna State, Nigeria

Authors: Abdurrasheed Sa'id, Abubakar Isma'il, Waheed Alayande

Abstract:

This study focused on carrying out numerical simulations of the sedimentation of the Kangimi reservoir. Different numerical sediment transport models were reviewed, and GSTARS3 was selected. The model was developed using the 1977 data and calibrated by simulating the 2012 profile and sediment deposition, which were compared with the 2012 hydrographic survey results of the NWRI. The model was validated by simulating the 2016 deposition and comparing the results with the NWRI estimates. The performance of the proposed model was also tested using statistical parameters such as the MSE (Mean Square Error), MAPE (Mean Absolute Percentage Error), and R² (coefficient of determination), with values of 1.32 m, 0.17%, and 0.914, respectively, which show strong agreement. After the calibration, validation, and performance testing, the model was used to simulate the 2032 and 2062 profiles and depositions. The results showed that by 2032 the reservoir will be silted with 25.34 MCM, or 43.3% of the design capacity, and with 60.7% of the capacity by the year 2062. A number of sedimentation mitigation measures are recommended.

Keywords: NWRI (National Water Resources Institute), sedimentation, GSTARS3, model

Procedia PDF Downloads 223