Search results for: discretization error
1574 Investigation of User Position Accuracy for Stand-Alone and Hybrid Modes of the Indian Navigation with Indian Constellation Satellite System
Authors: Naveen Kumar Perumalla, Devadas Kuna, Mohammed Akhter Ali
Abstract:
Satellite navigation systems such as the United States Global Positioning System (GPS) play a significant role in determining the user position. Similar to GPS, the Indian Regional Navigation Satellite System (IRNSS) is a satellite navigation system indigenously developed by the Indian Space Research Organisation (ISRO) to meet the country's navigation applications. This system is also known as Navigation with Indian Constellation (NavIC). The NavIC system's main objective is to offer Positioning, Navigation and Timing (PNT) services to users in its two service areas, i.e., the Indian landmass and the Indian Ocean. Six NavIC satellites are already deployed in space, and their receivers are in the performance evaluation stage. Four NavIC dual-frequency receivers are installed in the 'Advanced GNSS Research Laboratory' (AGRL) in the Department of Electronics and Communication Engineering, University College of Engineering, Osmania University, India. The NavIC receivers can be operated in two positioning modes: stand-alone IRNSS and Hybrid (IRNSS+GPS). In this paper, various parameters such as Dilution of Precision (DoP), three-dimensional (3D) Root Mean Square (RMS) position error and horizontal position error are analyzed with respect to satellite visibility, using real-time IRNSS data obtained by operating the receiver in both positioning modes. Two typical days (6th July 2017 and 7th July 2017) are analyzed for the Hyderabad station (Latitude 17°24'28.07"N, Longitude 78°31'4.26"E). It is found that, with respect to the considered parameters, the Hybrid mode of the NavIC receiver gives better results than the stand-alone positioning mode. This work finds application in the development of NavIC receivers for civilian navigation applications.
Keywords: DoP, GPS, IRNSS, GNSS, position error, satellite visibility
Procedia PDF Downloads 218
1573 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities
Authors: Retius Chifurira
Abstract:
The logistic regression model is the most widely used regression model for predicting meteorological drought probabilities. When the dependent variable is extreme, the logistic model fails to adequately capture drought probabilities. In order to predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. The method of maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities
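As an illustration of the technique this abstract describes (not the authors' implementation), the following minimal Python sketch fits a binary GLM whose inverse link is the GEV cumulative distribution function, estimating the regression coefficients and the shape parameter ξ by maximum likelihood; all data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def gev_cdf(z, xi):
    """GEV CDF used as the inverse link; valid where 1 + xi*z > 0."""
    t = np.clip(1.0 + xi * z, 1e-10, None)
    return np.exp(-t ** (-1.0 / xi))

def neg_log_lik(params, X, y):
    beta, xi = params[:-1], params[-1]
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic illustration: rainfall anomaly predicting drought (1) / no drought (0)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < gev_cdf(X @ np.array([-0.5, -1.2]), -0.3)).astype(float)

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0, -0.1]),
               args=(X, y), method="Nelder-Mead")
print("beta, xi =", res.x)
```

As ξ → 0, this link approaches the complementary log-log (Gumbel) case, which is a useful sanity check on the fit.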
Procedia PDF Downloads 202
1572 Study of Effect of Gear Tooth Accuracy on Transmission Mount Vibration
Authors: Kalyan Deepak Kolla, Ketan Paua, Rajkumar Bhagate
Abstract:
Transmission dynamics occupy a major role in customer perception of the product, in both the sense of touch and the quality of sound. The quantity and quality of the perceived sound are mainly governed by the whine noise of the engaged gears. Whine noise is tonal in nature, and tonal noises cause fatigue and irritation to customers, which in turn affects the perceived quality of the product. Transmission error is the usual suspect for whine noise; it can be caused by misalignments, tolerances and manufacturing variabilities. In-cabin noise is also highly sensitive to the gear design. As the details of gear tooth design and manufacturing are in microns, anything out of the tolerance zone, either in design or in manufacturing, will cause whine noise. It will also cause high variation in stress and deformation due to the change in load, leading to fatigue failure of the gears. Hence, gear design and development take priority in the transmission development process. This paper studies such variability by considering five pairs of helical spur gears and their effect on the transmission error, contact pattern and vibration level of the transmission.
Keywords: gears, whine noise, manufacturing variability, mount vibration variability
Procedia PDF Downloads 152
1571 English 2A Students’ Oral Presentation Errors: Basis for English Policy Revision
Authors: Marylene N. Tizon
Abstract:
English instructors pay attention to the errors committed by students, as errors show how well students have mastered their oral skills and what difficulties they face in learning the English language. This descriptive quantitative study aimed at identifying and categorizing the oral presentation errors of 118 purposively chosen English 2A students enrolled during the first semester of school year 2013-2014. The analysis was based on the errors committed by the students in their presentations. Errors were first classified into linguistic grammatical categories and then categorized further under the Surface Structure Errors Taxonomy, using frequency and percentage distributions. From the analysis of the data, the researcher found that errors in verb tenses (71, or 16%) and in addition (167, or 37%) were most frequently uttered by the students, whereas question and negation mistakes (12, or 3%) and misordering errors (28, or 7%) were least frequent. Thus, the respondents most frequently produced errors in tenses and addition, and least frequently produced errors in question, negation, and misordering.
Keywords: grammatical error, oral presentation error, surface structure errors taxonomy, descriptive quantitative design, Philippines, Asia
Procedia PDF Downloads 396
1570 Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study
Authors: M. C. Paliwal, A. K. Jain, S. K. Katiyar
Abstract:
Accuracy assessment is very important for the classification of satellite imagery. In order to determine the accuracy of a classified image, assumed-true data are usually derived from ground truth data collected using the Global Positioning System. The data derived from the satellite imagery and the ground truth data are then compared to find out the accuracy, and error matrices are prepared. Overall and individual accuracies are calculated using different methods. This study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C LISS-IV data were used for the classification. The satellite image was classified into fourteen classes, namely water bodies, agricultural fields, forest land, urban settlement, barren land, unclassified area, etc. Classification of the satellite imagery and calculation of accuracy were done using ERDAS Imagine software to find out the best method. This study is based on data collected for the Bhopal city boundaries of Madhya Pradesh State, India.
Keywords: resolution, accuracy assessment, land use mapping, satellite imagery, ground truth data, error matrices
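The error-matrix bookkeeping the abstract refers to can be reproduced in a few lines of Python; the class labels and counts below are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical class labels: 0=water, 1=agriculture, 2=forest, 3=urban, 4=barren
truth     = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 1, 0])  # GPS ground truth
predicted = np.array([0, 1, 2, 2, 2, 1, 3, 3, 4, 3, 1, 0])  # classified image

cm = confusion_matrix(truth, predicted)
overall  = np.trace(cm) / cm.sum()        # overall accuracy
producer = np.diag(cm) / cm.sum(axis=1)   # per-class, complement of omission error
user     = np.diag(cm) / cm.sum(axis=0)   # per-class, complement of commission error
print(cm, overall, producer, user, sep="\n")
```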
Procedia PDF Downloads 512
1569 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using a HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard, and the images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by the SVM. By adequately labeling and classifying the training data, the model will predict, categorize, and assess students in placing components correctly. The data acquired through the HoloLens thus include images of students assembling electronic components; the system constantly checks whether students position components appropriately on the breadboard and connect them so the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
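A minimal sketch of the hybrid CNN+SVM idea (a small convolutional feature extractor feeding an SVM classifier) is shown below; the network size, image shape and labels are hypothetical stand-ins, not the authors' trained model.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureCNN(nn.Module):
    """Small convolutional feature extractor; the SVM does the classification."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
    def forward(self, x):
        return self.conv(x).flatten(1)  # (N, 32*4*4) feature vectors

# Hypothetical data: 64x64 RGB crops of breadboard components, 4 component classes
images = torch.randn(40, 3, 64, 64)
labels = torch.randint(0, 4, (40,)).numpy()

with torch.no_grad():
    feats = FeatureCNN()(images).numpy()   # convolution layers extract features

svm = SVC(kernel="rbf").fit(feats, labels) # SVM classifies the extracted features
print("train accuracy:", svm.score(feats, labels))
```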
Procedia PDF Downloads 140
1568 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network
Authors: P. Singh, R. M. Banik
Abstract:
An artificial neural network (ANN) was employed to optimize the assay parameters, viz., time, temperature, pH of the reaction mixture, enzyme volume and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697) and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop a predictive model, and its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters, viz., pH of the reaction mixture (7.5), reaction time (20 minutes), incubation temperature (35 °C), substrate concentration (40 mM), and enzyme volume (0.5 ml). The prediction was verified by running an experiment at the simulated optimum assay conditions, and the activity was obtained as 634.00 U/l. The application of the ANN model for optimization of assay conditions improved the activity of L-glutaminase by 1.499-fold.
Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network
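The 5-3-1 topology can be reproduced in a short sketch. Note that scikit-learn's MLPRegressor does not ship a Levenberg-Marquardt trainer, so L-BFGS is used here as a stand-in, and the assay data below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Hypothetical assay data: [pH, time_min, temp_C, substrate_mM, enzyme_ml] -> activity (U/l)
rng = np.random.default_rng(1)
X = rng.uniform([6.0, 5, 25, 10, 0.1], [9.0, 40, 45, 60, 1.0], size=(60, 5))
y = 600 - 20 * (X[:, 0] - 7.5) ** 2 - 0.5 * (X[:, 2] - 35) ** 2 + rng.normal(0, 5, 60)

scaler = MinMaxScaler()
# 5 inputs -> one hidden layer of 3 neurons -> 1 output, i.e. the 5-3-1 topology
model = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(scaler.fit_transform(X), y)

optimum = scaler.transform([[7.5, 20, 35, 40, 0.5]])  # reported optimum conditions
print("predicted activity (U/l):", model.predict(optimum)[0])
```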
Procedia PDF Downloads 433
1567 Design of Parity-Preserving Reversible Logic Signed Array Multipliers
Authors: Mojtaba Valinataj
Abstract:
Reversible logic, as a new favorable design domain, can be used in various fields, especially in building quantum computers, because of its speed and negligible power consumption. However, its susceptibility to a variety of environmental effects may yield incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, some novel reversible logic array multipliers are proposed with error detection capability by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% improvements in the number of constant inputs, number of required gates, and quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized to n×n multipliers with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.
Keywords: array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic
Procedia PDF Downloads 261
1566 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements
Authors: Sabiu Bala Muhammad, Rosli Saad
Abstract:
Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity, to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another dataset to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to plots of the observed data at the same colour scale and blanking for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), standard error (SE) and weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity
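A log-domain MLR of the kind described can be sketched as follows; the six data rows are hypothetical illustrations, not the study's field measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical rows: apparent resistivity (ohm·m), X (m), Z (m), true resistivity (ohm·m)
data = np.array([
    [120., 10., 2., 150.], [95., 20., 4., 110.], [210., 30., 6., 260.],
    [60.,  40., 8., 55.],  [180., 50., 10., 230.], [75., 60., 12., 70.],
])
rho_a, X, Z, rho_t = data.T

# Log-transform both resistivity variables to linearize the relationship
features = np.column_stack([np.log10(rho_a), X, Z])
target = np.log10(rho_t)

mlr = LinearRegression().fit(features, target)
pred = 10 ** mlr.predict(features)          # back-transform to ohm·m
print("R^2 (log domain):", r2_score(target, mlr.predict(features)))
print("estimated rho_t:", pred.round(1))
```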
Procedia PDF Downloads 281
1565 Nurse-Reported Perceptions of Medication Safety in Private Hospitals in Gauteng Province.
Authors: Madre Paarlber, Alwiena Blignaut
Abstract:
Background: Medication administration errors remain a global patient safety problem targeted by the WHO (World Health Organization), yet research on this matter is sparse within the South African context. Objective: The aim was to explore and describe nurses' (medication administrators') perceptions regarding medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province of South Africa, and to determine any relationships between perceived variables concerned with medication safety (safety culture, incidences, causes, reporting of incidences, and reasons for non-reporting). Method: A quantitative research design was used, in which self-administered online surveys were sent to 768 nurses (medication administrators) (n=217). The response rate was 28.26%. The survey instrument was synthesised from the Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture, the Registered Nurse Forecasting (RN4CAST) survey, a list prepared from a systematic review aimed at generating a comprehensive list of medication administration error causes, and the Medication Administration Error Reporting Survey by Wakefield. Exploratory and confirmatory factor analyses were used to determine the validity and reliability of the survey. Descriptive and inferential statistics were used to analyse the quantitative data, and relationships and correlations were identified between items, subscales and biographic data using Spearman's rank correlations, t-tests and ANOVAs (analysis of variance). Results: Units' teamwork was deemed satisfactory, while punitive responses to errors were accentuated. Working in "crisis mode", concerns regarding the recording of mistakes, and long working hours were disclosed as impacting patient safety. Overall medication safety was graded mostly positively. Work overload, high patient-nurse ratios, and inadequate staffing were implicated as error-inducing. Medication administration errors were reported regularly. Fear of, and administrative responses to, errors led to non-reporting, and reasons for non-reporting were related to the absence of a non-punitive safety culture. Conclusions: Improving medication administration safety is contingent on fostering a non-punitive safety culture within units. Anonymous medication error reporting systems and auditing of nurses' workloads are recommended in the quest for improved medication safety within Gauteng Province private hospitals.
Keywords: incidence, medication administration errors, medication safety, reporting, safety culture
Procedia PDF Downloads 57
1564 Polynomial Chaos Expansion Combined with Exponential Spline for Singularly Perturbed Boundary Value Problems with Random Parameter
Authors: W. K. Zahra, M. A. El-Beltagy, R. R. Elkhadrawy
Abstract:
Many practical problems in science and technology have developed over the past decades, for instance, mathematical boundary layer theory or the approximation of solutions of problems described by differential equations. When such problems involve large or small parameters, they become increasingly complex and therefore require the use of asymptotic methods. In this work, we consider singularly perturbed boundary value problems which contain very small parameters. Moreover, we consider these perturbation parameters as random variables. We propose a numerical method to solve this kind of problem. The proposed method is based on an exponential spline, Shishkin mesh discretization, and polynomial chaos expansion. The polynomial chaos expansion is used to handle the randomness in the perturbation parameter. Furthermore, Monte Carlo simulations (MCS) are used to validate the solution and the accuracy of the proposed method. Numerical results are provided to show the applicability and efficiency of the proposed method, which maintains a very remarkable high accuracy and exhibits ε-uniform convergence of almost second order.
Keywords: singular perturbation problem, polynomial chaos expansion, Shishkin mesh, two small parameters, exponential spline
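A piecewise-uniform Shishkin mesh with a Monte Carlo loop over a random perturbation parameter can be sketched as below; the transition-point formula τ = min(1/4, σ·ε·ln N) and the parameter ranges are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0,1] with layers at both endpoints (a sketch)."""
    tau = min(0.25, sigma * eps * np.log(N))        # transition parameter
    left  = np.linspace(0.0, tau, N // 4 + 1)       # fine mesh in the left layer
    mid   = np.linspace(tau, 1 - tau, N // 2 + 1)   # coarse mesh away from layers
    right = np.linspace(1 - tau, 1.0, N // 4 + 1)   # fine mesh in the right layer
    return np.unique(np.concatenate([left, mid, right]))

# Monte Carlo over a random perturbation parameter, as used for validation
rng = np.random.default_rng(0)
for eps in rng.uniform(1e-4, 1e-2, 3):
    x = shishkin_mesh(64, eps)
    print(f"eps={eps:.1e}: {len(x)} nodes, smallest spacing {np.min(np.diff(x)):.2e}")
```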
Procedia PDF Downloads 163
1563 Numerical Investigation of Geotextile Application in Clay Reinforcement in ABAQUS Software
Authors: Seyed Abolhasan Naeini, Eisa Aliagahei
Abstract:
Today, the use of geosynthetic materials in geotechnical activities is increasing significantly. One of the main uses of these materials is to increase the compressive strength of clay reinforced by geotextile layers. In the present study, the effect of clay reinforcement by geotextile layers on the compressive strength of clay has been investigated by modeling in ABAQUS 6.11.3 software. For this purpose, the modified Drucker-Prager cap model has been chosen to simulate the stress-strain behavior of the soil layers, and a linear elastic model has been used for the geotextile layer. Unreinforced samples and samples reinforced with one, two and three geotextile layers are modeled in the software. In order to validate the results, an article in the same field was used, and the numerical modeling results were calibrated against the laboratory results. Based on the obtained results, the software has a suitable capability for such modeling, and the numerical results overlap with the laboratory results to a very acceptable extent. As the number of geotextile layers increases, the error between the laboratory results and the software model increases; the largest error, 7.3%, is for the sample reinforced with three layers of geotextile.
Keywords: Abaqus, cap model, clay, geotextile layer, reinforced soil
Procedia PDF Downloads 91
1562 A Longitudinal Case Study of Greek as a Second Language
Authors: M. Vassou, A. Karasimos
Abstract:
A primary concern in the field of Second Language Acquisition (SLA) research is to determine the innate mechanisms of second language learning and acquisition through the systematic study of a learner's interlanguage. Errors emerge while a learner attempts to communicate in the target language and can be seen either as the observable linguistic product of latent cognitive and language processes of mental representations or as an indispensable learning mechanism. Therefore, the study of a learner's erroneous forms may depict the various strategies and mechanisms that take place during the language acquisition process, resulting in deviations from the target-language norms and difficulties in communication. Mapping the erroneous utterances of a late adult learner in the process of acquiring Greek as a second language constitutes one of the main aims of this study. For our research purposes, we created an error-tagged learner corpus composed of the participant's written texts produced throughout a period of 4 years of instructed language acquisition. Error analysis and interlanguage theory constitute the methodological and theoretical framework, respectively. The research questions pertain to the learner's most frequent errors per linguistic category and per year, as well as his choices concerning the Greek article system. According to the quantitative analysis of the data, the most frequent errors are observed in the categories of the stress system and syntax, whereas a significant fluctuation and/or gradual reduction throughout the 4 years of instructed acquisition indicates the emergence of developmental stages. The findings with regard to article usage bespeak fossilization of erroneous structures in certain contexts. In general, our results point towards the existence and further development of an established learner (inter-)language system governed not only by mother-tongue and target-language influences but also by the learner's own assumptions and set of rules resulting from a complex cognitive process. It is expected that this study will contribute not only to knowledge in the field of Greek as a second language and SLA generally but will also provide insight into the cognitive mechanisms and strategies developed by multilingual learners in late adulthood.
Keywords: Greek as a second language, error analysis, interlanguage, late adult learner
Procedia PDF Downloads 133
1561 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas, such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number plays the role of an amplifier of uncertainties on the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, when using interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, semi-definite positive matrices and then generalized to any complex rectangular matrix. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, ...), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, ...) of the solution of a linear system prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
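For context, the classical Kaczmarz iteration referenced in the abstract can be written in a few lines; the 5×5 Hilbert test case (one of the pathological cases mentioned) shows the slow convergence on ill-conditioned systems that motivates better-conditioned generalized inverses. This is a textbook sketch, not the authors' proposed algorithm.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz: project the iterate onto one hyperplane a_i^T x = b_i at a time."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Ill-conditioned test case: 5x5 Hilbert matrix with a known solution
n = 5
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

x = kaczmarz(A, b, sweeps=5000)
print("condition number: %.2e" % np.linalg.cond(A))
print("relative error:   %.2e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```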
Procedia PDF Downloads 139
1560 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow
Authors: Itissam Abuiziah
Abstract:
In several hydraulic systems, it is necessary to calculate the head losses, which depend on the flow resistance friction factor in the Darcy equation. Computing the friction factor is based on the implicit Colebrook-White equation, which is considered the standard for friction calculation, but it incurs a high computational cost; therefore, several explicit approximation methods are used for solving the implicit equation to overcome this issue. The relative error is then used to determine the most accurate among the approximation methods used. Steel, cast iron and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results obtained show that the method suitable for some cases may not be accurate for others. For example, for steel pipes, Zigrang and Sylvester's method proved the most precise at low velocities of 0.6 m/s to 1.3 m/s. Comparatively, the Haaland method showed a lower relative error as velocity gradually increased. Accordingly, the simulation results of this study may be employed by hydraulic engineers, who can take advantage of them to decide which method is most applicable to their practical pipe system.
Keywords: Colebrook-White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers
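The comparison can be reproduced by solving the implicit Colebrook-White equation with fixed-point iteration and evaluating the explicit Haaland approximation against it; the pipe roughness and kinematic viscosity values below are illustrative assumptions.

```python
import math

def colebrook(Re, rr, tol=1e-12):
    """Implicit Colebrook-White, solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 6.0
    while True:
        x_new = -2.0 * math.log10(rr / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new

def haaland(Re, rr):
    """Explicit Haaland approximation to the friction factor."""
    return (-1.8 * math.log10((rr / 3.7) ** 1.11 + 6.9 / Re)) ** -2

# Example: water at 1.3 m/s in a 0.5 m cast-iron pipe (eps ~ 0.26 mm, nu ~ 1e-6 m^2/s)
Re = 1.3 * 0.5 / 1e-6
rr = 0.26e-3 / 0.5                 # relative roughness eps/D
f_cw, f_h = colebrook(Re, rr), haaland(Re, rr)
print("f Colebrook = %.6f, f Haaland = %.6f, relative error = %.3f %%"
      % (f_cw, f_h, 100 * abs(f_h - f_cw) / f_cw))
```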
Procedia PDF Downloads 190
1559 Pattern of Anisometropia, Management and Outcome of Anisometropic Amblyopia
Authors: Husain Rajib, T. H. Sheikh, D. G. Jewel
Abstract:
Background: Amblyopia is a frequent cause of monocular blindness in children. It is a unilateral or bilateral reduction of best-corrected visual acuity associated with decrements in visual processing, accommodation, motility, spatial perception or spatial projection. Anisometropia is an important risk factor for amblyopia, which develops when unequal refractive error causes the image to be blurred during the critical developmental period, with central inhibition of the visual signal originating from the affected eye; it is associated with significant visual problems, including aniseikonia, strabismus, and reduced stereopsis. Methods: This is a prospective hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of the Chittagong Eye Infirmary & Training Complex. Fifty anisometropic amblyopia subjects were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 3 and 13 years of age, without previous amblyopia treatment, and whose parents agreed to participate in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best correction for a month. When the VA in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was applied daily for 6-8 hours (full time), together with vision therapy, and was carried out for 3 months. Results: In this study, about 8% of subjects had anisometropia from myopia, 18% from hyperopia, and 74% from astigmatism. The initial mean visual acuity was 0.74 ± 0.39 logMAR, and after amblyopia therapy with active vision therapy the mean visual acuity was 0.34 ± 0.26 logMAR. About 94% of subjects improved by at least two lines. The depth of amblyopia was associated with the type of anisometropic refractive error and the magnitude of anisometropia (p<0.005). The study found 10% mild, 64% moderate and 26% severe amblyopia. Binocular function also decreased with the magnitude of anisometropia. Conclusion: Anisometropic amblyopia is an important condition in the pediatric age group because it can lead to visual impairment. Occlusion therapy, with at least one instructed hour of active visual activity practiced outside school hours, was effective in anisometropic amblyopes diagnosed at the age of 8 years and older, and the patients complied well with the treatment.
Keywords: refractive error, anisometropia, amblyopia, strabismic amblyopia
Procedia PDF Downloads 276
1558 Validation Study of Radial Aircraft Engine Model
Authors: Lukasz Grabowski, Tytus Tulwin, Michal Geca, P. Karpinski
Abstract:
This paper presents a radial aircraft engine model created in AVL Boost software. This is a one-dimensional physical model of the engine, which enables investigation of the impact of ignition system design on engine performance (power, torque, fuel consumption). In addition, the model allows research under variable environmental conditions to reflect varied flight conditions (altitude, humidity, cruising speed). Before the simulation research, the model parameters were identified and the model was validated. In order to verify the ability of the gasoline radial aircraft engine model to reproduce take-off power, a validation study was carried out. The first stage of the identification was completed with reference to the technical documentation provided by the engine manufacturer and experiments on a test stand with the real engine. The second stage involved a comparison of simulation results with the results of engine stand tests performed on a WSK 'PZL-Kalisz'. The engine was loaded by a propeller in a special test bench. Identifying the model parameters involved comparing the test results to the simulation in terms of pressure behind the throttles, pressure in the inlet pipe, the time course of pressure in the first inlet pipe, power, and specific fuel consumption. Accordingly, the required coefficients and the error of the simulation calculations relative to the real-object experiments were determined. The obtained pressure time course and values are consistent with the experimental results. Additionally, the engine power and specific fuel consumption agree closely with the bench tests. The mapping error does not exceed 1.5%, which positively verifies the combustion model and allows engine performance to be predicted if the combustion process is modified. Subsequent tests verified the model completely. The maximum mapping error for the pressure behind the throttles and the inlet pipe pressure is 4%, which proves the model of the inlet duct in the engine with the charging compressor to be correct.
Keywords: 1D-model, aircraft engine, performance, validation
Procedia PDF Downloads 338
1557 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity
Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi
Abstract:
The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental-solution-based methods for solving a variety of problems. Both methods are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for the boundary element analysis of the examples. The discretization in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; however, in the MFS, obtaining appropriate locations of the collocation and source points needs more attention to obtain reliable solutions. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.
Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain
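A minimal MFS sketch for a 2D Laplace (potential) problem on the unit disk illustrates the source-point placement issue the abstract mentions; the fictitious-boundary radius of 1.8 is an arbitrary assumed choice, and the test field is a simple harmonic function.

```python
import numpy as np

# Method of fundamental solutions for 2D Laplace on the unit disk (a minimal sketch)
n = 40
th = np.linspace(0, 2 * np.pi, n, endpoint=False)
bc_pts = np.column_stack([np.cos(th), np.sin(th)])   # collocation points on boundary
src_pts = 1.8 * bc_pts                               # sources on a fictitious outer circle

def G(p, q):
    """Fundamental solution of the 2D Laplacian, evaluated pairwise."""
    r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return -np.log(r) / (2 * np.pi)

u_exact = lambda pts: pts[:, 0] ** 2 - pts[:, 1] ** 2   # harmonic test field x^2 - y^2

A = G(bc_pts, src_pts)
coef = np.linalg.solve(A, u_exact(bc_pts))              # enforce Dirichlet boundary data

# Evaluate inside the domain and compare with the exact solution
test = np.array([[0.3, 0.1], [0.0, 0.5], [-0.4, -0.2]])
u_mfs = G(test, src_pts) @ coef
print("max abs error:", np.abs(u_mfs - u_exact(test)).max())
```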
Procedia PDF Downloads 96
1556 Iterative Solver for Solving Large-Scale Frictional Contact Problems
Authors: Thierno Diop, Michel Fortin, Jean Deteix
Abstract:
Since the precise formulation of the elastic part is irrelevant to the description of the algorithm, we shall consider a generic case. In practice, however, we will have to deal with a nonlinear material (for instance, a Mooney-Rivlin model). We are interested in solving a finite element approximation of the problem, leading to large-scale nonlinear discrete problems and, after linearization, to large linear systems and ultimately to calculations requiring iterative methods. This also implies that the penalty method, and therefore the augmented Lagrangian method, are to be banned because of their negative effect on the condition number of the underlying discrete systems and thus on the convergence of iterative methods. This is a break from the mainstream of contact methods, in which the augmented Lagrangian is the principal tool. We first present the problem and its discretization; this leads us to describe a general solution algorithm relying on a preconditioner for saddle-point problems, which we describe in some detail as it is not entirely standard. We propose an iterative approach for solving three-dimensional frictional contact problems between elastic bodies, including contact with a rigid body, contact between two or more bodies, and also self-contact.
Keywords: frictional contact, three-dimensional, large-scale, iterative method
Procedia PDF Downloads 216
1555 Development of Agricultural Robotic Platform for Inter-Row Plant: An Autonomous Navigation Based on Machine Vision
Authors: Alaa El-Din Rezk
Abstract:
In Egypt, crop management is still far from exploiting today's advances in mechanical design, sensing and electronics technology. These technologies have been introduced in many places and have recorded high accuracy in different field operations. Therefore, an autonomous robotic platform based on machine vision has been developed and constructed for Egyptian conditions as a self-propelled mobile vehicle carrying tools for inter/intra-row crop management based on different control modules. The experiments were carried out at the Plant Protection Research Institute (PPRI) during 2014-2015 to optimize the accuracy of the robotic platform's machine-vision control in terms of autonomous navigation and guidance system performance. Results showed that the platform's machine-vision guidance system was able to adequately distinguish the path, resisted image noise, and achieved lower lateral offset error than human operators. The average error of the autonomous mode was 2.75, 19.33, 21.22, 34.18, and 16.69 mm, while that of the human operator was 32.70, 4.85, 7.85, 38.35 and 14.75 mm, for the Straight Path, Curved Path, Sine Wave Path, Offset Discontinuity and Angle Discontinuity, respectively.
Keywords: autonomous robotic, Hough transform, image processing, machine vision
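A sketch of the machine-vision guidance step (edge detection plus a probabilistic Hough transform to estimate lateral offset) is given below; the file name, thresholds and offset heuristic are hypothetical, not the platform's actual pipeline.

```python
import cv2
import numpy as np

frame = cv2.imread("crop_row.jpg")   # hypothetical frame from the platform camera
if frame is None:
    raise SystemExit("provide a test image, e.g. a frame from the platform camera")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)  # noise-robust edges

# Probabilistic Hough transform: candidate crop-row line segments
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=80, maxLineGap=20)

if lines is not None:
    # Crude lateral offset: image centre vs mean midpoint of detected segments
    mids = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]]
    offset_px = float(np.mean(mids)) - frame.shape[1] / 2.0
    print("lateral offset (pixels):", offset_px)
```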
Procedia PDF Downloads 318
1554 Uncertainty and Optimization Analysis Using PETREL RE
Authors: Ankur Sachan
Abstract:
The ability to make quick yet intelligent, value-added decisions to develop new fields has always been of great significance. In situations where capital expenses and subsurface risk are high, carefully analyzing the inherent uncertainties in the reservoir and how they impact the predicted hydrocarbon accumulation and production becomes a daunting task. The problem is compounded in offshore environments, especially in the presence of heavy oils and disconnected sands, where the margin for error is small. Uncertainty refers to the degree to which the data set may be in error or stray from the predicted values. Understanding and quantifying the uncertainties in a reservoir model is important when estimating the reserves. Uncertainty parameters can be geophysical, geological, petrophysical, etc., and identification of these parameters is necessary to carry out the uncertainty analysis. With so many uncertainties acting at different scales, it becomes essential to have a consistent and efficient way of incorporating them into the analysis. Ranking the uncertainties based on their impact on reserves helps to prioritize and guide future data gathering and uncertainty reduction efforts. Assigning probabilistic ranges to key uncertainties also enables the computation of probabilistic reserves. With this in mind, this paper uses the uncertainty and optimization process in Petrel RE to show how the most influential uncertainties can be determined efficiently and how much impact they have on the reservoir model, thus helping to determine a cost-effective and accurate model of the reservoir.
Keywords: uncertainty, reservoir model, parameters, optimization analysis
Procedia PDF Downloads 674
1553 Feature Extraction and Impact Analysis for Solid Mechanics Using Supervised Finite Element Analysis
Authors: Edward Schwalb, Matthias Dehmer, Michael Schlenkrich, Farzaneh Taslimi, Ketron Mitchell-Wynne, Horen Kuecuekyan
Abstract:
We present a generalized feature extraction approach for supporting Machine Learning (ML) algorithms which perform tasks similar to Finite Element Analysis (FEA). We report results for estimating the Head Injury Categorization (HIC) of vehicle engine compartments across various impact scenarios. Our experiments demonstrate that models learned using features derived with a simple discretization approach provide a reasonable approximation of a full simulation. We observe that decision trees can be as effective as neural networks for the HIC task. The simplicity and performance of the learned decision trees offer an orders-of-magnitude improvement in speed and cost over full simulation, in exchange for a reasonable approximation. When used as a complement to full simulation, the approach enables rapid approximate feedback to engineering teams before submission for full analysis. The approach produces mesh-independent features and is further agnostic of the assembly structure.
Keywords: mechanical design validation, FEA, supervised decision tree, convolutional neural network
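A toy version of the surrogate idea, with a decision tree regressing a HIC-like response on discretized features, might look like this; the feature construction and response function are synthetic stand-ins for the paper's FEA-derived data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical discretized features per impact scenario (e.g., cell-wise stiffness/mass/clearance)
rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 30))                 # 30 mesh-independent features
hic = 200 + 800 * X[:, 0] * X[:, 5] + 100 * X[:, 12] + rng.normal(0, 20, 500)  # toy response

X_tr, X_te, y_tr, y_te = train_test_split(X, hic, test_size=0.25, random_state=0)
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, tree.predict(X_te)))
```

The appeal of the tree here is exactly the trade-off the abstract describes: a cheap, fast approximation that is accurate enough to triage designs before committing to a full simulation.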
Procedia PDF Downloads 144
1552 Integrated Navigation System Using Simplified Kalman Filter Algorithm
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
GPS and inertial navigation systems (INS) have complementary qualities that make them ideal for sensor fusion. The limitations of GPS include occasional high noise content, outages when satellite signals are blocked, interference and low bandwidth. The strengths of GPS include its long-term stability and its capacity to function as a stand-alone navigation system. In contrast, an INS is not subject to interference or outages and has high bandwidth and good short-term noise characteristics, but it has long-term drift errors and requires external information for initialization. A combined system of GPS and INS subsystems can exhibit the robustness, higher bandwidth and better noise characteristics of the inertial system together with the long-term stability of GPS. The most common estimation algorithm used in integrated INS/GPS is the Kalman filter (KF), which is able to take advantage of these characteristics to provide a common integrated navigation implementation with performance superior to that of either subsystem (GPS or INS). This paper presents a simplified KF algorithm for land vehicle navigation applications. In this integration scheme, the GPS-derived positions and velocities are used as the update measurements for the INS-derived position, velocity and attitude (PVA). The KF error state vector in this case includes the navigation parameters as well as the accelerometer and gyroscope error states.
Keywords: GPS, INS, Kalman filter, inertial navigation system
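A simplified, loosely coupled 1D GPS/INS Kalman filter conveys the scheme: the INS accelerometer drives the prediction, GPS position/velocity fixes drive the update, and an accelerometer bias is included in the state. All noise values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt, -0.5 * dt**2],
              [0, 1,  -dt],
              [0, 0,   1]])              # state: [position, velocity, accel bias]
B = np.array([[0.5 * dt**2], [dt], [0]])
H = np.array([[1, 0, 0],
              [0, 1, 0]])                # GPS measures position and velocity
Q = np.diag([1e-4, 1e-3, 1e-6])          # assumed process noise (INS drift)
R = np.diag([4.0, 0.25])                 # assumed GPS noise (2 m, 0.5 m/s std)

x, P = np.zeros((3, 1)), np.eye(3)
rng = np.random.default_rng(0)
pos = vel = 0.0
for k in range(600):
    accel = 0.2                                       # true acceleration
    vel += accel * dt; pos += vel * dt
    ins_accel = accel + 0.05 + rng.normal(0, 0.02)    # biased, noisy INS reading
    x = F @ x + B * ins_accel                         # predict (INS mechanization)
    P = F @ P @ F.T + Q
    if k % 10 == 0:                                   # 1 Hz GPS update
        z = np.array([[pos + rng.normal(0, 2.0)],
                      [vel + rng.normal(0, 0.5)]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x, P = x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P

print("estimated accel bias (true 0.05):", round(float(x[2, 0]), 4))
```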
Procedia PDF Downloads 475
1551 Investigating Safe Operation Condition for Iterative Learning Control under Load Disturbances Effect in Singular Values
Authors: Muhammad A. Alsubaie
Abstract:
Iterative learning control frameworks designed in a state feedback structure lack investigation of load disturbance considerations. The presented work discusses a previously designed controller, highlights the disturbance problem, and finds new conditions, using the singular value principle, that assure safe operation with error convergence and reference tracking under the influence of load disturbance. It is known that periodic disturbances can be represented by a delay model in a positive feedback loop acting on the system input. This model can be manipulated by isolating the delay model and finding a controller for the overall system around the delay model to remedy the periodic disturbances using the small gain theorem. The overall system is the basis for the control design and load disturbance investigation. The major finding of this work is the identified load disturbance condition, which clearly sets a safe operation condition under the influence of load disturbances, such that the error tends to nearly zero as the system keeps operating trial after trial.
Keywords: iterative learning control, singular values, state feedback, load disturbance
Procedia PDF Downloads 161
1550 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and are a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement; here the resample size is 31 (eliminating one observation), repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample; here the resample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5% for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5% through both resamplings. The variance reflects the spread of the data about its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approximately 1%), low standard error of the mean (<0.8) and low variation coefficient (<0.2) reflect the accuracy of the sample for prediction. All the estimated values of the variation coefficient, standard deviation and standard error of the mean fall within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
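Both resampling schemes described (leave-one-out jackknife over the 32 CCD runs, and 100 bootstrap replicates with replacement) are easy to sketch; the sample below is synthetic, generated only to mimic the reported mean of about 58.5%.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical stand-in for the 32 measured omega-6 linoleic acid concentrations (%)
sample = rng.normal(58.5, 1.0, size=32)
n = len(sample)

# Jackknife: delete one observation (no replacement) -> 32 replicates of size 31
jack = np.array([np.delete(sample, i).mean() for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))

# Bootstrap: resample size-32 sets with replacement, 100 replicates
boot = np.array([rng.choice(sample, size=n, replace=True).mean() for _ in range(100)])

print(f"jackknife: mean={jack.mean():.2f}, SE of mean={jack_se:.3f}")
print(f"bootstrap: mean={boot.mean():.2f}, SE of mean={boot.std(ddof=1):.3f}, "
      f"variation coeff={boot.std(ddof=1) / boot.mean():.4f}")
```

Note the (n-1)/n inflation factor in the jackknife standard error, which compensates for the near-identical leave-one-out means.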
Procedia PDF Downloads 143
1549 Fuzzy and Fuzzy-PI Controller for Rotor Speed of Gas Turbine
Authors: Mandar Ghodekar, Sharad Jadhav, Sangram Jadhav
Abstract:
Speed control of the rotor during startup and under varying load conditions is one of the most difficult tasks in gas turbine operation. In this paper, a power plant gas turbine (GE9001E) is considered for this purpose, and fuzzy and fuzzy-PI rotor speed controllers are designed. The goal of the presented controllers is to keep the turbine rotor speed within predefined limits during startup as well as under operating conditions. The fuzzy controller and the fuzzy-PI controller are designed using the Takagi-Sugeno method and the Mamdani method, respectively. In applying fuzzy-PI control to the gas turbine plant, the tuning parameters (Kp and Ki) are modified online by a fuzzy logic approach. Error and rate of change of error are the inputs, and change in fuel flow is the output for both controllers. Hence, the rotor speed of the gas turbine is controlled by modifying the fuel flow. The identified linear ARX model of the gas turbine is used while designing the controllers. For simulations, demand power is taken as the disturbance input, and it is assumed that the inlet guide vane (IGV) position is fixed. In addition, the constraint on the fuel flow is taken into account. The performance of the presented controllers is compared with each other, as well as with H∞ robust and MPC controllers, for the same operating conditions in simulations.
Keywords: gas turbine, fuzzy controller, fuzzy PI controller, power plant
Procedia PDF Downloads 341
1548 Numerical Simulation of Unsteady Natural Convective Nanofluid Flow within a Trapezoidal Enclosure Using Meshfree Method
Authors: S. Nandal, R. Bhargava
Abstract:
This paper contains a numerical study of the unsteady magnetohydrodynamic natural convection flow of nanofluids within a symmetrical wavy-walled trapezoidal enclosure. The length and height of the enclosure are both equal to L. A two-phase nanofluid model is employed. The governing equations of the nanofluid flow, along with the boundary conditions, are non-dimensionalized and solved using a meshfree technique, the element-free Galerkin method (EFGM), which does not require a predefined mesh for discretization of the domain. The bottom wavy wall of the enclosure is defined using a cosine function. The effects of various parameters, namely time t, the amplitude of the bottom wavy wall a, the Brownian motion parameter Nb and the thermophoresis parameter Nt, are examined on the rates of heat and mass transfer to obtain a visualization of the cooling and heating effects. Such problems have important applications in heat exchangers and solar collectors, as wavy-walled enclosures enhance heat transfer in comparison to flat-walled enclosures.
Keywords: heat transfer, meshfree methods, nanofluid, trapezoidal enclosure
Procedia PDF Downloads 161
1547 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models
Authors: Ramin Vafadary, Maryam Khanbaghi
Abstract:
Forecasting electricity load is important for various purposes, such as planning, operation, and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, an LSTM network, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria, namely the mean absolute error and the root mean square error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time-series forecasts, we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
Keywords: bagging, Fbprophet, Holt-Winters, LSTM, load forecast, SARIMA, TensorFlow probability, time series
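A sketch of the SARIMA-plus-bagging pipeline on an hourly series with a 24-hour seasonality follows; the (1,0,1)×(1,1,1,24) order and the noise-perturbation stand-in for block-bootstrap bagging are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly residential load with a daily (24 h) seasonal pattern
idx = pd.date_range("2023-01-01", periods=24 * 30, freq="h")
rng = np.random.default_rng(3)
load = 1.0 + 0.4 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 0.05, len(idx))
series = pd.Series(load, index=idx)

def fit_forecast(y):
    model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
    return model.fit(disp=False).forecast(steps=24)   # 24-hour-ahead forecast

# Crude bagging stand-in: refit on perturbed copies of the history, average forecasts
forecasts = [fit_forecast(series)]
for _ in range(4):
    perturbed = series + rng.normal(0, 0.1 * series.std(), len(series))
    forecasts.append(fit_forecast(perturbed))
bagged = pd.concat(forecasts, axis=1).mean(axis=1)
print(bagged.round(3).head())
```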
Procedia PDF Downloads 100
1546 Continuous Blood Pressure Measurement from Pulse Transit Time Techniques
Authors: Chien-Lin Wang, Cha-Ling Ko, Tainsong Chen
Abstract:
Blood pressure (BP) is one of the vital signs and an index that helps determine the stability of life. In this respect, some spinal cord injury patients need to take the tilt table test. During the test, the posture changes abruptly, which may cause a patient's BP to change abnormally. This may cause patients to feel discomfort, and even to feel as though their life is threatened. Therefore, if a continuous non-invasive BP assessment system were built, it could alert health care professionals during rehabilitation when the BP value is out of range. In our research, a BP assessment method based on the pulse transit time (PTT) technique was developed. In the system, we use a self-made photoplethysmograph (PPG) sensor and filter circuit to detect two PPG signals and calculate the time difference between them. The BP can then be assessed immediately from the trend line. According to the results of this study, the relationship between systolic BP and PTT shows a strongly negative linear correlation (R² = 0.8). Further, we used the trend line to assess the BP value and compared it to a commercial sphygmomanometer (Omron MX3); the error rate of the system was found to be in the range of ±10%, which is within the permissible error range of a commercial sphygmomanometer. Continuous blood pressure measurement using the pulse transit time technique thus has the potential to become a convenient method for clinical rehabilitation.
Keywords: continuous blood pressure measurement, PPG, pulse transit time, transit velocity
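The trend-line calibration can be sketched as a simple negative linear fit of systolic BP against PTT; all numbers below are hypothetical calibration pairs, not the study's measurements.

```python
import numpy as np

# Hypothetical calibration pairs: pulse transit time (ms) vs cuff systolic BP (mmHg)
ptt = np.array([180, 190, 200, 210, 220, 230, 240])
sbp = np.array([138, 132, 127, 121, 116, 110, 105])

# Negative linear trend line: SBP = a * PTT + b
a, b = np.polyfit(ptt, sbp, 1)
r2 = np.corrcoef(ptt, sbp)[0, 1] ** 2
print(f"SBP = {a:.3f} * PTT + {b:.1f}, R^2 = {r2:.2f}")

# Continuous estimation from new beat-to-beat PTT readings
new_ptt = np.array([195, 205, 215])
est = a * new_ptt + b
cuff = np.array([129, 124, 118])            # reference sphygmomanometer readings
print("relative error (%):", np.round(100 * (est - cuff) / cuff, 1))
```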
Procedia PDF Downloads 356
1545 Parametric and Analysis Study of the Melting in Slabs Heated by a Laminar Heat Transfer Fluid in Downward and Upward Flows
Authors: Radouane Elbahjaoui, Hamid El Qarnia
Abstract:
The present work aims to investigate numerically the thermal and flow characteristics of a rectangular latent heat storage unit (LHSU) during the melting process of a phase change material (PCM). The LHSU consists of a number of vertical, identical plates of PCM separated by rectangular channels. The melting process is initiated when the LHSU is heated by a heat transfer fluid (HTF: water) flowing through the channels in a downward or upward direction. The proposed study is motivated by the need to optimize the thermal performance of the LHSU by accelerating the charging process. A mathematical model is developed, and a fixed-grid enthalpy formulation is adopted for modeling the melting process coupled with convection-conduction heat transfer. The finite volume method is used for discretization. The obtained numerical results are compared with experimental, analytical and numerical ones found in the literature, and reasonable agreement is obtained. Thereafter, numerical investigations are carried out to highlight the effects of the HTF flow direction and the aspect ratio of the PCM slabs on the heat transfer characteristics and the thermal performance enhancement of the LHSU.
Keywords: PCM, TES, LHSU, melting
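A fixed-grid enthalpy formulation like the one adopted here can be illustrated in 1D with an explicit finite-volume update; all material properties and boundary temperatures below are assumed values, and the geometry is reduced to a single PCM slab heated on one face.

```python
import numpy as np

# 1D fixed-grid enthalpy method for melting a PCM slab heated at x = 0 (a minimal sketch)
L, N = 0.02, 100                          # slab thickness (m), number of cells
dx = L / N
k, rho, cp = 0.2, 800.0, 2000.0           # assumed conductivity, density, specific heat
Lf, Tm = 180e3, 300.0                     # assumed latent heat (J/kg), melting point (K)
T_hot, T0 = 340.0, 290.0                  # HTF-side wall and initial temperatures
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha                  # explicit stability limit

H = rho * cp * (np.full(N, T0) - Tm)      # volumetric enthalpy, H = 0 at melting onset
def temp(H):
    """Invert enthalpy to temperature: solid / mushy (isothermal) / liquid."""
    T = np.where(H < 0, Tm + H / (rho * cp), Tm)
    return np.where(H > rho * Lf, Tm + (H - rho * Lf) / (rho * cp), T)

for _ in range(20000):
    T = temp(H)
    Tl = np.concatenate(([2 * T_hot - T[0]], T[:-1]))  # Dirichlet ghost cell at x = 0
    Tr = np.concatenate((T[1:], [T[-1]]))              # adiabatic far wall
    H += dt * k * (Tl - 2 * T + Tr) / dx**2            # conduction-driven enthalpy update

melt_fraction = np.clip(H / (rho * Lf), 0, 1).mean()
print("melt fraction after %.0f s: %.2f" % (20000 * dt, melt_fraction))
```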
Procedia PDF Downloads 264