Search results for: error analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27935

27875 Intelligent Computing with Bayesian Regularization Artificial Neural Networks for a Nonlinear System of COVID-19 Epidemic Model for Future Generation Disease Control

Authors: Tahir Nawaz Cheema, Dumitru Baleanu, Ali Raza

Abstract:

In this research work, we design intelligent computing through Bayesian Regularization artificial neural networks (BRANNs) to solve the mathematical model of an infectious disease (Covid-19). The transmission dynamics arise from interactions among people, and the mathematical representation is based on a system of nonlinear differential equations. The dataset for the Covid-19 model is generated using an explicit Runge-Kutta method for different countries of the world, such as India, Pakistan, and Italy. The generated dataset is split into training, testing, and validation sets, which are used in each update of the Bayesian Regularization backpropagation to capture the numerical behavior of the Covid-19 model dynamics. The performance and effectiveness of the designed BRANN methodology are assessed through the mean squared error, error histograms, numerical solutions, absolute error, and regression analysis.
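
As an illustration of the data-generation step described above, the following minimal Python sketch (not from the paper) solves a SIR-type nonlinear ODE system with an explicit Runge-Kutta method and splits the trajectory into training, validation, and test sets; the model, parameter values, and split ratios are illustrative assumptions.

```python
# A minimal sketch (not from the paper): generate an ODE dataset with an
# explicit Runge-Kutta solver and split it for training/validation/testing.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta=0.3, gamma=0.1):
    # illustrative SIR-type nonlinear system (assumed, not the paper's model)
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_eval = np.linspace(0, 160, 321)
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], method="RK45", t_eval=t_eval)

X = sol.t.reshape(-1, 1)          # network input: time
Y = sol.y.T                       # network targets: S, I, R fractions

rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
n_train, n_val = int(0.70 * len(X)), int(0.15 * len(X))
train, val, test = np.split(idx, [n_train, n_train + n_val])
print(len(train), len(val), len(test))
```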

Keywords: mathematical models, Bayesian regularization, Bayesian-regularization backpropagation networks, regression analysis, numerical computing

Procedia PDF Downloads 107
27874 Composite Forecasts Accuracy for Automobile Sales in Thailand

Authors: Watchareeporn Chaimongkol

Abstract:

In this paper, we compare the accuracy, measured by several statistical criteria, of a composite forecasting model for estimating automobile customer demand in Thailand. A modified simple exponential smoothing and autoregressive integrated moving average (ARIMA) forecasting model is built to estimate customer demand for passenger cars, rather than relying on historical sales data alone. Our model takes into account special characteristics of the Thai automobile market such as sales promotions, advertising and publicity, petrol prices, and loan interest rates. We evaluate our forecasting model by comparing forecasts with actual data using six accuracy measures: the mean absolute percentage error (MAPE), geometric mean absolute error (GMAE), symmetric mean absolute percentage error (sMAPE), mean absolute scaled error (MASE), median relative absolute error (MdRAE), and geometric mean relative absolute error (GMRAE).
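
The six accuracy measures listed above can be computed as in the following hedged Python sketch (not the paper's code), which uses the naive (random-walk) forecast as the benchmark for the relative and scaled errors; the variable names and test data are illustrative.

```python
# A hedged sketch of the six accuracy measures named in the abstract.
import numpy as np

def accuracy_measures(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    e = y - yhat
    e_naive = y[1:] - y[:-1]                  # benchmark: naive (random-walk) errors
    rae = np.abs(e[1:]) / np.abs(e_naive)     # relative absolute errors
    return {
        "MAPE":  100 * np.mean(np.abs(e) / np.abs(y)),
        "GMAE":  np.exp(np.mean(np.log(np.abs(e)))),
        "sMAPE": 100 * np.mean(2 * np.abs(e) / (np.abs(y) + np.abs(yhat))),
        "MASE":  np.mean(np.abs(e)) / np.mean(np.abs(e_naive)),
        "MdRAE": np.median(rae),
        "GMRAE": np.exp(np.mean(np.log(rae))),
    }

actual   = [120, 135, 128, 150, 160, 155]     # illustrative monthly sales
forecast = [118, 130, 133, 145, 158, 150]
print(accuracy_measures(actual, forecast))
```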

Keywords: composite forecasting, simple exponential smoothing model, autoregressive integrated moving average model selection, accuracy measurements

Procedia PDF Downloads 330
27873 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error

Authors: Oscar Javier Herrera, Manuel Angel Camacho

Abstract:

This paper addresses a method of business demand forecasting based on an empirical probability function, suitable when the historical behavior of the data is random. Additionally, it determines the forecast error using the numerical technique of propagation of errors. The methodology first characterizes and diagnoses demand planning as part of production management, and then investigates new ways to predict demand through probability techniques and to calculate the associated error using numerical methods, all based on the observed behavior of the data. The analysis considers the specific business circumstances of a company in the communications sector located in the city of Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock that had not rotated for a long time, code its inventory, and plan reorder points for the replenishment of stock.
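
For reference, first-order propagation of error approximates the uncertainty of f(x1, ..., xn) as sigma_f = sqrt(sum_i (df/dxi)^2 * sigma_i^2). The following Python sketch (not the paper's implementation) evaluates this with numerical partial derivatives; the demand function and the numbers are illustrative assumptions.

```python
# A minimal sketch of first-order propagation of error with numerical derivatives.
import numpy as np

def propagated_error(f, x, sigma, h=1e-6):
    x = np.asarray(x, float)
    grads = []
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = h
        grads.append((f(x + dx) - f(x - dx)) / (2 * h))   # central difference
    return np.sqrt(np.sum((np.array(grads) * np.asarray(sigma)) ** 2))

# illustrative example: forecast demand = base level * seasonal factor + trend
demand = lambda p: p[0] * p[1] + p[2]
print(propagated_error(demand, x=[1000.0, 1.2, 50.0], sigma=[40.0, 0.05, 10.0]))
```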

Keywords: demand forecasting, empirical distribution, propagation of error, Bogota

Procedia PDF Downloads 585
27872 Solar Radiation Studies for Islamabad, Pakistan

Authors: Sidra A. Shaikh, M. A. Ahmed, M. W. Akhtar

Abstract:

Global and diffuse solar radiation studies have been carried out for Islamabad (Lat: 33° 43’ N, Long: 73° 71’ E) to assess the solar potential of the area using sunshine hour data. A detailed analysis of global solar radiation values estimated using several methods is presented. These values are then compared with the NASA SSE model. The variation in the direct and diffuse components of solar radiation is examined for the summer and winter months for Islamabad, along with the clearness index KT. The diffuse solar radiation is found to be at its maximum in July, while the direct (beam) radiation is high from April to June. From the results, it appears that, with the exception of the monsoon months of July and August, solar radiation can be utilized very efficiently for electricity generation throughout the year. Finally, the mean bias error (MBE), root mean square error (RMSE), and mean percent error (MPE) for global solar radiation are also presented.

Keywords: solar potential, global and diffuse solar radiation, Islamabad, errors

Procedia PDF Downloads 408
27871 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

Authors: R. Sabre, W. Horrigue, J. C. Simon

Abstract:

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe the process continuously over a time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on smoothing the periodogram with the polynomial Jackson kernel, which reduces the additive error. In order to avoid the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times, so as to confine the estimator to the band where the spectral density is not zero. We show that the proposed estimator is asymptotically unbiased and consistent. Thus, we obtain an estimator that addresses both difficulties: the choice of the observation instants of a continuous-time process, and observations affected by a constant error.

Keywords: spectral density, stable processes, aliasing, periodogram

Procedia PDF Downloads 112
27870 Pin Count Aware Volumetric Error Detection in Arbitrary Microfluidic Bio-Chip

Authors: Kunal Das, Priya Sengupta, Abhishek K. Singh

Abstract:

Pin assignment, scheduling, routing, and error detection for arbitrary biochemical protocols on a Digital Microfluidic Biochip are reported in this paper. The research work concentrates on pin assignment for routing 2 or 3 droplets in an arbitrary biochemical protocol, and on scheduling and routing in an m × n biochip. Volumetric errors arise due to droplet splitting in the biochip. Volumetric error detection is also addressed using a biochip AND logic gate, known as the microfluidic AND (mAND) gate. The proposed pin assignment algorithm for an m × n biochip requires m + n − 1 pins. Its basic principle is that the same pin is never placed in the same row, the same column, or in diagonal or adjacent cells; identical pins are placed a distance apart so that interference is reduced. A case study is also reported in this paper.
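
The placement constraints quoted above can be verified mechanically. The following Python sketch (not the authors' algorithm) only checks whether a given pin assignment on an m × n electrode array repeats a pin within a row, within a column, or in an adjacent or diagonally adjacent cell; the example grid and the reading of "diagonal" as diagonally adjacent are illustrative assumptions.

```python
# A hedged checker for the pin-assignment constraints stated in the abstract.
import itertools

def valid_assignment(grid):
    m, n = len(grid), len(grid[0])
    for i, j in itertools.product(range(m), range(n)):
        # no repetition of a pin within the same row or the same column
        if grid[i].count(grid[i][j]) > 1:
            return False
        if [grid[r][j] for r in range(m)].count(grid[i][j]) > 1:
            return False
        # no repetition in the 8-neighbourhood (adjacent and diagonal cells)
        for di, dj in itertools.product((-1, 0, 1), repeat=2):
            r, c = i + di, j + dj
            if (di, dj) != (0, 0) and 0 <= r < m and 0 <= c < n:
                if grid[r][c] == grid[i][j]:
                    return False
    return True

example = [[0, 1, 2],
           [2, 3, 0]]          # 2 x 3 array using m + n - 1 = 4 pins
print(valid_assignment(example))
```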

Keywords: digital microfluidic biochip, cross-contamination, pin assignment, microfluidic AND gate

Procedia PDF Downloads 242
27869 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker combines two criteria, a sufficient confidence level of a successful attack and a reasonably small estimation error, in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as the aggregation error. On the other hand, the defender attempts to deter the attack by making one or both criteria inapplicable. The defender builds his/her strategy by both strengthening the targeted system and increasing the size of the error. We formulate the defender problem based on appropriate optimization models. The attacker opts for Bayesian updating in assessing the impact of the improvement made by the defender, and then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.

Keywords: attacker, defender, game theory, information

Procedia PDF Downloads 424
27868 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample includes 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection consists of 4 writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong verb tense forms, singular or plural noun endings, run-on sentences, wrong verb pattern forms, and lack of parallel structure. Moreover, it is found that writing error correction using coded indirect corrective feedback and error treatment yields an overall reduction of the frequent English writing errors and an increase in students’ achievement in the writing of short texts, significant at the .05 level.

Keywords: coded indirect corrective feedback, error correction, error treatment, frequent English writing errors

Procedia PDF Downloads 201
27867 Implementing Digital Control System in Robotics

Authors: Safiullah Abdullahi

Abstract:

This paper describes the design of a digital control system which controls the speed and direction of a robot. The robot is expected to follow a thick black line with the highest possible speed and the lowest error around the line. The control system corrects the angle error between the frame axis of the robot and the line. The cause of this error is the difference in speed between the two driving wheels of the robot, which are driven by two separate DC motors; the speed difference arises from the un-modeled friction present in the wheels, with a different magnitude in each. The control scheme is that a number of photo sensors mounted on the front of the robot report their position with respect to the black line to the digital controller. The controller then evaluates the position error and generates the required duty cycle for the corresponding wheel motor to drive it faster or slower.
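
A minimal Python sketch of the scheme described above (not the paper's controller) is given below: a photo-sensor array yields a position error relative to the line, and a proportional digital controller turns that error into per-wheel PWM duty cycles; the sensor weights, gain, and base duty cycle are illustrative assumptions.

```python
# A hedged sketch of a proportional line-following controller.
def position_error(sensors, weights=(-2, -1, 0, 1, 2)):
    """sensors: 1 where a photo sensor sees the black line, else 0."""
    active = [w for w, s in zip(weights, sensors) if s]
    return sum(active) / len(active) if active else 0.0

def duty_cycles(error, base=0.6, kp=0.15):
    correction = kp * error
    left = min(max(base + correction, 0.0), 1.0)    # one wheel speeds up,
    right = min(max(base - correction, 0.0), 1.0)   # the other slows down
    return left, right

print(duty_cycles(position_error([0, 0, 1, 1, 0])))  # line drifting to one side
```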

Keywords: digital control, robot, controller, control system

Procedia PDF Downloads 519
27866 Forecasting Performance Comparison of Autoregressive Fractional Integrated Moving Average and Jordan Recurrent Neural Network Models on the Turbidity of Stream Flows

Authors: Daniel Fulus Fom, Gau Patrick Damulak

Abstract:

In this study, the Autoregressive Fractional Integrated Moving Average (ARFIMA) and Jordan Recurrent Neural Network (JRNN) models were employed to model and forecast the daily turbidity flow of White Clay Creek (WCC). The two methods were applied to the log-difference series of the daily turbidity flow series of WCC. The error measures employed to investigate the forecasting performance of the ARFIMA and JRNN models are the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The outcome of the investigation revealed that the forecasting performance of the JRNN technique is better than that of the ARFIMA technique in the mean square error sense. The results of the ARFIMA and JRNN models were obtained by simulating the models in MATLAB version 8.03. The significance of using the log-difference series rather than the difference series is that the log-difference series stabilizes the turbidity flow series better than the difference series for both the ARFIMA and JRNN models.

Keywords: autoregressive, mean absolute error, neural network, root mean square error

Procedia PDF Downloads 240
27865 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on the Taguchi design of experiments method and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error, under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of the Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
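
The grey relational analysis step can be sketched as follows (a hedged Python example, not the paper's code): each smaller-the-better response is normalized, grey relational coefficients are computed with a distinguishing coefficient of 0.5, and the runs are ranked by their weighted grey relational grade; the data and weights are illustrative assumptions.

```python
# A hedged sketch of grey relational analysis for smaller-the-better responses.
import numpy as np

def grey_relational_grade(responses, weights, zeta=0.5):
    r = np.asarray(responses, float)              # rows: runs, cols: responses
    # smaller-the-better normalization to [0, 1]
    norm = (r.max(axis=0) - r) / (r.max(axis=0) - r.min(axis=0))
    delta = 1.0 - norm                            # deviation from the ideal value 1
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff @ np.asarray(weights, float)     # weighted grey relational grade

runs = [[1.8, 0.30, 0.050],    # surface roughness, burr height, diameter error
        [1.2, 0.45, 0.030],
        [1.5, 0.20, 0.045]]
grades = grey_relational_grade(runs, weights=[0.4, 0.3, 0.3])
print(grades, "best run:", int(np.argmax(grades)) + 1)
```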

Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error

Procedia PDF Downloads 284
27864 Design for Error-Proofing Assembly: A Systematic Approach to Prevent Assembly Issues since Early Design Stages, an Industrial Case Study

Authors: Gabriela Estrada, Joaquim Lloveras

Abstract:

Design for error-proofing assembly is a new DFX approach to prevent, from the early design stages, assembly issues that can happen during the life phases of a system, such as the production, installation, operation, and replacement phases. This prevention is possible by designing the product with poka-yoke, or error-proofing, characteristics. The approach guides designers to make decisions based on poka-yoke assembly design requirements. As a result of applying these requirements, designers are able to create solutions that prevent assembly issues while the product is still in the development stage. This paper integrates the need to design products in an error-proofing way into the systematic design process of Pahl and Beitz. A case study applying this approach is presented.

Keywords: poka-yoke, error-proofing, assembly issues, design process, life phases of a system

Procedia PDF Downloads 343
27863 Design for Error-Proofing Assembly: A Systematic Approach to Prevent Assembly Issues since Early Design Stages. An Industry Case Study

Authors: Gabriela Estrada, Joaquim Lloveras

Abstract:

Design for error-proofing assembly is a new DFX approach to prevent, from the early design stages, assembly issues that can happen during the life phases of a system, such as the production, installation, operation, and replacement phases. This prevention is possible by designing the product with poka-yoke, or error-proofing, characteristics. The approach guides designers to make decisions based on poka-yoke assembly design requirements. As a result of applying these requirements, designers are able to create solutions that prevent assembly issues while the product is still in the development stage. This paper integrates the need to design products in an error-proofing way into the systematic design process of Pahl and Beitz. A case study applying this approach is presented.

Keywords: poka-yoke, error-proofing, assembly issues, design process, life phases of a system

Procedia PDF Downloads 290
27862 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment: The Case of Reading and Writing English for Academic Purposes II

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample includes 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection consists of 4 writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong verb tense forms, singular or plural noun endings, run-on sentences, wrong verb pattern forms, and lack of parallel structure. Moreover, it is found that writing error correction using coded indirect corrective feedback and error treatment yields an overall reduction of the frequent English writing errors and an increase in students’ achievement in the writing of short texts, significant at the .05 level.

Keywords: coded indirect corrective feedback, error correction, error treatment, English writing

Procedia PDF Downloads 274
27861 The Omani Learner of English Corpus: Source and Tools

Authors: Anood Al-Shibli

Abstract:

Designing a learner corpus is not an easy task to accomplish, because dealing with learners’ language involves many variables that might affect the results of any study based on learners’ language production (spoken and written). It is also essential to design a learner corpus systematically, especially when it is intended to be a reference for language research. Therefore, the design of the Omani Learner of English Corpus (OLEC) has undergone many explicit and systematic considerations. These criteria can be regarded as the foundation for designing any learner corpus that is to be exploited effectively in studies of language use and language learning. In addition, OLEC is a manually error-annotated corpus. Error annotation in learner corpora is essential; however, it is time-consuming and prone to errors. Consequently, a navigating tool was designed to help the annotators insert error codes, in order to make the error-annotation process more efficient and consistent. To ensure accuracy, an error-annotation procedure is followed to annotate OLEC, and some preliminary findings are noted. One of the main results of this procedure is the creation of an error-annotation system based on Omani learners’ English language production. Because OLEC is still in its first stages, the primary findings relate to only one proficiency level and one error type, namely verb-related errors. It is found that Omani learners in OLEC tend to have more errors in forming the verb, followed by problems with verb agreement. Comparing the results to other error-based studies indicates that Omani learners tend to make basic verb errors, which are found at lower proficiency levels. To this end, it is essential to note that examining learners’ errors can give insights into language acquisition and language learning, and that most errors do not happen randomly but occur systematically among language learners.

Keywords: error-annotation system, error-annotation manual, learner corpora, verbs related errors

Procedia PDF Downloads 105
27860 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or excess zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimator (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows the standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in the covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account provide statistically more reliable and precise information.
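
The preliminary zero-count check described above can be reproduced in a few lines. The following Python sketch (not the authors' score test) compares the observed number of zero counts with the number expected under a Poisson model with the fitted mean; the simulated data are illustrative, and the measurement-error-adjusted test itself is not reproduced here.

```python
# A minimal sketch of the observed-vs-expected zero-count comparison.
import numpy as np

rng = np.random.default_rng(1)
n = 600
# illustrative zero-inflated consultation counts: extra zeros mixed with Poisson(1.33)
y = np.where(rng.random(n) < 0.15, 0, rng.poisson(1.33, n))

lam_hat = y.mean()
observed_zeros = np.sum(y == 0)
expected_zeros = n * np.exp(-lam_hat)          # Poisson P(Y = 0) = exp(-lambda)
print(f"observed zeros = {observed_zeros}, expected under Poisson = {expected_zeros:.0f}")
# a clear excess of observed zeros points toward a zero-inflated Poisson model
```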

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 257
27859 Reduced Complexity of ML Detection Combined with DFE

Authors: Jae-Hyun Ro, Yong-Jun Kim, Chang-Bin Ha, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance, but its complexity is very high. Thus, this paper proposes a reduced-complexity ML detection combined with a decision feedback equalizer (DFE). The error performance of the proposed detection scheme is better than that of the conventional DFE, while its complexity is lower than that of conventional ML detection.
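
To illustrate why plain ML detection is costly, the following hedged Python sketch (not the paper's scheme) exhaustively searches all candidate symbol vectors s and picks the one minimizing ||y - Hs||^2, so the search size grows as M^Nt; the 2×2 QPSK setup and noise level are illustrative assumptions.

```python
# A hedged sketch of exhaustive ML detection for a small MIMO system.
import itertools
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s_true = rng.choice(qpsk, size=2)
y = H @ s_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# exhaustive ML search over all M^Nt = 16 candidate symbol vectors
candidates = [np.array(c) for c in itertools.product(qpsk, repeat=2)]
s_ml = min(candidates, key=lambda s: np.linalg.norm(y - H @ s) ** 2)
print(np.allclose(s_ml, s_true))
```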

Keywords: detection, DFE, MIMO-OFDM, ML

Procedia PDF Downloads 569
27858 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms and to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy the safety and economic constraints. Studies of the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly lengths. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and the corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. The data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 279
27857 Task Space Synchronization Control of Multi-Robot Arms with Position Synchronous Method

Authors: Zijian Zhang, Yangyang Dong

Abstract:

Synchronization is of great importance to ensure that a multi-arm robot completes its task. Therefore, a synchronous controller is designed in this paper to coordinate the task-space motion of the multiple arms. The position error, the synchronous position error, and the coupling position error are all considered in the controller. Besides, an adaptive control method is used to adjust the controller parameters to improve the coordinated control performance. Simulation in MATLAB shows the effectiveness of the method. Finally, a robot experiment platform with two 7-DOF (degree of freedom) robot arms has been established, and the synchronous controller, simplified to control the dual-arm robot, has been validated on the experimental set-up. Experiment results show that the position error decreased by 10% and that the corresponding response frequency also improved greatly.
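
One way the three error terms named above can be combined is sketched below (a hedged Python example, not the authors' controller): each arm's tracking error, a synchronization error between neighbouring arms, and a coupled error acted on by a PD-type law; the gains and the two-arm example are illustrative assumptions.

```python
# A hedged sketch of coupled tracking/synchronization errors for a PD-type law.
import numpy as np

def coupled_errors(x, x_des, beta=0.5):
    e = x - x_des                                   # position (tracking) error
    e_sync = e - np.roll(e, -1, axis=0)             # synchronization error between arms
    return e + beta * e_sync                        # coupled position error

def pd_control(x, x_des, xdot, xdot_des, kp=20.0, kd=5.0, beta=0.5):
    E = coupled_errors(x, x_des, beta)
    Edot = coupled_errors(xdot, xdot_des, beta)
    return -kp * E - kd * Edot                      # task-space control action

x     = np.array([[0.52, 0.10], [0.48, 0.12]])      # two arms, 2-D task positions
x_des = np.array([[0.50, 0.10], [0.50, 0.10]])
xdot  = np.zeros_like(x)
print(pd_control(x, x_des, xdot, np.zeros_like(x)))
```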

Keywords: synchronous control, space robot, task space control, multi-arm robot

Procedia PDF Downloads 129
27856 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or for tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (an inpatient drug error), and Scenario B involved detecting an error that had already occurred (a critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: The interventions suggested in Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite the interventions and how the interventions could be improved. Interventions were modified with subsequent simulations until the recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In Scenario A, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In Scenario B, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial for identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change a process (an epinephrine kit or a mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes to memory aids). Given that even the most successful interventions needed modification and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 117
27855 A Combined Error Control with Forward Euler Method for Dynamical Systems

Authors: R. Vigneswaran, S. Thilakanathan

Abstract:

Variable time-stepping algorithms for solving dynamical systems perform poorly in long-time computations which pass close to a fixed point. To overcome this difficulty, several authors have considered phase space error controls for the numerical simulation of dynamical systems. In one generalized phase space error control, a step-size selection scheme was proposed which allows this error control to be incorporated into the standard adaptive algorithm as an extra constraint at negligible extra computational cost. For this generalized error control, the forward Euler method applied to a linear system whose coefficient matrix has real and negative eigenvalues has already been analyzed. In this paper, this result is extended to linear systems whose coefficient matrix has complex eigenvalues with negative real parts. Some theoretical results are obtained, and numerical experiments are carried out to support them.
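
A hedged Python sketch of a variable-step forward Euler method for a linear system x' = Ax whose eigenvalues have negative real parts is given below; the local error is estimated by step doubling and the step size adjusted toward a tolerance. The paper's phase-space error control is not reproduced, and the matrix, tolerance, and safety factor are illustrative assumptions.

```python
# A hedged sketch of adaptive forward Euler with step-doubling error estimation.
import numpy as np

def adaptive_forward_euler(A, x0, t_end, h=0.1, tol=1e-4, safety=0.9):
    t, x, traj = 0.0, np.asarray(x0, float), [np.asarray(x0, float)]
    while t < t_end:
        h = min(h, t_end - t)
        full = x + h * A @ x                              # one step of size h
        half = x + 0.5 * h * A @ x
        double = half + 0.5 * h * A @ half                # two steps of size h/2
        err = np.linalg.norm(double - full)               # local error estimate
        if err <= tol:                                    # accept the step
            t, x = t + h, double
            traj.append(x)
        h *= safety * np.sqrt(tol / max(err, 1e-16))      # local error ~ O(h^2)
    return np.array(traj)

A = np.array([[-0.5, 2.0], [-2.0, -0.5]])                 # eigenvalues -0.5 +/- 2i
print(adaptive_forward_euler(A, [1.0, 0.0], t_end=10.0)[-1])
```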

Keywords: adaptivity, fixed point, long time simulations, stability, linear system

Procedia PDF Downloads 283
27854 Modeling of Diurnal Pattern of Air Temperature in a Tropical Environment: Ile-Ife and Ibadan, Nigeria

Authors: Rufus Temidayo Akinnubi, M. O. Adeniyi

Abstract:

Existing diurnal air temperature models simulate night-time air temperature over Nigeria with large biases. An improved parameterization is presented for modeling the diurnal pattern of air temperature (Ta), applicable to the calculation of turbulent heat fluxes in global climate models, based on surface-layer observations from the Nigeria Micrometeorological Experimental site (NIMEX). Five diurnal Ta models for estimating hourly Ta from the daily maximum, daily minimum, and daily mean air temperature were validated using the root mean square error (RMSE), the mean bias error (MBE), and scatter plots. The original Fourier series model performed better for unstable conditions, while the stable-period Ta was strongly overestimated, with a large error. The model was improved by including the atmospheric cooling rate, which accounts for the temperature inversion that occurs under nocturnal boundary layer conditions. The MBE and RMSE of the modified Fourier series model were reduced by 4.45 °C and 3.12 °C during the transition from dry to wet stable atmospheric conditions. The modified Fourier series model gave a good estimation of the diurnal pattern of Ta when compared with other existing models for a tropical environment.
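
A minimal Python sketch of a Fourier-series-type diurnal cycle (not the paper's parameterization) is shown below: hourly temperature is reconstructed from the daily mean and the daily range with a single harmonic peaking in mid-afternoon; the peak hour and the use of a single harmonic are illustrative assumptions.

```python
# A hedged one-harmonic sketch of a diurnal air temperature cycle.
import numpy as np

def diurnal_temperature(t_hour, t_mean, t_max, t_min, peak_hour=15.0):
    amplitude = 0.5 * (t_max - t_min)             # half the daily range
    return t_mean + amplitude * np.cos(2 * np.pi * (t_hour - peak_hour) / 24.0)

hours = np.arange(24)
ta = diurnal_temperature(hours, t_mean=27.0, t_max=33.0, t_min=22.0)
print(np.round(ta, 1))
```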

Keywords: air temperature, mean bias error, Fourier series analysis, surface energy balance

Procedia PDF Downloads 198
27853 Parameter Estimation of False Dynamic EIV Model with Additive Uncertainty

Authors: Dalvinder Kaur Mangal

Abstract:

For the past decade, noise-corrupted output measurements have been a fundamental research problem. On the other hand, the estimation of the parameters of linear dynamic systems when the input is also affected by noise is recognized as a more difficult problem, which only recently has received increasing attention. Representations where errors or measurement noises/disturbances are present on both the inputs and the outputs are usually called errors-in-variables (EIV) models. These disturbances may also have additive effects, which are also considered in this paper. Parameter estimation for the false EIV problem using equation error, output error, and iterative prefiltering identification schemes, with and without additive uncertainty, when only the output observation is corrupted by noise, is dealt with in this paper. A comparative study of these three schemes has also been carried out.

Keywords: errors-in-variable (EIV), false EIV, equation error, output error, iterative prefiltering, Gaussian noise

Procedia PDF Downloads 452
27852 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, we can determine only the bearing of the target, with no information about its range. Target motion analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no very effective method that can always track an unknown target and extract its moving trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation is carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target and takes into consideration the practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and quantized bearing measurements from multi-beam sonar. Tests are performed for many given scenarios. For all tests, full tracking is achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance; the range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has a low estimation error; however, the convergence time of the algorithm still needs to be improved.

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 362
27851 A Soft Error Rates (SER) Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers

Authors: Man Li, Wanting Zhou, Lei Li

Abstract:

Communication stability is the primary concern of communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in combinational logic circuits. Existing research on the soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded have the same pulse width. However, in an actual radiation environment, the pulse widths of the bombarded logic gates differ because of different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. The authors analyze the influence of LET on the pulse width in combinational logic and establish a pulse width model based on the LET. Based on this model, the error rate of the test circuit ISCAS'85 is calculated. The effectiveness of the model is proved by comparison with previous experiments.

Keywords: communication satellite, pulse width, soft error rates, LET

Procedia PDF Downloads 130
27850 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification application with class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, comprising the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples, with 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples show non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set to equal proportions (0.33:0.33:0.33) and to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), and (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} equal to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
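
The bootstrap step can be sketched as follows (a hedged Python example, not the authors' analysis): a class-imbalanced data set is resampled, a k-nearest neighbors classifier is fitted on each bootstrap sample, and the misclassification error rate is estimated on the original data; the simulated features and class proportions are illustrative assumptions.

```python
# A hedged sketch of bootstrap estimation of a kNN misclassification error rate.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 599
labels = rng.choice([0, 1, 2], size=n, p=[0.90, 0.05, 0.05])        # non-risk / risk / diabetic
X = rng.normal(loc=labels[:, None] * 0.8, scale=1.0, size=(n, 4))   # illustrative features

errors = []
for _ in range(50):                       # 50 bootstrap samples, 599 observations each
    idx = rng.integers(0, n, size=n)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X[idx], labels[idx])
    errors.append(np.mean(knn.predict(X) != labels))
print(f"mean misclassification error rate: {np.mean(errors):.3f}")
```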

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 403
27849 Performance Analysis of a Hybrid DF-AF Hybrid RF/FSO System under Gamma Gamma Atmospheric Turbulence Channel Using MPPM Modulation

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a hybrid amplify-and-forward / decode-and-forward (AF-DF) hybrid radio frequency/free space optical (RF/FSO) communication system that adopts M-ary pulse position modulation (MPPM) is analyzed. Both exact and approximate symbol error rates (SERs) are derived. The random variations of the received optical irradiance produced by atmospheric turbulence are modeled by the gamma-gamma (GG) statistical distribution. A closed-form expression for the probability density function (PDF) of the whole system is obtained. Thanks to the use of the hybrid AF-DF RF/FSO configuration and MPPM, the effects of atmospheric turbulence are mitigated; hence, the capability of combating atmospheric turbulence and the transmitted signal quality are improved.
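
The gamma-gamma irradiance PDF used to model the turbulence, f(I) = 2(αβ)^((α+β)/2) / (Γ(α)Γ(β)) · I^((α+β)/2 − 1) · K_(α−β)(2√(αβI)), can be evaluated as in the following minimal Python sketch (not from the paper); the values of α and β are illustrative turbulence parameters.

```python
# A minimal sketch of the gamma-gamma irradiance PDF for atmospheric turbulence.
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, a, b):
    I = np.asarray(I, float)
    coef = 2 * (a * b) ** ((a + b) / 2) / (gamma(a) * gamma(b))
    return coef * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I))

I = np.linspace(0.01, 3.0, 300)
pdf = gamma_gamma_pdf(I, a=4.0, b=1.9)           # assumed moderate-turbulence parameters
print(float(pdf.sum() * (I[1] - I[0])))          # rough normalization check, close to 1
```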

Keywords: free space optical, gamma-gamma channel, radio frequency, decode and forward, pointing error, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 249
27848 Influence of Error Correction Codes on the Quality of Optical Broadband Connections

Authors: Mouna Hemdi, Jamel bel Hadj Tahar

Abstract:

The increasing development of multimedia applications requiring the simultaneous transport of several different services contributes to the evolving need for very high-speed networks. In this paper, we propose an effective solution to achieve very high speeds while retaining the elements of the optical transmission channel. Our study therefore focuses on error correction codes that aim to improve the quality of service. We present a comparison of the quality of service for single channels integrating the BCH, RS, and LDPC codes in order to find the best code under different transmission conditions.

Keywords: error correction codes, high speed broadband, optical transmission, information systems security

Procedia PDF Downloads 357
27847 A Short-Baseline Dual-Antenna BDS/MEMS-IMU Integrated Navigation System

Authors: Tijing Cai, Qimeng Xu, Daijin Zhou

Abstract:

This paper puts forward a short-baseline dual-antenna BDS/MEMS-IMU integrated navigation system, constructs the carrier-phase double-difference model of BDS (BeiDou Navigation Satellite System), and presents a two-position initial orientation method based on BDS. An extended Kalman filter is introduced for the integrated navigation system. The differences between the MEMS-IMU and BDS position, velocity, and carrier-phase indications are used as measurements. The experimental results show that, for the short-baseline dual-antenna BDS/MEMS-IMU integrated navigation system, the position error is less than 1 m, the pitch and roll angle errors are less than 0.1°, and the heading angle error is about 1°.

Keywords: MEMS-IMU (Micro-Electro-Mechanical System Inertial Measurement Unit), BDS (BeiDou Navigation Satellite System), dual-antenna, integrated navigation

Procedia PDF Downloads 163
27846 Weighted G2 Multi-Degree Reduction of Bezier Curves

Authors: Salisu Ibrahim, Abdalla Rababah

Abstract:

In this research, we use weighted G2 multi-degree reduction of a Bezier curve of degree n to a Bezier curve of degree m, m < n. Degree reduction of Bezier curves is used to represent a given Bezier curve of degree n by a Bezier curve of degree m, m < n. Exact degree reduction is not possible, so degree reduction is by nature an approximate process. We derive a weighted degree-reduction method that is geometrically continuous at the end points. Different norms are considered, and several error minimizations are given. The proposed methods produce error functions that are smaller than those of existing methods.
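
For comparison, a plain least-squares (unweighted, unconstrained) degree reduction can be sketched as follows in Python; this is not the authors' weighted G2 method, and the example control points are illustrative: a degree-n Bezier curve is sampled, the control points of a degree-m curve are obtained by least squares, and the maximum error is reported.

```python
# A hedged sketch of approximate Bezier degree reduction by least squares.
import numpy as np
from math import comb

def comb_vec(n, k):
    return np.array([comb(n, int(i)) for i in k], float)

def bernstein_matrix(deg, t):
    t = np.asarray(t, float)[:, None]
    k = np.arange(deg + 1)
    return comb_vec(deg, k) * t ** k * (1 - t) ** (deg - k)   # Bernstein basis values

t = np.linspace(0.0, 1.0, 200)
P5 = np.array([[0, 0], [1, 2], [2, 3], [4, 3], [5, 1], [6, 0]], float)  # degree-5 control points
curve = bernstein_matrix(5, t) @ P5

B3 = bernstein_matrix(3, t)                            # degree-3 basis
P3, *_ = np.linalg.lstsq(B3, curve, rcond=None)        # least-squares control points
print("max reduction error:", np.abs(B3 @ P3 - curve).max())
```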

Keywords: Bezier curves, multiple degree reduction, geometric continuity, error function

Procedia PDF Downloads 446