Search results for: error analysis
9509 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems
Authors: N. Georgoulopoulos, A. Hatzopoulos, K. Karamitsios, K. Kotrotsios, A. I. Metsai
Abstract:
Current server systems are responsible for critical applications that run on different infrastructures, such as the cloud, physical machines, and virtual machines. A common challenge that these systems face is the occurrence of hardware faults, caused by high load among other reasons, which translate into errors resulting in malfunctions or even server downtime. The hardware components responsible for most of these errors are the CPU, the RAM, and the hard disk drive (HDD). In this work, we investigate selected CPU, RAM, and HDD errors, observed or simulated in kernel ring buffer log files from GNU/Linux servers. Moreover, a severity characterization is given for each error type. Understanding these errors is crucial for the efficient analysis of kernel logs, which are commonly used for monitoring servers and diagnosing faults. In addition, to support this analysis, we present possible ways of simulating hardware errors in RAM and HDD, aiming to facilitate the testing of methods for detecting and tackling such issues on a server running GNU/Linux.
Keywords: hardware errors, Kernel logs, GNU/Linux servers, RAM, HDD, CPU
9508 A Novel Forgetting Factor Recursive Least Square Algorithm Applied to the Human Motion Analysis
Authors: Hadi Sadoghi Yazdi, Mehri Sadoghi Yazdi, Mohammad Reza Mohammadi
Abstract:
This paper is concerned with studying the forgetting factor of the recursive least squares (RLS) algorithm. A new dynamic forgetting factor (DFF) for the RLS algorithm is presented. The proposed DFF-RLS is compared to other methods. Better convergence and tracking performance on a noisy chirp sinusoid is achieved. The control of the forgetting factor in DFF-RLS is based on the gradient of the inverse correlation matrix. Compared with the gradient of mean square error algorithm, the proposed approach provides faster tracking and a smaller mean square error. At low signal-to-noise ratios, the performance of the proposed method is superior to the other approaches.
Keywords: Forgetting factor, RLS, Inverse correlation matrix, human motion analysis.
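Note: to make the role of the forgetting factor concrete, the following is a minimal sketch of a standard exponentially weighted RLS filter in Python. It is not the authors' DFF-RLS; the dynamic adaptation of the forgetting factor from the gradient of the inverse correlation matrix is the paper's contribution and is not reproduced here. The test signal, filter order and lambda value are illustrative assumptions.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.98, delta=100.0):
    """Standard RLS with a fixed forgetting factor lam.

    x     : input signal
    d     : desired signal
    order : number of filter taps
    lam   : forgetting factor (0 < lam <= 1); smaller -> faster tracking
    delta : initial scale of the inverse correlation matrix P
    """
    n = len(x)
    w = np.zeros(order)                  # filter weights
    P = delta * np.eye(order)            # inverse correlation matrix
    y = np.zeros(n)
    e = np.zeros(n)
    for i in range(order - 1, n):
        u = x[i - order + 1:i + 1][::-1] # most recent samples first
        y[i] = w @ u
        e[i] = d[i] - y[i]
        k = (P @ u) / (lam + u @ P @ u)  # gain vector
        w = w + k * e[i]
        P = (P - np.outer(k, u @ P)) / lam
    return w, y, e

# Toy example: identify a 4-tap FIR system from noisy data.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, y, e = rls_filter(x, d, order=4, lam=0.98)
print("estimated taps:", np.round(w, 3))
```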
9507 Stepsize Control of the Finite Difference Method for Solving Ordinary Differential Equations
Authors: Davod Khojasteh Salkuyeh
Abstract:
An important task in solving second order linear ordinary differential equations by the finite difference method is to choose a suitable stepsize h. In this paper, by using stochastic arithmetic, the CESTAC method, and the CADNA library, we present a procedure to estimate the optimal stepsize h_opt, the stepsize which minimizes the global error consisting of truncation and round-off error.
Keywords: Ordinary differential equations, optimal stepsize, error, stochastic arithmetic, CESTAC, CADNA.
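Note: the existence of an optimal stepsize can be seen from the textbook error balance for a second-order central difference scheme, sketched below. The form of the bound and the constants are generic assumptions for illustration; the paper estimates h_opt numerically via stochastic arithmetic (CESTAC/CADNA) rather than from such a closed form.

```latex
% Truncation error decreases as h^2 while round-off error (machine epsilon)
% is amplified as eps/h^2, so the global error is roughly
\[
  E(h) \approx C_1 h^{2} + C_2 \frac{\varepsilon}{h^{2}},
  \qquad
  \frac{dE}{dh} = 0
  \;\Longrightarrow\;
  h_{\mathrm{opt}} = \left(\frac{C_2\,\varepsilon}{C_1}\right)^{1/4}.
\]
```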
9506 Bit Error Rate Analysis of Mobile Communication Network in Nakagami Fading Channel: Interference Considerations
Authors: Manoranjan Das, Benudhar Sahu, Urmila Bhanja
Abstract:
Co-channel interference is one of the major problems in wireless systems. The effects of co-channel interference in a Nakagami fading channel on the ABER (Average Bit Error Rate) have been well analyzed for static nodes. In this paper, we derive the ABER in the presence of mobile nodes. The ABER is also derived for mobile systems in the presence of co-channel interference.
Keywords: ABER, co-channel interference, Nakagami fading.
9505 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite
Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar
Abstract:
This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method builds on Taguchi's design of experiments and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole diameter error, under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
Keywords: Metal matrix composite, Drilling, Optimization, step drill, Surface roughness, burr height, hole diameter error.
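Note: the grey relational analysis step described above can be sketched in a few lines of Python. The response values, weights and distinguishing coefficient below are illustrative assumptions, not the paper's experimental data.

```python
import numpy as np

# Hypothetical responses for 4 drilling runs: surface roughness (um),
# burr height (mm), hole diameter error (mm) -- all smaller-the-better.
Y = np.array([
    [2.1, 0.35, 0.040],
    [1.8, 0.42, 0.055],
    [2.5, 0.30, 0.032],
    [1.6, 0.38, 0.048],
])
weights = np.array([0.4, 0.3, 0.3])   # assumed relative importance of the responses
zeta = 0.5                            # distinguishing coefficient

# Step 1: grey relational normalization (smaller-the-better).
x = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# Step 2: deviation from the ideal sequence (all ones after normalization).
delta = np.abs(1.0 - x)

# Step 3: grey relational coefficients and weighted grey relational grade.
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = gamma @ weights

print("grey relational grades:", np.round(grade, 3))
print("best run (1-indexed):", int(np.argmax(grade)) + 1)
```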
9504 An Adaptive ARQ – HARQ Method with Two RS Codes
Authors: Michal Martinovič, Jaroslav Polec, Kvetoslava Kotuliaková
Abstract:
In this paper, we propose a multistage adaptive ARQ/HARQ/HARQ scheme. This method combines a pure ARQ (Automatic Repeat reQuest) mode at low channel bit error rates with a hybrid ARQ method using two different Reed-Solomon codes under medium and high error rate conditions. The scheme therefore has three stages. The main goal is to increase the number of states in adaptive HARQ methods and to achieve maximum throughput at every channel bit error rate. We verify the proposal by calculation and then by simulations in a land mobile satellite channel environment. Optimization of the scheme's system parameters is described in order to maximize the throughput over the whole defined Signal-to-Noise Ratio (SNR) range in the selected channel environment.
Keywords: Signal-to-noise ratio, throughput, forward error correction (FEC), pure and hybrid automatic repeat request (ARQ).
9503 Studies on Affecting Factors of Wheel Slip and Odometry Error on Real-Time of Wheeled Mobile Robots: A Review
Authors: D. Vidhyaprakash, A. Elango
Abstract:
In real-time applications, wheeled mobile robots are increasingly used and operated in extreme and diverse conditions, traversing challenging surfaces such as pitted, uneven terrain, natural flat, smooth terrain, and wet and dry surfaces. In order to accomplish such tasks, it is critical that the motion control functions without wheel slip and odometry error during the navigation of the two-wheeled mobile robot (WMR). Wheel slip and odometry error degrade overall WMR performance in the form of deviation from the desired trajectory and impaired navigation, travel time and budgeted energy consumption. The wheeled mobile robot's ability to operate at peak performance on various work surfaces without wheel slippage and odometry error is directly connected to four main parameters: the range of payload distribution, speed, wheel diameter, and wheel width. This paper analyses the effects of those parameters on overall performance and is concerned with determining the ideal range of parameters for optimum performance.
Keywords: Wheeled mobile robot (WMR), terrain, wheel slippage, odometry error, navigation.
9502 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods
Authors: Mochammad Dewo, Sumarsono Sudarto
Abstract:
The Exponential Triple Smoothing approach currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, has a Mean Absolute Percentage Error (MAPE) of 30-50%, which is only a "reasonable" level of forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as excess production adds to raw material, manufacturing and storage expenses. Reaching an "excellent" classification, with an error value of less than 10%, would give new investors and exporters confidence in the commercial development of the related sectors, and industrial growth in turn has a positive impact on economic development. The approach can be applied to other commodities if the forecast error is less than 10%. The purpose of this project is to create a forecasting technique that can produce precise results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network) and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study reveals that the ANN is the most successful method for forecasting coal and palm oil commodities in Indonesia.
Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.
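Note: the MAPE criterion used above to classify forecast quality is straightforward to compute; a minimal Python sketch with made-up export figures and two hypothetical forecasts is shown below.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly export values (USD million) and two competing forecasts.
actual   = [310, 295, 330, 360, 340, 355]
method_a = [280, 310, 300, 320, 365, 390]   # e.g. a smoothing baseline
method_b = [308, 297, 327, 362, 338, 357]   # e.g. an ANN forecast

for name, f in [("baseline", method_a), ("ANN-like", method_b)]:
    err = mape(actual, f)
    label = "excellent" if err < 10 else "reasonable" if err <= 50 else "poor"
    print(f"{name}: MAPE = {err:.1f}% ({label})")
```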
9501 A Soft Error Rates Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers
Authors: Man Li, Wanting Zhou, Lei Li
Abstract:
Communication stability is the primary concern of communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in combinational logic circuits. Existing research on the soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded have the same pulse width. However, in an actual radiation environment, the pulse widths of the bombarded logic gates differ because of different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. We analyze the influence of LET on the pulse width of combinational logic and establish a pulse width model based on LET. Using this model, the error rate of the ISCAS'85 test circuit is calculated. Experimental results show that the model can be used for SER evaluation.
Keywords: Communication satellite, pulse width, soft error rates, linear energy transfer, LET.
9500 Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations
Authors: Insah Bhurtah, P. Clarel Catherine, K. M. Sunjiv Soyjaudah
Abstract:
The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
Keywords: Error-correcting codes, information theory, low-density parity-check codes, sum-product algorithm.
9499 Determine of Constant Coefficients to Relate Total Dissolved Solids to Electrical Conductivity
Authors: M. Siosemarde, F. Kave, E. Pazira, H. Sedghi, S. J. Ghaderi
Abstract:
Salinity is a measure of the amount of salts in the water. Total Dissolved Solids (TDS), as a salinity parameter, is often determined using laborious and time-consuming laboratory tests, but it may be more appropriate and economical to develop a method which uses a simpler soil salinity index. Because dissolved ions increase salinity as well as conductivity, the two measures are related. The aim of this research was to determine constant coefficients for predicting Total Dissolved Solids (TDS) from Electrical Conductivity (EC), evaluated with the statistics of correlation coefficient, root mean square error, maximum error, mean bias error, mean absolute error, relative error and coefficient of residual mass. For this purpose, two experimental areas (S1, S2) of Khuzestan province, Iran, were selected, and four treatments with three replications were applied using series of double rings. The treatments were 25 cm, 50 cm, 75 cm and 100 cm of water application. The results showed that the values 16.3 and 12.4 were the best constant coefficients for predicting TDS from EC in pilots S1 and S2, with correlation coefficients of 0.977 and 0.997 and root mean square errors (RMSE) of 191.1 and 106.1, respectively.
Keywords: constant coefficients, electrical conductivity, Khuzestan plain, total dissolved solids.
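Note: fitting a single constant coefficient in TDS = k × EC is an ordinary least-squares problem; the sketch below uses made-up EC/TDS pairs (units assumed) and reports the same statistics cited above (k, correlation coefficient, RMSE).

```python
import numpy as np

# Hypothetical paired measurements: EC in dS/m, TDS in mg/L (illustrative only).
ec  = np.array([1.2, 2.5, 4.0, 5.8, 7.5, 9.1, 11.0])
tds = np.array([870, 1540, 2600, 3720, 4900, 5850, 7150])

# Least-squares fit of TDS = k * EC (no intercept): k = sum(EC*TDS) / sum(EC^2).
k = float(ec @ tds / (ec @ ec))
pred = k * ec

r    = float(np.corrcoef(tds, pred)[0, 1])          # correlation coefficient
rmse = float(np.sqrt(np.mean((tds - pred) ** 2)))   # root mean square error

print(f"fitted coefficient k = {k:.1f}")
print(f"correlation coefficient = {r:.3f}, RMSE = {rmse:.1f} mg/L")
```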
9498 Alternative Convergence Analysis for a Kind of Singularly Perturbed Boundary Value Problems
Authors: Jiming Yang
Abstract:
A kind of singularly perturbed boundary value problem is under consideration. In order to obtain its approximation, a simple upwind difference discretization is applied. We use a moving mesh iterative algorithm based on equidistribution of the arc-length function of the current computed piecewise linear solution. First, a maximum norm a posteriori error estimate on an arbitrary mesh is derived using a different method from the one carried out by Chen [Advances in Computational Mathematics, 24(1-4) (2006), 197-212]. Then, based on the properties of the discrete Green's function and the presented a posteriori error estimate, we theoretically prove that the discrete solutions computed by the algorithm are first-order uniformly convergent with respect to the perturbation parameter ε.
Keywords: Convergence analysis, Green's function, singularly perturbed, equi-distribution, moving mesh.
9497 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
The problems arising from unbalanced data sets generally appear in real world applications. Due to unequal class distribution, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors' nonparametric discriminant analysis is a method that was proposed for classifying unbalanced classes with good performance. In this study, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. The purpose of this study was to compare the classification performance between parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification of class-imbalanced data of diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables of diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples, 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors' discriminant function were performed over 50 and 100 bootstrap samples and applied to the original data. In searching for the optimal classification rule, the prior probabilities were set up both as equal proportions (0.33:0.33:0.33) and as unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10) and (0.70:0.15:0.15). The results from 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k=3 or k=4 and prior probabilities of non-risk:risk:diabetic set to 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced data of diabetes risk groups.
Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.
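Note: a minimal scikit-learn sketch of the comparison described above is given below, using synthetic 90/5/5 class-imbalanced data in place of the hospital data. Priors are supplied to the parametric discriminant functions, while 3-nearest neighbors serves as the nonparametric counterpart; the features, sample sizes and class separations are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic 3-class imbalanced data (non-risk / risk / diabetic = 90/5/5),
# 4 features standing in for age, gender, blood glucose and BMI.
n = 599
sizes = [int(0.90 * n), int(0.05 * n), n - int(0.90 * n) - int(0.05 * n)]
means = [np.array([0.0, 0.0, 0.0, 0.0]),
         np.array([1.0, 0.5, 1.5, 1.0]),
         np.array([2.0, 1.0, 3.0, 2.0])]
X = np.vstack([rng.normal(m, 1.0, size=(s, 4)) for m, s in zip(means, sizes)])
y = np.concatenate([np.full(s, c) for c, s in enumerate(sizes)])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                      random_state=0)

models = {
    "LDA (priors .80/.10/.10)": LinearDiscriminantAnalysis(priors=[.8, .1, .1]),
    "QDA (priors .80/.10/.10)": QuadraticDiscriminantAnalysis(priors=[.8, .1, .1]),
    "3-nearest neighbors":      KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    err = 1.0 - model.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: misclassification error = {err:.3f}")
```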
9496 Optimal Combination for Modal Pushover Analysis by Using Genetic Algorithm
Authors: K. Shakeri, M. Mohebbi
Abstract:
In order to consider the effects of the higher modes in pushover analysis, several multi-modal pushover procedures have been presented in recent years. In these methods the responses of the considered modes are combined by the square-root-of-sum-of-squares (SRSS) rule, although the application of elastic modal combination rules in the inelastic phase is no longer valid. In this research the feasibility of defining an efficient alternative combination method is investigated. Two steel moment-frame buildings, denoted SAC-9 and SAC-20, under ten earthquake records are considered. The nonlinear responses of the structures are estimated by the direct algebraic combination of the weighted responses of the separate modes. The weight of each mode is defined so that the resulting combined response has minimum error with respect to the nonlinear time history analysis. The genetic algorithm (GA) is used to minimize the error and optimize the weight factors. The optimal factors obtained for each mode in the different cases are compared in order to find unique appropriate weight factors for each mode in all cases.
Keywords: Genetic algorithm, modal pushover, optimal weight.
9495 Electric Load Forecasting Using Genetic Based Algorithm, Optimal Filter Estimator and Least Error Squares Technique: Comparative Study
Authors: Khaled M. EL-Naggar, Khaled A. AL-Rumaih
Abstract:
This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems. The three optimum estimation techniques are genetic algorithms (GA), least error squares (LS), and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data is used to perform the study, and the performance of the three optimal estimation techniques is examined. The advantages of each algorithm are reported and discussed.
Keywords: Forecasting, Least error squares, Least absolute Value, Genetic algorithms
9494 PID Parameter Optimization of an UAV Longitudinal Flight Control System
Authors: Kamran Turkoglu, Ugur Ozdemir, Melike Nikbay, Elbrous M. Jafarov
Abstract:
In this paper, an automatic control system design based on Integral Squared Error (ISE) parameter optimization has been implemented on the longitudinal flight dynamics of a UAV. The aim is to minimize the error function between the reference signal and the output of the plant. The objective function is defined with respect to the error dynamics. An unconstrained optimization problem is solved analytically by using the necessary and sufficient conditions of optimality, and the optimum PID parameters are obtained and implemented in the control system dynamics.
Keywords: Optimum design, KKT conditions, UAV, longitudinal flight dynamics, ISE parameter optimization.
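Note: the paper solves the ISE minimization analytically for UAV longitudinal dynamics; the sketch below instead shows the same idea numerically on an assumed generic second-order plant, with the step-response ISE as the cost and a derivative-free optimizer over (Kp, Ki, Kd). The plant parameters, horizon and initial gains are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, T = 0.005, 10.0                # integration step and horizon (s)
steps = int(T / dt)
wn, zeta = 2.0, 0.3                # assumed plant: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u

def ise(gains, ref=1.0):
    """Integral Squared Error of the unit-step response for a PID-controlled
    second-order plant, simulated with forward Euler."""
    kp, ki, kd = gains
    y, ydot, integ, prev_err, cost = 0.0, 0.0, 0.0, ref, 0.0
    for _ in range(steps):
        err = ref - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        ydot += dt * (wn**2 * (u - y) - 2 * zeta * wn * ydot)  # plant update
        y += dt * ydot
        cost += err * err * dt                                 # accumulate ISE
        prev_err = err
    return cost if np.isfinite(cost) else 1e12   # guard against unstable runs

res = minimize(ise, x0=[2.0, 1.0, 0.5], method="Nelder-Mead")
print("optimised (Kp, Ki, Kd):", np.round(res.x, 3))
print("minimum ISE:", round(res.fun, 4))
```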
9493 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model
Authors: Xiang Zhang, David Rey, S. Travis Waller
Abstract:
The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning and is regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining the appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, the parameter calibration method is developed based on the Logit-based route choice model. The calibration process is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to validate the better performance of the SUE model calibrated by the proposed method compared to SUE models under other parameter values and to the DUE model.
Keywords: Parameter calibration, sequential quadratic programming, Stochastic User Equilibrium, traffic assignment, transportation planning.
9492 Combining Diverse Neural Classifiers for Complex Problem Solving: An ECOC Approach
Authors: R. Ebrahimpour, M. Abbasnezhad Arabi, H. Babamiri Moghaddam
Abstract:
Combining classifiers is a useful method for solving complex problems in machine learning. The ECOC (Error Correcting Output Codes) method has been widely used for designing combinations of classifiers, with an emphasis on the diversity of the classifiers. In this paper, in contrast to the standard ECOC approach in which individual classifiers are chosen homogeneously, classifiers are selected according to the complexity of the corresponding binary problem. We use the SATIMAGE database (containing 6 classes) for our experiments. The recognition error rate of our proposed method is 10.37%, which indicates a considerable improvement in comparison with the conventional ECOC and stacked generalization methods.
Keywords: Error correcting output codes, combining classifiers, neural networks.
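Note: scikit-learn's OutputCodeClassifier reproduces the standard (homogeneous) ECOC baseline that the authors compare against; the complexity-based selection of individual classifiers is the paper's contribution and is not shown. The digits dataset below is a stand-in for SATIMAGE.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in multi-class problem (the paper uses the 6-class SATIMAGE database).
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# ECOC: each binary problem in the code matrix is learned by a small MLP.
ecoc = OutputCodeClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=0),
    code_size=2.0,          # code length = 2 * n_classes binary problems
    random_state=0,
)
ecoc.fit(Xtr, ytr)
err = 1.0 - ecoc.score(Xte, yte)
print(f"ECOC recognition error rate: {100 * err:.2f}%")
```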
9491 Dichotomous Logistic Regression with Leave-One-Out Validation
Authors: Sin Yin Teh, Abdul Rahman Othman, Michael Boon Chong Khoo
Abstract:
In this paper, the concepts of dichotomous logistic regression (DLR) with leave-one-out (L-O-O) validation are discussed. To illustrate this, the L-O-O procedure was run to determine the importance of the simulation conditions for robust tests of spread with good Type I error rates, and the resulting model was then evaluated. The discussion covers 1) assessment of the accuracy of the model, and 2) the parameter estimates. These are presented and illustrated by modeling the relationship between the dichotomous dependent variable (Type I error rates) and a set of independent variables (the simulation conditions). The base SAS software, with its PROC LOGISTIC and DATA step functions, can be used to carry out the DLR analysis.
Keywords: Dichotomous logistic regression, leave-one-out, test of spread.
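Note: the abstract uses SAS PROC LOGISTIC; an equivalent open-source sketch of DLR with leave-one-out validation is shown below in Python with scikit-learn, on a stand-in binary dataset, since the simulation-condition data is not public.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in dichotomous outcome (the paper models Type I error rates of
# spread tests against simulation conditions; that data is not available here).
X, y = load_breast_cancer(return_X_y=True)
X, y = X[:150], y[:150]                 # keep L-O-O cheap for the demo

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Leave-one-out validation: n models, each tested on the single held-out case.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"L-O-O accuracy: {scores.mean():.3f} over {len(scores)} folds")

# Parameter estimates from the model fitted on all observations.
model.fit(X, y)
logit = model.named_steps["logisticregression"]
print("intercept:", np.round(logit.intercept_, 3))
print("first five coefficients:", np.round(logit.coef_[0][:5], 3))
```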
9490 Principal Component Regression in Noninvasive Pineapple Soluble Solids Content Assessment Based On Shortwave Near Infrared Spectrum
Authors: K. S. Chia, H. Abdul Rahim, R. Abdul Rahim
Abstract:
Principal component regression (PCR) is a combination of principal component analysis (PCA) and multiple linear regression (MLR). The objective of this paper is to revise the use of PCR in shortwave near infrared (SWNIR) (750-1000 nm) spectral analysis. The idea of PCR is explained mathematically and implemented in the non-destructive assessment of the soluble solids content (SSC) of pineapple based on SWNIR spectral data. PCR achieved satisfactory results in this application, with a root mean squared error of calibration (RMSEC) of 0.7611 °Brix, a coefficient of determination (R2) of 0.5865 and a root mean squared error of cross-validation (RMSECV) of 0.8323 °Brix using 14 principal components (PCs).
Keywords: Pineapple, shortwave near infrared, principal component regression, non-invasive measurement, soluble solids content.
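Note: PCR is simply PCA followed by multiple linear regression; the sketch below builds that pipeline with 14 principal components and reports RMSEC, R2 and RMSECV as in the abstract. The synthetic spectra and SSC values are assumptions, not the pineapple data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# Synthetic stand-in for SWNIR spectra: 120 samples x 100 wavelengths,
# with SSC (degrees Brix) linearly related to a few latent components.
n, p = 120, 100
latent = rng.normal(size=(n, 3))
spectra = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
ssc = 12 + latent @ np.array([1.0, 0.5, -0.3]) + 0.2 * rng.normal(size=n)

# PCR = PCA for dimension reduction followed by multiple linear regression.
pcr = make_pipeline(StandardScaler(), PCA(n_components=14), LinearRegression())

pcr.fit(spectra, ssc)
cal = pcr.predict(spectra)
rmsec = np.sqrt(np.mean((ssc - cal) ** 2))     # calibration error
r2 = pcr.score(spectra, ssc)                   # coefficient of determination

cv = cross_val_predict(pcr, spectra, ssc, cv=10)
rmsecv = np.sqrt(np.mean((ssc - cv) ** 2))     # cross-validation error

print(f"RMSEC = {rmsec:.3f} Brix, R^2 = {r2:.3f}, RMSECV = {rmsecv:.3f} Brix")
```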
9489 Implementation of SU-MIMO and MU-MIMO GTD-System under Imperfect CSI Knowledge
Authors: Parit Kanjanavirojkul, Kiatwarakorn Keeratishananond, Prapun Suksompong
Abstract:
We study the performance of the compressed beamforming weight feedback technique in a generalized triangular decomposition (GTD) based MIMO system. GTD is a beamforming technique that enjoys QoS flexibility. The technique, however, performs at its optimum only when full knowledge of the channel state information (CSI) is available at the transmitter. This is impossible in a real system, where there is channel estimation error and limited feedback. We suggest a way to implement quantized beamforming weight feedback, which can significantly reduce the feedback data, in a GTD-based MIMO system and investigate the performance of the system. Interestingly, we found that compressed beamforming weight feedback does not degrade the BER performance of the system at low input power, while the channel estimation error and quantization do. For comparison, GTD is more sensitive to compression and quantization, while SVD is more sensitive to the channel estimation error. We also explore the performance of the GTD-based MU-MIMO system, and find that the BER performance starts to degrade significantly at around -20 dB channel estimation error.
Keywords: MIMO, MU-MIMO, GTD, imperfect CSI.
9488 Maximum Power Point Tracking Based on Estimated Power for PV Energy Conversion System
Authors: Zainab Almukhtar, Adel Merabet
Abstract:
In this paper, a method for maximum power point tracking of a photovoltaic energy conversion system is presented. This method is based on using the difference between the power from the solar panel and an estimated power value to control the DC-DC converter of the photovoltaic system. The difference is continuously compared with a preset permitted error value. If the power difference is more than the error, the estimated power is multiplied by a factor and the operation is repeated until the difference is less than or equal to the threshold error. The difference in power is used to trigger a DC-DC boost converter in order to raise the voltage to where the maximum power point is achieved. The proposed method was experimentally verified through a PV energy conversion system driven by the OPAL-RT real-time controller. The method was tested under varying irradiation conditions and load requirements, and the photovoltaic panel was operated at its maximum power under the different irradiation conditions.
Keywords: Control system, power error, solar panel, MPPT.
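Note: the abstract does not give enough detail to reproduce the estimated-power comparison loop exactly, so the sketch below shows the classical perturb-and-observe MPPT baseline on a toy power-voltage curve instead. The panel model, step size and starting point are all assumptions.

```python
def panel_power(v):
    """Toy PV power-voltage curve with a single maximum near v = 17 V
    (stand-in for a real panel plus DC-DC converter under fixed irradiation)."""
    return max(0.0, -0.35 * (v - 17.0) ** 2 + 100.0)

# Classical perturb-and-observe loop: perturb the operating voltage via the
# converter and keep moving in the direction that increases power.
v, dv = 12.0, 0.2            # initial operating voltage and perturbation step (assumed)
p_prev = panel_power(v)
for _ in range(200):
    v += dv
    p = panel_power(v)
    if p < p_prev:           # power dropped: reverse the perturbation direction
        dv = -dv
    p_prev = p

print(f"operating point after tracking: v = {v:.1f} V, p = {p_prev:.1f} W")
```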
9487 Local Error Control in the RK5GL3 Method
Authors: J.S.C. Prentice
Abstract:
The RK5GL3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on a combination of a fifth-order Runge-Kutta method and 3-point Gauss-Legendre quadrature. In this paper we describe an effective local error control algorithm for RK5GL3, which uses local extrapolation with an eighth-order Runge-Kutta method in tandem with RK5GL3, and a Hermite interpolating polynomial for solution estimation at the Gauss-Legendre quadrature nodes.
Keywords: RK5GL3, RKrGLm, Runge-Kutta, Gauss-Legendre, Hermite interpolating polynomial, initial value problem, local error.
9486 Restarted GMRES Method Augmented with the Combination of Harmonic Ritz Vectors and Error Approximations
Authors: Qiang Niu, Linzhang Lu
Abstract:
Restarted GMRES methods augmented with approximate eigenvectors are widely used for solving large sparse linear systems. Recently, a new scheme of augmenting with error approximations has been proposed. The main aim of this paper is to develop a restarted GMRES method augmented with the combination of harmonic Ritz vectors and error approximations. We demonstrate that the resulting combined method gains the advantages of both approaches: (i) it effectively deflates the eigenvalues small in magnitude that may hamper the convergence of the method, and (ii) it partially recovers the global optimality lost due to restarting. The effectiveness and efficiency of the new method are demonstrated through various numerical examples.
Keywords: Arnoldi process, GMRES, Krylov subspace, systems of linear equations.
9485 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels
Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal
Abstract:
Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in an interference-limited flat fading system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the order of diversity N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form in order to examine the performance of the digital communication system as the order of diversity increases in the presence of interferers. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.
9484 Application of Neural Network on the Loading of Copper onto Clinoptilolite
Authors: John Kabuba
Abstract:
The study investigated the implementation of Neural Network (NN) techniques for predicting the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the Neural Network and for optimizing the effective input parameters (pH, temperature and initial concentration). A feed-forward, multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE), correlation coefficient (R) and mean squared relative error (MSRE) of 0.102, 0.998 and 0.004, respectively. The results showed that NN modeling techniques can effectively predict and simulate a highly complex and non-linear process such as ion exchange.
Keywords: Clinoptilolite, loading, modeling, Neural network.
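Note: a minimal sketch of the feed-forward MLP regression described above, using scikit-learn on synthetic pH/temperature/concentration data generated from an assumed smooth loading law; the reported MSE and R are for the synthetic data only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in for the ion-exchange data: pH, temperature (deg C) and
# initial Cu concentration (mg/L) mapped to loading by an assumed smooth law.
n = 300
pH = rng.uniform(2, 7, n)
temp = rng.uniform(20, 60, n)
conc = rng.uniform(20, 200, n)
loading = (0.8 * pH + 0.05 * temp + 0.02 * conc + 0.5 * np.sin(pH)
           + rng.normal(0, 0.1, n))

X = np.column_stack([pH, temp, conc])
Xtr, Xte, ytr, yte = train_test_split(X, loading, test_size=0.3, random_state=0)

# Feed-forward multi-layer perceptron regressor, as in the abstract above.
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0))
mlp.fit(Xtr, ytr)
pred = mlp.predict(Xte)

mse = np.mean((yte - pred) ** 2)
r = np.corrcoef(yte, pred)[0, 1]
print(f"MSE = {mse:.3f}, correlation coefficient R = {r:.3f}")
```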
9483 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying the wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered with various wavelets, and dynamic analysis of sampled shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic response of the sampled structures with the exact responses, which are obtained by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.
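Note: a minimal PyWavelets sketch of one level of discrete wavelet filtering of a ground-motion-like signal: decompose, discard the high-frequency detail coefficients, reconstruct, and measure the error introduced. The synthetic record and the db4 wavelet are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)
# Synthetic stand-in for a strong ground motion record: a decaying
# low-frequency pulse plus high-frequency content, sampled at 100 Hz for 20 s.
t = np.arange(0, 20, 0.01)
accel = (np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
         + 0.3 * np.sin(2 * np.pi * 12.0 * t)
         + 0.05 * rng.standard_normal(t.size))

# One level of discrete wavelet filtering: keep the approximation coefficients,
# zero the detail (high-frequency) coefficients, and reconstruct.
wavelet = "db4"                                   # assumed wavelet family
approx, detail = pywt.dwt(accel, wavelet)
filtered = pywt.idwt(approx, np.zeros_like(detail), wavelet)[:accel.size]

err = np.linalg.norm(accel - filtered) / np.linalg.norm(accel)
print(f"samples kept for analysis: {approx.size} of {accel.size}")
print(f"relative filtering error: {err:.3f}")
```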
9482 CNC Wire-Cut Parameter Optimized Determination of the Stair Shape Workpiece
Authors: Chana Raksiri, Pornchai Chatchaikulsiri
Abstract:
The objective of this research is to optimize the parameters for cutting a stair-shaped workpiece with CNC wire-cut EDM (WEDM). The experimental material is SKD-11 steel machined into stair-shaped workpieces with variable step heights of 10, 20, 30 and 40 mm and the same 10 mm thickness, cut on a Sodick CNC wire-cut EDM, model AD325L. The experiments follow a 3^k full factorial design with two factors at three levels, giving nine runs with two replicates. The two selected factors are servo voltage (SV) and servo feed rate (SF), and the response is the cutting thickness error. The work is divided into two experiments. The first experiment determines the significant factor at a 95% confidence level; SV is found to be the significant factor, and the smallest cutting thickness error of the workpieces is 17 microns at an SV value of 46 volts. The results also show that the lower the SV value, the smaller the thickness error of the workpiece. The second experiment is then carried out to reduce the cutting thickness error as much as possible by lowering SV. Its results show that SV remains the significant factor at the 95% confidence level, and the smallest cutting thickness error of the workpieces is reduced to 11 microns at an SV value of 36 volts.
Keywords: CNC wire-cut, variable thickness workpiece, design of experiments, full factorial design.
9481 Statistical Approach to Basis Function Truncation in Digital Interpolation Filters
Authors: F. Castillo, J. Arellano, S. Sánchez
Abstract:
In this paper, an alternative analysis in the time domain is described, and the results of the interpolation process are presented by means of functions that are based on the rule of conditional mathematical expectation and the covariance function. A comparison between the interpolation error caused by low-order filters and by the classic truncated sinc(t) function is also presented. When fewer samples are used, low-order filters have less error; if the number of samples increases, the sinc(t)-type functions are a better alternative. Generally speaking, there is an optimal filter for each input signal, which depends on the filter length and the covariance function of the signal. A novel scheme of work for adaptive interpolation filters is also presented.
Keywords: Interpolation, basis function, over-sampling.
9480 On the Performance Analysis of Coexistence between IEEE 802.11g and IEEE 802.15.4 Networks
Authors: Chompunut Jantarasorn, Chutima Prommak
Abstract:
This paper presents an intensive measurement study of network performance when IEEE 802.11g Wireless Local Area Networks (WLAN) coexist with IEEE 802.15.4 Wireless Personal Area Networks (WPAN). The measurement results show that the coexistence of the two networks can increase the Frame Error Rate (FER) of the IEEE 802.15.4 networks by up to 60% and decrease the throughput of the IEEE 802.11g networks by up to 55%.
Keywords: Wireless performance analysis, Coexistence analysis, IEEE 802.11g, IEEE 802.15.4.