Search results for: participatory error correction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6619


6469 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong energy concentration, high resolution and a low sidelobe level in order to form point-to-point interference on a concentrated target set. To eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy further, this paper adopts a multi-carrier FDA structure with a proposed power-exponential frequency offset, improving both the array structure and the frequency offset of the traditional FDA. Simulation results show that the resulting array pattern forms a dot-shaped beam with more concentrated energy and improved resolution and sidelobe-level performance. However, the signal covariance matrix in the traditional adaptive beamforming algorithm is estimated from finite-snapshot data. When the number of snapshots is limited, the covariance matrix is underestimated, and the resulting estimation error distorts the beam so that the output pattern cannot form a dot-shaped beam; main-lobe deviation and a high sidelobe level also occur in the limited-snapshot case. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the multi-carrier FDA beamformer is formed under the linearly constrained minimum variance (LCMV) criterion. Then, eigenvalue decomposition of the covariance matrix yields the interference subspace, the noise subspace and the diagonal matrix of corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level and improves the robustness of beamforming.

Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
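
As an illustration of the correction step described above, the following NumPy sketch applies an exponential correction to the small (noise-subspace) eigenvalues of the sample covariance matrix before forming LCMV weights. The function and parameter names, and the exact form of the correction (shrinking the spread of the small eigenvalues with a correction index alpha), are assumptions for illustration rather than the authors' exact formulation.

```python
import numpy as np

def lcmv_weights_with_eigen_correction(snapshots, constraint, response, n_interf, alpha=0.5):
    """Illustrative LCMV beamformer with exponential correction of the small
    (noise-subspace) eigenvalues of the sample covariance matrix.

    snapshots : (N, K) array of K snapshots from N sensors/carriers
    constraint: (N, L) constraint matrix C
    response  : (L,) desired response vector f
    n_interf  : assumed dimension of the interference subspace
    alpha     : correction index (hypothetical name) applied to small eigenvalues
    """
    N, K = snapshots.shape
    R = snapshots @ snapshots.conj().T / K          # sample covariance from limited snapshots

    # Eigen-decompose and separate large (interference) and small (noise) eigenvalues
    vals, vecs = np.linalg.eigh(R)                  # ascending order
    vals = vals.copy()
    noise = vals[:N - n_interf]
    # Exponential correction: compress the spread of the small eigenvalues toward their mean
    vals[:N - n_interf] = noise.mean() * (noise / noise.mean()) ** alpha
    R_corr = vecs @ np.diag(vals) @ vecs.conj().T

    # LCMV weights: w = R^{-1} C (C^H R^{-1} C)^{-1} f
    Rinv_C = np.linalg.solve(R_corr, constraint)
    w = Rinv_C @ np.linalg.solve(constraint.conj().T @ Rinv_C, response)
    return w
```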

6468 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method builds on Taguchi's design-of-experiments method and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole-diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of the orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height and hole-diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.

Keywords: Metal matrix composite, Drilling, Optimization, step drill, Surface roughness, burr height, hole diameter error.
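
As a minimal sketch of how grey relational analysis collapses the three smaller-the-better responses (surface roughness, burr height, hole-diameter error) into a single grade per experimental run, the following Python function is illustrative; the distinguishing coefficient zeta = 0.5 and equal response weights are conventional assumptions, not values reported in the abstract.

```python
import numpy as np

def grey_relational_grade(responses, weights=None, zeta=0.5):
    """Illustrative grey relational analysis (GRA) for smaller-the-better responses.

    responses : (n_runs, n_responses) matrix of measured outputs
    weights   : optional response weights (default: equal weighting)
    zeta      : distinguishing coefficient, conventionally 0.5
    """
    responses = np.asarray(responses, dtype=float)
    # Smaller-the-better normalisation to [0, 1]
    x = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))
    # Deviation from the ideal (reference) sequence of ones
    delta = np.abs(1.0 - x)
    # Grey relational coefficients
    gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # Weighted grey relational grade turns the multi-response problem into a single score
    if weights is None:
        weights = np.full(responses.shape[1], 1.0 / responses.shape[1])
    return gamma @ weights
```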

6467 Low Complexity Regular LDPC codes for Magnetic Storage Devices

Authors: Gabofetswe Malema, Michael Liebelt

Abstract:

LDPC codes could be used in magnetic storage devices because of their better decoding performance compared to other error correction codes. However, their hardware implementation results in large and complex decoders, which is one of the main obstacles to incorporating these decoders in magnetic storage devices. We construct small, high-girth, column-weight-two codes of a given rate from cage graphs. Though these codes have lower performance than higher-column-weight codes, they are easier to implement, and this ease of implementation makes them more suitable for applications such as magnetic recording. Cages are the smallest known regular graphs of a given degree and girth, and they yield the smallest known column-weight-two codes for a given size, girth and rate.

Keywords: Structured LDPC codes, cage graphs.
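
A hedged sketch of the graph-based construction: treating each edge of a cage as a code bit and each vertex as a parity check gives a parity-check matrix with column weight exactly two. Using the plain vertex-edge incidence matrix as H, and the Petersen graph as the (3,5)-cage example, are illustrative assumptions (networkx is used only to supply the graph), not necessarily the authors' exact construction.

```python
import numpy as np
import networkx as nx

def incidence_parity_check(graph):
    """Column-weight-two parity-check matrix from a graph: each edge becomes a
    code bit (column) and each vertex a check (row), so every column has
    exactly two ones."""
    nodes = list(graph.nodes())
    edges = list(graph.edges())
    H = np.zeros((len(nodes), len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        H[nodes.index(u), j] = 1
        H[nodes.index(v), j] = 1
    return H

# The Petersen graph is the (3,5)-cage: the smallest 3-regular graph of girth 5.
H = incidence_parity_check(nx.petersen_graph())
print(H.shape, H.sum(axis=0))   # (10, 15) checks x bits; every column has weight 2
```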

6466 Shape Error Concealment for Shape Independent Transform Coding

Authors: Sandra Ondrušová, Jaroslav Polec

Abstract:

Arbitrarily shaped video objects are an important concept in modern video coding methods. The techniques presently used are based not on image elements but on video objects having an arbitrary shape. In this paper, spatial shape error concealment techniques for object-based images in error-prone environments are proposed. We consider a geometric shape representation consisting of the object boundary, which can be extracted from the α-plane. Three different approaches are used to replace a missing boundary segment: Bézier interpolation, Bézier approximation and NURBS approximation. Experimental results on object shapes of different concealment difficulty demonstrate the performance of the proposed methods. Comparisons among the proposed methods are also presented.

Keywords: error concealment, shape coding, object-based image, NURBS, Bézier curves.
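
As a small illustration of the Bézier-interpolation option, the sketch below bridges a missing boundary segment with a cubic Bézier curve whose control points continue the boundary tangents on either side of the gap; the tangent-based choice of control points and the synthetic boundary samples are assumptions, not the paper's exact procedure.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a cubic Bezier curve between p0 and p3 with control points p1, p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Boundary points around a lost segment (synthetic alpha-plane contour samples)
before = np.array([[10.0, 5.0], [12.0, 7.0]])   # last two reliable points before the gap
after  = np.array([[20.0, 8.0], [22.0, 6.0]])   # first two reliable points after the gap
p0, p3 = before[-1], after[0]
p1 = p0 + (before[-1] - before[-2])             # continue the incoming tangent
p2 = p3 - (after[1] - after[0])                 # continue the outgoing tangent (backwards)
patch = cubic_bezier(p0, p1, p2, p3)            # concealed boundary segment
```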

6465 Error Factors in Vertical Positioning System

Authors: Hyun-Gwang Cho, Wan-Seok Yang, Su-Jin Kim, Jeong-Seok Oh, Chun-Hong Park

Abstract:

The capability of machine tools improved remarkably during the 20th century. The precision of a machine tool is directly related to the precision of its products, so accurate machining is a constant subject of interest. Many elements determine the precision of a machine, such as the guides, motors, structure and control. In this paper we focus on the phenomenon that a vertical movement system has worse precision than a horizontal movement system even when both are built from the same components. The vertical movement system needs to be studied differently from the horizontal one in order to improve its precision, because it carries a load along its direction of travel, which makes it inherently less precise than the horizontal system. Some machines have a mechanical, hydraulic or pneumatic counterbalance to compensate for the weight of the machine head, and there are several ways of applying this compensation: the counterbalance can push the machine head directly, or a chain or wire rope can transfer the compensating force from the counterbalance to the machine head. Depending on the type of compensation, errors can arise from friction or from hydraulic pressure and pressure-control errors; likewise, depending on the element used to transfer the compensating force, force-transfer errors can occur.

Keywords: Chain chordal action, counter balance, setup error, vertical positioning system.

6464 PI Controller for Automatic Generation Control Based on Performance Indices

Authors: Kalyan Chatterjee

Abstract:

The optimal design of a PI controller for Automatic Generation Control in a two-area system is presented in this paper. The concept of dual-mode control is applied in the PI controller, such that the proportional mode is made active when the rate of change of the error is sufficiently larger than a specified limit, and the controller is otherwise switched to the integral mode. A digital simulation is used in conjunction with the Hooke-Jeeves optimization technique to determine the optimum parameters (the individual gains of the proportional and integral controllers) of the PI controller. The Integrated Square of the Error (ISE), Integrated Time multiplied by Absolute Error (ITAE), and Integrated Absolute Error (IAE) performance indices are considered to measure the appropriateness of the designed controller. The proposed controller is tested for a two-area single non-reheat thermal system considering practical aspects of the problem such as deadband and Generation Rate Constraint (GRC). Simulation results show that dual-mode control with optimized values of the gains improves the control performance compared with the commonly used variable-structure controller.

Keywords: Load Frequency Control, Area Control Error (ACE), Dual Mode PI Controller, Performance Index
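
A minimal sketch of the dual-mode switching law described above (not the authors' exact implementation): the controller applies only the proportional term while the error changes faster than a threshold and only the integral term otherwise. The names kp, ki and rate_limit are assumed tuning parameters, e.g. obtained from a Hooke-Jeeves search that minimizes ISE, ITAE or IAE.

```python
def dual_mode_pi_step(error, prev_error, integral, dt, kp, ki, rate_limit):
    """One step of a dual-mode PI law: proportional mode during fast transients,
    integral mode during slow regulation. Returns the control output and the
    updated integral state."""
    error_rate = (error - prev_error) / dt
    if abs(error_rate) > rate_limit:          # fast transient: proportional mode only
        u = kp * error
    else:                                     # slow regulation: integral mode only
        integral += ki * error * dt
        u = integral
    return u, integral
```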

6463 Integrating Blogging into Peer Assessment on College Students’ English Writing

Authors: Su-Lien Liao

Abstract:

Most college students in Taiwan do not have sufficient English proficiency to express themselves in written English. Teachers spend a lot of time correcting the errors in students' English writing, but the results are not satisfactory. This study aims to use blogs as a teaching and learning tool for written English. Before applying peer assessment, students should be trained to be good reviewers. The teacher starts the course by posting the error analysis of the students' first English composition on the blogs as comment models for the students. The students then go through the process of drafting, composing, peer response and final revision on the blogs. Evaluation questionnaires and interviews will be conducted at the end of the course to assess its impact and the students' perceptions of the course.

Keywords: Blog, Peer assessment, English writing, Error analysis.

6462 Two States Mapping Based Neural Network Model for Decreasing of Prediction Residual Error

Authors: Insung Jung, Lockjo Koo, Gi-Nam Wang

Abstract:

The objective of this paper is to design a human vital-sign prediction model that decreases prediction error by using a two-state-mapping, time-series back-propagation (BP) neural network. Many industries apply neural network models, trained in a supervised manner with the error back-propagation algorithm, to time-series prediction systems. However, a residual error remains between the real value and the predicted output. Therefore, we designed a two-state neural network model to compensate for this residual error, which can be used in the prevention of sudden death and metabolic-syndrome diseases such as hypertension and obesity. We found that in most simulation cases the two-state-mapping time-series prediction model outperformed the standard BP model. In particular, it was more accurate than the standard MLP model for small time-series sample sizes. We expect that this algorithm can be applied to sudden-death prevention and to a monitoring agent system in a ubiquitous home-care environment.

Keywords: Neural network, U-healthcare, prediction, time series, computer aided prediction.

6461 Novelist Calls Out Poemist: A Psycholinguistic and Contrastive Analysis of the Errors in Turkish EFL Learners' Interlanguage

Authors: Mehmet Ozcan

Abstract:

This study is designed to investigate errors that emerged in written texts produced by 30 Turkish EFL learners, with an explanatory and thus qualitative perspective. Erroneous language elements were identified by the researcher first, and then their grammaticality and intelligibility were checked by five native speakers of English. The analysis of the data showed that it is difficult to claim that an error stems from a single factor, since different features of an error are triggered by different factors. Our findings revealed two different types of errors: those which stem from the interference of L1 with L2 and those which are developmental. The former type contains more global errors, whereas the errors in the latter type are more intelligible.

Keywords: Contrastive analysis, Error analysis, Language acquisition, Language transfer, Turkish

6460 Simulation of a Boost PFC Converter with Electro Magnetic Interference Filter

Authors: P. Ram Mohan, M. Vijaya Kumar, O. V. Raghava Reddy

Abstract:

This paper deals with the simulation of a Boost Power Factor Correction (PFC) Converter with an Electro Magnetic Interference (EMI) Filter. The diode rectifier with an output capacitor gives a poor power factor. The boost converter of the PFC circuit is analyzed and then simulated with the diode rectifier. The Boost PFC Converter with EMI Filter is simulated for a resistive load. The power factor is improved using the proposed converter.

Keywords: Boost Converter, Power Factor Correction, Electro Magnetic Interference, Diode Rectifier

6459 Stepsize Control of the Finite Difference Method for Solving Ordinary Differential Equations

Authors: Davod Khojasteh Salkuyeh

Abstract:

An important task in solving second order linear ordinary differential equations by the finite difference method is to choose a suitable stepsize h. In this paper, by using stochastic arithmetic, the CESTAC method and the CADNA library, we present a procedure to estimate the optimal stepsize h_opt, the stepsize which minimizes the global error consisting of truncation and round-off error.

Keywords: Ordinary differential equations, optimal stepsize, error, stochastic arithmetic, CESTAC, CADNA.
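
As a hedged illustration of why such an optimum exists (this is the textbook error balance for a second-order central difference, not the CESTAC/CADNA estimate itself), assume the truncation error grows like C_1 h^2 while round-off contributes roughly C_2 ε / h^2 for machine precision ε:

```latex
% Illustrative error balance; C_1, C_2 and \varepsilon are assumed constants.
E(h) \approx C_1 h^{2} + \frac{C_2\,\varepsilon}{h^{2}},
\qquad
\frac{dE}{dh} = 0
\;\Longrightarrow\;
h_{\mathrm{opt}} = \left(\frac{C_2\,\varepsilon}{C_1}\right)^{1/4}.
```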

6458 Studies on Affecting Factors of Wheel Slip and Odometry Error on Real-Time of Wheeled Mobile Robots: A Review

Authors: D. Vidhyaprakash, A. Elango

Abstract:

In real-time applications, wheeled mobile robots are increasingly used and operated in extreme and diverse conditions, traversing challenging surfaces such as pitted, uneven terrain; natural flat, smooth terrain; and wet and dry surfaces. In order to accomplish such tasks, it is critical that the motion control functions without wheel slip and odometry error during the navigation of the two-wheeled mobile robot (WMR). Wheel slip and odometry error disrupt overall WMR performance, appearing as deviation from the desired trajectory, degraded navigation, longer travel time and higher-than-budgeted energy consumption. The wheeled mobile robot's ability to operate at peak performance on various work surfaces without wheel slippage and odometry error is directly connected to four main parameters: the range of payload distribution, speed, wheel diameter, and wheel width. This paper analyses the effects of those parameters on overall performance and is concerned with determining the ideal range of parameters for optimum performance.

Keywords: Wheeled mobile robot (WMR), terrain, wheel slippage, odometry error, navigation.

6457 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers or hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his movement and visualizes his trajectory on the screen for doctors or administrators. Therefore, doctors can view the position of the patient at any time and find them immediately when an emergency happens. In our design process, different algorithms were discussed and their errors were analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.

Keywords: Intelligent monitoring, IoT devices, real-time location, smart healthcare, ultra-wideband technology.
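
For context, one standard way to turn corrected UWB ranges into a position fix is linearised least-squares multilateration; the sketch below is illustrative and is not claimed to be the algorithm selected in the paper. The anchor coordinates, the 5 cm range-noise level and all names are assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position fix from UWB range measurements.
    anchors : (M, 2) known anchor coordinates (M >= 3)
    ranges  : (M,) measured distances to the tag (after antenna-delay correction)
    Linearising d_i^2 = |p - a_i|^2 against the first anchor gives A p = b."""
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0]])
true_pos = np.array([2.5, 3.2])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(trilaterate(anchors, ranges))   # close to (2.5, 3.2) with ~5 cm range noise
```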

6456 On the Efficiency and Robustness of Commingle Wiener and Lévy Driven Processes for Vasicek Model

Authors: Rasaki O. Olanrewaju

Abstract:

The Wiener and Lévy driven processes are known stand-alone Gaussian-Markov processes for fitting the non-linear dynamical Vasicek model. In this paper, a coincidental Gaussian density stationarity condition and the autocorrelation function of the two driven processes are established. This led to the conflation of the Wiener and Lévy processes so as to investigate the efficiency of estimates incorporated into the one-dimensional Vasicek model, which was estimated via the Maximum Likelihood (ML) technique. The conditional laws of the drift, diffusion and stationarity processes were ascertained for the individual Wiener and Lévy processes, as well as for the commingling of the two processes, for a fixed-effect and autoregressive-like Vasicek model applied to a financial series: the Naira-CFA Franc exchange rate. In addition, the model performance error of the commingled driven process was small compared to that of the stand-alone Wiener and Lévy driven processes.

Keywords: Wiener process, Lévy process, Vasicek model, drift, diffusion, Gaussian density stationarity.
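
To make the Wiener-driven Vasicek setting concrete, the sketch below simulates dr_t = a(b − r_t)dt + σ dW_t and recovers the parameters with the closed-form ML estimates from the exact AR(1) discretisation. This is a textbook single-process illustration, not the commingled Wiener-Lévy estimator of the paper; all parameter values are synthetic.

```python
import numpy as np

def simulate_vasicek(a, b, sigma, r0, dt, n, rng):
    """Euler-Maruyama simulation of the Wiener-driven Vasicek model
    dr_t = a (b - r_t) dt + sigma dW_t (illustrative parameters)."""
    r = np.empty(n)
    r[0] = r0
    for t in range(1, n):
        r[t] = r[t-1] + a * (b - r[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return r

def fit_vasicek_ml(r, dt):
    """Closed-form ML estimates using the exact AR(1) discretisation
    r_{t+1} = c + phi r_t + eps (standard textbook result)."""
    x, y = r[:-1], r[1:]
    phi, c = np.polyfit(x, y, 1)                 # slope and intercept of the AR(1) regression
    a_hat = -np.log(phi) / dt
    b_hat = c / (1.0 - phi)
    resid = y - (c + phi * x)
    sigma_hat = resid.std(ddof=2) * np.sqrt(2.0 * a_hat / (1.0 - phi ** 2))
    return a_hat, b_hat, sigma_hat

rng = np.random.default_rng(0)
path = simulate_vasicek(a=0.8, b=500.0, sigma=15.0, r0=480.0, dt=1/252, n=2000, rng=rng)
print(fit_vasicek_ml(path, dt=1/252))            # estimates close to (0.8, 500, 15)
```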

6455 A Soft Error Rates Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers

Authors: Man Li, Wanting Zhou, Lei Li

Abstract:

Communication stability is the primary concern of communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in combinational logic circuits. The existing research on soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded have the same pulse width. However, in the actual radiation environment, the pulse widths of the bombarded logic gates differ due to different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. We analyze the influence of LET on the pulse width of combinational logic and establish a pulse width model based on LET. Using this model, the error rate of the ISCAS'85 test circuits is calculated. Experimental results show that this model can be used for SER evaluation.

Keywords: Communication satellite, pulse width, soft error rates, linear energy transfer, LET.

6454 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach

Authors: K. Bokreta, D. Benanaya

Abstract:

The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria using the econometric modelling techniques of cointegration and vector error correction modelling to analyse and draw policy inferences. The chosen variables of fiscal policy are government expenditure and net taxes on products, while the effect of monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run the impact of government expenditure is positive, while the effect of taxes on growth is negative. Additionally, the inflation rate is found to have little effect on GDP per capita, and the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria.

Keywords: Economic growth, fiscal policy, monetary policy, VECM.
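
A hedged sketch of this kind of estimation workflow, assuming statsmodels' VECM implementation and hypothetical column names for the Algerian series (the file name and variable names are placeholders, not the study's data):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical annual series: GDP, government expenditure, net taxes,
# inflation and the official exchange rate (column names are assumptions).
data = pd.read_csv("algeria_macro.csv",
                   usecols=["gdp", "gov_exp", "net_tax", "inflation", "exch_rate"])
data = np.log(data)                       # log-levels, a common choice before a VECM

# Johansen-type test to choose the cointegration rank
rank = select_coint_rank(data, det_order=0, k_ar_diff=2, method="trace")

# Vector error-correction model: short-run dynamics plus long-run relations
model = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci")
res = model.fit()
print(res.summary())                      # alpha = adjustment speeds, beta = long-run vector
```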

6453 Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

Authors: Insah Bhurtah, P. Clarel Catherine, K. M. Sunjiv Soyjaudah

Abstract:

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes while the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and numbers of iterations of up to 2000, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps increasing with the number of iterations.

Keywords: Error-correcting codes, information theory, low-density parity-check codes, sum-product algorithm.

6452 Determination of Constant Coefficients to Relate Total Dissolved Solids to Electrical Conductivity

Authors: M. Siosemarde, F. Kave, E. Pazira, H. Sedghi, S. J. Ghaderi

Abstract:

Salinity is a measure of the amount of salts in the water. Total Dissolved Solids (TDS) as a salinity parameter is often determined using laborious and time-consuming laboratory tests, but it may be more appropriate and economical to develop a method which uses a simpler salinity index. Because dissolved ions increase salinity as well as conductivity, the two measures are related. The aim of this research was to determine constant coefficients for predicting Total Dissolved Solids (TDS) from Electrical Conductivity (EC), evaluated with the correlation coefficient, root mean square error, maximum error, mean bias error, mean absolute error, relative error and coefficient of residual mass. For this purpose, two experimental areas (S1, S2) of Khuzestan province, Iran, were selected, and four treatments with three replications were applied using series of double rings. The treatments were 25 cm, 50 cm, 75 cm and 100 cm of water application. The results showed that 16.3 and 12.4 were the best constant coefficients for predicting TDS from EC in pilots S1 and S2, with correlation coefficients of 0.977 and 0.997 and root mean square errors (RMSE) of 191.1 and 106.1, respectively.

Keywords: constant coefficients, electrical conductivity, Khuzestan plain and total dissolved solids.
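
The constant coefficient in a relation of the form TDS ≈ k · EC can be obtained by regression through the origin, together with the error statistics listed above; the sketch below is a generic illustration of that computation, not the exact procedure of the study.

```python
import numpy as np

def fit_tds_ec_coefficient(ec, tds):
    """Least-squares constant coefficient k in the assumed relation TDS ~ k * EC,
    plus the error statistics named in the abstract (illustrative implementation)."""
    ec, tds = np.asarray(ec, float), np.asarray(tds, float)
    k = np.sum(ec * tds) / np.sum(ec ** 2)        # regression through the origin
    pred = k * ec
    err = pred - tds
    return {
        "k": k,
        "r": np.corrcoef(pred, tds)[0, 1],        # correlation coefficient
        "RMSE": np.sqrt(np.mean(err ** 2)),       # root mean square error
        "MaxE": np.max(np.abs(err)),              # maximum error
        "MBE": np.mean(err),                      # mean bias error
        "MAE": np.mean(np.abs(err)),              # mean absolute error
    }
```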

6451 Visualization of Sediment Thickness Variation for Sea Bed Logging using Spline Interpolation

Authors: Hanita Daud, Noorhana Yahya, Vijanth Sagayan, Muizuddin Talib

Abstract:

This paper discusses the use of spline interpolation and the mean square error (MSE) as tools to process data acquired from a simulator developed to replicate the sea bed logging environment. Sea bed logging (SBL) is a new technique that uses the marine controlled-source electromagnetic (CSEM) sounding method and has proven very successful in detecting and characterizing hydrocarbon reservoirs in deep-water areas by using resistivity contrasts. It uses very low frequencies, from 0.1 Hz to 10 Hz, to obtain greater wavelengths. In this work, the in-house simulator was provided with predefined parameters, and the transmitted frequency was varied for sediment thicknesses of 1000 m to 4000 m in environments with and without hydrocarbon. From a series of simulations, synthetic data were generated. These data were interpolated using the spline interpolation technique (degree three), and the mean square error (MSE) between the original and the interpolated data was calculated. Comparisons were made by studying the trends and the relationship between frequency and sediment thickness based on the calculated MSE. It was found that the MSE showed an increasing trend in the setup with hydrocarbon present compared with the one without. The MSE also showed a decreasing trend as the sediment thickness increased and at higher transmitted frequencies.

Keywords: Spline Interpolation, Mean Square Error, Sea Bed Logging, Controlled Source Electromagnetic
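
A minimal illustration of the processing step described above, assuming SciPy's cubic spline (degree three): fit the spline on part of the synthetic response and measure the MSE against the remaining samples. The even/odd split used for held-out points is an assumption for demonstration, not the authors' exact protocol.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_mse(x, y, n_dense=200):
    """Degree-three spline interpolation of synthetic SBL responses and the MSE
    against the original samples at held-out points (illustrative workflow).
    x must be sorted in increasing order."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cs = CubicSpline(x[::2], y[::2])                      # fit on every second sample
    mse = np.mean((cs(x[1::2]) - y[1::2]) ** 2)           # error on the remaining samples
    dense_curve = cs(np.linspace(x.min(), x.max(), n_dense))
    return dense_curve, mse
```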

6450 Growth, Population, Exports and Wagner's Law: A Case Study of Pakistan (1972-2007)

Authors: T. Hussain, A. Iqbal, M. W. Siddiqi

Abstract:

The objective of this study is to examine the validity of Wagner's law and the relationship between economic growth, population and exports for Pakistan. The ARDL bounds cointegration approach and an error correction model (ECM) are utilized to study the long- and short-run equilibrium over the period 1972-2007. Population plays a considerable role in an economy, and exports are a main source of GDP growth. With an increase in GDP, government expenditures may or may not increase. The empirical results indicate that Wagner's law does hold, as economic growth is significantly and positively correlated with government expenditures. However, population and exports also have a significant and positive impact on government expenditures in both the short and the long run. The significant and negative coefficient of the error correction term in the ECM indicates that after a shock, about 70.82 percent of the deviation from long-run equilibrium is corrected within a year.

Keywords: ARDL Cointegration, Growth, Pakistan, Wagner's law.

6449 Application of Adaptive Neuro-Fuzzy Inference Systems Technique for Modeling of Postweld Heat Treatment Process of Pressure Vessel Steel ASTM A516 Grade 70

Authors: Omar Al Denali, Abdelaziz Badi

Abstract:

The ASTM A516 Grade 70 steel is a suitable material for the fabrication of boiler pressure vessels working in moderate- and lower-temperature service, and it has good weldability and excellent notch toughness. The post-weld heat treatment (PWHT), or stress-relieving heat treatment, plays a significant role in avoiding the martensite transformation and the resulting high hardness, which can lead to cracking in the heat-affected zone (HAZ). An adaptive neuro-fuzzy inference system (ANFIS) was implemented to predict the tensile strength of the material in the PWHT experiments. The ANFIS models produced excellent predictions, and the comparison was carried out based on the mean absolute percentage error between the predicted and experimental values. The ANFIS model gave a mean absolute percentage error of 0.556%, which confirms the high accuracy of the model.

Keywords: Prediction, post-weld heat treatment, adaptive neuro-fuzzy inference system, ANFIS, mean absolute percentage error.
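
For reference, the mean absolute percentage error quoted above follows the generic definition below; the example numbers are invented and only show the scale of a sub-1% MAPE.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between measured and predicted values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# e.g. measured vs. predicted tensile strengths in MPa (illustrative numbers)
print(mape([485.0, 510.0, 495.0], [487.1, 508.3, 492.4]))   # well under 1 %
```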

6448 The Study of Formal and Semantic Errors of Lexis by Persian EFL Learners

Authors: Mohammad J. Rezai, Fereshteh Davarpanah

Abstract:

Producing a text in a language which is not one's mother tongue can be a demanding task for language learners. Examining lexical errors committed by EFL learners is a challenging area of investigation which can shed light on the process of second language acquisition. Despite the considerable number of investigations into grammatical errors, few studies have tackled formal and semantic errors of lexis committed by EFL learners. The current study aimed at examining Persian learners' formal and semantic errors of lexis in English. To this end, 60 students at three different proficiency levels were asked to write on 10 different topics in 10 separate sessions. Finally, 600 essays written by Persian EFL learners were collected, acting as the corpus of the study. An error taxonomy comprising formal and semantic errors was selected to analyze the corpus. The formal category covered misselection and misformation errors, while the semantic errors were classified into lexical, collocational and lexicogrammatical categories. Each category was further classified into subcategories depending on the identified errors. The results showed that there were 2583 errors in the corpus of 9600 words, of which 2030 were formal errors and 553 were semantic errors. Formal errors were the most frequent in the corpus (78.6%) and were most prevalent at the advanced level (42.4%). Semantic errors (21.4%) were most frequent at the low-intermediate level (40.5%). Among the formal errors of lexis, the highest number belonged to misformation errors (98%), while misselection errors constituted 2% of the errors. Additionally, no significant differences were observed among the three semantic error subcategories, namely collocational, lexical choice and lexicogrammatical. The results of the study can shed light on the challenges faced by EFL learners in the second language acquisition process.

Keywords: Collocational errors, lexical errors, Persian EFL learners, semantic errors.

6447 Performance of Block Codes Using the Eigenstructure of the Code Correlation Matrix and Soft-Decision Decoding of BPSK

Authors: Vitalice K. Oduol, C. Ardil

Abstract:

A method is presented for obtaining the error probability of block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and for an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.

Keywords: bit error rate, block codes, code correlation matrix, eigenstructure, soft-decision decoding, weight vector.

6446 Developing New Processes and Optimizing Performance Using Response Surface Methodology

Authors: S. Raissi

Abstract:

Response surface methodology (RSM) is a very efficient tool for gaining practical insight into developing new processes and optimizing them. The methodology helps engineers build a mathematical model that represents the behavior of a system as a function of the process parameters. In this paper, the sequential nature of RSM is surveyed for process engineers, and its relationship to design of experiments (DOE), regression analysis and robust design is reviewed. The proposed four-step procedure, carried out in two phases, can help a system analyst resolve the parameter design problem involving responses. In order to check the accuracy of the designed model, residual analysis and the prediction error sum of squares (PRESS) are described. It is believed that the proposed procedure can resolve a complex parameter design problem with one or more responses. It can be applied to areas where there are large data sets and a number of responses are to be optimized simultaneously. In addition, the proposed procedure is relatively simple and can be implemented easily by using ready-made standard statistical packages.

Keywords: Response Surface Methodology (RSM), Design of Experiments (DOE), Process modeling, Process setting, Process optimization.
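
As a small illustration of the PRESS check mentioned above, the sketch below computes the prediction error sum of squares for a linear (or response-surface) regression using the standard leverage identity e_(i) = e_i / (1 − h_ii), which avoids refitting the model n times. X is assumed to already contain the model terms (intercept, linear, interaction and quadratic columns).

```python
import numpy as np

def press_statistic(X, y):
    """Prediction error sum of squares (PRESS) for a least-squares regression,
    computed from ordinary residuals and leverages."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    leverage = np.diag(H)
    return np.sum((resid / (1.0 - leverage)) ** 2)
```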

6445 Variable Step-Size APA with Decorrelation of AR Input Process

Authors: Jae Wook Shin, Ju-man Song, Hyun-Taek Choi, Poo Gyeon Park

Abstract:

This paper introduces a new variable step-size affine projection algorithm (APA) with decorrelation of the AR input process, based on mean-square deviation (MSD) analysis. To achieve a fast convergence rate and a small steady-state estimation error, the proposed algorithm uses a variable step size determined by minimizing the MSD. Experimental results show that the proposed algorithm achieves better performance than the other algorithms.

Keywords: adaptive filter, affine projection algorithm, variable step size.

6444 The Performance Analysis of Error Saturation Nonlinearity LMS in Impulsive Noise based on Weighted-Energy Conservation

Authors: T Panigrahi, G Panda, Mulgrew

Abstract:

This paper introduces a new approach for the performance analysis of an adaptive filter with an error saturation nonlinearity in the presence of impulsive noise. The performance analysis of adaptive filters includes both transient analysis, which shows how fast a filter learns, and steady-state analysis, which shows how well it learns. The recursive expressions for the mean-square deviation (MSD) and the excess mean-square error (EMSE) are derived based on weighted-energy conservation arguments, which provide the transient behavior of the adaptive algorithm. The steady-state behavior for correlated input regressor data is also analyzed, so this approach leads to new performance results without restricting the input regression data to be white.

Keywords: Error saturation nonlinearity, transient analysis, impulsive noise.
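
A minimal sketch of an LMS filter with an error saturation nonlinearity in the weight update, which is the kind of adaptive filter analysed above; the hard clipping used here is an illustrative choice, since the paper's exact saturation function is not specified in the abstract.

```python
import numpy as np

def saturated_lms(x, d, n_taps, mu, sat_level):
    """LMS adaptive filter whose error is passed through a saturation
    nonlinearity before the weight update, which curbs the impact of
    impulsive noise on the adaptation."""
    w = np.zeros(n_taps)
    errors = np.empty(len(x))
    for n in range(len(x)):
        u = x[max(0, n - n_taps + 1):n + 1][::-1]        # most recent samples first
        u = np.pad(u, (0, n_taps - len(u)))              # zero-pad during start-up
        e = d[n] - w @ u
        errors[n] = e
        f_e = np.clip(e, -sat_level, sat_level)          # error saturation nonlinearity
        w += mu * f_e * u
    return w, errors
```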

6443 On Adaptive Optimization of Filter Performance Based on Markov Representation for Output Prediction Error

Authors: Hong Son Hoang, Remy Baraille

Abstract:

This paper addresses the problem of how one can improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form, there exists an equivalent dynamical system in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes and is computationally less demanding, especially as the dimension of the simulated processes increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high dimensional systems. Here, a modified filter for data assimilation in an oceanic numerical model is presented, which proves to be very efficient thanks to the introduction of a simple Markovian structure for the output prediction error process and the adaptive tuning of some parameters of the Markov equation.

Keywords: Statistical simulation, canonical form, dynamical system, Markov and non-Markovian processes, data assimilation.

6442 Electromagnetic Interference Radiation Prediction and Final Measurement Process Optimization by Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

EMC regulations worldwide are being extended steadily as the usage of electronics in our daily lives increases more than ever. In this paper, we present a method to perform the final phase of Electromagnetic Compatibility (EMC) measurement and to reduce the required test time according to the norm EN 55032 by using a developed tool and a Conventional Neural Network (CNN). The neural network was trained using real EMC measurements performed in the Semi Anechoic Chamber (SAC) by CETECOM GmbH in Essen, Germany. To implement our proposed method, we wrote software to perform the radiated electromagnetic interference (EMI) measurements and used the CNN to predict and determine the position of the turntable that yields the maximum radiation value.

Keywords: Conventional neural network, electromagnetic compatibility measurement, mean absolute error, position error.

6441 Electric Load Forecasting Using Genetic Based Algorithm, Optimal Filter Estimator and Least Error Squares Technique: Comparative Study

Authors: Khaled M. EL-Naggar, Khaled A. AL-Rumaih

Abstract:

This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems. The three optimal estimation techniques are genetic algorithms (GA), least error squares (LS) and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data are used to perform the study. The performance of the three optimal estimation techniques is examined, and the advantages of each algorithm are reported and discussed.

Keywords: Forecasting, Least error squares, Least absolute Value, Genetic algorithms
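
As a small illustration of the least-error-squares branch of the comparison, the sketch below fits a polynomial peak-load model by LS and extrapolates it one year ahead; the model form, the yearly data and all values are synthetic assumptions, not the paper's recorded data or model set.

```python
import numpy as np

def fit_peak_load_ls(years, peak_load, degree=2):
    """Least-error-squares estimation of a polynomial peak-load forecasting model.
    Returns the coefficients and a simple forecasting function."""
    years = np.asarray(years, float)
    t = years - years[0]                          # time index measured from the first year
    coeffs = np.polyfit(t, peak_load, degree)     # LS solution of the normal equations
    forecast = lambda year: np.polyval(coeffs, year - years[0])
    return coeffs, forecast

years = np.arange(2000, 2011)
load = np.array([5.1, 5.4, 5.8, 6.1, 6.5, 7.0, 7.4, 7.7, 8.2, 8.6, 9.1])   # GW, synthetic
coeffs, forecast = fit_peak_load_ls(years, load)
print(forecast(2012))                             # extrapolated peak load for 2012
```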

6440 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to the responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average and accounts for unknown sources of variation; dual response plus propagation of error therefore provides a more useful model of overall response variation. In our case, we implemented this technique to predict the compressive strength of concrete at 28 days of age, since waiting 28 days is quite time-consuming while quality control must still be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. The data used for this study were obtained from experimental schemes at the Civil Engineering Department, University of Benghazi; a total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, namely cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required 28-day compressive strength of concrete from early-age test results.

Keywords: Mix proportioning, response surface methodology, compressive strength, optimal design.
