Search results for: stochastic errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1358

1268 Collocation Errors in English as Second Language (ESL) Essay Writing

Authors: Fatima Muhammad Shitu

Abstract:

In language learning, second language learners, like their native speaker counterparts, commit errors in their attempt to achieve competence in the target language. The realm of collocation has to do with meaning relations between lexical items. In all human languages, there is a kind of ‘natural order’ in which words are arranged or relate to one another in sentences, so much so that when a word occurs in a given context, the related or naturally co-occurring word automatically comes to mind. It becomes an error, therefore, if students inappropriately pair or arrange such ‘naturally’ co-occurring lexical items in a text. It has been observed that most of the second language learners in this research group commit collocational errors. A study of this kind is very significant, as it gives insight into the kinds of errors committed by learners. This will help the language teacher to identify the sources and causes of such errors as well as correct them, thereby guiding, helping and leading the learners towards achieving some level of competence in the language. The aim of the study is to understand the nature of these errors as stumbling blocks to effective essay writing. The objective of the study is to identify the errors and analyse their structural compositions, so as to determine whether there are similarities between students in this regard and to find out whether there are patterns to these kinds of errors which will enable the researcher to understand their sources and causes. As a descriptive study, the research samples nine hundred essays collected from three hundred undergraduate learners of English as a second language at the Federal College of Education, Kano, North-West Nigeria, i.e. three essays per student. The essays, written during the lecture hour on three different lecture occasions, were of similar thematic preoccupation (i.e. the same topics) and length (i.e. the same number of words). The errors were identified in a systematic manner whereby each identified error was recorded only once, even if it occurred several times in a student’s essays. The data were collated using percentages, with the identified numbers of occurrences converted accordingly into percentages. The findings from the study indicate that there are similarities as well as regular and repeated errors, which provided a pattern. Based on the pattern identified, the conclusion is that students’ collocational errors are attributable to poor teaching and learning, which results in the wrong generalisation of rules.

Keywords: collocations, errors, second language learning, ESL students

Procedia PDF Downloads 317
1267 A Reactive Flexible Job Shop Scheduling Model in a Stochastic Environment

Authors: Majid Khalili, Hamed Tayebi

Abstract:

This paper considers a stochastic flexible job-shop scheduling (SFJSS) problem in the presence of production disruptions, and reactive scheduling is implemented in order to find the optimal solution under uncertainty. In this problem, there are two main disruptions: machine failures, which influence operation times, and modification or cancellation of the order delivery date during production. In order to decrease the negative effects of these difficulties, two strategies derived from reactive scheduling are used; the first is the ability to allocate multiple machines to each job, and the second is the ability to select the best alternative process from another job when disruptions arise in the processes of a job. For this purpose, a mixed integer linear programming model is proposed.

Keywords: flexible job-shop scheduling, reactive scheduling, stochastic environment, mixed integer linear programming

Procedia PDF Downloads 340
1266 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow quickly, the pressure on spectrum utilization rises steadily. Cognitive radio is an emerging technology that has come out to solve today’s spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. A primary user follows two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in a real environment, as it can tolerate some amount of sensing errors.
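A minimal sketch of the two sensing steps described above, with illustrative parameters rather than the paper's settings: energy detection over fixed windows produces an ON/OFF decision sequence, and its autocorrelation is then inspected for periodic structure (a stochastic, memoryless pattern would instead show only a quickly decaying autocorrelation).

```python
# Illustrative sketch (not the paper's implementation): energy detection of a
# primary user over sensing windows, then autocorrelation of the resulting
# ON/OFF decision sequence to look for periodic traffic structure.
import numpy as np

rng = np.random.default_rng(0)
n_windows, samples_per_window = 400, 128
snr_db = 0                                   # assumed SNR during ON periods
snr_linear = 10 ** (snr_db / 10)

# Hypothetical periodic PU activity: 20 windows ON, 20 windows OFF
pu_active = (np.arange(n_windows) % 40) < 20

energies = np.empty(n_windows)
for k in range(n_windows):
    noise = rng.normal(0.0, 1.0, samples_per_window)
    signal = np.sqrt(snr_linear) * rng.normal(0.0, 1.0, samples_per_window) if pu_active[k] else 0.0
    energies[k] = np.mean((noise + signal) ** 2)     # energy detector statistic

threshold = 1.0 + 3.0 / np.sqrt(samples_per_window)  # crude margin above the unit noise floor
decisions = (energies > threshold).astype(float)

# Autocorrelation of the ON/OFF sequence; for a roughly symmetric periodic
# pattern the minimum sits near half the period, while a memoryless
# (stochastic) pattern shows only a fast monotone decay.
d = decisions - decisions.mean()
acf = np.correlate(d, d, mode="full")[n_windows - 1:]
acf /= acf[0]
half_period = int(np.argmin(acf[: n_windows // 2]))
print(f"duty cycle: {decisions.mean():.2f}, estimated ON/OFF period ≈ {2 * half_period} windows")
```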

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 330
1265 A Mathematical Model for 3-DOF Rotary Accuracy Measurement Method Based on a Ball Lens

Authors: Hau-Wei Lee, Yu-Chi Liu, Chien-Hung Liu

Abstract:

A mathematical model is presented for a system that measures rotational errors in a shaft using a ball lens. The geometric optical characteristics of the ball lens mounted on the shaft allow the measurement of rotation axis errors in both the radial and axial directions. The equipment used includes two quadrant detectors (QD), two laser diodes and a ball lens that is mounted on the rotating shaft to be evaluated. Rotational errors in the shaft cause changes in the optical geometry of the ball lens. The resulting deflection of the laser beams is detected by the QDs, and their output signals are used to determine the rotational errors. The radial and the axial rotational errors can be calculated as explained by the mathematical model. Results from system calibration show that the measurement error is within ±1 µm and the resolution is about 20 nm. Using a direct drive motor (DD motor) as an example, experimental results show a rotational error of less than 20 µm. The most important features of this system are that it does not require the use of expensive optical components, it is small, it is very easy to set up, and its measurements are highly accurate.

Keywords: ball lens, quadrant detector, axial error, radial error

Procedia PDF Downloads 450
1264 Regularization of Gene Regulatory Networks Perturbed by White Noise

Authors: Ramazan I. Kadiev, Arcady Ponosov

Abstract:

Mathematical models of gene regulatory networks can in many cases be described by ordinary differential equations with switching nonlinearities, where the initial value problem is ill-posed. Several regularization methods are known in the case of deterministic networks, but the presence of stochastic noise leads to several technical difficulties. In the presentation, it is proposed to apply the methods of the stochastic singular perturbation theory going back to Yu. Kabanov and Yu. Pergamentshchikov. This approach is used to regularize the above ill-posed problem, which, e.g., makes it possible to design stable numerical schemes. Several examples are provided in the presentation, which support the efficiency of the suggested analysis. The method can also be of interest in other fields of biomathematics, where differential equations contain switchings, e.g., in neural field models.
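As a purely illustrative companion to the abstract, and not the singular-perturbation construction of Kabanov and Pergamentshchikov applied by the authors, the sketch below simulates a one-gene model with a switching production term perturbed by white noise; the step response is smoothed by a steep sigmoid so that a standard Euler-Maruyama scheme behaves stably, and all parameter values are assumptions.

```python
# Illustrative sketch only: a one-gene regulatory model with a switching
# (self-repressing) production term, perturbed by white noise.  The Heaviside
# switch is replaced by a steep sigmoid of steepness 1/eps so that the SDE has
# a well-behaved Euler-Maruyama discretisation.  This generic smoothing is not
# the stochastic singular-perturbation regularisation used in the paper.
import numpy as np

def sigmoid(x, theta, eps):
    return 1.0 / (1.0 + np.exp(-(x - theta) / eps))

rng = np.random.default_rng(1)
T, dt = 50.0, 1e-3
n = int(T / dt)
k_prod, k_deg, theta, eps, sigma = 2.0, 1.0, 1.0, 0.05, 0.2   # assumed parameters

x = np.empty(n)
x[0] = 0.1
for i in range(n - 1):
    # production is switched off once x exceeds the threshold theta
    drift = k_prod * (1.0 - sigmoid(x[i], theta, eps)) - k_deg * x[i]
    x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print(f"mean expression level over the second half: {x[n // 2:].mean():.3f} "
      f"(switching threshold theta = {theta})")
```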

Keywords: ill-posed problems, singular perturbation analysis, stochastic differential equations, switching nonlinearities

Procedia PDF Downloads 175
1263 English Grammatical Errors of Arabic Sentence Translations Done by Machine Translations

Authors: Muhammad Fathurridho

Abstract:

Grammar, as the set of rules that makes a language understandable to everyone, is always related to syntax and morphology. Arabic grammar differs from the grammars of other languages; it has more rules and difficulties. This paper aims to investigate and describe the English grammatical errors made by machine translation systems in translating Arabic sentences, including declarative, exclamatory, imperative, and interrogative sentences, specifically in the year 2018, when such systems are supported by artificial intelligence. The Arabic sample sentences, divided into verbal and nominal sentences drawn from several published Arabic texts, are examined as the source language samples. The sentences translated by several popular online machine translation systems, including Google Translate, Microsoft Bing, Babylon, Facebook, Hellotalk, Worldlingo, Yandex Translate, and Tradukka Translate, are the material objects of this research. The descriptive method adopted in this research identifies the grammatical errors in the English target language and classifies them. The conclusion of this paper shows that the grammatical errors in machine translation results are varied and are generally classified into morphological, syntactical, and semantic errors across all types of Arabic words (noun, verb, and particle), and this can serve as one of the evaluations for machine translation providers so that they can correct these errors and improve the intelligibility of their results.

Keywords: Arabic, Arabic-English translation, machine translation, grammatical errors

Procedia PDF Downloads 136
1262 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs’ late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
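For scenario (ii), stochastic processing times with deterministic due dates, the expected weighted number of tardy jobs has a simple closed form for a given sequence when processing times are independent and normally distributed. The sketch below evaluates an earliest-due-date baseline on hypothetical job data; it only illustrates the objective, not the heuristics proposed in the paper.

```python
# Minimal sketch of scenario (ii): independent normally distributed processing
# times, deterministic due dates, fixed tardiness weights.  For a fixed
# sequence, each job's completion time is normal, so its tardiness probability
# has a closed form.  EDD is used as a simple baseline sequence.
import numpy as np
from scipy.stats import norm

# Hypothetical job data: mean processing time, variance, due date, weight
mu = np.array([4.0, 2.0, 6.0, 3.0])
var = np.array([1.0, 0.5, 2.0, 0.8])
due = np.array([5.0, 4.0, 14.0, 9.0])
w = np.array([2.0, 1.0, 3.0, 1.5])

def expected_weighted_tardy(seq):
    m = np.cumsum(mu[seq])              # mean completion times along the sequence
    s = np.sqrt(np.cumsum(var[seq]))    # standard deviations of completion times
    p_tardy = 1.0 - norm.cdf((due[seq] - m) / s)
    return float(np.sum(w[seq] * p_tardy))

edd = np.argsort(due)                   # earliest-due-date baseline
print("EDD sequence:", edd.tolist(),
      "expected weighted number of tardy jobs:", round(expected_weighted_tardy(edd), 3))
```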

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 479
1261 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). Since the likelihood function of this process can be obtained, by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip-times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when they were fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
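The simplest member of this DSPP class is a two-state Markov-modulated Poisson process, in which an unobserved Markov chain X(t) switches the bucket-tip arrival rate between a low and a high value. The sketch below simulates such a process with hypothetical rates; it is not the fitted rainfall model.

```python
# Minimal sketch of a doubly stochastic Poisson process: a two-state
# continuous-time Markov chain modulates the Poisson rate of bucket-tip
# arrivals.  All rates below are hypothetical, not fitted rainfall parameters.
import numpy as np

rng = np.random.default_rng(2)
q = np.array([0.2, 1.0])      # exit rates (per hour) of state 0 ("dry") and state 1 ("wet")
lam = np.array([0.05, 3.0])   # tip arrival rates (per hour) in each state
T = 500.0                     # hours to simulate

t, state, tips = 0.0, 0, []
while t < T:
    dwell = min(rng.exponential(1.0 / q[state]), T - t)   # time spent in current state
    # conditional on the count, Poisson arrival times are uniform over the dwell period
    n = rng.poisson(lam[state] * dwell)
    tips.extend(np.sort(t + rng.uniform(0.0, dwell, n)))
    t += dwell
    state = 1 - state                                     # two states: just flip

print(f"simulated {len(tips)} bucket tips over {T:.0f} h; "
      f"overall rate {len(tips) / T:.2f} per hour")
```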

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 256
1260 Modelling High-Frequency Crude Oil Dynamics Using Affine and Non-Affine Jump-Diffusion Models

Authors: Katja Ignatieva, Patrick Wong

Abstract:

We investigate the dynamics of high-frequency energy prices, including crude oil and electricity prices. The returns of the underlying quantities are modelled using various parametric models, such as a stochastic volatility framework with jumps (SVCJ), as well as non-parametric alternatives, which are purely data driven and do not require specification of the drift or the diffusion coefficient function. Using different statistical criteria, we investigate the performance of the considered parametric and non-parametric models in their ability to forecast price series and volatilities. Our models incorporate possible seasonalities in the underlying dynamics and utilise advanced estimation techniques for the dynamics of energy prices.
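A stripped-down illustration of the parametric side is an Euler discretisation of a stochastic-volatility model with jumps in returns, a simplified relative of the SVCJ specification (correlated jumps in the variance process and the seasonal components are omitted, and all parameters are hypothetical rather than estimates for crude oil or electricity).

```python
# Illustrative Euler simulation of a stochastic-volatility model with jumps in
# returns.  This is a simplified relative of SVCJ: no jumps in variance, no
# seasonality, and purely hypothetical parameters.
import numpy as np

rng = np.random.default_rng(3)
n, dt = 10_000, 1.0 / (252 * 48)               # roughly 30-minute steps over one year
kappa, theta, xi, rho = 3.0, 0.04, 0.5, -0.5   # variance mean reversion, vol-of-vol, leverage
lam, mu_j, sig_j = 20.0, -0.01, 0.03           # jump intensity (per year) and jump size law

v = np.empty(n)
r = np.empty(n - 1)
v[0] = theta
for i in range(n - 1):
    z1, z2 = rng.normal(size=2)
    zv = rho * z1 + np.sqrt(1 - rho ** 2) * z2          # correlated variance shock
    jump = rng.normal(mu_j, sig_j) if rng.random() < lam * dt else 0.0
    r[i] = np.sqrt(max(v[i], 0.0) * dt) * z1 + jump
    v[i + 1] = np.abs(v[i] + kappa * (theta - v[i]) * dt
                      + xi * np.sqrt(max(v[i], 0.0) * dt) * zv)   # keep variance non-negative

ann = np.sqrt(252 * 48)
print(f"annualised return volatility: {r.std() * ann:.3f} "
      f"(diffusive long-run target {np.sqrt(theta):.3f}, jumps add extra variance)")
```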

Keywords: stochastic volatility, affine jump-diffusion models, high frequency data, model specification, markov chain monte carlo

Procedia PDF Downloads 76
1259 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach

Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo

Abstract:

The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately. This is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined, and the necessary conditions are derived. In addition to this, the output error is defined and the least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squared output errors is established. From this point of view, the stochastic approximation approach is employed so that the optimal control policy is updated. Once a given tolerance is reached, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The result shows the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
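The overall iteration can be pictured on a toy scalar system: an output-feedback gain is updated by a stochastic-approximation step along a finite-difference estimate of the gradient of the mean squared output error. This generic Kiefer-Wolfowitz-style sketch mirrors the structure of such a procedure but is not the Hamiltonian-based update derived in the paper, and all model values are assumptions.

```python
# Toy sketch of the overall idea: tune an output-feedback gain K for a scalar
# linear-Gaussian system by stochastic approximation, using a finite-difference
# estimate of the gradient of the averaged squared output error.
import numpy as np

a, b, q_w, r_v, horizon = 0.95, 0.5, 0.05, 0.02, 200   # assumed model parameters

def cost(K, seed):
    rng = np.random.default_rng(seed)
    x, J = 0.5, 0.0
    for _ in range(horizon):
        y = x + rng.normal(0, np.sqrt(r_v))    # noisy output measurement
        u = -K * y                              # output feedback
        J += y ** 2
        x = a * x + b * u + rng.normal(0, np.sqrt(q_w))
    return J / horizon

K = 0.0
for k in range(1, 301):
    step, probe = 0.5 / k, 0.1 / k ** 0.25      # decreasing stochastic-approximation gains
    seed = 1000 + k                             # common random numbers for both probes
    grad = (cost(K + probe, seed) - cost(K - probe, seed)) / (2 * probe)
    K -= step * grad
print(f"estimated output-feedback gain K ≈ {K:.3f}")
```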

Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation

Procedia PDF Downloads 159
1258 Calibration of Hybrid Model and Arbitrage-Free Implied Volatility Surface

Authors: Kun Huang

Abstract:

This paper investigates whether the combination of local and stochastic volatility models can be calibrated exactly to any arbitrage-free implied volatility surface of European options. The risk-neutral Brownian bridge density is applied for the calibration of the leverage function of our hybrid model. Furthermore, the tails of the marginal risk-neutral density are generated by a Generalized Extreme Value distribution in order to capture the properties of asset returns. The local volatility is generated from the arbitrage-free implied volatility surface using the stochastic volatility inspired (SVI) parameterization.

Keywords: arbitrage free implied volatility, calibration, extreme value distribution, hybrid model, local volatility, risk-neutral density, stochastic volatility

Procedia PDF Downloads 248
1257 A Stochastic Volatility Model for Optimal Market-Making

Authors: Zubier Arfan, Paul Johnson

Abstract:

The electronification of financial markets and the rise of algorithmic trading have sparked a lot of interest from the mathematical community, for the market-making problem in particular. The research presented in this short paper solves the classic stochastic control problem in order to derive the strategy for a market-maker. It also shows how to calibrate and simulate the strategy with real limit order book data for back-testing. The ambiguity of limit-order priority in back-testing is dealt with by considering optimistic and pessimistic priority scenarios. The initial model, although it does outperform a naive strategy, assumes constant volatility and is therefore not best suited to the LOB data. The Heston model is introduced to describe the price and variance processes of the asset. The trader's constant absolute risk aversion utility function is optimised by numerically solving a three-dimensional Hamilton-Jacobi-Bellman partial differential equation to find the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is also less sensitive to calibration error than the constant volatility model.

Keywords: market-making, market microstructure, stochastic volatility, quantitative trading

Procedia PDF Downloads 129
1256 Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems

Authors: Tomoaki Hashimoto

Abstract:

Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
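A minimal computational sketch of the setting (not the LMI condition itself): for a linear discrete-time model with additive Gaussian disturbances, the unconstrained finite-horizon problem is solved at each step by a backward Riccati recursion, only the first input is applied, and the closed-loop state norm is monitored over a long run. The model and weights below are assumptions.

```python
# Sketch of a closed-loop simulation under model predictive control for a
# linear discrete-time system with additive Gaussian disturbances.  The
# unconstrained finite-horizon problem is solved each step by a backward
# Riccati recursion and only the first input is applied (receding horizon).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])         # assumed double-integrator-like model
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 20     # assumed weights and horizon

def mpc_first_step_gain(A, B, Q, R, N):
    P = Q.copy()
    for _ in range(N):                          # backward recursion to the initial time
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K                                    # gain applied at the current step

K = mpc_first_step_gain(A, B, Q, R, N)
rng = np.random.default_rng(5)
x, norms = np.array([2.0, 0.0]), []
for _ in range(500):
    u = -K @ x                                  # receding horizon: apply first input only
    x = A @ x + (B @ u).ravel() + rng.normal(0.0, 0.02, size=2)
    norms.append(np.linalg.norm(x))
print(f"max |x| over run: {max(norms):.3f}, mean |x|: {np.mean(norms):.3f}")
```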

Keywords: computational simulations, optimal control, predictive control, stochastic systems, discrete-time systems

Procedia PDF Downloads 409
1255 Capability Prediction of Machining Processes Based on Uncertainty Analysis

Authors: Hamed Afrasiab, Saeed Khodaygan

Abstract:

Prediction of machining process capability in the design stage plays a key role in achieving precision in the design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part and strongly affect the process capability and the final quality of the product. In this paper, an efficient systematic approach is given to investigate machining errors in order to predict the manufacturing errors of the parts and the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and their sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece–fixture system. The method is developed for uncertainty analysis based on both worst-case and statistical approaches. The application of the presented method is illustrated with an example, and the computational results are compared with Monte Carlo simulation results.
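The worst-case versus statistical comparison can be illustrated on a simple linear stack of error sources, where the feature error is a weighted sum of the individual source errors; the sensitivities and tolerances below are hypothetical stand-ins for the fixture-locator model developed in the paper.

```python
# Minimal comparison of worst-case and statistical (Monte Carlo) uncertainty
# analysis for a linear stack of dimensional error sources.
import numpy as np

rng = np.random.default_rng(6)
sens = np.array([1.0, -0.8, 0.5, 1.2])       # sensitivities of the feature error to each source
tol = np.array([0.02, 0.03, 0.01, 0.015])    # +/- tolerance of each source (mm)

worst_case = np.sum(np.abs(sens) * tol)      # all sources simultaneously at their extremes

n = 100_000                                  # Monte Carlo: each source ~ N(0, (tol/3)^2)
samples = rng.normal(0.0, tol / 3.0, size=(n, tol.size))
feature_error = samples @ sens
mc_3sigma = 3.0 * feature_error.std()

print(f"worst-case feature error: +/-{worst_case:.4f} mm")
print(f"Monte Carlo 3-sigma error: +/-{mc_3sigma:.4f} mm "
      f"(root-sum-square prediction: +/-{3 * np.sqrt(np.sum((sens * tol / 3) ** 2)):.4f} mm)")
```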

Keywords: process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis

Procedia PDF Downloads 290
1253 Implementation of Successive Interference Cancellation Algorithms in the 5G Downlink

Authors: Mokrani Mohamed Amine

Abstract:

In this paper, we have implemented successive interference cancellation algorithms in the 5G downlink. We have calculated the maximum throughput in Frequency Division Duplex (FDD) mode in the downlink, obtaining a value equal to 836932 b/ms. The transmitter is of Multiple Input Multiple Output (MIMO) type with eight transmitting and receiving antennas. Each of the eight antennas simultaneously transmits a data rate of 104616 b/ms that contains the binary messages of the three users; in this case, the Cyclic Redundancy Check (CRC) is negligible, and the MIMO category is spatial diversity. The technology used for this is called Non-Orthogonal Multiple Access (NOMA) with Quadrature Phase Shift Keying (QPSK) modulation. The transmission is done in a Rayleigh fading channel in the presence of obstacles. By applying the steps involved in SIC, the MIMO Successive Interference Cancellation (SIC) receiver with two transmitting and receiving antennas recovers its binary message without errors for certain values of transmission power such as 50 dBm; for user 1, it has 0.054485% errors when the transmitted power is 20 dBm and 0.00286763% errors for a transmitted power of 32 dBm, while for user 2 it has 0.0114705% errors when the transmitted power is 20 dBm and 0.00286763% errors for a power of 24 dBm.
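A much-reduced sketch of the core SIC idea is given below: two users' QPSK symbols are superposed with unequal power, sent over a flat Rayleigh channel, and the near user decodes and subtracts the far user's signal before detecting its own. This single-antenna, uncoded toy omits the 8x8 MIMO, LDPC and throughput aspects of the paper, and the power split and SNR are illustrative only.

```python
# Minimal power-domain NOMA sketch: two users' QPSK symbols are superposed with
# different power and sent over a flat Rayleigh channel; the near user applies
# SIC (decode the far user's symbol, subtract it, then decode its own).
import numpy as np

rng = np.random.default_rng(7)
n_sym = 200_000
p_far, p_near = 0.8, 0.2                 # power split (far user gets more power)
snr_db = 20
noise_var = 10 ** (-snr_db / 10)

def qpsk(bits):                          # Gray-mapped QPSK, unit average power
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

def qpsk_demod(sym):                     # hard decisions back to the interleaved bit order
    return np.stack([(sym.real < 0), (sym.imag < 0)], axis=1).astype(int).ravel()

bits_far = rng.integers(0, 2, 2 * n_sym)
bits_near = rng.integers(0, 2, 2 * n_sym)
x = np.sqrt(p_far) * qpsk(bits_far) + np.sqrt(p_near) * qpsk(bits_near)

h = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)   # Rayleigh fading
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
y = h * x + noise

# SIC at the near user: equalise, decode the far user's symbol, subtract, decode own
y_eq = y / h
far_hat_bits = qpsk_demod(y_eq)                      # far user's signal dominates
far_hat_sym = np.sqrt(p_far) * qpsk(far_hat_bits)
near_hat_bits = qpsk_demod(y_eq - far_hat_sym)

print(f"far-user message BER (first SIC stage): {np.mean(far_hat_bits != bits_far):.4f}")
print(f"near-user message BER (after SIC):      {np.mean(near_hat_bits != bits_near):.4f}")
```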

Keywords: 5G, NOMA, QPSK, TBS, LDPC, SIC, capacity

Procedia PDF Downloads 85
1253 A Cohort and Empirical Based Multivariate Mortality Model

Authors: Jeffrey Tzu-Hao Tsai, Yi-Shan Wong

Abstract:

This article proposes a cohort-age-period (CAP) model to characterize multi-population mortality processes using cohort, age, and period variables. Distinct from factor-based Lee-Carter-type decomposition mortality models, this approach is empirically based and incorporates the age, period, and cohort variables into the equation system. The model not only provides useful intuition for explaining multivariate mortality change rates but also performs better in forecasting future patterns. Using US and UK mortality data and performing ten-year out-of-sample tests, our approach shows smaller mean square errors in both countries compared to the models in the literature.

Keywords: longevity risk, stochastic mortality model, multivariate mortality rate, risk management

Procedia PDF Downloads 31
1252 Stochastic Modeling for Parameters of Modified Car-Following Model in Area-Based Traffic Flow

Authors: N. C. Sarkar, A. Bhaskar, Z. Zheng

Abstract:

The driving behavior in area-based (i.e., non-lane-based) traffic is induced by the presence of other individuals in the choice space within the driver’s visual perception area. The driving behavior of a subject vehicle is constrained by potential leaders, and the leaders change frequently over time. This paper determines stochastic models for the parameters of a modified intelligent driver model (MIDM) in area-based traffic (as in developing countries). Parametric and non-parametric distributions are presented to fit the parameters of the MIDM. The goodness of fit for each parameter is measured in two ways: graphically and statistically. The quantile-quantile (Q-Q) plot is used for a graphical comparison of a theoretical distribution with a parameter’s empirical distribution, and the Kolmogorov-Smirnov (K-S) test is used as a statistical measure of the fit between a parameter and a theoretical distribution. The distributions are fitted to a set of estimated MIDM parameters. The parameters are estimated from real vehicle trajectory data from India. The fit of each parameter to a stochastic model is well represented. The results support the applicability of the proposed modeling of MIDM parameters in area-based traffic flow simulation.
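The goodness-of-fit workflow can be sketched with SciPy on a synthetic parameter sample (synthetic data only, not the estimates from the Indian trajectory set): fit a candidate distribution, run the K-S test against the fitted CDF, and draw the Q-Q plot.

```python
# Minimal sketch of the goodness-of-fit workflow: fit a candidate distribution
# to a sample of an estimated car-following parameter (synthetic data here),
# then assess it with a Kolmogorov-Smirnov test and a quantile-quantile plot.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
desired_gap = rng.lognormal(mean=0.3, sigma=0.4, size=300)   # hypothetical parameter sample

# Fit a lognormal distribution and run the K-S test against the fitted CDF
# (note: the p-value is optimistic when parameters are estimated from the same sample).
shape, loc, scale = stats.lognorm.fit(desired_gap, floc=0.0)
ks_stat, p_value = stats.kstest(desired_gap, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: shape={shape:.3f}, scale={scale:.3f}; "
      f"K-S statistic={ks_stat:.3f}, p-value={p_value:.3f}")

# Q-Q plot of the sample against the fitted distribution (graphical check)
stats.probplot(desired_gap, dist=stats.lognorm, sparams=(shape, loc, scale), plot=plt)
plt.title("Q-Q plot: desired-gap parameter vs fitted lognormal")
plt.show()
```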

Keywords: area-based traffic, car-following model, micro-simulation, stochastic modeling

Procedia PDF Downloads 129
1251 Multi-Period Supply Chain Design under Uncertainty

Authors: Amir Azaron

Abstract:

In this research, a stochastic programming approach is developed for designing supply chains with uncertain parameters. Demands and selling prices of products at the markets are considered as the uncertain parameters. The proposed mathematical model is a multi-period, two-stage stochastic program, which takes into account the selection of retailer sites, suppliers, production levels, inventory levels, transportation modes to be used for shipping goods, and shipping quantities among the entities of the supply chain network. The objective function is to maximize the chain’s net present value (NPV). In order to maximize the chain’s NPV, the sum of first-stage investment costs on retailers and the expected second-stage processing, inventory-holding and transportation costs should be kept as low as possible over multiple periods. The effects of supply uncertainty, where suppliers are unreliable, on the efficiency of the supply chain will also be investigated.
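A toy example of the two-stage structure only (nowhere near the full multi-period model): the first-stage decision is a retailer capacity investment made before demand is known, and the second stage collects scenario-dependent margin limited by that capacity; the expected NPV is maximised by simple enumeration over hypothetical data.

```python
# Toy illustration of a two-stage stochastic decision: invest in retailer
# capacity now (first stage), then sell under uncertain demand (second-stage
# recourse).  All figures are hypothetical; the paper's model additionally
# covers multiple periods, sites, suppliers, inventories and transport modes.
import numpy as np

scenarios = np.array([80.0, 120.0, 160.0])      # hypothetical demand scenarios
probs = np.array([0.3, 0.5, 0.2])
price, unit_cost, invest_cost, discount = 10.0, 4.0, 3.0, 0.9

def expected_npv(capacity):
    first_stage = -invest_cost * capacity                       # investment paid now
    sales = np.minimum(capacity, scenarios)                     # recourse: sell what capacity allows
    second_stage = discount * np.sum(probs * (price - unit_cost) * sales)
    return first_stage + second_stage

grid = np.arange(0.0, 201.0, 1.0)
best = grid[np.argmax([expected_npv(c) for c in grid])]
print(f"best capacity: {best:.0f} units, expected NPV: {expected_npv(best):.1f}")
```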

Keywords: supply chain management, stochastic programming, multiobjective programming, inventory control

Procedia PDF Downloads 280
1250 Stochastic Control of Decentralized Singularly Perturbed Systems

Authors: Walid S. Alfuhaid, Saud A. Alghamdi, John M. Watkins, M. Edwin Sawan

Abstract:

Designing a controller for stochastic, decentralized, interconnected large-scale systems usually involves a high degree of complexity and computational effort. Noise, observability and controllability of all system states, connectivity, and channel bandwidth are other constraints on design procedures for distributed large-scale systems. The quasi-steady-state model investigated in this paper is a reduced-order model of the original system obtained using singular perturbation techniques. This paper presents an optimal control synthesis for designing an observer-based feedback controller by standard stochastic control theory techniques, using the Linear Quadratic Gaussian (LQG) approach and Kalman filter design, with lower complexity and computation requirements. A numerical example is given at the end to demonstrate the efficiency of the proposed method.

Keywords: decentralized, optimal control, output, singular perturb

Procedia PDF Downloads 348
1249 Supplier Selection in a Scenario Based Stochastic Model with Uncertain Defectiveness and Delivery Lateness Rates

Authors: Abeer Amayri, Akif A. Bulgak

Abstract:

Due to today’s globalization and the outsourcing practices of companies, supply chain (SC) performance has become more dependent on the efficient movement of material among geographically dispersed places, where there is a greater chance of disruption. One such source of disruption is the quality and delivery uncertainty of outsourcing. These uncertainties could lead products to be unsafe and, as in a number of recent examples, companies may end up recalling their products. As a result of these problems, there is a need to develop a methodology for selecting suppliers globally in view of the risks associated with low quality and late delivery. Accordingly, we developed a two-stage stochastic model that captures the risks associated with uncertainty in quality and delivery, as well as a solution procedure for the model. The stochastic model developed simultaneously optimizes supplier selection and purchase quantities under price discounts over a time horizon. In particular, our target is the study of global organizations with multiple sites and multiple overseas suppliers, where pricing is offered in the suppliers’ local currencies. Our proposed methodology is applied to a case study of a US automotive company having two assembly plants and four potential global suppliers to illustrate how the proposed model works in practice.

Keywords: global supply chains, quality, stochastic programming, supplier selection

Procedia PDF Downloads 434
1248 Human Error Analysis in the USA Marine Accidents Reports

Authors: J. Sánchez-Beaskoetxea

Abstract:

The analysis of accidents, such as marine accidents, is one of the most useful instruments for avoiding future accidents. In the case of marine accidents, from a simple collision of a small boat in a port to the wreck of a gigantic tanker ship, the study of the causes of accidents is the basis of a great part of international maritime legislation. Some countries have official institutions that investigate all accidents in which a ship flying their flag is involved. In the case of the USA, the National Transportation Safety Board (NTSB) is responsible for these investigations. The NTSB, after a deep investigation into each accident, publishes a Marine Accident Report with the possible cause of the accident. This paper analyses all the Marine Accident Reports published by the NTSB and focuses its attention especially on the human errors that led to the reported accidents. In this research, the different human errors made by crew members are catalogued into 10 different groups. After a complete analysis of all the reports, a statistical analysis of the typology of human errors in marine accidents is presented in order to use it as a tool to avoid the same errors in the future.

Keywords: human error, marine accidents, ship crew, USA

Procedia PDF Downloads 397
1247 The Omani Learner of English Corpus: Source and Tools

Authors: Anood Al-Shibli

Abstract:

Designing a learner corpus is not an easy task to accomplish, because dealing with learners’ language involves many variables which might affect the results of any study based on learners’ language production (spoken and written). It is also essential to design a learner corpus systematically, especially when it is intended to serve as a reference for language research. Therefore, the design of the Omani Learner Corpus (OLEC) has undergone many explicit and systematic considerations. These criteria can be regarded as the foundation for designing any learner corpus that is to be exploited effectively in studies of language use and language learning. In addition, OLEC is a manually error-annotated corpus. Error annotation in learner corpora is essential; however, it is time-consuming and prone to errors. Consequently, a navigation tool was designed to help the annotators insert error codes in order to make the error-annotation process more efficient and consistent. To assure accuracy, an error-annotation procedure was followed to annotate OLEC, and some preliminary findings are noted. One of the main results of this procedure is the creation of an error-annotation system based on Omani learners’ English language production. Because OLEC is still in its first stages, the primary findings relate to only one level of proficiency and one error type, namely verb-related errors. It is found that Omani learners in OLEC tend to make more errors in forming the verb, followed by problems in verb agreement. Comparing the results to other error-based studies indicates that Omani learners tend to make basic verb errors of the kind found at lower levels of proficiency. To this end, it is essential to acknowledge that examining learners’ errors can give insights into language acquisition and language learning, and that most errors do not happen randomly but occur systematically among language learners.

Keywords: error-annotation system, error-annotation manual, learner corpora, verbs related errors

Procedia PDF Downloads 120
1246 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. The provided MATLAB simulation and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that it is worth exploring and that, with some further adjustments and improvements, it can be enviably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 366
1245 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample comprises 28 second-year English major students from the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection consists of four writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise seven types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong verb tense forms, singular or plural noun endings, run-on sentences, wrong verb pattern forms, and lack of parallel structure. Moreover, it is found that the results of writing error correction using coded indirect corrective feedback and error treatment reveal an overall reduction in the frequent English writing errors and an increase in students’ achievement in the writing of short texts, significant at the .05 level.

Keywords: coded indirect corrective feedback, error correction, error treatment, frequent English writing errors

Procedia PDF Downloads 216
1244 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic

Authors: Michael Lousis

Abstract:

The most conspicuous and significant errors made by learners were systematically identified during three years of testing of their progress in learning Arithmetic throughout the development of the Kassel Project in England and Greece. How retentive these errors were over the three years of officially provided school instruction in Arithmetic in these countries has also been shown. The learners’ errors in Arithmetic stemmed from a sample comprising two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students’ participation in each testing session during the development of the three-year project, in both domains simultaneously, Arithmetic and Algebra. Specific teaching practices have been devised and are presented in this study for subverting these learners’ errors, which were found to be retentive at the level of the nationally provided mathematical education of each country. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction and control of the errors, on conceptual metaphor, and on an analysis that tried to identify the required cognitive components and skills of the specific tasks, in terms of Psychology and Cognitive Science as applied to information processing. The aim of implementing these instructional practices is not only the subversion of these errors but the achievement of mathematical competence, defined as constituted of three elements: appropriate representations, appropriate meaning, and appropriately developed schemata. However, praxis is of paramount importance, because there is no ‘real truth’ independent of science and because praxis serves as quality control when it takes the form of a cognitive method.

Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors

Procedia PDF Downloads 296
1243 Post-Secondary Faculty Treatment of Non-Native English-Speaking Student Writing Errors in Academic Subject Courses

Authors: Laura E. Monroe

Abstract:

As more non-native English-speaking students enroll in English-medium universities, ever more faculty will instruct students who are unprepared for the rigors of post-secondary academic writing in English. Many faculty members lack training and knowledge regarding the assessment of non-native English-speaking students’ writing, as well as the ability to provide effective feedback. This quantitative study investigated the possible attitudinal factors, including demographics, that might affect faculty preparedness and grading practices for both native and non-native English-speaking students’ academic writing and plagiarism, as well as the reasons faculty do not deduct points for both populations’ writing errors. Structural equation modeling and SPSS Statistics were employed to analyze the results of a faculty questionnaire disseminated to individuals who had taught non-native English-speaking students in academic subject courses. The findings from this study illustrated that faculty members’ native language, years taught, and institution type were significant factors in not deducting points for academic writing errors and plagiarism, and that the major reasons for not deducting points for errors were that faculty had too many students to grade, did not have enough training in assessing students’ written errors and plagiarism, and felt that the errors and plagiarism would have taken too long to explain. The practical implications gleaned from these results can be applied to most departments in English-medium post-secondary institutions regarding faculty preparedness and training in student academic writing errors and plagiarism, and recommendations for future research are given for similar types of preparation and guidance for post-secondary faculty, regardless of degree path or academic subject.

Keywords: assessment, faculty, non-native English-speaking students, writing

Procedia PDF Downloads 128
1242 Collocation Errors Made by Saudi Learners of English

Authors: Pakenam Shiha, Nadine Lacsina

Abstract:

Systematic and in-depth analyses of ESL learners’ lexical errors in general, and of collocation errors in particular, are relatively rare. Such analysis proves crucial in understanding how ESL learners construct and use these fixed expressions. The collocational competence of ESL learners is necessary for achieving a native-like proficiency level, which is one of the objectives of foundation programs. This study aims to examine the collocational competence of 50 Saudi foundation program students and identify the collocation errors that they often make. Furthermore, using a questionnaire, the challenges that students encounter in learning collocations and the ways in which their L1 affects their ability to recognize these expressions are identified. To identify the lexical errors and the collocational competence of the students, a collocation test was administered. The 150-item lexical collocation test consists of verb-noun and adjective-noun structures. The results of the study reveal that there is a significant difference between the students’ scores on the verb-noun and adjective-noun structures. The majority of errors were recorded in the adjective-noun structures, due to the influence of the students’ L1 on their English collocations and their inability to distinguish between synonyms. Moreover, some challenges that students encountered were problems in translation, non-exposure to certain collocations, and the degree of L1-L2 difference. All in all, the findings of this study can be interpreted in relation to the students’ proficiency level and L2 instruction. Other findings of the study provide insights into language pedagogy, specifically strategies to help students learn collocations more effectively.

Keywords: collocations, ESL, applied linguistics, lexical collocations

Procedia PDF Downloads 101
1241 An Accelerated Stochastic Gradient Method with Momentum

Authors: Liang Liu, Xiaopeng Luo

Abstract:

In this paper, we propose an accelerated stochastic gradient method with momentum. The momentum term is a weighted average of the generated gradients, with weights that decay in inverse proportion to the iteration count. Stochastic gradient descent with momentum (SGDM) uses weights that decay exponentially with the iteration count to generate the momentum term. Using exponential decay weights, variants of SGDM with obscure and complicated forms have been proposed to achieve better performance. However, the momentum update rules of our method are as simple as those of SGDM. We provide theoretical convergence analyses, which show that both the exponential decay weights and our inverse proportional decay weights can confine the variance of the parameter movement to a region. Experimental results show that our method works well on many practical problems and outperforms SGDM.
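The abstract does not spell out the exact update rule, so the sketch below implements one plausible reading, in which the momentum at step t is the weighted average of all past minibatch gradients with weight proportional to 1/(age + 1), and compares it with classical exponential-decay SGDM on a small least-squares problem. Treat it as an interpretation, not the authors' algorithm.

```python
# Sketch on a toy least-squares problem: classical SGDM (exponential-decay
# weights) versus a momentum built as a weighted average of all past minibatch
# gradients with weight proportional to 1/(age + 1) -- one plausible reading of
# "inverse proportional decay" weights, not the paper's exact rule.
import numpy as np

rng = np.random.default_rng(9)
n, d = 500, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def minibatch_grad(x, batch=32):
    idx = rng.integers(0, n, batch)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

def run(momentum_rule, steps=2000, lr=0.02):
    x, grads, m = np.zeros(d), [], np.zeros(d)
    for t in range(1, steps + 1):
        g = minibatch_grad(x)
        if momentum_rule == "exponential":            # classical SGDM
            m = 0.9 * m + 0.1 * g
        else:                                         # inverse-proportional weights
            grads.append(g)
            w = 1.0 / np.arange(len(grads), 0, -1)    # oldest gets 1/t, newest gets 1
            m = np.average(grads, axis=0, weights=w)
        x -= lr * m
    return np.linalg.norm(A @ x - b) ** 2 / n

print("final mean squared residual, SGDM:                ", round(run("exponential"), 4))
print("final mean squared residual, inverse-proportional:", round(run("inverse"), 4))
```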

Keywords: exponential decay rate weight, gradient descent, inverse proportional decay rate weight, momentum

Procedia PDF Downloads 140
1240 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We have carried out the optimal synthesis of a root-mean-square objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation complement the regular noises in the observation function. The synthesis has been carried out for continuous-time linear stochastic systems.

Keywords: mathematical expectation, filtration, anomalous noise, memory

Procedia PDF Downloads 217
1239 Estimation of Probabilistic Fatigue Crack Propagation Models of AZ31 Magnesium Alloys under Various Load Ratio Conditions by Using the Interpolation of a Random Variable

Authors: Seon Soon Choi

Abstract:

The essential purpose is to present a good fatigue crack propagation model describing the stochastic fatigue crack growth behavior of a rolled magnesium alloy, AZ31, under various load ratio conditions. Fatigue crack propagation experiments were carried out in laboratory air under four load ratio conditions, R, using AZ31 to investigate the crack growth behavior. The stochastic fatigue crack growth behavior was analyzed using an interpolation of a random variable, Z, introduced into an empirical fatigue crack propagation model. The empirical fatigue models used in this study are the Paris-Erdogan model, the Walker model, the Forman model, and the modified Forman model. It was found that the random variable is useful in describing the stochastic fatigue crack growth behavior under various load ratio conditions. A good probabilistic model describing the stochastic fatigue crack growth behavior under various load ratio conditions is also proposed.
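The role of the random variable can be sketched for the simplest of the four models, the Paris-Erdogan law, by letting a lognormal factor Z scale the crack growth rate and propagating it through a Monte Carlo life calculation. The material constants below are hypothetical, not the fitted values for rolled AZ31, and the load-ratio-dependent Walker and Forman variants are omitted.

```python
# Monte Carlo sketch of a stochastic Paris-Erdogan law, da/dN = Z * C * (dK)^m,
# where the random variable Z captures specimen-to-specimen scatter.
import numpy as np

rng = np.random.default_rng(10)
C, m = 1e-10, 3.0                      # assumed Paris constants (a in mm, dK in MPa*sqrt(mm))
delta_sigma, Y = 80.0, 1.12            # assumed stress range (MPa) and geometry factor
a0, a_crit, dN = 1.0, 10.0, 100        # initial / critical crack length (mm), cycle block size

def cycles_to_failure(z):
    a, n_cycles = a0, 0
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)
        a += z * C * dK ** m * dN      # advance the crack over a block of cycles
        n_cycles += dN
    return n_cycles

Z = rng.lognormal(mean=0.0, sigma=0.3, size=500)   # random scatter factor per specimen
lives = np.array([cycles_to_failure(z) for z in Z])
print(f"median life: {np.median(lives):.0f} cycles, "
      f"5th-95th percentile: {np.percentile(lives, 5):.0f}-{np.percentile(lives, 95):.0f}")
```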

Keywords: magnesium alloys, fatigue crack propagation model, load ratio, interpolation of random variable

Procedia PDF Downloads 396