Search results for: grammatical error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2003

1853 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function, and the kernel and the bandwidth are its most important components. In this investigation, we replace the single kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We also propose a new data-based method, based on the plug-in technique from density estimation, to select the bandwidth for the new estimator. We evaluate the new estimator and the new technique using simulations and real-life data.
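Since the abstract does not give the estimator's closed form, a minimal sketch of a kernel distribution function estimator built from a linear combination of two integrated kernels (Gaussian and Epanechnikov) may help; the mixing weight `alpha`, the bandwidth value and the sample below are illustrative assumptions, not the authors' choices.

```python
import math
import random

def gauss_cdf(u):
    # Integrated Gaussian kernel: the standard normal CDF
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def epan_cdf(u):
    # Integrated Epanechnikov kernel: antiderivative of 0.75*(1 - u^2) on [-1, 1]
    if u <= -1.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return 0.75 * u - 0.25 * u ** 3 + 0.5

def kdfe_combo(x, data, h, alpha=0.5):
    """KDFE with a linear combination of two integrated kernels:
    F_hat(x) = (1/n) * sum_i [alpha*K1 + (1-alpha)*K2]((x - X_i)/h)."""
    total = 0.0
    for xi in data:
        u = (x - xi) / h
        total += alpha * gauss_cdf(u) + (1.0 - alpha) * epan_cdf(u)
    return total / len(data)

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
est = kdfe_combo(0.0, sample, h=0.4)  # true F(0) = 0.5 for N(0, 1)
```

Selecting `h` by the plug-in route described in the abstract would replace the fixed `h=0.4` above with a data-driven value.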

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 544
1852 Estimation of Slab Depth, Column Size and Rebar Location of Concrete Specimen Using Impact Echo Method

Authors: Y. T. Lee, J. H. Na, S. H. Kim, S. U. Hong

Abstract:

In this study, an experimental investigation is conducted to estimate the slab depth, column size and rebar location of concrete specimens using the Impact Echo (IE) method, a stress-wave-based non-destructive test. The slab specimens had overall dimensions of 1800×300 mm with six different depths: 150 mm, 180 mm, 210 mm, 240 mm, 270 mm and 300 mm. The concrete column specimens were manufactured in three sizes: 300×300×300 mm, 400×400×400 mm and 500×500×500 mm. For the rebar-location estimation, ∅22 mm rebar was embedded in a 300×370×200 mm specimen at cover depths of 130 mm and 150 mm, measured from the top surface to the top of the rebar. The overall mean error rate was 3.1% for slab depth and 1.7% for column size. The mean error rate of rebar location was 1.72% for the top bars and 1.19% for the bottom bars, with an overall mean of 1.5%, showing relatively high accuracy.
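The impact-echo depth estimate rests on the standard plate-thickness relation T = β·Cp/(2·f); a small sketch follows, with an error-rate check of the kind reported above. The wave speed, peak frequency and shape factor are illustrative values, not the specimen data from the paper.

```python
def impact_echo_depth(p_wave_speed_mps, peak_freq_hz, beta=0.96):
    """Standard impact-echo thickness relation: T = beta * Cp / (2 * f),
    where beta is a shape factor (about 0.96 for plate-like members)."""
    return beta * p_wave_speed_mps / (2.0 * peak_freq_hz)

def error_rate_pct(estimated, actual):
    # Error rate as used in the abstract: relative error in percent
    return abs(estimated - actual) / actual * 100.0

depth_m = impact_echo_depth(4000.0, 6400.0)  # 0.96 * 4000 / (2 * 6400) = 0.3 m
err = error_rate_pct(depth_m, 0.30)          # estimate vs. known slab depth
```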

Keywords: impact echo method, estimation, slab depth, column size, rebar location, concrete

Procedia PDF Downloads 317
1851 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator

Authors: K. Kouzi

Abstract:

For stable, high-performance direct torque and flux control (DTFC) of a double star induction motor (DSIM) drive, proper on-line adaptation of the stator resistance is essential. The stator resistance varies with operating conditions, which introduces error into the estimated flux position and the magnitude of the stator flux; error in the estimated stator flux deteriorates the performance of the DTFC drive, and the effect is especially pronounced at low speed. Our aim is therefore to overcome the sensitivity of the DTFC to stator resistance variation by proposing on-line fuzzy estimation of the stator resistance. The fuzzy estimation method corrects the stator resistance on-line from the stator current estimation error and its rate of change, with the fuzzy logic controller giving the stator resistance increment at its output. The main advantage of the suggested control algorithm is that it avoids the drive instability that may occur in certain situations and ensures tracking of the actual stator resistance. The validity of the technique and the improvement in overall system performance are proved by the results.

Keywords: direct torque control, dual stator induction motor, Fuzzy Logic estimation, stator resistance adaptation

Procedia PDF Downloads 296
1850 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?

Authors: Gu Pang, Bartosz Gebka

Abstract:

We forecast the demand of total container throughput at Indonesia's largest seaport, Tanjung Priok Port. We propose four univariate forecasting models: SARIMA, the additive Seasonal Holt-Winters, the multiplicative Seasonal Holt-Winters, and the Vector Error Correction Model. Our aim is to provide insights into whether forecasting total container throughput from the historical aggregated port throughput time series is superior to summing up the best individual terminal forecasts. We test the monthly port and individual-terminal container throughput time series between 2003 and 2013, evaluating the forecasting models on Mean Absolute Error and Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit; the Vector Error Correction Model provides the best model fits and forecasts for individual terminals. The total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
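The accuracy comparison described here uses MAE and RMSE; a self-contained sketch of both metrics applied to an aggregate forecast versus a sum of per-terminal forecasts follows. The throughput numbers are made up for illustration, not Tanjung Priok data.

```python
import math

def mae(actual, forecast):
    # Mean Absolute Error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root Mean Squared Error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Illustrative monthly totals (e.g. thousand TEU) and two competing forecasts:
actual_total = [100, 110, 120, 130]
aggregate_fc = [102, 108, 123, 128]    # model fitted to the port total
terminal_sum_fc = [98, 114, 115, 136]  # sum of per-terminal forecasts

agg_mae, agg_rmse = mae(actual_total, aggregate_fc), rmse(actual_total, aggregate_fc)
sum_mae, sum_rmse = mae(actual_total, terminal_sum_fc), rmse(actual_total, terminal_sum_fc)
```

With these invented numbers the aggregate forecast wins on both metrics, mirroring the paper's qualitative finding.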

Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput

Procedia PDF Downloads 474
1849 Modeling of Diurnal Pattern of Air Temperature in a Tropical Environment: Ile-Ife and Ibadan, Nigeria

Authors: Rufus Temidayo Akinnubi, M. O. Adeniyi

Abstract:

Existing diurnal air temperature models simulate night-time air temperature over Nigeria with high biases. An improved parameterization is presented for modeling the diurnal pattern of air temperature (Ta), applicable to the calculation of turbulent heat fluxes in global climate models, based on surface-layer observations from the Nigeria Micrometeorological Experimental site (NIMEX). Five diurnal Ta models for estimating hourly Ta from the daily maximum, daily minimum, and daily mean air temperature were validated using the root mean square error (RMSE), the mean bias error (MBE) and scatter graphs. The original Fourier series model performed better for unstable conditions, while stable-condition Ta was strongly overestimated with a large error. The model was improved by including the atmospheric cooling rate, which accounts for the temperature inversion that occurs under nocturnal boundary-layer conditions. The MBE and RMSE of the modified Fourier series model were reduced by 4.45 °C and 3.12 °C during the transitional period from dry to wet stable atmospheric conditions. The modified Fourier series model gave good estimates of the diurnal pattern of Ta when compared with other existing models for a tropical environment.
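A one-harmonic Fourier sketch of the hourly-Ta-from-Tmax/Tmin idea is shown below; the five-model comparison and the nocturnal cooling-rate correction described above are beyond this sketch, and the 14:00 peak hour and the temperatures are assumptions.

```python
import math

def diurnal_ta(hour, t_max, t_min, peak_hour=14.0):
    """First-harmonic Fourier model of the diurnal air-temperature cycle:
    Ta(h) = Tmean + A * cos(2*pi*(h - peak_hour)/24), with A = (Tmax - Tmin)/2,
    so Ta peaks at `peak_hour` and bottoms out 12 hours later."""
    t_mean = 0.5 * (t_max + t_min)
    amp = 0.5 * (t_max - t_min)
    return t_mean + amp * math.cos(2.0 * math.pi * (hour - peak_hour) / 24.0)

# Hourly cycle for an assumed tropical day: Tmax = 33 °C at 14:00, Tmin = 21 °C
temps = [diurnal_ta(h, t_max=33.0, t_min=21.0) for h in range(24)]
```

The paper's modification effectively replaces the symmetric night-time branch of this cosine with an exponential-like cooling rate.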

Keywords: air temperature, mean bias error, Fourier series analysis, surface energy balance

Procedia PDF Downloads 202
1848 Is Okun's Law Valid in Tunisia?

Authors: El Andari Chifaa, Bouaziz Rached

Abstract:

The central focus of this paper is to check whether Okun's law is valid in Tunisia. For this purpose, we used quarterly time series data for the period 1990Q1-2014Q1. First, we applied the error correction model instead of the difference version of Okun's law: the Engle-Granger and Johansen tests were employed to find a long-run association between unemployment and production, and an error correction mechanism (ECM) was used for the short-run dynamics. Second, we used the gap version of Okun's law, where the estimation is carried out with three band-pass filters, mathematical tools used in macroeconomics and especially in business cycle theory. The findings of the study indicate that the inverse relationship between unemployment and output is verified in both the short and the long run, and that Okun's law holds for the Tunisian economy, but with an Okun's coefficient lower than required. Our empirical results therefore have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth in a context of lower unemployment growth.
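The difference version of Okun's law that the ECM replaces is a simple regression of the change in unemployment on output growth; a pure-Python OLS sketch with invented quarterly figures (not Tunisian data) illustrates the sign of the Okun coefficient.

```python
def ols(x, y):
    """Simple OLS fit of y = a + b*x in closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Illustrative quarterly data: real GDP growth (%) and the change in the
# unemployment rate (percentage points) over the same quarters.
gdp_growth = [3.0, 1.0, -1.0, 4.0, 2.0, 0.0]
du = [-0.4, 0.2, 0.8, -0.7, -0.1, 0.3]

a, b = ols(gdp_growth, du)  # b is the Okun coefficient; expected negative
```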

Keywords: Okun’s law, validity, unit root, cointegration, error correction model, bandpass filters

Procedia PDF Downloads 290
1847 IPO Valuation and Profitability Expectations: Evidence from the Italian Exchange

Authors: Matteo Bonaventura, Giancarlo Giudici

Abstract:

This paper analyses the valuation process of companies listed on the Italian Exchange at their Initial Public Offering (IPO) in the period 2000-2009. One of the most common valuation techniques declared in IPO prospectuses to determine the offer price is the Discounted Cash Flow (DCF) method. We develop a 'reverse engineering' model to recover the short-term profitability implied in the offer prices. We show that there is a significant optimistic bias in the estimation of future profitability compared to its ex-post realization, and that the mean forecast error is substantially large; yet a similar error characterizes the estimations carried out by analysts evaluating non-IPO companies. The forecast error is larger the faster the company's recent growth, the higher the leverage of the IPO firm, and the more companies have issued equity on the market. IPO companies generally exhibit better operating performance before the listing than comparable listed companies, while after the flotation they do not perform significantly differently in terms of return on invested capital. Pre-IPO book building activity plays a significant role in partially reducing the forecast error and revising expectations, while the market price on the first day of trading does not contain information that further reduces forecast errors.
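A 'reverse engineering' DCF can be sketched as root-finding: given the offer price, solve for the growth in cash flows that the price implies. This is a generic illustration under assumed inputs (horizon, WACC, terminal growth, prices), not the paper's calibration.

```python
def dcf_value(fcf0, growth, wacc, years=5, terminal_growth=0.02):
    """Present value of free cash flows growing at `growth` for `years`,
    plus a Gordon terminal value, all discounted at `wacc`."""
    pv = 0.0
    fcf = fcf0
    for t in range(1, years + 1):
        fcf = fcf * (1.0 + growth)
        pv += fcf / (1.0 + wacc) ** t
    terminal = fcf * (1.0 + terminal_growth) / (wacc - terminal_growth)
    return pv + terminal / (1.0 + wacc) ** years

def implied_growth(offer_price, fcf0, wacc, lo=-0.5, hi=1.0):
    """Reverse-engineer the growth rate that makes the DCF value match the
    observed offer price (bisection; dcf_value is increasing in growth)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dcf_value(fcf0, mid, wacc) < offer_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = implied_growth(offer_price=150.0, fcf0=10.0, wacc=0.10)
```

Comparing `g` with the growth actually realized ex post is the forecast error the paper measures.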

Keywords: initial public offerings, DCF, book building, post-IPO profitability drop

Procedia PDF Downloads 319
1846 Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries

Authors: Elham Alaee, Mousa Shamsi, Hossein Ahmadi, Soroosh Nazem, Mohammad Hossein Sedaaghi

Abstract:

The human face has a fundamental role in the appearance of individuals, so the importance of facial surgery is undeniable, and appropriate, accurate facial skin segmentation is needed in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not cope well with noisy images and outliers, in this paper we exploit the Possibilistic C-Means (PCM) algorithm to segment the facial skin. For this purpose, we first convert facial images from the RGB to the YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. For a better understanding of the proposed algorithm, the FCM and Expectation-Maximization (EM) algorithms were also applied to facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region's area error of 0.045.
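The key difference between PCM and FCM is the typicality update, which decouples a point's degree of belonging to a cluster from all other clusters; a one-function sketch of the standard PCM typicality (the fuzzifier `m` and scale `eta` values are illustrative):

```python
def pcm_typicality(dist_sq, eta, m=2.0):
    """Possibilistic C-Means typicality: t = 1 / (1 + (d^2 / eta)^(1/(m-1))).
    Unlike FCM memberships, typicalities need not sum to 1 across clusters,
    which is what makes PCM robust to noise and outliers."""
    return 1.0 / (1.0 + (dist_sq / eta) ** (1.0 / (m - 1.0)))

near = pcm_typicality(dist_sq=0.01, eta=1.0)  # point close to a centroid
far = pcm_typicality(dist_sq=100.0, eta=1.0)  # outlier far from the centroid
```

An outlier receives a low typicality in every cluster instead of being forced to share a unit membership, as it would be under FCM.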

Keywords: facial image, segmentation, PCM, FCM, skin error, facial surgery

Procedia PDF Downloads 552
1845 Low-Cost Reversible Logic Serial Multipliers with Error Detection Capability

Authors: Mojtaba Valinataj

Abstract:

Reversible logic has recently received much attention as a new approach to reducing power consumption. On the other hand, processing systems are vulnerable to various external disturbances. In this paper, error-detecting reversible logic serial multipliers are proposed by incorporating parity-preserving gates. New designs are presented for signed parity-preserving serial multipliers based on Booth's algorithm, exploiting new arrangements of existing gates. The experimental results show that the proposed 4×4 multipliers achieve up to 20%, 35%, and 41% improvements in the number of constant inputs, quantum cost, and gate count, respectively, as the reversible logic criteria, compared to previous designs. Furthermore, all the proposed designs have been generalized to n×n multipliers, with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.
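Parity preservation means the XOR of a gate's inputs equals the XOR of its outputs, so a single stuck-at fault flips the parity and becomes detectable. The Fredkin (controlled-swap) gate, a standard parity-preserving reversible gate, illustrates the property; this is a sketch of the concept, not the paper's multiplier design.

```python
from itertools import product

def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if the control c is 1, swap a and b.
    The gate is conservative (it preserves the number of 1s), hence
    parity-preserving, and it is its own inverse, hence reversible."""
    return (c, b, a) if c else (c, a, b)

# Check parity preservation and reversibility over all 8 input patterns.
outputs = set()
for bits in product((0, 1), repeat=3):
    out = fredkin(*bits)
    assert sum(bits) % 2 == sum(out) % 2  # parity of inputs == parity of outputs
    outputs.add(out)
bijective = (len(outputs) == 8)           # the map is a bijection on {0,1}^3
```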

Keywords: Booth’s algorithm, error detection, multiplication, parity-preserving gates, quantum computers, reversible logic

Procedia PDF Downloads 184
1844 Error Analysis in Academic Writing of EFL Learners: A Case Study for Undergraduate Students at Pathein University

Authors: Aye Pa Pa Myo

Abstract:

Writing in English is regarded as a complex process for learners of English as a foreign language, and committing errors in writing is an inevitable part of language learning. Academic writing in particular is difficult for most students to manage well enough to obtain good scores, and students commit common errors when they attempt it. Error analysis identifies and describes these errors and explains the reasons for their occurrence. In this paper, the researcher examines the common errors of undergraduate students in their academic writing at Pathein University. The purpose of the research is to investigate the errors students usually commit in academic writing and to find better ways of correcting them in EFL classrooms. Fifty third-year non-English-specialization students attending Pathein University were selected as participants. The research was conducted over one month with a mixed methodology, using two mini-tests as research tools and collecting data quantitatively. The findings indicate that most of the students noticed their common errors after receiving the necessary input and committed fewer of these errors after taking the mini-tests; these findings should support further research on error analysis in academic writing.

Keywords: academic writing, error analysis, EFL learners, mini-tests, mixed methodology

Procedia PDF Downloads 101
1843 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article we present a theoretical study of the different bootstrap methods and of the use of re-sampling in statistical inference to calculate the standard error of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and to the Pareto model, obtaining good approximations.
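The core re-sampling computation described here, the bootstrap standard error of an estimator, can be sketched in a few lines of pure Python; the sample data and replicate count are illustrative.

```python
import math
import random

def bootstrap_se(data, stat=lambda xs: sum(xs) / len(xs), n_boot=2000, seed=42):
    """Bootstrap standard error: resample with replacement, recompute the
    statistic on each resample, and take the standard deviation of the
    replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    mean_rep = sum(reps) / n_boot
    return math.sqrt(sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1))

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7, 2.2, 3.0]
se_boot = bootstrap_se(data)

# Compare with the analytic standard error of the mean: s / sqrt(n)
mean = sum(data) / len(data)
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))
se_analytic = s / math.sqrt(len(data))
```

For the sample mean the two values agree closely; the bootstrap earns its keep for statistics (median, regression coefficients) with no simple analytic standard error.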

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 354
1842 Wind Power Forecast Error Simulation Model

Authors: Josip Vasilj, Petar Sarajcev, Damir Jakus

Abstract:

One of the major difficulties introduced by wind power penetration is the inherent uncertainty in production originating from uncertain wind conditions. This uncertainty impacts many aspects of power system operation, especially the balancing power requirements. For this reason, in power system development planning it is necessary to evaluate the potential uncertainty in future wind power generation, which requires simulation models that reproduce the performance of wind power forecasts. This paper presents wind power forecast error simulation models based on stochastic process simulation. The proposed models capture the most important statistical parameters recognized in wind power forecast error time series. Two distinct models are presented, depending on data availability: the first uses wind speed measurements at potential or existing wind power plant locations, while the second uses the statistical distribution of wind speeds.
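A minimal version of such a stochastic simulation is an autocorrelated AR(1) error series, which already captures the two statistics that matter most for balancing studies: the marginal spread and the persistence of the error. The persistence `phi` and marginal `sigma` below are assumptions, not parameters identified in the paper.

```python
import math
import random

def simulate_forecast_error(n_steps, sigma=0.05, phi=0.8, seed=7):
    """Simulate a wind power forecast error series (per-unit of capacity) as a
    stationary AR(1) process: e[t] = phi*e[t-1] + w[t], with the innovation
    variance scaled so the marginal standard deviation of e stays at sigma."""
    rng = random.Random(seed)
    innov_sd = sigma * math.sqrt(1.0 - phi ** 2)
    e = [rng.gauss(0.0, sigma)]
    for _ in range(n_steps - 1):
        e.append(phi * e[-1] + rng.gauss(0.0, innov_sd))
    return e

errors = simulate_forecast_error(1000)  # one Monte Carlo error trajectory
```

Repeating the simulation many times yields the Monte Carlo ensemble from which balancing power requirements can be assessed.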

Keywords: wind power, uncertainty, stochastic process, Monte Carlo simulation

Procedia PDF Downloads 451
1841 Co-Integration Model for Predicting Inflation Movement in Nigeria

Authors: Salako Rotimi, Oshungade Stephen, Ojewoye Opeyemi

Abstract:

The maintenance of price stability is one of the macroeconomic challenges facing Nigeria. This paper builds a cointegrated multivariate time series model of inflation movement in Nigeria, using data extracted from the abstract of statistics of the Central Bank of Nigeria (CBN) from 2008 to 2017. The Johansen cointegration test suggests at least one cointegrating vector describing the long-run relationship between the Consumer Price Index (CPI), the Food Price Index (FPI) and the Non-Food Price Index (NFPI). All three series show an increasing pattern, indicating non-stationarity in each series. Model predictability was established with the root mean square error, mean absolute error, mean absolute percentage error, and Theil's U statistic for n-step forecasting. The results show that the consumer price index (CPI) has a positive long-run relationship with the food price index (FPI) and the non-food price index (NFPI).

Keywords: economic, inflation, model, series

Procedia PDF Downloads 214
1840 Bit Error Rate (BER) Performance of Coherent Homodyne BPSK-OCDMA Network for Multimedia Applications

Authors: Morsy Ahmed Morsy Ismail

Abstract:

In this paper, the structure of a coherent homodyne receiver for a Binary Phase Shift Keying (BPSK) Optical Code Division Multiple Access (OCDMA) network is introduced, based on the Multi-Length Weighted Modified Prime Code (ML-WMPC), for multimedia applications. The Bit Error Rate (BER) of this homodyne detection is evaluated as a function of the number of active users and the signal-to-noise ratio for the different code lengths corresponding to multimedia applications such as audio, voice, and video. A Mach-Zehnder interferometer is used as an external phase modulator in the homodyne detection, and the Multiple Access Interference (MAI) and the receiver noise in a shot-noise-limited regime are taken into account in the BER calculations.
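The BER-versus-users evaluation can be sketched with the textbook coherent BPSK error rate plus a Gaussian approximation of MAI. The MAI variance term (K-1)/(3N) below is the classic DS-CDMA approximation, used here as an assumption for illustration, not the ML-WMPC analysis of the paper.

```python
import math

def ber_bpsk(ebno_db):
    """Coherent BPSK bit error rate in AWGN: BER = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

def ber_with_mai(ebno_db, n_users, code_length):
    """Gaussian approximation: interference from (K-1) active users adds
    variance (K-1)/(3N) to the noise, degrading the effective SNR."""
    ebno = 10.0 ** (ebno_db / 10.0)
    eff = 1.0 / (1.0 / ebno + (n_users - 1) / (3.0 * code_length))
    return 0.5 * math.erfc(math.sqrt(eff))

single = ber_bpsk(10.0)                               # single-user baseline
loaded = ber_with_mai(10.0, n_users=30, code_length=127)  # MAI-limited case
```

The qualitative behavior matches the abstract: BER rises with the number of active users and falls with longer codes.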

Keywords: OCDMA networks, bit error rate, multiple access interference, binary phase-shift keying, multimedia

Procedia PDF Downloads 141
1839 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

The Richards growth equation, a generalized logistic growth equation, was improved by introducing an allometric parameter using the hyperbolic sine function. The integral solution was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R²), Mean Absolute Error (MAE) and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to test the behavior of the error term for possible violations. The mean function of top height/Dbh over age under the two models showed that the hyperbolic Richards nonlinear growth model predicted the observed values of top height/Dbh more closely than the classical Richards growth model.

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richards, stochastic

Procedia PDF Downloads 443
1838 Webster's Spelling Book: A Product of Language-in-Education Policies in the United States in the Early 1800s

Authors: Virginia Andrea Garrido Meirelles

Abstract:

Noah Webster was a lexicographer and a language reformer and is considered the 'Father of American Scholarship and Education' because of his exceptional contributions as a teacher and grammarian. The goal of this study is to show that the success of his plan can be explained by the fact that it matched the language-in-education policies of his time. To accomplish that goal, the present study analyzes the Massachusetts School Laws of 1642, 1647 and 1648 and compares them to the preface of the first edition of The Grammatical Institute of the English Language. The referred laws were three legislative acts enacted in the Massachusetts Colony and replicated almost identically in the other New England colonies. Their purpose was to eradicate pauperism and poverty on the one hand, and to disseminate the idea of right citizenship on the other. However, until the Declaration of Independence in 1776, all the primers used in the colony were printed in Britain. In 1783, Noah Webster published the first part of his Grammatical Institute of the English Language. In this book, the author states that his goal is to promote the republican principles that guided the civil rights of the time. The material included many texts taken from the Bible, meant to inspire aversion to inadequate behavior and a preference for service and good manners. In addition, its goal was to present 'a new plan of reducing the pronunciation of our language to an easy standard', and in that way create a unified language to abolish ignorance and language corruption. The comparison between the laws and Webster's Spelling Book shows that the book is the result of the historical and political situation in which it was conceived, and that it satisfied the requirements of the language-in-education policies of the time.

Keywords: American English, language policy, the Massachusetts school laws, webster's spelling book

Procedia PDF Downloads 180
1837 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with the situation in which existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted for determining more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with others, three different criteria are adopted. The results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also has satisfactory and stable performance in terms of the total estimation error under these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 330
1836 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems

Authors: Nikolaos Georgoulopoulos, Alkis Hatzopoulos, Konstantinos Karamitsios, Konstantinos Kotrotsios, Alexandros I. Metsai

Abstract:

In modern server systems, business-critical applications run on different types of infrastructure, such as cloud systems, physical machines and virtualized environments. Due to high load and aging, various hardware faults occur in servers that translate into errors, resulting in malfunction or even server breakdown. The CPU, RAM and hard drive (HDD) are the hardware parts that most concern server administrators with regard to errors. In this work, selected RAM, HDD and CPU errors that have been observed, or can be simulated, in kernel ring buffer log files from two groups of Linux servers are investigated, and a severity characterization is given for each error type. A better understanding of such errors can lead to more efficient analysis of the kernel logs that are usually exploited for fault diagnosis and prediction. In addition, this work summarizes ways of simulating hardware errors in RAM and HDD in order to test the error detection and correction mechanisms of a Linux server.
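Kernel-log analysis of this kind usually starts with pattern matching over the ring buffer; a sketch with synthetic dmesg-style lines follows. The patterns and severity labels are illustrative examples modeled on typical Linux output, not the paper's classification.

```python
import re

# Patterns for common hardware-error lines in the kernel ring buffer (dmesg),
# each mapped to the affected component and an assumed severity label.
ERROR_PATTERNS = [
    (re.compile(r"Machine Check Exception", re.I), "CPU", "critical"),
    (re.compile(r"EDAC .*[CU]E", re.I), "RAM", "warning"),
    (re.compile(r"I/O error, dev sd[a-z]", re.I), "HDD", "critical"),
]

def classify(line):
    """Return (component, severity) for a recognized hardware-error line,
    or None for lines that match no pattern."""
    for pattern, component, severity in ERROR_PATTERNS:
        if pattern.search(line):
            return component, severity
    return None

log = [
    "[1234.5] mce: [Hardware Error]: Machine Check Exception: 4 Bank 2",
    "[1300.1] EDAC MC0: 1 CE memory read error on CPU_SrcID#0",
    "[1400.9] blk_update_request: I/O error, dev sda, sector 123456",
    "[1500.0] eth0: link up",
]
hits = [classify(line) for line in log if classify(line)]
```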

Keywords: hardware errors, kernel logs, Linux servers, RAM, hard disk, CPU

Procedia PDF Downloads 122
1835 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission

Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong

Abstract:

Medical images are an integral part of e-health care and e-diagnosis systems, and medical image watermarking is widely used to protect patients' information from malicious alteration and manipulation. Watermarked medical images are transmitted over the internet among patients and primary and referring physicians, and are highly prone to corruption in the wireless transmission medium due to noise, deflection, and refraction. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address this issue, this paper adds error correction coding (ECC) with an (8, 4) Hamming code to an existing watermarking system. In addition, we implement the computationally complex ECC on a graphics processing unit (GPU) to accelerate it and meet real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation while maintaining 100% ECC efficiency.
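The (8, 4) Hamming code mentioned here encodes 4 data bits into 8 and corrects any single-bit error in the Hamming(7, 4) positions; a compact sketch follows (the SECDED double-error detection that the overall parity bit enables is omitted for brevity).

```python
def hamming84_encode(d):
    """Encode 4 data bits (list of 0/1) into an extended Hamming (8, 4)
    codeword: Hamming(7, 4) plus an overall parity bit."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code = [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7
    overall = code[0] ^ code[1] ^ code[2] ^ code[3] ^ code[4] ^ code[5] ^ code[6]
    return code + [overall]

def hamming84_decode(r):
    """Correct any single-bit error in positions 1..7 and return the 4 data
    bits. The syndrome gives the 1-based position of the flipped bit."""
    r = list(r)
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        r[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [r[2], r[4], r[5], r[6]]

data = [1, 0, 1, 1]
code = hamming84_encode(data)
code[5] ^= 1  # inject a single-bit channel error
assert hamming84_decode(code) == data
```

The GPU speedup reported in the paper comes from decoding many such codewords in parallel, one per pixel block.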

Keywords: medical image watermarking, e-health system, error correction, Hamming code, GPU

Procedia PDF Downloads 261
1834 An Alternative Richards’ Growth Model Based on Hyperbolic Sine Function

Authors: Samuel Oluwafemi Oyamakin, Angela Unna Chukwu

Abstract:

The Richards growth equation, a generalized logistic growth equation, was improved by introducing an allometric parameter using the hyperbolic sine function. The integral solution was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R²), Mean Absolute Error (MAE) and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to test the behavior of the error term for possible violations. The mean function of top height/Dbh over age under the two models showed that the hyperbolic Richards nonlinear growth model predicted the observed values of top height/Dbh more closely than the classical Richards growth model.

Keywords: height, diameter at breast height, DBH, hyperbolic sine function, Pinus caribaea, Richards' growth model

Procedia PDF Downloads 363
1833 Performance Analysis of Multichannel OCDMA-FSO Network under Different Pervasive Conditions

Authors: Saru Arora, Anurag Sharma, Harsukhpreet Singh

Abstract:

To meet the growing need for high data rates and bandwidth, various efforts have been made towards efficient communication systems. Optical Code Division Multiple Access (OCDMA) over a free space optics (FSO) communication system is effective for providing transmission at high data rates with a low bit error rate and a low amount of multiple access interference. This paper demonstrates an OCDMA-over-FSO communication system up to a range of 7000 m at a data rate of 5 Gbps. An 8-user OCDMA-FSO system is simulated, with pseudo-orthogonal codes used for encoding. A simulative analysis is also carried out of performance parameters, such as power and core effective area, that affect the bit error rate (BER) of the system. The analysis reveals that the transmission length is limited by the multiple access interference (MAI) effect, which arises as the number of users in the system increases.

Keywords: FSO, PSO, bit error rate (BER), OptiSystem simulation, multiple access interference (MAI), Q-factor

Procedia PDF Downloads 342
1832 Evaluation of the Grammar Questions at the Undergraduate Level

Authors: Preeti Gacche

Abstract:

A considerable part of undergraduate-level English examination papers is devoted to grammar. The grammar questions in these question papers were therefore evaluated, and the opinions of both students and teachers about them were obtained and analyzed. A grammar test of 100 marks was administered to 43 students to check their performance, and the question papers were evaluated by 10 different teachers and their scores compared. The analysis of 38 university question papers reveals that, on average, 20 percent of marks are allotted to grammar and almost all grammar topics are tested. Abundant use of grammatical terminology is observed in the questions, along with decontextualization, repetition, the possibility of multiple correct answers, and grammatical errors in the framing of the questions themselves. Opinions of teachers and students about the grammar questions vary in many respects. The students' responses were analyzed by medium of instruction and by sex; neither the medium of instruction at school level nor the sex of the students was found to play a role in interest in the study of grammar. English-medium students solve grammar questions intuitively, whereas non-English-medium students need to recall the rules of grammar. Prepositions, verbs, articles and modal auxiliaries were found to be easy topics for most students, whereas the use of conjunctions was the most difficult, and out-of-context grammar items were harder to answer than contextualized ones; contextualized texts for testing grammar items are therefore desirable. No formal training in setting questions is imparted to teachers by competent authorities such as the university, and teachers need to be trained in testing. Statistically, there is no significant change in score when the rater changes in the testing of grammar items. There is scope for future improvement: question papers need to be evaluated, and feedback needs to be obtained from students and teachers.

Keywords: context, evaluation, grammar, tests

Procedia PDF Downloads 325
1831 Redundancy in Malay Morphology: School Grammar versus Corpus Grammar

Authors: Zaharani Ahmad, Nor Hashimah Jalaluddin

Abstract:

The aim of this paper is to examine the issue of linguistic redundancy in two competing grammars of Malay: the school grammar and the corpus grammar. The former is a normative grammar which is formally and prescriptively taught in the classroom, whereas the latter is a descriptive grammar that is informally acquired and mastered by students, as native speakers of the language, outside the classroom. The corpus grammar is depicted on the basis of actual usage in naturally occurring texts, as attested in the corpus. It is observed that the grammar taught in schools is incompatible with the grammar used in the corpus. For instance, a noun phrase containing a nominal reduplicated form denoting plurality (e.g. murid-murid 'students', derived from murid 'student') together with a quantifier modifier (e.g. semua 'all', seluruh 'entire', kebanyakan 'most') is not acceptable in the school grammar: the formation (e.g. semua murid-murid 'all the students', kebanyakan pelajar-pelajar 'most of the students') is claimed to be redundant, and redundancy is prohibited in that grammar. Redundancy is generally construed as the property of speech and language by which more information is provided than is strictly required for the message to be understood, so that if some information is omitted, the remaining information is still sufficient for the message to be comprehended. Thus, the correct construction to use is strictly the reduplicated form (murid-murid 'students') or the quantifier plus the root (semua murid 'all the students'), so that the grammatical meaning of plurality is not repeated. Nevertheless, the so-called redundant form (kebanyakan pelajar-pelajar 'most of the students') is frequently used in the corpus grammar. This study shows that a number of redundant forms occur in the morphology of the language, particularly in affixation, reduplication and combinations of both. Apparently, the so-called redundancy has grammatical and socio-cultural functions in communication, namely to give emphasis and to stress the importance of the information delivered by the speaker or writer.

Keywords: corpus grammar, morphology, redundancy, school grammar

Procedia PDF Downloads 310
1830 Experimental Characterization of the Color Quality and Error Rate for a Red, Green, and Blue-Based Light-Emitting Diode Fixture Used in Visible Light Communications

Authors: Juan F. Gutierrez, Jesus M. Quintero, Diego Sandoval

Abstract:

An important feature of LED technology is fast on-off switching, which allows data transmission. Visible Light Communication (VLC) is a wireless method of transmitting data with visible light. Modulation formats such as On-Off Keying (OOK) and Color Shift Keying (CSK) are used in VLC. Because CSK is based on three color bands, it uses red, green, and blue monochromatic LEDs (RGB-LEDs) to define a pattern of chromaticities; this type of CSK provides poor color quality in the illuminated area. This work presents the design and implementation of a VLC system using RGB-based CSK with 16, 8, and 4 color points, mixed with a steady baseline from a phosphor white LED, to improve the color quality of the LED fixture. The experimental system was assessed in terms of the Color Rendering Index (CRI) and the Symbol Error Rate (SER). Good color quality performance of the LED fixture was obtained with an acceptable SER. The laboratory setup used to characterize and calibrate the LED fixture is described.
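The CSK idea described above can be sketched in a few lines: each group of bits selects an (R, G, B) intensity triple whose components sum to a constant, so the total flux stays steady while only the chromaticity shifts. The 4-point constellation below is purely illustrative, not the one calibrated in the paper.

```python
# Minimal 4-CSK sketch: 2 bits -> one (R, G, B) intensity triple.
# All triples sum to 1.0 so the total luminous flux is constant.
CONSTELLATION = {
    (0, 0): (1.0, 0.0, 0.0),      # red-dominant point
    (0, 1): (0.0, 1.0, 0.0),      # green-dominant point
    (1, 0): (0.0, 0.0, 1.0),      # blue-dominant point
    (1, 1): (1/3, 1/3, 1/3),      # center (white-ish) point
}

def csk_modulate(bits):
    """Map a bit sequence (even length) to RGB intensity triples."""
    pairs = [tuple(bits[i:i + 2]) for i in range(0, len(bits), 2)]
    return [CONSTELLATION[p] for p in pairs]

def csk_demodulate(triples):
    """Nearest-neighbour detection back to bits."""
    bits = []
    for rgb in triples:
        best = min(CONSTELLATION,
                   key=lambda b: sum((u - v) ** 2
                                     for u, v in zip(CONSTELLATION[b], rgb)))
        bits.extend(best)
    return bits

tx = [0, 1, 1, 1, 0, 0, 1, 0]
# Small additive disturbance stands in for channel noise; the nearest-point
# detector still recovers the transmitted bits.
rx = csk_demodulate([(r + 0.05, g - 0.03, b) for r, g, b in csk_modulate(tx)])
```

A real system would place the constellation points in a chromaticity diagram and add the white-LED baseline described above; this toy version only shows the mapping and the nearest-neighbour decision that the SER measurement rests on.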

Keywords: VLC, indoor lighting, color quality, symbol error rate, color shift keying

Procedia PDF Downloads 72
1829 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization

Authors: Subhajit Das, Nirjhar Dhang

Abstract:

Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in an ‘experiment’ and the signal obtained from an undamaged finite element model. This error function is minimized with a suitable algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. natural frequencies and the modal assurance criterion (MAC), which is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling, and measurement error is introduced into the synthetic ‘experimental’ data by adding random noise that follows a Gaussian distribution. The efficiency and robustness of the method are demonstrated through three examples: a truss, a beam, and a frame. The results show that the TLBO algorithm is efficient in detecting both the location and the severity of damage using modal data.
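The TLBO loop referred to above has a simple two-phase structure. The sketch below minimizes a toy sphere function standing in for the modal (frequency + MAC) error function; population size, iteration count and the objective are illustrative assumptions, not the paper's settings.

```python
# Minimal teaching-learning-based optimization (TLBO) sketch.
import random

random.seed(1)

def f(x):
    # Toy objective standing in for the modal error function.
    return sum(v * v for v in x)

dim, pop_size, iters = 2, 20, 100
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for _ in range(iters):
    # Teacher phase: pull learners toward the best solution, away from
    # the population mean, scaled by a teaching factor TF in {1, 2}.
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
    for i, x in enumerate(pop):
        tf = random.choice((1, 2))
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        if f(cand) < f(x):          # greedy acceptance
            pop[i] = cand
    # Learner phase: each learner moves relative to a random peer,
    # toward the better of the two.
    for i, x in enumerate(pop):
        j = random.randrange(pop_size)
        if j == i:
            continue
        other = pop[j]
        sign = 1 if f(x) < f(other) else -1
        cand = [x[d] + random.random() * sign * (x[d] - other[d])
                for d in range(dim)]
        if f(cand) < f(x):
            pop[i] = cand

best = min(pop, key=f)
```

In the damage-detection setting, each "learner" would be a vector of element stiffness-reduction factors, and f would compare model frequencies and MAC values against the measured modal data.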

Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization

Procedia PDF Downloads 188
1828 A Posteriori Analysis of the Spectral Element Discretization of the Heat Equation

Authors: Chor Nejmeddine, Ines Ben Omrane, Mohamed Abdelwahed

Abstract:

In this paper, we present an a posteriori analysis of the discretization of the heat equation by the spectral element method. We apply the implicit Euler scheme in time and the spectral method in space. We propose two families of error indicators, both built from the residual of the equation, and we prove that they satisfy optimal estimates. We present numerical results which are consistent with the theoretical ones.
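The time discretization analysed here can be illustrated on the 1-D heat equation u_t = u_xx with homogeneous Dirichlet conditions. In this sketch, low-order finite differences stand in for the spectral element method in space (an assumption for brevity), and the known exact solution u(x, t) = exp(-π²t) sin(πx) plays the role of an a posteriori check on the implicit Euler scheme.

```python
# Implicit Euler in time for u_t = u_xx on [0, 1], u(0) = u(1) = 0,
# u(x, 0) = sin(pi * x); finite differences in space (toy stand-in
# for the spectral element discretization).
import math

n, dt, steps = 50, 1e-3, 100          # grid intervals, time step, step count
dx = 1.0 / n
x = [i * dx for i in range(n + 1)]
u = [math.sin(math.pi * xi) for xi in x]

r = dt / dx ** 2
for _ in range(steps):
    # Solve the tridiagonal system (I - dt*A) u^{n+1} = u^n
    # for the n-1 interior unknowns with the Thomas algorithm.
    a = [-r] * (n - 1)                # sub-diagonal
    b = [1 + 2 * r] * (n - 1)         # main diagonal
    c = [-r] * (n - 1)                # super-diagonal
    d = u[1:n]                        # right-hand side
    for i in range(1, n - 1):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    w = [0.0] * (n - 1)               # back substitution
    w[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        w[i] = (d[i] - c[i] * w[i + 1]) / b[i]
    u = [0.0] + w + [0.0]

t = steps * dt
err = max(abs(u[i] - math.exp(-math.pi ** 2 * t) * math.sin(math.pi * x[i]))
          for i in range(n + 1))      # a posteriori check against exact solution
```

The residual-based indicators proposed in the paper estimate this error without knowing the exact solution; the comparison above merely shows the first-order-in-time behaviour that such indicators must control.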

Keywords: heat equation, spectral elements discretization, error indicators, Euler

Procedia PDF Downloads 273
1827 Adaptive Decision Feedback Equalizer Utilizing Fixed-Step Error Signal for Multi-Gbps Serial Links

Authors: Alaa Abdullah Altaee

Abstract:

This paper presents an adaptive decision feedback equalizer (ADFE) for multi-Gbps serial links that utilizes a fixed-step error signal extracted from the cross-points of received data symbols. The extracted signal is generated when received data symbols violate the minimum detection requirements at the clock and data recovery (CDR) stage. The iterations of the adaptation process search for the optimum feedback tap coefficients to maximize the data eye opening and minimize the adaptation convergence time. The effectiveness of the proposed architecture is validated through simulation of a serial link designed in an IBM 130 nm, 1.2 V CMOS technology. The data link with variable channel lengths is analyzed using Spectre from Cadence Design Systems with BSIM4 device models.
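The fixed-step adaptation can be sketched behaviourally with a one-tap DFE and a sign-sign update: the tap moves by a constant step in the direction indicated by the signs of the slicer error and the previous decision. The channel model, tap count and step size below are illustrative assumptions, not the paper's circuit.

```python
# One-tap DFE with a fixed-step (sign-sign) adaptation loop.
import random

random.seed(0)

h1 = 0.4            # first post-cursor ISI coefficient of the toy channel
mu = 0.001          # fixed adaptation step
c = 0.0             # feedback tap coefficient, adapted toward h1

def sgn(v):
    return (v > 0) - (v < 0)

prev_x = 1          # previous transmitted symbol (channel memory)
prev_d = 1          # previous slicer decision (fed back by the DFE)
for _ in range(5000):
    x = random.choice((-1, 1))        # transmitted NRZ symbol
    y = x + h1 * prev_x               # received sample with post-cursor ISI
    z = y - c * prev_d                # DFE subtracts the fed-back decision
    d = sgn(z)                        # slicer decision
    e = z - d                         # error at the slicer
    c += mu * sgn(e) * prev_d         # fixed-step (sign-sign) tap update
    prev_x, prev_d = x, d
```

Once converged, the tap dithers within about one step size of the channel's post-cursor coefficient, which is the trade-off between convergence speed and residual eye closure that the fixed step implies.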

Keywords: adaptive DFE, CMOS equalizer, error detection, serial links, timing jitter, wire-line communication

Procedia PDF Downloads 88
1826 Performance Analysis of a Hybrid DF-AF Hybrid RF/FSO System under Gamma Gamma Atmospheric Turbulence Channel Using MPPM Modulation

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a hybrid amplify-and-forward/decode-and-forward (AF-DF) hybrid radio frequency/free-space optical (RF/FSO) communication system that adopts the M-ary pulse position modulation (MPPM) technique is analyzed. Both exact and approximate symbol error rates (SERs) are derived. The random variations of the received optical irradiance produced by atmospheric turbulence are modeled by the gamma-gamma (GG) statistical distribution, and a closed-form expression for the probability density function (PDF) of the overall system is derived. Thanks to the hybrid AF-DF hybrid RF/FSO configuration and MPPM, the effects of atmospheric turbulence are mitigated; hence the capacity to combat atmospheric turbulence and the transmitted signal quality are improved.
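The gamma-gamma irradiance model used above has the standard PDF f(I) = [2(αβ)^((α+β)/2) / (Γ(α)Γ(β))] I^((α+β)/2 − 1) K_{α−β}(2√(αβI)). The sketch below evaluates it with only the standard library, computing the modified Bessel function K_ν from its integral representation; the turbulence parameters α and β are illustrative, not values from the paper.

```python
# Gamma-gamma PDF of the normalized irradiance under atmospheric turbulence.
import math

def bessel_k(nu, x, n=1500, tmax=12.0):
    """K_nu(x) = integral_0^inf exp(-x*cosh t)*cosh(nu*t) dt (trapezoid rule;
    adequate for x not extremely small)."""
    h = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def gamma_gamma_pdf(I, alpha, beta):
    """Standard gamma-gamma PDF; alpha, beta are the large/small-scale
    turbulence parameters."""
    coef = (2 * (alpha * beta) ** ((alpha + beta) / 2)
            / (math.gamma(alpha) * math.gamma(beta)))
    return (coef * I ** ((alpha + beta) / 2 - 1)
            * bessel_k(alpha - beta, 2 * math.sqrt(alpha * beta * I)))

# Sanity check: the PDF should carry unit probability mass.
alpha, beta = 4.0, 2.0          # illustrative turbulence parameters
dI = 0.02
mass = sum(gamma_gamma_pdf(i * dI, alpha, beta) * dI for i in range(1, 1500))
```

SER expressions for MPPM over this channel are obtained by averaging the conditional error probability over this PDF; the closed form derived in the paper replaces such numerical averaging.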

Keywords: free space optical, gamma-gamma channel, radio frequency, decode and forward, pointing error, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 255
1825 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM

Authors: Jaechoul Shin, Juhwan Hwang

Abstract:

When the BIM method is applied to civil engineering, particularly to free-form structures, a construction productivity gain comparable to that achieved in building engineering can be expected. In this research, we evaluated quantity calculation errors by applying BIM-based 3D modelling quantity surveys to earthwork and to bridge construction (a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders with an inverted-T bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction. For structural work, the BIM-based method showed high reliability, with errors in the range of -6% to +5%. For earthwork, by contrast, rock-type quantity calculation errors under the existing 2D-CAD-based method ranged from -14% to +13%, which both exposes the problem with 2D-based volume calculation and demonstrates the benefit and applicability of the BIM method in civil engineering. In other words, while the routine method estimates structural quantities with negligible error, its reliability for rock-type earthwork quantities is questionable. The proposed 3D-BIM-based estimation of earthwork quantities is therefore more reliable than the routine method. Considering the benefits of integrating information at the design, construction, and maintenance levels, the results confirm the effectiveness and applicability of introducing BIM design in civil engineering.
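The kind of discrepancy described above can be illustrated with the classic 2D quantity method, the average-end-area formula V = L(A1 + A2)/2, against the prismoidal (Simpson-type) formula that a 3D model effectively reproduces. The cut geometry and figures below are illustrative, not data from the paper.

```python
# Average-end-area vs prismoidal earthwork volume for a cut whose side
# length tapers linearly from 10 m to 20 m over one 30 m station, so the
# cross-section area varies quadratically along the alignment.
L = 30.0                             # station-to-station distance (m)
A1, Am, A2 = 100.0, 225.0, 400.0     # end and mid cross-section areas (m^2)

avg_end_area = L * (A1 + A2) / 2             # routine 2D method
prismoidal = L / 6 * (A1 + 4 * Am + A2)      # Simpson-type formula

# Analytic volume of the tapering solid: integral of (10 + 10*s/L)^2 ds.
exact = L * 700.0 / 3

error_pct = 100 * (avg_end_area - exact) / exact   # ~ +7% overestimate
```

For this geometry, the average-end-area result overestimates the true volume by about 7%, the same order as the 2D-CAD rock-quantity errors reported above, while the prismoidal formula matches the analytic volume exactly; a 3D-BIM takeoff integrates the solid directly and avoids the cross-section approximation altogether.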

Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork

Procedia PDF Downloads 414
1824 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models, such as the Vector Autoregressive (VAR) model and the Vector Error Correction Model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance with two instruments. In line with extant research, the major finding is that the VECM outperforms the other models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information helps improve forecasting accuracy. Moreover, the pattern of the composite measure shows that at longer horizons, the VECM produces a greater reduction in errors and performs substantially better than the VAR.
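The error-correction mechanism credited with the VECM's advantage can be sketched on synthetic data: two cointegrated series are simulated, and the speed-of-adjustment coefficient on the lagged equilibrium error z_{t-1} = y_{t-1} - x_{t-1} is estimated by a simple OLS regression. The data-generating process and sample size are illustrative assumptions, not the paper's dataset.

```python
# Toy single-equation error-correction model on simulated cointegrated data.
import random

random.seed(42)

n = 2000
x = [0.0]
for _ in range(n - 1):                      # x: random walk (stochastic trend)
    x.append(x[-1] + random.gauss(0, 1))
y = [xi + random.gauss(0, 1) for xi in x]   # y shares x's trend: cointegrated

# Regress dy_t on the lagged error-correction term z_{t-1} = y_{t-1} - x_{t-1}.
z = [y[t] - x[t] for t in range(n)]
num = sum(z[t - 1] * (y[t] - y[t - 1]) for t in range(1, n))
den = sum(z[t - 1] ** 2 for t in range(1, n))
gamma = num / den   # speed of adjustment; negative => deviations are corrected
```

The estimated gamma is significantly negative, i.e. deviations from the long-run relation are pulled back toward equilibrium. A full VECM generalizes this to a system of equations, which is exactly the long-run information the abstract credits for the improved forecasting accuracy.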

Keywords: residual income valuation model, vector error correction model, out-of-sample forecasting, forecasting accuracy

Procedia PDF Downloads 286