Search results for: bit error rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9509

9209 Error Analysis of Wavelet-Based Image Steganography Scheme

Authors: Geeta Kasana, Kulbir Singh, Satvinder Singh

Abstract:

In this paper, a steganographic scheme for digital images using the Integer Wavelet Transform (IWT) is proposed. The cover image is decomposed into wavelet subbands using the IWT. Each subband is divided into blocks of equal size, and secret data is embedded into the largest and smallest pixel values of each block of the subband. The visual quality of the stego images is acceptable: the PSNR between the cover and stego images is above 40 dB, so imperceptibility is maintained. Experimental results show a better trade-off between capacity and visual imperceptibility than existing algorithms. The maximum possible embedding error is also analyzed for each wavelet subband of an image.
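A minimal Python sketch of this block-based idea, assuming a one-level integer Haar lifting (S-transform) for the IWT and parity (LSB) embedding in each block's largest and smallest coefficients; the function names, block size, and embedding rule are illustrative, not the authors' exact scheme:

```python
import numpy as np

def integer_haar_2d(img):
    """One-level 2-D integer Haar lifting (S-transform); returns LL, LH, HL, HH."""
    a = img.astype(np.int64)
    s = (a[:, 0::2] + a[:, 1::2]) // 2        # horizontal rounded average
    d = a[:, 0::2] - a[:, 1::2]               # horizontal difference
    LL = (s[0::2, :] + s[1::2, :]) // 2
    LH = s[0::2, :] - s[1::2, :]
    HL = (d[0::2, :] + d[1::2, :]) // 2
    HH = d[0::2, :] - d[1::2, :]
    return LL, LH, HL, HH

def embed_in_block(block, bit):
    """Toy version of the embedding: force the LSB of the block's largest and
    smallest coefficients to the secret bit."""
    b = block.copy().ravel()
    for i in (b.argmax(), b.argmin()):
        b[i] = (b[i] & ~1) | bit
    return b.reshape(block.shape)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8))
LL, LH, HL, HH = integer_haar_2d(cover)
stego_LH = embed_in_block(LH, bit=1)          # hide one bit in one 4x4 block
```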

Keywords: DWT, IWT, MSE, PSNR

Procedia PDF Downloads 499
9208 An Efficient Strategy for Relay Selection in Multi-Hop Communication

Authors: Jung-In Baik, Seung-Jun Yu, Young-Min Ko, Hyoung-Kyu Song

Abstract:

This paper proposes an efficient relaying algorithm that obtains diversity to improve signal reliability. The algorithm achieves time or space diversity gain by delivering multiple versions of the same signal over two routes. Relays are placed between a source and a destination, and the routes between them are set adaptively in order to cope with different channels and noise. Each route consists of one or more relays, and the source transmits its signal to the destination through these routes. The signals from the relays are combined and detected at the destination. The proposed algorithm provides better bit error rate (BER) performance than the conventional algorithms.
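The BER gain from a second route can be illustrated with a small Monte Carlo sketch; the setup below (BPSK, independent Rayleigh-faded routes, maximal-ratio combining at the destination) is an assumed stand-in for the paper's protocol, not its exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, snr_db = 200_000, 10
snr = 10 ** (snr_db / 10)
bits = rng.integers(0, 2, n)
x = 2 * bits - 1                                    # BPSK symbols

def route(x):
    """One Rayleigh-faded route with AWGN at the given average SNR."""
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)
    return h, h * x + w

h1, y1 = route(x); h2, y2 = route(x)
single = (np.conj(h1) * y1).real                    # one route only
combined = single + (np.conj(h2) * y2).real         # maximal-ratio combining
print("BER, one route :", np.mean((single > 0) != bits.astype(bool)))
print("BER, two routes:", np.mean((combined > 0) != bits.astype(bool)))
```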

Keywords: multi-hop, OFDM, relay, relaying selection

Procedia PDF Downloads 441
9207 Feedback in the Language Class: An Action Research Process

Authors: Arash Golzari Koloor

Abstract:

Feedback is an inseparable part of teaching a second/foreign language. One form is corrective feedback, a type of error treatment in second language classrooms. This study reports on the types of corrective feedback employed in an IELTS preparation course. The types of feedback, their frequencies, and their effectiveness are listed, enumerated, and interpreted. The results showed that explicit correction and recast were the most frequent types of feedback, while repetition and elicitation were the least frequent. The results also revealed that metalinguistic feedback, elicitation, and explicit correction were the most effective types of feedback and greatly affected learners' performance.

Keywords: classroom interaction, corrective feedback, error treatment, oral performance

Procedia PDF Downloads 326
9206 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry

Authors: Nirmal Yadav, Tanuja Srivastava

Abstract:

Shannon sampling theory exactly recovers a band-limited signal from its sampled values using sinc interpolators. In discrete implementations, however, sinc-based results are not satisfactory for band-limited computations, so convolution with a window function of compact support has been introduced. The convolution backprojection algorithm with a window function is therefore an approximation algorithm. In this paper, the error arising from this approximate nature of the reconstruction algorithm is calculated. The result is derived for fan beam projection data, which is faster to acquire than parallel beam projection.
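A 1-D toy computation illustrating the windowed-convolution idea: reconstructing a band-limited signal with a truncated sinc kernel versus a compactly supported Hamming-windowed one. The signal, window, and support length are illustrative, not the paper's fan-beam setting:

```python
import numpy as np

f = lambda t: np.sinc(0.8 * t)                     # band-limited test signal
n = np.arange(-40, 41)                             # unit-spaced sample grid
t = np.linspace(-5, 5, 1001)                       # evaluation points
K = t[:, None] - n[None, :]                        # shifts t - n
L = 10.0                                           # kernel half-support
trunc = np.where(np.abs(K) <= L, np.sinc(K), 0.0)  # bare truncated sinc
hamm = 0.54 + 0.46 * np.cos(np.pi * np.clip(K, -L, L) / L)
wind = trunc * hamm                                # windowed, compact support
for name, kern in [("truncated sinc", trunc), ("windowed sinc ", wind)]:
    print(name, "max |error| =", np.abs(kern @ f(n) - f(t)).max())
```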

Keywords: computed tomography, convolution backprojection, radon transform, fan beam

Procedia PDF Downloads 487
9205 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six-sigma is a metric that quantifies the performance of processes as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance, with evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen in this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid, and urea. The allowable total error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC) data. Sigma was calculated based on the formula: Sigma = (Total Error - Bias) / CV. The analytical performance was evaluated based on the sigma value: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma between 3 and 4 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, we found that 76% (16 of 21 analytes) are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, LDH, magnesium, potassium, triglyceride, and uric acid), 14% (calcium, protein, and urea) are excellent, and 10% (chloride and sodium) require more frequent IQC per day. Conclusion: Based on this study, we found that IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
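The sigma metric itself is a one-line calculation following the abstract's formula; the figures below are hypothetical illustration values, not the study's data:

```python
# Sigma metric exactly as defined above (all quantities in the same % units).
def sigma_metric(total_error, bias, cv):
    return (total_error - bias) / cv

# Hypothetical example: TEa = 10%, bias = 1.2%, CV = 1.5%
print(sigma_metric(10.0, 1.2, 1.5))   # 5.87 -> "excellent" on the abstract's scale
```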

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 167
9204 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with one another as well as with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to use prior information (a prior distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the exponential parameter for finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square errors (MSE) of the estimators are obtained and presented graphically.
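Under squared error loss with a Gamma(a, b) prior, the posterior of the exponential rate given n observations is Gamma(a + n, b + sum(x)), so the Bayes estimator is (a + n)/(b + sum(x)). A small simulation comparing its MSE with the MLE; the hyperparameters and sample size are chosen for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, n, a, b, reps = 2.0, 10, 2.0, 1.0, 20_000
x = rng.exponential(1 / lam_true, size=(reps, n))   # scale = 1/rate
s = x.sum(axis=1)
mle = n / s                          # maximum likelihood estimator, 1/mean
bayes = (a + n) / (b + s)            # posterior mean under SE loss
for name, est in [("MLE  ", mle), ("Bayes", bayes)]:
    print(name, "MSE =", np.mean((est - lam_true) ** 2))
```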

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 380
9203 Use of Slaughterhouse Waste to Improve Nutrient Levels in Apium graveolens

Authors: Hasan Basri Jumin

Abstract:

Applying slaughterhouse waste combined with a suitable dose of nitrogen fertilizer to Apium graveolens has a significant effect on the mean relative growth rate. The same pattern also appears in the net assimilation rate, which increased significantly up to 42 days after planting. The combination of 100 ml/l animal slaughterhouse waste and 0.1 g of nitrogen fertilizer per kg of soil increased the vegetative growth of Apium graveolens. Plant biomass and the mean relative growth rate increased rapidly in the first 4 weeks after planting and gradually decreased towards harvest at 35 days. The combination of 100 ml/l slaughterhouse waste and 0.1 g/kg nitrogen fertilizer increased all parameters, and the highest vegetative growth, biomass, mean relative growth rate, and net assimilation rate (0.56 mg l-1 m-2 day-1) were obtained with this treatment.

Keywords: Apium graveolens, nitrogen, pollutant, slaughterhouse, waste

Procedia PDF Downloads 362
9202 A Low Order Thermal Envelope Model for Heat Transfer Characteristics of Low-Rise Residential Buildings

Authors: Nadish Anand, Richard D. Gould

Abstract:

A simple model is introduced for determining the thermal characteristics of a low-rise residential (LRR) building and predicting the energy usage of its heating, ventilation, and air conditioning (HVAC) system as weather conditions, reflected in the ambient (outside air) temperature, change. The LRR building is treated as a single lump for solving the heat transfer problem, and the model is derived from the lumped capacitance model of transient conduction heat transfer. Most contemporary HVAC systems have a thermostat control with an offset temperature and user-defined set point temperatures that define when the HVAC system switches on and off. The aim is therefore to predict the body temperature (i.e., the inside air temperature) accurately, since it determines the switching of the HVAC system. To validate the mathematical model derived from lumped capacitance, the EnergyPlus simulation engine, which simulates buildings with considerable accuracy, is used. The low order model is used to predict the inside air temperature of a single house in three different climate zones (Detroit, Raleigh, and Austin) and with different orientations for the summer and winter seasons. For the same day as that used for model parameter calculation, the prediction error is below 10% in winter for almost all orientations and climate zones, whereas in summer the error is below 10% for all orientations only in the higher-latitude climate zones (Raleigh and Detroit). Possible factors responsible for the larger variations are also noted, paving the way for future research.
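A minimal sketch of such a low order model, assuming a single thermal mass with a conductance to ambient and an on/off HVAC with a thermostat deadband; all parameter values below are made up for illustration, not the paper's identified ones:

```python
import numpy as np

# dT/dt = (UA*(T_out - T) + Q_hvac) / C, integrated with forward Euler.
C, UA, Q = 5e6, 250.0, 8000.0            # J/K, W/K, W (heating capacity)
set_pt, offset = 20.0, 0.5               # thermostat set point and deadband, degC
dt, hours = 60.0, 24
T, on, energy = 15.0, False, 0.0
for k in range(int(hours * 3600 / dt)):
    T_out = 5.0 + 5.0 * np.sin(2 * np.pi * k * dt / 86400)   # ambient temperature
    on = True if T < set_pt - offset else (False if T > set_pt + offset else on)
    q = Q if on else 0.0
    T += dt * (UA * (T_out - T) + q) / C
    energy += q * dt
print("final indoor T = %.2f degC, HVAC energy = %.1f kWh" % (T, energy / 3.6e6))
```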

Keywords: building energy, energy consumption, energy+, HVAC, low order model, lumped capacitance

Procedia PDF Downloads 264
9201 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time of flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm that achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to remove the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution to achieve, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implement this theory on a software radio platform; the experimental results show that the proposed method has a median ranging error of 5.4 cm at 5 m, 7 cm at 10 m, and 10.8 cm at 20 m for a total bandwidth of 80 MHz.
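The core phase-versus-frequency idea can be sketched in a few lines: since the carrier phase satisfies phi(f) = -2*pi*f*tau (mod 2*pi), fitting the phase slope across the hop grid recovers the time of flight. This one-way toy ignores the paper's two-way error elimination and multipath handling, and all parameters are illustrative:

```python
import numpy as np

c, d_true = 3e8, 7.3                           # speed of light, true distance (m)
tau = d_true / c
f = 2.4e9 + np.arange(0, 80e6, 5e6)            # hop grid spanning 80 MHz
rng = np.random.default_rng(2)
phi = (-2 * np.pi * f * tau + rng.normal(0, 0.05, f.size)) % (2 * np.pi)
slope = np.polyfit(f - f[0], np.unwrap(phi), 1)[0]   # d(phi)/df = -2*pi*tau
d_hat = -slope * c / (2 * np.pi)
print("estimated distance: %.2f m" % d_hat)
```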

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 119
9200 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting future values of the unemployment rate in selected European countries. To do so, several forecasting techniques adequate for time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters' method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was Holt-Winters' additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was Holt-Winters' multiplicative model, whereas in the case of Portugal the best model was the double exponential smoothing model. Our findings are in line with European Commission unemployment rate estimates.
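A minimal sketch of Holt's double exponential smoothing, the trend-only method compared above; the smoothing constants and the series are illustrative, not the paper's data:

```python
def holt(y, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear (double exponential) smoothing with h-step-ahead forecasts."""
    level, trend = y[0], y[1] - y[0]
    for x in y[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

unemp = [17.3, 16.5, 15.6, 14.8, 13.4, 12.4]   # made-up yearly rates, %
print(holt(unemp))                              # forecasts for the next 3 years
```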

Keywords: European Union countries, exponential smoothing methods, forecast accuracy unemployment rate

Procedia PDF Downloads 365
9199 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video occupies very high bandwidth, which makes compression a must before transmission over wireless channels. High Efficiency Video Coding (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil recent demands for video services. HEVC achieves roughly twice the compression ratio of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases, so bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over quadrature amplitude modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. The performance analysis of HEVC presented in this paper may thus assist in designing the optimal FEC code rate so that the quality of the reconstructed video is maximized over wireless channels.
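A toy end-to-end chain in the spirit of this experiment, with HEVC omitted and a rate-1/3 repetition code standing in for a real FEC scheme: at a fixed channel Es/N0, the redundancy buys a lower bit error rate for the protected bitstream. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, esn0_db, rep = 300_000, 4.0, 3
sigma = np.sqrt(1 / (2 * 10 ** (esn0_db / 10)))          # per-dimension noise std

def qpsk_hard_bits(code_bits):
    """Map bit pairs to 4-QAM (QPSK), add AWGN, return hard-decision bits."""
    sym = ((2 * code_bits[0::2] - 1) + 1j * (2 * code_bits[1::2] - 1)) / np.sqrt(2)
    y = sym + sigma * (rng.standard_normal(sym.size)
                       + 1j * rng.standard_normal(sym.size))
    out = np.empty(code_bits.size, dtype=int)
    out[0::2], out[1::2] = (y.real > 0), (y.imag > 0)
    return out

bits = rng.integers(0, 2, n)
print("uncoded BER:", np.mean(qpsk_hard_bits(bits) != bits))
dec = qpsk_hard_bits(np.repeat(bits, rep)).reshape(-1, rep).sum(1) > rep // 2
print("rep-3  BER :", np.mean(dec != bits.astype(bool)))   # majority decoding
```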

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 146
9198 Evaluation of Medication Administration Process in a Paediatric Ward

Authors: Zayed Alsulami, Asma Aldosseri, Ahmed Ezziden, Abdulrahman Alonazi

Abstract:

Children are more susceptible to medication errors than adults. The medication administration process is the last stage of the medication treatment process, and most errors are detected at this stage. Little research has been undertaken on medication errors in children in Middle East countries. This study aimed to evaluate how well paediatric nurses adhere to the medication administration policy and to identify any medication preparation and administration errors or risk factors. An observational, prospective study of the medication administration process, from when the nurses prepare patient medication to the administration stage (May to August 2014), was conducted in Saudi Arabia. Twelve paediatric nurses serving 90 paediatric patients were observed, and 456 administered doses were evaluated. Adherence was variable in 7 of 16 steps. Checking patient allergy information, dose calculation, and drug expiry date were the steps with the lowest adherence rates. Sixty-three medication preparation and administration errors were identified, an error rate of 13.8% of medication administrations. No potentially life-threatening errors were witnessed. A few logistic and administrative factors were reported. The results showed that the medication administration policy and procedure need urgent revision to be more practicable for nurses. Nurses' knowledge and skills regarding the medication administration process should also be improved.

Keywords: medication safety, paediatric, medication errors, paediatric ward

Procedia PDF Downloads 389
9197 Mixture Statistical Modeling for Predicting Mortality in Human Immunodeficiency Virus (HIV) and Tuberculosis (TB) Infected Patients

Authors: Mohd Asrul Affendi Bi Abdullah, Nyi Nyi Naing

Abstract:

The purpose of this study was to compare the negative binomial death rate (NBDR) and zero-inflated negative binomial death rate (ZINBDR) models for patients who died with HIV+TB+ and HIV+TB-. HIV and TB are serious worldwide problems in developing countries. Data were analyzed by applying the NBDR and ZINBDR models to determine which model fits better. The ZINBDR model is able to account for the disproportionately large number of zeros within the data and is shown to be a consistently better fit than the NBDR model. Hence, the ZINBDR model is a superior fit to the data and provides additional information regarding the death mechanisms in HIV+TB patients. The ZINBDR model is shown to be a useful tool for analyzing death rates by age category.
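A sketch of the NB-versus-ZINB comparison on simulated zero-heavy counts (not the study's HIV/TB data), using the zero-inflated negative binomial model available in recent statsmodels releases; lower AIC/BIC indicates the better fit:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
age = rng.integers(0, 4, n)                        # age category covariate
X = sm.add_constant(age.astype(float))
mu = np.exp(0.3 + 0.4 * age)
deaths = rng.negative_binomial(2, 2 / (2 + mu))    # NB counts with mean mu...
deaths[rng.random(n) < 0.4] = 0                    # ...plus excess zeros

nb = sm.NegativeBinomial(deaths, X).fit(disp=0)
zinb = ZeroInflatedNegativeBinomialP(deaths, X, exog_infl=X).fit(disp=0, maxiter=200)
print("NB   AIC/BIC:", nb.aic, nb.bic)
print("ZINB AIC/BIC:", zinb.aic, zinb.bic)         # expected lower on this data
```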

Keywords: zero inflated negative binomial death rate, HIV and TB, AIC and BIC, death rate

Procedia PDF Downloads 428
9196 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize the interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: The interventions suggested in phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or paediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite the interventions and how the interventions could be improved. Interventions were modified through subsequent simulations until they were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of the intervention changes and improvements, the simulation was beneficial in identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (an epinephrine kit or a mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes to memory aids). Given that even the most successful interventions needed modification and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 145
9195 Impact of Health Indicators on Economic Growth: Application of ARDL Model on Pakistan's Data Set

Authors: Sheraz Ahmad Choudhary

Abstract:

Health plays a vital role in economic growth. The study examines the effect of health indicators on the growth of Pakistan. An ARDL model is used to assess how the growth rate is affected by health, using time series data for Pakistan from 1990 to 2017. Fertility rate, life expectancy, foreign direct investment, and infant mortality rate are the health indicator variables, and unit root tests are applied to check the stationarity of the model. The results find a significant relationship between GDP, foreign direct investment, fertility rate, and life expectancy in the short run, whereas the mortality rate affects economic growth negatively but significantly. In the long run, foreign direct investment (FDI) and the fertility rate (FR) significantly influence GDP. The results show that economic growth is positively stimulated by most of the health indicators. The study concludes that nations can achieve a high level of economic growth by increasing the well-being of their human capital.

Keywords: economic growth, health expenditures, fertility rate, human capital, life expectancy, foreign direct investment, and infant mortality rate

Procedia PDF Downloads 125
9194 Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback

Authors: Muhammad A. Alsubaie, Mubarak K. H. Alhajri, Tarek S. Altowaim

Abstract:

A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time delay model in a positive feedback loop acting on the system output. A direct use of the small gain theorem solves the periodic disturbance problem by 1) isolating the delay model, 2) finding the overall system representation around the delay model, and 3) designing a feedback controller that assures overall system stability and tracking error convergence. This paper addresses uncertainty conditions for the repetitive controller designed in state feedback, in either past error feedforward or current error feedback form, using singular values. The uncertainty investigation is based on the overall system found and the stability condition associated with it, depending on the scheme used, to set an upper/lower limit on the weighting parameter. This creates a region that should not be exceeded in selecting the weighting parameter, which in turn assures performance improvement against system uncertainty. The repetitive control problem can be described in lifted form, which allows the singular value principle to be used in setting the range for selecting the weighting parameter. The simulation results show tracking error convergence against dynamic system perturbation when the weighting parameter is chosen within the obtained range. The simulation results also show the advantage of using the weighting parameter compared to the case where it is omitted.

Keywords: model mismatch, repetitive control, singular values, state feedback

Procedia PDF Downloads 150
9193 Modelling Exchange-Rate Pass-Through: A Model of Oil Prices and Asymmetric Exchange Rate Fluctuations in Selected African Countries

Authors: Fajana Sola Isaac

Abstract:

In the last two decades, we have witnessed increased interest in exchange rate pass-through (ERPT) in developing economies and emerging markets. This is perhaps due to the acknowledged significance in the literature of the pattern of exchange rate pass-through as a key input to monetary policy design, principally in response to exchange rate shocks. This paper analyzes exchange rate pass-through with a model of oil prices and asymmetric exchange rate fluctuations in selected African countries. The study adopted a non-linear autoregressive distributed lag (NARDL) approach using yearly data on Algeria, Burundi, Nigeria, and South Africa from 1986 to 2022. The paper found asymmetry in exchange rate pass-through in both net oil-importing and net oil-exporting countries in the short run during the period under review. ERPT was complete in the short run for the net oil-importing countries but incomplete for the net oil-exporting countries examined. An extended result revealed a significant impact of oil price shocks on exchange rate pass-through to domestic prices in the long run only for net oil-importing countries. The Wald restriction test also confirms the evidence of asymmetry, with oil prices acting as an accelerator of exchange rate pass-through to domestic prices in the countries examined. The findings are useful for gaining broad knowledge of how external shocks affect ERPT and could be of critical value for national monetary policy decisions on inflation targeting, especially for the countries examined and for other developing net oil importers and exporters.

Keywords: pass through, exchange rate, ARDL, monetary policy

Procedia PDF Downloads 72
9192 Simulation of Growth and Yield of Rice Under Irrigation and Nitrogen Management Using ORYZA2000

Authors: Mojtaba Esmaeilzad Limoudehi

Abstract:

To evaluate the ORYZA2000 model under irrigation and nitrogen fertilization management, a split-plot experiment in a randomized complete block design with three replications was conducted on a hybrid (spring) cultivar at the Rice Research Institute in the 1387-1388 crop year. Irrigation was the main plot at four levels (permanent flood irrigation and irrigation intervals of around 5, 8, and 11 days), and nitrogen fertilizer was the subplot at four levels (0, 90, 120, and 150 kg N/ha). Simulated and measured values of the leaf area index (LAI), grain yield, and biological yield were compared using the regression coefficient, a t-test, the root mean square error (RMSE), and the normalized root mean square error (RMSEn). The RMSEn was 10% for grain yield, 9% for biological yield, and 23% for maximum LAI. The simulation results show that the ORYZA2000 model's accuracy for grain yield and biological yield is good, but it does not simulate maximum LAI well. The results indicate that the model is supported by the test results and can be used under these conditions of nitrogen fertilizer and irrigation management.
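The two agreement statistics used above are straightforward to compute; the observed/simulated values below are placeholders, not the trial's data:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

def rmse_n(obs, sim):
    """RMSE normalized by the observed mean, expressed in percent."""
    return 100 * rmse(obs, sim) / np.mean(obs)

obs = [6100, 6550, 7000, 7350]   # e.g., measured grain yield, kg/ha
sim = [5900, 6800, 6900, 7600]
print(rmse(obs, sim), rmse_n(obs, sim))
```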

Keywords: evaluation, rice, nitrogen fertilizer, model ORYZA2000

Procedia PDF Downloads 67
9191 Performance Analysis of PAPR Reduction in OFDM Systems Based on Partial Transmit Sequence (PTS) Technique

Authors: Alcardo Alex Barakabitze, Tan Xiaoheng

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a special case of the Multi-Carrier Modulation (MCM) technique which transmits a stream of data over a number of lower data rate subcarriers. OFDM splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits the collections of bits called symbols in parallel over these subcarriers. This paper explores peak-to-average power ratio (PAPR) reduction using the partial transmit sequence (PTS) technique. We provide the distribution analysis and the basics of OFDM signals and then show how the PAPR increases with the number of subcarriers. We provide a performance analysis of the complementary cumulative distribution function (CCDF) and the PAPR expressed in decibels through MATLAB simulations. The simulation results show that, in the PTS technique, the PAPR reduction performance in OFDM systems improves significantly as the number of sub-blocks increases. However, for the same number of sub-blocks, oversampling factor, and number of OFDM blocks used to generate the CCDF, OFDM systems with 128 subcarriers show better PAPR reduction than systems with 256, 512, or more subcarriers.
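A compact Python sketch of the PTS idea (the paper's simulations are in MATLAB): split one OFDM symbol's subcarriers into V contiguous sub-blocks, try +/-1 phase factors on each, and keep the combination with the lowest PAPR. The parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, V, L = 128, 4, 4                         # subcarriers, sub-blocks, oversampling
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), N)   # QPSK-loaded subcarriers

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ifft_os(Xf):
    """Oversampled IFFT via zero-padding in the middle of the spectrum."""
    return np.fft.ifft(np.concatenate([Xf[:N // 2], np.zeros((L - 1) * N), Xf[N // 2:]]))

parts = [np.where(np.arange(N) // (N // V) == v, X, 0) for v in range(V)]
tparts = np.array([ifft_os(p) for p in parts])            # per-sub-block time signals
best = min(
    papr_db(np.array([1 - 2 * ((m >> v) & 1) for v in range(V)]) @ tparts)
    for m in range(2 ** V)                                # exhaustive +/-1 search
)
print("PAPR original: %.2f dB, with PTS: %.2f dB" % (papr_db(ifft_os(X)), best))
```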

Keywords: OFDM, peak-to-average power ratio (PAPR), bit error rate (BER), subcarriers, wireless communications

Procedia PDF Downloads 509
9190 Circular Approximation by Trigonometric Bézier Curves

Authors: Maria Hussin, Malik Zawwar Hussain, Mubashrah Saddiqa

Abstract:

We present a trigonometric scheme to approximate a circular arc given its two end points and two end tangents/unit tangents. A rational cubic trigonometric Bézier curve is constructed whose end control points are defined by the end points of the circular arc. The weight functions and the remaining control points of the cubic trigonometric Bézier curve are estimated by a variational approach so as to reproduce a circular arc. The radius error is calculated and found to be smaller than that of existing techniques.
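For comparison, the radius-error measure is easy to compute for the classical polynomial analogue: a cubic Bézier approximating a quarter unit circle with the standard k = (4/3)tan(theta/4) construction. The paper's rational trigonometric curves aim to shrink exactly this quantity:

```python
import numpy as np

k = 4 / 3 * np.tan(np.pi / 8)                            # quarter circle, theta = 90 deg
P = np.array([[1, 0], [1, k], [k, 1], [0, 1]], float)    # end points + tangent handles
t = np.linspace(0, 1, 2001)[:, None]
B = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
     + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
radius_error = np.abs(np.linalg.norm(B, axis=1) - 1.0)
print("max radius error: %.2e" % radius_error.max())     # ~2.7e-4 for this construction
```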

Keywords: control points, rational trigonometric Bézier curves, radius error, shape measure, weight functions

Procedia PDF Downloads 468
9189 Integrating Blogging into Peer Assessment on College Students’ English Writing

Authors: Su-Lien Liao

Abstract:

Most college students in Taiwan do not have sufficient English proficiency to express themselves in written English. Teachers spend a lot of time correcting students' English writing, but the results are not satisfactory. This study aims to use blogs as a teaching and learning tool for written English. Before applying peer assessment, students should be trained to be good reviewers. The teacher starts the course by posting an error analysis of the students' first English compositions on the blog as a commenting model for students. The students then go through the process of drafting, composing, peer response, and final revision on their blogs. Evaluation questionnaires and interviews will be conducted at the end of the course to gauge the impact of the course and students' perceptions of it.

Keywords: blog, peer assessment, English writing, error analysis

Procedia PDF Downloads 415
9188 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas

Abstract:

EEG correlates of mathematical and trait anxiety were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic, and algebraic conditions. Event-related spectral perturbations (ERSP) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. The correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispheric differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were directed oppositely: trait anxiety was associated with an increase in desynchronization amplitude, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant for the lexical condition but strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indices of task emotionality, whereas the reaction in the beta2 band is related to the tension of intellectual resources.

Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety

Procedia PDF Downloads 559
9187 Denoising of Magnetotelluric Signals by Filtering

Authors: Rodrigo Montufar-Chaveznava, Fernando Brambila-Paz, Ivette Caldelas

Abstract:

In this paper, we present advances in the denoising of magnetotelluric signals using several filters. In particular, we use the most common spatial-domain filters, such as the median and mean, and we also use the Fourier and wavelet transforms for frequency-domain filtering. We employ three datasets obtained at different sampling rates (128, 4096, and 8192 bps) and evaluate the mean square error (MSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR) to compare the kernels and determine the most suitable one for each case. The magnetotelluric signals come from earth exploration surveys for water. The objective is to find a denoising strategy different from the one included in the commercial equipment employed in this task.
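A sketch of the spatial-domain part of such a comparison, with a synthetic trace standing in for a magnetotelluric record and MSE as the score; the kernel size and noise level are illustrative:

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 4096)                       # cf. the 4096 bps dataset
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den_med = medfilt(noisy, kernel_size=9)           # median (spatial-domain) kernel
den_mean = np.convolve(noisy, np.ones(9) / 9, mode="same")   # moving-mean kernel
for name, d in [("noisy ", noisy), ("median", den_med), ("mean  ", den_mean)]:
    print(name, "MSE =", np.mean((d - clean) ** 2))
```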

Keywords: denoising, filtering, magnetotelluric signals, wavelet transform

Procedia PDF Downloads 367
9186 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner

Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura

Abstract:

Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.

Keywords: EEG scanner, eye-detector, mammography, observers

Procedia PDF Downloads 213
9185 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve its smoothness; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine to give the grey support vector machine model (SGM-SVM). Before establishing the model, the data are preprocessed with trigonometric functions and the accumulated generating operation to enhance smoothness and weaken randomness; a support vector machine (SVM) prediction model is then built on the preprocessed data, with model parameters selected by a genetic algorithm performing a global search for the optimum. Finally, the forecast data are recovered through the inverse ("regressive generate") operation. To show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model is used to predict battery life, and the predictions are compared with those of the grey model and support vector machines. For a more intuitive comparison of the three models, this paper presents the root mean square error of the three different models. The results show that the grey support vector machine (SGM-SVM) gives the best life prediction, with a root mean square error of only 3.18%.
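For reference, plain GM(1,1) grey prediction, the baseline that SGM-SVM builds on with a trigonometric smoothing transform and an SVM, looks like this; the capacity data are invented for illustration:

```python
import numpy as np

def gm11(x0, k_ahead=2):
    """Classical GM(1,1): AGO, least-squares grey parameters, inverse AGO."""
    x1 = np.cumsum(x0)                                  # accumulated generation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey development coefficients
    k = np.arange(len(x0) + k_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0)                   # inverse AGO

cap = np.array([1.00, 0.982, 0.967, 0.955, 0.941])      # made-up capacity fade data
print(gm11(cap)[-2:])                                   # two-step-ahead forecast
```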

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 456
9184 Effect of Heating Rate on Microstructural Developments in Cold Heading Quality Steel Used for Automotive Applications

Authors: Shahid Hussain Abro, F. Mufadi, A. Boodi

Abstract:

Microstructural study and phase transformation in steels are basic and important steps in the design of structural steel. Considerable effort and study have been devoted to phase transformations; with so many steel grades available commercially, phase development in steel has different consequences. In the present work, an effort has been made to study the effect of heating rate on the microstructural features of cold heading quality (CHQ) steel. SEM, optical microscopy, and heat treatment techniques were applied to observe the microstructural features of the experimental steel. It was observed that heating rate strongly influences the phase transformation of the CHQ steel under investigation. An increased heating rate accelerates austenite formation kinetics with respect to holding time, and this austenite transforms to martensite upon cooling. Heating rate also plays a vital role in the nucleation sites of austenite formation in the experimental steel.

Keywords: CHQ steel, austenite formation, heating rate, nucleation

Procedia PDF Downloads 406
9183 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation, a time-consuming calculation performed on each pixel value that leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are taken as the new generation vectors, i.e., the new codebook vectors. The codebook is updated by comparing each new generation vector against a threshold for having minimum error with its parent vector; this minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is used to suppress the degraded portion of the image, thereby enhancing the image through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
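For contrast, a baseline VQ sketch with the standard minimum-distance (Lloyd/k-means) codebook update that the paper's vector-difference rule replaces, scored by MSE and PSNR as in the abstract; the image and codebook size are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (64, 64)).astype(float)
# Split the image into 4x4 blocks and flatten each into a 16-dim training vector.
vecs = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
book = vecs[rng.choice(len(vecs), 32, replace=False)].copy()   # initial codebook
for _ in range(15):                                            # Lloyd iterations
    idx = np.argmin(((vecs[:, None] - book[None]) ** 2).sum(-1), axis=1)
    for j in range(len(book)):
        if np.any(idx == j):
            book[j] = vecs[idx == j].mean(axis=0)              # centroid update
quant = book[idx]
mse = np.mean((quant - vecs) ** 2)
print("MSE = %.1f, PSNR = %.2f dB" % (mse, 10 * np.log10(255 ** 2 / mse)))
```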

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 262
9182 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach

Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo

Abstract:

The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined and the necessary conditions are derived. In addition, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. On this basis, a stochastic approximation approach is employed to update the optimal control policy. Once within a given tolerance, the iteration procedure is stopped, and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The result shows the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach to solving the linear quadratic Gaussian problem is clearly demonstrated.

Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation

Procedia PDF Downloads 176
9181 To Study the Effect of Optic Fibre Laser Cladding of Cast Iron with Silicon Carbide on Wear Rate

Authors: Kshitij Sawke, Pradnyavant Kamble, Shrikant Patil

Abstract:

The study investigates the effect of laser cladding of cast iron with silicon carbide on wear rate. Metal components fail in service because they wear, which causes them to lose their functionality. A laser was used as the heating source to create a melt pool on the surface of the cast iron, onto which a layer of hard silicon carbide was deposited. Various combinations of laser power and feed rate were experimented with, and a suitable range of laser processing parameters was identified. Wear resistance and wear rate were evaluated, and the results showed that the wear resistance of the laser-treated samples was superior to that of the untreated samples.

Keywords: laser clad, processing parameters, wear rate, wear resistance

Procedia PDF Downloads 253
9180 The Effects of Different Parameters of Wood Floating Debris on Scour Rate Around Bridge Piers

Authors: Muhanad Al-Jubouri

Abstract:

Local scour is the most important of the several scour types impacting bridge performance and safety. Even though scour is widespread at bridges, especially during flood seasons, experimental tests cannot be applied to many standard highway bridges. A computational fluid dynamics (CFD) numerical model was therefore used to calculate local scour and deposition for non-cohesive silt and clear-water conditions near single and double cylindrical piers under the effect of floating debris. The FLOW-3D software is employed with the RNG turbulence model, the Nielsen bed-load transport equation, and a fine mesh size. The numerical findings for single cylindrical piers correspond well with the physical model's results. Furthermore, a parameter-effectiveness study investigates the range of outcomes based on user inputs such as the bed-load equation, mesh cell size, and turbulence model, and the final numerical predictions are compared to experimental data. When the findings are compared, the error at the deepest point of the scour is about 3.8% for the single-pier case.

Keywords: local scouring, non-cohesive, clear water, computational fluid dynamics, turbulence model, bed-load equation, debris

Procedia PDF Downloads 66