Search results for: mean bias error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2466

2256 The Impact of the Cross Race Effect on Eyewitness Identification

Authors: Leah Wilck

Abstract:

Eyewitness identification is arguably one of the most utilized practices within our legal system; however, exoneration cases indicate that this practice may lead to accuracy and conviction errors. The purpose of this study was to examine the effect of the cross-race effect, the phenomenon whereby people more easily and accurately identify faces of their own racial category, on the accuracy of eyewitness identification. Participants watched three separate videos of a perpetrator trying to steal a bicycle. Across the videos, the perpetrator was a Black male, a White male, and a White female. After watching each video, participants were asked to recall everything they could about the perpetrator they witnessed. The initial results did not show the expected cross-race effect on eyewitness identification accuracy. These surprising results are discussed in terms of cross-race bias and recognition theory as well as applied implications.

Keywords: cross race effect, eyewitness identification, own-race bias, racial profiling

Procedia PDF Downloads 149
2255 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL-labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which introduces errors into CBF estimates. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s, even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25 s, and that this window narrows as dispersion occurs. Provided that dispersion levels fall below the threshold evaluated in this study, the WP can measure CBF with reasonable accuracy when dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common dispersion models at dispersion levels similar to those observed in the literature.
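
As an illustration, the single-compartment white-paper quantification formula evaluated here is straightforward to compute; a minimal sketch, assuming the recommended 3T pCASL defaults from Alsop et al. (2015) and illustrative input values only:

```python
import numpy as np

def wp_cbf(delta_m, m0, pld, tau, t1b=1.65, alpha=0.85, lam=0.9):
    """CBF in ml/100g/min from the pCASL white-paper formula:
    single-compartment, no dispersion, no ATT dependence."""
    num = 6000.0 * lam * delta_m * np.exp(pld / t1b)
    den = 2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))
    return num / den

# Example: label duration tau = 1.8 s, post-labelling delay PLD = 1.8 s
print(wp_cbf(delta_m=0.009, m0=1.0, pld=1.8, tau=1.8))  # ~78 ml/100g/min
```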

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 361
2254 Improving the LDMOS Temperature Compensation Bias Circuit to Optimize Back-Off

Authors: Antonis Constantinides, Christos Yiallouras, Christakis Damianou

Abstract:

The application of today's semiconductor transistors in high-power UHF DVB-T linear amplifiers has evolved significantly through LDMOS technology. This gives engineers the option of designing a single-transistor signal amplifier with output power and linearity that were previously unobtainable using bipolar junction transistors or first-generation MOSFETs. The stability of the LDMOS quiescent current under thermal variations guarantees robust operation in any topology of DVB-T signal amplifier. Otherwise, progressively uncontrolled heat dissipation at the LDMOS case can degrade the amplifier's crucial parameters with regard to gain, linearity, and RF stability, resulting in dysfunctional operation or total destruction of the unit. This paper presents a more sophisticated approach than the traditional biasing circuits used so far in LDMOS DVB-T amplifiers. It utilizes microprocessor control technology, providing stability in topologies where the quiescent current (IDQ) must be held precisely.

Keywords: LDMOS, amplifier, back-off, bias circuit

Procedia PDF Downloads 324
2253 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where a class is composed of several sub-clusters containing different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem; they can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. Data preprocessing handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples; decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find sub-clusters or sub-concepts in the dataset; the number of examples oversampled within each sub-cluster is determined by the complexity of the sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase classifier accuracy. In this study, a neural network is used as the classifier, since it minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for problem domains such as credit scoring, customer churn prediction, and financial distress prediction that typically involve imbalanced data sets.
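
A minimal sketch of the core idea, sub-cluster discovery via model-based clustering followed by sub-cluster-aware oversampling, is shown below. It is illustrative only: it uses BIC-selected Gaussian mixtures, takes sub-cluster size as a crude proxy for complexity, and omits the Lowner-John ellipsoid step described in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_oversample(X_min, n_target, max_k=5, seed=0):
    """Oversample a minority class: find sub-clusters with a BIC-selected
    Gaussian mixture, then draw synthetic points from each sub-cluster's
    fitted Gaussian, boosting small sub-concepts the most."""
    rng = np.random.default_rng(seed)
    fits = [GaussianMixture(k, random_state=seed).fit(X_min)
            for k in range(1, max_k + 1)]
    gmm = min(fits, key=lambda g: g.bic(X_min))     # model-based clustering
    labels = gmm.predict(X_min)
    sizes = np.bincount(labels, minlength=gmm.n_components)
    need = n_target - len(X_min)
    weights = 1.0 / np.maximum(sizes, 1)            # favour small sub-clusters
    weights /= weights.sum()
    new_points = []
    for c in range(gmm.n_components):
        n_c = int(round(need * weights[c]))
        if n_c > 0 and sizes[c] > 0:
            new_points.append(rng.multivariate_normal(
                gmm.means_[c], gmm.covariances_[c], size=n_c))
    return np.vstack([X_min] + new_points)
```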

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 404
2252 Forecasting Models for Steel Demand Uncertainty Using Bayesian Methods

Authors: Watcharin Sangma, Onsiri Chanmuang, Pitsanu Tongkhow

Abstract:

A forecasting model for steel demand uncertainty in Thailand is proposed. It consists of trend, autocorrelation, and outlier components in a hierarchical Bayesian framework. The proposed model uses a cumulative Weibull distribution function, latent first-order autocorrelation, and binary selection to account for trend, time-varying autocorrelation, and outliers, respectively. Gibbs sampling, a Markov chain Monte Carlo (MCMC) method, is used for parameter estimation. The proposed model is applied to steel demand index data in Thailand. The root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) criteria are used for model comparison. The study reveals that the proposed model is more appropriate than the exponential smoothing method.
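
The three comparison criteria are simple to compute; a small sketch:

```python
import numpy as np

def forecast_errors(actual, predicted):
    """RMSE, MAPE (%), and MAE: the three model-comparison criteria."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    e = actual - predicted
    rmse = np.sqrt(np.mean(e ** 2))
    mape = np.mean(np.abs(e / actual)) * 100.0
    mae = np.mean(np.abs(e))
    return rmse, mape, mae
```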

Keywords: forecasting model, steel demand uncertainty, hierarchical Bayesian framework, exponential smoothing method

Procedia PDF Downloads 342
2251 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security, and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using Hamming codes, which use parity bits to detect and correct single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version generated was the Hamming (16, 11, 4) code, implemented in MATLAB, and the paper compares this scheme with other EDAC mechanisms, including other versions of Hamming codes and the Cyclic Redundancy Check (CRC), and discusses the limitations of the scheme. This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds; it has a relatively higher code rate and lower bit overhead than the other versions and can detect a greater percentage of errors per code length than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
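
A minimal sketch of the construction, parity bits at power-of-two positions plus an overall parity bit giving minimum distance 4 (SEC-DED); with r=4 this is the (16, 11, 4) code discussed above. The paper's MATLAB matrix-based implementation is not reproduced.

```python
def hamming_encode(data, r=4):
    """Encode k = 2**r - 1 - r data bits into an extended Hamming codeword
    of length 2**r (r=4 gives the (16, 11, 4) SEC-DED code)."""
    n = 2 ** r - 1
    assert len(data) == n - r
    code = [0] * (n + 1)                    # 1-indexed positions 1..n
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                 # not a power of two: data slot
            code[pos] = next(it)
    for p in (1 << i for i in range(r)):    # parity positions 1, 2, 4, 8
        for pos in range(1, n + 1):
            if pos != p and pos & p:
                code[p] ^= code[pos]
    overall = 0
    for b in code[1:]:
        overall ^= b                        # extra parity bit -> distance 4
    return code[1:] + [overall]

def hamming_decode(word, r=4):
    """Correct any single-bit error; flag (uncorrectable) double errors."""
    n = 2 ** r - 1
    code = [0] + word[:n]
    syndrome = 0
    for pos in range(1, n + 1):
        if code[pos]:
            syndrome ^= pos                 # XOR of positions of 1-bits
    parity_ok = sum(word) % 2 == 0
    if syndrome and parity_ok:
        return None, "double-bit error detected"
    if syndrome:                            # single error: flip that position
        code[syndrome] ^= 1
    data = [code[pos] for pos in range(1, n + 1) if pos & (pos - 1)]
    return data, "ok"

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]    # 11 data bits
word = hamming_encode(data)                 # 16-bit codeword
word[5] ^= 1                                # inject a single-bit error
print(hamming_decode(word))                 # -> (original data, 'ok')
```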

Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset

Procedia PDF Downloads 117
2250 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function; the kernel and the bandwidth are its most important components. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE), and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator, based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
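
A sketch of the estimator's form, with an assumed fixed mixing weight and a user-supplied bandwidth standing in for the plug-in rule derived in the paper:

```python
import numpy as np
from scipy.stats import norm

def epan_icdf(u):
    """Integrated Epanechnikov kernel: int_{-inf}^{u} 0.75(1-t^2)1[|t|<=1] dt."""
    u = np.clip(u, -1.0, 1.0)
    return 0.75 * (u - u ** 3 / 3.0) + 0.5

def kdfe_combo(x, sample, h, lam=0.5):
    """Kernel CDF estimate at points x using a convex combination
    lam*Gaussian + (1-lam)*Epanechnikov of integrated kernels."""
    u = (np.asarray(x)[:, None] - np.asarray(sample)[None, :]) / h
    K = lam * norm.cdf(u) + (1.0 - lam) * epan_icdf(u)
    return K.mean(axis=1)

rng = np.random.default_rng(1)
data = rng.normal(size=200)
grid = np.linspace(-3, 3, 7)
print(kdfe_combo(grid, data, h=0.4))
```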

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 565
2249 Estimation of Slab Depth, Column Size and Rebar Location of Concrete Specimen Using Impact Echo Method

Authors: Y. T. Lee, J. H. Na, S. H. Kim, S. U. Hong

Abstract:

In this study, an experimental investigation of the estimation of slab depth, column size, and rebar location in concrete specimens is conducted using the Impact Echo (IE) method, a stress-wave-based non-destructive test. The slab specimen for depth estimation had plan dimensions of 1800×300 mm and six different depths: 150 mm, 180 mm, 210 mm, 240 mm, 270 mm, and 300 mm. Concrete column specimens were manufactured in three sizes: 300×300×300 mm, 400×400×400 mm, and 500×500×500 mm. For rebar location estimation, ∅22 mm rebar was embedded in a 300×370×200 mm specimen at cover depths of 130 mm and 150 mm, measured from the top surface to the top of the rebar. The estimation error for slab depth averaged 3.1% overall, and for column size 1.7%. The mean error for rebar location was 1.72% for the top, 1.19% for the bottom, and 1.50% overall, showing reasonable accuracy.
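
For orientation, the underlying impact-echo relation is the standard thickness formula T = βCp/(2f): the dominant resonance frequency f of the reflected stress wave, together with the P-wave speed Cp and a shape factor β (commonly taken as about 0.96 for plate-like members), yields the depth. A sketch with synthetic data:

```python
import numpy as np

def impact_echo_depth(signal, fs, cp, beta=0.96):
    """Estimate member thickness from an impact-echo record:
    T = beta * Cp / (2 * f_peak), f_peak = dominant spectral frequency."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return beta * cp / (2.0 * f_peak)

# Example: synthetic 8 kHz echo in concrete with Cp = 4000 m/s
fs, cp = 500_000, 4000.0                          # 500 kHz sampling rate
t = np.arange(0, 2e-3, 1.0 / fs)
sig = np.sin(2 * np.pi * 8000 * t) * np.exp(-t / 5e-4)
print(impact_echo_depth(sig, fs, cp))             # ~0.24 m (240 mm slab)
```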

Keywords: impact echo method, estimation, slab depth, column size, rebar location, concrete

Procedia PDF Downloads 335
2248 Building Information Modelling for Construction Delay Management

Authors: Essa Alenazi, Zulfikar Adamu

Abstract:

The Kingdom of Saudi Arabia (KSA) is no exception in relying on the growth of its construction industry to support rapid population growth. However, its need for infrastructure development is constrained by low productivity levels and cost overruns caused by factors such as delays to project completion. Delays in delivering a construction project are a global issue, and while theories such as optimism bias have been used to explain them, in KSA client-related causes of delays are also significant. The objective of this paper is to develop a framework-based approach to explore how the country's construction industry can manage and reduce delays in construction projects through building information modelling (BIM), in order to mitigate the cost consequences of such delays. It comprehensively and systematically reviews the global literature on the subject and identifies gaps, critical delay factors, and the specific benefits that BIM can deliver for delay management. A case study comprising nine hospital projects that have experienced delays and cost overruns was also carried out. Five critical client-related delay factors were identified as candidates that can be mitigated through BIM's benefits: ineffective planning and scheduling of the project; changes during construction by the client; delay in progress payments; slowness in decision making by the client; and poor communication between clients and other stakeholders. In addition, data from the case study projects strongly suggest that optimism bias is present in many of the hospital projects. Further validation via key stakeholder interviews and documentation is planned.

Keywords: building information modelling (BIM), clients perspective, delay management, optimism bias, public sector projects

Procedia PDF Downloads 315
2247 Hybrid Robust Estimation via Median Filter and Wavelet Thresholding with Automatic Boundary Correction

Authors: Alsaidi M. Altaher, Mohd Tahir Ismail

Abstract:

Wavelet thresholding has been a powerful tool in curve estimation and data analysis, but in the presence of outliers this nonparametric estimator cannot suppress them. This study proposes a new two-stage combined method that applies a median filter as a primary step before wavelet thresholding. After suppressing the outliers in a signal with the median filter, classical wavelet thresholding is applied to remove the remaining noise. We use automatic boundary corrections, employing a low-order polynomial model or a local polynomial model as a more realistic rule for correcting bias in the boundary region, instead of the classical periodicity or symmetry assumptions. A simulation experiment was conducted to evaluate the numerical performance of the proposed method. The results show strong evidence that the proposed method is extremely effective in correcting boundary bias and eliminating sensitivity to outliers.
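
A minimal sketch of the two-stage estimator, assuming SciPy and PyWavelets; it uses the universal threshold with an MAD noise estimate, and the library's default (symmetric) signal extension rather than the paper's polynomial boundary correction:

```python
import numpy as np
import pywt
from scipy.signal import medfilt

def robust_denoise(y, kernel_size=5, wavelet="db4", level=4):
    """Median filter to suppress outliers, then soft wavelet thresholding
    (universal threshold) to remove the remaining noise."""
    y_med = medfilt(y, kernel_size=kernel_size)          # stage 1: outliers
    coeffs = pywt.wavedec(y_med, wavelet, level=level)   # stage 2: wavelets
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # MAD noise scale
    thr = sigma * np.sqrt(2.0 * np.log(len(y)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
noisy = np.sin(4 * np.pi * x) + 0.2 * rng.normal(size=x.size)
noisy[rng.integers(0, x.size, 10)] += 5.0                # inject outliers
estimate = robust_denoise(noisy)
```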

Keywords: boundary correction, median filter, simulation, wavelet thresholding

Procedia PDF Downloads 419
2246 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator

Authors: K. Kouzi

Abstract:

To obtain stable, high performance from direct torque and flux control (DTFC) of a double star induction motor drive (DSIM), proper on-line adaptation of the stator resistance is very important. This is due to the variation of the stator resistance under operating conditions, which introduces error into the estimated flux position and the magnitude of the stator flux. Error in the estimated stator flux deteriorates the performance of the DTFC drive, and the effect of estimation error is especially important at low speed. Our aim is therefore to overcome the sensitivity of DTFC to stator resistance variation by proposing an on-line fuzzy estimator of the stator resistance. The fuzzy estimation method corrects the stator resistance on-line from the stator current estimation error and its variation; the fuzzy logic controller outputs the stator resistance increment. The main advantage of the suggested control algorithm is that it avoids the drive instability that may occur in certain situations and ensures tracking of the actual stator resistance. The validity of the technique and the improvement in overall system performance are confirmed by the results.

Keywords: direct torque control, dual stator induction motor, Fuzzy Logic estimation, stator resistance adaptation

Procedia PDF Downloads 309
2245 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?

Authors: Gu Pang, Bartosz Gebka

Abstract:

We forecast total container throughput demand at Indonesia's largest seaport, Tanjung Priok Port. We propose four univariate forecasting models: SARIMA, additive Seasonal Holt-Winters, multiplicative Seasonal Holt-Winters, and the Vector Error Correction Model. Our aim is to provide insight into whether forecasting total container throughput from the historical aggregated port throughput time series is superior to forecasts obtained by summing the best individual terminal forecasts. We test the monthly port and individual terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated using Mean Absolute Error and Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit. The Vector Error Correction Model provides the best model fits and forecasts for individual terminals. The total container throughput forecasts based on modelling the aggregate throughput time series are consistently better than those obtained by combining forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
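
A sketch of the winning aggregate-series approach using statsmodels, with synthetic data standing in for the monthly throughput series (not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# y: monthly container throughput (TEUs), e.g. 2003-01 .. 2013-12
y = pd.Series(np.random.default_rng(0).gamma(50, 400, 132),
              index=pd.date_range("2003-01", periods=132, freq="MS"))
train, test = y[:-12], y[-12:]

# multiplicative Seasonal Holt-Winters, as favoured for the aggregate series
fit = ExponentialSmoothing(train, trend="add", seasonal="mul",
                           seasonal_periods=12).fit()
fc = fit.forecast(12)

mae = np.mean(np.abs(test - fc))
rmse = np.sqrt(np.mean((test - fc) ** 2))
print(mae, rmse)
```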

Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput

Procedia PDF Downloads 492
2244 Is the Okun's Law Valid in Tunisia?

Authors: El Andari Chifaa, Bouaziz Rached

Abstract:

The central focus of this paper is to test whether Okun's law is valid in Tunisia. For this purpose, we used quarterly time series data for the period 1990Q1-2014Q1. Firstly, we applied the error correction model instead of the difference version of Okun's law: the Engle-Granger and Johansen tests are employed to detect a long-run association between unemployment and production, and an error correction mechanism (ECM) is used for the short-run dynamics. Secondly, we used the gap version of Okun's law, with estimation based on three band-pass filters, mathematical tools used in macroeconomics and especially in business cycle theory. The findings indicate that the inverse relationship between unemployment and output holds in both the short and the long term, and that Okun's law holds for the Tunisian economy, though with an Okun's coefficient lower than required. Our empirical results therefore have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth in a context of lower unemployment growth.
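
A sketch of the Engle-Granger two-step procedure with a short-run ECM, assuming statsmodels; u and y stand for the unemployment and output series, and the band-pass-filter gap estimation is not reproduced:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def okun_ecm(u, y):
    """Engle-Granger two-step: test cointegration between unemployment u
    and output y, then fit the short-run ECM
    du_t = a + b*dy_t + g*ect_{t-1} + e_t (g < 0 implies error correction)."""
    t_stat, p_value, _ = coint(u, y)                  # Engle-Granger test
    resid = sm.OLS(u, sm.add_constant(y)).fit().resid # long-run relation
    du, dy = np.diff(u), np.diff(y)
    ect = resid[:-1]                                  # lagged equilibrium error
    X = sm.add_constant(np.column_stack([dy, ect]))
    ecm = sm.OLS(du, X).fit()
    return p_value, ecm.params                        # [const, b, g]
```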

Keywords: Okun’s law, validity, unit root, cointegration, error correction model, bandpass filters

Procedia PDF Downloads 303
2243 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes Bayesian estimation using Markov chain Monte Carlo and Lindley's approximation, alongside maximum likelihood estimation, for the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although the estimate can be obtained numerically. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence the Markov chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are sampled via the Gibbs sampling and Metropolis-Hastings (MH) algorithms, followed by estimation of the survival and hazard functions. The methods are compared to their maximum likelihood counterparts with respect to mean square error (MSE) and absolute bias, to determine the better method for the scale and shape parameters and for the survival and hazard functions.
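
As noted above, the censored-data likelihood must be maximised numerically; a sketch of that step, where failures contribute the log density and censored units the log survival at the censoring time:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle_type1(times, censored, c_time):
    """Numerical MLE for Weibull(shape k, scale lam) under Type-I censoring:
    log S(c_time) = -(c_time/lam)**k for units still alive at c_time."""
    t_obs = times[~censored]

    def negloglik(theta):
        k, lam = np.exp(theta)                    # keep parameters positive
        ll = np.sum(np.log(k) - np.log(lam)
                    + (k - 1) * np.log(t_obs / lam) - (t_obs / lam) ** k)
        ll += censored.sum() * (-(c_time / lam) ** k)
        return -ll

    res = minimize(negloglik, x0=np.log([1.0, np.mean(times)]),
                   method="Nelder-Mead")
    return np.exp(res.x)                          # (k_hat, lam_hat)

rng = np.random.default_rng(0)
t = rng.weibull(1.5, 300) * 10.0                  # true k=1.5, lam=10
cens = t > 12.0                                   # Type-I censoring at 12
t = np.minimum(t, 12.0)
print(weibull_mle_type1(t, cens, 12.0))
```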

Keywords: weibull distribution, bayesian method, markov chain monte carlo, survival and hazard functions

Procedia PDF Downloads 466
2242 Multimodal Direct Neural Network Positron Emission Tomography Reconstruction

Authors: William Whiteley, Jens Gregor

Abstract:

In recent developments of direct neural network based positron emission tomography (PET) reconstruction, two prominent architectures have emerged for converting measurement data into images: 1) networks that contain fully-connected layers; and 2) networks that primarily use a convolutional encoder-decoder architecture. In this paper, we present a multi-modal direct PET reconstruction method called MDPET, which is a hybrid approach that combines the advantages of both types of networks. MDPET processes raw data in the form of sinograms and histo-images in concert with attenuation maps to produce high quality multi-slice PET images (e.g., 8x440x440). MDPET is trained on a large whole-body patient data set and evaluated both quantitatively and qualitatively against target images reconstructed with the standard PET reconstruction benchmark of iterative ordered subsets expectation maximization. The results show that MDPET outperforms the best previously published direct neural network methods in measures of bias, signal-to-noise ratio, mean absolute error, and structural similarity.

Keywords: deep learning, image reconstruction, machine learning, neural network, positron emission tomography

Procedia PDF Downloads 97
2241 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A carefully considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in English. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan, and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD=5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies, or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative ai, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 13
2240 Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries

Authors: Elham Alaee, Mousa Shamsi, Hossein Ahmadi, Soroosh Nazem, Mohammad Hossein Sedaaghi

Abstract:

The human face has a fundamental role in the appearance of individuals, so the importance of facial surgeries is undeniable, and appropriate, accurate facial skin segmentation is needed in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not handle noisy images and outliers well, in this paper we exploit the Possibilistic C-Means (PCM) algorithm to segment the facial skin. For this purpose, we first convert facial images from the RGB to the YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. For comparison with the proposed algorithm, the FCM and Expectation-Maximization (EM) algorithms are also applied to facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region area error of 0.045.
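
A minimal sketch of the pipeline: chrominance features from an RGB-to-YCbCr conversion followed by possibilistic clustering. The typicality update is the standard Krishnapuram-Keller form; the bandwidth initialisation here is a simplification, not the paper's exact procedure.

```python
import numpy as np

def pcm(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal Possibilistic C-Means: typicalities
    t_ik = 1 / (1 + (d_ik^2 / eta_i)^(1/(m-1))), so noisy pixels get low
    typicality in every cluster instead of being forced into one."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]           # initial centers
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # squared distances
    hard = d2.argmin(1)                                   # initial partition
    eta = np.array([d2[hard == i, i].mean() + 1e-12 for i in range(c)])
    for _ in range(iters):
        T = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
        Tm = T ** m
        V = (Tm.T @ X) / Tm.sum(0)[:, None]               # center update
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return T, V

def rgb_to_cbcr(img):
    """JPEG-style RGB -> (Cb, Cr) chrominance features for skin clustering."""
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    B = img[..., 2].astype(float)
    cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B
    return np.stack([cb.ravel(), cr.ravel()], axis=1)

# typicalities, centers = pcm(rgb_to_cbcr(image), c=2)  # skin vs. background
```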

Keywords: facial image, segmentation, PCM, FCM, skin error, facial surgery

Procedia PDF Downloads 575
2239 Low-Cost Reversible Logic Serial Multipliers with Error Detection Capability

Authors: Mojtaba Valinataj

Abstract:

Reversible logic has recently received much attention as a new field for reducing power consumption. On the other hand, processing systems are vulnerable to various external effects. In this paper, error-detecting reversible logic serial multipliers are proposed by incorporating parity-preserving gates. In this way, new designs are presented for signed parity-preserving serial multipliers based on Booth's algorithm, exploiting new arrangements of existing gates. The experimental results show that the proposed 4×4 multipliers reach up to 20%, 35%, and 41% improvements in the number of constant inputs, quantum cost, and gate count, respectively, as the reversible logic criteria, compared to previous designs. Furthermore, all the proposed designs have been generalized to n×n multipliers, with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.
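
For readers unfamiliar with the underlying recoding, a sketch of radix-2 Booth multiplication on n-bit two's-complement operands is given below; it illustrates the algorithm the serial multipliers implement, not the reversible-gate realisation itself.

```python
def booth_multiply(m, q, n):
    """Radix-2 Booth multiplication of two n-bit two's-complement integers:
    scan multiplier bit pairs (Q0, Q-1); 01 -> add M, 10 -> subtract M,
    then arithmetic-shift the combined A:Q:Q-1 register right."""
    mask = (1 << n) - 1
    A, Q, q_1 = 0, q & mask, 0
    M = m & mask
    for _ in range(n):
        pair = (Q & 1, q_1)
        if pair == (0, 1):
            A = (A + M) & mask
        elif pair == (1, 0):
            A = (A - M) & mask
        # arithmetic right shift of A:Q:q_1
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A >> (n - 1)
        A = ((A >> 1) | (sign << (n - 1))) & mask
    result = (A << n) | Q
    if result >> (2 * n - 1):               # interpret as signed 2n-bit value
        result -= 1 << (2 * n)
    return result

print(booth_multiply(3, -4, n=4))           # -12
print(booth_multiply(6, 7, n=4))            # 42
```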

Keywords: Booth’s algorithm, error detection, multiplication, parity-preserving gates, quantum computers, reversible logic

Procedia PDF Downloads 216
2238 Error Analysis in Academic Writing of EFL Learners: A Case Study for Undergraduate Students at Pathein University

Authors: Aye Pa Pa Myo

Abstract:

Writing in English is regarded as a complex process for learners of English as a foreign language, and committing errors is an inevitable part of language learners' writing. Generally, academic writing is quite difficult for most students to manage well enough to earn good scores. Error analysis deals with identifying and detecting errors and explaining the reasons for their occurrence. In this paper, the researcher attempts to examine the common errors of undergraduate students in their academic writing at Pathein University. The purpose of this research is to investigate the errors which students usually commit in academic writing and to find better ways of correcting these errors in EFL classrooms. Fifty third-year non-English specialization students attending Pathein University were selected as participants. The research was conducted over one month using a mixed methodology; two mini-tests were used as research tools, and the data were collected quantitatively. The findings indicated that most students noticed their common errors after receiving the necessary input and committed these errors less often after taking the mini-tests; hence, the findings will support further research related to error analysis in academic writing.

Keywords: academic writing, error analysis, EFL learners, mini-tests, mixed methodology

Procedia PDF Downloads 123
2237 Wind Power Forecast Error Simulation Model

Authors: Josip Vasilj, Petar Sarajcev, Damir Jakus

Abstract:

One of the major difficulties introduced by wind power penetration is the inherent uncertainty in production originating from uncertain wind conditions. This uncertainty impacts many aspects of power system operation, especially the balancing power requirements. For this reason, in power system development planning it is necessary to evaluate the potential uncertainty in future wind power generation, which requires simulation models reproducing the performance of wind power forecasts. This paper presents wind power forecast error simulation models based on stochastic process simulation. The proposed models capture the most important statistical parameters recognized in wind power forecast error time series. Two distinct models are presented based on data availability: the first uses wind speed measurements at potential or existing wind power plant locations, while the second uses the statistical distribution of wind speeds.
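
A minimal sketch of the idea, modelling the forecast error as a stationary AR(1) process whose standard deviation and lag-1 autocorrelation are matched to the observed error series; the paper's full set of statistical parameters is not reproduced:

```python
import numpy as np

def simulate_forecast_error(n_steps, n_runs, sigma, phi, seed=0):
    """Monte Carlo simulation of wind power forecast error as a stationary
    AR(1) process e_t = phi*e_{t-1} + w_t with target std sigma and
    lag-1 autocorrelation phi."""
    rng = np.random.default_rng(seed)
    w_sigma = sigma * np.sqrt(1.0 - phi ** 2)     # innovation std
    e = np.zeros((n_runs, n_steps))
    e[:, 0] = rng.normal(0.0, sigma, n_runs)
    for t in range(1, n_steps):
        e[:, t] = phi * e[:, t - 1] + rng.normal(0.0, w_sigma, n_runs)
    return e

errors = simulate_forecast_error(n_steps=24, n_runs=10_000,
                                 sigma=0.12, phi=0.8)   # p.u. of capacity
print(errors.std(), np.corrcoef(errors[:, :-1].ravel(),
                                errors[:, 1:].ravel())[0, 1])
```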

Keywords: wind power, uncertainty, stochastic process, Monte Carlo simulation

Procedia PDF Downloads 470
2236 Co-Integration Model for Predicting Inflation Movement in Nigeria

Authors: Salako Rotimi, Oshungade Stephen, Ojewoye Opeyemi

Abstract:

The maintenance of price stability is one of the macroeconomic challenges facing Nigeria as a nation. This paper attempts to build a cointegrated multivariate time series model for inflation movement in Nigeria, using data extracted from the abstract of statistics of the Central Bank of Nigeria (CBN) from 2008 to 2017. The Johansen cointegration test suggests at least one cointegrating vector describing the long-run relationship between the Consumer Price Index (CPI), the Food Price Index (FPI), and the Non-Food Price Index (NFPI). All three series show an increasing pattern, indicating non-stationarity in each series. Furthermore, model predictability was established with root mean square error, mean absolute error, mean absolute percentage error, and Theil's statistics for n-step forecasting. The results show that the consumer price index (CPI) has a positive long-run relationship with the food price index (FPI) and the non-food price index (NFPI).
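
A sketch of the Johansen rank test with statsmodels, on synthetic stand-in series (the CBN data are not reproduced here); lr1 holds the trace statistics and cvt the 90/95/99% critical values:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# columns: CPI, FPI, NFPI (monthly index levels; synthetic stand-ins)
df = pd.DataFrame(np.cumsum(np.random.default_rng(0).normal(0.5, 1, (120, 3)),
                            axis=0), columns=["CPI", "FPI", "NFPI"])

result = coint_johansen(df, det_order=0, k_ar_diff=1)
for r, (trace, cvs) in enumerate(zip(result.lr1, result.cvt)):
    print(f"H0: rank <= {r}: trace = {trace:.2f}, 5% cv = {cvs[1]:.2f}")
```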

Keywords: economic, inflation, model, series

Procedia PDF Downloads 232
2235 Bit Error Rate (BER) Performance of Coherent Homodyne BPSK-OCDMA Network for Multimedia Applications

Authors: Morsy Ahmed Morsy Ismail

Abstract:

In this paper, the structure of a coherent homodyne receiver for a Binary Phase Shift Keying (BPSK) Optical Code Division Multiple Access (OCDMA) network is introduced, based on the Multi-Length Weighted Modified Prime Code (ML-WMPC) for multimedia applications. The Bit Error Rate (BER) of this homodyne detection is evaluated as a function of the number of active users and the signal-to-noise ratio for different code lengths, according to the multimedia application (audio, voice, or video). A Mach-Zehnder interferometer is used as the external phase modulator in the homodyne detection. Furthermore, the Multiple Access Interference (MAI) and the receiver noise in a shot-noise-limited regime are taken into account in the BER calculations.
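
A sketch of the BER evaluation under the common Gaussian approximation for MAI; the per-user interference variance is an assumed illustrative parameter, not the ML-WMPC-specific value derived in the paper:

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber(eb_n0_db, n_users=1, mai_var_per_user=0.0):
    """BER of coherent BPSK with multiple access interference from the
    other users approximated as additive Gaussian noise:
    Pb = 0.5 * erfc(sqrt(SNR_eff))."""
    eb_n0 = 10.0 ** (np.asarray(eb_n0_db) / 10.0)
    noise = 1.0 / eb_n0 + (n_users - 1) * mai_var_per_user
    return 0.5 * erfc(np.sqrt(1.0 / noise))

for k in (1, 16, 32):                          # active users
    print(k, bpsk_ber(12.0, n_users=k, mai_var_per_user=2e-3))
```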

Keywords: OCDMA networks, bit error rate, multiple access interference, binary phase-shift keying, multimedia

Procedia PDF Downloads 159
2234 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

Richards' growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model; the hyperbolic approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, assessed using the coefficient of determination (R²), Mean Absolute Error (MAE), and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to test the behavior of the error term for possible violations. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Richards nonlinear growth model than under the classical Richards growth model.
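
For orientation, a sketch of fitting the classical Richards mean curve and computing the comparison statistics; the parametrisation below is one common form of the Richards equation, the sinh-modified hyperbolic variant is not reproduced, and the height-age values are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, nu, k, t0):
    """Classical Richards (generalised logistic) mean growth curve."""
    return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

# age (years) and top height (m) -- illustrative values only
age = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], float)
height = np.array([1.8, 4.9, 8.6, 12.0, 14.8, 17.0, 18.5, 19.6, 20.3, 20.8])

p, _ = curve_fit(richards, age, height, p0=[22.0, 1.0, 0.3, 6.0], maxfev=10000)
pred = richards(age, *p)
res = height - pred
r2 = 1.0 - np.sum(res ** 2) / np.sum((height - height.mean()) ** 2)
mae, mse = np.mean(np.abs(res)), np.mean(res ** 2)
print(r2, mae, mse)
```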

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richard's, stochastic

Procedia PDF Downloads 467
2233 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data, to cope with situations in which existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted for determining more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with others, three different criteria are adopted. Results from commonly used numerical examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also has satisfactory and stable performance in terms of total estimation error under these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 343
2232 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems

Authors: Nikolaos Georgoulopoulos, Alkis Hatzopoulos, Konstantinos Karamitsios, Konstantinos Kotrotsios, Alexandros I. Metsai

Abstract:

In modern server systems, business-critical applications run on different types of infrastructure, such as cloud systems, physical machines, and virtualization. Often, due to high load and over time, various hardware faults occur in servers that translate into errors, resulting in malfunction or even server breakdown. The CPU, RAM, and hard drive (HDD) are the hardware parts that concern server administrators the most regarding errors. In this work, selected RAM, HDD, and CPU errors that have been observed in, or can be simulated in, kernel ring buffer log files from two groups of Linux servers are investigated, and a severity characterization is given for each error type. Better understanding of such errors can lead to more efficient analysis of kernel logs, which are usually exploited for fault diagnosis and prediction. In addition, this work summarizes ways of simulating hardware errors in RAM and HDD in order to test the error detection and correction mechanisms of a Linux server.
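
A sketch of the kind of log scan such an analysis builds on, counting error classes in a kernel log file; the regular expressions and severity labels are illustrative assumptions, since the exact message formats vary by kernel version and driver:

```python
import re
from collections import Counter

# Illustrative patterns only; real deployments should match the exact
# strings emitted by their kernel version and EDAC/MCE drivers.
PATTERNS = {
    "cpu_mce":  re.compile(r"mce|machine check", re.I),
    "ram_edac": re.compile(r"edac.*(ce|ue)|memory error", re.I),
    "hdd_io":   re.compile(r"(ata\d+|sd[a-z]).*(error|failed)|i/o error", re.I),
}
SEVERITY = {"cpu_mce": "critical", "ram_edac": "warning-to-critical",
            "hdd_io": "critical"}

counts = Counter()
with open("/var/log/kern.log", errors="replace") as log:
    for line in log:
        for kind, pat in PATTERNS.items():
            if pat.search(line):
                counts[kind] += 1

for kind, n in counts.items():
    print(f"{kind}: {n} lines (severity: {SEVERITY[kind]})")
```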

Keywords: hardware errors, Kernel logs, Linux servers, RAM, hard disk, CPU

Procedia PDF Downloads 145
2231 Understanding the Interplay between Consumer Knowledge, Trust and Relationship Satisfaction in Financial Services

Authors: Torben Hansen, Lars Gronholdt, Alexander Josiassen, Anne Martensen

Abstract:

Consumers often exhibit a bias in their knowledge: they think that they know more or less than they actually do. The concept of knowledge over/underconfidence (O/U) has been used in previous studies to investigate such knowledge bias. O/U appears as a combination of subjective and objective knowledge. Subjective knowledge relates to consumers' perception of their knowledge, while objective knowledge relates to consumers' absolute knowledge measured by objective standards. This separation leads to three scenarios: the consumer can be knowledge calibrated (subjective and objective knowledge are similar), overconfident (subjective knowledge exceeds objective knowledge), or underconfident (objective knowledge exceeds subjective knowledge). Knowledge O/U is a highly useful concept in understanding consumer choice behavior. For example, knowledge-overconfident individuals are likely to exaggerate their ability to make the right choices, are more likely to opt out of necessary information search, spend less time carrying out a specific task than less knowledge-confident consumers, and are more likely to show high financial trading volumes. Using financial services as a case study, this study contributes to previous research by examining how consumer knowledge O/U affects two types of trust (broad-scope trust and narrow-scope trust) and consumer relationship satisfaction. Trust does not only concern consumer trust in individual companies (i.e., narrow-scope trust, NST), but also consumer confidence in the broader business context in which consumers plan and implement their behavior (i.e., broad-scope trust, BST). NST is defined as 'the expectation that the service provider can be relied on to deliver on its promises,' while BST is defined as 'the expectation that companies within a particular business type can generally be relied on to deliver on their promises.' This study expands our understanding of the interplay between consumer knowledge bias, consumer trust, and relationship marketing in two main ways. First, it is demonstrated that the more knowledge over/underconfident a consumer becomes, the higher/lower NST and relationship satisfaction will be. Second, it is demonstrated that BST has a negative moderating effect on the relationship between knowledge O/U and satisfaction, such that knowledge O/U has a greater positive/negative effect on relationship satisfaction when BST is low rather than high. The data for this study comprise 756 mutual fund investors. Trust is particularly important in consumers' mutual fund behavior because mutual funds have important responsibilities in providing financial advice and in managing consumers' funds.

Keywords: knowledge, cognitive bias, trust, customer-seller relationships, financial services

Procedia PDF Downloads 290
2230 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission

Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong

Abstract:

Medical images are an integral part of e-health care and e-diagnosis systems. Medical image watermarking is widely used to protect patients' information from malicious alteration and manipulation. Watermarked medical images are transmitted over the internet among patients and primary and referred physicians. The images are highly prone to corruption in the wireless transmission medium due to various noise sources, deflection, and refraction. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address this issue, this paper adds an error correction code (ECC), the (8, 4) Hamming code, to an existing watermarking system. In addition, we implement the computationally demanding ECC on a graphics processing unit (GPU) to accelerate it and meet real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation while maintaining 100% ECC efficiency.

Keywords: medical image watermarking, e-health system, error correction, Hamming code, GPU

Procedia PDF Downloads 277
2229 An Alternative Richards’ Growth Model Based on Hyperbolic Sine Function

Authors: Samuel Oluwafemi Oyamakin, Angela Unna Chukwu

Abstract:

Richards' growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model; the hyperbolic approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, assessed using the coefficient of determination (R²), Mean Absolute Error (MAE), and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to test the behavior of the error term for possible violations. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Richards nonlinear growth model than under the classical Richards growth model.

Keywords: height, diameter at breast height, DBH, hyperbolic sine function, Pinus caribaea, Richards' growth model

Procedia PDF Downloads 379
2228 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes

Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu

Abstract:

The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T-cell and B-cell receptor genes. These distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify recombined receptor genes using multiple primers and high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted by primer bias. To eliminate primer bias, 5' RACE is an alternative amplification approach. However, application of the RACE approach is limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., containing intronic segments) and by the lack of a convenient analysis tool. We propose a computational tool that correctly identifies non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene, instead of only the exon regions as done in all other tools. Using our tool, the remaining regular data allow an accurate profiling of the immune repertoire. In addition, we improve the RACE approach to yield a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach; the results reveal significant differences in the frequencies of VJ combinations between the two approaches. Together, we provide a new experimental and computational pipeline for unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infection, detecting T-cell lymphoma and minimal residual disease, and monitoring cancer immunotherapy, our work should benefit scientists interested in these applications.

Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment

Procedia PDF Downloads 179
2227 The Effect of Undernutrition on Sputum Culture Conversion and Treatment Outcomes among People with Multidrug-Resistant Tuberculosis: A Systematic Review and Meta-Analysis

Authors: Fasil Wagnew, Kerri Viney, Kefyalew Addis Alene, Matthew Kelly, Darren Gray

Abstract:

Background: Undernutrition is a risk factor for tuberculosis (TB), including for poor treatment outcomes. However, the effect of undernutrition on TB treatment outcomes is not well understood. We aimed to evaluate the effect of undernutrition on sputum culture conversion and treatment outcomes among people with multidrug-resistant (MDR) TB. Methods: We searched for publications in the Medline, Embase, Scopus, and Web of Science databases without restrictions on geography or year of publication. We conducted random-effects meta-analyses to estimate the effects of undernutrition on sputum culture conversion and treatment outcomes. Two reviewers independently assessed study eligibility, extracted the necessary information, and assessed risk of bias. Depending on the nature of the data, odds ratios (OR) and hazard ratios (HR) with 95% confidence intervals (CIs) were used to summarize the effect estimates. Potential publication bias was checked using funnel plots and Egger's tests. Results: Of 2358 records screened, 59 studies comprising a total of 31,254 people with MDR-TB were included. Undernutrition was significantly associated with a lower sputum culture conversion rate (HR 0.7, 95% CI 0.6-0.9, I²=67.1%), a higher rate of mortality (OR 2.9, 95% CI 2.1-3.8, I²=23.7%), and unfavourable treatment outcomes (OR 1.8, 95% CI 1.5-2.0, I²=72.7%). There was no statistically significant publication bias in the included studies. Three studies were of low, forty-two of moderate, and fourteen of high quality. Interpretation: Undernutrition was significantly associated with unfavourable treatment outcomes, including mortality and lower sputum culture conversion, among people with MDR-TB. These findings have implications for supporting targeted nutritional interventions alongside standardised second-line TB drugs.
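
A sketch of the random-effects pooling step (DerSimonian-Laird) on the log odds-ratio scale, with illustrative inputs rather than the study data:

```python
import numpy as np

def dersimonian_laird(or_list, ci_low, ci_high):
    """Random-effects pooling of odds ratios (DerSimonian-Laird). Standard
    errors are recovered from the 95% CIs on the log scale."""
    y = np.log(or_list)                                   # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se ** 2                                     # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)    # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
    w_star = 1.0 / (se ** 2 + tau2)                       # random-effects weights
    pooled = np.sum(w_star * y) / w_star.sum()
    se_pooled = np.sqrt(1.0 / w_star.sum())
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # heterogeneity
    return np.exp(pooled), ci, i2

print(dersimonian_laird([2.5, 3.4, 2.1], [1.6, 2.0, 1.1], [3.9, 5.8, 4.0]))
```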

Keywords: undernutrition, MDR-TB, sputum culture conversion, treatment outcomes, meta-analysis

Procedia PDF Downloads 138