Search results for: type-I error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1893

1743 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from homicides vary widely: some are complete and some are not, depending on the cause of death or the form of the homicide. For example, some corpses are cut into pieces, some are concealed by dumping them into a river, some are buried, and some are burned to destroy evidence. When a corpse is incomplete, personal identification becomes difficult because some tissues and bones have been destroyed. The most precise method for determining the sex of skeletal remains is DNA identification; however, it is costly and time-consuming, so other identification techniques are often used instead. The first widely used technique is examination of bone features. In general, evidence from the corpse, such as certain bones, especially the skull and pelvis, can be used to determine sex. This technique requires forensic scientists to have trained observation skills in order to distinguish male from female bones. Although it is uncomplicated, saves time and cost, and allows sex to be determined fairly accurately (reported accuracy rates of 90% or more), its crucial disadvantage is that only a few skeletal landmarks can be used, such as the supraorbital ridge, nuchal crest, temporal bone, mandible, and chin; the remains therefore have to be complete. The other technique widely used for sex determination in forensic science and archaeology is skeletal measurement. Its advantage is that several positions on a single bone can be measured, and it can be applied even when the bones are incomplete. In this study, classification and cluster analysis are applied to this technique, including kth nearest neighbor classification, the classification tree, Ward linkage clustering, k-means clustering, and two-step clustering. The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is assessed using the apparent error rate (APER). The results indicate that the two-step cluster and kth nearest neighbor methods are suitable for determining sex from human skeletal remains, as they yield small apparent error rates of 0.20% and 4.14%, respectively. In contrast, the classification tree, Ward linkage clustering, and k-means clustering are less appropriate, with large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate with the holdout procedure or incorporating misclassification costs, and different methods may lead to different conclusions.
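
The apparent error rate (APER) is simply the fraction of observations misclassified when the fitted classifier is applied back to the data used to build it. A minimal sketch for a kth nearest neighbor sex classifier on skeletal measurements; the arrays and the choice k = 5 are illustrative placeholders, not values from the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 507 individuals by 9 skeletal diameter measurements,
# labels 0 = female, 1 = male (the real dataset is not reproduced here).
rng = np.random.default_rng(0)
X = rng.normal(size=(507, 9))
y = rng.integers(0, 2, size=507)

knn = KNeighborsClassifier(n_neighbors=5)  # k is an assumption
knn.fit(X, y)

# APER: misclassification rate on the same data used for fitting.
aper = np.mean(knn.predict(X) != y)
print(f"apparent error rate: {aper:.2%}")
```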

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 250
1742 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller controls and tracks the load power effectively and smoothly compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
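
The four error criteria used to score the controller are time integrals of the tracking error. A rough sketch that accumulates IAE, ITAE, ISE, and ITSE while a PID loop drives a generic first-order plant; the plant model and gains are invented placeholders, not the PWR core transfer function from the paper:

```python
import numpy as np

# Placeholder plant x' = (-x + u)/tau with illustrative PID gains.
tau, dt, T = 5.0, 0.01, 60.0
kp, ki, kd = 2.0, 0.5, 0.1
x, integ, prev_e = 0.0, 0.0, 0.0
iae = itae = ise = itse = 0.0
for k in range(int(T / dt)):
    t = k * dt
    e = 1.0 - x                       # unit-step power reference
    integ += e * dt
    deriv = (e - prev_e) / dt
    u = kp * e + ki * integ + kd * deriv
    x += dt * (-x + u) / tau          # Euler step of the plant
    prev_e = e
    iae += abs(e) * dt                # integral absolute error
    itae += t * abs(e) * dt           # integral time absolute error
    ise += e * e * dt                 # integral square error
    itse += t * e * e * dt            # integral time square error
print(f"IAE={iae:.3f} ITAE={itae:.3f} ISE={ise:.3f} ITSE={itse:.3f}")
```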

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

Procedia PDF Downloads 153
1741 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from them. Finally, benchmarking against other furniture manufacturers was undertaken to assess the relevance and level of use of the models in other furniture firms.
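
For concreteness, here is a toy aggregate planning linear program solved with scipy: choose production and end-of-period inventory over three periods to minimize cost, subject to demand balance and a capacity bound. The demand, costs, and capacity are invented for illustration and are not figures from the case study:

```python
import numpy as np
from scipy.optimize import linprog

demand = np.array([100.0, 150.0, 120.0])   # illustrative period demand
c_prod, c_inv, cap = 10.0, 2.0, 140.0      # illustrative costs and capacity

# Decision variables: [p1, p2, p3, i1, i2, i3]
c = [c_prod] * 3 + [c_inv] * 3
# Inventory balance i_t = i_{t-1} + p_t - d_t with i_0 = 0,
# written as equality constraints on [p, i].
A_eq = [[1, 0, 0, -1, 0, 0],
        [0, 1, 0, 1, -1, 0],
        [0, 0, 1, 0, 1, -1]]
res = linprog(c, A_eq=A_eq, b_eq=demand,
              bounds=[(0, cap)] * 3 + [(0, None)] * 3)
print(res.x, res.fun)   # optimal plan builds inventory ahead of peak demand
```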

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 555
1740 Error Analysis of Wavelet-Based Image Steganography Scheme

Authors: Geeta Kasana, Kulbir Singh, Satvinder Singh

Abstract:

In this paper, a steganographic scheme for digital images using the Integer Wavelet Transform (IWT) is proposed. The cover image is decomposed into wavelet subbands using the IWT. Each subband is divided into blocks of equal size, and secret data is embedded into the largest and smallest pixel values of each block of the subband. The visual quality of the stego images is acceptable: since the PSNR between the cover image and the stego image is above 40 dB, imperceptibility is maintained. Experimental results show a better tradeoff between capacity and visual perceptibility compared to existing algorithms. The maximum possible error is analyzed for each of the wavelet subbands of an image.
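
The 40 dB figure quoted above follows from the usual PSNR definition for 8-bit images; a minimal sketch (the toy ±1 embedding stands in for the actual IWT-based embedding):

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between a cover image and its stego version."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy example: an embedding that perturbs pixels by at most one gray level.
cover = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
stego = np.clip(cover.astype(int)
                + np.random.choice([-1, 0, 1], cover.shape), 0, 255)
print(psnr(cover, stego))   # well above 40 dB for +/-1 embedding
```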

Keywords: DWT, IWT, MSE, PSNR

Procedia PDF Downloads 503
1739 Feedback in the Language Class: An Action Research Process

Authors: Arash Golzari Koloor

Abstract:

Feedback seems to be an inseparable part of teaching a second/foreign language. One type of feedback is corrective feedback, a form of error treatment in second language classrooms. This study reports on the types of corrective feedback employed in an IELTS preparation course. The types of feedback, their frequencies, and their effectiveness are enlisted, enumerated, and interpreted. The results showed that explicit correction and recast were the most frequent types of feedback, while repetition and elicitation were the least frequent. The results also revealed that metalinguistic feedback, elicitation, and explicit correction were the most effective types of feedback and greatly affected learners' performance.

Keywords: classroom interaction, corrective feedback, error treatment, oral performance

Procedia PDF Downloads 333
1738 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry

Authors: Nirmal Yadav, Tanuja Srivastava

Abstract:

Shannon sampling theory is an exact method for recovering a band-limited signal from its sampled values in discrete implementations, using sinc interpolators. However, sinc-based results are not very satisfactory for band-limited computations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is therefore an approximation algorithm. In this paper, the error arising from this approximate nature of the reconstruction algorithm is calculated. The result is derived for fan beam projection data, which is faster to acquire than parallel beam projection data.
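
To make the windowing idea concrete, the sketch below compares plain Shannon sinc interpolation with a kernel truncated and tapered by a compactly supported window. The Hann taper and the test signal are our illustrative choices, not the specific window analyzed in the paper:

```python
import numpy as np

def sinc_reconstruct(samples, T, t, half_width=None):
    """x(t) = sum_n x[n] * w((t - nT)/T) * sinc((t - nT)/T).

    half_width=None gives plain Shannon interpolation; otherwise the
    kernel is truncated and tapered by a Hann window of that support.
    """
    n = np.arange(len(samples))
    u = (t[:, None] - n[None, :] * T) / T       # normalized offsets
    kernel = np.sinc(u)                         # np.sinc is sin(pi x)/(pi x)
    if half_width is not None:
        w = np.where(np.abs(u) <= half_width,
                     0.5 * (1 + np.cos(np.pi * u / half_width)), 0.0)
        kernel *= w                             # enforce compact support
    return kernel @ samples

T = 0.1
x = np.sin(2 * np.pi * 1.5 * np.arange(64) * T)   # band-limited test signal
t = np.linspace(0.5, 5.5, 200)
err = sinc_reconstruct(x, T, t) - sinc_reconstruct(x, T, t, half_width=6)
print("max approximation error from windowing:", np.max(np.abs(err)))
```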

Keywords: computed tomography, convolution backprojection, radon transform, fan beam

Procedia PDF Downloads 490
1737 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with each other and with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to gather some prior information (distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the exponential distribution for finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square error (MSE) of the estimators is obtained and presented graphically.
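
For the exponential rate λ, the Gamma(a, b) prior is conjugate: after observing x₁, …, xₙ the posterior is Gamma(a + n, b + Σxᵢ), and under squared error loss the Bayes estimator is the posterior mean. A small Monte Carlo sketch comparing its MSE with that of the MLE (the hyperparameters and true λ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, n, a, b = 2.0, 15, 2.0, 1.0   # true rate; gamma prior shape/rate

reps, mse_mle, mse_bayes = 20000, 0.0, 0.0
for _ in range(reps):
    x = rng.exponential(scale=1.0 / lam_true, size=n)
    mle = n / x.sum()                   # maximum likelihood estimator
    bayes = (a + n) / (b + x.sum())     # posterior mean = Bayes est. (SE loss)
    mse_mle += (mle - lam_true) ** 2
    mse_bayes += (bayes - lam_true) ** 2
print("MSE(MLE):  ", mse_mle / reps)
print("MSE(Bayes):", mse_bayes / reps)
```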

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, Squared Error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 383
1736 A Low Order Thermal Envelope Model for Heat Transfer Characteristics of Low-Rise Residential Buildings

Authors: Nadish Anand, Richard D. Gould

Abstract:

A simple model is introduced for determining the thermal characteristics of a low-rise residential (LRR) building and predicting the energy usage of its heating, ventilation and air conditioning (HVAC) system as weather conditions, reflected in the ambient (outside air) temperature, change. The LRR building is treated as a single lump for solving the heat transfer problem, and the model is derived using the lumped capacitance model of transient conduction heat transfer. Most contemporary HVAC systems have a thermostat control with an offset temperature and user-defined set point temperatures, which define when the HVAC system switches on and off. The aim is therefore to predict, with minimal error, the body temperature (i.e., the inside air temperature), which determines the switching of the HVAC system. To validate the mathematical model derived from lumped capacitance, we used the EnergyPlus simulation engine, which simulates buildings with considerable accuracy. Through the low-order model, we predicted the inside air temperature of a single house placed in three different climate zones (Detroit, Raleigh, and Austin) with different orientations for the summer and winter seasons. For the same day used for model parameter calculation, the prediction error was below 10% in winter for almost all orientations and climate zones, whereas in summer it was below 10% for all orientations only in the higher-latitude climate zones (Raleigh and Detroit). Possible factors responsible for the large variations are also noted in the work, paving the way for future research.
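
A minimal sketch of the lumped-capacitance idea with a thermostat deadband: the inside air is a single thermal mass exchanging heat with the ambient through an effective resistance, and the HVAC switches around the set point. All parameter values are invented placeholders, not fitted values from the paper:

```python
import numpy as np

R, C = 0.005, 2.0e7             # effective resistance [K/W], capacitance [J/K]
set_point, offset = 21.0, 0.5   # thermostat set point and offset [deg C]
q_hvac, dt = 8000.0, 60.0       # heating power when on [W], time step [s]

T_in, hvac_on = 20.0, False
t = np.arange(0, 24 * 3600, dt)
T_out = 5.0 + 3.0 * np.sin(2 * np.pi * t / 86400)   # synthetic winter day

for T_o in T_out:
    # Thermostat with offset: on below the band, off above it.
    if T_in < set_point - offset:
        hvac_on = True
    elif T_in > set_point + offset:
        hvac_on = False
    q = q_hvac if hvac_on else 0.0
    # Lumped capacitance balance: C dT_in/dt = (T_out - T_in)/R + q
    T_in += dt * ((T_o - T_in) / R + q) / C
print(f"final inside air temperature: {T_in:.2f} C")
```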

Keywords: building energy, energy consumption, energy+, HVAC, low order model, lumped capacitance

Procedia PDF Downloads 265
1735 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping

Authors: Jie Xu, Zengshan Tian, Ze Li

Abstract:

Previous indoor ranging or localization systems achieving high-accuracy time of flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchronism errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is implemented based on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to remove the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution, achieving, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implemented this theory on a software radio platform; the actual experimental results show that the proposed method has a median ranging error of 5.4 cm at a 5 m range, 7 cm at 10 m, and 10.8 cm at 20 m for a total bandwidth of 80 MHz.
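
The core geometry of carrier-phase ranging across hops: a round trip at frequency f accrues phase 2πf(2d/c), so the slope of the unwrapped phase versus hop frequency yields the two-way ToF. A toy sketch under ideal conditions (noise-free, asynchronous errors already cancelled by the two-way step); the hop plan is an invented placeholder:

```python
import numpy as np

c = 3e8
d_true = 7.0                                  # meters (illustrative)
freqs = 5.18e9 + np.arange(0, 80e6, 10e6)     # 8 hops across an 80 MHz band

# Round-trip carrier phase at each hop, observed modulo 2*pi.
phase = -2 * np.pi * freqs * (2 * d_true / c)
wrapped = np.angle(np.exp(1j * phase))

# Unwrap across hops, then estimate ToF from the phase-frequency slope.
slope = np.polyfit(freqs, np.unwrap(wrapped), 1)[0]
tof = -slope / (2 * np.pi)                    # two-way time of flight
print("estimated distance:", tof * c / 2, "m")
```

Note that the 10 MHz hop spacing keeps the inter-hop phase step below π at this range, so the unwrapping is unambiguous; in practice the spacing bounds the unambiguous range.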

Keywords: frequency hopping, phase error elimination, carrier phase, ranging

Procedia PDF Downloads 122
1734 Method of False Alarm Rate Control for Cyclic Redundancy Check-Aided List Decoding of Polar Codes

Authors: Dmitry Dikarev, Ajit Nimbalker, Alexei Davydov

Abstract:

Polar coding is a novel example of error correcting codes, which can achieve the Shannon limit at block length N→∞ with log-linear complexity. Active research is being carried out to adapt this theoretical concept for practical applications such as 5th generation wireless communication systems. A cyclic redundancy check (CRC) error detection code is broadly used in conjunction with the successive cancellation list (SCL) decoding algorithm to improve finite-length polar code performance. However, there are two issues: the CRC bits increase the code block payload overhead, and the CRC error-detection capability decreases. This paper proposes a method to control the CRC overhead and the false alarm rate of polar decoding. As shown in the computer simulation results, the proposed method provides the ability to use any set of CRC polynomials with any list size while maintaining the desired level of false alarm rate. This level of flexibility allows polar codes to be used in the 5G New Radio standard.
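
The trade-off the paper targets can be seen from a back-of-the-envelope model (our approximation, not a formula from the paper): if decoding fails, each of the L list candidates passes a c-bit CRC roughly independently with probability 2⁻ᶜ, so the false alarm rate grows with list size:

```python
# FAR ~= 1 - (1 - 2**-c)**L for a c-bit CRC and SCL list size L.
for c in (8, 11, 16, 24):
    for L in (1, 8, 32):
        far = 1 - (1 - 2.0 ** -c) ** L
        print(f"CRC-{c:2d}, L={L:2d}: FAR ~ {far:.2e}")
```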

Keywords: 5G New Radio, channel coding, cyclic redundancy check, list decoding, polar codes

Procedia PDF Downloads 237
1733 Applying Genetic Algorithm in Exchange Rate Models Determination

Authors: Mehdi Rostamzadeh

Abstract:

Genetic algorithms (GAs) are an adaptive heuristic search technique premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the exchange rate market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), the equilibrium exchange rate, and the portfolio balance model as fundamental models, and autoregressive (AR), moving average (MA), autoregressive moving average (ARMA), and mean reversion (MR) as technical models for the Iranian Rial against the European Union's Euro, using monthly data from January 1992 to December 2014. We then fed these models into the genetic algorithm to measure the optimal weight of each model. These optimal weights were measured according to four criteria: R-squared (R2), mean square error (MSE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Based on the results obtained, fundamental models appear to explain the behavior of the Iranian Rial/EU Euro exchange rate better than technical models.
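
The four weighting criteria are standard goodness-of-fit measures; in sketch form, for any model's predictions (the arrays are placeholders, not the Rial/Euro series):

```python
import numpy as np

def fit_criteria(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MSE": np.mean(resid ** 2),
        "MAPE": 100.0 * np.mean(np.abs(resid / y_true)),  # needs y_true != 0
        "RMSE": np.sqrt(np.mean(resid ** 2)),
    }

y = np.array([100.0, 105.0, 103.0, 110.0])     # placeholder exchange rates
yhat = np.array([99.0, 104.0, 105.0, 108.0])   # placeholder model output
print(fit_criteria(y, yhat))
```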

Keywords: exchange rate, genetic algorithm, fundamental models, technical models

Procedia PDF Downloads 271
1732 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root cause analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify their causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize the interventions recommended by critical event analysis prior to implementation in the clinical environment. Methods: The interventions suggested in phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite the interventions and how the interventions could be improved. Interventions were modified with subsequent simulations until the recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial for identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help change a process (epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes in memory aids). Given that even the most successful interventions needed modification and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 151
1731 A Real Time Ultra-Wideband Location System for Smart Healthcare

Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang

Abstract:

Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his movement, and visualizes his trajectory on screen for doctors or administrators. Doctors can therefore view the position of a patient at any time and find them immediately and exactly when something urgent happens. In our design process, different algorithms were discussed and their errors analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error, which turned out to work well. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 times per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.
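
Once per-anchor ranges are available, the tag position is commonly recovered by nonlinear least squares trilateration; a minimal sketch (the anchor layout, noise level, and solver are our assumptions, since the abstract does not specify them):

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 4.0])
rng = np.random.default_rng(2)
# Simulated UWB ranges with ~5 cm noise, in line with the reported accuracy.
ranges = np.linalg.norm(anchors - p_true, axis=1) + rng.normal(0, 0.05, 4)

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position:", est)
```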

Keywords: intelligent monitoring, ultra-wideband technology, real-time location, IoT devices, smart healthcare

Procedia PDF Downloads 137
1730 Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback

Authors: Muhammad A. Alsubaie, Mubarak K. H. Alhajri, Tarek S. Altowaim

Abstract:

A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time delay model in a positive feedback loop acting on the system output. A direct use of the small gain theorem solves the periodic disturbance problem by 1) isolating the delay model, 2) finding the overall system representation around the delay model, and 3) designing a feedback controller that assures overall system stability and tracking error convergence. This paper addresses uncertainty conditions for the repetitive controller designed in state feedback, in either past error feedforward or current error feedback form, using singular values. The uncertainty investigation is based on the overall system found and its associated stability condition, depending on the scheme used, to set an upper/lower limit on the weighting parameter. This creates a region that should not be exceeded when selecting the weighting parameter, which in turn assures performance improvement against system uncertainty. The repetitive control problem can be described in lifted form, which allows the singular value principle to be used in setting the range for the weighting parameter selection. The simulation results obtained show tracking error convergence against dynamic system perturbation when the weighting parameter is chosen within the range obtained. The simulation results also show the advantage of using the weighting parameter compared to the case where it is omitted.

Keywords: model mismatch, repetitive control, singular values, state feedback

Procedia PDF Downloads 155
1729 Simulation of Growth and Yield of Rice Under Irrigation and Nitrogen Management Using ORYZA2000

Authors: Mojtaba Esmaeilzad Limoudehi

Abstract:

To evaluate the ORYZA2000 model under irrigation and nitrogen fertilization management, a split-plot experiment in a randomized complete block design with three replications on a hybrid (spring) cultivar was conducted at the Rice Research Institute in the 1387-1388 crop year. Irrigation was the main plot at four levels (permanent flooding and irrigation intervals of around 5, 8, and 11 days), and nitrogen fertilizer was the subplot at four levels (0, 90, 120, and 150 kg N/ha). Simulated and measured values of the leaf area index, grain yield, and biological yield were compared using the regression coefficient, the t-test, the root mean square error (RMSE), and the normalized root mean square error (RMSEn). The normalized root mean square error was 10% for grain yield, 9% for biological yield, and 23% for maximum LAI. The simulation results show that the ORYZA2000 model is accurate for grain yield and biological yield but does not simulate the maximum LAI well. The results indicate that the ORYZA2000 model can be used under different nitrogen fertilizer and irrigation management conditions.
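
The two error measures used above are, in sketch form (the observed/simulated arrays are placeholders):

```python
import numpy as np

def rmse(obs: np.ndarray, sim: np.ndarray) -> float:
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def rmse_n(obs: np.ndarray, sim: np.ndarray) -> float:
    """RMSE normalized by the mean of the observations, in percent."""
    return 100.0 * rmse(obs, sim) / float(np.mean(obs))

obs = np.array([6.1, 6.8, 7.2, 6.5])   # placeholder measured yields (t/ha)
sim = np.array([5.8, 7.1, 7.0, 6.9])   # placeholder ORYZA2000 output
print(rmse(obs, sim), rmse_n(obs, sim))
```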

Keywords: evaluation, rice, nitrogen fertilizer, model ORYZA2000

Procedia PDF Downloads 69
1728 Circular Approximation by Trigonometric Bézier Curves

Authors: Maria Hussin, Malik Zawwar Hussain, Mubashrah Saddiqa

Abstract:

We present a trigonometric scheme to approximate a circular arc given its two end points and two end (unit) tangents. A rational cubic trigonometric Bézier curve is constructed whose end control points are defined by the end points of the circular arc. Weight functions and the remaining control points of the cubic trigonometric Bézier curve are estimated by a variational approach to reproduce the circular arc. The radius error is calculated and found to be less than that of existing techniques.

Keywords: control points, rational trigonometric Bézier curves, radius error, shape measure, weight functions

Procedia PDF Downloads 474
1727 Integrating Blogging into Peer Assessment on College Students’ English Writing

Authors: Su-Lien Liao

Abstract:

Most college students in Taiwan do not have sufficient English proficiency to express themselves in written English. Teachers spend a lot of time correcting students' English writing, but the results are not satisfactory. This study aims to use blogs as a teaching and learning tool for written English. Before applying peer assessment, students should be trained to be good reviewers. The teacher starts the course by posting an error analysis of the students' first English composition on the blog as a comment model for students. The students then go through the process of drafting, composing, peer response, and final revision on the blog. Evaluation questionnaires and interviews will be conducted at the end of the course to assess its impact and the students' perception of it.

Keywords: blog, peer assessment, English writing, error analysis

Procedia PDF Downloads 421
1726 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas

Abstract:

EEG correlates of mathematical and trait anxiety were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic, and algebraic conditions. Event-related spectral perturbations (ERSP) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. The correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical condition than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispheric differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were directed oppositely: trait anxiety was associated with an increase in the amplitude of desynchronization, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant in the lexical condition but strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indexes of task emotionality, whereas the reaction in the beta2 band is related to the strain on intellectual resources.

Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety

Procedia PDF Downloads 559
1725 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner

Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura

Abstract:

Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.

Keywords: EEG scanner, eye-detector, mammography, observers

Procedia PDF Downloads 215
1724 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve the smoothness of the data; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine, giving the grey support vector machine model (SGM-SVM). Before establishing the model, the data are preprocessed with trigonometric functions and the accumulated generating operation to enhance their smoothness and weaken their randomness. A support vector machine (SVM) prediction model is then built on the preprocessed data, with model parameters selected by a genetic algorithm to obtain the global optimum. Finally, the forecast data are recovered through the inverse ("regressive generate") operation. To show that the SGM-SVM model is superior to other models, battery life data from CALCE are used: the presented model is applied to predict battery life, and the prediction is compared with those of the grey model and the support vector machine. For a more intuitive comparison of the three models, this paper reports their root mean square errors. The results show that the grey support vector machine (SGM-SVM) predicts life best, with a root mean square error of only 3.18%.
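
A loose sketch of the pipeline's shape: smooth the series with a trigonometric transform, apply the accumulated generating operation (AGO), fit an SVR, then invert both steps to recover a forecast. The data, the sin/arcsin transform pair, and the fixed SVR parameters are illustrative assumptions (the paper tunes the parameters with a genetic algorithm, and its exact transform may differ):

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder degradation series (e.g., normalized battery capacity).
x0 = np.array([1.00, 0.98, 0.97, 0.95, 0.94, 0.92, 0.91, 0.89, 0.88, 0.86])
t = np.arange(len(x0), dtype=float)

z = np.sin(x0)          # illustrative trigonometric smoothing transform
z1 = np.cumsum(z)       # AGO: accumulation weakens the data's randomness

# SVR on the smoothed, accumulated sequence (parameters fixed for brevity).
svr = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(t[:, None], z1)

# One-step-ahead forecast, then invert the AGO ("regressive generate")
# and the trigonometric transform.
z1_next = svr.predict(np.array([[len(x0)]], dtype=float))[0]
z_next = z1_next - z1[-1]
x_next = np.arcsin(np.clip(z_next, -1.0, 1.0))
print("forecast next value:", x_next)
```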

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 460
1723 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm, which uses a minimum-distance calculation for codebook generation, performs a time-consuming calculation on each pixel value and thus leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement. In the proposed scheme, the vector differences between vectors are taken as the new generation vectors, i.e., the new codebook vectors. The codebook is updated by comparing each new generation vector against a threshold value of minimum error with respect to the parent vector; the minimum error decides the fitness of each newly generated vector. The codebook is thus generated in an adaptive manner, and the fitness value is determined for suppression of the degraded portion of the image, thereby leading to enhancement of the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing images, with the peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 265
1722 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach

Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo

Abstract:

The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined and the necessary conditions are derived. In addition, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed to update the optimal control policy. Within a given tolerance, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The result shows the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
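
The stochastic approximation step reduces to a Robbins-Monro update: move the parameter against a noisy gradient of the squared output error with a diminishing step size. A scalar toy sketch of that update rule (not the full LQG recursion from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
theta, target = 0.0, 3.0                # illustrative parameter and optimum
for k in range(1, 2001):
    y = target + rng.normal(0, 0.5)     # noisy observation of the output
    grad = 2.0 * (theta - y)            # gradient of the squared output error
    theta -= (1.0 / k) * grad           # Robbins-Monro step size a_k = 1/k
print(theta)                            # converges near 3.0
```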

Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation

Procedia PDF Downloads 185
1721 Application of Neural Network on the Loading of Copper onto Clinoptilolite

Authors: John Kabuba

Abstract:

The study investigated the implementation of neural network (NN) techniques for prediction of the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the neural network and for optimizing the effective input parameters (pH, temperature, and initial concentration). A feedforward multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE), correlation coefficient (R), and mean squared relative error (MSRE) of 0.102, 0.998, and 0.004, respectively. The results showed that NN modeling techniques can effectively predict and simulate highly complex, non-linear processes such as ion exchange.
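
A minimal sketch of such an MLP fit with scikit-learn; the input ranges and the synthetic loading response are invented stand-ins for the experimental data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
# Placeholder inputs: pH, temperature [C], initial Cu concentration [mg/L].
X = np.column_stack([rng.uniform(2, 8, 200),
                     rng.uniform(20, 60, 200),
                     rng.uniform(20, 100, 200)])
# Synthetic nonlinear loading response standing in for measured data.
y = 0.9 / (1 + np.exp(-(X[:, 0] - 5))) * (X[:, 1] / 60) * np.log1p(X[:, 2])

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(X, y)
pred = mlp.predict(X)
print("MSE:", mean_squared_error(y, pred),
      "R:", np.corrcoef(y, pred)[0, 1])
```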

Keywords: clinoptilolite, loading, modeling, neural network

Procedia PDF Downloads 414
1720 Shoulder Range of Motion Measurements using Computer Vision Compared to Hand-Held Goniometric Measurements

Authors: Lakshmi Sujeesh, Aaron Ramzeen, Ricky Ziming Guo, Abhishek Agrawal

Abstract:

Introduction: Range of motion (ROM) is often measured by physiotherapists using a hand-held goniometer as part of mobility assessment for diagnosis. Due to the nature of the hand-held goniometer measurement procedure, readings tend to vary depending on the physical therapist taking the measurements (Riddle et al.). This study aims to validate computer vision software readings against goniometric measurements so that quick and consistent ROM measurements can be taken by clinicians. The use of this computer vision software is hoped to improve the future of the musculoskeletal space with more efficient diagnosis, recording a patient's ROM with minimal human error across different physical therapists. Methods: Using hand-held long-arm goniometer measurements as the gold standard, healthy study participants (n = 20) performed 4 exercises: front elevation, abduction, internal rotation, and external rotation, using both arms. Active ROM was assessed using the computer vision software at different angles set by the goniometer for each exercise. The intraclass correlation coefficient (ICC) using a 2-way random-effects model, box-whisker plots, and the root mean square error (RMSE) were used to find the degree of correlation and the absolute error between set and recorded angles across repeated trials by the same rater. Results: ICC(2,1) values for all 4 exercises are above 0.9, indicating excellent reliability. The lowest overall RMSE was for external rotation (5.67°) and the highest for front elevation (8.00°). Box-whisker plots showed a potential zero error in the measurements made by the computer vision software for abduction, where the absolute errors for measurements taken at 0° are shifted away from the ideal 0 line, with the lowest recorded error being 8°. Conclusion: Our results indicate that the computer vision software is valid and reliable for use in clinical settings by physiotherapists measuring shoulder ROM. Overall, computer vision helps improve access to quality care for individual patients, with the ability to assess ROM for their condition at home throughout a full cycle of musculoskeletal care (American Academy of Orthopaedic Surgeons) without the need for a trained therapist.
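
Computer vision ROM tools generally reduce to measuring the angle at a joint from detected landmarks; a minimal sketch of that geometry (the landmark coordinates would come from a pose estimation model, which the study does not name):

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at vertex b (degrees) formed by points a-b-c,
    e.g. hip-shoulder-elbow for shoulder abduction."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

hip = np.array([0.52, 0.80])        # illustrative normalized image coords
shoulder = np.array([0.50, 0.40])
elbow = np.array([0.75, 0.35])
print(f"abduction angle: {joint_angle(hip, shoulder, elbow):.1f} deg")
```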

Keywords: physiotherapy, frozen shoulder, joint range of motion, computer vision

Procedia PDF Downloads 105
1719 On the Cluster of the Families of Hybrid Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

Over the years, kernel density estimation has been extensively studied within the context of nonparametric density estimation. The fundamental components of kernel density estimation are the kernel function and the bandwidth. While the mathematical exploration of the kernel component has been relatively limited, its selection and development remain crucial. The mean integrated squared error (MISE), serving as a measure of discrepancy, provides a robust framework for assessing the effectiveness of any kernel function: a kernel function with a lower MISE is generally considered to perform better than one with a higher MISE. Hence, the primary aim of this article is to create kernels that exhibit significantly reduced MISE when compared to existing classical kernels. Consequently, this article introduces a cluster of hybrid polynomial kernel families. The construction of these proposed kernel functions is carried out heuristically by combining two kernels from the classical polynomial kernel family using probability axioms. We delve into the analysis of error propagation within these kernels. To assess their performance, simulation experiments and real-life datasets are employed. The obtained results demonstrate that the proposed hybrid kernels surpass their classical kernel counterparts in terms of performance.
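
One reading of the construction: combine two classical polynomial kernels with convex mixture weights, which by the probability axioms yields another valid kernel. A sketch of such a hybrid inside a kernel density estimator (the paper's exact combination rule may differ):

```python
import numpy as np

def epanechnikov(u):   # classical polynomial kernel on [-1, 1]
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def biweight(u):       # classical polynomial kernel on [-1, 1]
    return np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1 - u ** 2) ** 2, 0.0)

def hybrid(u, lam=0.5):
    # A convex combination of two densities is itself a density.
    return lam * epanechnikov(u) + (1 - lam) * biweight(u)

def kde(grid, data, h, kernel):
    u = (grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(3)
data = rng.normal(size=300)                  # illustrative sample
grid = np.linspace(-4, 4, 201)
f_hat = kde(grid, data, h=0.4, kernel=hybrid)
print(f_hat.sum() * (grid[1] - grid[0]))     # integrates to ~1
```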

Keywords: classical polynomial kernels, cluster of families, global error, hybrid kernels, kernel density estimation, Monte Carlo simulation

Procedia PDF Downloads 92
1718 Application of Adaptive Neuro Fuzzy Inference Systems Technique for Modeling of Postweld Heat Treatment Process of Pressure Vessel Steel ASTM A516 Grade 70

Authors: Omar Al Denali, Abdelaziz Badi

Abstract:

ASTM A516 Grade 70 steel is a suitable material for the fabrication of boiler pressure vessels working in moderate and lower temperature service, and it has good weldability and excellent notch toughness. Post-weld heat treatment (PWHT), or stress-relieving heat treatment, has significant effects in avoiding the martensite transformation and the resulting high hardness, which can lead to cracking in the heat-affected zone (HAZ). An adaptive neuro-fuzzy inference system (ANFIS) was implemented to predict the material tensile strength in post-weld heat treatment (PWHT) experiments. The ANFIS models produced excellent predictions, and the comparison was carried out based on the mean absolute percentage error between the predicted and experimental values. The ANFIS model gave a mean absolute percentage error of 0.556%, which confirms the high accuracy of the model.

Keywords: prediction, post-weld heat treatment, adaptive neuro-fuzzy inference system, mean absolute percentage error

Procedia PDF Downloads 152
1717 Estimating X-Ray Spectra for Digital Mammography by Using the Expectation Maximization Algorithm: A Monte Carlo Simulation Study

Authors: Chieh-Chun Chang, Cheng-Ting Shih, Yan-Lin Liu, Shu-Jun Chang, Jay Wu

Abstract:

With the widespread use of digital mammography (DM), radiation dose evaluation of breasts has become important. X-ray spectra are one of the key factors that influence the absorbed dose of glandular tissue. In this study, we estimated the X-ray spectrum of DM using the expectation maximization (EM) algorithm with the transmission measurement data. The interpolating polynomial model proposed by Boone was applied to generate the initial guess of the DM spectrum with the target/filter combination of Mo/Mo and the tube voltage of 26 kVp. The Monte Carlo N-particle code (MCNP5) was used to tally the transmission data through aluminum sheets of 0.2 to 3 mm. The X-ray spectrum was reconstructed by using the EM algorithm iteratively. The influence of the initial guess for EM reconstruction was evaluated. The percentage error of the average energy between the reference spectrum inputted for Monte Carlo simulation and the spectrum estimated by the EM algorithm was -0.14%. The normalized root mean square error (NRMSE) and the normalized root max square error (NRMaSE) between both spectra were 0.6% and 2.3%, respectively. We conclude that the EM algorithm with transmission measurement data is a convenient and useful tool for estimating x-ray spectra for DM in clinical practice.
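
The EM iteration for this kind of problem has the familiar multiplicative form: each spectral bin is rescaled by the back-projected ratio of measured to predicted transmission. A toy sketch with an invented system matrix (the paper's initial guess from Boone's model and the MCNP5-tallied data are not reproduced; the problem is ill-posed, so the solution depends on the initial guess):

```python
import numpy as np

# Forward model: transmission through thickness d for a spectrum s over
# energy bins with attenuation mu, meas_j = sum_i exp(-mu_i d_j) s_i.
mu = np.linspace(0.3, 1.5, 30)        # illustrative attenuation per bin
d = np.linspace(0.2, 3.0, 15)         # aluminum thicknesses (arbitrary units)
A = np.exp(-np.outer(d, mu))

s_true = np.exp(-0.5 * ((np.arange(30) - 12) / 5.0) ** 2)  # toy spectrum
meas = A @ s_true                     # noiseless transmission data

s = np.ones_like(s_true)              # flat initial guess
for _ in range(500):
    pred = A @ s
    s *= (A.T @ (meas / pred)) / A.sum(axis=0)   # multiplicative EM update
print("residual vs. toy truth:", np.max(np.abs(s - s_true)))
```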

Keywords: digital mammography, expectation maximization algorithm, X-Ray spectrum, X-Ray

Procedia PDF Downloads 729
1716 Definite Article Errors and Effect of L1 Transfer

Authors: Bimrisha Mali

Abstract:

The present study investigates the types of errors that English as a second language (ESL) learners produce when using the definite article 'the'. The participants were given a questionnaire based on a learner ability test, consisting of three cloze tests and two free composition tests. Each participant's response was received in the form of written data. A total of 78 participants from three government schools took part in the study. The participants are high-school students from rural Assam, a north-eastern state of India, aged between 14 and 15. The medium of instruction and the communication among the students is the local language, Assamese. Pit Corder's steps for conducting error analysis were followed in the analysis procedure. Four types of errors were found: (1) deletion of the definite article, (2) use of the definite article with modifiers such as adjectives, (3) incorrect use of the definite article with singular proper nouns, and (4) substitution of the definite article by the indefinite article 'a'. Classifiers in Assamese that express definiteness are used with nouns, adjectives, and numerals. It was found that native language (L1) transfer plays a pivotal role in the learners' errors. The analysis reveals the learners' inability to acquire the semantic connotation of definiteness in English due to native language (L1) interference.

Keywords: definite article error, L1 transfer, error analysis, ESL

Procedia PDF Downloads 121
1715 Error Analysis of English Inflection among Thai University Students

Authors: Suwaree Yordchim, Toby J. Gibbs

Abstract:

The linguistic competence of Thai university students majoring in Business English was examined with respect to knowledge of English inflection and various other linguistic elements. Error analysis was applied to the test results. Levels of errors in inflection, tense, and linguistic elements were shown to be significantly high for all noun, verb, and adjective inflections. The findings suggest that students do not gain linguistic competence in their use of English inflection because of interlanguage interference. Implications for curriculum reform and the treatment of errors in the classroom are discussed.

Keywords: interlanguage, error analysis, inflection, second language acquisition, Thai students

Procedia PDF Downloads 465
1714 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target motion analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now there has been no truly effective method that can always track an unknown target and extract its trace. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for a multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter (MGEKF) algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation was carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and the quantized bearing measurements of a multi-beam sonar. Tests were run on many given scenarios. In all tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance, and the range estimation confidence level reached 90% when the range error was less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, the convergence time of the algorithm still needs improvement.
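
A stripped-down sketch of the filter structure underneath: an EKF with a constant-velocity target state and a single bearing measurement. The paper's modified gain EKF differs in how the gain treats the bearing nonlinearity; this plain EKF, with invented noise levels and initial state, only illustrates the predict/update cycle:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # constant-velocity transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
Q = 1e-4 * np.eye(4)                   # process noise (illustrative)
R = np.deg2rad(1.0) ** 2               # bearing noise variance (illustrative)

x = np.array([5000.0, 5000.0, -5.0, -3.0])   # guess [px, py, vx, vy]
P = np.diag([1e6, 1e6, 25.0, 25.0])

def step(x, P, z, obs_pos):
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    dx, dy = x[0] - obs_pos[0], x[1] - obs_pos[1]
    r2 = dx * dx + dy * dy
    h = np.arctan2(dy, dx)                        # predicted bearing
    H = np.array([-dy / r2, dx / r2, 0.0, 0.0])   # Jacobian of the bearing
    innov = np.angle(np.exp(1j * (z - h)))        # wrap to [-pi, pi]
    S = H @ P @ H + R
    K = (P @ H) / S
    x = x + K * innov                  # update
    P = (np.eye(4) - np.outer(K, H)) @ P
    return x, P

# One illustrative update with a synthetic bearing measurement.
x, P = step(x, P, z=np.deg2rad(44.0), obs_pos=np.array([0.0, 0.0]))
print(x)
```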

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 402