Search results for: Mean Absolute Error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1356

1296 Evaluation of Stormwater Quantity and Quality Control through Constructed Mini Wet Pond: A Case Study

Authors: Y. S. Liew, K. A. Puteh Ariffin, M. A. Mohd Nor

Abstract:

One of the Best Management Practices (BMPs) promoted in the Urban Stormwater Management Manual for Malaysia (MSMA), published by the Department of Irrigation and Drainage (DID) in 2001, is the construction of wet ponds in new development projects for water quantity and quality control. This paper therefore presents a case study evaluating a constructed mini wet pond located at Sekolah Rendah Kebangsaan Seksyen 2, Puchong, Selangor, Malaysia in both the stormwater quantity and quality aspects, particularly its ability to reduce the peak discharge by temporarily storing and gradually releasing stormwater runoff through an outlet structure or other release mechanism. For the water quantity aspect, InfoWorks Collection System (CS) was used as the numerical modeling approach. Statistical tests comparing the correlation coefficient (R²), mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) were used to evaluate the model in simulating the peak discharge changes. Results demonstrated a reduction in peak flow of 11% to 15% and a delay in time to peak of 5 minutes through the wet pond. For the water quality aspect, a survey of biological indicators of water quality showed that the pond is within the range of rather clean to clean water, with a score of 5.3. This study indicates that a constructed wet pond with wetland facilities can help manage water quantity and stormwater-generated pollution at the source, towards achieving ecologically sustainable development in urban areas.
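
As a minimal illustration of the four agreement statistics named above, the sketch below computes ME, MAE, RMSE, and R² for paired observed and simulated peak discharges; the sample values and variable names are hypothetical, not data from the study.

```python
# Minimal sketch of the agreement statistics named above (ME, MAE, RMSE, R^2),
# computed for paired observed/simulated peak discharges. The sample values
# are illustrative only, not data from the study.
import numpy as np

observed = np.array([2.10, 1.85, 2.40, 1.95, 2.60])   # hypothetical peak flows (m^3/s)
simulated = np.array([2.00, 1.90, 2.30, 2.05, 2.55])  # hypothetical model output

residual = simulated - observed
me = residual.mean()                          # mean error (bias)
mae = np.abs(residual).mean()                 # mean absolute error
rmse = np.sqrt((residual ** 2).mean())        # root mean square error
# R^2 taken here as the squared Pearson correlation of observed vs simulated
r2 = np.corrcoef(observed, simulated)[0, 1] ** 2

print(f"ME={me:.3f}  MAE={mae:.3f}  RMSE={rmse:.3f}  R^2={r2:.3f}")
```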

Keywords: Wet pond, Retention Facilities, Best Management Practices (BMP), Urban Stormwater Management Manual for Malaysia (MSMA).

1295 A Method for Improving the Embedded Runge-Kutta-Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The solution and error are obtained by solving an initial value problem whose solution carries the information about the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously, and its computational cost compares favorably with the original method. To assess its effectiveness, the EULR problem is solved numerically.
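
For reference, the sketch below implements one step of the classical embedded Runge-Kutta-Fehlberg 4(5) pair that the paper sets out to improve: the same six stages yield a fourth- and a fifth-order estimate, whose difference serves as the local error estimate driving step-size control. This is the baseline scheme, not the proposed method; the toy problem and tolerance are illustrative.

```python
# One step of the classical embedded RKF4(5) pair (the baseline method, not
# the paper's improvement): six stages give 4th- and 5th-order solutions whose
# difference is the local error estimate used for step-size control.
import numpy as np

def rkf45_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/4, y + h*k1/4)
    k3 = f(t + 3*h/8, y + h*(3*k1 + 9*k2)/32)
    k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(t + h, y + h*(439/216*k1 - 8*k2 + 3680/513*k3 - 845/4104*k4))
    k6 = f(t + h/2, y + h*(-8/27*k1 + 2*k2 - 3544/2565*k3
                           + 1859/4104*k4 - 11/40*k5))
    y4 = y + h*(25/216*k1 + 1408/2565*k3 + 2197/4104*k4 - k5/5)
    y5 = y + h*(16/135*k1 + 6656/12825*k3 + 28561/56430*k4 - 9/50*k5 + 2/55*k6)
    err = np.max(np.abs(y5 - y4))            # local error estimate
    return y5, err

# usage: adapt h so the local error estimate stays near a tolerance
f = lambda t, y: -y                          # toy IVP y' = -y, y(0) = 1
t, y, h, tol = 0.0, np.array([1.0]), 0.1, 1e-8
while t < 1.0:
    y_new, err = rkf45_step(f, t, y, h)
    if err <= tol:                           # accept the step
        t, y = t + h, y_new
    h = min(0.9*h*(tol/max(err, 1e-16))**0.2, 1.0 - t + 1e-12)  # resize step
print(y, np.exp(-1.0))                       # numerical vs exact solution
```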

Keywords: Embedded Runge-Kutta-Fehlberg method, Initial value problem.

1294 Angle Analyzer of an Encoder Using LabVIEW

Authors: Hyun-Min Kim, Yun-Seok Lim, Hyeok-Jin Yun, Jang-Mok Kim, Hee-je Kim

Abstract:

As industry demands ever higher speed and resolution from developments in robotics and precision control systems, the concept of feedback control is becoming increasingly important. Across a range of industrial applications, this concept is largely responsible for the high reliability of a device. We describe an efficient method for analyzing rotary encoders, both incremental and absolute types, using the LabVIEW program.
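
The analyzer itself is built in LabVIEW; as a language-neutral illustration of the underlying count-to-angle arithmetic for an incremental quadrature encoder, consider the following Python sketch (the pulses-per-revolution figure is an assumption, not a value from the paper).

```python
# Illustrative sketch (not the LabVIEW code) of the basic count-to-angle
# arithmetic for an incremental quadrature encoder. The pulses-per-revolution
# value is hypothetical.
PPR = 1024                      # encoder pulses per revolution (assumed)
QUAD = 4                        # x4 decoding: both edges of both channels

def counts_to_degrees(count: int) -> float:
    """Convert an accumulated quadrature count to a shaft angle in degrees."""
    return (count % (PPR * QUAD)) * 360.0 / (PPR * QUAD)

print(counts_to_degrees(1024))  # quarter revolution -> 90.0 degrees
```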

Keywords: LabVIEW, PFI Function, Angle analyzer, Incremental encoder, Absolute encoder

1293 DNA Computing for an Absolute 1-Center Problem: An Evolutionary Approach

Authors: Zuwairie Ibrahim, Yusei Tsuboi, Osamu Ono, Marzuki Khalid

Abstract:

Deoxyribonucleic acid, or DNA, computing has emerged as an interdisciplinary field that draws together chemistry, molecular biology, computer science, and mathematics. In this paper, the possibility of DNA-based computing to solve an absolute 1-center problem by molecular manipulations is presented. This is truly the first attempt to solve such a problem by a DNA-based computing approach. Since part of the procedure involves shortest-path computation, prior work on DNA computing for the shortest-path Traveling Salesman Problem (TSP) is reviewed. These approaches are studied, and only the appropriate one is adapted in designing the computation procedures. The DNA-based computation is designed in such a way that every path is encoded by oligonucleotides and the path's length is directly proportional to the length of the oligonucleotides. Using these properties, gel electrophoresis is performed in order to separate the respective DNA molecules according to their length. One expectation arising from this paper is that it is possible to verify an instance of the absolute 1-center problem using DNA computing in laboratory experiments.
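
For orientation, here is a conventional (silicon, not DNA) sketch of the underlying objective on a toy weighted graph, restricted to vertex locations for brevity; the absolute 1-center additionally allows the center to lie in the interior of edges. The graph and its weights are hypothetical.

```python
# Conventional reference sketch of the 1-center objective: pick the location
# minimizing the maximum shortest-path distance to all vertices. Restricted
# to vertices here; the absolute 1-center also allows points inside edges.
import itertools
import math

n = 4
w = [[math.inf] * n for _ in range(n)]
for i in range(n):
    w[i][i] = 0.0
for i, j, d in [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 1.0), (0, 3, 5.0)]:
    w[i][j] = w[j][i] = d          # hypothetical edge weights

# Floyd-Warshall all-pairs shortest paths (k must vary in the outer loop)
for k, i, j in itertools.product(range(n), repeat=3):
    w[i][j] = min(w[i][j], w[i][k] + w[k][j])

center = min(range(n), key=lambda v: max(w[v]))   # minimize eccentricity
print(center, max(w[center]))                      # vertex 1 with radius 4.0
```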

Keywords: DNA computing, operations research, 1-center problem.

1292 Robust ANOVA: An Illustrative Study in Horticultural Crop Research

Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini

Abstract:

An attempt has been made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results support the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.
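
As one simple robust route (not necessarily the methods used in the study), the sketch below contrasts the classical F-test with a rank-based Kruskal-Wallis test on hypothetical treatment yields containing one injected outlier.

```python
# One simple robust alternative to the classical F-test (not necessarily the
# paper's methods): a rank-based Kruskal-Wallis test, far less sensitive to
# outliers. The yields below are hypothetical, with one injected outlier.
from scipy import stats

t1 = [12.1, 11.8, 12.5, 12.0]
t2 = [13.0, 13.4, 12.9, 13.2]
t3 = [12.2, 12.6, 12.4, 48.0]   # 48.0 is an injected outlier

print(stats.f_oneway(t1, t2, t3))   # classical ANOVA, inflated error variance
print(stats.kruskal(t1, t2, t3))    # rank-based test, robust to the outlier
```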

Keywords: Outliers, robust ANOVA, horticulture, Cook distance, Type I error.

1291 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on the continuous-time symmetric alpha-stable process, frequently used to model signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations via a Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We have studied the convergence rate of the estimator and have shown that it improves when the spectral density is zero at the origin. We thus obtain an estimator of the additive error that can be subtracted to approach the original signal without error.
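
As a loosely related classical illustration (not the Jackson-kernel estimator proposed in the paper): when the signal's spectral density is confined to a known band, the periodogram level outside that band estimates the additive error's variance. All signal parameters below are toy choices.

```python
# Loosely related classical illustration, not the paper's Jackson-kernel
# estimator: if the signal's spectral density is confined to a known band,
# the periodogram level outside that band estimates the additive error power.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
fs, n = 100.0, 4096
t = np.arange(n) / fs
signal = np.sin(2*np.pi*5.0*t)                  # band-limited toy signal
noise = 0.5 * rng.standard_normal(n)            # unknown additive error
f, pxx = periodogram(signal + noise, fs=fs)

floor = pxx[f > 20.0].mean()                    # average PSD outside the band
# one-sided white-noise PSD level is 2*sigma^2/fs, so invert that relation
print("estimated noise variance:", floor * fs / 2, "true:", 0.5**2)
```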

Keywords: Spectral density, stable processes, aliasing, p-adic.

1290 Error Correction Codes in Wireless Sensor Network: An Energy Aware Approach

Authors: Mohammad Rakibul Islam

Abstract:

Link reliability and transmitted power are two important design constraints in wireless network design. Error control coding (ECC) is a classic approach used to increase link reliability and to lower the required transmitted power. It provides coding gain, resulting in transmitter energy savings at the cost of added decoder power consumption. But the choice of ECC is very critical in the case of a wireless sensor network (WSN). Since WSNs are energy constrained in nature, both the BER and the power consumption have to be taken into account. This paper develops a step-by-step approach to finding suitable error control codes for WSNs. Several simulations are performed considering different error control codes, and the results show that RS(31,21) fits both the BER and the power consumption criteria.
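
The sketch below reproduces the kind of BER comparison involved, using the standard analytical approximation for the post-decoding symbol error rate of a t-error-correcting RS(n, k) code over a memoryless channel; it is an approximation under BPSK assumptions, not the paper's simulation setup.

```python
# Standard approximation of the post-decoding symbol error rate of a
# t-error-correcting RS(n, k) code, here RS(31, 21) with t = 5 over GF(2^5).
# BPSK over AWGN is assumed; this illustrates, not reproduces, the paper.
import numpy as np
from scipy.special import comb, erfc

n, k, m = 31, 21, 5
t = (n - k) // 2                                  # = 5 correctable symbols

def post_decoding_ser(p_bit):
    ps = 1.0 - (1.0 - p_bit) ** m                 # symbol error prob, m bits/symbol
    return sum(j * comb(n, j) * ps**j * (1 - ps)**(n - j)
               for j in range(t + 1, n + 1)) / n

ebn0 = 10 ** (5.0 / 10)                           # Eb/N0 = 5 dB
p_uncoded = 0.5 * erfc(np.sqrt(ebn0))             # BPSK bit error probability
p_channel = 0.5 * erfc(np.sqrt(ebn0 * k / n))     # energy reduced by code rate
print("uncoded BER:", p_uncoded, "coded SER approx:", post_decoding_ser(p_channel))
```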

Keywords: Error correcting code, RS, BCH, wireless sensor networks.

1289 Predicting the Impact of the Defect on the Overall Environment in Function Based Systems

Authors: Parvinder S. Sandhu, Urvashi Malhotra, E. Ardil

Abstract:

Much work has been done on predicting the fault proneness of software systems. However, the severity of the faults is more important than the number of faults existing in the developed system, since the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. Neuro-fuzzy based predictor models are applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; it is therefore used to select the metrics most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function based systems.
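
The CFS heuristic just described can be summarized by its merit score, Merit = k·r̄cf / sqrt(k + k(k−1)·r̄ff); the sketch below evaluates it over small candidate subsets of placeholder metrics (random data, not the NASA set).

```python
# Sketch of the CFS "merit" heuristic described above: a subset scores high
# when its features correlate with the class but not with each other.
# Data are random placeholders standing in for software metrics.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))                  # 6 candidate metrics
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100)  # toy severity

def merit(subset):
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in subset])
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for i, j in combinations(subset, 2)]) if k > 1 else 0.0
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

best = max((s for r in (1, 2, 3) for s in combinations(range(6), r)), key=merit)
print("best subset by CFS merit:", best)
```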

Keywords: Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.

1288 The Fundamental Reliance of Iterative Learning Control on Stability Robustness

Authors: Richard W. Longman

Abstract:

Iterative learning control aims to achieve zero tracking error of a specific command. This is accomplished by iteratively adjusting the command given to a feedback control system, based on the tracking error observed in the previous iteration. One would like the iterations to converge to zero tracking error in spite of any error present in the model used to design the learning law. First, this need for stability robustness is discussed, followed by the need for robustness of the property that the transients are well behaved. Methods of producing the needed robustness to parameter variations and to singular perturbations are presented. Then a method involving reverse-time runs is given that lets the real-world behavior produce the ILC gains in such a way as to eliminate the need for a mathematical model. Since the real world is producing the gains, there is no issue of model error. Provided the world behaves linearly, the approach gives an ILC law with both stability robustness and good transient robustness, without the need to generate a model.
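
A minimal sketch of the basic ILC idea stated in the first two sentences: repeat the same finite task and update the command with the previous iteration's tracking error. The plant, gain, and trajectory are toy choices, not the paper's reverse-time scheme.

```python
# Minimal first-order ILC sketch: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1).
# Plant and gain are toy choices illustrating the iteration, not the paper's
# model-free reverse-time method.
import numpy as np

a, b, gamma, N = 0.8, 1.0, 0.5, 50
y_d = np.sin(np.linspace(0, 2*np.pi, N + 1))       # desired trajectory
u = np.zeros(N)                                    # initial command

for iteration in range(30):
    y = np.zeros(N + 1)
    for t in range(N):                             # run the task once
        y[t + 1] = a * y[t] + b * u[t]
    e = y_d - y                                    # tracking error
    u = u + gamma * e[1:]                          # first-order ILC update

print("final max |error|:", np.abs(e[1:]).max())   # shrinks iteration by iteration
```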

Keywords: Iterative learning control, stability robustness, monotonic convergence.

1287 Delaunay Triangulations Efficiency for Conduction-Convection Problems

Authors: Bashar Albaalbaki, Roger E. Khayat

Abstract:

This work is a comparative study of the effect of Delaunay triangulation algorithms on the discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations, using three popular algorithms for node placement strategies, are used. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations than for unstructured ones only at low Peclet numbers, i.e., when conduction is dominant. However, as the Peclet number is increased and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extrema and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extrema cannot be an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies.
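
The triangulation-quality measurement behind the statistical test can be sketched as follows: build a Delaunay triangulation of random nodes and record the minimum and maximum triangle angles (node counts and seeds are illustrative).

```python
# Sketch of the angle-extremum measurement: triangulate random nodes with
# scipy's Delaunay implementation and record the min and max triangle angles.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pts = rng.random((200, 2))
tri = Delaunay(pts)

angles = []
for simplex in tri.simplices:
    p = pts[simplex]
    for i in range(3):                       # angle at each triangle vertex
        u, v = p[(i + 1) % 3] - p[i], p[(i + 2) % 3] - p[i]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

print("min angle: %.2f deg, max angle: %.2f deg" % (min(angles), max(angles)))
```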

Keywords: Conduction-convection problems, Delaunay triangulation, discretization error, finite volume method.

1286 Validation and Selection between Machine Learning Techniques and Traditional Methods to Reduce the Bullwhip Effect: A Data Mining Approach

Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian

Abstract:

The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for entry into the forecasting models. In the second, modeling, step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of clustering analysis is measured.
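
The MAPE index used for the comparison is simply the mean of |actual − forecast| / actual, in percent. The sketch below computes it for two of the classical baselines named above on a toy demand series (the data and smoothing constant are hypothetical).

```python
# Sketch of the MAPE index, comparing a 3-period moving average against
# simple exponential smoothing on a toy demand series (placeholder data).
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

demand = np.array([102, 98, 110, 105, 115, 120, 118, 125], dtype=float)

ma = np.array([demand[t-3:t].mean() for t in range(3, len(demand))])
alpha, s, ses = 0.4, demand[0], []
for x in demand[:-1]:                       # one-step-ahead smoothed forecasts
    s = alpha * x + (1 - alpha) * s
    ses.append(s)

print("MAPE moving average:", mape(demand[3:], ma))
print("MAPE exp. smoothing:", mape(demand[1:], ses))
```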

Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).

1285 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models

Authors: Ramin Vafadary, Maryam Khanbaghi

Abstract:

Forecasting electricity load is important for various purposes such as planning, operation, and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, LSTM networks, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria, namely the Mean Absolute Error and the Root Mean Square Error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time series forecasts we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
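
The bagging step can be sketched generically: refit the same forecaster on bootstrap resamples and average the resulting predictions. Below, a simple AR(1)-style least-squares fit stands in for the SARIMA/LSTM models of the paper; the series and replicate count are illustrative.

```python
# Generic sketch of bagging forecasts: refit a simple one-step forecaster on
# bootstrap resamples and average its predictions. An AR(1) least-squares fit
# stands in for the paper's SARIMA/LSTM models.
import numpy as np

rng = np.random.default_rng(3)
y = 10 + np.sin(np.arange(200) * 2 * np.pi / 24) + 0.3 * rng.standard_normal(200)

x_t, x_next = y[:-1], y[1:]
forecasts = []
for _ in range(50):                                  # 50 bootstrap replicates
    idx = rng.integers(0, len(x_t), len(x_t))        # resample (y_t, y_{t+1}) pairs
    a, b = np.polyfit(x_t[idx], x_next[idx], 1)      # AR(1)-style fit
    forecasts.append(a * y[-1] + b)                  # one-step-ahead forecast

print("bagged forecast:", np.mean(forecasts), "+/-", np.std(forecasts))
```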

Keywords: Bagging, Fbprophet, Holt-Winters, LSTM, Load Forecast, SARIMA, tensorflow probability, time series.

1284 Performance of Total Vector Error of an Estimated Phasor within Local Area Networks

Authors: Ahmed Abdolkhalig, Rastko Zivanovic

Abstract:

This paper evaluates the Total Vector Error of an estimated phasor, as defined in the IEEE C37.118 standard, under different medium access schemes in Local Area Networks (LANs). Three different LAN models (CSMA/CD, CSMA/AMP, and Switched Ethernet) are evaluated. The Total Vector Error of the estimated phasor has been evaluated for the effect of the number of nodes under the standardized network bandwidth values defined in the IEC 61850-9-2 communication standard (i.e., 0.1, 1, and 10 Gbps).
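
The IEEE C37.118 Total Vector Error is the magnitude of the complex estimation error normalized by the true phasor magnitude; the sketch below evaluates it for illustrative sample values.

```python
# The IEEE C37.118 Total Vector Error of an estimated phasor: the magnitude of
# the complex estimation error normalized by the true phasor magnitude (the
# steady-state compliance limit is typically 1%). Sample values are made up.
import numpy as np

def tve(est: complex, true: complex) -> float:
    """Total Vector Error as a fraction (multiply by 100 for percent)."""
    return abs(est - true) / abs(true)

true_phasor = 230.0 * np.exp(1j * np.deg2rad(30.0))   # 230 V at 30 degrees
est_phasor = 230.5 * np.exp(1j * np.deg2rad(30.4))    # delayed/noisy estimate
print("TVE = %.3f %%" % (100 * tve(est_phasor, true_phasor)))
```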

Keywords: Phasor, Local Area Network, Total Vector Error, IEEE C37.118, IEC 61850.

1283 Laplace Decomposition Approximation Solution for a System of Multi-Pantograph Equations

Authors: M. A. Koroma, C. Zhan, A. F. Kamara, A. B. Sesay

Abstract:

In this work, we adopt a combination of the Laplace transform and the decomposition method to find numerical solutions of a system of multi-pantograph equations. The procedure leads to rapid convergence of the series to the exact solution after computing a few terms. The effectiveness of the method is demonstrated in some examples by obtaining the exact solution, and in others by computing the absolute error, which decreases as the number of terms of the series increases.
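
As a generic illustration of the error behaviour described (not the Laplace decomposition itself), the sketch below shows the absolute error of a truncated series approximation falling rapidly as terms are added, using the Maclaurin expansion of a known exact solution.

```python
# Generic illustration of series truncation error, not the paper's method:
# the absolute error of partial sums of the Maclaurin series for exp(t),
# the exact solution of the toy problem y' = y, y(0) = 1, at t = 1.
import math

t, exact = 1.0, math.e
partial = 0.0
for n in range(8):
    partial += t**n / math.factorial(n)
    print(f"{n + 1:2d} terms: absolute error = {abs(exact - partial):.2e}")
```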

Keywords: Laplace decomposition, pantograph equations, exact solution, numerical solution, approximate solution.

1282 GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal

Authors: Thomas Odaker, Dieter Kranzlmueller, Jens Volkert

Abstract:

We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of the calculation of the vertex errors, ordering of the error values and removal of vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach is used to speed up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification.
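
The per-vertex quadric error metric used for the ordering step can be sketched as follows: accumulate the plane quadrics of a vertex's adjacent faces and evaluate vᵀQv in homogeneous coordinates. The geometry below is a toy example, not the GPU implementation.

```python
# Sketch of the quadric error metric used for vertex ordering: sum the 4x4
# plane quadrics of the faces around a vertex, then evaluate v^T Q v.
# Toy geometry, not the paper's GPU pipeline.
import numpy as np

def plane_quadric(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)                      # unit face normal (a, b, c)
    plane = np.append(n, -np.dot(n, p0))           # plane vector [a, b, c, d]
    return np.outer(plane, plane)                  # fundamental quadric

def vertex_error(v, Q):
    vh = np.append(v, 1.0)                         # homogeneous coordinates
    return vh @ Q @ vh

v = np.array([0.0, 0.0, 0.0])                      # vertex with a toy face fan
faces = [(v, np.array([1.0, 0, 0]), np.array([0, 1.0, 0])),
         (v, np.array([0, 1.0, 0]), np.array([-1.0, 0, 0.2]))]
Q = sum(plane_quadric(*f) for f in faces)
print("error at v:", vertex_error(v, Q),
      "error off-surface:", vertex_error(v + np.array([0, 0, 0.1]), Q))
```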

Keywords: Computer graphics, half edge collapse, mesh simplification, precomputed simplification, topology preserving.

1281 Software Maintenance Severity Prediction for Object Oriented Systems

Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh

Abstract:

Since the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are severely affected compared to other modules, and proper maintenance needs to be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions; they are used when the exact nature of the inputs and outputs is not known, and a key feature is that they learn the relationship between input and output through training. In the present work, various neural network based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed by predicting the level of severity of faults present in NASA's public domain defect dataset. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error, and Accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying the software components into different levels of severity of fault impact. The algorithm can be used to develop models for identifying modules that are heavily affected by faults.
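
A Generalized Regression Neural Network, the best-performing technique reported above, is essentially Nadaraya-Watson kernel regression: predictions are distance-weighted averages of training targets. The sketch below uses random placeholder data, not the NASA dataset.

```python
# Sketch of a Generalized Regression Neural Network (GRNN), i.e.
# Nadaraya-Watson kernel regression: a prediction is the Gaussian-weighted
# average of training targets. Data are random placeholders.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))             # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)           # weighted average of targets

rng = np.random.default_rng(4)
X = rng.random((80, 3))                            # toy software metrics
y = (X @ np.array([2.0, -1.0, 0.5])) + 0.1 * rng.standard_normal(80)
Xq = rng.random((5, 3))
print(grnn_predict(X, y, Xq))
```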

Keywords: Neural Network, Software faults, Software Metric.

1280 Improving Air Temperature Prediction with Artificial Neural Networks

Authors: Brian A. Smith, Ronald W. McClendon, Gerrit Hoogenboom

Abstract:

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.
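
The multi-instantiation protocol in the last two sentences can be sketched generically: train the same architecture several times with different initial weights and report the spread of MAE values. Here sklearn's MLPRegressor stands in for the Ward-style ANN, with synthetic placeholder data and 10 instantiations rather than 30.

```python
# Sketch of training the same architecture under different initial random
# weights and comparing MAE across instantiations. sklearn's MLPRegressor
# stands in for the Ward-style ANN; data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
X = rng.random((400, 6))                                # toy prior-weather inputs
y = X @ rng.random(6) + 0.1 * rng.standard_normal(400)  # toy temperature target
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

maes = []
for seed in range(10):                                  # 10 instantiations
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    net.fit(X_tr, y_tr)
    maes.append(mean_absolute_error(y_te, net.predict(X_te)))

print("MAE mean %.3f, std %.3f across seeds" % (np.mean(maes), np.std(maes)))
```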

Keywords: Decision support systems, frost protection, fruit, time-series prediction, weather modeling

1279 Unequal Error Protection of Facial Features for Personal ID Images Coding

Authors: T. Hirner, J. Polec

Abstract:

This paper presents an approach to unequal error protection (UEP) of facial features in the coding of personal ID images. We consider UEP strategies for the efficient progressive transmission of embedded image codes over noisy channels. The new method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI); here, the ROI corresponds to the facial features within the personal ID image. The ROI technique is important in applications where different parts of the image have different importance: in ROI coding, the chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In our proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has been shown to be more appropriate for low-bit-rate applications, producing better quality output for the ROI of the compressed image. The experimental results verify the effectiveness of the design and compare UEP transmission with the facial-feature ROI against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.
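
The UEP principle itself (more protection for the important bytes) can be shown with a toy experiment, entirely separate from the EZW/ROI codec: protect MIB with a rate-1/3 repetition code and send LIB unprotected over the same binary symmetric channel, so residual errors concentrate in the background data. Channel parameters are hypothetical.

```python
# Toy illustration of the UEP principle (not the EZW/ROI codec): MIB get a
# rate-1/3 repetition code with majority-vote decoding, LIB go unprotected
# over the same binary symmetric channel.
import numpy as np

rng = np.random.default_rng(6)
p = 0.05                                          # channel bit-flip probability
mib = rng.integers(0, 2, 10000)                   # ROI bits (more important)
lib = rng.integers(0, 2, 10000)                   # background bits

rep = np.repeat(mib, 3)                           # rate-1/3 repetition code
rep_rx = rep ^ (rng.random(rep.size) < p).astype(int)
mib_hat = rep_rx.reshape(-1, 3).sum(1) >= 2       # majority-vote decoding
lib_hat = lib ^ (rng.random(lib.size) < p).astype(int)

print("MIB residual BER:", np.mean(mib_hat != mib))   # ~3p^2 = 0.007
print("LIB residual BER:", np.mean(lib_hat != lib))   # ~p   = 0.05
```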

Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)

1278 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare

Abstract:

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
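
The bit-flipping technique mentioned above can be sketched on a tiny parity-check matrix (the (7,4) Hamming code here, not the Margulis LDPC code): repeatedly flip the bit involved in the most unsatisfied parity checks until the syndrome vanishes.

```python
# Sketch of hard-decision bit-flipping decoding on a toy (7,4) Hamming code:
# flip the bit that participates in the most unsatisfied checks, then repeat.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],       # toy parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(x, max_iter=10):
    x = x.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():              # all parity checks satisfied
            return x
        unsat = H.T @ syndrome              # unsatisfied checks per bit
        x[np.argmax(unsat)] ^= 1            # flip the worst offender
    return x

received = np.zeros(7, dtype=int)
received[2] = 1                             # single bit error in a codeword
print(bit_flip_decode(received))            # recovers the all-zero codeword
```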

Keywords: Concatenated coding, low-density parity-check codes, array code, error floors.

1277 APPLE: Providing Absolute and Proportional Throughput Guarantees in Wireless LANs

Authors: Zhijie Ma, Qinglin Zhao, Hongning Dai, Huan Zhang

Abstract:

This paper proposes the APPLE scheme, which aims to provide absolute and proportional throughput guarantees while simultaneously maximizing system throughput for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.
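
A generic sketch of the two guarantee types named above (not the APPLE optimization itself): stations with absolute guarantees receive their required throughput first, and the remaining capacity is split among the other stations in the configured proportions. All station names and numbers are hypothetical.

```python
# Generic illustration of absolute vs proportional throughput guarantees,
# not the paper's 802.11e optimization: grant absolute minima first, then
# divide the remaining capacity by the configured weights.
capacity = 30.0                          # Mbps of usable WLAN throughput (assumed)
absolute = {"sta1": 6.0, "sta2": 4.0}    # absolute guarantees (Mbps)
weights = {"sta3": 2, "sta4": 1}         # proportional guarantees (ratio 2:1)

alloc = dict(absolute)                   # grant the absolute guarantees first
rest = capacity - sum(absolute.values())
total_w = sum(weights.values())
alloc.update({s: rest * w / total_w for s, w in weights.items()})
print(alloc)                             # sta3 gets 13.33, sta4 gets 6.67
```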

Keywords: IEEE 802.11e, throughput guarantee, priority.

1276 Mechanism of Alcohol Related Disruption of the Error Monitoring and Processing System

Authors: M. O. Welcome, Y. E. Razvodovsky, E. V. Pereverzeva, V. A. Pereverzev

Abstract:

The error monitoring and processing system (EMPS) is located in the substantia nigra of the midbrain, the basal ganglia, and the cortex of the forebrain, and plays a leading role in error detection and correction. The main components of the EMPS are the dopaminergic system and the anterior cingulate cortex. Although recent studies show that alcohol disrupts the EMPS, the ways in which alcohol affects this system are poorly understood. Based on current literature data, we suggest here a hypothesis of an alcohol-related, glucose-dependent system of error monitoring and processing, which holds that the disruption of the EMPS is related to the competency of glucose homeostasis regulation, which in turn may determine the dopamine level as a major component of the EMPS. Alcohol may indirectly disrupt the EMPS by affecting the dopamine level through disorders in blood glucose homeostasis regulation.

Keywords: Alcohol related disruption, Error monitoring and processing system, Mechanism.

1275 Speech Data Compression using Vector Quantization

Authors: H. B. Kekre, Tanuja K. Sarode

Abstract:

Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE, and FCG. The results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance considering mean absolute error, AFCSS, and complexity as compared to the others.
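
The LBG codebook training loop named above proceeds by splitting and Lloyd refinement, as in this sketch (frame dimension, codebook size, and training vectors are placeholders for real speech frames).

```python
# Sketch of LBG codebook training: start from the global centroid, split each
# codevector by a small perturbation, then refine with Lloyd iterations
# (nearest-neighbour partition + centroid update). Training data are random
# stand-ins for speech frames.
import numpy as np

rng = np.random.default_rng(7)
frames = rng.standard_normal((1000, 8))            # toy 8-dim speech vectors

def lbg(train, size, eps=1e-3, iters=10):
    codebook = train.mean(0, keepdims=True)        # single global centroid
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                     # Lloyd refinement
            d = ((train[:, None] - codebook[None]) ** 2).sum(-1)
            nearest = d.argmin(1)
            for i in range(len(codebook)):
                members = train[nearest == i]
                if len(members):
                    codebook[i] = members.mean(0)
    return codebook

cb = lbg(frames, 16)
print("codebook shape:", cb.shape)                 # (16, 8)
```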

Keywords: Vector Quantization, Data Compression, Encoding, Speech coding.

1274 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram, and it includes two independent segmentations. The first segments the background region, which usually contains annotations, labels, and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight line methods. Our proposed methods worked well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using the 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE), and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight line methods gave more than 96% of the curve segmentations as adequate or better. In addition, a comparison with similar approaches from the state of the art is given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.
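
The CCL step can be sketched as follows: label a binary mask and keep the largest component, the way label artifacts can be separated from the breast region before further processing. The mask below is synthetic, not a MIAS mammogram.

```python
# Sketch of the connected component labeling (CCL) step: label a binary mask
# and keep only the largest component. Synthetic mask, not mammogram data.
import numpy as np
from scipy import ndimage

mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True                      # small blob (e.g. a label artifact)
mask[5:9, 4:9] = True                      # larger blob (e.g. breast region)

labels, n = ndimage.label(mask)            # 4-connected component labeling
sizes = ndimage.sum(mask, labels, range(1, n + 1))
largest = labels == (np.argmax(sizes) + 1)
print("components:", n, "kept pixels:", largest.sum())
```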

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

1273 Design of a Stable GPC for Nonminimum Phase LTI Systems

Authors: Mahdi Yaghobi, Mohammad Haeri

Abstract:

Current predictive controller methods are utilized for processes in which the rate of output variation is not high. For such processes, stability can therefore be achieved by implementing a constrained predictive controller or applying an infinite prediction horizon. When the rate of output growth is high (e.g., for an unstable nonminimum phase process), stabilization seems to be problematic. To avoid this, it is suggested to change the method in two ways: first, the growth of the prediction error should be decreased at the early stage of the prediction horizon, and second, the rate of the error variation should be penalized. The growth of the error is decreased by adjusting its weighting coefficients in the cost function, and the reduction in the error variation is achieved by adding the first-order derivative of the error to the cost function. By studying different examples, it is shown that using these two remedies together, closed-loop stability of an unstable nonminimum phase process can be achieved.
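
The modified cost can be sketched as stage-dependent error weights plus a penalty on the first difference of the error; the weights, horizon, and signals below are illustrative choices, not values from the paper.

```python
# Sketch of the modified GPC-style cost described above: stage-dependent
# weights emphasizing the early horizon, plus a penalty on the first
# difference of the error. All numbers are illustrative.
import numpy as np

def gpc_cost(e, du, w, lam, mu):
    """J = sum w_j e_j^2 + mu * sum (e_j - e_{j-1})^2 + lam * sum du_j^2."""
    return (w * e**2).sum() + mu * (np.diff(e)**2).sum() + lam * (du**2).sum()

N = 10
e = np.linspace(1.0, 0.2, N)               # predicted error over the horizon
du = 0.1 * np.ones(N)                      # predicted control increments
w = np.linspace(3.0, 1.0, N)               # heavier weights early in the horizon
print("J =", gpc_cost(e, du, w, lam=0.5, mu=2.0))
```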

Keywords: GPC, Stability, Varying Weighting Coefficients.

1272 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of the participatory error correction process and 2) to find out the students' satisfaction with this error correction process. The study is quasi-experimental research with a single group, in which data were collected five times, before and after four experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback in the students' texts together with error treatment activities. The sample comprises 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection are five writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability before and after each of the four experiments, the findings show higher student scores, with a statistically significant difference at 0.00. Moreover, in terms of the effect size of the process, the d values for the means of the students' scores before and after the four experiments equal 0.6801, 0.5093, 0.5071, and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is at a high level (Mean = 4.39, S.D. = 0.76).
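
The effect size statistic reported above can be computed as Cohen's d for the pre/post mean difference over a pooled standard deviation; the scores in this sketch are hypothetical placeholders, not the study's data.

```python
# Sketch of the effect size statistic (Cohen's d with pooled SD) reported
# above. Scores are hypothetical placeholders, not the study's data.
import numpy as np

pre = np.array([12, 14, 11, 15, 13, 12, 14], dtype=float)
post = np.array([15, 16, 14, 18, 15, 14, 17], dtype=float)

pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
d = (post.mean() - pre.mean()) / pooled_sd
print(f"Cohen's d = {d:.4f}")
```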

Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.

1271 Phase Noise Impact on BER in Space Communication

Authors: Ondrej Baran, Miroslav Kasal, Petr Vagner, Tomas Urbanec

Abstract:

This paper deals with modeling and evaluating the influence of multiplicative phase noise on the bit error ratio in a general space communication system. Our research focuses on systems with multi-state phase shift keying modulation techniques, and it turns out that the phase noise significantly affects the bit error rate, especially at higher signal-to-noise ratios. These results come from a system model created in the Matlab environment and are shown in the form of constellation diagrams and bit error rate dependencies. A change of the user data bit rate is also considered and included in the simulation results. The obtained outcomes confirm the theoretical presumptions.
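
A Monte Carlo model of this kind can be sketched as QPSK over AWGN with a multiplicative phase term; the Gaussian phase jitter below is a simplification of a real oscillator phase noise profile, and all parameters are illustrative.

```python
# Sketch of a Monte Carlo PSK model with multiplicative phase noise: QPSK
# over AWGN, with Gaussian phase jitter as a simplified oscillator model.
import numpy as np

rng = np.random.default_rng(8)
n, ebn0_db, phase_rms = 200_000, 8.0, np.deg2rad(8.0)

bits = rng.integers(0, 2, (n, 2))
sym = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)   # Gray QPSK

es_n0 = 2 * 10 ** (ebn0_db / 10)                  # Es/N0 = 2 Eb/N0 for QPSK
noise = (rng.standard_normal(n) + 1j*rng.standard_normal(n)) / np.sqrt(2*es_n0)

for jitter in (0.0, phase_rms):                   # without / with phase noise
    rx = sym * np.exp(1j * jitter * rng.standard_normal(n)) + noise
    errs = ((rx.real > 0).astype(int) != bits[:, 0]).sum() \
         + ((rx.imag > 0).astype(int) != bits[:, 1]).sum()
    print(f"phase jitter {np.rad2deg(jitter):.0f} deg -> BER {errs/(2*n):.2e}")
```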

Keywords: Additive thermal noise, AWGN, BER, bit error rate, multiplicative phase noise, phase shift keying.

1270 Optimization of Bit Error Rate and Power of Ad-hoc Networks Using Genetic Algorithm

Authors: Anjana Choudhary

Abstract:

Ad hoc networks are the future of wireless technology, as everyone wants fast, accurate, and error-free information. With this in mind, the Bit Error Rate (BER) and the power of ad hoc networks are optimized in this research paper using a Genetic Algorithm (GA). The digital modulation techniques used in this paper are Binary Phase Shift Keying (BPSK), M-ary Phase Shift Keying (M-ary PSK), and Quadrature Amplitude Modulation (QAM). The work is implemented on Wireless Ad Hoc Networks (WLAN), and it is then analyzed which modulation technique performs best in optimizing the BER and power of the WLAN.
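
A toy GA loop of this kind can be sketched as follows: each chromosome is a transmit power level, and the fitness trades BER (which falls with SNR) against power consumption. The fitness weights, channel model, and GA parameters are hypothetical, not the paper's WLAN setup.

```python
# Toy GA sketch of the BER/power trade-off: chromosomes are transmit powers,
# fitness penalizes both BPSK BER and consumed power. All weights and the
# noise model are hypothetical.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(9)

def fitness(p_tx):
    snr = p_tx / 0.1                        # assumed noise power 0.1 W
    ber = 0.5 * erfc(np.sqrt(snr))          # BPSK bit error probability
    return -(1e4 * ber + p_tx)              # minimize weighted BER + power

pop = rng.uniform(0.01, 2.0, 30)            # initial power levels (W)
for gen in range(60):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)][-10:]    # selection: keep the 10 fittest
    children = rng.choice(parents, 30) + 0.05 * rng.standard_normal(30)
    pop = np.clip(children, 0.01, 2.0)      # mutation + bound constraints

best = pop[np.argmax(fitness(pop))]
print(f"best transmit power: {best:.3f} W, BER: {0.5*erfc(np.sqrt(best/0.1)):.2e}")
```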

Keywords: Bit Error Rate, Genetic Algorithm, Power, Phase Shift Keying, Quadrature Amplitude Modulation, Signal to Noise Ratio, Wireless Ad Hoc Networks.

1269 Error Rate Performance Comparisons of Precoding Schemes over Fading Channels for Multiuser MIMO

Authors: M. Arulvizhi

Abstract:

In multiuser MIMO communication systems, interuser interference has a strong impact on the transmitted signals. Precoding schemes are employed on multiuser broadcast channels to suppress interuser interference, and both linear and nonlinear precoding schemes exist. For massive system dimensions, it is difficult to design a precoding algorithm that simultaneously has low computational complexity and good error rate performance over fading channels. This paper describes the error rate performance of precoding schemes over fading channels, assuming perfect channel state information at the transmitter. To estimate the bit error rate performance, different propagation environments, namely Rayleigh, Rician, and Nakagami fading channels, are considered. The paper presents an error rate performance comparison across these fading channels for precoding methods such as channel inversion and dirty paper coding in a multiuser broadcast system, using MATLAB simulation. It is observed that the multiuser system achieves better error rate performance with dirty paper coding over the Rayleigh fading channel.
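
Channel inversion (zero-forcing) precoding, one of the two methods compared, can be sketched as pre-multiplying by the pseudo-inverse of the channel so each single-antenna user sees only its own symbol. The Rayleigh channel, QPSK symbols, and noise level below are toy choices.

```python
# Sketch of channel inversion (zero-forcing) precoding for the multiuser
# downlink: x = H^H (H H^H)^{-1} s removes interuser interference. Rayleigh
# channel and QPSK symbols are toy choices.
import numpy as np

rng = np.random.default_rng(10)
n_users = n_tx = 4
H = (rng.standard_normal((n_users, n_tx)) +
     1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)   # Rayleigh

s = (rng.choice([-1, 1], n_users) + 1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # channel inversion precoder
x0 = P @ s
scale = np.sqrt(n_tx) / np.linalg.norm(x0)       # total power normalization
x = scale * x0

noise = 0.05 * (rng.standard_normal(n_users) + 1j * rng.standard_normal(n_users))
y = H @ x + noise                                # each user sees its own symbol
print("sent:    ", np.round(s, 2))
print("received:", np.round(y / scale, 2))       # interference-free up to noise
```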

Keywords: Multiuser MIMO, channel inversion precoding, dirty paper coding, fading channels, BER.

1268 On Finite Wordlength Properties of Block-Floating-Point Arithmetic

Authors: Abhijit Mitra

Abstract:

A special case of floating point data representation is the block floating point format, in which a block of operands is forced to share a joint exponent term. This paper deals with the finite wordlength properties of this data format. The theoretical errors associated with the error model for the block floating point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio in quantization to block floating point format is derived. This representation is found to be a useful compromise between fixed point and floating point formats due to its acceptable numerical error properties over a wide dynamic range.
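
The quantization process itself can be sketched directly: every value in a block shares the exponent of the block's largest magnitude, and only fixed-precision mantissas are stored. The mantissa width and sample block below are illustrative.

```python
# Sketch of block floating point quantization: the block shares one exponent
# (set by its largest magnitude) and stores fixed-precision mantissas.
import numpy as np

def bfp_quantize(block, mantissa_bits=8):
    exp = int(np.ceil(np.log2(np.abs(block).max())))   # joint block exponent
    scale = 2.0 ** (mantissa_bits - 1 - exp)
    mant = np.clip(np.round(block * scale),
                   -2**(mantissa_bits - 1), 2**(mantissa_bits - 1) - 1)
    return mant / scale                                # dequantized values

x = np.array([0.013, -0.702, 0.356, 0.099])
xq = bfp_quantize(x)
err = x - xq
print("SNR (dB):", 10 * np.log10((x**2).sum() / (err**2).sum()))
```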

Keywords: Block floating point, Roundoff error, Block exponent distribution function, Signal factor.

1267 Comparative Study of Some Adaptive Fuzzy Algorithms for Manipulator Control

Authors: Sudeept Mohan, Surekha Bhanot

Abstract:

Manipulator control is a highly complex problem: the system to be controlled is multi-input, multi-output, non-linear, and time-variant. In this paper, several adaptive fuzzy control algorithms and a new hybrid fuzzy control algorithm are comparatively evaluated through simulations for manipulator control. The adaptive fuzzy controllers consist of self-organizing, self-tuning, and coarse/fine adaptive fuzzy schemes. These controllers are tested for different trajectories and for varying manipulator parameters. Various performance indices, such as the RMS error, steady-state error, and maximum error, are used for comparison. It is observed that the self-organizing fuzzy controller gives the best performance. The proposed hybrid fuzzy plus integral error controller also performs remarkably well, given its simple structure.
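
The three performance indices used for the comparison can be computed from a tracking error trace as in this sketch; the error signal is a synthetic stand-in for a joint-angle tracking error, and the steady-state window is an assumed convention.

```python
# Sketch of the three performance indices used in the comparison: RMS error,
# steady-state error, and maximum error, from a synthetic tracking error trace.
import numpy as np

t = np.linspace(0, 5, 500)
e = 0.3 * np.exp(-2 * t) * np.sin(8 * t) + 0.01     # toy tracking error (rad)

rms_error = np.sqrt(np.mean(e ** 2))
steady_state_error = np.abs(e[int(0.9 * len(e)):]).mean()  # last 10% of the run
max_error = np.abs(e).max()
print(f"RMS={rms_error:.4f}  SS={steady_state_error:.4f}  MAX={max_error:.4f}")
```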

Keywords: Hybrid fuzzy, Self-organizing, Self-tuning, Trajectory tracking.
