Search results for: An interval polynomial

372 GPS TEC Variation Affected by the Interhemispheric Conjugate Auroral Activity on 21 September 2009

Authors: W. Suparta, M. A. Mohd. Ali, M. S. Jit Singh, B. Yatim, T. Motoba, N. Sato, A. Kadokura, G. Bjornsson

Abstract:

This paper examines the interhemispheric conjugate auroral activity that occurred on 21 September 2009. The GPS-derived ionospheric total electron content (TEC) recorded during a weak substorm interval at the interhemispheric conjugate points of Husafell in Iceland and Syowa in Antarctica is investigated for signatures of the auroral features. Analysis of selected all-sky camera (ASC) images and keograms at Tjörnes and Syowa during the interval 00:47:54 – 00:50:14 UT on 21 September 2009 shows that the auroral activity exerted its influence on the GPS TEC as a consequence of the varying interplanetary magnetic field (IMF) By polarity.

Keywords: Auroral activity, GPS TEC, Interhemispheric conjugate points, Responses

371 An Adaptive Hand-Talking System for the Hearing Impaired

Authors: Zhou Yu, Jiang Feng

Abstract:

An adaptive Chinese hand-talking system is presented in this paper. By analyzing three data-collection strategies for new users, an adaptation framework including supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) accurately label the unlabeled data; the "labeled" data together with signer-independent models are then fed to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.

Keywords: sign language recognition, signer adaptation, eMAP/VFS, polynomial segment model.

370 Improved Neutron Leakage Treatment on Nodal Expansion Method for PWR Reactors

Authors: Antonio Carlos Marques Alvim, Fernando Carvalho da Silva, Aquilino Senra Martinez

Abstract:

For a quick and accurate calculation of the spatial neutron distribution in nuclear power reactors, 3D nodal codes are usually used to solve the neutron diffusion equation for a given reactor core geometry and material composition. These codes use a second-order polynomial to represent the transverse leakage term. In this work, a nodal method based on the well-known nodal expansion method (NEM), developed at COPPE and making use of this polynomial expansion, was modified to treat the transverse leakage term for the external surfaces of peripheral reflector nodes. The proposed method was implemented into a computational system which, besides solving the diffusion equation, also solves the burnup equations governing the gradual changes in material composition of the core due to fuel depletion. Results confirm the effectiveness of this modified treatment of peripheral nodes for practical purposes in PWR reactors.

Keywords: Transverse leakage, nodal expansion method, power density, PWR reactors

369 Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong

Abstract:

In this paper we propose two new confidence intervals for the normal population mean with known coefficient of variation. This situation commonly occurs in environmental and agricultural experiments where the scientist knows the coefficient of variation of the experiment. The intervals are based on the recent work of Searls [5] and on a new method proposed here for the first time. We derive analytic expressions for the coverage probability and the expected length of each confidence interval. Monte Carlo simulation is used to assess the performance of these intervals based on their expected lengths.

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation.
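The coverage probability and expected length mentioned in this abstract can be checked numerically. The following is a minimal Monte Carlo sketch in Python, assuming a simple plug-in z-interval that replaces sigma by tau·x̄, with tau the known coefficient of variation; it illustrates the assessment procedure only and is not the intervals derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def coverage_and_length(mu=10.0, tau=0.1, n=15, alpha=0.05, reps=20000, seed=0):
    """Monte Carlo coverage probability and expected length of a plug-in
    z-interval for a normal mean when the coefficient of variation
    tau = sigma/mu is known (sigma is replaced by tau * xbar).
    Illustrative only; not the intervals proposed in the paper."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - alpha / 2)
    x = rng.normal(mu, tau * mu, size=(reps, n))
    xbar = x.mean(axis=1)
    half = z * tau * np.abs(xbar) / np.sqrt(n)        # plug-in half-width
    covered = (xbar - half <= mu) & (mu <= xbar + half)
    return covered.mean(), (2 * half).mean()

cov, length = coverage_and_length()
print(f"coverage ~ {cov:.3f}, expected length ~ {length:.3f}")
```

Comparing the simulated coverage against the nominal 1 - alpha, and the simulated length against the analytic expression, is the kind of assessment the abstract describes.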

368 Lagrange and Multilevel Wavelet-Galerkin with Polynomial Time Basis for Heat Equation

Authors: Watcharakorn Thongchuay, Puntip Toghaw, Montri Maleewong

Abstract:

The Wavelet-Galerkin finite element method for solving the one-dimensional heat equation is presented in this work. Two types of basis functions, the Lagrange and multilevel wavelet bases, are employed to derive the full matrix system. We consider both linear and quadratic bases in the Galerkin method. The time derivative is approximated by a polynomial time basis that allows the order of approximation in time to be extended easily. Our numerical results show that the rates of convergence for the linear Lagrange and linear wavelet bases are the same, of order 2, while the rates of convergence for the quadratic Lagrange and quadratic wavelet bases are approximately of order 4. The results also reveal that the wavelet basis provides an easy way to improve numerical resolution, simply by increasing the desired number of levels in the multilevel construction process.

Keywords: Galerkin finite element method, heat equation, Lagrange basis function, wavelet basis function.

367 A Mixed Expert Evaluation System and Dynamic Interval-Valued Hesitant Fuzzy Selection Approach

Authors: Hossein Gitinavard, Mohammad Hossein Fazel Zarandi

Abstract:

In recent decades, concerns about environmental issues have led to professional and academic efforts on green supplier selection problems. In this regard, one of the main issues in evaluating green supplier selection problems, and one which can increase the uncertainty, is the preference expressed in the experts' judgments about the candidate green suppliers. Therefore, preparing an expert system that evaluates the problem based on historical data and the experts' knowledge is sensible. This study provides an expert evaluation system to assess the candidate green suppliers under selected criteria in a multi-period approach. In addition, a ranking approach under an interval-valued hesitant fuzzy set (IVHFS) environment is proposed to select the most appropriate green supplier over the planning horizon. In the proposed ranking approach, the IVHFS and the last-aggregation approach are used to bound the errors and to prevent data loss, respectively. Finally, a comparative analysis based on an illustrative example is provided to show the feasibility of the proposed approach.

Keywords: Green supplier selection, expert system, ranking approach, interval-valued hesitant fuzzy setting.

366 An Interval-Based Multi-Attribute Decision Making Approach for Electric Utility Resource Planning

Authors: M. Sedighizadeh, A. Rezazadeh

Abstract:

This paper presents an interval-based multi-attribute decision making (MADM) approach in support of the decision process with imprecise information. The proposed decision methodology is based on the model of a linear additive utility function but extends the problem formulation with a measure of composite utility variance. A sample study concerning the evaluation of electric generation expansion strategies is provided, showing how imprecise data may affect the choice of the best solution and how a set of alternatives acceptable to the decision maker (DM) may be identified with a certain level of confidence.

Keywords: Decision Making, Power Generation, Electric Utilities, Resource Planning.
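The core of the linear additive model with interval data can be illustrated in a few lines. The Python sketch below propagates interval-valued attribute scores through a weighted sum to obtain interval utilities; the weights and scores are hypothetical placeholders, and the composite utility variance extension described in the abstract is not included.

```python
import numpy as np

# Interval-based linear additive utility: each alternative's attribute scores
# are intervals [lo, hi]; with non-negative weights, the additive utility is
# itself an interval.  Values below are illustrative only.
weights = np.array([0.5, 0.3, 0.2])
scores = np.array([                          # alternatives x attributes x (lo, hi)
    [[0.6, 0.8], [0.4, 0.5], [0.7, 0.9]],    # expansion strategy A
    [[0.5, 0.9], [0.6, 0.7], [0.3, 0.6]],    # expansion strategy B
])

utility_lo = scores[:, :, 0] @ weights       # lower bound of the additive utility
utility_hi = scores[:, :, 1] @ weights       # upper bound of the additive utility
for name, lo, hi in zip("AB", utility_lo, utility_hi):
    print(f"strategy {name}: utility in [{lo:.2f}, {hi:.2f}]")
```

Overlapping utility intervals are exactly the situation in which the decision maker can only identify a set of acceptable alternatives rather than a single best one.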

365 Design and Implementation of Reed Solomon Encoder on FPGA

Authors: Amandeep Singh, Mandeep Kaur

Abstract:

Error correcting codes are used for the detection and correction of errors in digital communication systems. Error correcting coding is based on appending redundancy to the information message according to a prescribed algorithm. Reed Solomon codes are a form of channel coding and withstand the effects of noise, interference and fading. Galois field arithmetic is used for encoding and decoding Reed Solomon codes. Galois field multipliers and linear feedback shift registers (LFSRs) are used for encoding the information data block. The design of a Reed Solomon encoder is complex because of the use of LFSRs and Galois field arithmetic. The purpose of this paper is to design and implement a Reed Solomon (255, 239) encoder with an optimized, smaller number of Galois field multipliers. A symmetric generator polynomial is used to reduce the number of GF multipliers. To increase the error correction capability, convolutional interleaving is used with the RS encoder. The design is implemented on a Xilinx Spartan II FPGA.

Keywords: Galois Field, Generator polynomial, LFSR, Reed Solomon.
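To make the encoding step concrete, here is a minimal software sketch in Python of Galois-field multiplication and systematic RS encoding by LFSR-style polynomial division. It assumes the common primitive polynomial 0x11D and a generator polynomial built from consecutive powers of alpha = 2; it is a reference illustration only and does not reflect the multiplier reduction (symmetric generator polynomial) or the interleaving of the FPGA design.

```python
def gf_mul(a, b, prim=0x11D):
    """Multiply in GF(2^8), reducing by the primitive polynomial 0x11D."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= prim
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_poly_mul(p, q):
    """Polynomial product over GF(2^8); coefficients listed highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)) with a = 2."""
    g = [1]
    for i in range(nsym):
        g = gf_poly_mul(g, [1, gf_pow(2, i)])
    return g

def rs_encode(msg, nsym=16):
    """Systematic encoding: divide msg(x) * x^nsym by g(x) (the LFSR operation)
    and append the remainder as parity symbols."""
    gen = rs_generator_poly(nsym)
    buf = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = buf[i]
        if coef:
            for j in range(1, len(gen)):
                buf[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + buf[len(msg):]

codeword = rs_encode(list(range(239)), nsym=16)   # RS(255, 239): 239 data + 16 parity
print(len(codeword))                              # 255
```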

364 Model-free Prediction based on Tracking Theory and Newton Form of Polynomial

Authors: Guoyuan Qi , Yskandar Hamam, Barend Jacobus van Wyk, Shengzhi Du

Abstract:

The majority of existing predictors for time series are model-dependent and therefore require some prior knowledge for the identification of complex systems, usually involving system identification, extensive training, or online adaptation in the case of time-varying systems. Additionally, since a time series is usually generated by complex processes such as the stock market or other chaotic systems, identification, modeling or the online updating of parameters can be problematic. In this paper a model-free predictor (MFP) for a time series produced by an unknown nonlinear system or process is derived using tracking theory. An identical derivation of the MFP using the property of the Newton form of the interpolating polynomial is also presented. The MFP is able to accurately predict future values of a time series, is stable, has few tuning parameters and is desirable for engineering applications due to its simplicity, fast prediction speed and extremely low computational load. The performance of the proposed MFP is demonstrated using the prediction of the Dow Jones Industrial Average stock index.

Keywords: Forecast, model-free predictor, prediction, time series
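Since the MFP coincides with extrapolation by the Newton form of the interpolating polynomial, the idea can be sketched in a few lines. The Python function below (with an illustrative window length k) predicts the next sample of an equally spaced series from its last k samples; it shows the polynomial-extrapolation principle only, not the authors' exact tracking-theory formulation or tuning.

```python
import numpy as np

def newton_extrapolate(y, k=4):
    """Predict the next sample of an equally spaced series by evaluating, one
    step ahead, the Newton-form polynomial that interpolates the last k samples."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y) - k, len(y), dtype=float)     # times of the last k samples
    coef = y[-k:].copy()                               # divided-difference table, in place
    for j in range(1, k):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (t[j:] - t[:-j])
    x = float(len(y))                                  # next time index
    pred = coef[-1]                                    # Horner evaluation of the Newton form
    for j in range(k - 2, -1, -1):
        pred = pred * (x - t[j]) + coef[j]
    return pred

series = [1.0, 1.4, 2.1, 3.0, 4.3, 5.9]
print(newton_extrapolate(series, k=4))                 # one-step-ahead prediction
```

The low computational load claimed in the abstract is visible here: one prediction costs only O(k^2) arithmetic operations on the last k samples.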

363 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data have some uncertainties due to errors by human beings or machines, or from other sources. These uncertainty factors limit the understanding of system component failure because the data are incomplete. In such situations, classical methods need to be generalized to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS) each element is associated with a point value selected from the unit interval [0, 1], termed the grade of membership in the set. A Vague Set (VS), as well as an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS. Instead of the point-based membership used in FS, interval-based membership is used in VS. The interval-based membership in VS is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because they allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

362 Recent Trends in Nonlinear Methods of HRV Analysis: A Review

Authors: Ramesh K. Sunkaria

Abstract:

The linear methods of heart rate variability (HRV) analysis, both non-parametric (e.g. fast Fourier transform analysis) and parametric (e.g. autoregressive modeling), have become established non-invasive tools for assessing cardiac health, but their sensitivity and specificity have been found to be lower than expected, with a positive predictive value below 30%. This may be because the RR-interval series is treated as stationary and re-sampled prior to analysis, whereas in reality it is not stationary. This paper reviews the non-linear methods of HRV analysis, such as correlation dimension, largest Lyapunov exponent, power-law slope, fractal analysis, detrended fluctuation analysis and complexity measures, which are currently becoming popular because they use the actual RR-interval series. These methods are expected to provide a highly accurate cardiac health prognosis.

Keywords: chaos, nonlinear dynamics, sample entropy, approximate entropy, detrended fluctuation analysis.
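Of the non-linear measures mentioned in this review, detrended fluctuation analysis (DFA) is straightforward to sketch. The minimal Python implementation below computes the DFA scaling exponent of an RR-interval series with first-order detrending; the synthetic series and the choice of scales are illustrative assumptions, not values from the reviewed studies.

```python
import numpy as np

def dfa_alpha(rr, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis of an RR-interval series: integrate the
    mean-removed series, detrend it in windows of each scale with a first-order
    polynomial, and return the slope of log F(s) versus log s (the DFA exponent)."""
    rr = np.asarray(rr, dtype=float)
    y = np.cumsum(rr - rr.mean())                      # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        x = np.arange(s)
        ms = [np.mean((seg - np.polyval(np.polyfit(x, seg, 1), x)) ** 2) for seg in segs]
        F.append(np.sqrt(np.mean(ms)))                 # fluctuation at scale s
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rr = np.random.default_rng(0).normal(0.8, 0.05, 1000)  # synthetic RR intervals (s)
print(dfa_alpha(rr))                                   # ~0.5 for uncorrelated data
```

Note that the method operates directly on the RR-interval series without any re-sampling, which is precisely the advantage over the linear methods argued in the abstract.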

361 Reliability Analysis of k-out-of-n : G System Using Triangular Intuitionistic Fuzzy Numbers

Authors: Tanuj Kumar, Rakesh Kumar Bajaj

Abstract:

In the present paper, we analyze the vague reliability of a k-out-of-n : G system (in particular, series and parallel systems) with independent and non-identically distributed components, where the reliabilities of the components are unknown. The reliability of each component is estimated using a statistical confidence interval approach. These statistical confidence intervals are then converted into triangular intuitionistic fuzzy numbers. Based on these triangular intuitionistic fuzzy numbers, the reliability of the k-out-of-n : G system is calculated. Further, in order to implement the proposed methodology and to analyze the results for the k-out-of-n : G system, a numerical example is provided.

Keywords: Vague set, vague reliability, triangular intuitionistic fuzzy number, k-out-of-n : G system, series and parallel system.

360 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set is adopted. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as to avoid an incorrect estimate that could lead to a wrong judgment in a court of law. The LR is fundamentally a Bayesian concept, and two LR estimators are used in this paper, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.

Keywords: Logistic Regression LoR, Kernel Density Estimator KDE, Handwriting, Confidence Interval, Repeatability, Reproducibility.
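The two score-based LR estimators named in the abstract can be sketched as follows in Python. Both operate on one-dimensional similarity scores; the synthetic same-writer and different-writer score distributions, bandwidths and class balance are assumptions for illustration only, not the paper's handwriting data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
same = rng.normal(0.7, 0.1, 300)     # scores under the prosecution hypothesis (same writer)
diff = rng.normal(0.4, 0.1, 300)     # scores under the defence hypothesis (different writers)

# LoR estimator: with balanced classes, the posterior odds equal the likelihood ratio.
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.concatenate([np.ones(300), np.zeros(300)])
lor = LogisticRegression().fit(X, y)

def lr_logistic(score):
    p = lor.predict_proba([[score]])[0, 1]
    return p / (1 - p)

# KDE estimator: ratio of the two class-conditional score densities.
kde_same = KernelDensity(bandwidth=0.05).fit(same.reshape(-1, 1))
kde_diff = KernelDensity(bandwidth=0.05).fit(diff.reshape(-1, 1))

def lr_kde(score):
    return np.exp(kde_same.score_samples([[score]])[0] - kde_diff.score_samples([[score]])[0])

print(lr_logistic(0.65), lr_kde(0.65))
```

Repeating the experiment on resampled or retested data, as the paper does, would yield a distribution of such LR values from which a confidence interval can be reported.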

359 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations

Authors: Ramon Santana

Abstract:

The use of biometric identifiers in information security, access control to resources, and authentication in ATMs and banking, among other areas, raises great concern about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow the minutiae template to be obtained in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed; however, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; this transformation is, furthermore, irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of the proposed models, basing its security on the infeasibility of polynomial reconstruction.

Keywords: Fingerprint, template protection, bio-cryptography, minutiae protection.

358 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases affecting humankind, and it exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an "indirect approach" is used for fuzzy system modeling by implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first built a Type-I fuzzy system that had an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision; this imprecise knowledge was therefore managed using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

357 Using Interval Constrained Petri Nets for the Fuzzy Regulation of Quality: Case of Assembly Process Mechanics

Authors: Nabli L., Dhouibi H., Collart Dutilleul S., Craye E.

Abstract:

The imprecision of manufacturing processes means that parts cannot be realized in a way that exactly matches the dimensional specifications. It is thus necessary to require that the actually realized product lies strictly within intervals compatible with the correct functioning of the parts. In this paper we present an approach based on combining two different theories: fuzzy systems and Petri nets. This tool is proposed to model and control quality in an assembly system. A robust control of a mechanical assembly process is presented as an application; this control has to keep the parts within their specification intervals in the face of variations. It also illustrates how the technique reacts when the product quality is high, medium, or low.

Keywords: Petri nets, production rate, performance evaluation, tolerant system, fuzzy sets.

356 Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach

Authors: Helen L. Hein, Joachim Schwarte

Abstract:

As a pillar of sustainable development, ecology has become an important focus of the research community, especially in view of global challenges like climate change. The ecological performance of products can be scientifically assessed with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are attributed to the energy used for building heating. Sustainable construction materials for insulation purposes are therefore essential, and aerogels have been explored intensively in recent years due to their low thermal conductivity. In this context, the WALL-ACE project aims to develop an aerogel-based thermal insulating plaster with very low thermal conductivity. However, since much information is still missing or not yet accessible in the early development phases, the ecological performance of innovative products is increasingly based on uncertain data, which can lead to significant deviations in the results. To predict realistically how meaningful the results are and how viable the developed products may be on their respective markets, these deviations have to be considered. Therefore, a classification method is presented in this study that allows the ecological performance of new products to be compared with already established and competitive materials. To achieve this, an alternative calculation method was used that computes with lower and upper bounds so as to cover all possible values in the absence of precise data. The life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that the usability of the results is limited. Nevertheless, further optimization in reducing the environmental impacts of aerogels seems to be needed for them to become more competitive in the future.

Keywords: Aerogel-based, insulating material, early development phase, interval arithmetic.
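The interval-arithmetic calculation principle can be illustrated compactly. The Python sketch below carries lower and upper bounds through a toy impact calculation; the material quantities and emission factors are hypothetical placeholders, not data from the WALL-ACE project.

```python
# Carry [lower, upper] bounds through the impact calculation instead of single
# (uncertain) values.  All quantities and factors below are placeholders.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# GWP contribution = amount [kg] * emission factor [kg CO2-eq/kg], both uncertain
binder  = i_mul((0.80, 1.20), (0.70, 0.95))
aerogel = i_mul((0.05, 0.10), (3.00, 6.00))
total_gwp = i_add(binder, aerogel)
print(total_gwp)          # lower and upper bound of the environmental impact
```

When the input intervals are wide, the output interval widens accordingly, which is exactly the limited result usability reported in the abstract.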

355 Genetic Algorithm Approach for Solving the Falkner–Skan Equation

Authors: Indu Saini, Phool Singh, Vikas Malik

Abstract:

A novel method based on a Genetic Algorithm for solving the boundary value problems (BVPs) of the Falkner–Skan equation over a semi-infinite interval is presented. In our approach, we use the free boundary formulation to truncate the semi-infinite interval into a finite one. A shooting method based on the Genetic Algorithm is then used to transform the BVP into initial value problems (IVPs), with the Genetic Algorithm calculating the shooting angle. The initial value problems arising during shooting are solved by the Runge-Kutta-Fehlberg method. The numerical solutions obtained by the present method are in agreement with those obtained by previous authors.

Keywords: Boundary Layer Flow, Falkner–Skan equation, Genetic Algorithm, Shooting method.
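A minimal sketch of the shooting idea is given below in Python. It truncates the semi-infinite interval at an assumed eta_max, integrates the Falkner–Skan IVP with an adaptive Runge–Kutta solver, and searches the shooting angle f''(0) with a toy selection-and-mutation loop; the value of beta, the population settings and the simple evolutionary scheme are illustrative assumptions, not the authors' GA configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Falkner-Skan: f''' + f f'' + beta (1 - f'^2) = 0, f(0)=f'(0)=0, f'(inf)=1.
# beta and eta_max (truncation of the semi-infinite interval) are assumed values.
beta, eta_max = 0.5, 8.0

def miss_distance(s):
    """Integrate the IVP with shooting angle s = f''(0); return f'(eta_max) - 1."""
    rhs = lambda eta, y: [y[1], y[2], -y[0] * y[2] - beta * (1.0 - y[1] ** 2)]
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Toy evolutionary search over the shooting angle (selection + Gaussian mutation).
rng = np.random.default_rng(0)
pop = rng.uniform(0.3, 1.3, 20)
for _ in range(30):
    fitness = np.nan_to_num(np.abs([miss_distance(s) for s in pop]), nan=np.inf)
    parents = pop[np.argsort(fitness)[:5]]                       # keep the 5 best
    children = np.repeat(parents, 3) + rng.normal(0, 0.03, 15)   # mutated offspring
    pop = np.concatenate([parents, children])

final = np.nan_to_num(np.abs([miss_distance(s) for s in pop]), nan=np.inf)
print("estimated f''(0):", pop[np.argmin(final)])
```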

354 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information on whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information on whether the wavelet coefficient belongs to the lower or to the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are used with the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and proves very successful in terms of PSNR.

Keywords: Image compression, wavelet transform, sign coding, magnitude coding.

353 Investigating Climate Change Trend Based on Data Simulation and IPCC Scenario during 2010-2030 AD: Case Study of Fars Province

Authors: Leila Rashidian, Abbas Ebrahimi

Abstract:

The development of industrial activities, increases in fossil fuel consumption and vehicle use, the destruction of forests and grasslands, changes in land use, and population growth have increased the amount of greenhouse gases, especially CO2, in the atmosphere in recent decades. This has led to global warming and climate change. In the present paper, we investigate the trend of climate change based on data simulation over the 2010-2030 time interval in Fars province. Daily climatic parameters such as maximum and minimum temperature, precipitation and number of sunny hours for the synoptic stations of Shiraz and Abadeh over 1977-2008 and for the Lar station over 1995-2008, together with the output of the HADCM3 model for the 2010-2030 interval, are used based on the A2 scenario. The results of the model show that the average temperature will increase by about 1 degree centigrade and the amount of precipitation will increase by 23.9% compared to the observational data. In conclusion, given the temperature increase in this province, precipitation in the form of snow will be reduced and precipitation will more often occur as rain. This 1-degree-centigrade increase during the growing season will reduce production by 6 to 10% because it shortens the growing period of wheat.

Keywords: Climate change, Lars.WG, HADCM3 model, Fars province, climatic parameters, A2 scenario.

352 Forecasting Issues in Energy Markets within a Reg-ARIMA Framework

Authors: Ilaria Lucrezia Amerise

Abstract:

Electricity markets throughout the world have undergone substantial changes. Accurate, reliable, clear and comprehensible modeling and forecasting of different variables (loads and prices in the first instance) have achieved increasing importance. In this paper, we describe the current state of the art, focusing on reg-SARMA methods, which have proven flexible enough to accommodate electricity price/load behavior satisfactorily. More specifically, we discuss: 1) the dichotomy between point and interval forecasts; 2) the difficult choice between stochastic predictors (e.g. climatic variation) and deterministic predictors (e.g. calendar variables); 3) the choice between modelling a single aggregate time series and creating separate, potentially different, models of sub-series. The noteworthy point that we would like to bring out is that prices and loads require different approaches that appear irreconcilable, even though they must be reconciled in the interests and activities of energy companies.

Keywords: Forecasting problem, interval forecasts, time series, electricity prices, reg-plus-SARMA methods.
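A regression-plus-SARMA model with a calendar regressor, producing both point and interval forecasts, can be sketched with statsmodels as follows. The synthetic hourly price series, the weekend dummy and the chosen orders are assumptions for illustration, not a model fitted to real market data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly electricity prices with a daily cycle (illustrative only).
idx = pd.date_range("2021-01-01", periods=24 * 14, freq="H")
rng = np.random.default_rng(3)
price = 50 + 10 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 2, len(idx))

# Deterministic calendar regressor: weekend indicator.
exog = (idx.dayofweek >= 5).astype(float).reshape(-1, 1)

# reg-SARMA: SARMA errors around a regression on the calendar variable.
model = SARIMAX(price, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
res = model.fit(disp=False)

fut_exog = np.ones((24, 1))                  # assume the next 24 hours are a weekend
fc = res.get_forecast(steps=24, exog=fut_exog)
print(fc.predicted_mean[:3])                 # point forecasts
print(fc.conf_int(alpha=0.05)[:3])           # 95% interval forecasts
```

The last two lines show the point/interval dichotomy discussed in the abstract: the same fitted model delivers both a single predicted value and a prediction interval for each future hour.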

351 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Authors: C. Villegas-Quezada, J. Climent

Abstract:

Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other pixel characteristics of the image, generating a set of "p" variables. The multivariate data set is approximated with different polynomials minimizing the fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the problem of dimensionality inherent in higher-degree polynomial approximations is circumvented. The GA yields the degree and coefficient values of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials with each Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.

Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.

350 Fuzzy Controlled Hydraulic Excavator with Model Parameter Uncertainty

Authors: Ganesh Kothapalli, Mohammed Y. Hassan

Abstract:

The hydraulic actuated excavator, being a non-linear mobile machine, encounters many uncertainties. There are uncertainties in the hydraulic system in addition to the uncertain nature of the load. The simulation results obtained in this study show that there is a need for intelligent control of such machines and, in particular, that an interval type-2 fuzzy controller is most suitable for minimizing the position error of a typical excavator's bucket under load variations. We consider model parameter uncertainties such as hydraulic fluid leakage and friction. These uncertainties also depend on the temperature and alter the bulk modulus and viscosity of the hydraulic fluid. Such uncertainties, together with the load variations, cause chattering of the bucket position. The interval type-2 fuzzy controller effectively eliminates the chattering and manages to control the end-effector (bucket) position with a positional error of the order of a few millimeters.

Keywords: excavator, fuzzy control, hydraulics, mining, type-2

349 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in controlling climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10–50 °C), buffer concentration (0.2–2.6%) and water salinity (25–197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal of the modified Solvay method.

Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.
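The second-order polynomial fit at the heart of the RSM step can be sketched as follows in Python. The coded factor settings and synthetic response below are placeholders standing in for the CCD runs; only the structure of the quadratic response-surface model (intercept, linear, interaction and pure quadratic terms) is illustrated.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Second-order RSM model: intercept, linear, two-factor interaction and
    pure quadratic terms, as used when fitting a Central Composite Design."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                  # linear
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]    # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                             # quadratic
    return np.column_stack(cols)

# Hypothetical coded factors (gas flow, temperature, buffer, salinity) and response.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 4))                      # coded CCD-like settings
y = 80 + 5 * X[:, 0] - 3 * X[:, 2] ** 2 + rng.normal(0, 1, 30)   # synthetic capture efficiency

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)              # OLS estimate of the polynomial
print(np.round(coef, 2))
```

A response optimizer then searches this fitted polynomial surface for the factor settings that maximize the predicted capture efficiency and sodium removal.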

348 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free overfall jets issuing from a triangular sharp-crested weir using regression-based modelling. Empirical equations, support vector machine models (with polynomial and radial basis function kernels) and linear regression techniques were applied to the triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. Good agreement was observed between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models, along with the empirical equations and linear regression techniques, provided acceptable predictions of the measured air entrainment rate and aeration efficiency with reasonable accuracy. A sensitivity analysis has also been performed to study the impact of each input parameter on the air entrainment rate and aeration efficiency outputs.

Keywords: Air entrainment rate, dissolved oxygen, regression, SVM, weir.
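A minimal support vector regression sketch with the two kernels mentioned in the abstract is given below using scikit-learn. The feature ranges (drop height, discharge, vertex angle) and the synthetic target are illustrative assumptions, not the weir measurements used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical inputs: drop height (m), discharge (L/s), vertex angle (deg);
# target: aeration efficiency.  All values are placeholders.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.2, 1.0, 80),
                     rng.uniform(0.5, 3.0, 80),
                     rng.uniform(30, 120, 80)])
y = 0.3 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.02, 80)

svr_poly = make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2, C=10.0))
svr_rbf  = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
svr_poly.fit(X, y)
svr_rbf.fit(X, y)
print(svr_rbf.predict(X[:3]))     # predicted aeration efficiency for three runs
```

Comparing the two kernel models against a plain linear regression on the same inputs mirrors the comparison carried out in the paper.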

347 Hydraulic Conductivity Prediction of Cement Stabilized Pavement Base Incorporating Recycled Plastics and Recycled Aggregates

Authors: Md. Shams Razi Shopnil, Tanvir Imtiaz, Sabrina Mahjabin, Md. Sahadat Hossain

Abstract:

Saturated hydraulic conductivity is one of the most significant attributes of a pavement base course. Determination of hydraulic conductivity is a routine procedure for regular aggregate base courses. In many cases, however, a cement-stabilized base course is used, with compromised drainage ability. The traditional hydraulic conductivity testing procedure is a readily available option, but it has two consequential drawbacks: the time required for the specimen to become saturated and the need to extrude the sample after completion of the laboratory test. To overcome these complications, this study aims at formulating an empirical approach to predicting hydraulic conductivity from Unconfined Compressive Strength test results. To do so, the study comprises two separate experiments (a Constant Head Permeability test and an Unconfined Compressive Strength test) conducted concurrently on specimens with the same physical characteristics. Data obtained from the two experiments were then used to devise a correlation between hydraulic conductivity and unconfined compressive strength. This correlation, in the form of a polynomial equation, helps to predict the hydraulic conductivity of a cement-treated pavement base course, bypassing the cumbersome traditional permeability test and the less commonly used horizontal permeability test. The correlation was further corroborated by a different set of data, and the derived polynomial equation was found to be a viable tool for predicting hydraulic conductivity.

Keywords: Hydraulic conductivity, unconfined compressive strength, recycled plastics, recycled concrete aggregates.
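Deriving such a polynomial correlation amounts to a simple curve fit. The Python sketch below fits a quadratic to hypothetical UCS-conductivity pairs, working in log10(k) because conductivities span orders of magnitude (an assumption on our part); the numbers are placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical unconfined compressive strength (MPa) vs. hydraulic conductivity (cm/s).
ucs = np.array([1.2, 1.8, 2.4, 3.1, 3.9, 4.6])
k   = np.array([8e-5, 5e-5, 3e-5, 1.5e-5, 8e-6, 4e-6])

coeffs = np.polyfit(ucs, np.log10(k), deg=2)       # quadratic correlation in log10(k)
predict_k = lambda u: 10 ** np.polyval(coeffs, u)  # predicted conductivity from UCS
print(predict_k(2.0))
```

Checking the fitted curve against an independent data set, as the study does, guards against the correlation being an artifact of one batch of specimens.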

346 Route Training in Mobile Robotics through System Identification

Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings

Abstract:

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks, that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as this emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can be subsequently analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.

Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.
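The transparent, analysable polynomial models mentioned in this abstract can be illustrated with a simplified polynomial NARX fit (a NARMAX without the moving-average noise terms), estimated by least squares. The synthetic input-output data and the chosen regressor terms below are assumptions for illustration, not the RobotMODIC robot data.

```python
import numpy as np

# Synthetic single-input single-output data: u = a sensor reading, y = a motor command.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 500)
y = np.zeros(500)
for t in range(2, 500):                            # unknown "true" robot behaviour
    y[t] = 0.4 * y[t-1] + 0.3 * u[t-1] - 0.2 * u[t-1] * y[t-1] + rng.normal(0, 0.01)

def regressors(y, u, t):
    """Polynomial terms in lagged output and input (degree 2, lag 1)."""
    return [1.0, y[t-1], u[t-1], y[t-1]**2, u[t-1]**2, y[t-1] * u[t-1]]

Phi = np.array([regressors(y, u, t) for t in range(2, 500)])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)   # least-squares coefficients
print(np.round(theta, 3))   # explicit polynomial coefficients, open to inspection
```

Because the result is an explicit polynomial, the recovered coefficients can be read off and analysed directly, which is the transparency argument made in the abstract.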

345 Knowledge Representation Based On Interval Type-2 CFCM Clustering

Authors: Myung-Won Lee, Keun-Chang Kwak

Abstract:

This paper is concerned with knowledge representation and the extraction of fuzzy if-then rules using Interval Type-2 Context-based Fuzzy C-Means clustering (IT2-CFCM) with the aid of fuzzy granulation. The proposed clustering algorithm is based on information granulation in the form of Interval Type-2 Fuzzy C-Means (IT2-FCM) clustering and estimates the cluster centers by preserving the homogeneity between the clustered patterns from the IT2 contexts produced in the output space. Furthermore, automatic knowledge representation can be obtained in the design of Radial Basis Function Networks (RBFN), Linguistic Models (LM), and Adaptive Neuro-Fuzzy Networks (ANFN) from numerical input-output data pairs. We focus on the design of an ANFN in this paper. The experimental results on an energy performance estimation problem reveal that the proposed method provides good knowledge representation and performance in comparison with previous works.

Keywords: IT2-FCM, IT2-CFCM, context-based fuzzy clustering, adaptive neuro-fuzzy network, knowledge representation.

344 Data Mining Determination of Sunlight Average Input for Solar Power Plant

Authors: Fl. Loury, P. Sablonière, C. Lamoureux, G. Magnier, Th. Gutierrez

Abstract:

A method is proposed to extract faithful representative patterns from a data set of observations when they suffer from non-negligible fluctuations. Supposing the time interval between measurements to be extremely small compared to the observation time, the method consists in first defining a subset of intermediate time intervals characterizing coherent behavior. Projecting the data onto these intervals gives a set of curves, out of which an ideally "perfect" one is constructed by taking their supremum. Comparison with the average real curve in the corresponding interval then gives an efficiency parameter expressing the degradation caused by the fluctuation effect. The method is applied to sunlight data collected at a specific location, where the ideal sunlight is that resulting from direct exposure at the location's latitude over the year, and the efficiency reflects the action of meteorological parameters, mainly cloudiness, at different periods of the year. The extracted information already provides interesting elements for decision-making, before being used in the analysis of plant control.

Keywords: Base Input Reconstruction, Data Mining, Efficiency Factor, Information Pattern Operator.

343 Terminal Velocity of a Bubble Rise in a Liquid Column

Authors: Mário A. R. Talaia

Abstract:

As is known, buoyancy and drag forces govern a bubble's rise velocity in a liquid column. These forces depend strongly on the fluid properties and gravity, as well as on the equivalent diameter. This study reports a set of bubble rise velocity experiments in a liquid column using water or glycerol. Several records of terminal velocity were obtained. The results show that a bubble's terminal rise velocity depends strongly on the dynamic viscosity. The data set yielded terminal velocities in the interval 8.0-32.9 cm/s, with Reynolds numbers in the interval 1.3-7490. The bubble's movement was recorded with a video camera. The main goal is to present an original data set and results, which are discussed on the basis of two-phase flow theory. The prediction of the terminal velocity of a single bubble in a liquid, as well as its range of applicability, is also discussed. In conclusion, this study presents general expressions for the determination of the terminal velocity of isolated gas bubbles over a range of Reynolds numbers, when the fluid properties are known.

Keywords: Bubbles, terminal velocity, two phase-flow, vertical column.
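For the low-Reynolds-number end of the range reported in the abstract, the balance of buoyancy against drag reduces to a closed-form expression. The Python sketch below assumes the Stokes (creeping-flow) drag law, which is valid only for Re much less than 1 and therefore does not cover the full 1.3-7490 range of the experiments; the fluid properties are approximate textbook values.

```python
def stokes_terminal_velocity(d, rho_l, rho_g, mu, g=9.81):
    """Terminal velocity of a small sphere where buoyancy balances Stokes drag:
    U_t = g d^2 (rho_l - rho_g) / (18 mu), valid for Re = rho_l U_t d / mu << 1."""
    return g * d**2 * (rho_l - rho_g) / (18.0 * mu)

# Approximate properties for a 1 mm air bubble rising in glycerol.
d, rho_l, rho_g, mu = 1e-3, 1260.0, 1.2, 1.4
u_t = stokes_terminal_velocity(d, rho_l, rho_g, mu)
re = rho_l * u_t * d / mu
print(f"U_t ~ {u_t * 100:.3f} cm/s, Re ~ {re:.4f}")
```

At the higher Reynolds numbers reached in the water experiments, the drag coefficient departs from the Stokes law and the general correlations discussed in the paper are needed instead.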
