Search results for: IIR filtering. Polyphase decomposition.
Paper Count: 591

381 Adaptive Weighted Averaging Filter Using the Appropriate Number of Consecutive Frames

Authors: Mahmoud Saeidi, Ali Nazemipour

Abstract:

In this paper, we propose a novel adaptive spatiotemporal filter that exploits image sequences to remove noise. The consecutive frames comprise the current, previous, and next noisy frames. The proposed filter is based on weighted averaging of pixel intensities and noise variance across the image sequence. It utilizes an Appropriate Number of Consecutive Frames (ANCF) determined from the noisy pixel intensities among the frames. The number of consecutive frames is calculated adaptively for each region of the image and may vary from region to region depending on the pixel intensities within the region. The weights are determined by a well-defined mathematical criterion that adapts to the spatiotemporal features of pixels in the consecutive frames. It is shown experimentally that the proposed filter preserves image structures and edges under motion while suppressing noise, and can therefore be used effectively for filtering image sequences. In addition, the Adaptive Weighted Averaging (AWA) filter using ANCF is particularly well suited for filtering sequences that contain segments with abruptly changing scene content due to, for example, rapid zooming and changes in the camera view.
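
The weighting idea admits a compact sketch. The snippet below is a minimal Python illustration of inverse-deviation weighted averaging over a fixed three-frame window; the function name, the weight formula, and the fixed window are assumptions for demonstration, not the authors' exact ANCF criterion.

```python
import numpy as np

def awa_filter(prev, curr, nxt, noise_var):
    """Illustrative weighted temporal averaging of three consecutive frames.

    A hypothetical simplification of the AWA/ANCF scheme: each neighboring
    frame is weighted by how closely its pixels match the current frame
    relative to the noise variance.
    """
    frames = np.stack([prev, curr, nxt]).astype(float)
    # Down-weight pixels that deviate from the current frame by more
    # than the noise level (likely motion rather than noise).
    diff2 = (frames - frames[1]) ** 2
    weights = 1.0 / (1.0 + diff2 / noise_var)
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)
```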

Keywords: Appropriate Number of Consecutive Frames, Adaptive Weighted Averaging, Motion Estimation, Noise Variance, Motion Compensation

380 Machine Vision System for Automatic Weeding Strategy in Oil Palm Plantation using Image Filtering Technique

Authors: Kamarul Hawari Ghazali, Mohd. Marzuki Mustafa, Aini Hussain

Abstract:

Machine vision is an application of computer vision that automates conventional work in industry, manufacturing, and other fields. Nowadays, the agriculture industry has embarked on research into implementing engineering technology in farming activities. One precision-farming activity that involves a machine vision system is automatic weeding. An automatic weeding strategy in oil palm plantations could minimize the volume of herbicide sprayed onto the fields. This paper discusses an automatic weeding strategy for oil palm plantations that uses a machine vision system for the detection and differential spraying of weeds. The implementation of the vision system involved the use of image processing techniques to analyze weed images in order to recognize and distinguish their types. An image filtering technique was used to process the images, together with a feature extraction method to classify the types of weed images. The results show that the image processing technique yields promising classification performance for implementation in a machine vision system for automated weeding.

Keywords: Machine vision, Automatic Weeding Strategy, filter, feature extraction

379 New Nonlinear Filtering Strategies for Eliminating Short and Long Tailed Noise in Images with Edge Preservation Properties

Authors: E. Srinivasan, D. Ebenezer

Abstract:

The midpoint filter is quite effective in recovering images corrupted by short-tailed (uniform) noise. However, it performs poorly in the presence of additive long-tailed (impulse) noise, and it does not preserve the edge structures of image signals. The median smoother discards outliers (impulses) effectively, but it fails to provide adequate smoothing for images corrupted with non-impulse noise. In this paper, two nonlinear image filtering techniques, New Filter I and New Filter II, are proposed based on a nonlinear high-pass filter algorithm. New Filter I is constructed from a midpoint filter, a high-pass filter, and a combiner; it suppresses uniform noise quite well. New Filter II is configured from an alpha-trimmed midpoint filter, a median smoother with a 3x3 window, the high-pass filter, and the combiner; it is robust against impulse noise and attenuates uniform noise satisfactorily. Both filters are shown to exhibit good response at image boundaries (edges). The proposed filters are evaluated on a test image, and the results obtained are included.
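
The structure of New Filter I can be sketched directly from this description. Below is a hedged Python reading of it: the midpoint filter is standard, but the paper's high-pass filter is nonlinear and its combiner is unspecified here, so a simple residual high-pass and a linear blend stand in for them.

```python
import numpy as np
from scipy import ndimage

def midpoint_filter(img, size=3):
    """Midpoint filter: average of the local min and max in each window."""
    lo = ndimage.minimum_filter(img, size=size)
    hi = ndimage.maximum_filter(img, size=size)
    return 0.5 * (lo + hi)

def new_filter_1(img, size=3, alpha=0.5):
    """Hypothetical reading of 'New Filter I': midpoint smoothing plus a
    scaled high-pass residual to restore edge detail."""
    smoothed = midpoint_filter(img.astype(float), size)
    highpass = img - smoothed           # simple linear high-pass residual
    return smoothed + alpha * highpass  # combiner: blend the two branches
```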

Keywords: Image filters, Midpoint filter, Nonlinear filters, Nonlinear highpass filter, Order-statistic filters, Rank-order filters.

378 Ozone Assisted Low Temperature Catalytic Benzene Oxidation over Al2O3, SiO2, AlOOH Supported Ni/Pd Catalysts

Authors: V. Georgiev

Abstract:

Catalytic oxidation of benzene assisted by ozone over alumina-, silica-, and boehmite-supported Ni/Pd catalysts was investigated at 353 K to assess the influence of the support on the reaction. Three bimetallic nanosized Ni/Pd samples with loadings of 4.7% Ni and 0.17% Pd supported on SiO2, AlOOH, and Al2O3 were synthesized by the extractive-pyrolytic method. The phase composition was characterized by XRD, and the surface area and pore size were estimated using the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods. At the beginning of the reaction, the catalysts were significantly deactivated due to the accumulation of intermediates on the catalyst surface, but after 60 minutes their activity became stable. The Ni/Pd/AlOOH catalyst showed the highest steady-state activity in comparison with the Ni/Pd/SiO2 and Ni/Pd/Al2O3 catalysts. The activity depends on the ozone decomposition potential of the catalysts, which generates oxidizing active species. The sample with the highest ozone decomposition ability, which correlated with the surface area of the support, oxidized benzene to the greatest extent.

Keywords: Ozone, catalysts, oxidation, Volatile organic compounds, VOCs.

377 High Accuracy ESPRIT-TLS Technique for Wind Turbine Fault Discrimination

Authors: Saad Chakkor, Mostafa Baghouri, Abderrahmane Hajraoui

Abstract:

The ESPRIT-TLS method appears to be a good choice for high-resolution fault detection in induction machines, with very high effectiveness in frequency and amplitude identification. However, it has a high computational complexity, which hinders its implementation in real-time fault diagnosis. To avoid this problem, a Fast-ESPRIT algorithm combining an IIR band-pass filtering technique, a decimation technique, and the original ESPRIT-TLS method is employed to extract frequencies and their magnitudes accurately from the wind-turbine stator current at a lower computational cost. The proposed algorithm addresses the wind turbine's need for online, fast, and proactive condition monitoring. This type of remote and periodic maintenance provides an acceptable machine lifetime, minimizes downtime, and maximizes productivity. The developed technique has been evaluated by computer simulations under many fault scenarios. The study results demonstrate that Fast-ESPRIT offers rapid, high-resolution harmonic recognition with minimal computation time and memory cost.
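
The pre-processing stage of Fast-ESPRIT (band-limiting plus decimation) can be illustrated with standard tools. The sketch below uses SciPy; the Butterworth design, filter order, and band edges are assumptions, since the abstract does not specify the paper's exact IIR design.

```python
import numpy as np
from scipy import signal

def bandpass_decimate(x, fs, f_lo, f_hi, q, order=4):
    """Narrow the analysis band with an IIR band-pass filter, then
    decimate so a high-resolution estimator (e.g., ESPRIT) works on
    far fewer samples. A generic sketch of the pre-processing stage."""
    sos = signal.butter(order, [f_lo, f_hi], btype="bandpass",
                        fs=fs, output="sos")
    y = signal.sosfiltfilt(sos, x)       # zero-phase IIR band-pass
    return signal.decimate(y, q)         # anti-aliased downsampling

# Example: isolate a hypothetical 90-110 Hz fault band in a 10 kHz current
fs = 10_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(fs)
y = bandpass_decimate(x, fs, 90, 110, q=10)
```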

Keywords: Spectral Estimation, ESPRIT-TLS, Real Time, Diagnosis, Wind Turbine Faults, Band-Pass Filtering, Decimation.

376 Optimum Conditions for Effective Decomposition of Toluene as VOC Gas by Pilot-Scale Regenerative Thermal Oxidizer

Authors: S. Iijima, K. Nakayama, D. Kuchar, M. Kubota, H. Matsuda

Abstract:

The Regenerative Thermal Oxidizer (RTO) is one of the best solutions for removing Volatile Organic Compounds (VOC) from industrial processes. In the RTO, VOC in the raw gas are usually decomposed at 950-1300 K, and the combustion heat of the VOC is recovered by regenerative heat exchangers charged with ceramic honeycombs. Optimizing the treatment of VOC reduces the fuel added for VOC decomposition and minimizes CO2 emissions as well as operating cost. In the present work, the thermal efficiency of the RTO was investigated experimentally in a pilot-scale RTO unit using toluene as a typical representative of VOC. It was found that radiative heat transfer was dominant in the preheating of the raw gas when the gas flow rate was relatively low. Further, the minimum heat-exchanger volume needed to achieve self-combustion of toluene, without additional heating of the RTO by fuel combustion, depended on both the flow rate of the raw gas and the concentration of toluene. The thermal efficiency, calculated from the fuel consumption and the ratio of decomposed toluene, was found to have a maximum value of 0.95 at a raw gas mass flow rate of 1810 kg·h⁻¹ and a honeycomb height of 1.5 m.

Keywords: Regenerative Heat Exchange, Self Combustion, Toluene, Volatile Organic Compounds.

375 LabVIEW-Based System for Fiber Links Events Detection

Authors: Bo Liu, Qingshan Kong, Weiqing Huang

Abstract:

With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. A wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time-domain reflectometer (OTDR) signal. A method based on the Gabor representation is then used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average, compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy. The maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
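
The wavelet-threshold stage can be sketched with PyWavelets. The snippet below is a generic soft-threshold denoiser using the universal threshold; the paper's combination with EMD and its specific wavelet and threshold choices are not specified, so those are assumptions here.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale from the finest detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```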

Keywords: Empirical mode decomposition (EMD), events detection, Gabor transform, optical time domain reflectometer (OTDR), wavelet threshold denoising.

374 Performance Analysis of a Discrete-time Geo^X/G/1 Queue with Single Working Vacation

Authors: Shan Gao, Zaiming Liu

Abstract:

This paper treats a discrete-time batch-arrival queue with a single working vacation. The main purpose of this paper is to present a performance analysis of this system using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distributions of the system length, as well as some performance measures at random epochs, using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and a stochastic decomposition formula for the PGF of the stationary system length at departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous-time counterpart. Finally, some numerical examples show the influence of the parameters on some crucial performance characteristics of the system.

Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.

373 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error

Authors: Qianhua He, Weili Zhou, Aiwu Chen

Abstract:

A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesized via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. Experimental results show that the proposed method outperforms conventional methods in both subjective and objective measures.
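
The OMP stage with a residue-error stopping rule maps directly onto scikit-learn's solver, where `tol` is the squared norm of the residual at which pursuit stops. In the sketch below the random dictionary and tolerance value are stand-ins; in the paper, the dictionary comes from K-SVD training and the tolerance from the estimated cross-correlation.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))          # over-complete dictionary
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
y = D[:, 3] * 2.0 + D[:, 40] * 1.5 + 0.01 * rng.standard_normal(256)

# tol: squared-norm residual threshold at which the pursuit stops.
coef = orthogonal_mp(D, y, tol=0.05)
reconstructed = D @ coef                      # denoised spectrum estimate
```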

Keywords: Speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit.

372 Random Projections for Dimensionality Reduction in ICA

Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi

Abstract:

In this paper we present a technique to speed up ICA based on reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original size d, which guarantees a narrow confidence interval for that estimator with high confidence level. The derived dimensionality reduction rate depends on a system control parameter β easily computed a priori from the observations alone. Extensive simulations have been performed on different sets of real-world signals. They show that the achievable dimensionality reduction is substantial, that it preserves the quality of the decomposition, and that it speeds up FastICA considerably. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits poor decomposition results if reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach for real-time applications.
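
The overall pipeline (random projection, then ICA in the reduced space) can be sketched with scikit-learn. The target dimension k below is an arbitrary assumption; the paper derives the admissible reduction rate from its control parameter β.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_samples, d, k = 2000, 512, 64
X = rng.standard_normal((n_samples, d))      # stand-in observations

# Johnson-Lindenstrauss style Gaussian projection into k dimensions,
# then FastICA runs on the much smaller data set.
X_small = GaussianRandomProjection(n_components=k,
                                   random_state=1).fit_transform(X)
sources = FastICA(n_components=8, random_state=1).fit_transform(X_small)
```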

Keywords: Independent Component Analysis, FastICA algorithm, Higher-order statistics, Johnson-Lindenstrauss lemma.

371 A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT

Authors: Say Wei Foo, Qi Dong

Abstract:

Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme that encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SVs) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction involves mainly the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. The results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
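
The construction of the embedding domain can be sketched as follows. Only the blockwise-SVD and DCT steps are shown; the normalization, the adaptive mask, and the actual bit embedding are omitted, and the 8x8 block size is an assumption.

```python
import numpy as np
from scipy.fft import dct

def leading_svs(image, block=8):
    """First singular value of each non-overlapping block."""
    h, w = image.shape
    return np.array([
        np.linalg.svd(image[i:i + block, j:j + block], compute_uv=False)[0]
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ])

img = np.random.default_rng(2).random((64, 64))
sv_block = leading_svs(img)
sv_dct = dct(sv_block, norm="ortho")    # SVD-DCT coefficients
# A bit would be embedded by ordering two chosen high-frequency
# coefficients, e.g. enforcing sv_dct[p] >= sv_dct[q] for bit 1.
```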

Keywords: Image watermarking, Image normalization, Singular value decomposition, Discrete cosine transform, Robustness.

370 Cellulolytic Microbial Activator Influence on Decomposition of Rubber Factory Waste Composting

Authors: Thaniya Kaosol, Sirinthrar Wandee

Abstract:

In this research, an aerobic composting method is studied to reuse organic waste from a rubber factory as soil fertilizer and to study the effect of a cellulolytic microbial activator (CMA) on the composting of rubber factory waste. The performance of the composting process was monitored as a function of the carbon and organic matter decomposition rate, temperature, and moisture content. The results indicate that rubber factory waste composts better with water hyacinth and sludge than alone. In addition, the CMA is more effective when mixed with rubber factory waste, water hyacinth, and sludge, since a good fertilizer is achieved. When CMA is added to rubber factory waste composted alone, the finished product does not meet fertilizer standards, especially in the C/N ratio. Finally, the finished products of composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative for solving the disposal problems of rubber factory waste, since the C/N ratio, pH, moisture content, temperature, and nutrients of the finished products are acceptable for agricultural use.

Keywords: composting, rubber waste, C/N ratio, sludge, cellulolytic microbial activator

369 A New Fast Skin Color Detection Technique

Authors: Tarek M. Mahmoud

Abstract:

Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, and people retrieval in databases and on the Internet. The major problem with such skin color detection algorithms is that they are time consuming and hence cannot be applied in real-time systems. To overcome this problem, we introduce a new fast technique for skin detection that can be applied in a real-time system. In this technique, instead of testing every image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for skipping is the high probability that neighbors of skin pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin pixels, which in turn dramatically reduces the CPU time required for the detection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel-skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
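
The skipping idea can be sketched in a few lines of Python. The Cb/Cr bounds below are the commonly cited 77-127 / 133-173 ranges, and the uniform sampling grid with nearest-neighbor label propagation is an assumption; the paper's thresholds and skipping pattern may differ.

```python
import numpy as np

def fast_skin_mask(ycbcr, step=2):
    """Pixel-skipping skin detection sketch on an HxWx3 YCbCr image:
    classify every `step`-th pixel with a rectangular chroma rule,
    then propagate labels to the skipped neighbors."""
    cb = ycbcr[::step, ::step, 1]
    cr = ycbcr[::step, ::step, 2]
    coarse = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    # Nearest-neighbor upsampling assigns each skipped pixel the label
    # of its sampled neighbor.
    mask = coarse.repeat(step, axis=0).repeat(step, axis=1)
    return mask[: ycbcr.shape[0], : ycbcr.shape[1]]
```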

Keywords: Adult image filtering, image resizing, skin color detection, YCbCr color space.

368 Method of Intelligent Fault Diagnosis of Preload Loss for Single Nut Ball Screws through the Sensed Vibration Signals

Authors: Yi-Cheng Huang, Yan-Chen Shin

Abstract:

This paper proposes a method for diagnosing ball screw preload loss through the Hilbert-Huang Transform (HHT) and a Multi-scale Entropy (MSE) process. The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were designed, manufactured, and tested experimentally. Signal patterns are discussed and revealed using Empirical Mode Decomposition (EMD) with the Hilbert spectrum. Different preload features are extracted and discriminated using HHT. The irregularity development of a ball screw with preload loss is determined and abstracted using MSE based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of ball screw health is also possible based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. This diagnostic method thus provides an effective and convenient prognostic tool for detecting preload loss.
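
The MSE step can be sketched from its textbook definition: coarse-grain the signal at each scale, then take the sample entropy of each coarse-grained series. The parameter choices below (m = 2, r = 0.2·σ, ten scales) are conventional defaults, not necessarily the paper's.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Naive sample entropy: -ln(A/B), where B and A count template
    matches of length m and m+1 within tolerance r (Chebyshev)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += int((d <= r).sum())
        return c
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m))
    return np.array(out)
```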

Keywords: Empirical Mode Decomposition, Hilbert-Huang Transform, Multi-scale Entropy, Preload Loss, Single-nut Ball Screw

367 A TFETI Domain Decomposition Solver for Von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening

Authors: Martin Cermak, Stanislav Sysala

Abstract:

In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to compute this nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs. We consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. This model is discretized by the implicit Euler method in time and by the finite element method in space. The resulting system of nonlinear equations has a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system, and the linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol packages developed in MATLAB.

Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.

366 Decomposition of Homeomorphism on Topological Spaces

Authors: Ahmet Z. Ozcelik, Serkan Narli

Abstract:

In this study, two new classes of generalized homeomorphisms are introduced, and it is shown that one of these classes has a group structure. Moreover, some properties of these two classes of homeomorphisms are obtained.

Keywords: Generalized closed set, homeomorphism, gsg-homeomorphism, sgs-homeomorphism.

365 An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition

Authors: Halil Snopce, Ilir Spahiu, Lavdrim Elmazi

Abstract:

Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. Here the so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run. This program calculates the generating function, from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound for the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well.

Keywords: generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.

364 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition

Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang

Abstract:

Shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very-large-scale integration (VLSI) implementations. It involves less area complexity than multipliers based on traditional decomposition methods. It is therefore more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
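
The core of the Karatsuba decomposition over GF(2)[x] is that the middle product needs no carries, so all recombination is XOR. A short Python sketch, with polynomials packed into integers (bit i = coefficient of x^i) and illustrative size parameters:

```python
def clmul(a, b):
    """Schoolbook carry-less multiply of GF(2)[x] polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, bits=256, base=32):
    """Karatsuba over GF(2)[x]: three half-size products instead of
    four; recursing gives the subquadratic complexity the paper
    exploits. Operand sizes here are illustrative assumptions."""
    if bits <= base:
        return clmul(a, b)
    h = bits // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = karatsuba_gf2(a0, b0, h, base)
    hi = karatsuba_gf2(a1, b1, h, base)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, h, base) ^ lo ^ hi
    return lo ^ (mid << h) ^ (hi << (2 * h))

assert karatsuba_gf2(0b1011, 0b110) == clmul(0b1011, 0b110)
```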

Keywords: Digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation.

363 Effect of Zeolite on the Decomposition Resistance of Organic Matter in Tropical Soils under Global Warming

Authors: Mai Thanh Truc, Masao Yoshida

Abstract:

Global temperature has increased by about 0.5 °C over the past century, and increasing temperature leads to a loss or decrease of soil organic matter (SOM). Soil organic matter in many tropical soils is less stable than that of temperate soils, and it is easily affected by climate change. Therefore, conservation of soil organic matter is an urgent issue nowadays. This paper presents the effect of different doses (5%, 15%) of Ca-type zeolite, in conjunction with organic manure, applied to soil samples from the Philippines, Paraguay, and Japan, on the decomposition resistance of soil organic matter under high temperature. The results showed that the C/N ratio of the soil remained constant or increased slightly. There was an increase in the percentage of humic acid (PQ) extracted with Na4P2O7, and a decrease in the percentage of free humus (fH) after incubation. A larger relative color intensity (RF) value and a lower color coefficient (Δlog K) value followed increasing zeolite rates, indicating higher degrees of humification. The aromatic condensation of humic acid (HA) increased after incubation, as indicated by the decrease in the H/C and O/C ratios of HA. These findings indicate that the use of zeolite could be beneficial for SOM conservation under global warming conditions.

Keywords: Global warming, Humic substances, Soil organic matter, Zeolite.

362 Thermogravimetry Study on Pyrolysis of Various Lignocellulosic Biomass for Potential Hydrogen Production

Authors: S.S. Abdullah, S. Yusup, M.M. Ahmad, A. Ramli, L. Ismail

Abstract:

This paper studies the decomposition behavior in a pyrolytic environment of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk, and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace flowed with 100 cc (STP) min⁻¹ of nitrogen (N2) as inert gas. The heating rate was set at 20 °C min⁻¹, and the temperature ranged from 50 to 900 °C. Hydrogen gas production during pyrolysis was observed using an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw, and rice husk were found to be sufficiently reactive in a pyrolytic environment of up to 900 °C, since pyrolysis of these biomasses starts at temperatures as low as 200 °C and the maximum weight loss is achieved at about 500 °C. Since there was not much difference in the cellulose, hemicellulose, and lignin fractions among oil palm shell, oil palm frond, paddy straw, and rice husk, the T-50 and R-50 values obtained are almost similar. H2 production also started rapidly at this temperature, due to the decomposition of biomass inside the TGA. Biomass with higher lignin content, such as oil palm shell, was found to sustain H2 production longer than materials with high cellulose and hemicellulose contents.

Keywords: biomass, decomposition, hydrogen, lignocellulosic, thermogravimetry

361 Automatic Generation Control of Multi-Area Electric Energy Systems Using Modified GA

Authors: Gayadhar Panda, Sidhartha Panda, C. Ardil

Abstract:

A modified Genetic Algorithm (GA)-based optimal selection of parameters for Automatic Generation Control (AGC) of multi-area electric energy systems is proposed in this paper. Simulations of a multi-area reheat thermal system, with and without consideration of nonlinearity in the form of governor dead band, followed by a 1% step load perturbation, are performed to exemplify the optimum parameter search. In the proposed method, a modified genetic algorithm employing one-point crossover with modification is used. Positional dependency of the crossing site helps to maintain the diversity of search points as well as the exploitation of already known optimum values. This gives a trade-off between exploration and exploitation of the search space, finding the global optimum in fewer generations. The proposed GA, along with the decomposition technique as developed, has been used to obtain the optimum megawatt-frequency control of multi-area electric energy systems. Time-domain simulations are conducted with trapezoidal integration along with the decomposition technique. The superiority of the proposed method over the existing one is verified by simulations and comparisons.

Keywords: Automatic Generation Control (AGC), Reheat, Proportional Integral (PI) controller, Dead Band, Genetic Algorithm (GA).

360 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis

Authors: Young-Seok Choi

Abstract:

This work proposes data-driven multiscale quantitative measures to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Motivated by the fact that real EEG recordings are nonlinear and non-stationary over different frequencies or scales, an approach more suitable than conventional single-scale tools is needed for analyzing EEG data. Here, we present a new framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, thus suggesting an effective metric as a prognostic tool.
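
The scale-wise entropy itself is simple once the IMFs are available. Below is a sketch of the Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of an amplitude histogram; the EMD step and the choices of q and bin count are assumptions here.

```python
import numpy as np

def tsallis_entropy(x, q=2.0, bins=64):
    """Tsallis entropy of a signal's amplitude histogram. In the paper
    this is applied per IMF obtained from EMD of the EEG."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Applied to each IMF, this yields one complexity value per oscillatory
# scale, e.g.: scores = [tsallis_entropy(imf) for imf in imfs]
```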

Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.

359 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modeled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. The MATLAB fmincon and MATLAB iirlpnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, a virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of Simulink, according to the relevant set of design data, i.e., 3 kHz modulating bandwidth, 172 kHz maximum modulation frequency, and 25 MHz sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, over the 3 kHz modulating bandwidth: 57 dBc SINAD (signal-to-noise and distortion ratio), 58 dB SFDR (spurious-free dynamic range), -80 dBc THD (total harmonic distortion), and a minimum resolution of 10 bits. These performance levels appear to be a great challenge within the class of oversampling ADC topologies, with a 2nd-order IIR (infinite impulse response) decimation filter.
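
As a quick sanity check on these figures, the standard rule of thumb ENOB = (SINAD - 1.76) / 6.02 relates the reported SINAD to effective bits; this conversion is general ADC practice, not part of the paper's derivation.

```python
# Standard SINAD-to-effective-bits conversion (general ADC practice).
def enob(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

print(enob(57.0))  # ~9.2 effective bits for the reported 57 dBc SINAD
```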

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.

358 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach, along with a conceptual model, to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), Discrete Wavelet Transform (DWT), and Mutual Information (MI) were employed as a hybrid pre-processing approach conjugated to a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein was capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI for use in the multi-station model. In the proposed feature selection method, some useless sub-series were omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.

Keywords: River stage-discharge process, LSSVM, discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), multi-station modeling.

357 Teager-Huang Analysis Applied to Sonar Target Recognition

Authors: J.-C. Cexus, A.O. Boudraa

Abstract:

In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for the analysis of non-stationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called intrinsic mode functions (IMFs) with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA, and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where the sixteen IRs used for training are noise free and the seven IRs used for the testing phase are corrupted with white Gaussian noise.
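
Both operators are short enough to sketch. Below are the discrete Teager-Kaiser operator and a DESA-style energy separation estimate of IA and IF; this is the textbook formulation, and the exact DESA variant used in the paper is assumed.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def esa(x, fs):
    """DESA-style energy separation sketch: recover instantaneous
    amplitude and frequency of an AM-FM component from Teager
    energies of the signal and its first difference."""
    psi_x = teager_energy(x)
    psi_d = teager_energy(np.diff(x))
    n = min(len(psi_x), len(psi_d))
    psi_x, psi_d = psi_x[:n], psi_d[:n]
    omega = np.arccos(np.clip(1 - psi_d / (2 * psi_x), -1, 1))
    ia = np.sqrt(np.abs(psi_x)) / np.maximum(np.sin(omega), 1e-12)
    return ia, omega * fs / (2 * np.pi)   # amplitude, frequency in Hz
```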

Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.

356 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying measurement noise variances is developed and applied to the estimation of the dynamical state and measurement data in a discrete linear dynamical system. At each time step, this algorithm estimates the noise variance in the measurements and the state of the system with a Kalman filter. An approximation is designed at each step separately, and consequently the sufficient statistics of the state and noise variances are computed with a fixed-point iteration of an adaptive Kalman filter. Different simulations show the influence of the measurement noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated with a Kalman filter without the Bayesian formulation. Then, the simulation is applied to an adaptive Kalman filter with the ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, and the true variance of the data obtained by the algorithm is compared across different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated by this approach. Finally, the performance and the important limitations of this algorithm in these simulations are explained.
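
A greatly simplified, one-dimensional stand-in for the idea is sketched below: instead of the paper's Bayesian fixed-point iteration, the measurement noise variance R is tracked from the innovation sequence (using E[nu^2] = P_pred + R), with the process noise, initial R, and smoothing factor as illustrative assumptions.

```python
import numpy as np

def adaptive_kf(zs, q=1e-4, r0=1.0, alpha=0.05):
    """1-D Kalman filter with innovation-based tracking of the
    measurement noise variance R (a simplified stand-in for the
    Bayesian fixed-point scheme)."""
    x, p, r = 0.0, 1.0, r0
    xs, rs = [], []
    for z in zs:
        p += q                        # predict (random-walk state)
        nu = z - x                    # innovation
        r = max((1 - alpha) * r + alpha * (nu * nu - p), 1e-8)
        k = p / (p + r)               # Kalman gain
        x += k * nu                   # update state estimate
        p *= (1 - k)                  # update error variance
        xs.append(x)
        rs.append(r)
    return np.array(xs), np.array(rs)

# Noisy constant signal: the tracked R should settle near 0.25.
z = 1.0 + 0.5 * np.random.default_rng(3).standard_normal(500)
est, r_track = adaptive_kf(z)
```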

Keywords: adaptive filtering, Bayesian approach, Kalman filtering, variance tracking

355 Low Cost IMU/GPS Integration Using Kalman Filtering for Land Vehicle Navigation Application

Authors: Othman Maklouf, Abdurazag Ghila, Ahmed Abdulla, Ameer Yousef

Abstract:

Land vehicle navigation system technology is a subject of great interest today. The Global Positioning System (GPS) is a common choice for positioning in such systems. However, GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation is the implementation of inertial sensors to determine the position and orientation of a vehicle. As such, inertial navigation has unbounded error growth, since the error accumulates at each step. Thus, in order to contain these errors, some form of external aiding is required. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors is now making it feasible to develop an Inertial Navigation System (INS) using an inertial measurement unit (IMU), in conjunction with GPS, to fulfill the demands of such systems. Typically, IMUs are very expensive systems; however, this INS uses low-cost components. Unfortunately, with low cost also comes low performance, which is the main reason for the inclusion of GPS and Kalman filtering in the system. The aim of this paper is to develop a GPS/MEMS-INS integrated system that is able to provide a navigation solution with accuracy levels appropriate for land vehicle navigation. The primary equipment used was a MEMS-based Crista IMU (from Cloud Cap Technology Inc.) and a Garmin GPS 18 PC (which is both receiver and antenna). The integration of GPS with INS is implemented using a Kalman filter in loosely coupled mode. In this integration mode, the INS error states, together with any navigation state (position, velocity, and attitude) and other unknown parameters of interest, are estimated using GPS measurements. All important navigation equations are presented along with discussion.

Keywords: GPS, IMU, Kalman Filter.

354 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications

Authors: Abdulnasir Hossen, Ulrich Heute

Abstract:

In 1990 [1], the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is important for completing the analysis of the W-DFT, since all results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
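
The Haar-subband route to the full-band DFT can be verified in a few lines: with sum and difference bands a and d, the half-length DFTs A and D recover the even/odd-sample DFTs, and a twiddle-factor combination network yields the length-N spectrum. Normalization conventions differ between [1] and [2]; the sketch below uses unnormalized sums and differences.

```python
import numpy as np

def haar_subband_dft(x):
    """Full-band DFT computed from Haar (sum/difference) subbands."""
    x = np.asarray(x, dtype=complex)
    n = len(x)                       # assumed even
    a = x[0::2] + x[1::2]            # lowpass (Haar sum) band
    d = x[0::2] - x[1::2]            # highpass (Haar difference) band
    A, D = np.fft.fft(a), np.fft.fft(d)
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    even, odd = 0.5 * (A + D), 0.5 * w * (A - D)
    return np.concatenate([even + odd, even - odd])

x = np.random.default_rng(4).standard_normal(16)
assert np.allclose(haar_subband_dft(x), np.fft.fft(x))
```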

Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.

353 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy

Abstract:

Sleep stage scoring is the process of classifying the stage of sleep the subject is in. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4). These stages, and the state of wakefulness, are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal, forming a vector of 54 features. This vector was reduced to 25 features using the gain-ratio method and fed into a regression-tree classifier. The regression trees were trained using 67% of the available records, selected by cross-validation; the remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.

Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.

352 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

Authors: Kazunori Nomura, Hiromichi Ogi, Masaumi Nakahara, Sou Watanabe, Atsuhiro Shibata

Abstract:

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research field for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuels from fast reactors and high-level liquid waste from reprocessing plants. In the nature of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the liquid waste vessel equipped in the facility, but some were not treated and remained in the experimental space as a kind of legacy waste, which must now be treated safely. Meanwhile, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities. This comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned, and treatment of the legacy waste beforehand is a necessary step for the decommissioning operation. Under these circumstances, we launched a collaborative research project called the STRAD project, which stands for Systematic Treatment of Radioactive liquid waste for Decommissioning, in order to develop treatment processes for the wastes of nuclear research facilities. In this project, decomposition methods for chemicals causing troublesome phenomena such as corrosion and explosion have been developed, and there is a prospect of decomposing them in the facility by simple methods. Solidification of aqueous or organic liquid wastes after decomposition has been studied by adding cement or coagulants. Furthermore, experimental tools of various materials were treated, with an effort to stabilize and compact them before packaging into waste containers, which is expected to decrease the number of solid-waste transportations and widen the operation space. Some achievements of these studies are shown in this paper. The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

Keywords: Chemical Processing Facility, medium- and long-term management plan of JAEA Facilities, STRAD project, treatment of radioactive waste.
