Search results for: ozone decomposition
233 Pathway to Reduce Industrial Energy Intensity for Energy Conservation at Chinese Provincial Level
Authors: Shengman Zhao, Yang Yu, Shenghui Cui
Abstract:
Using the logarithmic mean Divisia index (LMDI) decomposition technique, this paper analyzes the change in the industrial energy intensity of Fujian Province, China, based on data sets of added value and energy consumption for 35 selected industrial sub-sectors from 1999 to 2009. The change in industrial energy intensity is decomposed into an intensity effect and a structure effect. Results show that the industrial energy intensity of Fujian Province achieved a reduction of 51% over the past ten years. Structural change, a shift in the mix of industrial sub-sectors, made the overwhelming contribution to this reduction, while the impact of improved energy efficiency was relatively small. However, the aggregate industrial energy intensity was very sensitive to changes in both the energy intensity and the production share of energy-intensive sub-sectors, such as the production and supply of electric power, steam and hot water. A pathway to reduce industrial energy intensity for energy conservation in Fujian Province is proposed at the end.
Keywords: Decomposition analysis, energy intensity, Fujian Province, industry.
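The additive LMDI-I split into intensity and structure effects can be reproduced in a few lines; the sketch below applies the standard log-mean weighting to toy sub-sector data (the figures are illustrative, not the paper's).

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    with np.errstate(divide="ignore", invalid="ignore"):
        lm = (a - b) / (np.log(a) - np.log(b))
    return np.where(np.isclose(a, b), a, lm)

def lmdi_intensity(E0, Y0, E1, Y1):
    """Additive LMDI-I decomposition of the change in aggregate energy
    intensity into an intensity effect and a structure effect."""
    I0, I1 = E0 / Y0, E1 / Y1                  # sub-sector intensities
    S0, S1 = Y0 / Y0.sum(), Y1 / Y1.sum()      # production-share structure
    w = logmean(S1 * I1, S0 * I0)              # log-mean weights
    return np.sum(w * np.log(I1 / I0)), np.sum(w * np.log(S1 / S0))

# toy data: energy use E and added value Y for three sub-sectors, two years
E0, Y0 = np.array([50.0, 30.0, 20.0]), np.array([10.0, 15.0, 25.0])
E1, Y1 = np.array([45.0, 28.0, 24.0]), np.array([14.0, 20.0, 40.0])
d_int, d_str = lmdi_intensity(E0, Y0, E1, Y1)
# the two effects sum exactly to the change in aggregate intensity
assert np.isclose(d_int + d_str, E1.sum() / Y1.sum() - E0.sum() / Y0.sum())
```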
232 A New Method for Image Classification Based on Multi-level Neural Networks
Authors: Samy Sadek, Ayoub Al-Hamadi, Bernd Michaelis, Usama Sayed
Abstract:
In this paper, we propose a supervised method for color image classification based on a multilevel sigmoidal neural network (MSNN) model. Images are classified into five categories, namely "Car", "Building", "Mountain", "Farm" and "Coast", without any segmentation process. To verify the learning capabilities of the proposed method, we compare our MSNN model with the traditional sigmoidal neural network (SNN) model. The comparison shows that the MSNN model outperforms the traditional SNN model in both training run time and classification rate. Color moments and a multi-level wavelet decomposition technique are used to extract features from the images. The proposed method has been tested on a variety of real and synthetic images.
Keywords: Image classification, multi-level neural networks, feature extraction, wavelets decomposition.
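The feature-extraction front end (color moments plus multi-level wavelet decomposition) can be sketched as follows; the wavelet choice and level are illustrative assumptions, and the classifier itself is omitted.

```python
import numpy as np
import pywt

def color_moments(img):
    """First three color moments (mean, std, skewness) per channel;
    img is an H x W x 3 float array."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].ravel()
        mu, sd = ch.mean(), ch.std()
        feats += [mu, sd, ((ch - mu) ** 3).mean() / (sd ** 3 + 1e-12)]
    return np.array(feats)

def wavelet_energies(gray, wavelet="db2", level=3):
    """Energy of every sub-band of a multi-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    energies = [float(np.sum(coeffs[0] ** 2))]
    for band in coeffs[1:]:
        energies += [float(np.sum(c ** 2)) for c in band]
    return np.array(energies)

img = np.random.rand(64, 64, 3)          # stand-in for a real image
x = np.concatenate([color_moments(img), wavelet_energies(img.mean(axis=2))])
```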
231 Tracking Objects in Color Image Sequences: Application to Football Images
Authors: Mourad Moussa, Ali Douik, Hassani Messaoud
Abstract:
In this paper, we present a comparative study of two computer vision systems for object recognition and tracking. The two algorithms follow different approaches based on regions, i.e., sets of pixels that parameterize the objects in shot sequences. For image segmentation and object detection, the FCM technique is used, and the overlap between cluster distributions is minimized by choosing a suitable color space (other than RGB). The first technique takes into account a priori probabilities governing the computation of the various clusters used to track objects. A Parzen kernel method is described that identifies the players in each frame, and we also show the importance of searching for the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
Keywords: Image segmentation, objects tracking, Parzen window, singular value decomposition, target recognition.
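A minimal sketch of the region-matching step: Mahalanobis distances between region descriptors feed a Gaussian proximity matrix whose SVD yields correspondences honoring proximity and exclusion (a Scott and Longuet-Higgins style pairing; the width sigma is an illustrative parameter).

```python
import numpy as np

def match_regions(desc_a, desc_b, cov, sigma=2.0):
    """Correspondences between two frames' region descriptors: Gaussian
    proximity matrix from Mahalanobis distances, SVD with singular values
    replaced by ones, then pairs maximal in both row and column."""
    VI = np.linalg.inv(cov)
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, VI, diff)   # squared Mahalanobis
    G = np.exp(-d2 / (2 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G)
    k = min(G.shape)
    P = U[:, :k] @ Vt[:k, :]                           # singular values -> 1
    return [(i, int(np.argmax(P[i])))                  # proximity...
            for i in range(P.shape[0])
            if np.argmax(P[:, np.argmax(P[i])]) == i]  # ...and exclusion

rng = np.random.default_rng(0)
a, b = rng.normal(size=(6, 4)), rng.normal(size=(7, 4))
print(match_regions(a, b, np.eye(4)))
```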
230 Analysis of the EEG Signal for a Practical Biometric System
Authors: Muhammad Kamil Abdullah, Khazaimatol S Subari, Justin Leo Cheang Loong, Nurul Nadia Ahmad
Abstract:
This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because it varies from person to person and is practically impossible to replicate or steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed, in 5 separate sessions conducted over a course of two weeks. Features were extracted using wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, a neural network algorithm was used to classify the feature vectors. Results show that whether or not the subjects' eyes were open is insignificant for a 4-channel biometric system, which achieved a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if data are acquired with the subjects' eyes open. For a 2-channel system using only the C3 and C4 channels, a classification rate of 71% was achieved.
Keywords: Biometric, EEG, wavelet packet decomposition, neural networks.
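A sketch of the feature-extraction stage: wavelet packet decomposition of each channel, with log-energies of the terminal sub-bands as the feature vector (the wavelet, depth and log-energy feature are illustrative assumptions; the neural network classifier is omitted).

```python
import numpy as np
import pywt

def wpd_features(channel, wavelet="db4", level=4):
    """Log-energy of every terminal node of a wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=channel, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

eeg = np.random.randn(4, 1024)   # stand-in for C3, C4, P3, P4 recordings
x = np.concatenate([wpd_features(ch) for ch in eeg])   # one biometric vector
```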
229 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change as well, and a key frame can be extracted. The dominant blocks of every frame are computed, feature vectors are extracted from the dominant-block image of each frame, and the vectors are arranged in a feature matrix. Singular value decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.
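The rank-tracing idea can be sketched as follows: feature vectors of consecutive frames are stacked into a sliding-window matrix, and jumps in its numerical rank (from the singular values) flag candidate key frames; the window size and tolerance are illustrative.

```python
import numpy as np

def window_ranks(frame_features, win=5, tol=1e-6):
    """Numerical rank of a sliding window of frame feature vectors."""
    ranks = []
    for t in range(len(frame_features) - win + 1):
        M = np.stack(frame_features[t:t + win])        # win x d matrix
        s = np.linalg.svd(M, compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    return np.array(ranks)

feats = [np.random.rand(32) for _ in range(100)]       # stand-in features
ranks = window_ranks(feats)
# a jump in the traced rank marks new visual content: a candidate key frame
```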
228 1-D Modeling of Hydrate Decomposition in Porous Media
Authors: F. Esmaeilzadeh, M. E. Zeighami, J. Fathi
Abstract:
This paper describes a one-dimensional numerical model for natural gas production from the dissociation of methane hydrate in a hydrate-capped gas reservoir under depressurization and thermal stimulation. Some of the hydrate reservoirs discovered overlie a free-gas layer and are known as hydrate-capped gas reservoirs. These reservoirs are thought to be the easiest, and probably the first, type of hydrate reservoir to be produced. The mathematical equations describing this type of reservoir include a mass balance, a heat balance, and the kinetics of hydrate decomposition. These non-linear partial differential equations are solved using a fully implicit finite-difference scheme. The model accounts for convective and conductive heat transfer, the variation of formation porosity, the effect of using different equations of state such as PR and ER, and steam or hot water injection. In addition, the distributions of pressure, temperature, and gas, hydrate and water saturations in the reservoir are evaluated. It is shown that the gas production rate is a sensitive function of well pressure.
Keywords: Hydrate reservoir, numerical modeling, depressurization, thermal stimulation, gas generation.
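As a minimal illustration of the fully implicit finite-difference machinery, the sketch below advances a linear 1-D heat-conduction analogue by one backward-Euler step; the paper's coupled nonlinear mass/heat/kinetics system is far richer, so this is a stand-in only.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_step(T, alpha, dx, dt):
    """One backward-Euler finite-difference step for the 1-D heat equation
    dT/dt = alpha d2T/dx2 with fixed (Dirichlet) end temperatures."""
    n, r = len(T), alpha * dt / dx ** 2
    ab = np.zeros((3, n))                # banded (tridiagonal) system
    ab[0, 2:] = -r                       # super-diagonal
    ab[1, :] = 1 + 2 * r                 # main diagonal
    ab[2, :-2] = -r                      # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0           # boundary rows: ends held fixed
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, T)

T = np.linspace(400.0, 300.0, 51)        # initial temperature profile, K
for _ in range(100):
    T = implicit_step(T, alpha=1e-6, dx=0.01, dt=10.0)
```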
227 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal that helps in the understanding of the information the signal carries. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimate of the noise, i.e., prior knowledge of the noise; hence it is easy to implement and reduces the computational burden. The algorithm chooses a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. Performance is evaluated with the short-time objective intelligibility (STOI) measure, and the results are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
Keywords: Discrete wavelet transform, speech intelligibility, STOI, standard deviation.
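A minimal sketch of frame-wise DWT threshold denoising; the paper's variance-subtracted, per-frame decomposition-level selection is not reproduced here, so the fixed level and universal threshold below are stand-in assumptions.

```python
import numpy as np
import pywt

def dwt_denoise(frame, wavelet="db8", level=4):
    """Soft-threshold wavelet denoising of one speech frame; the noise
    scale is estimated per sub-band from the median absolute deviation."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    out = [coeffs[0]]
    for c in coeffs[1:]:
        sigma = np.median(np.abs(c)) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(frame)))  # universal threshold
        out.append(pywt.threshold(c, thr, mode="soft"))
    return pywt.waverec(out, wavelet)[:len(frame)]

frame = np.random.randn(512)             # stand-in noisy speech frame
clean = dwt_denoise(frame)
```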
226 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high additive white Gaussian noise (AWGN) environments using principles of computational auditory scene analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs the transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short training and test sequences as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the method still produces ~93% accuracy at 0 dB SNR.
Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.
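The atomic decomposition step is plain matching pursuit; a sketch over a generic unit-norm dictionary follows (the paper uses Gabor atoms learned by a sparse autoencoder, so the random dictionary here is only a placeholder).

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=50):
    """Greedy matching pursuit of x over a dictionary D with unit-norm
    columns; returns the sparse 'weight space' vector and the residual."""
    r, w = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                    # correlate residual with atoms
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        w[k] += corr[k]
        r -= corr[k] * D[:, k]            # remove its contribution
    return w, r

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 1024))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
w, r = matching_pursuit(rng.standard_normal(256), D)
```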
225 A Neural-Network-Based Fault Diagnosis Approach for Analog Circuits by Using Wavelet Transformation and Fractal Dimension as a Preprocessor
Abstract:
This paper presents a new method of analog fault diagnosis based on back-propagation neural networks (BPNNs), using wavelet decomposition and fractal dimension as preprocessors. The proposed method can detect and identify faulty components in an analog electronic circuit with tolerance by analyzing its impulse response. Preprocessing the impulse response with wavelet decomposition drastically de-noises the inputs to the neural network. The second preprocessing step, computation of the fractal dimension, extracts unique features, which are then fed to the neural network for classification. A comparison of our work with [1] and [6], which also employ back-propagation (BP) neural networks, reveals that our system requires a much smaller network and performs significantly better in fault diagnosis of analog circuits thanks to the proposed preprocessing techniques.
Keywords: Analog circuits, fault diagnosis, tolerance, wavelet transform, fractal dimension, box dimension.
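The second preprocessor can be illustrated with a box-counting estimate of the fractal (box) dimension of the impulse-response curve; the grid sizes below are illustrative choices.

```python
import numpy as np

def box_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Box-counting estimate of the fractal (box) dimension of a 1-D curve:
    the slope of log N(eps) against log(1/eps)."""
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    counts = []
    for s in scales:                      # s boxes per axis, size eps = 1/s
        n, step = 0, max(1, len(y) // s)
        for i in range(0, len(y), step):  # one column of boxes per step
            seg = y[i:i + step]
            lo = min(int(seg.min() * s), s - 1)
            hi = min(int(seg.max() * s), s - 1)
            n += hi - lo + 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return -slope

t = np.linspace(0, 1, 4096)
print(box_dimension(np.sin(40 * np.pi * t)))   # smooth curve: close to 1
```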
224 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model
Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li
Abstract:
Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-range correlation. Studying its traffic prediction methods can improve the resource utilization of a spatial information network and provide an important basis for its traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long-range correlation and detail components with short-range correlation, and a time-series hybrid prediction model based on wavelet decomposition is proposed to predict the traffic. First, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing and truncation characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component is modeled with an ARIMA model after the series is made stationary. Finally, the prediction results of the individual models are combined to obtain the prediction for the original data. The method considers not only the self-similarity of a spatial information network but also the short-range correlation caused by bursty network traffic. It is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time-series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
Keywords: Spatial Information Network, Traffic prediction, Wavelet decomposition, Time series model.
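A compact sketch of the hybrid scheme: wavelet-decompose the series, fit a simple time-series model per component, and sum the component forecasts. The wavelet, levels and ARIMA orders are illustrative assumptions, and statsmodels' ARIMA stands in for the paper's AR/MA/ARMA selection step.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def hybrid_forecast(traffic, steps=10, wavelet="db4", level=3):
    """Wavelet-decompose the series, fit one model per component
    (differenced ARIMA for the approximation, ARMA for the details),
    forecast each, and sum the component forecasts."""
    coeffs = pywt.wavedec(traffic, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):          # one time-domain component per band
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(sel, wavelet)[:len(traffic)])
    forecast = np.zeros(steps)
    for i, comp in enumerate(comps):
        order = (2, 1, 1) if i == 0 else (2, 0, 1)
        forecast += ARIMA(comp, order=order).fit().forecast(steps)
    return forecast

traffic = np.cumsum(np.random.randn(512)) + 100.0   # stand-in traffic trace
print(hybrid_forecast(traffic)[:3])
```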
223 The Study of the Desulfurization Process of Oil and Oil Products of "Zhanazhol" Oil Field Using the Approaches of Green Chemistry
Authors: Zhaksyntay K. Kairbekov, Zhannur K. Myltykbaeva, Nazym T. Smagulova, Dariya K. Kanseitova
Abstract:
In this paper we study the sonocatalytic oxidative desulfurization of oil and of the diesel fraction from the "Zhanazhol" oil deposits. We establish that the combined action of an ultrasonic field and an oxidant (ozone-air mixture) on the oil in the presence of a catalyst is a potentially very effective method for the desulfurization of oil and oil products. This method increases the degree of desulfurization of the oil by 62%.
Keywords: Desulfurization, diesel, oil, oil products, sonication.
222 Optimum Conditions for Effective Decomposition of Toluene as VOC Gas by Pilot-Scale Regenerative Thermal Oxidizer
Authors: S. Iijima, K. Nakayama, D. Kuchar, M. Kubota, H. Matsuda
Abstract:
A regenerative thermal oxidizer (RTO) is one of the best solutions for the removal of volatile organic compounds (VOC) from industrial processes. In an RTO, the VOC in a raw gas are usually decomposed at 950-1300 K, and the combustion heat of the VOC is recovered by regenerative heat exchangers charged with ceramic honeycombs. Optimizing the treatment of VOC reduces the fuel added for VOC decomposition and minimizes CO2 emission and operating cost as well. In the present work, the thermal efficiency of the RTO was investigated experimentally in a pilot-scale RTO unit using toluene as a typical representative VOC. It was recognized that radiative heat transfer was dominant in the preheating of the raw gas when the gas flow rate was relatively low. Further, the minimum heat exchanger volume needed to achieve self-combustion of toluene, without additional heating of the RTO by fuel combustion, was found to depend on both the flow rate of the raw gas and the toluene concentration. The thermal efficiency, calculated from the fuel consumption and the ratio of decomposed toluene, reached a maximum value of 0.95 at a raw gas mass flow rate of 1810 kg·h⁻¹ and a honeycomb height of 1.5 m.
Keywords: Regenerative heat exchange, self combustion, toluene, volatile organic compounds.
221 LabVIEW-Based System for Fiber Link Event Detection
Authors: Bo Liu, Qingshan Kong, Weiqing Huang
Abstract:
With the rapid development of modern communications, diagnosing fiber-optic quality and faults in real time has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with empirical mode decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal, and a method based on the Gabor representation is then used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy, and its maximum detectable fiber length is 65 km.
Keywords: Empirical mode decomposition (EMD), events detection, Gabor transform, optical time domain reflectometer (OTDR), wavelet threshold denoising.
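A sketch of the denoising stage combining EMD (via the PyEMD package) with wavelet thresholding; which IMFs to threshold, and the wavelet and level, are illustrative assumptions, and the Gabor event detector is omitted.

```python
import numpy as np
import pywt
from PyEMD import EMD          # pip package "EMD-signal"

def denoise_otdr(trace, noisy_imfs=2, wavelet="sym6", level=3):
    """EMD the OTDR trace, wavelet-threshold the first (noisiest) IMFs,
    and recombine all components."""
    imfs = EMD().emd(trace)
    out = np.zeros_like(trace)
    for i, imf in enumerate(imfs):
        if i < noisy_imfs:
            c = pywt.wavedec(imf, wavelet, level=level)
            sigma = np.median(np.abs(c[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(imf)))
            c = [c[0]] + [pywt.threshold(d, thr, mode="soft") for d in c[1:]]
            imf = pywt.waverec(c, wavelet)[:len(trace)]
        out += imf
    return out

trace = np.exp(-np.linspace(0, 3, 4096)) + 0.05 * np.random.randn(4096)
clean = denoise_otdr(trace)    # stand-in decaying backscatter trace
```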
220 Performance Analysis of a Discrete-time GeoX/G/1 Queue with Single Working Vacation
Authors: Shan Gao, Zaiming Liu
Abstract:
This paper treats a discrete-time batch-arrival queue with a single working vacation. The main purpose is to present a performance analysis of this system using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distributions of the system length, as well as some performance measures at random epochs, using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and a stochastic decomposition formula for the PGF of the stationary system length at departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous-time counterpart. Finally, numerical examples show the influence of the parameters on some crucial performance characteristics of the system.
Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.
219 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of clean speech power spectra is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adapted according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesized via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. Experimental results show that the proposed method outperforms conventional methods in terms of both subjective and objective measures.
Keywords: Speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit.
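A runnable approximation of the pipeline using scikit-learn: MiniBatchDictionaryLearning stands in for K-SVD, and OMP's residual tolerance stands in for the adapted stopping residue error (its value below is a placeholder, not the paper's estimator).

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

# Learn an over-complete dictionary from clean speech power spectra
# (rows of X_train), then reconstruct one noisy spectrum frame with OMP.
X_train = np.abs(np.random.randn(500, 129))          # stand-in spectra
D = MiniBatchDictionaryLearning(n_components=256,    # over-complete: 256 > 129
                                random_state=0).fit(X_train).components_.T

y = np.abs(np.random.randn(129))                     # noisy spectrum frame
stop_err = 0.1 * np.sum(y ** 2)    # placeholder for the adapted residue error
omp = OrthogonalMatchingPursuit(tol=stop_err)        # stop at that residue
omp.fit(D, y)
x_hat = D @ omp.coef_                                # reconstructed spectrum
```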
218 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA, based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original size d, which guarantees a narrow confidence interval for this estimator with high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori from the observations alone. Extensive simulations have been carried out on different sets of real-world signals. They show that the achievable dimensionality reduction is in fact substantial, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results when reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real-time applications.
Keywords: Independent component analysis, FastICA algorithm, higher-order statistics, Johnson-Lindenstrauss lemma.
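A sketch of the idea, reducing the observed dimension d to k with a Gaussian Johnson-Lindenstrauss projection before FastICA; choosing k (the rate ρ) from the parameter β is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def reduce_then_ica(X, k, n_sources, seed=0):
    """Project the d observed mixtures down to k with a Gaussian
    Johnson-Lindenstrauss matrix, then run FastICA on the reduced data."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return FastICA(n_components=n_sources,
                   random_state=seed).fit_transform(X @ R)

X = np.random.randn(5000, 64)     # stand-in: 5000 samples of 64 mixtures
S = reduce_then_ica(X, k=16, n_sources=4)
```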
217 A Normalization-Based Robust Image Watermarking Scheme Using SVD and DCT
Authors: Say Wei Foo, Qi Dong
Abstract:
Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme encompassing singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks, and SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction is essentially the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation caused by the embedded watermark is visually transparent and that the proposed scheme is robust against various image processing operations and geometric attacks.
Keywords: Image watermarking, image normalization, singular value decomposition, discrete cosine transform, robustness.
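The embedding rule can be sketched as follows: a DCT of the SV block, then forcing an order relation between two selected coefficients to encode the bit. The coefficient indices and margin are illustrative; the pseudo-random selection, frequency mask and normalization steps are omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bit(sv_block, bit, i=3, j=5, margin=2.0):
    """Embed one bit in a block of first singular values by forcing an
    order relation between two DCT coefficients (indices illustrative)."""
    c = dct(sv_block, norm="ortho")
    lo, hi = min(c[i], c[j]), max(c[i], c[j])
    c[i], c[j] = (hi + margin, lo - margin) if bit else (lo - margin, hi + margin)
    return idct(c, norm="ortho")

def extract_bit(sv_block, i=3, j=5):
    """Blind extraction: recover the bit from the coefficient order."""
    c = dct(sv_block, norm="ortho")
    return int(c[i] > c[j])

sv = np.sort(np.random.rand(16))[::-1].copy() * 100  # stand-in SV block
print(extract_bit(embed_bit(sv, 1)), extract_bit(embed_bit(sv, 0)))  # 1 0
```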
216 Cellulolytic Microbial Activator Influence on Decomposition of Rubber Factory Waste Composting
Authors: Thaniya Kaosol, Sirinthrar Wandee
Abstract:
In this research, an aerobic composting method is studied to reuse organic waste from a rubber factory as soil fertilizer and to study the effect of a cellulolytic microbial activator (CMA) on the composting of rubber factory waste. The performance of the composting process was monitored as a function of carbon and organic matter decomposition rate, temperature and moisture content. The results indicate that the rubber factory waste composts better with water hyacinth and sludge than alone. In addition, the CMA is more effective when mixed with the rubber factory waste, water hyacinth and sludge, since a good fertilizer is then achieved. When CMA is added to rubber factory waste composted alone, the finished product does not meet the fertilizer standard, especially in its C/N ratio. Finally, the finished products from composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative for solving the disposal problems of rubber factory waste, since their C/N ratio, pH, moisture content, temperature, and nutrients are acceptable for agricultural use.
Keywords: Composting, rubber waste, C/N ratio, sludge, cellulolytic microbial activator.
215 Method of Intelligent Fault Diagnosis of Preload Loss for Single Nut Ball Screws through the Sensed Vibration Signals
Authors: Yi-Cheng Huang, Yan-Chen Shin
Abstract:
This paper proposes a method of diagnosing ball screw preload loss through the Hilbert-Huang transform (HHT) and multiscale entropy (MSE). The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were predesigned, manufactured, and tested experimentally. Signal patterns are discussed and revealed using empirical mode decomposition (EMD) with the Hilbert spectrum, and different preload features are extracted and discriminated using the HHT. The irregularity that develops in a ball screw with preload loss is determined and abstracted using MSE, based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of ball screw health is also possible, based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. This diagnostic method thus serves as an effective and convenient prognostic tool for detecting preload loss.
Keywords: Empirical mode decomposition, Hilbert-Huang transform, multiscale entropy, preload loss, single-nut ball screw.
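A sketch of the multiscale entropy computation on a vibration signal: the classical coarse-graining plus sample-entropy recipe, with illustrative parameters.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences close for m points stay close for m + 1."""
    r = r_frac * np.std(x)
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=-1)
        return np.sum(d <= r) - len(t)        # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """Sample entropy of coarse-grained copies of the signal."""
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1)))
    return np.array(out)

vib = np.random.randn(1000)                   # stand-in vibration signal
print(multiscale_entropy(vib, max_scale=5))
```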
214 Gas Sensing Properties of SnO2 Thin Films Modified by Ag Nanoclusters Synthesized by SILD Method
Authors: G. Korotcenkov, B. K. Cho, L. B. Gulina, V. P. Tolstoy
Abstract:
The effect of modifying the SnO2 surface with Ag nanoclusters, synthesized by the SILD method, on the operating characteristics of thin-film gas sensors was studied, and models for the promotional role of the Ag additives are discussed. It was found that this approach can improve both the sensitivity and the response rate of SnO2-based gas sensors to CO and H2. At the same time, the presence of the Ag clusters on the SnO2 surface depressed the sensor response to ozone.
Keywords: Ag nanoparticles, deposition, characterization, gas sensors, optimization.
213 A TFETI Domain Decomposition Solver for von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening
Authors: Martin Cermak, Stanislav Sysala
Abstract:
In this paper we present an efficient parallel implementation for elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve the problem in parallel on supercomputers, decreasing the solution time and making problems with millions of DOFs tractable. We consider an associated elastoplastic model with the von Mises plasticity criterion and a combination of linear isotropic and kinematic hardening laws. The model is discretized by the implicit Euler method in time and by the finite element method in space, leading to a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system, and the linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol packages developed in MATLAB.
Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.
212 Decomposition of Homeomorphism on Topological Spaces
Authors: Ahmet Z. Ozcelik, Serkan Narli
Abstract:
In this study, two new classes of generalized homeomorphisms are introduced, and one of these classes is shown to have a group structure. Moreover, some properties of these two kinds of homeomorphisms are obtained.
Keywords: Generalized closed set, homeomorphism, gsg-homeomorphism, sgs-homeomorphism.
211 An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition
Authors: Halil Snopce, Ilir Spahiu, Lavdrim Elmazi
Abstract:
Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. The so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relations. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run; it calculates the generating function from which the number of solutions to the system of Diophantine equalities can be found, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well.
Keywords: Generating function, lattice points in polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
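The count that the generating-function (Omega calculus) route delivers in closed form can be checked by brute force on small instances; a sketch with an illustrative system of inequalities follows.

```python
from itertools import product

def count_lattice_points(A, b, bound):
    """Brute-force count of integer points 0 <= x_i <= bound satisfying
    A x <= b -- the quantity the generating function encodes exactly."""
    dim = len(A[0])
    return sum(
        all(sum(a * xi for a, xi in zip(row, x)) <= bi
            for row, bi in zip(A, b))
        for x in product(range(bound + 1), repeat=dim)
    )

# illustrative system: x1 + 2 x2 <= 4 and x2 - x1 <= 1, with 0 <= xi <= 4
print(count_lattice_points([[1, 2], [-1, 1]], [4, 1], 4))
```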
210 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition
Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang
Abstract:
Shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very large scale integration (VLSI) implementations. It involves less area complexity than multipliers based on traditional decomposition methods and is therefore more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
Keywords: Digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation.
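The Karatsuba idea that underlies the decomposition, written over GF(2)[x]: three half-size carry-less products replace four, with XOR as addition (a software sketch with bit-packed operands; the paper's systolic SPB architecture is hardware).

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)[x]) schoolbook product of bit-packed polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return r

def karatsuba_gf2(a, b, n):
    """Karatsuba over GF(2)[x]: split n-bit operands in half and use three
    half-size products instead of four (addition is XOR)."""
    if n <= 8:
        return gf2_mul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1, b0, b1 = a & mask, a >> h, b & mask, b >> h
    p0 = karatsuba_gf2(a0, b0, h)
    p2 = karatsuba_gf2(a1, b1, n - h)
    p1 = karatsuba_gf2(a0 ^ a1, b0 ^ b1, n - h)      # middle term
    return p0 ^ ((p0 ^ p1 ^ p2) << h) ^ (p2 << (2 * h))

import random
a, b = random.getrandbits(64), random.getrandbits(64)
assert karatsuba_gf2(a, b, 64) == gf2_mul(a, b)      # sanity check
```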
209 Effect of Zeolite on the Decomposition Resistance of Organic Matter in Tropical Soils under Global Warming
Authors: Mai Thanh Truc, Masao Yoshida
Abstract:
The global temperature has increased by about 0.5 °C over the past century, and increasing temperature leads to a loss or decrease of soil organic matter (SOM). Soil organic matter in many tropical soils is less stable than that of temperate soils and will be more easily affected by climate change; conservation of soil organic matter is therefore an urgent issue. This paper presents the effect of different doses (5%, 15%) of Ca-type zeolite, applied in conjunction with organic manure to soil samples from the Philippines, Paraguay and Japan, on the decomposition resistance of soil organic matter under high temperature. Results showed that the C/N ratio of the soil remained constant or increased slightly. The percentage of humic acid (PQ) extracted with Na4P2O7 increased, while the percentage of free humus (fH) decreased after incubation. Larger relative color intensity (RF) values and lower color coefficient (ΔlogK) values with increasing zeolite rates indicate higher degrees of humification, as does the increase in the aromatic condensation of humic acid (HA) after incubation, indicated by the decrease in the H/C and O/C ratios of HA. These findings indicate that the use of zeolite could be beneficial for SOM conservation under global warming conditions.
Keywords: Global warming, humic substances, soil organic matter, zeolite.
208 Thermogravimetry Study on Pyrolysis of Various Lignocellulosic Biomass for Potential Hydrogen Production
Authors: S.S. Abdullah, S. Yusup, M.M. Ahmad, A. Ramli, L. Ismail
Abstract:
This paper studies the decomposition behavior, in a pyrolytic environment, of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace flowed with 100 cm³ (STP) min⁻¹ of nitrogen (N2) as the inert gas. The heating rate was set at 20 °C min⁻¹ and the temperature ranged from 50 to 900 °C. Hydrogen gas production during pyrolysis was monitored with an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be reactive in a pyrolytic environment of up to 900 °C, since their pyrolysis starts at temperatures as low as 200 °C and the maximum weight loss is reached at about 500 °C. Since there was not much difference in the cellulose, hemicellulose and lignin fractions among oil palm shell, oil palm frond, paddy straw and rice husk, the T-50 and R-50 values obtained are almost identical. H2 production started rapidly in this temperature range as well, owing to the decomposition of the biomass inside the TGA. Biomass with a higher lignin content, such as oil palm shell, was found to sustain H2 production longer than materials with high cellulose and hemicellulose contents.
Keywords: Biomass, decomposition, hydrogen, lignocellulosic, thermogravimetry.
207 Automatic Generation Control of Multi-Area Electric Energy Systems Using Modified GA
Authors: Gayadhar Panda, Sidhartha Panda, C. Ardil
Abstract:
A modified genetic algorithm (GA) based optimal selection of parameters for automatic generation control (AGC) of multi-area electric energy systems is proposed in this paper. Simulations on a multi-area reheat thermal system, with and without a nonlinearity such as governor dead band, followed by a 1% step load perturbation, are performed to exemplify the optimum parameter search. In the proposed method, a modified genetic algorithm employing one-point crossover with modification is used. Making the crossing site positionally dependent helps to maintain the diversity of search points as well as the exploitation of already known optimum values; this trades off exploration against exploitation of the search space so as to find the global optimum in fewer generations. The proposed GA, along with the decomposition technique developed, has been used to obtain optimum megawatt-frequency control of multi-area electric energy systems. Time-domain simulations are conducted using trapezoidal integration together with the decomposition technique. The superiority of the proposed method over an existing one is verified by simulations and comparisons.
Keywords: Automatic Generation Control (AGC), reheat, Proportional Integral (PI) controller, dead band, Genetic Algorithm (GA).
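A heavily simplified sketch of a one-point crossover whose crossing site depends on the generation count; the linear shrinking window below is an illustrative assumption, not the paper's exact modification.

```python
import random

def modified_one_point_crossover(p1, p2, gen, max_gen):
    """One-point crossover with a generation-dependent crossing site: the
    feasible site window shrinks as generations progress, so late offspring
    stay close to one parent (exploitation) while early offspring mix
    freely (exploration). Illustrative schedule only."""
    n = len(p1)
    hi = max(1, min(n - 1, int(n * (1.0 - 0.8 * gen / max_gen))))
    site = random.randint(1, hi)
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

mom, dad = [0.1, 0.4, 0.9, 0.2, 0.7], [0.3, 0.8, 0.5, 0.6, 0.1]
print(modified_one_point_crossover(mom, dad, gen=40, max_gen=50))
```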
206 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis
Authors: Young-Seok Choi
Abstract:
This work proposes a data-driven, multiscale quantitative measure to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Because real EEG recordings are nonlinear and non-stationary across frequencies and scales, an approach more suitable than conventional single-scale tools is needed for analyzing EEG data. Here, we present a framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings from a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of the Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n = 9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the multiscale Tsallis entropy leads to better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective metric for a prognostic tool.
Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.
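A sketch of the measure: EMD splits the EEG into IMFs (one per oscillatory scale), and the Tsallis entropy of each IMF's amplitude distribution is computed (PyEMD and the histogram binning are illustrative implementation choices).

```python
import numpy as np
from PyEMD import EMD            # pip package "EMD-signal"

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multiscale_tsallis(eeg, q=2.0, bins=64):
    """One Tsallis entropy per intrinsic mode function (oscillatory scale)."""
    imfs = EMD().emd(eeg)
    ents = []
    for imf in imfs:
        hist, _ = np.histogram(imf, bins=bins)
        ents.append(tsallis_entropy(hist / hist.sum(), q))
    return np.array(ents)

eeg = np.random.randn(4096)      # stand-in EEG epoch
print(multiscale_tsallis(eeg))
```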
205 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach
Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh
Abstract:
This study presents a hybrid pre-processing approach, together with a conceptual modeling strategy, to enhance the accuracy of river discharge prediction. To achieve this goal, the ensemble empirical mode decomposition algorithm (EEMD), the discrete wavelet transform (DWT) and mutual information (MI) are employed as a hybrid pre-processing approach coupled with a least squares support vector machine (LSSVM). A conceptual strategy, namely a multi-station model, is developed to forecast the Souris River discharge more accurately; this strategy is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD are coupled, and feature selection is performed on the decomposed sub-series using MI before they are fed to the multi-station model. In the proposed feature selection method, uninformative sub-series are omitted to achieve better performance. Results confirm the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
Keywords: River stage-discharge process, LSSVM, discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), multi-station modeling.
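The MI-based feature selection over decomposed sub-series can be sketched with scikit-learn; the kept fraction is an illustrative parameter, and the LSSVM stage is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_subseries(subseries, target, keep=0.5):
    """Rank decomposed sub-series by mutual information with the target
    discharge and keep the most informative fraction."""
    X = np.column_stack(subseries)
    mi = mutual_info_regression(X, target, random_state=0)
    order = np.argsort(mi)[::-1]
    kept = order[:max(1, int(keep * len(order)))]
    return [subseries[i] for i in kept], mi

subs = [np.random.randn(300) for _ in range(8)]    # stand-in EEMD/DWT output
target = subs[0] + 0.5 * subs[3] + 0.1 * np.random.randn(300)
kept, mi = select_subseries(subs, target)
```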
204 Teager-Huang Analysis Applied to Sonar Target Recognition
Authors: J.-C. Cexus, A.O. Boudraa
Abstract:
In this paper, a new approach to target recognition based on the empirical mode decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis, an approach well suited to the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called intrinsic mode functions (IMFs) with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA); each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO), and the IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the energy separation algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets, and the manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.
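A sketch of the two operators on which the analysis rests: the discrete Teager-Kaiser energy operator, and a DESA-1a style energy separation into instantaneous amplitude and frequency (the numerical guards are illustrative).

```python
import numpy as np

def teo(x):
    """Discrete Teager-Kaiser energy operator:
    Psi[x](n) = x(n)^2 - x(n-1) x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def esa(x):
    """DESA-1a style energy separation: instantaneous amplitude and
    frequency (rad/sample) from TEO of the signal and of its difference."""
    psi_x = teo(x)
    psi_dx = teo(np.diff(x))
    psi_x = psi_x[:len(psi_dx)]
    ratio = np.clip(psi_dx / (psi_x + 1e-12), 0.0, 4.0)
    omega = np.arccos(1.0 - ratio / 2.0)
    amp = np.sqrt(np.abs(psi_x) / (np.sin(omega) ** 2 + 1e-12))
    return amp, omega

n = np.arange(1000)
x = (1 + 0.3 * np.sin(2 * np.pi * 0.003 * n)) * np.sin(2 * np.pi * 0.05 * n)
amp, omega = esa(x)              # omega hovers near 2*pi*0.05 ~ 0.314
```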