Search results for: Wavelet entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 560

50 A Two-Stage Multi-Agent System to Predict the Unsmoothed Monthly Sunspot Numbers

Authors: Mak Kaboudan

Abstract:

A multi-agent system is developed here to predict monthly details of the upcoming peak of the 24th solar magnetic cycle. While studies typically predict the timing and magnitude of cycle peaks using annual data, this one utilizes the unsmoothed monthly sunspot number instead. Monthly numbers display more pronounced fluctuations during periods of strong solar magnetic activity than annual sunspot numbers. Because strong magnetic activity may cause significant economic damage, predicting monthly variations should provide different and perhaps helpful information for decision-making purposes. The multi-agent system developed here operates in two stages. In the first, it produces twelve predictions of the monthly numbers. In the second, it uses those predictions to deliver a final forecast. Acting as expert agents, genetic programming and neural networks produce the twelve fits and forecasts as well as the final forecast. According to the results obtained, the next peak is predicted to be 156 and is expected to occur in October 2011, with an average of 136 for that year.
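As a rough illustration of the two-stage idea (not the authors' system), the sketch below combines twelve stage-one agent forecasts into a final forecast with a least-squares meta-model; all names and data are placeholders.

```python
import numpy as np

def two_stage_forecast(stage1_fits, y, stage1_next):
    """Stage 2: combine twelve stage-one monthly predictions into a final
    forecast with a least-squares meta-model.

    stage1_fits : (n_months, 12) in-sample fits of the twelve agents
    y           : (n_months,) observed monthly sunspot numbers
    stage1_next : (12,) the agents' predictions for the next month
    """
    X = np.column_stack([stage1_fits, np.ones(len(y))])   # add a bias term
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.append(stage1_next, 1.0) @ coef)

# Toy usage with random stand-ins for the twelve agents' outputs
rng = np.random.default_rng(0)
y = rng.uniform(0, 150, 120)                        # ten years of monthly data
fits = y[:, None] + rng.normal(0, 10, (120, 12))    # twelve noisy agent fits
print(two_stage_forecast(fits, y, fits[-1]))
```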

Keywords: Computational techniques, discrete wavelet transformations, solar cycle prediction, sunspot numbers.

49 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices

Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues

Abstract:

This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value, computed from three different keys. The decoding/decompression stage uses a search method, the Quick Sequential Search (QSS) Decoding Algorithm presented in this research, which relies on sequential search to recover the exact coefficients. The decoded data are then saved in an auxiliary array. The basic idea behind the auxiliary array is to store all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results show that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
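A minimal sketch of the encode/search-decode idea, assuming a bounded coefficient range and illustrative keys chosen so the triple-to-value mapping stays invertible (the paper derives its keys differently, and the auxiliary-array machinery is omitted):

```python
import numpy as np
from itertools import product

LIMIT = 4                       # assumed bound on high-frequency coefficients
R = 2 * LIMIT + 1               # size of the coefficient alphabet
KEYS = np.array([1, R, R * R])  # keys chosen so each triple maps uniquely

def encode(coeffs):
    """Collapse each (a, b, c) triple into a single value via three keys."""
    triples = coeffs.reshape(-1, 3) + LIMIT   # shift into [0, R)
    return triples @ KEYS

def qss_decode(values):
    """Recover exact triples by sequentially searching the candidate space."""
    out = []
    for v in values:
        for a, b, c in product(range(R), repeat=3):   # sequential search
            if a * KEYS[0] + b * KEYS[1] + c * KEYS[2] == v:
                out.extend((a - LIMIT, b - LIMIT, c - LIMIT))
                break
    return np.array(out)

data = np.array([3, -1, 2, 0, 4, -2])
print(qss_decode(encode(data)))   # -> [ 3 -1  2  0  4 -2]
```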

Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.

48 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of this work is to build a model, based on tissue characterization, that can discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), six distinct sets of texture features are extracted: first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet, for a total of 270 texture features. By analyzing multiple phases, we show that the injection of liquid changes the most relevant features in each region. Our results demonstrate that, for detecting HCC tumors, phase 3 is the most informative for most of the features passed to the classification algorithm. The detection rate between the pathological and healthy classes, according to our method, is obtained with first-order histogram parameters, with an accuracy of 85% in phase 1, 95% in phase 2, and 95% in phase 3.

Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.

47 Genetic-Based Multi Resolution Noisy Color Image Segmentation

Authors: Raghad Jawad Ahmed

Abstract:

Segmentation of a color image composed of different kinds of regions can be a hard problem, namely computing exact texture fields and deciding the optimum number of segmentation areas when the image contains similar and/or non-stationary texture fields. A novel neighborhood-based segmentation approach is proposed. A genetic algorithm is used in the proposed segment-pass optimization process. In this pass, an energy function defined on Markov random fields is minimized. In this paper we use an adaptive threshold estimation method for image thresholding in the wavelet domain, based on generalized Gaussian distribution (GGD) modeling of subband coefficients. This method, called NormalShrink, is computationally efficient and adaptive because the parameters required for estimating the threshold depend on the subband data energy used in the pre-stage of segmentation. A quadtree is employed to implement the multi-resolution framework, which enables the use of different strategies at different resolution levels and hence accelerates the computation. The experimental results using the proposed segmentation approach are very encouraging.
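A minimal sketch of the NormalShrink rule in its commonly cited form, using PyWavelets; the noise estimate, the beta factor, and the soft thresholding below follow the standard formulation, not the authors' code:

```python
import numpy as np
import pywt

def normal_shrink(image, wavelet="db4", levels=2):
    """Adaptive wavelet thresholding (NormalShrink) sketch.

    T = beta * sigma_noise^2 / sigma_subband, with sigma_noise estimated
    from the finest diagonal subband and beta = sqrt(log(size / levels)).
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Robust median rule for the noise estimate (finest diagonal subband)
    sigma_n = np.median(np.abs(coeffs[-1][2])) / 0.6745
    new_coeffs = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            beta = np.sqrt(np.log(band.size / levels))
            sigma_y = band.std() or 1e-12       # guard against flat bands
            t = beta * sigma_n ** 2 / sigma_y   # NormalShrink threshold
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)

denoised = normal_shrink(np.random.rand(64, 64) * 255)
```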

Keywords: Color image segmentation, Genetic algorithm, Markov random field, Scale space filter.

46 Optimal Channel Equalization for MIMO Time-Varying Channels

Authors: Ehab F. Badran, Guoxiang Gu

Abstract:

We consider optimal channel equalization for MIMO (multi-input/multi-output) time-varying channels in the sense of MMSE (minimum mean-squared error), where the observation noise can be non-stationary. We show that all ZF (zero-forcing) receivers can be parameterized in an affine form which eliminates completely the ISI (inter-symbol interference), and optimal channel equalizers can be designed through minimization of the MSE (mean-squared error) between the detected signals and the transmitted signals, among all ZF receivers. We demonstrate that the optimal channel equalizer is a modified Kalman filter, and show that under the AWGN (additive white Gaussian noise) assumption, the proposed optimal channel equalizer minimizes the BER (bit error rate) among all possible ZF receivers. Our results are applicable to optimal channel equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers, OFDM (orthogonal frequency division multiplexing), and DS (direct sequence) CDMA (code division multiple access) wireless data communication systems. A design algorithm for optimal channel equalization is developed, and several simulation examples are worked out to illustrate the proposed design algorithm.
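The paper's optimal equalizer is a modified Kalman filter; as a simpler point of reference, a one-shot linear MMSE block equalizer for a known MIMO channel (assuming unit-power symbols and white noise) can be sketched as:

```python
import numpy as np

def mmse_equalizer(H, y, noise_var):
    """Linear MMSE estimate of s from y = H s + n, assuming unit-power
    symbols and white noise of variance noise_var."""
    n_tx = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx),
                        H.conj().T)
    return W @ y

rng = np.random.default_rng(1)
H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
s = np.sign(rng.normal(size=2)) + 0j          # BPSK symbols
y = H @ s + 0.1 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.sign(mmse_equalizer(H, y, 0.02).real))  # detected symbols
```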

Keywords: Channel equalization, Kalman filtering, Time-varying systems.

45 Copy-Move Image Forgery Detection in Virtual Electrostatic Field

Authors: Michael Zimba, Darlison Nyirenda

Abstract:

A novel copy-move image forgery (CMIF) detection method is proposed. The method presents a new approach which relies on electrostatic field theory (EFT). Solely for the purpose of reducing the dimension of a suspicious image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image and extracts only the approximation subband. The extracted subband is then bijectively mapped onto a virtual electrostatic field where concepts of EFT are utilized to extract robust features. The extracted features are invariant to additive noise, JPEG compression, and affine transformation. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. SATS is a better option than the common shift-vector method because SATS is insensitive to affine transformation. Consequently, the proposed CMIF algorithm is not only fast but also more robust to attacks than the existing related CMIF algorithms. The experimental results show high detection rates, as high as 100% in some cases.
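The dimension-reduction step alone, extracting the approximation subband with PyWavelets, might look like the following sketch (wavelet choice and level are illustrative):

```python
import numpy as np
import pywt

def approximation_subband(image, wavelet="haar", level=2):
    """Keep only the low-frequency approximation subband of a suspicious
    image as a dimension-reduction step before feature extraction."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    return coeffs[0]

img = np.random.rand(256, 256)
print(approximation_subband(img).shape)   # (64, 64) with 2-level Haar
```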

Keywords: Affine transformation, Radix sort, SATS, Virtual electrostatic field.

44 Spread Spectrum Image Watermarking for Secured Multimedia Data Communication

Authors: Tirtha S. Das, Ayan K. Sau, Subir K. Sarkar

Abstract:

Digital watermarking provides secure multimedia data communication in addition to its copyright-protection role. The spread spectrum (SS) modulation principle is widely used in digital watermarking to ensure the robustness of multimedia signals against various signal-processing operations. Several SS watermarking algorithms have been proposed for multimedia signals, but few works have discussed the issues responsible for secure data communication and its robustness improvement. The current paper critically analyzes several such factors, namely the properties of the spreading codes, a signal decomposition suitable for data embedding, the security provided by the key, the successive bit cancellation method applied at the decoder (which has a strong impact on detection reliability), and the secure communication of a significant signal under camouflage of insignificant signals. Based on this analysis, a robust SS watermarking scheme for secure data communication is proposed in the wavelet domain, and improvements in secure communication and robustness performance are reported through experimental results. The reported results also show improvement in the visual and statistical invisibility of the hidden data.
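A toy sketch of additive spread-spectrum embedding in the wavelet domain with correlation detection; the key-seeded spreading code, chosen subband, and embedding strength are illustrative, not the authors' scheme:

```python
import numpy as np
import pywt

def embed(image, key, strength=5.0):
    """Additive spread-spectrum embedding of one watermark bit (+1) into
    the level-1 horizontal detail subband, spread by a key-seeded code."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    code = np.random.default_rng(key).choice([-1.0, 1.0], size=cH.shape)
    return pywt.idwt2((cA, (cH + strength * code, cV, cD)), "haar")

def detect(image, key):
    """Correlation detector: the mean correlation with the key's code is
    near the embedding strength if the mark is present, near 0 if not."""
    _, (cH, _, _) = pywt.dwt2(image.astype(float), "haar")
    code = np.random.default_rng(key).choice([-1.0, 1.0], size=cH.shape)
    return float(np.mean(cH * code))

marked = embed(np.random.rand(128, 128) * 255, key=42)
print(detect(marked, key=42), detect(marked, key=7))   # large vs near zero
```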

Keywords: Spread spectrum modulation, spreading code, signal decomposition, security, successive bit cancellation.

43 Understanding the Discharge Activities in Transformer Oil under AC and DC Voltage Adopting UHF Technique

Authors: R. Sarathi, G. Koperundevi

Abstract:

Design of converter transformer insulation is a major challenge, as the insulation of these transformers is stressed by both AC and DC voltages. Particle contamination is one of the major problems in insulation structures, since particles generate partial discharges that can lead to major insulation failure. Similarly, corona discharges occur in transformer insulation. The partial discharge activity caused by particle movement or corona formation in an insulation structure differs with the applied voltage wave shape. In the present study, the UHF technique is adopted to understand the discharge activity, and it is observed that the characteristics of the UHF signals generated under low and high fields are different. For corona-generated signals, the frequency content of the UHF sensor output lies in the range 0.3-1.2 GHz and does not vary much, except for an increase in the discharge magnitude with increasing applied voltage. The current signal injected by partial discharges/corona has a duration of about 4 ns, measured over the first half cycle. The wavelet technique is adopted in the present study; it allows one to identify the frequency content present in the signal at different instants of time. STD-MRA analysis helps identify the frequency band in which the energy content of the UHF signal is maximum.
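A sketch of the band-energy side of such an MRA analysis, assuming PyWavelets and an illustrative sampling rate; it reports the detail level carrying the most energy and its approximate frequency band:

```python
import numpy as np
import pywt

def dominant_band(signal, fs, wavelet="db4", levels=6):
    """Wavelet MRA of a sensor signal: energy of each detail level and
    the approximate frequency band holding the maximum energy."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    details = coeffs[1:]                       # [cD_levels, ..., cD_1]
    energies = [float(np.sum(d ** 2)) for d in details]
    j = levels - int(np.argmax(energies))      # detail level with max energy
    f_hi = fs / 2 ** j                         # cD_j spans ~[fs/2^(j+1), fs/2^j]
    return j, (f_hi / 2, f_hi), energies

fs = 5e9                                       # assumed 5 GS/s digitizer
t = np.arange(4096) / fs
sig = np.sin(2 * np.pi * 0.8e9 * t) * np.exp(-t / 4e-9)   # 0.8 GHz burst
print(dominant_band(sig, fs)[:2])              # expect the band around 0.8 GHz
```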

Keywords: Contamination, Insulation, Partial Discharges, Transformer oil, UHF sensors.

42 Feature Extractions of EMG Signals during a Constant Workload Pedaling Exercise

Authors: Bing-Wen Chen, Alvin W. Y. Su, Yu-Lin Wang

Abstract:

Electromyography (EMG) is one of the important indicators during exercise, as it is closely related to the level of muscle activation. This work quantifies the muscle conditions of the lower limbs during a constant-workload exercise. Surface EMG signals of the vastus lateralis (VL), vastus medialis (VM), rectus femoris (RF), gastrocnemius medialis (GM), gastrocnemius lateralis (GL), and soleus (SOL) were recorded from fourteen healthy males. The EMG signals were segmented into two phases: activation segment (AS) and relaxation segment (RS). Period entropy (PE), peak count (PC), zero crossing (ZC), waveform length (WL), mean power frequency (MPF), median frequency (MDF), and root mean square (RMS) were calculated to provide quantitative information about the measured EMG segments. The outcomes reveal that PE, PC, ZC, and RMS changed significantly (p<.001); WL changed moderately (p<.01); and MPF and MDF showed no change (p>.05) during exercise. The results also suggest that the RS is preferred for performance evaluation, since the extracted features in the AS are usually affected directly by the amplitudes. It is further found that the VL exhibits the most significant changes among the six muscles during the pedaling exercise. The proposed work could be applied to stamina analysis and to predicting instant muscle status in athletes.
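Several of the listed features can be computed along these lines (a NumPy/SciPy sketch; the sampling rate and segment are placeholders, and PE/PC are omitted):

```python
import numpy as np
from scipy.signal import welch

def emg_features(x, fs=1000):
    """RMS, zero crossings, waveform length, mean and median frequency
    for one EMG segment sampled at fs Hz."""
    rms = float(np.sqrt(np.mean(x ** 2)))
    zc = int(np.abs(np.diff(np.signbit(x).astype(int))).sum())  # sign changes
    wl = float(np.abs(np.diff(x)).sum())            # cumulative waveform length
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mpf = float((f * pxx).sum() / pxx.sum())        # mean power frequency
    cum = np.cumsum(pxx)
    mdf = float(f[np.searchsorted(cum, cum[-1] / 2)])   # median frequency
    return dict(RMS=rms, ZC=zc, WL=wl, MPF=mpf, MDF=mdf)

rng = np.random.default_rng(0)
print(emg_features(rng.normal(size=2000)))
```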

Keywords: EMG, feature extraction, muscle status, pedaling exercise, relaxation segment.

41 Spike Sorting Method Using Exponential Autoregressive Modeling of Action Potentials

Authors: Sajjad Farashi

Abstract:

Neurons in the nervous system communicate with each other by producing electrical signals called spikes. To investigate the physiological function of the nervous system, it is essential to study the activity of neurons by detecting and sorting spikes in the recorded signal. In this paper a method is proposed for the spike sorting problem based on nonlinear modeling of spikes using an exponential autoregressive model. A genetic algorithm is utilized for model parameter estimation, and selected model coefficients are used as features for sorting purposes. For optimal selection of model coefficients, a self-organizing feature map is used. The results show that modeling spikes with a nonlinear autoregressive model outperforms its linear counterpart. Also, the features extracted from the coefficients of the exponential autoregressive model outperform wavelet-based features and yield more compact, well-separated clusters. For spikes that differ only in small-scale structures, where principal component analysis fails to produce separated clouds in the feature space, the proposed method can obtain well-separated clusters, which removes the need for complex classifiers.

Keywords: Exponential autoregressive model, Neural data, spike sorting, time series modeling.

40 Improved Feature Processing for Iris Biometric Authentication System

Authors: Somnath Dey, Debasis Samanta

Abstract:

Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex process and computationally very expensive. In the overall processing of iris biometrics in an iris-based biometric authentication system, feature processing is an important task: we extract iris features, which are ultimately used in matching. Since the number of iris features is large and the computational time increases with the number of features, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply a Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy. Further, we match iris templates based on a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
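A hedged sketch of the encode-and-compare idea, using a 4-level db4 decomposition, 2-bit quantization, and a weighted similarity; the coefficient selection, template size, and weights here are illustrative and do not reproduce the 304-bit template:

```python
import numpy as np
import pywt

def encode_iris(strip, wavelet="db4", levels=4):
    """Encode a normalized iris strip: 4-level Daubechies D4 decomposition,
    then quantize the coarse coefficients into 4 levels (2 bits each)."""
    coeffs = pywt.wavedec2(strip.astype(float), wavelet, level=levels)
    feats = coeffs[0].ravel()                       # coarse-scale features
    edges = np.quantile(feats, [0.25, 0.5, 0.75])   # 4 quantization bins
    return np.digitize(feats, edges).astype(np.uint8)   # codes 0..3

def weighted_similarity(t1, t2, weights):
    """Weighted fraction of matching 2-bit codes between two templates."""
    return float(np.sum(weights * (t1 == t2)) / np.sum(weights))

a = encode_iris(np.random.rand(64, 512))
b = encode_iris(np.random.rand(64, 512))
w = np.ones(len(a))        # per-region weights; uniform here
print(weighted_similarity(a, b, w))
```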

Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.

39 Numerical Simulations of Electronic Cooling with In-Line and Staggered Pin Fin Heat Sinks

Authors: Yue-Tzu Yang, Hsiang-Wen Tang, Jian-Zhang Yin, Chao-Han Wu

Abstract:

Three-dimensional incompressible turbulent fluid flow and heat transfer in pin-fin heat sinks using air as the cooling fluid are numerically studied. Two kinds of pin fins, with circular and square cross sections, are compared in thermal performance, each in both in-line and staggered arrangements. The turbulent governing equations are solved using a control-volume-based finite-difference method. Numerical computations are performed with the realizable k-ε turbulence model for the parameters studied: the fin height H, fin diameter D, and Reynolds number (Re), in the ranges 7 ≤ H ≤ 10, 0.75 ≤ D ≤ 2, and 2000 ≤ Re ≤ 126000, respectively. The numerical results are validated against available experimental data in the literature, and good agreement has been found. Circular pin fins are more streamlined than square pin fins: their pressure drop is smaller, but their heat transfer is not as good. The thermal performance of the staggered pin fins is better than that of the in-line pin fins because the staggered arrangement produces larger disturbance. Both in-line and staggered arrangements show the same behavior for thermal resistance, pressure drop, and entropy generation.

Keywords: Pin-fin, heat sinks, simulations, turbulent flow.

38 Nonlinear Analysis of Postural Sway in Multiple Sclerosis

Authors: Hua Cao, Laurent Peyrodie, Olivier Agnani, Cécile Donzé

Abstract:

Multiple sclerosis (MS) is a disease which affects the central nervous system and causes balance problems. In clinical practice, this disorder is usually evaluated using static posturography. Linear and nonlinear measures extracted from the posturographic data (i.e., center of pressure, COP) recorded during a balance test have been used to analyze the postural control of MS patients. In this study, the trend (TREND) and the sample entropy (SampEn), two nonlinear parameters, were chosen to investigate their relationships with the expanded disability status scale (EDSS) score. 40 volunteers with different EDSS scores participated in our experiments with eyes open (EO) and closed (EC). TREND and two types of SampEn (SampEn1 and SampEn2) were calculated for each combined COP position signal. The results show that TREND had a weak negative correlation with EDSS, while SampEn2 had a strong positive correlation with EDSS. Compared to TREND and SampEn1, SampEn2 showed a more significant correlation with EDSS and an ability to discriminate the MS patients in the EC case. In addition, the outcome of the study suggests that multi-dimensional nonlinear analysis could provide information about the impact of disability progression in MS on the dynamics of the COP data.
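For reference, sample entropy in its standard Richman-Moorman form can be sketched as follows (this is the generic SampEn, not the paper's SampEn1/SampEn2 variants):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, N): -log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def pair_count(k):
        # template pairs of length k within Chebyshev distance r,
        # using the first N - m templates (Richman-Moorman convention)
        templ = np.lib.stride_tricks.sliding_window_view(x, k)[:N - m]
        total = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            total += int(np.sum(d <= r))
        return total

    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=500)))        # irregular -> higher
print(sample_entropy(np.sin(np.arange(500) / 5)))  # regular -> lower
```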

Keywords: Balance, multiple sclerosis, nonlinear analysis, postural sway.

37 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach for specific JPEG steganalysis and propose studying statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding; this deviation is greater when the image is a cover medium than when it is a stego image. To observe this deviation, we introduce new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher-discriminant-based classifier for the well-known steganographic algorithms Outguess, F5, and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

36 Using the PARIS Method for Multiple Criteria Decision Making in Unmanned Combat Aircraft Evaluation and Selection

Authors: C. Ardil

Abstract:

Unmanned combat aircraft (UCA) are expanding significantly in several defense industries, along with artificial intelligence improvements in highly precise technology. UCA are crucial in military settings for targeting enemy elements and objects, and are also utilized for highly precise reconnaissance and surveillance tasks. To select the best alternative for critical missions, a methodical and effective strategy for UCA selection is required. Multiple criteria decision-making (MCDM) methodologies are ideally equipped to handle the complexity of alternative aircraft selection. To analyze UCA alternatives for the selection process, an integrated methodology is built on objective criteria weights and preference analysis for reference ideal solution (PARIS). First, the weights of the essential elements are determined using the average weight (AW), standard deviation weight (SW), and entropy weight (EW) approaches. The weights of the evaluation criteria affect the decision-making process. The aircraft choices in the decision problem are then ranked using the objective criteria weights along with the PARIS technique. The validation and sensitivity analysis of the proposed MCDM approach are discussed.
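The entropy weight (EW) step can be sketched as follows for a strictly positive decision matrix; the matrix values below are placeholders:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via the entropy weighting method.
    X: (alternatives x criteria) matrix of strictly positive scores."""
    P = X / X.sum(axis=0)              # share of each alternative per criterion
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P)).sum(axis=0)   # entropy of each criterion
    d = 1.0 - E                            # divergence: higher -> more informative
    return d / d.sum()

# Illustrative 4-alternative x 3-criterion matrix (placeholder scores)
X = np.array([[7.0, 9.0, 6.0],
              [8.0, 7.0, 7.0],
              [6.0, 8.0, 9.0],
              [9.0, 6.0, 8.0]])
print(entropy_weights(X))   # weights sum to 1
```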

Keywords: unmanned combat aircraft (UCA), multiple criteria decision making, MCDM, PARIS

35 A Complexity-Based Approach in Image Compression using Neural Networks

Authors: Hadi Veisi, Mansour Jamzad

Abstract:

In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressors and de-compressors. This is done by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. Three complexity measures, called entropy, activity, and pattern-based, are used to determine the level of complexity of image blocks, and their ability in complexity estimation is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is an alternative for selecting the compressor network in the evaluation phase: it chooses the trained network that yields the best SNR when compressing the input image block. In our evaluations, the best results are obtained when overlapping of blocks is allowed and the compressor network is chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and the JPEG standard coding.

Keywords: Adaptive image compression, Image complexity, Multi-layer perceptron neural network, JPEG Standard, PSNR.

34 Gender Based Variability Time Series Complexity Analysis

Authors: Ramesh K. Sunkaria, Puneeta Marwaha

Abstract:

Nonlinear methods of heart rate variability (HRV) analysis are becoming more popular. It has been observed that complexity measures quantify the regularity and uncertainty of cardiovascular RR-interval time series. In the present work, SampEn is evaluated in healthy normal sinus rhythm (NSR) male and female subjects for different data lengths and tolerance levels r. It is demonstrated that SampEn is small for higher values of tolerance r. Also, the SampEn value of the healthy female group is higher than that of the healthy male group for short data lengths; with increasing data length, the two groups overlap and become difficult to distinguish. SampEn gives inaccurate results by assigning a higher value to the female group, because male subjects have a more complex HRV pattern than female subjects. Therefore, this traditional algorithm exhibits higher complexity for healthy female subjects than for healthy male subjects, which is a misleading observation. This may be due to the fact that SampEn does not account for the multiple time scales inherent in physiologic time series, and the hidden spatial and temporal fluctuations remain unexplored.

Keywords: Heart rate variability, normal sinus rhythm group, RR interval time series, sample entropy.

33 Methyltrioctylammonium Chloride as a Separation Solvent for Binary Mixtures: Evaluation Based on Experimental Activity Coefficients

Authors: B. Kabane, G. G. Redhi

Abstract:

An ammonium-based ionic liquid (methyltrioctylammonium chloride), [N8881][Cl], was investigated as a potential extraction solvent for volatile organic solutes, including alkenes, alkanes, ketones, alkynes, aromatic hydrocarbons, tetrahydrofuran (THF), alcohols, thiophene, water, and acetonitrile, based on experimental activity coefficients at infinite dilution. Measurements were conducted by gas-liquid chromatography at four different temperatures (313.15 to 343.15) K. The experimental activity coefficients obtained across the examined temperatures were used to calculate physicochemical properties at infinite dilution such as the partial molar excess enthalpy, Gibbs free energy, and entropy term. Capacity and selectivity data for selected petrochemical extraction problems (heptane/thiophene, heptane/benzene, cyclohexane/cyclohexene, hexane/toluene, hexane/hexene) were computed from the activity coefficient data and compared to literature values for other ionic liquids. Evaluation of activity coefficients at infinite dilution expands the knowledge of, and provides a good understanding of, the interactions between the ionic liquid and the investigated compounds.

Keywords: Separation, activity coefficients, ionic liquid, methyltrioctylammonium chloride, capacity.

32 Robust Digital Cinema Watermarking

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel method of watermarking for video image data based on hardware and discrete wavelet transform techniques and name it "traceable watermarking", because the watermarked data are constructed before the transmission process and traced after they have been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame of the decoded video by using a hardware LSI. Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding, and all intermediate processes. The watermark is embedded into randomly selected movie frames using hash functions, and the embedded watermark information can be extracted from the decoded video data without any need to access the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed, simplicity, and structural robustness.

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip, traceable watermark, Hash Function, CRC-32.

31 Equilibrium, Kinetic and Thermodynamic Studies on Biosorption of Cd (II) and Pb (II) from Aqueous Solution Using a Spore Forming Bacillus Isolated from Wastewater of a Leather Factory

Authors: Sh. Kianfar, A. Moheb, H. Ghaforian

Abstract:

The equilibrium, thermodynamics, and kinetics of the biosorption of Cd(II) and Pb(II) by a spore-forming Bacillus (MGL 75) were investigated under different experimental conditions. The Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) equilibrium adsorption models were applied to describe the biosorption of the metal ions by the MGL 75 biomass. The Langmuir model fitted the equilibrium data better than the other models. The maximum adsorption capacities qmax for lead(II) and cadmium(II) were found by the Langmuir model to be 158.73 mg/g and 91.74 mg/g, respectively. The values of the mean free energy determined with the D-R equation showed that the adsorption process is a physisorption process. The thermodynamic parameters Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°) changes were also calculated, and the values indicated that the biosorption process was exothermic and spontaneous. Experimental data were also used to study the biosorption kinetics using pseudo-first-order and pseudo-second-order kinetic models. Kinetic parameters, rate constants, equilibrium sorption capacities, and related correlation coefficients were calculated and discussed. The results showed that the biosorption processes of both metal ions followed pseudo-second-order kinetics well.
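Fitting the Langmuir isotherm qe = qmax*KL*Ce/(1 + KL*Ce) to equilibrium data can be sketched as below; the data points are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative equilibrium data (Ce in mg/L, qe in mg/g), not the paper's
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([35.0, 60.0, 100.0, 125.0, 145.0, 152.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(150.0, 0.01))
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```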

Keywords: Biosorption, kinetics, metal ion removal, thermodynamics.

30 Malt Bagasse Waste as Biosorbent for Malachite Green: An Ecofriendly Approach for Dye Removal from Aqueous Solution

Authors: H. C. O. Reis, A. S. Cossolin, B. A. P. Santos, K. C. Castro, G. M. Pereira, V. C. Silva, P. T. Sousa Jr, E. L. Dall’Oglio, L. G. Vasconcelos, E. B. Morais

Abstract:

In this study, malt bagasse, a low-cost waste biomass, was tested as a biosorbent to remove the cationic dye malachite green (MG) from aqueous solution. Batch biosorption experiments were carried out as functions of different experimental parameters such as initial pH, salt (NaCl) concentration, contact time, temperature, and initial dye concentration. Higher removal rates of MG were obtained at pH 8 and 10. The equilibrium and kinetic studies suggest that the biosorption follows the Langmuir isotherm and the pseudo-second-order model. The maximum monolayer adsorption capacity was estimated at 117.65 mg/g (at 45 °C). According to the Dubinin-Radushkevich (D-R) isotherm model, biosorption of MG onto malt bagasse occurs physically. The thermodynamic parameters such as Gibbs free energy, enthalpy, and entropy indicated that the MG biosorption onto malt bagasse is spontaneous and endothermic. The results of the ionic strength effect indicated that the biosorption process has a strong tolerance to high salt concentrations. It can be concluded that malt bagasse waste has potential for application as a biosorbent for removal of MG from aqueous solution.

Keywords: Color removal, kinetic and isotherm studies, thermodynamic parameters, FTIR.

29 Analysis of Message Authentication in Turbo Coded Halftoned Images using Exit Charts

Authors: Andhe Dharani, P. S. Satyanarayana, Andhe Pallavi

Abstract:

Considering payload, reliability, security, and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that found in OTPs (one-time pads).

Keywords: Halftoning, Turbo codes, security, operational lifetime, Turbo based stego system.

28 Fuzzy Uncertainty Theory for Stealth Fighter Aircraft Selection in Entropic Fuzzy TOPSIS Decision Analysis Process

Authors: C. Ardil

Abstract:

The purpose of this paper is to present fuzzy TOPSIS in an entropic fuzzy environment. Due to the ambiguous concepts often represented in decision data, exact values are insufficient to model real-life situations. In this paper, the rating of each alternative is defined in fuzzy linguistic terms, which can be expressed with triangular fuzzy numbers. The weight of each criterion is then derived from the decision matrix using the entropy weighting method. Next, a vertex method is proposed to calculate the distance between two triangular fuzzy numbers. According to the TOPSIS concept, a closeness coefficient is defined to determine the ranking order of all alternatives by simultaneously calculating the distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS). Finally, an illustrative example of selecting stealth fighter aircraft is shown at the end of this article to highlight the procedure of the proposed method. Correlation analysis and validation analysis using TOPSIS, WSM, and WPM methods were performed to compare the ranking order of the alternatives.
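The vertex distance and closeness coefficient at the core of the method can be sketched as follows (ratings and ideal solutions are placeholders):

```python
import numpy as np

def vertex_distance(a, b):
    """Vertex-method distance between triangular fuzzy numbers (a1, a2, a3)
    and (b1, b2, b3): sqrt((1/3) * sum of squared component differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def closeness(d_pos, d_neg):
    """TOPSIS closeness coefficient from distances to FPIS and FNIS."""
    return d_neg / (d_pos + d_neg)

rating = (0.5, 0.7, 0.9)                  # fuzzy linguistic rating, e.g. "good"
fpis, fnis = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)
print(closeness(vertex_distance(rating, fpis), vertex_distance(rating, fnis)))
# closer to 1 means closer to the fuzzy positive-ideal solution
```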

Keywords: stealth fighter aircraft selection, fuzzy uncertainty theory (FUT), fuzzy entropic decision (FED), fuzzy linguistic variables, triangular fuzzy numbers, multiple criteria decision making analysis, MCDMA, TOPSIS, WSM, WPM

27 Active Segment Selection Method in EEG Classification Using Fractal Features

Authors: Samira Vafaye Eslahi

Abstract:

BCI (Brain Computer Interface) is a communication machine that translates brain messages into computer commands. With the help of computer programs, these machines can recognize the tasks that are imagined. Feature extraction is an important stage of the process in EEG classification that can affect the accuracy and the computation time of processing the signals. In this study we process the signal in three steps: active segment selection, fractal feature extraction, and classification. One of the great challenges in BCI applications is to improve classification accuracy and computation time together. In this paper, we have used Student's 2D sample t-statistics on continuous wavelet transforms for active segment selection to reduce the computation time. In the next stage, features are extracted using well-known fractal dimension estimators of the signal, namely Katz and Higuchi. In the classification stage we used an ANFIS (Adaptive Neuro-Fuzzy Inference System) classifier, FKNN (Fuzzy K-Nearest Neighbors), LDA (Linear Discriminant Analysis), and SVM (Support Vector Machines). We found that the active segment selection method reduces the computation time, and that fractal dimension features with ANFIS analysis on the selected active segments are the best among the investigated methods for EEG classification.
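One of the two fractal features, the Katz fractal dimension, can be sketched in its standard waveform form as follows (not the paper's exact implementation):

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a waveform treated as a planar curve with
    unit spacing on the time axis:
    FD = log10(n) / (log10(n) + log10(d / L))."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    steps = np.hypot(np.diff(t), np.diff(x))
    L = steps.sum()                          # total curve length
    a = steps.mean()                         # average step length
    d = np.hypot(t - t[0], x - x[0]).max()   # max distance from first point
    n = L / a                                # number of steps
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(0)
print(katz_fd(np.sin(np.arange(500) / 10)))  # smooth signal, near 1
print(katz_fd(rng.normal(size=500)))         # irregular signal, larger
```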

Keywords: EEG, Student's t-statistics, BCI, fractal features, ANFIS, FKNN.

26 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were tried, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

25 Transform-Domain Rate-Distortion Optimization Accelerator for H.264/AVC Video Encoding

Authors: Mohammed Golam Sarwer, Lai Man Po, Kai Guo, Q.M. Jonathan Wu

Abstract:

In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, this mode selection process also makes the encoding extremely complex, especially the computation of the rate-distortion cost function, which includes the computation of the sum of squared differences (SSD) between the original and reconstructed image blocks and the context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on a fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. This algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed transform-domain rate-distortion optimization accelerator is also proposed. Simulation results demonstrate that the proposed algorithm reduces about 47% of the total encoding time with negligible degradation of coding performance. The proposed method can easily be applied to many mobile video application areas such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.

Keywords: Context-adaptive variable length coding (CAVLC), H.264/AVC, rate-distortion optimization (RDO), sum of squared difference (SSD).

24 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems, yet the need for such a measure is increasingly felt in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure of the quantity of organization and the rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double-exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization, with an attractor: the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design, and predict the future behavior of complex systems so as to achieve the highest rates of self-organization and improve their quality. It can be applied to complex systems in physics, chemistry, biology, ecology, economics, cities, network theory, and other fields.

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

23 A Study on the Average Information Ratio of Perfect Secret-Sharing Schemes for Access Structures Based On Bipartite Graphs

Authors: Hui-Chuan Lu

Abstract:

A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, and the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (∑_{v∈V(G)} H(S_v)) / (|V(G)| · H(S)), where S is the secret and S_v is the share of participant v, both random variables determined by the scheme, and H is the Shannon entropy. The infimum of the average information ratio over all possible perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper, we study access structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.

Keywords: secret-sharing scheme, average information ratio, star covering, core sequence.

22 Development of a Neural Network based Algorithm for Multi-Scale Roughness Parameters and Soil Moisture Retrieval

Authors: L. Bennaceur Farah, I. R. Farah, R. Bennaceur, Z. Belhadj, M. R. Boussema

Abstract:

The overall objective of this paper is to retrieve soil surface parameters, namely roughness and the soil moisture related to the dielectric constant, by inverting the radar signal backscattered from natural soil surfaces. Because the classical description of roughness using statistical parameters like the correlation length does not lead to satisfactory results in predicting radar backscattering, we used a multi-scale roughness description based on the wavelet transform and the Mallat algorithm. In this description, the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each having a spatial scale. A second step in this study consisted of adapting a direct model simulating radar backscattering, namely the small perturbation model, to this multi-scale surface description. We investigated the impact of this description on radar backscattering through a sensitivity analysis of the backscattering coefficient to the multi-scale roughness parameters. To perform the inversion of the small perturbation multi-scale scattering model (MLS SPM), we used a multi-layer neural network architecture trained by the backpropagation learning rule. The inversion leads to satisfactory results with a relative uncertainty of 8%.

Keywords: Remote sensing, rough surfaces, inverse problems, SAR, radar scattering, Neural networks and Fractals.

21 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of the parameters available in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimension of defects allowed.

Keywords: Automatic detection, wavelets, defects, fracture lines.
