Search results for: maximal overlap discrete wavelet transform


1190 A Comparison of Real Valued Transforms for Image Compression

Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori

Abstract:

In this paper we present simulation results for the application of a bandwidth-efficient algorithm (mapping algorithm) to an image transmission system. This system considers three different real-valued transforms to generate energy-compact coefficients. Results are first presented for gray-scale and color image transmission in the absence of noise. It is seen that the system performs best when the discrete cosine transform is used. Also, the performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise; varying values of the bit error rate have little or no impact on the performance of the algorithm. Optimum results are obtained when the system uses an 8x8 transform block and transmits 15 coefficients from each block using 8 bits per coefficient.
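As a rough illustration of the selection step described above (not the authors' mapping algorithm; the 8x8 block size, largest-magnitude coefficient selection, and uniform 8-bit quantizer below are assumptions), a block-DCT keep-15-coefficients sketch might look like this:

```python
# Hedged sketch: block DCT, keep 15 coefficients, quantize to 8 bits, reconstruct.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, n_coeffs=15, n_bits=8):
    """Keep the n_coeffs largest-magnitude DCT coefficients of a block,
    quantize them uniformly to n_bits, and reconstruct the block."""
    c = dctn(block, norm='ortho')
    flat = c.ravel()
    keep = np.argsort(np.abs(flat))[::-1][:n_coeffs]   # indices of retained coefficients
    kept = np.zeros_like(flat)
    scale = np.abs(flat[keep]).max() or 1.0            # uniform quantizer range
    levels = 2 ** (n_bits - 1) - 1
    kept[keep] = np.round(flat[keep] / scale * levels) / levels * scale
    return idctn(kept.reshape(block.shape), norm='ortho')

img = np.random.rand(64, 64)                           # stand-in gray-scale image
rec = np.block([[compress_block(img[i:i+8, j:j+8])
                 for j in range(0, 64, 8)] for i in range(0, 64, 8)])
psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))  # peak signal to noise ratio
```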

Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.

Downloads: 1493
1189 Shot Boundary Detection Using Octagon Square Search Pattern

Authors: J. Kavitha, S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

In this paper, a shot boundary detection method is presented using an octagon square search pattern. The color, edge, motion, and texture features of each frame are extracted and used in shot boundary detection. The motion feature is extracted using the octagon square search pattern. The transition detection method is then capable of detecting shot and non-shot boundaries in the video using the feature weight values. Experimental results are evaluated on the TRECVID video test set, which contains various types of shot transitions with lighting effects, object movement, and camera movement within the shots. Further, this paper compares the experimental results of the proposed method with existing methods and shows that the proposed method outperforms state-of-the-art methods for shot boundary detection.
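As a hedged stand-in for one transition cue only (the octagon square search pattern and the feature weighting are not reproduced; the frames and threshold below are placeholders), a color-histogram difference between consecutive frames can be sketched as:

```python
# Hedged sketch: histogram-difference cue for cut detection on toy frames.
import numpy as np

def hist_diff(f1, f2, bins=16):
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    return np.abs(h1 - h2).sum() / f1.size            # normalized bin-wise difference

frames = np.random.randint(0, 256, (10, 120, 160))    # stand-in gray-level frames
scores = [hist_diff(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
boundaries = [i + 1 for i, s in enumerate(scores) if s > 0.5]   # assumed threshold
```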

Keywords: Content-based indexing and retrieval, cut transition detection, discrete wavelet transform, shot boundary detection, video source.

Downloads: 995
1188 Wavelet Enhanced CCA for Minimization of Ocular and Muscle Artifacts in EEG

Authors: B. S. Raghavendra, D. Narayana Dutt

Abstract:

Electroencephalogram (EEG) recordings are often contaminated with ocular and muscle artifacts. In this paper, canonical correlation analysis (CCA) is used as a blind source separation (BSS) technique (BSS-CCA) to decompose the artifact-contaminated EEG into component signals. We combine the BSS-CCA technique with a wavelet filtering approach to minimize both ocular and muscle artifacts simultaneously, and refer to the proposed method as wavelet-enhanced BSS-CCA. In this approach, after careful visual inspection, the muscle artifact components are discarded and the ocular artifact components are subjected to wavelet filtering to retain high-frequency cerebral information, and then clean EEG is reconstructed. The performance of the proposed wavelet-enhanced BSS-CCA method is tested on real EEG recordings contaminated with ocular and muscle artifacts, with power spectral density used as a quantitative measure. Our results suggest that the proposed hybrid approach minimizes ocular and muscle artifacts effectively while minimally affecting the underlying cerebral activity in EEG recordings.
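A minimal sketch of the wavelet-filtering step alone, assuming a generic wavelet and decomposition level rather than the authors' settings (the CCA decomposition itself is omitted), could be:

```python
# Hedged sketch: remove the low-frequency (ocular) content of a component,
# keep the high-frequency detail, then reconstruct.
import numpy as np
import pywt

def retain_high_frequency(component, wavelet='sym4', level=6):
    coeffs = pywt.wavedec(component, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])           # discard slow ocular drift
    return pywt.waverec(coeffs, wavelet)[:len(component)]

fs = 256
t = np.arange(0, 4, 1 / fs)
ocular_like = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)
cleaned = retain_high_frequency(ocular_like)
```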

Keywords: Blind source separation, Canonical correlation analysis, Electroencephalogram, Muscle artifact, Ocular artifact, Power spectrum, Wavelet threshold.

Downloads: 2326
1187 Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand

Authors: P. Sophon, M. Kruachanta, S. Chaisri, G. Leaungvongpaisan, P. Wongpornchai

Abstract:

Thick-bed hydrocarbon reservoirs are of primary interest because of their more prolific production. When the amount of petroleum in a thick bed starts decreasing, thin-bed reservoirs become alternative targets for maintaining reserves. Conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and the instantaneous frequency attribute were used to reveal the thin bed reservoir, while Continuous Wavelet Transform (CWT) spectral decomposition and the envelope (instantaneous amplitude) attribute were used to indicate the hydrocarbon-bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin-bed target is the uppermost part of the Mangahewa Formation, the most productive gas-condensate interval in the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, and this is confirmed by the instantaneous frequency attribute. The envelope attribute, showing a high anomaly, indicates the hydrocarbon accumulation area at the thin-bed target. Moreover, the CWT spectral decomposition shows a low-frequency shadow zone, and abnormal seismic attenuation at higher isofrequencies below the thin bed confirms that the thin bed can be a prospective hydrocarbon zone.
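A minimal sketch of extracting a single isofrequency section with a short-window Fourier transform, assuming a synthetic trace gather, sampling rate, and window length rather than the survey parameters used in the paper:

```python
# Hedged sketch: pick the amplitude of the frequency bin nearest 72 Hz
# from a short-window Fourier transform of each trace.
import numpy as np
from scipy.signal import stft

fs = 500.0                                   # assumed sample rate, Hz
traces = np.random.randn(50, 1000)           # stand-in for 50 seismic traces
target_hz = 72.0

f, t, Z = stft(traces, fs=fs, nperseg=64, noverlap=48, axis=-1)
bin_idx = np.argmin(np.abs(f - target_hz))   # nearest frequency bin to 72 Hz
iso_section = np.abs(Z[:, bin_idx, :])       # traces x time: the isofrequency section
```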

Keywords: Hydrocarbon indication, instantaneous seismic attribute, spectral decomposition, thin bed delineation.

Downloads: 628
1186 Effect of Scene Changing on Image Sequences Compression Using Zero Tree Coding

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

We study in this paper the effect of scene changes on an image sequence coding system using the Embedded Zerotree Wavelet (EZW). The scene change considered here is the full-motion change which may occur. A special image sequence is generated where scene changes occur randomly. Two scenarios are considered. In the first scenario, the system must provide the best possible reconstruction quality by managing the bit rate (BR) while the scene change occurs. In the second scenario, the system must keep the bit rate as constant as possible by managing the reconstruction quality. The first scenario may be motivated by the availability of a wide-band transmission channel, where an increase of the bit rate is possible to keep the reconstruction quality above a given threshold. The second scenario concerns a narrow-band transmission channel, where an increase of the bit rate is not possible; in this case, applications for which the reconstruction quality is not a constraint may be considered. The simulations are performed with a five-scale wavelet decomposition using the 9/7-tap biorthogonal filter bank. The entropy coding is performed using a specifically defined binary code book and the EZW algorithm. Experimental results are presented and compared to LEAD H263 EVAL. It is shown that if the reconstruction quality is the constraint, the system increases the bit rate to obtain the required quality. In the case where the bit rate must be constant, the system is unable to provide the required quality if a scene change occurs; however, the system is able to improve the quality once the scene change has passed.
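A minimal sketch of the decomposition feeding the coder, using PyWavelets' 'bior4.4' filters as a stand-in for the 9/7-tap biorthogonal pair (the EZW entropy coder and bit-rate control are not reproduced):

```python
# Hedged sketch: five-level 2-D biorthogonal decomposition of one frame,
# flattened into the array a zerotree coder would scan, then inverted.
import numpy as np
import pywt

frame = np.random.rand(256, 256)                      # stand-in video frame
coeffs = pywt.wavedec2(frame, 'bior4.4', level=5)     # [cA5, (cH5,cV5,cD5), ..., (cH1,cV1,cD1)]
arr, slices = pywt.coeffs_to_array(coeffs)            # flat coefficient array
reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'bior4.4')
```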

Keywords: Image Sequence Compression, Wavelet Transform, Scene Changing, Zero Tree, Bit Rate, Quality.

Downloads: 1349
1185 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized electrocardiogram beat classifier in an effort to further improve the performance of ECG processing and to offer individualized health care. We employ a three-stage technique for detection of premature ventricular contractions (PVC) among normal beats and other heart diseases, comprising denoising, feature extraction, and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods as well as the efficiency of the whole system is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, achieves the highest recognition rate of around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples corresponding to the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
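A minimal sketch of stacked generalization over MLPs with different topologies, with toy features and labels standing in for the extracted ECG features (the layer sizes and meta-learner are assumptions):

```python
# Hedged sketch: stacking several MLP base classifiers under a simple meta-learner.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = np.random.randn(200, 11)                      # 10 morphological + 1 timing feature (toy)
y = np.random.randint(0, 2, 200)                  # 0 = normal, 1 = PVC (toy labels)

base = [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=500))
        for i, h in enumerate([(10,), (20,), (10, 5)])]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)
stack.fit(X, y)
print(stack.score(X, y))
```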

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.

Downloads: 2207
1184 A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT

Authors: Say Wei Foo, Qi Dong

Abstract:

Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme which encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. For the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction involves mainly the inverse process. The watermark extracting method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
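A hedged sketch of the embedding rule only, enforcing an order relation between two key-selected DCT coefficients of a block of first singular values; normalization, the frequency mask, and extraction are omitted, and the positions, strength, and key handling below are assumptions:

```python
# Hedged sketch: per-block first singular values -> DCT -> swap/offset two
# key-selected coefficients to encode one bit.
import numpy as np
from scipy.fft import dctn, idctn

def first_svs(image, bs=8):
    h, w = image.shape
    return np.array([[np.linalg.svd(image[i:i+bs, j:j+bs], compute_uv=False)[0]
                      for j in range(0, w, bs)] for i in range(0, h, bs)])

def embed_bit(sv_block, bit, key=1234, strength=2.0):
    rng = np.random.default_rng(key)
    idx = rng.choice(np.arange(1, sv_block.size), size=2, replace=False)  # skip DC position
    (r1, r2), (c1, c2) = np.unravel_index(idx, sv_block.shape)
    d = dctn(sv_block, norm='ortho')
    lo, hi = sorted([d[r1, c1], d[r2, c2]])
    d[r1, c1], d[r2, c2] = (hi + strength, lo) if bit else (lo, hi + strength)
    return idctn(d, norm='ortho')

host = np.random.rand(64, 64)
sv = first_svs(host)                       # 8x8 block of first singular values
marked_sv = embed_bit(sv, bit=1)
```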

Keywords: Image watermarking, Image normalization, Singular value decomposition, Discrete cosine transform, Robustness.

Downloads: 2088
1183 Haar Wavelet Method for Solving FitzHugh-Nagumo Equation

Authors: G. Hariharan, K. Kannan

Abstract:

In this paper, we develop an accurate and efficient Haar wavelet method for the well-known FitzHugh-Nagumo equation. The proposed scheme can be applied to a wide class of nonlinear reaction-diffusion equations. The power of this manageable method is confirmed. Moreover, the use of Haar wavelets is found to be accurate, simple, fast, flexible, convenient, and computationally attractive, with small computational costs.
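For reference, the reaction-diffusion form commonly written for the FitzHugh-Nagumo (Nagumo) equation is u_t = u_xx + u(1 - u)(u - a), with 0 < a < 1, which is the kind of nonlinear reaction-diffusion equation the method targets.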

Keywords: FitzHugh-Nagumo equation, Haar wavelet method, Adomian decomposition method, computationally attractive.

Downloads: 2769
1182 Design of Low-Area HEVC Core Transform Architecture

Authors: Seung-Mok Han, Woo-Jin Nam, Seongsoo Lee

Abstract:

This paper proposes and implements a core transform architecture, which is one of the major processing blocks in the HEVC video compression standard. The proposed core transform architecture is implemented with only adders and shifters instead of area-consuming multipliers. Shifters in the proposed architecture are implemented with wires and multiplexers, which significantly reduces chip area. It can also process 4×4 to 16×16 blocks with common hardware by reusing processing elements. The designed core transform architecture, implemented in 0.13 µm technology, can process a 16×16 block with a 2-D transform in 130 cycles, and its gate count is 101,015 gates.
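A minimal sketch of the shift-and-add idea, assuming the standard HEVC 4-point core transform coefficients (64, 83, 36); the full 16×16 datapath and processing-element reuse are not modelled:

```python
# Hedged sketch: constant multiplications replaced by shifts and additions
# in a 4-point forward transform butterfly.
def mul64(x): return x << 6
def mul83(x): return (x << 6) + (x << 4) + (x << 1) + x    # 83 = 64 + 16 + 2 + 1
def mul36(x): return (x << 5) + (x << 2)                   # 36 = 32 + 4

def hevc_dct4(x0, x1, x2, x3):
    e0, e1 = x0 + x3, x1 + x2           # even butterfly
    o0, o1 = x0 - x3, x1 - x2           # odd butterfly
    return (mul64(e0 + e1),             # row 0:  64  64  64  64
            mul83(o0) + mul36(o1),      # row 1:  83  36 -36 -83
            mul64(e0 - e1),             # row 2:  64 -64 -64  64
            mul36(o0) - mul83(o1))      # row 3:  36 -83  83 -36

print(hevc_dct4(1, 2, 3, 4))
```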

Keywords: HEVC, Core transform, Low area, Shift-and-add, PE reuse

Downloads: 1912
1181 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. However, the methods in use are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performances are compared with other methods. In this study, we present our system, which consists of three phases. The first, called transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) coding of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second phase is an approximation, which relies on Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
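A minimal sketch of the transformation phase, using the EIIP values commonly cited in genomic signal processing; the wavelet-network approximation and the hierarchical classification steps are not reproduced:

```python
# Hedged sketch: EIIP coding of a barcode sequence followed by the FFT power spectrum.
import numpy as np

EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}   # commonly cited values

def power_spectrum(seq):
    x = np.array([EIIP[b] for b in seq if b in EIIP])
    X = np.fft.rfft(x - x.mean())         # remove DC before the transform
    return np.abs(X) ** 2                 # power spectrum fed to the next phase

spectrum = power_spectrum("ACGTACGTTAGGCTA")
```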

Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.

Downloads: 1962
1180 Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images

Authors: Mario Mastriani, Alberto E. Giraldez

Abstract:

In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, apply the Kalman filter to the high subbands, and reconstruct a despeckled image from the modified detail coefficients. Experimental results demonstrate that our method compares favorably to several other despeckling methods on test synthetic aperture radar (SAR) images.

Keywords: Kalman's filter, shrinkage, speckle, wavelets.

Downloads: 1594
1179 Usage of Channel Coding Techniques for Peak-to-Average Power Ratio Reduction in Visible Light Communications Systems

Authors: P.L.D.N.M. de Silva, S.G. Edirisinghe, R. Weerasuriya

Abstract:

High Peak-to-Average Power Ratio (PAPR) is a concern in Orthogonal Frequency Division Multiplexing (OFDM) based Visible Light Communication (VLC) systems. Discrete Fourier Transform spread (DFT-s) OFDM is an alternative single-carrier modulation scheme which addresses this concern. Employing channel coding techniques is another mechanism to reduce the PAPR. In this study, the improvement that can be harnessed by hybridizing these two techniques for VLC systems is investigated. Efficient techniques such as Hamming coding and convolutional coding are considered. Thus, we present the impact of the hybrid of DFT-s OFDM and channel coding (Hamming coding and convolutional coding) on PAPR in VLC systems, using MATLAB simulations.
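A minimal sketch comparing the PAPR of plain OFDM and DFT-spread OFDM for QPSK data (the subcarrier counts and localized mapping are assumptions, and the channel coding stage studied in the paper is omitted):

```python
# Hedged sketch: PAPR of one OFDM symbol versus one DFT-spread OFDM symbol.
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256                                   # data block size, IFFT size

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

qpsk = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

ofdm = np.fft.ifft(np.concatenate([qpsk, np.zeros(N - M)]))          # plain OFDM
spread = np.fft.fft(qpsk)                                            # M-point DFT spreading
dfts_ofdm = np.fft.ifft(np.concatenate([spread, np.zeros(N - M)]))   # localized mapping

print(papr_db(ofdm), papr_db(dfts_ofdm))                             # DFT-s is typically lower
```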

Keywords: Convolutional Coding, Discrete Fourier Transform spread Orthogonal Frequency Division Multiplexing (DFT-s OFDM), Hamming Coding, Peak-to-Average Power Ratio (PAPR), Visible Light Communications (VLC).

Downloads: 488
1178 A Robust Extrapolation Method for Curtailed Aperture Reconstruction in Acoustic Imaging

Authors: R. Bremananth

Abstract:

Acoustic-imaging-based sound localization using a microphone array is a challenging task in digital signal processing. Discrete Fourier transform (DFT) based near-field acoustical holography (NAH) is an important acoustical technique for sound source localization and provides an efficient solution to the ill-posed problem. In practice, however, due to the use of a small curtailed aperture and the resulting significant spectral leakage, the DFT cannot reconstruct the active region of sound (AROS) effectively, especially near the edges of the aperture. In this paper, we emphasize the fundamental problems of DFT-based NAH and provide a solution to the spectral leakage effect by extrapolation based on linear predictive coding and 2D Tukey windowing. This approach has been tested for localizing single and multi-point sound sources. We observe that incorporating the extrapolation technique increases the spatial resolution and localization accuracy and reduces spectral leakage when a small curtailed aperture with a low number of sensors is used.
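A minimal sketch of one ingredient only, applying a 2-D Tukey window to the aperture data before the 2-D DFT to tame leakage; the LPC-based extrapolation and the NAH propagator are not reproduced, and the sizes and taper fraction are assumptions:

```python
# Hedged sketch: taper the measured aperture, then take the 2-D DFT to k-space.
import numpy as np
from scipy.signal.windows import tukey

aperture = np.random.randn(32, 32)                 # stand-in microphone-array pressure data
w2d = np.outer(tukey(32, alpha=0.25), tukey(32, alpha=0.25))
K = np.fft.fftshift(np.fft.fft2(aperture * w2d))   # k-space (wavenumber) spectrum
```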

Keywords: Acoustic Imaging, Discrete Fourier Transform (DFT), k-space wavenumber, Near-Field Acoustical Holography (NAH), Source Localization, Spectral Leakage.

Downloads: 1682
1177 New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images

Authors: Mario Mastriani

Abstract:

This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based superresolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA works well as a new superresolution algorithm for image enhancement, image metrology, and biometric identification, here it is used as a despeckling tool; this is the first time a super-resolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.

Keywords: Projection, speckle, superresolution, synthetic aperture radar, thresholding, wavelets.

Downloads: 1607
1176 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models

Authors: Reza Bazargan Lari, Mohammad H. Fattahi

Abstract:

Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for preprocessing practical data sets. Predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based preprocessing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.

Keywords: Wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN.

Downloads: 2054
1175 Motion Recognition Based On Fuzzy WP Feature Extraction Approach

Authors: Keun-Chang Kwak

Abstract:

This paper is concerned with a motion recognition approach based on fuzzy WP (wavelet packet) feature extraction from Vicon physical data sets. For this purpose, we use an efficient fuzzy mutual-information-based WP transform for feature extraction. This method estimates the required mutual information using a novel approach based on fuzzy membership functions. The physical action data set includes 10 normal and 10 aggressive physical actions that measure human activity. The data have been collected from 10 subjects using the Vicon 3D tracker. The experiments consist of running, sitting, and walking as physical activity motions among various activities. The experimental results revealed that the presented feature extraction approach showed good recognition performance.
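A minimal sketch of plain wavelet-packet energy features from one motion channel; the fuzzy mutual-information-based selection is not reproduced, and the wavelet, depth, and synthetic signal are assumptions:

```python
# Hedged sketch: per-band energies of the terminal wavelet-packet nodes.
import numpy as np
import pywt

signal = np.random.randn(1024)                               # stand-in Vicon channel
wp = pywt.WaveletPacket(data=signal, wavelet='db4', maxlevel=4)
nodes = wp.get_level(4, order='freq')                        # terminal nodes, low to high band
features = np.array([np.sum(node.data ** 2) for node in nodes])   # per-band energies
```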

Keywords: Motion recognition, fuzzy wavelet packet, Vicon physical data.

Downloads: 1636
1174 Risk Factors’ Analysis on Shanghai Carbon Trading

Authors: Zhaojun Wang, Zongdi Sun, Zhiyuan Liu

Abstract:

First, the carbon trading price and trading volume in Shanghai are transformed by the Fourier transform, and the frequency response diagram is obtained. Then, the frequency response diagram is analyzed and a Blackman filter is designed. The Blackman filter is used for filtering, and the carbon trading time-domain and frequency response diagrams are obtained. After wavelet analysis, the carbon trading data were processed to obtain average values over 5, 10, 20, 30, and 60 days. Finally, the data are used as input to a Back Propagation Neural Network model for prediction.
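A minimal sketch of the filtering and averaging steps, with a Blackman-windowed FIR low-pass standing in for the designed Blackman filter; the cutoff, tap count, and synthetic series are assumptions, and the BP network stage is omitted:

```python
# Hedged sketch: Blackman-windowed FIR smoothing, then 5/10/20/30/60-day means.
import numpy as np
from scipy.signal import firwin, filtfilt

price = np.cumsum(np.random.randn(500)) + 40        # stand-in daily carbon price
taps = firwin(31, cutoff=0.1, window='blackman')    # Blackman-windowed low-pass FIR
smoothed = filtfilt(taps, [1.0], price)             # zero-phase filtering

windows = [5, 10, 20, 30, 60]
averages = {w: np.convolve(smoothed, np.ones(w) / w, mode='valid') for w in windows}
```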

Keywords: Shanghai carbon trading, carbon trading price, carbon trading volume, wavelet analysis, BP neural network model.

Downloads: 965
1173 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training, from the time series provided. The results show that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g. remote monitoring applications of the PCG signal.
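A minimal sketch of the input preparation only: one beat segment resampled and reshaped into a square intensity matrix (the segment length, matrix size, and synthetic beat are assumptions; the CNN itself is not shown):

```python
# Hedged sketch: turn one PCG beat segment into a square intensity matrix.
import numpy as np
from scipy.signal import resample

beat = np.random.randn(1800)                    # stand-in single-beat PCG segment
side = 32
matrix = resample(beat, side * side).reshape(side, side)           # square intensity matrix
matrix = (matrix - matrix.min()) / (matrix.max() - matrix.min())   # scale to [0, 1]
```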

Keywords: Convolutional neural network, discrete wavelet transform, deep learning, heart sound classification.

Downloads: 1137
1172 Support Vector Machine based Intelligent Watermark Decoding for Anticipated Attack

Authors: Syed Fahad Tahir, Asifullah Khan, Abdul Majid, Anwar M. Mirza

Abstract:

In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used with the assumption that the underlying modeling of the Discrete Cosine Transform (DCT) coefficients does not change appreciably. In the case of an attack, the distribution of the image coefficients is heavily altered. The distributions of the sufficient statistics at the receiving end corresponding to the antipodal signals overlap, and a simple hard decoder fails to classify them properly. We consider message retrieval of the antipodal signal as a binary classification problem. Machine learning techniques such as SVM are used to retrieve the message when a certain specific class of attacks is most probable. In order to validate the SVM-based decoding scheme, we have taken Gaussian noise as a test case. We generate a data set using 125 images and 25 different keys. The polynomial kernel of the SVM achieved 100 percent accuracy on the test data.
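A minimal sketch of the decoding classifier, a polynomial-kernel SVM tuned by grid search over C and degree, with toy data standing in for the watermark statistics extracted from the 125 images:

```python
# Hedged sketch: polynomial-kernel SVM with grid search for bit decoding.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X = np.random.randn(500, 16)                 # toy sufficient statistics per bit
y = np.random.randint(0, 2, 500)             # embedded bit values

grid = GridSearchCV(SVC(kernel='poly'),
                    {'C': [0.1, 1, 10], 'degree': [2, 3, 4]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```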

Keywords: Bit Correct Ratio (BCR), Grid Search, Intelligent Decoding, Jackknife Technique, Support Vector Machine (SVM), Watermarking.

Downloads: 1662
1171 Optimized and Secured Digital Watermarking Using Entropy, Chaotic Grid Map and Its Performance Analysis

Authors: R. Rama Kishore, Sunesh

Abstract:

This paper presents an optimized, robust, and secured watermarking technique. The methodology used in this work is a combination of entropy and a chaotic grid map. The proposed methodology applies the Discrete Cosine Transform (DCT) to the host image. To improve the imperceptibility of the method, the host image DCT blocks where the watermark is to be embedded are further optimized by considering the entropy of the blocks. The chaotic grid is used as a key to reorder the DCT blocks, which further increases security in selecting the watermark embedding locations and their sequence. Without the key, one cannot reveal the exact watermark from the watermarked image. The proposed method is implemented on four different images. It is concluded that the proposed method gives better results in terms of imperceptibility, with PSNR measured above 50. To prove the effectiveness of the method, a performance analysis is carried out after applying different attacks to the watermarked images. It is found that the methodology is very strong against JPEG compression attacks even with the quality parameter as low as 15. The experimental results confirm that the combination of entropy and chaotic grid map is strong and secure against different image processing attacks.
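A hedged sketch of two ingredients, per-block entropy and a logistic map used as a stand-in keyed chaotic sequence for reordering blocks (the paper's chaotic grid map and the DCT embedding are not reproduced):

```python
# Hedged sketch: block entropies plus a key-dependent chaotic visiting order.
import numpy as np

def block_entropy(block, bins=32):
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def logistic_order(n, key=0.37, r=3.99):
    x, seq = key, []
    for _ in range(n):
        x = r * x * (1 - x)            # logistic map iteration
        seq.append(x)
    return np.argsort(seq)             # key-dependent permutation of block indices

host = np.random.rand(64, 64)
blocks = [host[i:i+8, j:j+8] for i in range(0, 64, 8) for j in range(0, 64, 8)]
entropies = np.array([block_entropy(b) for b in blocks])
order = logistic_order(len(blocks))    # order in which blocks would be visited
```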

Keywords: Digital watermarking, discrete cosine transform, chaotic grid map, entropy.

Downloads: 710
1170 Algorithms for the Fast Computation of PWL and PHL Transforms

Authors: Fituri H Belgassem, Abdulbasit Nigrat, Seddeeq Ghrari

Abstract:

In this paper, the construction of fast algorithms for the computation of the Periodic Walsh Piecewise-Linear (PWL) transform and the Periodic Haar Piecewise-Linear (PHL) transform is presented. Algorithms for the computation of the inverse transforms are also proposed. The matrix equations of the PWL and PHL transforms are introduced. A comparison of the computational requirements for the periodic piecewise-linear transforms and other orthogonal transforms shows that the periodic piecewise-linear transforms require fewer operations than some orthogonal transforms such as the Fourier, Walsh, and discrete cosine transforms.

Keywords: Piecewise-linear transforms, Fast transforms, Fast algorithms.

Downloads: 1652
1169 Quality Factor Variation with Transform Order in Fractional Fourier Domain

Authors: Sukrit Shankar, Chetana Shanta Patsa, K. Pardha Saradhi, Jaydev Sharma

Abstract:

The Fractional Fourier Transform is a powerful tool that generalizes the classical Fourier transform. This paper provides a mathematical relation between the span in the fractional Fourier domain and the amplitude and phase functions of the signal, which is further used to study the variation of the quality factor with different values of the transform order. It is seen that with an increase in the number of transients in the signal, the deviation of the average fractional Fourier span from the frequency bandwidth increases. Also, with an increase in the transient nature of the signal, the optimum value of the transform order can be estimated based on the quality factor variation, and this value is found to be very close to that for which one obtains the most compact representation. With the entire mathematical analysis and experimentation, we consolidate the fact that the Fractional Fourier Transform gives more optimal representations for a number of transform orders than the Fourier transform.

Keywords: Fractional Fourier Transform, Quality Factor, Fractional Fourier span, transient signals.

Downloads: 1235
1168 On Decomposition of Maximal Prefix Codes

Authors: Nikolai Krainiukov, Boris Melnikov

Abstract:

We study the properties of maximal prefix codes. Such codes have many applications in computer science, the theory of formal languages, data processing, and data classification. Our approach uses finite state automata (so-called flower automata) to represent prefix codes. An important task is the decomposition of prefix codes into prime prefix codes (factors). We discuss properties of such prefix code decompositions. A linear-time algorithm is designed to find the prime decomposition. We used the GAP computer algebra system, which allows us to perform algebraic operations on free semigroups, monoids, and automata.
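A minimal sketch of a basic check, not the flower-automaton machinery of the paper: a finite code is prefix-free when no word is a prefix of another, and a finite prefix code is maximal exactly when its Kraft sum equals 1 (i.e., the code tree is complete):

```python
# Hedged sketch: prefix-freeness plus Kraft equality for finite codes.
from fractions import Fraction

def is_prefix_free(code):
    words = sorted(code)                 # any prefix violation appears between sorted neighbours
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

def is_maximal_prefix_code(code, alphabet_size):
    kraft = sum(Fraction(1, alphabet_size ** len(w)) for w in code)
    return is_prefix_free(code) and kraft == 1

print(is_maximal_prefix_code({"0", "10", "11"}, 2))      # True
print(is_maximal_prefix_code({"0", "10"}, 2))            # False (incomplete)
```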

Keywords: Maximal prefix code, regular languages, flower automata, prefix code decomposition.

Downloads: 37
1167 Improved Approximation to the Derivative of a Digital Signal Using Wavelet Transforms for Crosstalk Analysis

Authors: S. P. Kozaitis, R. L. Kriner

Abstract:

The information revealed by derivatives can help to better characterize digital near-end crosstalk signatures with the ultimate goal of identifying the specific aggressor signal. Unfortunately, derivatives tend to be very sensitive to even low levels of noise. In this work we approximated the derivatives of both quiet and noisy digital signals using a wavelet-based technique. The results are presented for Gaussian digital edges, IBIS Model digital edges, and digital edges in oscilloscope data captured from an actual printed circuit board. Tradeoffs between accuracy and noise immunity are presented. The results show that the wavelet technique can produce first derivative approximations that are accurate to within 5% or better, even under noisy conditions. The wavelet technique can be used to calculate the derivative of a digital signal edge when conventional methods fail.

Keywords: digital signals, electronics, IBIS model, printed circuit board, wavelets

Downloads: 1866
1166 Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method

Authors: Ž. Nikolić, H. Smoljanović, N. Živaljić

Abstract:

This paper presents a numerical model based on the finite-discrete element method for analysis of the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements as well as cyclic behavior during dynamic load, is considered through contact elements implemented within the finite element mesh. The model was applied to several examples of these structures. The performed analysis shows high accuracy of the numerical results in comparison with the experimental ones and demonstrates the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.

Keywords: Finite-discrete element method, dry stone masonry structures, static load, dynamic load.

Downloads: 1611
1165 Wavelet-Based Despeckling of Synthetic Aperture Radar Images Using Adaptive and Mean Filters

Authors: Syed Musharaf Ali, Muhammad Younus Javed, Naveed Sarfraz Khattak

Abstract:

In this paper we introduce a new wavelet-based algorithm for speckle reduction of synthetic aperture radar images, which uses a combination of the undecimated wavelet transform, a Wiener filter (an adaptive filter), and a mean filter. Furthermore, instead of using existing thresholding techniques such as SURE shrinkage, Bayesian shrinkage, universal thresholding, normal thresholding, visu thresholding, and soft and hard thresholding, we use brute-force thresholding, which iteratively runs the whole algorithm for each possible candidate value of the threshold, saves each result in an array, and finally selects the threshold value that gives the best possible result. It is therefore slow compared to existing thresholding techniques, but it gives the best results under the given algorithm for speckle reduction.
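A minimal sketch of the brute-force threshold search pattern only, with plain soft thresholding and a PSNR score against a clean reference standing in for the authors' full pipeline (Wiener and mean filtering stages) and their quality criterion:

```python
# Hedged sketch: try every candidate threshold, keep the one scoring best.
import numpy as np
import pywt

def denoise(img, thr, wavelet='db2', level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='soft') for d in band)
                            for band in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)

def brute_force_threshold(img, candidates, score):
    results = [(score(denoise(img, t)), t) for t in candidates]   # run all candidates
    return max(results)[1]                                        # threshold with best score

clean = np.random.rand(64, 64)
speckled = clean * np.random.gamma(4.0, 1 / 4.0, clean.shape)     # multiplicative speckle
psnr = lambda x: 10 * np.log10(1.0 / np.mean((x[:64, :64] - clean) ** 2))
best = brute_force_threshold(speckled, np.linspace(0.01, 0.5, 20), psnr)
```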

Keywords: Brute force thresholding, directional smoothing, direction dependent mask, undecimated wavelet transformation.

Downloads: 2870
1164 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However, the BER starts to deteriorate as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to that of MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general, and we hereinafter introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to IP-based 4th generation networks.
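A minimal sketch of the classical MRC combiner that GMRC generalizes (the polynomial-kernel GMRC itself is not reproduced), with imperfect channel estimates included to mirror the scenario discussed above:

```python
# Hedged sketch: weight each branch by the conjugate channel estimate and sum.
import numpy as np

rng = np.random.default_rng(1)
L, s = 4, (1 + 1j) / np.sqrt(2)                        # 4 diversity branches, one QPSK symbol
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)   # Rayleigh gains
r = h * s + 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))  # received branches

h_hat = h + 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) # imperfect estimates
s_mrc = np.vdot(h_hat, r) / np.sum(np.abs(h_hat) ** 2)  # MRC combiner output
```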

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.

Downloads: 2144
1163 Fail-safe Modeling of Discrete Event Systems using Petri Nets

Authors: P. Nazemzadeh, A. Dideban, M. Zareiee

Abstract:

In this paper, the effect of faults in the elements and parts of discrete event systems is investigated. When faults occur, some states of the system must be changed and some must be forbidden. To this end, the different states of these elements are examined and a model for the fail-safe behavior of each state is introduced. Replacing the target elements in the preliminary model with their new models by a systematic method leads to a fail-safe discrete event system.
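A minimal sketch fixing notation for a place/transition net step (the paper's specific fail-safe models and supervisory constraints are not reproduced; the two-place net is a placeholder):

```python
# Hedged sketch: enabling check and firing rule of a tiny Petri net.
import numpy as np

pre  = np.array([[1, 0],      # pre-incidence: tokens consumed per transition (rows = places)
                 [0, 1]])
post = np.array([[0, 1],      # post-incidence: tokens produced per transition
                 [1, 0]])
marking = np.array([1, 0])    # initial marking of the two places

def enabled(m, t):            # transition t is enabled if every input place has enough tokens
    return np.all(m >= pre[:, t])

def fire(m, t):
    return m - pre[:, t] + post[:, t]

if enabled(marking, 0):
    marking = fire(marking, 0)
```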

Keywords: Discrete event systems, Fail-safe, Petri nets, Supervisory control.

Downloads: 1609
1162 Numerical Inverse Laplace Transform Using Chebyshev Polynomial

Authors: Vinod Mishra, Dimple Rani

Abstract:

In this paper, a numerical approximate Laplace transform inversion algorithm based on Chebyshev polynomials of the second kind is developed using an odd cosine series. The technique has been tested on three different functions and is found to work efficiently. The illustrations show that the newly developed numerical inverse Laplace transform is very close to the classical analytic inverse Laplace transform.

Keywords: Chebyshev polynomial, Numerical inverse Laplace transform, Odd cosine series.

Downloads: 1396
1161 Effect of Submaximal Eccentric versus Maximal Isometric Contraction on Delayed Onset Muscle Soreness

Authors: Mohamed M. Ragab, Neveen A. Abdel Raoof, Reham H. Diab

Abstract:

Background: Delayed onset muscle soreness (DOMS) is the most common symptom when ordinary individuals and athletes are exposed to unaccustomed physical activity, especially eccentric contraction, which impairs athletic performance, the work ability of ordinary people, and physical functioning. A multitude of methods have been investigated to reduce DOMS. One valuable method to control DOMS is the repeated bout effect (RBE) as a prophylactic measure. Purpose: To compare the repeated bout effect of submaximal eccentric versus maximal isometric contraction on induced DOMS. Methods: Sixty normal male volunteers were assigned randomly into three equal groups. Group A (first study group): 20 subjects received submaximal eccentric contraction of the non-dominant elbow flexors as a prophylactic exercise. Group B (second study group): 20 subjects received maximal isometric contraction of the non-dominant elbow flexors as a prophylactic exercise. Group C (control group): 20 subjects did not receive any prophylactic exercises. Maximal isometric peak torque of the elbow flexors and the patient-rated elbow evaluation (PREE) scale were measured for each subject three times: before, immediately after, and 48 hours after induction of DOMS. Results: Post-hoc tests for maximal isometric peak torque and the PREE scale immediately and 48 hours after induction of DOMS revealed that group A and group B showed significantly less loss of maximal isometric strength and less elbow pain and disability than the control group C; moreover, the submaximal eccentric group A was more effective than the maximal isometric group B, as it showed more rapid recovery of functional strength and lower degrees of elbow pain and disability. Conclusion: Both submaximal eccentric contraction and maximal isometric contraction were effective in preventing DOMS, but submaximal eccentric contraction produced a greater protective effect against muscle damage induced by maximal eccentric exercise performed 2 days later.

Keywords: Delayed onset muscle soreness, maximal isometric peak torque, patient-rated elbow evaluation scale, repeated bout effect.

Downloads: 2081