Search results for: Huffman coding

218 An Evaluation of Algorithms for Single-Echo Biosonar Target Classification

Authors: Turgay Temel, John Hallam

Abstract:

A recent neuro-spiking coding scheme for feature extraction from biosonar echoes of various plants is examined with a variety of stochastic classifiers. The derived feature vectors are employed in well-known stochastic classifiers, including nearest-neighbor, single Gaussian, and Gaussian mixture with EM optimization. Classifier performance is evaluated using cross-validation and bootstrapping techniques. It is shown that the various classifiers perform equivalently and that the modified preprocessing configuration yields considerably improved results.

Keywords: Classification, neuro-spike coding, non-parametric model, parametric model, Gaussian mixture, EM algorithm.

217 Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX

Authors: B. Siva Kumar Reddy, B. Lakshmi

Abstract:

Different modulation orders combined with different coding schemes allow sending more bits per symbol, thus achieving higher throughputs and better spectral efficiencies. However, when using a modulation technique such as 64-QAM with fewer overhead bits, better signal-to-noise ratios (SNRs) are needed to overcome any intersymbol interference (ISI) and maintain a certain bit error ratio (BER). The use of adaptive modulation allows wireless technologies to yield higher throughputs while also covering long distances. The aim of this paper is to implement the Adaptive Modulation and Coding (AMC) feature of the WiMAX PHY in MATLAB and to analyze the performance of the system in different channel conditions (AWGN, Rayleigh, and Rician fading channels) with channel estimation and blind equalization. Simulation results demonstrate that increasing the modulation order increases both throughput and BER. These results reveal a trade-off among modulation order, FFT length, throughput, BER, and spectral efficiency. The BER changes gradually for the AWGN channel and arbitrarily for the Rayleigh and Rician fading channels.
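
To illustrate the AMC principle described above, here is a minimal mode-selection sketch in Python. The SNR thresholds and (modulation, code rate) pairs are illustrative assumptions, not the WiMAX-specified profiles:

```python
# Minimal AMC mode-selection sketch. Thresholds and mode pairs are
# illustrative assumptions, not actual WiMAX profile values.
AMC_TABLE = [
    (6.0, "QPSK", 1 / 2),     # minimum SNR (dB) for each mode
    (12.0, "16-QAM", 1 / 2),
    (18.0, "64-QAM", 2 / 3),
    (22.0, "64-QAM", 3 / 4),
]

def select_mode(snr_db):
    """Pick the highest-rate mode whose SNR threshold is met."""
    chosen = AMC_TABLE[0]
    for entry in AMC_TABLE:
        if snr_db >= entry[0]:
            chosen = entry
    return chosen[1], chosen[2]

print(select_mode(15.3))  # -> ('16-QAM', 0.5)
```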

Keywords: AMC, CSI, CMA, OFDM, OFDMA, WiMAX.

216 Quality-Controlled Compression Method using Wavelet Transform for Electrocardiogram Signals

Authors: Redha Benzid, Farid Marir, Nour-Eddine Bouguechal

Abstract:

This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. Initially, an ECG signal is decomposed using the wavelet transform. Then, the resulting coefficients are iteratively thresholded to guarantee that a predefined goal percent root-mean-square difference (GPRD) is matched within tolerable bounds. The quantization strategy for the extracted non-zero wavelet coefficients (NZWC), combined with RLE, Huffman, and arithmetic encoding of the NZWC and a resulting lookup table, allows high compression ratios with good-quality reconstructed signals.
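
The quality-control loop can be sketched as a bisection search on the hard threshold until the reconstruction PRD hits the goal. This is a minimal illustration assuming a generic inverse_transform callback and an arbitrary tolerance, not the authors' exact procedure:

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def threshold_for_goal_prd(coeffs, inverse_transform, original,
                           goal_prd, tol=0.1, iters=50):
    """Bisect a hard threshold on the wavelet coefficients until the
    reconstruction PRD matches the goal within tolerance."""
    lo, hi = 0.0, float(np.max(np.abs(coeffs)))
    t = hi
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        kept = np.where(np.abs(coeffs) >= t, coeffs, 0.0)
        err = prd(original, inverse_transform(kept))
        if abs(err - goal_prd) <= tol:
            break
        if err > goal_prd:
            hi = t   # too much distortion: lower the threshold
        else:
            lo = t   # quality to spare: raise the threshold
    return t
```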

Keywords: ECG compression, Non-uniform Max-Lloyd quantizer, PRD, Quality-Controlled, Wavelet transform

215 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model

Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf

Abstract:

The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks, and 4 modes for 16x16 macroblocks and 8x8 chroma blocks. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but computational complexity increases significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas, and a quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, smaller blocks are predicted by both gradient and quadratic prediction, while bigger blocks are predicted by gradient prediction alone. Experimental results show that the proposed method reduces complexity by up to 76.07% while maintaining similar PSNR quality, with about a 1.94% bit rate increase on average.
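
A minimal sketch of the kind of homogeneity test that drives such a mode decision follows; the gradient operator and the threshold are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def is_homogeneous(block, threshold=20.0):
    """Classify a luma block as homogeneous if its mean absolute
    horizontal+vertical gradient falls below an assumed threshold."""
    gx = np.abs(np.diff(block.astype(float), axis=1)).mean()
    gy = np.abs(np.diff(block.astype(float), axis=0)).mean()
    return gx + gy < threshold

# Homogeneous blocks would take the cheap gradient predictor;
# the rest would fall through to the quadratic model.
```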

Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.

214 Multi Switched Split Vector Quantization of Narrowband Speech Signals

Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha

Abstract:

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), which is a hybrid of multi-stage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to split vector quantization (SVQ), multi-stage vector quantization (MSVQ), and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity, and lower memory requirements than all of the above product code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements in floats.
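
To illustrate the split-VQ building block that the hybrid rests on, here is a minimal sketch with made-up dimensions and random codebooks; a real coder would train the codebooks on LSF data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume a 10-dimensional LSF vector split into two 5-dim halves,
# each with its own small codebook (sizes are illustrative).
codebooks = [rng.random((32, 5)), rng.random((32, 5))]

def split_vq(lsf):
    """Quantize each half of the vector independently and return
    the chosen codebook indices."""
    halves = np.split(lsf, 2)
    indices = []
    for half, cb in zip(halves, codebooks):
        dists = np.sum((cb - half) ** 2, axis=1)
        indices.append(int(np.argmin(dists)))
    return indices

print(split_vq(rng.random(10)))
```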

Keywords: Linear predictive coding, Multi-stage vector quantization, Switched split vector quantization, Split vector quantization, Line Spectral Frequencies (LSF).

213 Novel Security Strategy for Real Time Digital Videos

Authors: Prakash Devale, R. S. Prasad, Amol Dhumane, Pritesh Patil

Abstract:

Nowadays, video data embedding is a very challenging and interesting approach to keeping real-time video data secure, and the technique can be used with high-level applications. The rate-distortion of an image is not fixed, because the gain provided by accurate image frame segmentation is balanced by the inefficiency of coding objects of arbitrary shape, with losses that depend on both the coding scheme and the object structure. By using a rate controller in association with the encoder, one can dynamically adjust the target bitrate. This paper discusses keeping videos secure by mixing signature data into the original video with negligible distortion, while keeping the steganographic video as close as possible in quality to the original. We propose a method for embedding the signature data into separate video frames using the block Discrete Cosine Transform. These frames are then encoded with the real-time H.264 encoding scheme. Recovery of the original video and the signature data at the receiver end is also proposed.
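
A minimal sketch of block-DCT bit embedding of the kind described follows. The coefficient position, quantization step, and parity rule are illustrative assumptions, not the paper's actual embedding scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 2), step=8.0):
    """Hide one bit in a mid-frequency DCT coefficient of an 8x8
    block via parity of the quantized coefficient (assumed rule)."""
    coeffs = dctn(block.astype(float), norm="ortho")
    q = np.round(coeffs[pos] / step)
    if int(q) % 2 != bit:          # force parity to encode the bit
        q += 1
    coeffs[pos] = q * step
    return idctn(coeffs, norm="ortho")

def extract_bit(block, pos=(3, 2), step=8.0):
    coeffs = dctn(block.astype(float), norm="ortho")
    return int(np.round(coeffs[pos] / step)) % 2
```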

Keywords: Data Hiding, Digital Watermarking, H.264 Video Coding, Rate Control, Block DCT.

212 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images

Authors: Kuo-Cheng Liu

Abstract:

Effective estimation of just noticeable distortion (JND) for images is helpful to increase the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model of measuring the base detection threshold for each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment based on the coefficient variation in the block is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.

Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.

211 Error Rate Performance Comparisons of Precoding Schemes over Fading Channels for Multiuser MIMO

Authors: M. Arulvizhi

Abstract:

In multiuser MIMO communication systems, interuser interference has a strong impact on the transmitted signals. Precoding schemes are employed on multiuser broadcast channels to suppress interuser interference; both linear and nonlinear precoding schemes exist. For massive system dimensions, it is difficult to design a precoding algorithm that combines low computational complexity with good error rate performance over fading channels. This paper describes the error rate performance of precoding schemes over fading channels under the assumption of perfect channel state information at the transmitter. To estimate bit error rate performance, different propagation environments, namely Rayleigh, Rician, and Nakagami fading channels, are considered. The paper presents an error rate comparison over these fading channels for precoding methods such as channel inversion and dirty paper coding in a multiuser broadcast system, using MATLAB simulation. It is observed that the multiuser system achieves better error rate performance with dirty paper coding over the Rayleigh fading channel.
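
Channel inversion precoding, one of the two methods compared, can be sketched as follows, assuming perfect CSI, a square channel, and simplified power normalization:

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_inversion_precode(H, s):
    """Pre-multiply the symbol vector by the channel pseudo-inverse
    so each user sees only its own (scaled) symbol."""
    x = np.linalg.pinv(H) @ s
    gamma = np.sqrt(np.sum(np.abs(x) ** 2))   # transmit-power scale
    return x / gamma, gamma

# 4 users, 4 transmit antennas, Rayleigh-faded channel entries.
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s = np.array([1, -1, 1, 1], dtype=complex)    # e.g. BPSK symbols
x, gamma = channel_inversion_precode(H, s)
print(np.round(H @ x * gamma, 6))             # recovers s (no noise)
```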

Keywords: Multiuser MIMO, channel inversion precoding, dirty paper coding, fading channels, BER.

210 Plug and Play Interferometer Configuration using Single Modulator Technique

Authors: Norshamsuri Ali, Hafizulfika, Salim Ali Al-Kathiri, Abdulla Al-Attas, Suhairi Saharudin, Mohamed Ridza Wahiddin

Abstract:

We demonstrate single-photon interference over 10 km using a plug-and-play system for quantum key distribution. The quality of the interferometer is measured by its visibility. The signal is phase coded, and the visibility is derived from the interference effect, which results in a photon count. The setup gives full control of polarization inside the interferometer. The quality measurement of the interferometer is based on counts per second, and the system produces 94% visibility in one of the detectors.
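
For reference, fringe visibility is conventionally computed from the maximum and minimum photon counts as the interference phase is scanned; a minimal sketch with made-up count values:

```python
def visibility(count_max, count_min):
    """Standard fringe visibility V = (max - min) / (max + min)."""
    return (count_max - count_min) / (count_max + count_min)

# Hypothetical counts per second at the fringe extremes:
print(visibility(9700, 300))  # -> 0.94, i.e. 94% visibility
```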

Keywords: single photon, interferometer, quantum key distribution.

209 Orchestra/Percussion Classification Algorithm for Unified Speech Audio Coding System

Authors: Yueming Wang, Rendong Ying, Sumxin Jiang, Peilin Liu

Abstract:

Unified Speech Audio Coding (USAC), the latest MPEG standard for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. The quality of the recovered audio can be enormously increased by a well-designed orchestra/percussion classification and modified subsequent processing. This paper proposes an orchestra/percussion classification algorithm for the USAC system that extracts only 3 Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13, and uses an Iterative Dichotomiser 3 (ID3) decision tree rather than more complex learning methods; the proposed algorithm therefore has lower computational complexity than most existing algorithms. Considering that frequent switching of attributes may degrade the recovered audio signal, this paper also designs a modified subsequent process that helps the whole classification system reach an accuracy as high as 97%, comparable to the classical 99%.
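
A minimal sketch of the classification stage under these assumptions: scikit-learn's entropy-criterion decision tree stands in for ID3, and the 3 MFCC values per frame are assumed to be precomputed (random data is used as a placeholder):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assume X holds 3 MFCC values per frame and y labels frames as
# 0 = orchestra, 1 = percussion (random placeholders here).
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = rng.integers(0, 2, size=200)

# criterion="entropy" gives an ID3-style information-gain split.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)
tree.fit(X, y)
print(tree.predict(X[:5]))
```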

Keywords: ID3 Decision Tree, MFCC, Orchestra/Percussion Classification, USAC

208 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video, and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing, and availability of fast implementation algorithms. However, no proper study has been reported in the literature on the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of the majority of the significant AC coefficients can be modeled by the generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence optimal coding efficiency.
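
A goodness-of-fit check of this kind can be sketched with SciPy; this minimal illustration fits synthetic coefficients, whereas the real tests would use actual LOT/LBT subband data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-in for one subband of significant AC coefficients.
coeffs = stats.gennorm.rvs(beta=0.8, size=5000, random_state=rng)

# Fit a generalized Gaussian and test the fit with Kolmogorov-Smirnov.
beta, loc, scale = stats.gennorm.fit(coeffs)
statistic, p_value = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
print(f"beta={beta:.2f}, KS statistic={statistic:.4f}, p={p_value:.3f}")
```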

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test.

207 Influence of Ambiguity Cluster on Quality Improvement in Image Compression

Authors: Safaa Al-Ali, Ahmad Shahin, Fadi Chakik

Abstract:

Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices, as well as for multimedia content-based description standards. The result of image clustering cannot be precise in some positions, especially for pixels with edge information, which produce ambiguity among the clusters. Even with a good enhancement operator based on PDEs, the quality of the decoded image will depend heavily on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows preserving details inherent to edges as well as uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study to demonstrate the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
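
The ambiguity-cluster idea can be sketched as a post-processing step on fuzzy membership degrees; the margin value and the fuzzy-clustering backend are assumptions, not the authors' exact rule:

```python
import numpy as np

def assign_with_ambiguity(memberships, margin=0.2):
    """Given fuzzy membership degrees per pixel (N x K), assign each
    pixel to its top cluster, or to an 'ambiguity' cluster (-1) when
    the two best memberships are within an assumed margin."""
    order = np.argsort(memberships, axis=1)
    best, second = order[:, -1], order[:, -2]
    idx = np.arange(memberships.shape[0])
    gap = memberships[idx, best] - memberships[idx, second]
    return np.where(gap < margin, -1, best)
```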

Keywords: Ambiguity Cluster, Anisotropic Diffusion, Fuzzy Clustering, Image Compression.

206 Study of Efficiency and Capability LZW++ Technique in Data Compression

Authors: Yusof. Mohd Kamir, Mat Deris. Mohd Sufian, Abidin. Ahmad Faisal Amri

Abstract:

The purpose of this paper is to show the efficiency and capability of LZW++ in data compression. The LZW++ technique is an enhancement of the existing LZW technique; the existing LZW is modified to produce LZW++. LZW reads one character at a time, whereas LZW++ reads three characters at a time. This paper focuses on data compression and tests the efficiency and capability of LZW++ on different data formats such as doc, pdf, and text files. Several experiments have been done with different types of data format. The results show that the LZW++ technique is better than the existing LZW technique in terms of file size.
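
For reference, the baseline LZW loop that LZW++ modifies works roughly as follows; this is a minimal dictionary-coder sketch, not the paper's three-characters-at-a-time variant:

```python
def lzw_compress(data):
    """Baseline LZW: grow a phrase dictionary one character at a
    time and emit dictionary codes."""
    dictionary = {chr(i): i for i in range(256)}
    phrase, codes = "", []
    for ch in data:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate          # keep extending the match
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = ch
    if phrase:
        codes.append(dictionary[phrase])
    return codes

print(lzw_compress("abababab"))  # repeated text compresses well
```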

Keywords: Data Compression, Huffman Encoding, LZW, LZW++, RLL, Size.

205 Vector Space of the Extended Base-triplets over the Galois Field of five DNA Bases Alphabet

Authors: Robersy Sánchez, Ricardo Grau

Abstract:

A plausible architecture of an ancient genetic code is derived from an extended base-triplet vector space over the Galois field of the extended base alphabet {D, G, A, U, C}, where the letter D represents one or more hypothetical bases with unspecific pairing. We hypothesize that the high degeneracy of a primeval genetic code with five bases, together with the gradual origin and improvement of a primitive DNA repair system, could have made possible the transition from the ancient to the modern genetic code. Our results suggest that the Watson-Crick base pairing and the non-specific pairing of the hypothetical ancestral base D used to define the sum and product operations are sufficient to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometric and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. In addition, the Fourier spectrum of extended DNA genome sequences derived from multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also have existed in ancient coding DNA sequences.
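
The vector-space arithmetic can be made concrete as follows; the mapping of the extended alphabet onto GF(5) below is an assumption for demonstration and not necessarily the ordering used by the authors:

```python
# Assumed mapping of the extended alphabet onto GF(5) = Z/5Z.
BASE_TO_GF = {"D": 0, "G": 1, "A": 2, "U": 3, "C": 4}
GF_TO_BASE = {v: k for k, v in BASE_TO_GF.items()}

def triplet_sum(t1, t2):
    """Component-wise sum of two base triplets, i.e. vector
    addition in GF(5)^3."""
    return "".join(GF_TO_BASE[(BASE_TO_GF[a] + BASE_TO_GF[b]) % 5]
                   for a, b in zip(t1, t2))

def scalar_mul(k, t):
    """Scalar multiplication of a triplet by an element of GF(5)."""
    return "".join(GF_TO_BASE[(k * BASE_TO_GF[b]) % 5] for b in t)

print(triplet_sum("GAU", "UCD"))  # component-wise sums mod 5
print(scalar_mul(2, "GAU"))
```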

Keywords: Genetic code vector space, primeval genetic code, power spectrum.

204 Transform-Domain Rate-Distortion Optimization Accelerator for H.264/AVC Video Encoding

Authors: Mohammed Golam Sarwer, Lai Man Po, Kai Guo, Q.M. Jonathan Wu

Abstract:

In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, this mode selection process also makes encoding extremely complex, especially the computation of the rate-distortion cost function, which includes the sum of squared differences (SSD) between the original and reconstructed image blocks and context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. The algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed accelerator is also presented. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The method can easily be applied to mobile video applications such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
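
The transform-domain shortcut rests on Parseval's relation for orthonormal transforms: the SSD between two blocks equals the SSD between their transform coefficients, so distortion can be evaluated without reconstructing pixels. A minimal numerical check, with the orthonormal DCT standing in for the codec's integer transform:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(4)
a = rng.random((4, 4))
b = rng.random((4, 4))

ssd_pixels = np.sum((a - b) ** 2)
ssd_coeffs = np.sum((dctn(a, norm="ortho") - dctn(b, norm="ortho")) ** 2)
print(np.isclose(ssd_pixels, ssd_coeffs))  # True: Parseval's relation
```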

Keywords: Context-adaptive variable length coding (CAVLC), H.264/AVC, rate-distortion optimization (RDO), sum of squared difference (SSD).

203 Video Coding Algorithm for Video Sequences with Abrupt Luminance Change

Authors: Sang Hyun Kim

Abstract:

In this paper, a fast motion compensation algorithm is proposed that improves coding efficiency for video sequences with brightness variations. We also propose a cross entropy measure between histograms of two frames to detect brightness variations. The framewise brightness variation parameters, a multiplier and an offset field for image intensity, are estimated and compensated. Simulation results show that the proposed method yields a higher peak signal to noise ratio (PSNR) compared with the conventional method, with a greatly reduced computational load, when the video scene contains illumination changes.
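
The detection measure can be sketched as follows; the histogram normalization and smoothing epsilon are assumptions, and the paper's exact formulation may differ:

```python
import numpy as np

def histogram_cross_entropy(frame_a, frame_b, bins=256, eps=1e-12):
    """Cross entropy H(p, q) = -sum p log q between the normalized
    intensity histograms of two frames; it grows when the brightness
    distribution shifts between frames."""
    p, _ = np.histogram(frame_a, bins=bins, range=(0, 256), density=True)
    q, _ = np.histogram(frame_b, bins=bins, range=(0, 256), density=True)
    return float(-np.sum(p * np.log(q + eps)))

# A large jump in this measure between consecutive frames would
# flag an abrupt luminance change for special compensation.
```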

Keywords: Motion estimation, Fast motion compensation, Brightness variation compensation, Brightness change detection, Cross entropy.

202 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform

Authors: C. Hemasundara Rao, M. Madhavi Latha

Abstract:

Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on selecting coefficients that belong to different transforms. This method allows: (1) codification of multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show better performance for the selected regions than when the image coding methods were employed on whole images. We believe that this method is an excellent tool for future image compression research, mainly on images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than the other models.

Keywords: VLSI, Discrete Cosine Transform, JPEG, Hartley transform, Radon Transform

201 Target Detection with Improved Image Texture Feature Coding Method and Support Vector Machine

Authors: R. Xu, X. Zhao, X. Li, C. Kwan, C.-I Chang

Abstract:

An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A Cascade-Sliding-Window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.

Keywords: Image texture analysis, feature extraction, target detection, pattern classification.

200 Performance Analysis of IDMA Scheme Using Quasi-Cyclic Low Density Parity Check Codes

Authors: Anurag Saxena, Alkesh Agrawal, Dinesh Kumar

Abstract:

Fourth generation (4G) mobile communication systems were developed to accommodate the required quality of service and data rates. This paper focuses on a multiple access technique proposed for 4G communication systems, namely IDMA (Interleave Division Multiple Access). The basic principle of IDMA is that the interleaver is different for each user, whereas CDMA employs different signatures. IDMA inherits many advantages of CDMA, such as robustness against fading and easy cell planning, and it allows dynamic channel sharing, increases spectral efficiency, and reduces receiver complexity. The performance of IDMA is analyzed using a QC-LDPC coding scheme and compared with LDPC coding; the BER is calculated and plotted in MATLAB.
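
The user-separation principle can be sketched in a few lines; seeded random permutations stand in here for the interleaver designs actually used in IDMA systems:

```python
import numpy as np

def user_interleaver(user_id, length):
    """Each user gets a distinct, reproducible chip permutation;
    the receiver regenerates it from the user id."""
    rng = np.random.default_rng(seed=user_id)
    return rng.permutation(length)

chips = np.arange(8)
pi_1 = user_interleaver(1, 8)
pi_2 = user_interleaver(2, 8)
print(chips[pi_1])  # user 1's interleaved sequence
print(chips[pi_2])  # user 2's differs, which separates the users

# De-interleaving inverts the permutation:
deinterleaved = np.empty(8, dtype=int)
deinterleaved[pi_1] = chips[pi_1]
print(deinterleaved)  # original order restored
```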

Keywords: 4G, QC-LDPC, CDMA, IDMA.

199 Dynamic Decompression for Text Files

Authors: Ananth Kamath, Ankit Kant, Aravind Srivatsa, Harisha J.A

Abstract:

Compression algorithms reduce redundancy in data representation to decrease the storage required for that data. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv (LZ) family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. Decompression is then required to retrieve the original data by lossless means. We present a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared with most currently available text file formats.
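
The principle can be sketched with independently compressed blocks plus an index, so any requested section can be decompressed alone. This minimal illustration uses zlib; the real scheme's block format and codec differ:

```python
import zlib

BLOCK = 4096  # bytes of plain text per independently coded block

def compress_blocks(text):
    """Compress fixed-size blocks independently; the list doubles
    as an index from block number to compressed data."""
    return [zlib.compress(text[i:i + BLOCK])
            for i in range(0, len(text), BLOCK)]

def read_section(blocks, offset, length):
    """Decompress only the blocks covering [offset, offset+length)."""
    first, last = offset // BLOCK, (offset + length - 1) // BLOCK
    data = b"".join(zlib.decompress(b) for b in blocks[first:last + 1])
    start = offset - first * BLOCK
    return data[start:start + length]

blocks = compress_blocks(b"hello world " * 2000)
print(read_section(blocks, 12000, 11))  # -> b'hello world'
```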

Keywords: Compression, Dynamic Decompression, Text file format, Portable Document Format, Compression Ratio.

198 A Novel Arabic Text Steganography Method Using Letter Points and Extensions

Authors: Adnan Abdul-Aziz Gutub, Manal Mohammad Fattani

Abstract:

This paper presents a new steganography approach suitable for Arabic texts. It can be classified under feature-coding steganography methods. The approach hides secret information bits within the letters, benefiting from their inherited points. To mark the specific letters holding secret bits, the scheme exploits two features: the existence of points in the letters and the redundant Arabic extension character. We use pointed letters with an extension to hold the secret bit 'one' and un-pointed letters with an extension to hold 'zero'. This steganography technique is also attractive for other languages with scripts similar to Arabic, such as Persian and Urdu.
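
A minimal sketch of this encoding rule follows; the pointed-letter set is abbreviated and the connectivity rules of Arabic script are ignored for illustration:

```python
# Arabic tatweel (kashida) extension character.
KASHIDA = "\u0640"
# Abbreviated illustrative set of pointed Arabic letters.
POINTED = set("بتثجخذزشضظغفقنية")

def embed_bits(cover, bits):
    """Append a kashida after a pointed letter to encode 1, after an
    un-pointed letter to encode 0 (simplified form of the rule)."""
    out, queue = [], list(bits)
    for ch in cover:
        out.append(ch)
        if queue and ch.isalpha():
            want_pointed = queue[0] == 1
            if (ch in POINTED) == want_pointed:
                out.append(KASHIDA)
                queue.pop(0)
    return "".join(out)
```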

Keywords: Arabic text, Cryptography, Feature coding, Information security, Text steganography, Text watermarking.

197 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods

Authors: C. Kalamani, K. Paramasivam

Abstract:

In VLSI, testing plays an important role. Major problems in testing are test data volume and test power. An important solution for reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL language to reduce test data volume and memory requirements. It was applied to various benchmark test sets and the results compared with other existing methods. The proposed technique achieves a compression ratio of up to 86%.

Keywords: Bitmask dictionary, 2n pattern run-length code, system-on-chip (SOC), test data compression.

196 Coding Considerations for Standalone Molecular Dynamics Simulations of Atomistic Structures

Authors: R. O. Ocaya, J. J. Terblans

Abstract:

The laws of Newtonian mechanics allow ab-initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for coding ab-initio programs for simulation on a standalone computer and illustrates the approach with C language code in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousand particles in a thermodynamic ensemble. Such functions are reusable and can be placed in a redistributable header library file. While both commercial and free packages are available, their heuristic nature prevents dissection. In addition, developing one's own code has the obvious advantage of teaching techniques applicable to new problems.
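
A minimal sketch of the velocity-time integration loop the paper builds on, written here in Python for brevity although the paper itself works in C, with a harmonic force standing in for the embedded-atom potential:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    """Velocity-Verlet time integration, the kind of reusable
    routine the paper places in a header library."""
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass          # half-step velocity
        pos += dt * vel                     # full-step position
        f = force_fn(pos)                   # forces at new positions
        vel += 0.5 * dt * f / mass          # complete the velocity
    return pos, vel

# Example: a harmonic well stands in for an EAM potential.
pos = np.array([[0.1, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, lambda r: -r, 1.0, 1e-3, 1000)
print(pos, vel)
```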

Keywords: C-language, molecular dynamics, simulation, embedded atom method.

195 A New Approach to ECG Biometric Systems: A Comparative Study between LPC and WPD Systems

Authors: Justin Leo Cheang Loong, Khazaimatol S Subari, Rosli Besar, Muhammad Kamil Abdullah

Abstract:

In this paper, a novel method for a biometric system based on the ECG signal is proposed, using spectral coefficients computed through linear predictive coding (LPC). ECG biometric systems have traditionally used characteristics of fiducial points of the ECG signal as the feature set. Such systems have been shown to contain loopholes, so a non-fiducial system allows for tighter security. In the proposed system, incorporating non-fiducial features from the LPC spectrum produced segment and subject recognition rates of 99.52% and 100%, respectively. These recognition rates outperformed a biometric system based on the wavelet packet decomposition (WPD) algorithm in terms of both recognition rate and computation time, allowing LPC to be used in a practical ECG biometric system that requires fast, stringent, and accurate recognition.
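
The LPC feature extraction at the heart of the method can be sketched with the classic Levinson-Durbin recursion; the model order, windowing, and preprocessing used by the authors are not reproduced:

```python
import numpy as np

def lpc(signal, order):
    """LPC coefficients via the Levinson-Durbin recursion on the
    signal's autocorrelation sequence."""
    n = len(signal)
    r = np.correlate(signal, signal, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]  # reflect previous coeffs
        a[i] = k
        err *= (1.0 - k * k)
    return a

rng = np.random.default_rng(5)
segment = rng.standard_normal(512)   # stand-in for an ECG segment
print(lpc(segment, order=12)[:4])
```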

Keywords: Biometric, ECG, linear predictive coding, wavelet packet decomposition

194 Performance Evaluation of Cooperative Diversity in Flat Fading Channel with Error Control Coding

Authors: Oluseye Adeniyi Adeleke, Mohd Fadzli Salleh

Abstract:

Cooperative communication provides transmit diversity even when, due to size constraints, mobile units cannot accommodate multiple antennas. A versatile cooperation method called coded cooperation has been developed, in which cooperation is implemented through channel coding with a view to controlling the errors inherent in wireless communication. In this work we evaluate the performance of coded cooperation in a flat Rayleigh fading environment using the pairwise error probability (PEP). We derive the PEP for a flat fading scenario in coded cooperation and then relate it to the signal-to-noise ratio of the users in the network. Results show that an increase in SNR leads to a decrease in PEP. We also carried out simulations to validate the result.

Keywords: Channel state information, coded cooperation, cooperative systems, pairwise-error-probability, Reed-Solomon codes.

193 Unequal Error Protection for Region of Interest with Embedded Zerotree Wavelet

Authors: T. Hirner, J. Polec

Abstract:

This paper describes a new method of unequal error protection (UEP) for a region of interest (ROI) with the embedded zerotree wavelet algorithm (EZW). The ROI technique is important in applications where different parts of the image have different importance. In ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques. In the proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The experimental results verify the effectiveness of the design and compare unequal error protection (UEP) of image transmission with a defined ROI against equal error protection (EEP) over multiple noisy channels.

Keywords: embedded zerotree wavelet (EZW), equal error protection (EEP), region of interest (ROI), RS code, unequal error protection (UEP)

192 Error Correction Codes in Wireless Sensor Network: An Energy Aware Approach

Authors: Mohammad Rakibul Islam

Abstract:

Link reliability and transmit power are two important design constraints in wireless network design. Error control coding (ECC) is a classic approach used to increase link reliability and lower the required transmit power. It provides coding gain, resulting in transmitter energy savings at the cost of added decoder power consumption. But the choice of ECC is critical in the case of wireless sensor networks (WSNs). Since WSNs are energy constrained by nature, both BER and power consumption have to be taken into account. This paper develops a step-by-step approach to finding suitable error control codes for WSNs. Several simulations were run considering different error control codes, and the results show that RS(31,21) fits both the BER and power consumption criteria.

Keywords: Error correcting code, RS, BCH, wireless sensor networks.

191 Effect of Scene Changing on Image Sequences Compression Using Zero Tree Coding

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

We study in this paper the effect of scene changes on an image sequence coding system using the Embedded Zerotree Wavelet (EZW). The scene change considered here is the full motion that may occur. A special image sequence is generated in which scene changes occur randomly. Two scenarios are considered. In the first scenario, the system must provide the best possible reconstruction quality by managing the bit rate (BR) when a scene change occurs. In the second scenario, the system must keep the bit rate as constant as possible by managing the reconstruction quality. The first scenario is motivated by the availability of a wide-band transmission channel, where the bit rate may be increased to keep the reconstruction quality above a given threshold. The second scenario concerns narrow-band transmission channels, where increasing the bit rate is not possible; in this case, applications for which the reconstruction quality is not a constraint may be considered. The simulations are performed with a five-scale wavelet decomposition using the biorthogonal 9/7-tap filter bank. Entropy coding is performed using a specific binary codebook and the EZW algorithm. Experimental results are presented and compared to LEAD H263 EVAL. It is shown that if the reconstruction quality is the constraint, the system increases the bit rate to obtain the required quality. When the bit rate must be constant, the system cannot provide the required quality while a scene change occurs, but it improves the quality once the scene change has passed.

Keywords: Image Sequence Compression, Wavelet Transform, Scene Changing, Zero Tree, Bit Rate, Quality.

190 Image Compression Using Multiwavelet and Multi-Stage Vector Quantization

Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, P. Navaneethan

Abstract:

Existing image coding standards generally degrade at low bit rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties essential for signal processing, such as orthogonality, short support, linear-phase symmetry, and a high order of approximation through vanishing moments. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with results obtained from scalar wavelets.

Keywords: Image compression, Multiwavelets, Multi-stage vector quantization.

189 Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application

Authors: Janet B. Badong-Badilla

Abstract:

In Jesuit universities, laypersons, who come from the same or different faith backgrounds or traditions, are considered collaborators in mission. The Jesuits themselves support the contributions of the lay partners in realizing the mission of the Society of Jesus and recognize the important role that they play in education. This study aims to investigate and generate particular notions and understandings of the lived experience of being a lay partner in Jesuit universities in the Philippines, particularly among those involved in higher education. Using the qualitative approach introduced by grounded theorist Barney Glaser, the lay partners’ concept of being a partner, as lived in higher education, is generated systematically from data collected in the field, primarily through in-depth interviews, field notes, and observations. Glaser’s constant comparative method of data analysis is used, going through the phases of open coding, theoretical coding, and selective coding, from memoing to theoretical sampling to sorting and then writing. In this study, Glaser’s grounded theory as a methodology provides a substantial insight into, and articulation of, the layperson’s actual experience of being a partner of the Jesuits in education. Such articulation provides a phenomenological framework for understanding the meaning and core characteristics of Jesuit-lay partnership in Jesuit institutions of higher learning in the country. This study is expected to provide a framework or model for lay partnership in academic institutions that share the practice of engaging lay partners in mission.

Keywords: Grounded theory, Jesuit mission in higher education, lay partner, lived experience.
