Search results for: lossless compression
Paper Count: 495

495 Efficient Lossless Compression of Weather Radar Data

Authors: Wei-hua Ai, Wei Yan, Xiang Li

Abstract:

Data compression is used operationally to reduce bandwidth and storage requirements. An efficient method for lossless compression of weather radar data is presented. The proposed method takes the characteristics of the data into account and applies optimal linear prediction to the PPI (plan position indicator) images in the weather radar data. Because consecutive PPI images are highly similar, the prediction algorithm achieves a dramatic reduction in source entropy. Several lossless compression methods are then used to compress the predicted data. Experimental results show that, for weather radar data, the method proposed in this paper outperforms the other methods.

Keywords: Lossless compression, weather radar data, optimal linear prediction, PPI image
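
As an illustration of the idea, the sketch below forms the residual between consecutive PPI scans and measures its first-order entropy. The fixed prediction coefficient a is a simplification; the paper derives an optimal predictor from the data, so treat this as a minimal stand-in only.

    import numpy as np

    def prediction_residual(current_ppi, previous_ppi, a=1.0):
        # Linear prediction of the current scan from the previous one;
        # a fixed coefficient is a stand-in for the paper's optimal
        # (data-derived) linear predictor.
        pred = np.round(a * previous_ppi).astype(np.int32)
        return current_ppi.astype(np.int32) - pred

    def entropy_bits(values):
        # First-order entropy in bits per symbol; a lower value means
        # the residual compresses better with a lossless coder.
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())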

494 An Analysis of Compression Methods and Implementation of Medical Images in Wireless Network

Authors: C. Rajan, K. Geetha, S. Geetha

Abstract:

The motivation of image compression is to reduce the irrelevance and redundancy of image data so that the data can be stored or transmitted efficiently. Several types of compression methods are available. Without compression, file sizes are significantly larger, usually several megabytes; with compression it is possible to reduce a file to around 10% of its original size without noticeable loss in quality. Image compression can be lossless or lossy, and compression techniques can be applied to image, audio, video and text data. This research work mainly concentrates on encoding methods, the DCT, compression methods, security, etc. Different methodologies and network simulations have been analyzed, and the various compression methodologies and their performance metrics are presented in tabular form.

Keywords: Image compression techniques, encoding, DCT, lossy compression, lossless compression, JPEG.

493 Efficient Secured Lossless Coding of Medical Images – Using Modified Runlength Coding for Character Representation

Authors: S. Annadurai, P. Geetha

Abstract:

Lossless compression schemes with secure transmission play a key role in telemedicine applications, helping in accurate diagnosis and research. Traditional cryptographic algorithms for data security are not fast enough to process vast amounts of data. Hence, the novel secured lossless compression approach proposed in this paper is based on a reversible integer wavelet transform, the EZW algorithm, a new modified runlength coding for character representation, and selective bit scrambling. The use of the lifting scheme allows generating truly lossless integer-to-integer wavelet transforms. Images are compressed/decompressed by the well-known EZW algorithm. The proposed modified runlength coding greatly improves the compression performance and also increases the security level. This work employs a scrambling method that is fast, simple to implement, and provides security. The lossless compression ratios and distortion performance of the proposed method are found to be better than those of other lossless techniques.

Keywords: EZW algorithm, lifting scheme, lossless compression, reversible integer wavelet transform, secure transmission, selective bit scrambling, modified runlength coding.
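
For orientation, a plain run-length coder is sketched below; the paper's modified runlength coding for character representation differs in detail, so treat this only as the baseline it modifies.

    def rle_encode(data: bytes):
        # Plain run-length coding: emit (symbol, run length) pairs,
        # capping runs at 255 so each pair fits in two bytes.
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i] and j - i < 255:
                j += 1
            runs.append((data[i], j - i))
            i = j
        return runs

    def rle_decode(runs):
        # Exact inverse: expand each (symbol, run length) pair.
        return bytes(b for sym, n in runs for b in [sym] * n)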

492 A Survey on Lossless Compression of Bayer Color Filter Array Images

Authors: Alina Trifan, António J. R. Neves

Abstract:

Although most digital cameras acquire images in a raw format, based on a Color Filter Array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in prediction-based methods.

Keywords: Bayer images, CFA, lossless compression, image coding standards.
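
The channel-splitting pre-processing stage is simple enough to sketch directly; the fragment below (assuming an RGGB mosaic layout) produces the three single-channel images that are then fed to the unmodified lossless coders.

    import numpy as np

    def split_bayer_rggb(raw):
        # Separate the RGGB mosaic into per-channel planes so each plane
        # is smooth, avoiding the colour transitions that raise entropy.
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]   # green pixels on red rows
        g2 = raw[1::2, 0::2]   # green pixels on blue rows
        b  = raw[1::2, 1::2]
        g  = np.concatenate([g1, g2], axis=0)  # single green image
        return r, g, b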

491 Dynamic Decompression for Text Files

Authors: Ananth Kamath, Ankit Kant, Aravind Srivatsa, Harisha J.A

Abstract:

Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Lossless compression researchers have developed highly sophisticated approaches, such as Huffman encoding, arithmetic encoding, the Lempel-Ziv (LZ) family, Dynamic Markov Compression (DMC), Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based algorithms. Decompression is then required to retrieve the original data losslessly. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most currently available text file formats.

Keywords: Compression, Dynamic Decompression, Text file format, Portable Document Format, Compression Ratio.
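
A minimal sketch of the principle, assuming zlib as the per-block coder: the file is compressed as independently decodable blocks, so reading a requested section costs only the blocks it overlaps.

    import zlib

    def compress_blocks(text: bytes, block_size=4096):
        # Independent blocks plus an implicit index (the list order)
        # enable random access into the compressed file.
        return [zlib.compress(text[i:i + block_size])
                for i in range(0, len(text), block_size)]

    def read_section(blocks, block_size, offset, length):
        # Decompress only the blocks covering [offset, offset + length).
        first = offset // block_size
        last = (offset + length - 1) // block_size
        data = b"".join(zlib.decompress(blocks[k])
                        for k in range(first, last + 1))
        start = offset - first * block_size
        return data[start:start + length]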

490 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication, so to improve the rate of transmission within the limited bandwidth, image data must be compressed before transmission. There are basically two types of compression: lossy and lossless. Although lossy compression gives more compression than lossless compression, the accuracy of the retrieved image is lower for lossy compression than for lossless compression. The JPEG image compression system uses Huffman coding; the JPEG2000 coding system uses a wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet coders. Other coding methods have recently been suggested to further improve compression, and an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis over different images is presented for an EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

489 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golomb-Rice Code

Authors: N. Muthukumaran, R. Ravi

Abstract:

A simulation-based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, and it is implemented and analyzed in a VLSI (Very Large Scale Integration) simulation environment. The goal is to analyze the performance of lossless image compression, reducing the image data without losing image quality, within a VLSI-based FELICS design. The FELICS algorithm codes pixels with a simplified adjusted binary code and a Golomb-Rice code, and the design is evaluated pixel by pixel in the VLSI domain. These choices achieve high processing speed while minimizing area and power: the simplified adjusted binary code reduces the number of arithmetic operations. A color-difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS design provides an effective hardware architecture with a regular, pipelined data flow and four-stage parallelism. With two-level parallelism, consecutive pixels can be classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multilevel parallelism.

Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golomb-Rice code, High Definition display, VLSI Implementation.
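
The adjusted binary code at the heart of FELICS is small enough to sketch. The version below is the plain phase-in variant for a value x in [0, n); FELICS additionally rotates the assignment so the shorter codewords land on mid-range values, which is omitted here.

    import math

    def adjusted_binary(x, n):
        # Phase-in ("adjusted binary") code: with k = ceil(log2 n), the
        # first 2**k - n values get (k-1)-bit codewords, the rest k bits.
        if n == 1:
            return ""                        # only one value: zero bits
        k = math.ceil(math.log2(n))
        m = 2 ** k - n                       # number of short codewords
        if x < m:
            return format(x, "0{}b".format(k - 1))
        return format(x + m, "0{}b".format(k))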

488 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires storage of large quantities of digitized clinical data, and due to constrained bandwidth and storage capacity, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion; in lossy compression the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of image quality. Here, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different imaging modalities: CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results it can be concluded that DCT gives higher PSNR values and FIC gives a higher compression ratio. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression for storage and transmission is the priority, without losing diagnostic picture quality.

Keywords: DCT, FIC, PIFS, PSNR.
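
The fidelity figure used throughout, PSNR, is computed as below for 8-bit images:

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        # Peak signal-to-noise ratio in dB; higher means the lossy
        # reconstruction is closer to the original image.
        mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)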

487 Detection of Breast Cancer in the JPEG2000 Domain

Authors: Fayez M. Idris, Nehal I. AlZubaidi

Abstract:

Breast cancer detection techniques have been reported to aid radiologists in analyzing mammograms. We note that most techniques are performed on uncompressed digital mammograms, yet mammogram images are huge, necessitating the use of compression to reduce storage and transmission requirements. In this paper, we present an algorithm for the detection of microcalcifications in the JPEG2000 domain. The algorithm is based on the statistical properties of the wavelet transform that the JPEG2000 coder employs. Simulations were carried out at different compression ratios. The sensitivity of the algorithm ranges from 92% with a false positive rate of 4.7 (using lossless compression) down to 66% with a false positive rate of 2.1 (using lossy compression at a ratio of 100:1).

Keywords: Breast cancer, JPEG2000, mammography, microcalcifications.
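
As a rough illustration of detection on wavelet statistics (not the paper's JPEG2000-domain algorithm, which works on the coder's own coefficients), the sketch below flags unusually large high-frequency coefficients as microcalcification candidates, with a hypothetical significance factor k:

    import numpy as np
    import pywt

    def candidate_map(image, wavelet="bior4.4", k=3.0):
        # One-level 2-D DWT; microcalcifications are small and bright,
        # so they show up as outliers in the detail (high-pass) bands.
        _, (lh, hl, hh) = pywt.dwt2(image.astype(float), wavelet)
        detail = np.abs(lh) + np.abs(hl) + np.abs(hh)
        return detail > detail.mean() + k * detail.std()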

486 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, both lossy and lossless, over wireless sensor networks is presented. Images have a considerably higher storage requirement than text, and while transmitting multimedia content there is a chance of packets being dropped due to noise and interference. At the receiver end, packets that carry valuable information might be damaged or lost due to noise, interference and congestion. In order to prevent the valuable information from being dropped, various retransmission schemes have been proposed; the scheme proposed here uses QTD, an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error, and bits per pixel of the compressed image, as well as analysis of the difficulties arising during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality, with the algorithm implemented in MATLAB 7.1 and the NS2 simulator.

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.
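
QTD itself is compact; a sketch of the decomposition is below, with homogeneity measured as the intensity range within a block against a hypothetical threshold:

    import numpy as np

    def qtdecomp(block, threshold, x=0, y=0, leaves=None):
        # Recursively split a square, power-of-two block until each leaf
        # is homogeneous (max - min within the threshold) or 1x1.
        if leaves is None:
            leaves = []
        if block.max() - block.min() <= threshold or block.shape[0] == 1:
            leaves.append((y, x, block.shape[0]))   # corner and size
            return leaves
        h = block.shape[0] // 2
        for dy in (0, h):
            for dx in (0, h):
                qtdecomp(block[dy:dy + h, dx:dx + h], threshold,
                         x + dx, y + dy, leaves)
        return leaves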

485 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. There are many methods of image compression available, classified as lossless and lossy. In a lossy compression method the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method in which an image is coded as a set of contractive transformations in a complete metric space; the set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by PIFS using quadtree partitioning. PIFS is applied to different modalities of images: ultrasound, CT scan, angiogram, X-ray and mammogram. In each modality approximately twenty images are considered and the average values of compression ratio and PSNR are computed. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all modalities of images the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied, and the quality of the decompressed image is assessed by the PSNR values. From the results it is observed that the compression ratio increases with the tolerance factor, and mammograms have the highest compression ratio. Because of the properties of fractal compression, the quality of the image is not degraded up to an optimum value of the tolerance factor, Tmax, equal to 8.

Keywords: Fractal image compression, IFS, PIFS, PSNR, Quadtree partitioning.

484 Scintigraphic Image Coding of Region of Interest Based On SPIHT Algorithm Using Global Thresholding and Huffman Coding

Authors: A. Seddiki, M. Djebbouri, D. Guerchi

Abstract:

Medical imaging produces pictures of the human body in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression of the region of interest in scintigraphic images, based on the SPIHT algorithm and global transform thresholding using Huffman coding.

Keywords: Global Thresholding Transform, Huffman Coding, Region of Interest, SPIHT Coding, Scintigraphic images.
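
The Huffman stage is standard; a compact code-table builder using a heap is sketched below:

    import heapq
    from collections import Counter

    def huffman_code(data):
        # Build a prefix-free code table {symbol: bitstring} from symbol
        # frequencies by repeatedly merging the two lightest subtrees.
        heap = [[wt, [sym, ""]] for sym, wt in Counter(data).items()]
        heapq.heapify(heap)
        if len(heap) == 1:                       # degenerate single symbol
            return {heap[0][1][0]: "0"}
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heapq.heappop(heap)[1:])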

483 A New Technique for Progressive ECG Transmission using Discrete Radon Transform

Authors: Amine Naït-Ali

Abstract:

The aim of this paper is to present a new method for progressive transmission of electrocardiogram (ECG) signals. The idea consists of transforming an ECG signal into an image containing one beat in each row. In the first step, the beats are synchronized in order to reduce the high frequencies due to inter-beat transitions. The obtained image is then transformed using a discrete version of the Radon Transform (DRT). Transmitting the ECG thus amounts to transmitting the most significant energy of the transformed image in the Radon domain. For decoding purposes, the receiver needs to use the inverse Radon transform as well as the two synchronization frames. The presented protocol can be adapted for lossy to lossless compression systems. In lossy mode we show that the compression ratio can be multiplied by an average factor of 2 while keeping an acceptable quality of the reconstructed signal. These results have been obtained on real signals from the MIT database.

Keywords: Discrete Radon Transform, ECG compression, synchronization.
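
A minimal sketch of the signal-to-image step and the transform, using scikit-image's radon; the beat onsets are assumed to come from a QRS detector, which is not shown.

    import numpy as np
    from skimage.transform import radon

    def ecg_to_beat_image(signal, beat_starts, beat_len):
        # One synchronized beat per row, so inter-beat redundancy becomes
        # low-frequency 2-D structure that the Radon transform captures.
        return np.stack([signal[s:s + beat_len] for s in beat_starts])

    def ecg_radon(beat_image, n_angles=64):
        # Discrete Radon transform; transmitting its most significant
        # energy first enables progressive reconstruction at the receiver.
        theta = np.linspace(0.0, 180.0, n_angles)
        return radon(beat_image, theta=theta, circle=False)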

482 A Lossless Watermarking Based Authentication System For Medical Images

Authors: Samia Boucherkha, Mohamed Benmohamed

Abstract:

In this paper we investigate watermarking-based authentication applied to the medical imagery field. We first give an overview of watermarking technology, paying attention to fragile watermarking since it is the usual scheme for authentication. We then analyze the requirements for image authentication and integrity in medical imagery, and we show that invertible schemes are the best suited for this particular field. A well-known authentication method is studied and adapted here for interleaving patient information and a message authentication code with medical images in a reversible manner, that is, using lossless compression. The resulting scheme enables, on one side, the exact recovery of the original image, which can be unambiguously authenticated, and on the other side, the patient information to be saved or transmitted in a confidential way. To ensure greater security, the patient information is encrypted before being embedded into the images.

Keywords: Medical Imaging, Invertible Watermarking, Authentication, Integrity.
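
A hypothetical sketch of the reversible idea (not the studied method itself): losslessly compress the LSB plane, then place the compressed bits, the patient record and a MAC back into the plane, so the exact original can be restored after authentication. Encryption is omitted here; the MAC uses HMAC-SHA256.

    import zlib, hmac, hashlib

    def embed(lsb_plane: bytes, patient_record: bytes, key: bytes):
        # Losslessly compress the original LSBs; the space saved carries
        # the payload and its message authentication code.
        packed = zlib.compress(lsb_plane, 9)
        mac = hmac.new(key, patient_record, hashlib.sha256).digest()
        container = (len(packed).to_bytes(4, "big") + packed
                     + mac + patient_record)
        assert len(container) <= len(lsb_plane), "plane not compressible enough"
        return container.ljust(len(lsb_plane), b"\x00")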

481 Performance Evaluation of Compression Algorithms for Developing and Testing Industrial Imaging Systems

Authors: Daniel F. Garcia, Julio Molleda, Francisco Gonzalez, Ruben Usamentiaga

Abstract:

The development of many measurement and inspection systems based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the real-time image processing algorithms for measurement or inspection be developed under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available for implementing a real-time recording service. The results allow establishing a trade-off between the compression of the image size and the CPU time required to reach that compression level.

Keywords: Lossless image compression, codec performance evaluation, grayscale codec comparison, real-time image recording.
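
The evaluation reduces to measuring ratio against CPU time per codec. A minimal harness over the general-purpose coders in the Python standard library (stand-ins for the image codecs evaluated in the paper):

    import time, zlib, bz2, lzma

    def benchmark(frame: bytes):
        # Returns {codec: (compression ratio, seconds)} so a recording
        # service can pick the best ratio that still meets its deadline.
        results = {}
        for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
            t0 = time.perf_counter()
            out = codec.compress(frame)
            results[name] = (len(frame) / len(out), time.perf_counter() - t0)
        return results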

480 Near-Lossless Image Coding based on Orthogonal Polynomials

Authors: Krishnamoorthy R, Rajavijayalakshmi K, Punidha R

Abstract:

In this paper, a near-lossless image coding scheme based on the Orthogonal Polynomials Transform (OPT) is presented. The polynomial operators and polynomial basis operators are obtained from a set of orthogonal polynomial functions for the proposed transform coding. The image is partitioned into a number of distinct square blocks and the proposed transform coding is applied to each of these individually. After the transform is applied, the transformed coefficients are rearranged into a sub-band structure, and the Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes and the performance is compared with the existing Discrete Cosine Transform (DCT) coding scheme.

Keywords: Near-lossless Coding, Orthogonal Polynomials Transform, Embedded Zerotree Coding

479 Highly Scalable, Reversible and Embedded Image Compression System

Authors: Federico Pérez González, Iñaki Goiricelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. It is a continuous-tone still image compression system that combines lossy and lossless compression, making use of finite-arithmetic reversible transforms: both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system, depending on the application, which makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself encodes a compressed still image; however, the use of a packing system on the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.

478 Reversible, Embedded and Highly Scalable Image Compression System

Authors: Federico Pérez González, Iñaki Goirizelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

In this work a new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. The coding is a continuous-tone still image compression system that combines lossy and lossless compression, making use of finite-arithmetic reversible transforms: both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system, depending on the application, which makes it possible to reconfigure the elements of the image and obtain different importance levels from which the bit stream will be generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself encodes a compressed still image; however, the use of a packing system on the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several improvement levels.

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.

477 Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems

Authors: Mario Mastriani

Abstract:

In this work, we develop the concept of supercompression, i.e., compression above the compression standard used; in this context, both compression rates are multiplied. Supercompression is based on super-resolution. That is to say, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask inside the decoder restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards; specifically, the mentioned mask is coded inside the texture memory of a GPGPU.

Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.
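
A CPU sketch of the two stages under simple assumptions (bicubic resampling, and an unsharp mask standing in for the paper's GPGPU convolutive mask):

    import numpy as np
    from scipy import ndimage

    def pre_encode(frame, factor=2):
        # Spatial compression stage: downsample before the bit-per-pixel
        # codec so the two compression ratios multiply.
        return ndimage.zoom(frame, 1.0 / factor, order=3)

    def post_decode(small, factor=2, amount=1.0):
        # Super-resolution stage: upsample, then sharpen to restore the
        # edges and reduce the interpolation blur.
        up = ndimage.zoom(small, factor, order=3)
        blurred = ndimage.gaussian_filter(up, sigma=1.0)
        return up + amount * (up - blurred)     # unsharp masking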

476 A Novel Compression Algorithm for Electrocardiogram Signals based on Wavelet Transform and SPIHT

Authors: Sana Ktata, Kaïs Ouni, Noureddine Ellouze

Abstract:

An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored and analyzed without losing the clinical information content. A wavelet ECG data codec based on the Set Partitioning In Hierarchical Trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm has achieved notable success in still image coding. We modified the algorithm for the one-dimensional (1-D) case and applied it to the compression of ECG data. This compression method achieves a small percent root-mean-square difference (PRD) and a high compression ratio with low implementation complexity. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. Compression ratios of up to 48:1 for ECG signals lead to acceptable results for visual inspection.

Keywords: Discrete Wavelet Transform, ECG compression, SPIHT.
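
The distortion measure used, PRD, is computed as:

    import numpy as np

    def prd(original, reconstructed):
        # Percent root-mean-square difference between the original and
        # reconstructed ECG; smaller is better.
        err = np.sum((original - reconstructed) ** 2)
        return 100.0 * np.sqrt(err / np.sum(original ** 2))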

475 An Improvement of PDLZW Implementation with a Modified WSC Updating Technique on FPGA

Authors: Perapong Vichitkraivin, Orachat Chitsobhuk

Abstract:

In this paper, an improvement of the PDLZW implementation with a new dictionary updating technique is proposed. A unique dictionary is partitioned into hierarchical variable word-width dictionaries, which allows the dictionaries to be searched in parallel. Moreover, a barrel shifter is adopted for loading a new input string into the shift register in order to achieve a faster speed. The original PDLZW uses a simple FIFO update strategy, which is not efficient, so a new window-based updating technique is implemented to better reflect how often each particular address in the window is referred to. A freezing policy is applied to the most often referred address, which is not updated until all the other addresses in the window have the same priority. This guarantees that the more often referred addresses are not updated until their time comes. The updating policy improves the compression efficiency of the proposed algorithm while keeping the architecture low-complexity and easy to implement.

Keywords: lossless data compression, LZW algorithm, PDLZW algorithm, WSC and dictionary update.
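
A hypothetical software rendering of the replacement rule (the paper implements it in hardware): the most-referred address in the window is frozen, and FIFO replacement skips it until every address reaches the same priority.

    def choose_victim(window, refs):
        # window: addresses in FIFO order; refs: reference count per
        # address. Returns the address the next dictionary update may
        # overwrite under the freezing policy.
        hottest = max(window, key=lambda a: refs.get(a, 0))
        if all(refs.get(a, 0) == refs.get(hottest, 0) for a in window):
            return window[0]                 # all equal: plain FIFO
        return next(a for a in window if a != hottest)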

474 Wavelet Compression of ECG Signals Using SPIHT Algorithm

Authors: Mohammad Pooyan, Ali Taheri, Morteza Moazami-Goudarzi, Iman Saboori

Abstract:

In this paper we present a novel approach to wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression; here we use a modified version of SPIHT for one-dimensional signals. We applied the wavelet transform with SPIHT coding to different records of the MIT-BIH database. The results show the high efficiency of this method in ECG compression.

Keywords: ECG compression, wavelet, SPIHT.

473 Compression of Semistructured Documents

Authors: Leo Galambos, Jan Lansky, Katsiaryna Chernik

Abstract:

EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, along with a snippet that tries to briefly show a match. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page), which implies that the search engine is required to store the full text of the documents as part of the index. Such a requirement leads us to pick an appropriate compression algorithm to reduce the space demand. One solution could be to use common compression methods, for instance gzip or bzip2, but it might be preferable to develop a new method that takes advantage of the document structure, or rather, the textual character of the documents. There already exist special text compression algorithms and methods for the compression of XML documents. The aim of this paper is an integration of the two approaches to achieve an optimal compression ratio.

Keywords: Compression, search engine, HTML, XML.
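
A toy measurement of the idea that exploiting document structure helps: split markup from text, compress the two streams separately, and compare with compressing the page whole (bzip2 as the backend; the paper's method is more elaborate).

    import bz2
    import re

    def split_vs_whole(html: bytes):
        # Markup and natural-language text have different statistics, so
        # separating them can improve the overall compression ratio.
        tags = b"".join(re.findall(rb"<[^>]*>", html))
        text = re.sub(rb"<[^>]*>", b" ", html)
        split = len(bz2.compress(tags)) + len(bz2.compress(text))
        whole = len(bz2.compress(html))
        return split, whole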

472 Effective Context Lossless Image Coding Approach Based on Adaptive Prediction

Authors: Grzegorz Ulacha, Ryszard Stasiński

Abstract:

In this paper an effective context-based lossless coding technique is presented. Three principal and a few auxiliary contexts are defined. The predictor adaptation technique is an improved CoBALP algorithm, denoted CoBALP+. A cumulated predictor error combining eight bias estimators is calculated. It is shown experimentally that the new technique is time-effective: it outperforms the well-known methods having reasonable time complexity, and is inferior only to extremely computationally complex ones.

Keywords: Adaptive prediction, context coding, image lossless coding, prediction error bias correction.
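
The bias-correction idea reduces to tracking the mean prediction error per context; a single-estimator sketch is below (the paper combines eight such estimators).

    def corrected(pred, ctx, err_sum, err_cnt):
        # Subtract the running mean error observed in this context from
        # the raw prediction, removing the context-dependent bias.
        return pred - err_sum[ctx] / max(err_cnt[ctx], 1)

    def update(ctx, err, err_sum, err_cnt):
        # After coding each pixel, fold its error into the estimator.
        err_sum[ctx] += err
        err_cnt[ctx] += 1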

471 A Parallel Quadtree Approach for Image Compression using Wavelets

Authors: Hamed Vahdat Nejad, Hossein Deldari

Abstract:

Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. Image compression is one of the major applications of wavelet transforms in image processing, and is considered one of the most powerful methods, providing a high compression ratio. However, its implementation is very time-consuming. On the other hand, parallel computing technologies are an efficient means of image compression using wavelets. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message-passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.

Keywords: Image compression, MPI, Parallel computing, Wavelets.

470 Parallel Image Compression and Analysis with Wavelets

Authors: M. Kutila, J. Viitanen

Abstract:

This paper presents image compression with a wavelet-based method. The wavelet transform divides an image into low-pass and high-pass filtered parts. The traditional JPEG compression technique requires lower computation power with feasible losses when only compression is needed. However, there is an obvious need for wavelet-based methods in certain circumstances: the methods are intended for applications in which image analysis is done in parallel with compression. Furthermore, the high-frequency bands can be used to detect changes or edges. Wavelets enable hierarchical analysis of the low-pass filtered sub-images: the first analysis can be done on a small image, and only if anything interesting is found is the whole image processed or reconstructed.

Keywords: Image compression, JPEG, wavelet, VLC.
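
The hierarchical analysis can be sketched with PyWavelets: each level hands back a small low-pass image to analyze first, plus high-pass bands usable for edge or change detection.

    import pywt

    def hierarchical_analysis(image, wavelet="haar", levels=3):
        # Yield (low-pass, (LH, HL, HH)) per level; the caller inspects
        # the small low-pass image and descends only when needed.
        for _ in range(levels):
            image, details = pywt.dwt2(image, wavelet)
            yield image, details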

469 Colour Image Compression Method Based On Fractal Block Coding Technique

Authors: Dibyendu Ghoshal, Shimal Das

Abstract:

Image compression based on fractal coding is a lossy compression method, normally used for gray-level images with range and domain blocks of rectangular shape. Fractal-based digital image compression provides a large compression ratio, and in this paper it is applied using the YUV colour space and fractal theory based on iterated transformations. Fractal geometry is applied in the current study to colour image compression coding. Colour images possess correlations among the colour components, and hence a high compression ratio can be achieved by exploiting all these redundancies. The proposed method utilises the self-similarity within the colour image as well as the cross-correlations between its components. Experimental results show that a greater compression ratio can be achieved with large domain blocks, with the trade-off in image quality remaining good to acceptable at less than 1 bit per pixel.

Keywords: Fractal coding, Iterated Function System (IFS), Image compression, YUV colour space.
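
The colour-space step is the standard BT.601 RGB-to-YUV conversion:

    import numpy as np

    def rgb_to_yuv(rgb):
        # Decorrelate the colour components: Y carries luminance, U and V
        # the smoother chrominance that fractal coding exploits.
        m = np.array([[ 0.299,  0.587,  0.114],
                      [-0.147, -0.289,  0.436],
                      [ 0.615, -0.515, -0.100]])
        return rgb @ m.T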

468 Electrocardiogram Signal Compression Using Multiwavelet Transform

Authors: Morteza Moazami-Goudarzi, Mohammad. H. Moradi

Abstract:

In this paper we seek the optimum multiwavelet for compression of electrocardiogram (ECG) signals. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. For assessing the performance of the different multiwavelets in compressing ECG signals, in addition to known factors such as compression ratio (CR), percent root-mean-square difference (PRD), distortion (D) and root mean square error (RMSE) from the compression literature, we also employed the cross-correlation (CC) criterion, for studying the morphological relations between the reconstructed and original ECG signals, and the signal-to-reconstruction-noise ratio (SNR). The simulation results show the cardbal2 multiwavelet with the identity (Id) prefiltering method to be the most effective transformation.

Keywords: ECG compression, Multiwavelet, Prefiltering.

467 Influence of Ambient Condition on Performance of Wet Compression Process

Authors: Kyoung Hoon Kim

Abstract:

Gas turbine systems with wet compression have potential for future power generation, since they can offer high efficiency and high specific power at relatively low cost. In this study the influence of ambient conditions on the performance of the wet compression process is investigated with non-equilibrium analytical modeling based on droplet evaporation. Transient behaviors of droplet diameter and temperature of the mixed air are investigated for various ambient temperatures. Special attention is paid to the effects of ambient temperature, pressure ratio, and water injection ratio on the important wet compression variables, including compressor outlet temperature and compression work. Parametric studies show that lowering the ambient temperature leads to a lower compressor outlet temperature and consequently lower consumption of compression work, even in wet compression processes.

Keywords: water injection, droplet evaporation, wet compression, gas turbine, ambient condition
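
For orientation, the dry-compression reference temperature that wet compression improves on follows from the isentropic relation with a polytropic efficiency; this is a toy model only, and the paper's droplet-evaporation analysis is far more detailed.

    def dry_outlet_temperature(t_in_k, pressure_ratio, k=1.4, eta=0.85):
        # Isentropic temperature rise corrected by compressor efficiency;
        # evaporating injected water lowers this outlet temperature and
        # the compression work consumed.
        dt_ideal = t_in_k * (pressure_ratio ** ((k - 1.0) / k) - 1.0)
        return t_in_k + dt_ideal / eta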

466 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images

Authors: Kuo-Cheng Liu

Abstract:

Effective estimation of just noticeable distortion (JND) for images helps increase the efficiency of a compression algorithm, in which both statistical redundancy and perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating the JND profiles of color images. Based on a mathematical model measuring the base detection threshold for each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment, based on the coefficient variation within the block, is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into a conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded images under the specified viewing conditions. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.

Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.
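
The use of a JND profile can be sketched per 8x8 block: coefficients below their visibility threshold are dropped, which lowers the rate while staying visually lossless. The jnd array stands in for the model's thresholds, which are not reproduced here.

    import numpy as np
    from scipy.fft import dctn, idctn

    def perceptually_prune(block8, jnd):
        # Zero every DCT coefficient whose magnitude is below its
        # just-noticeable-distortion threshold for this block.
        c = dctn(block8.astype(float), norm="ortho")
        c[np.abs(c) < jnd] = 0.0
        return idctn(c, norm="ortho")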
