Search results for: image enhancement.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1977

1947 An Efficient Gaussian Noise Removal Image Enhancement Technique for Gray Scale Images

Authors: V. Murugan, R. Balasubramanian

Abstract:

Image enhancement is a challenging issue in many applications, and various filters have been developed over the last two decades. This paper proposes a novel method that removes Gaussian noise from gray-scale images. The proposed technique is compared with the Enhanced Fuzzy Peer Group Filter (EFPGF) at various noise levels. Experimental results show that the proposed filter achieves a better peak signal-to-noise ratio (PSNR) than the existing techniques, with a gain of 1.736 dB over the EFPGF technique.

Keywords: Gaussian noise, adaptive bilateral filter, fuzzy peer group filter, switching bilateral filter, PSNR
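
The PSNR figures quoted throughout these results follow the standard definition; as a point of reference, a minimal Python sketch of the metric (the metric only, not the authors' filter) is:

    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two gray-scale images."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)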

1946 Tests and Measurements of Image Acquisition Characteristics for Image Sensors

Authors: Seongsoo Lee, Jong-Bae Lee, Wookkang Lee, Duyen Hai Pham

Abstract:

In image sensors, the acquired image often differs from the real scene in luminance or chrominance due to fabrication defects or nonlinear characteristics, which often lead to pixel defects or sensor failure. The image acquisition characteristics of image sensors should therefore be measured and tested before the sensors are mounted on the target product. In this paper, standardized test and measurement methods for image sensors are introduced: a standard light source is applied to the image sensor under test, and the characteristics of the acquired image are compared with ideal values.

Keywords: Image Sensor, Image Acquisition Characteristics, Defect, Failure, Standard, Test, Measurement.
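
A hypothetical flat-field check illustrates the measurement flow: capture a frame under a uniform standard light source and flag pixels whose response deviates from the ideal value by more than a tolerance. The ideal value and the 5% tolerance below are illustrative assumptions, not figures from the paper.

    import numpy as np

    def find_defective_pixels(frame, ideal_value, tolerance=0.05):
        """Flag pixels whose flat-field response deviates from the expected value."""
        deviation = np.abs(frame.astype(np.float64) - ideal_value) / ideal_value
        return np.argwhere(deviation > tolerance)  # (row, col) of suspect pixels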

1945 Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization

Authors: Wan-Hyun Cho, In-Seop Na, Seong-Chae Seo, Sang-Kyoon Kim, Soon-Young Park

Abstract:

In this paper, we propose a variational approach to the single-image defogging problem. To infer the atmospheric veil, we define a new functional for the veil that satisfies an edge-preserving regularization property. Using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil, whose solution extremizes the given functional; this equation is solved by a gradient descent method with a time parameter. Having obtained the estimated atmospheric veil, we use it to restore the image, and finally improve the contrast of the restored image with various histogram equalization methods. The experimental results show that the proposed method achieves good defogging results.

Keywords: Image defogging, Image restoration, Atmospheric veil, Transmission, Variational approach, Euler-Lagrange equation, Image enhancement.
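
Once the atmospheric veil V has been inferred, the restoration step follows from the standard haze model I = J·t + A·(1 - t) with V = A·(1 - t). A minimal sketch of that step, assuming V and the airlight A are already estimated (the variational inference itself is not reproduced here):

    import numpy as np

    def restore(I, V, A, eps=1e-6):
        """Invert the haze model: J = (I - V) / t with t = 1 - V/A."""
        t = np.clip(1.0 - V / A, eps, 1.0)  # transmission recovered from the veil
        return np.clip((I.astype(np.float64) - V) / t, 0.0, 255.0)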

1944 A Comparative Study of Image Segmentation Algorithms

Authors: Mehdi Hosseinzadeh, Parisa Khoshvaght

Abstract:

In applications such as image recognition or compression, segmentation refers to the process of partitioning a digital image into multiple segments, and is typically used to locate objects and boundaries (lines, curves, etc.) in images. Segmentation classifies or clusters an image into several parts (regions) according to image features such as pixel values or frequency response. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Many segmentation algorithms have been proposed and are extensively applied in science and daily life. According to their segmentation method, they can be roughly categorized into region-based segmentation, data clustering, and edge-based segmentation. In this paper, we present a study of several popular image segmentation algorithms.

Keywords: Image Segmentation, hierarchical segmentation, partitional segmentation, density estimation.
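
As a concrete instance of the data-clustering category, a minimal k-means segmentation on pixel intensity (a generic sketch using scikit-learn, not any particular algorithm surveyed in the paper):

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_segment(gray, k=3):
        """Cluster pixel intensities into k regions; returns a label image."""
        pixels = gray.reshape(-1, 1).astype(np.float64)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
        return labels.reshape(gray.shape)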

1943 Scatterer Density in Nonlinear Diffusion for Speckle Reduction in Ultrasound Imaging: The Isotropic Case

Authors: Ahmed Badawi

Abstract:

This paper proposes a method for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. A nonlinear image diffusion method is proposed that incorporates a local image parameter, namely scatterer density, in addition to gradient to weight the diffusion process. The method was tested for the isotropic case with a contrast-detail phantom and a variety of clinical ultrasound images, and compared to linear and other diffusion enhancement methods. Different diffusion parameters were tested and tuned to best reduce speckle noise and preserve edges. The method showed superior performance, measured both quantitatively and qualitatively, when scatterer density was incorporated into the diffusivity function. The proposed filter can be used as a preprocessing step for ultrasound image enhancement before automatic segmentation, automatic volumetric calculations, or 3D ultrasound volume rendering.

Keywords: Ultrasound imaging, Nonlinear isotropic diffusion, Speckle noise, Scattering.
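
The diffusion machinery can be sketched as a plain Perona-Malik-style isotropic iteration; the paper's contribution, weighting the diffusivity by local scatterer density, would enter where the gradient-only diffusivity g is computed below (that term is omitted in this sketch).

    import numpy as np

    def diffuse(img, n_iter=20, kappa=30.0, dt=0.2):
        """Gradient-weighted nonlinear isotropic diffusion (Perona-Malik form)."""
        u = img.astype(np.float64)
        g = lambda d: np.exp(-(d / kappa) ** 2)  # gradient-only diffusivity
        for _ in range(n_iter):
            # one-sided differences toward the four neighbours (wrapping borders)
            dn = np.roll(u, -1, 0) - u; ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u; dw = np.roll(u, 1, 1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u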

1942 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering

Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song

Abstract:

The photoacoustic images are obtained from a custom-developed linear-array photoacoustic tomography system. Biological specimens are imitated by phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active region-based contour filtering to remove noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.

Keywords: Contour filtering, linear array, photoacoustic tomography, universal back projection.

1941 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

To enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents, considering its mean and variance. The final image is the weighted sum of the images obtained by the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also retains feature information in low-density histogram regions, since these regions are equalized separately. The paper describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images of varying contrast, and the results are compared to conventional approaches to show its superiority.

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise
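
A minimal two-segment version of this idea, splitting at the mean and equalizing each side within its own range (the paper's weighted, mean/variance-limited multi-segment scheme is more elaborate):

    import numpy as np

    def split_equalize(gray):
        """Equalize the sub-histograms below and above the mean independently."""
        out = gray.astype(np.float64).copy()
        m = float(gray.mean())
        for lo, hi in [(float(gray.min()), m), (m, float(gray.max()))]:
            mask = (gray >= lo) & (gray <= hi)
            if mask.any() and hi > lo:
                seg = gray[mask].astype(np.float64)
                ranks = seg.argsort().argsort()  # rank of each pixel in its segment
                out[mask] = lo + (hi - lo) * ranks / max(len(seg) - 1, 1)
        return out.astype(gray.dtype)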

1940 Survey on Image Mining Using Genetic Algorithm

Authors: Jyoti Dua

Abstract:

One image is worth more than a thousand words, and images, if analyzed, can reveal useful information. Low-level image processing deals with the extraction of specific features from a single image. The question then arises: what technique should be used to extract patterns from a very large and detailed image database? The answer is image mining. Image mining deals with the extraction of image data relationships, implicit knowledge, and other patterns from collections of images or image databases; it is an extension of data mining. In this paper, we not only scrutinize current image mining techniques but also present a new technique for mining images using a genetic algorithm.

Keywords: Image Mining, Data Mining, Genetic Algorithm.

1939 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differentiation (FD) is applied to enhance the liver CT images, with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the classification accuracy. Each image is divided into NxN non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is trained on a supplied dataset different from the tested one. Finally, each block cell is classified as tumor or not. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the detection efficiency of the proposed technique.

Keywords: Fractional differential (FD), Computed Tomography (CT), fusion.
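
The FD enhancement step is commonly built from Grunwald-Letnikov coefficients; below is a sketch of a 1-D fractional-differential mask of order v applied along rows and columns (the paper's exact mask construction and fusion rule are not given in the abstract, so this is a generic reading):

    import numpy as np
    from scipy.ndimage import convolve1d

    def gl_coefficients(v, n_terms=5):
        """First Grunwald-Letnikov coefficients: 1, -v, v(v-1)/2, ..."""
        c = [1.0]
        for k in range(1, n_terms):
            c.append(c[-1] * (k - 1 - v) / k)
        return np.array(c)

    def fd_enhance(gray, v=0.5):
        """Apply the fractional-differential mask along both image axes."""
        g = gray.astype(np.float64)
        mask = gl_coefficients(v)
        return convolve1d(g, mask, axis=0) + convolve1d(g, mask, axis=1)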

1938 Image Enhancement of Medical Images using Gabor Filter Bank on Hexagonal Sampled Grids

Authors: Veni S., K. A. Narayanankutty

Abstract:

For about two decades, scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the discrete wavelet transform (DWT), PDE models, etc. In this work, a Gabor wavelet on a hexagonally sampled grid of the image is proposed. The method has optimal approximation-theoretic performance for good image quality, and its computational cost is considerably lower than that of similar processing in the rectangular domain. Because X-ray images contain light-scattered pixels, a range of sigma values from 0.5 to 3, rather than a single sigma, is found to satisfy most image interpolation requirements in terms of high peak signal-to-noise ratio (PSNR), low mean squared error (MSE), and better image quality when a windowing technique is adopted.

Keywords: Hexagonal lattices, Gabor filter, Interpolation, image processing.
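
For reference, a standard 2-D Gabor kernel on an ordinary rectangular grid; sigma is the parameter the abstract varies between 0.5 and 3, while the hexagonal sampling the paper argues for is a further step not shown in this sketch.

    import numpy as np

    def gabor_kernel(sigma=1.5, theta=0.0, lambd=4.0, size=9):
        """Real part of a 2-D Gabor filter on a rectangular grid."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * xr / lambd)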

1937 New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images

Authors: Mario Mastriani

Abstract:

This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based superresolution and for removing speckle of unknown variance (in the wavelet domain) from synthetic aperture radar (SAR) images. Although POSA is valuable as a new superresolution algorithm for image enhancement, image metrology, and biometric identification, here it is used as a despeckling tool, the first time a super-resolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high-frequency subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.

Keywords: Projection, speckle, superresolution, synthetic aperture radar, thresholding, wavelets.
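
The decompose/process/reconstruct skeleton looks roughly as follows with PyWavelets; simple soft thresholding of the detail subbands stands in for POSA, which the abstract does not spell out.

    import numpy as np
    import pywt

    def despeckle(img, wavelet="db4", level=2, thr=10.0):
        """Decompose, shrink detail subbands (POSA stand-in), reconstruct."""
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
        shrunk = [coeffs[0]]  # approximation subband kept untouched
        for (cH, cV, cD) in coeffs[1:]:
            shrunk.append(tuple(np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
                                for c in (cH, cV, cD)))
        return pywt.waverec2(shrunk, wavelet)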

1936 Design of Auto Exposure Unit Based On 2-Way Histogram Equalization

Authors: Junghwan Choi, Seongsoo Lee

Abstract:

Histogram equalization is often used in image enhancement, but it can also be used in auto exposure. However, conventional histogram equalization does not work well when many pixels are concentrated in a narrow luminance range. This paper proposes an auto exposure method based on 2-way histogram equalization. Two cumulative distribution functions are used, one running from dark to bright and the other from bright to dark. The proposed auto exposure method is also designed and implemented for image signal processors handling full-HD images.

Keywords: Histogram equalization, Auto exposure, Image signal processor, Low-cost, Full HD Video.
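
The two cumulative distribution functions at the core of the method can be sketched directly; how the paper combines them into an exposure decision is not stated in the abstract, so this sketch stops at computing the pair.

    import numpy as np

    def two_way_cdfs(gray, bins=256):
        """CDF accumulated dark-to-bright and its bright-to-dark counterpart."""
        hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        cdf_up = np.cumsum(p)                # dark -> bright
        cdf_down = np.cumsum(p[::-1])[::-1]  # bright -> dark
        return cdf_up, cdf_down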

1935 Performance of Compound Enhancement Algorithms on Dental Radiograph Images

Authors: S. A. Ahmad, M. N. Taib, N. E. A. Khalid, R. Ahmad, H. Taib

Abstract:

The purpose of this research is to compare original intra-oral digital dental radiographs with images enhanced using combinations of image processing algorithms. Intra-oral digital dental radiographs are often noisy, with blurred edges and low contrast, and a combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE), and Sharp Contrast-Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists regarding the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of three types of pathology: periapical radiolucency, widened periodontal ligament space, and loss of lamina dura. The overall results show that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.

Keywords: intra-oral dental radiograph, histogram equalization, sharpening, CLAHE.
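
A sketch of a compound pipeline in the SCLAHE spirit with OpenCV, i.e. unsharp-mask sharpening followed by contrast-limited adaptive histogram equalization; the authors' exact sharpening method and parameters are not given in the abstract.

    import cv2

    def sharpen_then_clahe(gray, amount=1.0, clip=2.0, tiles=(8, 8)):
        """Unsharp masking followed by CLAHE on an 8-bit gray image."""
        blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
        sharp = cv2.addWeighted(gray, 1 + amount, blurred, -amount, 0)
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
        return clahe.apply(sharp)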

1934 A New Approach to Steganography using Sinc-Convolution Method

Authors: Ahmad R. Naghsh-Nilchi, Latifeh Pourmohammadbagher

Abstract:

Both image steganography and image encryption have advantages and disadvantages. Steganography allows us to hide a desired image containing confidential information in a cover or host image, while image encryption transforms the desired image into a non-readable, non-comprehensible form. Encryption methods are usually much more robust than steganographic ones, but they have high visibility and would provoke attackers easily, since it is usually obvious from an encrypted image that something is hidden. Combining steganography and encryption covers both of their weaknesses and therefore increases security. In this paper, an image encryption method based on sinc-convolution with a 128-bit encryption key is introduced. The encrypted image is then hidden in a host image using a modified version of the JSteg steganography algorithm. The method can be applied to almost all image formats, including TIF, BMP, GIF, and JPEG. The experimental results show that our method is able to hide a desired image with high security and low visibility.

Keywords: Sinc Approximation, Image Encryption, Sinc-convolution, Image Steganography, JSteg.
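
JSteg itself embeds in JPEG DCT coefficients; as a much-simplified spatial-domain stand-in (not the authors' modified JSteg), the hide-the-encrypted-payload step can be sketched with LSB embedding:

    import numpy as np

    def embed_lsb(host, payload_bits):
        """Hide a 0/1 bit stream in the least significant bits of a host image."""
        flat = host.flatten().copy()
        bits = np.asarray(payload_bits, dtype=flat.dtype)
        if bits.size > flat.size:
            raise ValueError("payload too large for host image")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | (bits & 1)
        return flat.reshape(host.shape)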

1933 Effectiveness of Dominant Color Descriptor Technique in Medical Image Retrieval Application

Authors: Mohd Kamir Yusof

Abstract:

This paper presents a dominant color descriptor technique for medical image retrieval, where medical images are collected and stored in a medical database. The purpose of the dominant color descriptor (DCD) technique is to retrieve medical images and display those similar to a query image. First, the technique searches for and retrieves medical images based on a keyword entered by the user; once an image is found, the system assigns it as the query image. The DCD technique computes the dominant color value of this image, and the system then searches for and retrieves medical images based on the dominant color value of the query image. Finally, the system displays the images similar to the query image. A simple application has been developed and tested using the dominant color descriptor, and the experimental results indicate that the technique is effective and usable for medical image retrieval.

Keywords: Medical Image Retrieval, Dominant Color Descriptor.
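
One common way to compute dominant colors, used here as a hedged illustration (the abstract does not specify the descriptor's computation), is k-means over pixel colors, keeping the most populated cluster centers:

    import numpy as np
    from sklearn.cluster import KMeans

    def dominant_colors(rgb, k=4):
        """Cluster centers sorted by population: a DCD-style descriptor."""
        pixels = rgb.reshape(-1, 3).astype(np.float64)
        km = KMeans(n_clusters=k, n_init=10).fit(pixels)
        counts = np.bincount(km.labels_, minlength=k)
        order = np.argsort(counts)[::-1]
        return km.cluster_centers_[order], counts[order] / counts.sum()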

1932 Enhancing Multi-Frame Images Using Self-Delaying Dynamic Networks

Authors: Lewis E. Hibell, Honghai Liu, David J. Brown

Abstract:

This paper presents the use of a newly created network structure, the Self-Delaying Dynamic Network (SDN), to create a high-resolution image from a set of time-stepped input frames. SDNs are non-recurrent temporal neural networks that can process time-sampled data; they can store input data for a lifecycle and feature dynamic logic-based connections between layers. Several low-resolution images and one high-resolution image of a scene were presented to the SDN during training by a genetic algorithm, and the SDN was trained to process the input frames so as to recreate the high-resolution image. The trained SDN was then used to enhance a number of unseen noisy image sets. The quality of the high-resolution images produced by the SDN is compared to that of images generated using bicubic interpolation, and the SDN-produced images are superior in several ways.

Keywords: Image Enhancement, Neural Networks, Multi-Frame.

1931 Blind Low Frequency Watermarking Method

Authors: Dimitar Taskovski, Sofija Bogdanova, Momcilo Bogdanov

Abstract:

We present a low-frequency watermarking method adaptive to image content. The image content is analyzed, and properties of the HVS are exploited to generate a visual mask of the same size as the approximation image. Using this mask, we embed the watermark in the approximation image without degrading image quality. Watermark detection is performed without the original image. Experimental results show that the proposed watermarking method is robust against the most common image processing operations, i.e., those that can be easily implemented and usually do not degrade image quality.

Keywords: Blind, digital watermarking, low frequency, visual mask.
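
A sketch of mask-weighted embedding in the approximation subband with PyWavelets; local variance stands in for the HVS-derived visual mask, which is an assumption rather than the paper's construction.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def embed(img, watermark, alpha=0.05):
        """Embed a +/-1 watermark, shaped like the approximation subband."""
        cA, details = pywt.dwt2(img.astype(np.float64), "db2")
        mean = uniform_filter(cA, size=3)
        mask = uniform_filter(cA ** 2, size=3) - mean ** 2  # local variance mask
        return pywt.idwt2((cA + alpha * mask * watermark, details), "db2")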

1930 A Comparative Study of Image Segmentation using Edge-Based Approach

Authors: Rajiv Kumar, Arthanariee A. M.

Abstract:

Image segmentation is the process of partitioning a given image into several parts so that each part can be analyzed further. Numerous segmentation techniques are available in the literature. In this paper, the authors analyze the edge-based approach to image segmentation: the Prewitt, Sobel, LoG, and Canny edge operators are implemented and compared on the basis of their threshold parameter, and their results are shown for various images.

Keywords: Edge Operator, Edge-based Segmentation, Image Segmentation, Matlab 10.4.
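
A sketch of the same four-operator comparison with scikit-image (the authors worked in Matlab; in practice each operator gets its own tuned threshold):

    import numpy as np
    from skimage import feature, filters

    def edge_maps(gray, threshold=0.1):
        """Binary edge maps from the four operators compared in the paper."""
        g = gray.astype(np.float64) / 255.0
        return {
            "prewitt": filters.prewitt(g) > threshold,
            "sobel": filters.sobel(g) > threshold,
            "log": filters.laplace(filters.gaussian(g, sigma=2.0)) > threshold,
            "canny": feature.canny(g, sigma=1.0),
        }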

1929 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress 2D images while improving bit-error performance and enhancing the images, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is coded by a turbo encoder and transmitted over an additive white Gaussian noise (AWGN) channel. At the receiver side, the bit planes are reassembled taking into consideration the neighborhood relationships of pixels in 2D images. Each noisy bit-plane value of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a turbo decoder, with an iterative feedback link between ICIPA and the turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme gives satisfactory results in both bit error rate (BER) and image enhancement for signal-to-noise ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, IC-TS is a promising approach for 2D image transmission, recovery of noisy signals, and image compression.

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
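
The bit-plane partitioning that IC-TS starts from is straightforward; a sketch for an 8-bit image follows (the turbo coding and ICIPA stages are not reproduced):

    import numpy as np

    def to_bit_planes(gray, n_bits=8):
        """Split an integer image into n_bits binary planes, MSB first."""
        return [((gray >> b) & 1).astype(np.uint8)
                for b in range(n_bits - 1, -1, -1)]

    def from_bit_planes(planes):
        """Reassemble the image from binary planes, MSB first."""
        n = len(planes)
        return sum(p.astype(np.uint16) << (n - 1 - i) for i, p in enumerate(planes))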

1928 Automatically Driven Vector for Guidewire Segmentation in 2D and Biplane Fluoroscopy

Authors: Simon Lessard, Pascal Bigras, Caroline Lau, Daniel Roy, Gilles Soulez, Jacques A. de Guise

Abstract:

The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques. This has been proven in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge-enhancement filtering, line detection, and segmentation. A new method is presented that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, and only a minimal image area is filtered overall. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.

Keywords: Edge detection, Line Enhancement, Segmentation, Fluoroscopy.

1927 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques

Authors: Hossein Nezamabadi-pour, Saeid Saryazdi

Abstract:

In this paper, we present a new and effective image indexing technique that extracts features directly in the DCT domain; the proposed approach is object-based image indexing. For each 8x8 block in the DCT domain, a feature vector is extracted. The feature vectors of all blocks of the image are then clustered into groups using a k-means algorithm, where each cluster represents a distinct object of the image. The clusters with the largest membership are selected, and their centroids are taken as the image feature vectors and indexed into the database. We also propose an approach for using the proposed indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.

Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.
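
A sketch of the block-feature and clustering stage (scipy's DCT as the transform; taking the leading row-major coefficients of each block as its feature vector is an assumption, since the abstract does not specify the coefficient selection):

    import numpy as np
    from scipy.fftpack import dct
    from sklearn.cluster import KMeans

    def block_dct_features(gray, n_coeffs=9):
        """Leading DCT coefficients of each 8x8 block as its feature vector."""
        h, w = (d - d % 8 for d in gray.shape)
        feats = []
        for i in range(0, h, 8):
            for j in range(0, w, 8):
                block = gray[i:i + 8, j:j + 8].astype(np.float64)
                d2 = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
                feats.append(d2.flatten()[:n_coeffs])
        return np.array(feats)

    def index_vectors(feats, k=8, keep=4):
        """Cluster block features; centroids of the largest clusters form the index."""
        km = KMeans(n_clusters=k, n_init=10).fit(feats)
        counts = np.bincount(km.labels_, minlength=k)
        return km.cluster_centers_[np.argsort(counts)[::-1][:keep]]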

1926 Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages

Authors: Chutimon Thitipornvanid, Siripun Sanguansintukul

Abstract:

Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility among a pile of other images. The image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a simple but efficient technique for predicting a single human image from a website using the basic image data and the embedded metadata of the image content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great assist to librarians, researchers, and many others in automatically and efficiently identifying a set of human images out of a greater set of images.

Keywords: Metadata, Prediction, Multi-layer perceptron, Human facial image, Image mining.

1925 Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images

Authors: Ali Zifan, Panos Liatsis, Panagiotis Kantartzis, Manolis Gavaises, Nicos Karcanias, Demosthenes Katritsis

Abstract:

We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. In the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method for enhancing the vascular structure, based on the information in the Hessian matrix, is introduced. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines, and the resulting skeleton image is pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.

Keywords: Vessel enhancement, centerline extraction, symbiotic reconstruction.
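
The Hessian-based enhancement and quantile hysteresis stages have standard off-the-shelf counterparts in scikit-image; a sketch (the paper's own multiscale formulation may differ):

    import numpy as np
    from skimage.filters import apply_hysteresis_threshold, frangi

    def enhance_vessels(angiogram, q_low=0.95, q_high=0.99):
        """Multiscale Hessian vesselness, then hysteresis at image quantiles."""
        v = frangi(angiogram, sigmas=range(1, 8, 2), black_ridges=True)
        lo, hi = np.quantile(v, [q_low, q_high])
        return apply_hysteresis_threshold(v, lo, hi)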

1924 A Quantum Algorithm of Constructing Image Histogram

Authors: Yi Zhang, Kai Lu, Ying-hui Gao, Mo Wang

Abstract:

The histogram plays an important statistical role in digital image processing. However, existing quantum image models are deficient for this kind of statistical processing because different gray scales are not distinguishable. In this paper, a novel quantum image representation model is first proposed, in which pixels with different gray scales can be distinguished and operated on simultaneously. Based on the new model, a fast quantum algorithm for constructing the histogram of a quantum image is designed. Performance comparison reveals that the new quantum algorithm achieves an approximately quadratic speedup over its classical counterpart. The proposed quantum model and algorithm are significant for future research in quantum image processing.

Keywords: Quantum Image Representation, Quantum Algorithm, Image Histogram.

1923 Modified Vector Quantization Method for Image Compression

Authors: K. Somasundaram, S. Domnic

Abstract:

A low-bit-rate still image compression scheme is proposed that compresses the indices of vector quantization (VQ) and generates a residual codebook. The VQ indices are compressed by exploiting the correlation among image blocks, which reduces the bits per index. A residual codebook, similar to the VQ codebook, is generated to represent the distortion produced by VQ; using this residual codebook, the distortion in the reconstructed image is removed, thereby increasing image quality. Our scheme combines these two methods. Experimental results on the standard image Lena show that our scheme yields a reconstructed image with a PSNR of 31.6 dB at 0.396 bits per pixel, and that it is faster than existing VQ variants.

Keywords: Image compression, Vector Quantization, Residual Codebook.
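
The two-codebook idea can be sketched as follows: vector-quantize image blocks with a main codebook, then quantize the VQ residuals with a second codebook trained on the error (codebook sizes here are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    def train_codebooks(blocks, n_main=256, n_res=64):
        """Main VQ codebook plus a residual codebook trained on VQ error."""
        main = KMeans(n_clusters=n_main, n_init=4).fit(blocks)
        residuals = blocks - main.cluster_centers_[main.labels_]
        res = KMeans(n_clusters=n_res, n_init=4).fit(residuals)
        return main, res

    def reconstruct(blocks, main, res):
        """Decode each block as main codeword plus residual codeword."""
        m = main.predict(blocks)
        r = res.predict(blocks - main.cluster_centers_[m])
        return main.cluster_centers_[m] + res.cluster_centers_[r]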

1922 Image Similarity: A Genetic Algorithm Based Approach

Authors: R. C. Joshi, Shashikala Tapaswi

Abstract:

The paper proposes a genetic algorithm approach for computing region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is associated with a family of image features corresponding to those regions. The resemblance of two images is then defined as the overall similarity between the two families of image features, quantified by a similarity measure that integrates the properties of all regions in the images. A genetic algorithm is applied to decide the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images and is shown to produce good results.

Keywords: Image Features, color descriptor, segmented classes, texture descriptors, genetic algorithm.
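
A compact sketch of GA-driven region matching, assuming equal region counts and Euclidean feature distance (neither of which the abstract fixes); the total matched distance plays the role of the inverse similarity measure.

    import numpy as np

    def ga_match(feats_a, feats_b, pop=40, gens=60, seed=0):
        """Evolve a region assignment minimizing total feature distance."""
        rng = np.random.default_rng(seed)
        n = len(feats_a)
        cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
        total = lambda p: cost[np.arange(n), p].sum()
        popl = [rng.permutation(n) for _ in range(pop)]
        for _ in range(gens):
            popl.sort(key=total)
            popl = popl[:pop // 2]            # selection: keep the fittest half
            children = []
            for p in popl:
                c = p.copy()
                i, j = rng.integers(0, n, 2)  # swap mutation
                c[i], c[j] = c[j], c[i]
                children.append(c)
            popl += children
        best = min(popl, key=total)
        return best, total(best)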

1921 A Novel Dual-Purpose Image Watermarking Technique

Authors: Maha Sharkas, Dahlia R. ElShafie, Nadder Hamdy

Abstract:

Image watermarking has proven to be quite an efficient tool for copyright protection and authentication over the last few years. In this paper, a novel image watermarking technique in the wavelet domain is suggested and tested. To achieve more security and robustness, the proposed technique relies on two nested watermarks embedded into the image to be watermarked: a primary watermark in the form of a PN sequence is first embedded into an image (the secondary watermark) before the latter is embedded into the host image. The technique is implemented using Daubechies mother wavelets, where an arbitrary embedding factor α is introduced to improve invisibility and robustness. Applied to several gray-scale images, the proposed technique achieved a PSNR of about 60 dB.

Keywords: Image watermarking, Multimedia Security, Wavelets, Image Processing.

1920 Objective Performance of Compressed Image Quality Assessments

Authors: Ratchakit Sakuldee, Somkait Udomhunsakul

Abstract:

Measuring the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment for gray-scale compressed images that correlates well with subjective quality measurement (MOS) and takes the least time. The new objective image quality measurement is developed from a few fundamental objective measurements for evaluating compressed image quality under JPEG and JPEG2000, and the reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference (MD) measurement and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are suitable for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent).

Keywords: JPEG, JPEG2000, objective image quality measurement, subjective image quality measurement, correlation coefficients.
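
MD is simple to state, and is sketched below together with a Laplacian-filtered MSE of the general kind the SCLMSE name suggests; the latter is an illustrative assumption, not the authors' exact definition, which the abstract does not give.

    import numpy as np
    from scipy.ndimage import laplace

    def maximum_difference(ref, test):
        """MD: largest absolute pixel difference between original and compressed."""
        return np.max(np.abs(ref.astype(np.float64) - test.astype(np.float64)))

    def laplacian_mse(ref, test):
        """MSE between Laplacian-filtered images (illustrative, not SCLMSE)."""
        return np.mean((laplace(ref.astype(np.float64)) -
                        laplace(test.astype(np.float64))) ** 2)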

1919 MAP-Based Image Super-resolution Reconstruction

Authors: Xueting Liu, Daojin Song, Chuandai Dong, Hongkui Li

Abstract:

From a set of shifted, blurred, and decimated images, super-resolution image reconstruction can recover a high-resolution image, and it has therefore become an active research branch in the field of image restoration. In general, super-resolution image restoration is an ill-posed problem; prior knowledge about the image can be incorporated to make the problem well-posed, which leads to regularization methods. In current regularization methods, however, the regularization parameter is in some cases selected by experience, and other techniques incur too heavy a computational cost in computing the parameter. In this paper, we construct a new super-resolution algorithm by transforming the original reconstruction system into the matrix equation X + A*X^(-1)A = I, and we propose an inverse iterative method for solving it.

Keywords: High-resolution MAP image, Reconstruction, Image interpolation, Motion Estimation, Hermitian positive definite solutions.
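
For the matrix equation X + A*X^(-1)A = I (A* denoting the conjugate transpose), the natural fixed-point iteration is X_{k+1} = I - A*·X_k^(-1)·A starting from X_0 = I; whether this is exactly the authors' inverse iterative method is not clear from the abstract, so the sketch below is a generic solver for that equation.

    import numpy as np

    def solve_fixed_point(A, n_iter=200, tol=1e-10):
        """Iterate X <- I - A^H X^{-1} A toward a Hermitian p.d. solution."""
        n = A.shape[0]
        I = np.eye(n, dtype=complex)
        X = I.copy()
        for _ in range(n_iter):
            X_new = I - A.conj().T @ np.linalg.solve(X, A)  # X^{-1} A via solve
            if np.linalg.norm(X_new - X) < tol:
                break
            X = X_new
        return X_new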

1918 Image Search by Features of Sorted Gray level Histogram Polynomial Curve

Authors: Awais Adnan, Muhammad Ali, Amir Hanif Dar

Abstract:

Image searching has always been a problem, especially when images are not properly managed or are distributed over different locations. Different techniques are currently used for image search. At one extreme, many features of the image are captured and stored to get better results, but storing and managing such features is itself a time-consuming job; at the other extreme, if fewer features are stored, the accuracy rate is not satisfactory. The same image stored with different visual properties can further reduce the accuracy rate. In this paper we present a new concept: using polynomials fitted to the sorted histogram of the image. This approach needs less overhead and can cope with differences in the visual features of an image.

Keywords: Sorted Histogram, Polynomial Curves, feature points of images, Grayscale, visual properties of image.
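
The proposed feature can be sketched directly: sort the gray-level histogram, fit a low-order polynomial to the sorted curve, and use the coefficients as a compact feature vector (the polynomial degree is an assumed parameter).

    import numpy as np

    def sorted_histogram_feature(gray, degree=5):
        """Polynomial coefficients fitted to the sorted gray-level histogram."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        curve = np.sort(hist) / hist.sum()   # normalized, ascending
        x = np.linspace(0.0, 1.0, curve.size)
        return np.polyfit(x, curve, degree)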
