Search results for: Image reduction
2998 Mammogram Image Size Reduction Using 16-8 bit Conversion Technique
Authors: Ayman A. AbuBaker, Rami S. Qahwaji, Musbah J. Aqel, Mohmmad H. Saleh
Abstract:
Two algorithms are proposed to reduce the storage requirements of mammogram images. The input image goes through a shrinking process that converts 16-bit images to 8-bit images using a pixel-depth conversion algorithm, followed by an enhancement process. The performance of the algorithms is evaluated objectively and subjectively. A 50% reduction in size is obtained with no loss of significant data in the breast region.
Keywords: Breast cancer, Image processing, Image reduction, Mammograms, Image enhancement.
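A minimal sketch of the kind of pixel-depth conversion described above, assuming a percentile-based contrast stretch before quantizing to 8 bits; the percentile limits, function name and synthetic input are illustrative and not the authors' exact algorithm.

```python
import numpy as np

def convert_16_to_8_bit(image16, low_percentile=1, high_percentile=99):
    """Rescale a 16-bit image to 8 bits (illustrative sketch, not the paper's exact algorithm)."""
    img = image16.astype(np.float64)
    lo, hi = np.percentile(img, [low_percentile, high_percentile])  # clip extreme values
    img = np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)        # normalize to [0, 1]
    return (img * 255).astype(np.uint8)                              # quantize to 8 bits

# Example: a synthetic 16-bit mammogram-sized array
mammo16 = np.random.randint(0, 65536, size=(256, 256), dtype=np.uint16)
mammo8 = convert_16_to_8_bit(mammo16)
print(mammo8.dtype, mammo8.nbytes / mammo16.nbytes)  # uint8, 0.5 -> 50% size reduction
```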
2997 Salient Points Reduction for Content-Based Image Retrieval
Authors: Yao-Hong Tsai
Abstract:
Salient points are frequently used to represent local properties of an image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. The algorithm recursively reduces the continuous point set according to the corresponding saliency values in a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. The experiments show that our method is robust and that the extracted salient points provide better retrieval performance compared with other point detectors.
Keywords: Barnard detector, Content-based image retrieval, Points reduction, Salient point.
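A hedged sketch of the two pieces described above: the paper's recursive top-down reduction is approximated here by a simple top-k selection on saliency values, and the reduced set is compared to the original with the Hausdorff distance; the function names and toy data are illustrative.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def reduce_salient_points(points, saliency, keep=100):
    """Keep the `keep` most salient points (top-down reduction by saliency value)."""
    order = np.argsort(saliency)[::-1]          # most salient first
    return points[order[:keep]]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy example: 500 detected points with random saliency values
pts = np.random.rand(500, 2) * 256
sal = np.random.rand(500)
reduced = reduce_salient_points(pts, sal, keep=100)
print(hausdorff(pts, reduced))   # how well the reduced set represents the original
```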
2996 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tested the MPM estimate using two kinds of likelihood for grayscale images degraded by lossy JPEG compression: a deterministic model of the likelihood, and a probabilistic one expressed by a Gaussian distribution. Then, using Monte Carlo simulation for grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate using the mean square error as the performance measure. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, Maximizer of the posterior marginal estimate.
2995 Face Recognition Using Double Dimension Reduction
Authors: M. A Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL database, the Yale database, and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) appropriate DCT coefficients are retained to achieve the best recognition results with varying image poses, intensity, and illumination level.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
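A minimal sketch of the two reduction steps described above (decimation, then retention of a low-to-mid frequency DCT subset), assuming a fixed decimation factor and a square coefficient block; in the paper both are tuned for recognition rate, so the values and function name here are illustrative only.

```python
import numpy as np
from scipy.fft import dctn

def face_features(face, decimation=2, n_coeffs=8):
    """Decimate the face image, apply a 2D DCT, and keep an n_coeffs x n_coeffs
    block of low-frequency coefficients as the feature vector."""
    small = face[::decimation, ::decimation]          # simple image decimation
    coeffs = dctn(small, norm='ortho')                # 2D DCT of the decimated face
    return coeffs[:n_coeffs, :n_coeffs].ravel()       # low-frequency subset

face = np.random.rand(112, 92)     # ORL-sized face image (toy data)
vec = face_features(face)
print(vec.shape)                   # (64,) feature vector for a classifier
```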
2994 Coding of DWT Coefficients using Run-length Coding and Huffman Coding for the Purpose of Color Image Compression
Authors: Varun Setia, Vinod Kumar
Abstract:
In the present paper we propose a simple and effective method to compress an image, achieving a reduction in size without much compromising its quality. The Haar Wavelet Transform is used to transform the original image, and after quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding schemes are used to encode the image. The DWT is the basis of the widely used JPEG 2000 technique.
Keywords: Lossy compression, DWT, quantization, Run length coding, Huffman coding, JPEG2000.
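A hedged sketch of the quantization, thresholding and run-length step applied to one subband of DWT coefficients; the Haar transform itself (e.g. via PyWavelets) would precede this, and Huffman coding of the run-length symbols would follow. The threshold and quantization step sizes below are illustrative.

```python
import numpy as np

def run_length_encode(values):
    """Run-length encode a 1D sequence as (value, run_length) pairs."""
    runs = []
    prev, count = values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Toy stand-in for the DWT coefficients of one subband:
coeffs = np.array([0, 0, 0, 5, 5, 0, 0, 0, 0, -3, 0, 0])
q_step, threshold = 2, 4
quantized = np.where(np.abs(coeffs) < threshold, 0, np.round(coeffs / q_step)).astype(int)
symbols = run_length_encode(quantized.tolist())
print(symbols)   # [(0, 3), (2, 2), (0, 7)] -- these symbols would then be Huffman coded
```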
2993 A Novel Approach to Image Compression of Colour Images by Plane Reduction Technique
Authors: K. Sowmyan, A. Siddarth, D. Menaka
Abstract:
Several methods have been proposed for color image compression, but the reconstructed images had a very low signal-to-noise ratio, which made them inefficient. This paper describes a lossy compression technique for color images which overcomes these drawbacks. The technique works in the spatial domain, where the pixel values of the RGB planes of the input color image are mapped onto two-dimensional planes. The proposed technique produced better results than JPEG2000 and 2DPCA, and a comparative study is reported based on image quality measures such as PSNR and MSE. Experiments on real images compare this methodology with previous ones and demonstrate its advantages.
Keywords: Color image compression, spatial domain, plane reduction, root mean square, image restoration.
2992 An Adaptive Model for Blind Image Restoration using Bayesian Approach
Authors: S.K. Satpathy, S.K. Nayak, K. K. Nagwanshi, S. Panda, C. Ardil
Abstract:
Image restoration involves the elimination of noise. Filtering techniques have been adopted to restore images over the last five decades. In this paper, we consider the problem of restoring an image degraded by a blur function and corrupted by random noise. A method for reducing additive noise in images by explicit analysis of local image statistics is introduced and compared with other noise reduction methods. The proposed method, which makes use of an a priori noise model, has been evaluated on various types of images. Bayesian-based algorithms and image processing techniques have been described and substantiated with experimentation using MATLAB.
Keywords: Image restoration, Probability Density Function (PDF), Neural networks, Bayesian classifier.
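A sketch of noise reduction by explicit local image statistics in the spirit described above, implemented here as a standard adaptive (Wiener/Lee-type) filter with an a priori noise variance; this is not the authors' Bayesian model, and the window size and noise level are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_denoise(img, noise_var, size=5):
    """Adaptive noise reduction from local mean and variance (Wiener/Lee-type)."""
    mean = uniform_filter(img, size)
    sqr_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sqr_mean - mean ** 2, 1e-12)
    gain = np.maximum(var - noise_var, 0.0) / var     # shrink towards the local mean
    return mean + gain * (img - mean)

clean = np.tile(np.linspace(0, 1, 128), (128, 1))
noisy = clean + np.random.normal(0, 0.05, clean.shape)     # additive Gaussian noise
restored = local_stats_denoise(noisy, noise_var=0.05 ** 2)
print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))  # MSE drops
```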
2991 A New Approach to Face Recognition Using Dual Dimension Reduction
Authors: M. Almas Anjum, M. Younus Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with the face image resolution and are optimal at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is initially applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL, Yale, and EME color databases.
Keywords: Biometrics, DCT, Face recognition, Illumination, Computation, Feature extraction.
2990 A Novel Fuzzy Technique for Image Noise Reduction
Authors: Hamed Vahdat Nejad, Hameed Reza Pourreza, Hasan Ebrahimi
Abstract:
A new fuzzy filter is presented for noise reduction of images corrupted with additive noise. The filter consists of two stages. In the first stage, all the pixels of the image are processed to determine the noisy pixels. For this, a fuzzy rule-based system associates a degree with each pixel. The degree of a pixel is a real number in the range [0, 1] that denotes the probability that the pixel is not a noisy pixel. In the second stage, another fuzzy rule-based system is employed. It uses the output of the first fuzzy system to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Experimental results are obtained to show the feasibility of the proposed filter. These results are also compared with other filters by numerical measures and visual inspection.
Keywords: Additive noise, Fuzzy logic, Image processing, Noise reduction.
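A rough sketch of the two-stage structure described above, under strong simplifying assumptions: the paper's fuzzy rule base is replaced here by a single triangular membership function applied to each pixel's deviation from its local median, and the second stage is a neighbourhood average weighted by the neighbours' "not noisy" degrees. Membership width and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def triangular(x, width):
    """Triangular membership: 1 at x = 0, falling linearly to 0 at |x| = width."""
    return np.clip(1.0 - np.abs(x) / width, 0.0, 1.0)

def two_stage_fuzzy_filter(img, width=30.0):
    # Stage 1: degree in [0, 1] that a pixel is NOT noisy, from its deviation
    # from the local median (a simple stand-in for the paper's fuzzy rule base).
    deviation = img - median_filter(img, size=3)
    degree = triangular(deviation, width)
    # Stage 2: fuzzy smoothing -- each pixel becomes the average of its 3x3
    # neighbourhood, weighted by the neighbours' "not noisy" degrees.
    pad_img = np.pad(img, 1, mode='reflect')
    pad_deg = np.pad(degree, 1, mode='reflect')
    values = np.lib.stride_tricks.sliding_window_view(pad_img, (3, 3)).reshape(*img.shape, 9)
    weights = np.lib.stride_tricks.sliding_window_view(pad_deg, (3, 3)).reshape(*img.shape, 9)
    return (weights * values).sum(-1) / np.maximum(weights.sum(-1), 1e-9)

noisy = np.random.normal(128, 20, (64, 64))
print(two_stage_fuzzy_filter(noisy).shape)
```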
2989 On the Reduction of Side Effects in Tomography
Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan
Abstract:
As Computed Tomography (CT) normally requires hundreds of projections to reconstruct the image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, Computed Tomography requires many projections for good quality reconstruction. In this paper, the low variability of the particles in an object has been exploited to obtain good quality reconstruction. Though the reconstructed image and the original image have the same projections, in general, they need not be the same. In addition to projections, if a priori information about the image is known, it is possible to obtain a good quality reconstructed image. In this paper, it is shown by experimental results why conventional algorithms fail to reconstruct from a few projections, and an efficient polynomial-time algorithm is given to reconstruct a bi-level image from its projections along rows and columns, together with a known sub-image of the unknown image and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. This paper also discusses the necessary and sufficient conditions for uniqueness and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.
Keywords: Discrete tomography, Image reconstruction, Projection, Computed tomography, Integral max-flow problem, Smooth binary image.
2988 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform
Authors: Vijaya Prakash A. M., K. S. Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional Discrete Cosine Transform blocks and a transposition memory [7]. The Inverse Discrete Cosine Transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is modeled using MATLAB code, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. Power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
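The row-column decomposition mentioned above (two 1D DCT passes with a transposition in between) can be checked in software. A short sketch, using toy data, verifying that the decomposition reproduces the direct 2D DCT; it is a software illustration of the data flow, not the paper's hardware design.

```python
import numpy as np
from scipy.fft import dct, dctn

def dct_2d_row_column(block):
    """2D DCT of an 8x8 block computed as two 1D DCT passes with a transposition
    in between -- the row-column data flow used by the hardware architecture."""
    tmp = dct(block, axis=1, norm='ortho')        # 1D DCT on the rows
    tmp = tmp.T                                   # transposition memory
    out = dct(tmp, axis=1, norm='ortho')          # 1D DCT on the (former) columns
    return out.T

block = np.random.rand(8, 8)
print(np.allclose(dct_2d_row_column(block), dctn(block, norm='ortho')))  # True
```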
2987 Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise
Authors: Yasser F. Hassan
Abstract:
The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for automatic inspection and analysis. This paper describes a modeling system based on the rough neural network model that adapts cellular automata for various image processing tasks and noise removal. We consider the problem of object processing in colored images, using rough neural networks to help derive the rules which will be used by the cellular automata for noisy images. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model is capable of being trained to perform many different tasks, and that the quality of these results is comparable to or better than established specialized algorithms.
Keywords: Rough Sets, Rough Neural Networks, Cellular Automata, Image Processing.
2986 Effect of Neighborhood Size on Negative Weights in Punctual Kriging Based Image Restoration
Authors: Asmatullah Chaudhry, Anwar M. Mirza
Abstract:
We present a general comparison of punctual kriging based image restoration for different neighbourhood sizes. The formulation of the technique under consideration is based on punctual kriging and fuzzy concepts for image restoration in the spatial domain. Three different neighbourhood windows are considered to estimate the semivariance at different lags, in order to study its effect on the reduction of the negative weights resulting from punctual kriging and, consequently, on the restoration of degraded images. Our results show that the effect of neighbourhood sizes larger than 5x5 on the reduction of negative weights is insignificant. In addition, image quality measures, such as structure similarity indices, peak signal-to-noise ratios and the new variogram-based quality measures, show that a 3x3 window size gives better performance compared with larger window sizes.
Keywords: Image restoration, punctual kriging, semi-variance, structure similarity index, negative weights in punctual kriging.
2985 The Robust Clustering with Reduction Dimension
Authors: Dyah E. Herwindiati
Abstract:
Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining. Objects within a group or class have similar characteristics. This paper discusses a robust clustering process for image data with two dimension reduction approaches, i.e. two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to the dimensionality problem is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information. One of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA is often called a variant of PCA: the image matrices are treated directly as 2D matrices and do not need to be transformed into vectors, so the covariance matrix of the images can be constructed directly from the original image matrices. The decomposition of the classical covariance matrix is very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection PCA (2DPCA) and in PCA for clustering an arbitrary set of image data when outliers are hidden in the data set. The simulation aspects of robustness and the illustration of clustering images are discussed at the end of the paper.
Keywords: Breakdown point, Consistency, 2DPCA, PCA, Outlier, Vector Variance.
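A minimal sketch of the non-robust 2DPCA baseline mentioned above: the image covariance matrix is built directly from the image matrices, without vectorization, and each image is projected onto the leading eigenvectors. This illustrates classical 2DPCA only, not the robust MVV variant studied in the paper; the toy data and component count are illustrative.

```python
import numpy as np

def two_d_pca(images, n_components=5):
    """Classical 2DPCA: image covariance built from image matrices, then projection."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix G = (1/N) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, ::-1][:, :n_components]        # leading eigenvectors
    return np.array([img @ X for img in images])  # projected feature matrices

imgs = np.random.rand(20, 32, 32)                 # toy image set
features = two_d_pca(imgs, n_components=5)
print(features.shape)                             # (20, 32, 5)
```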
2984 Tests and Measurements of Image Acquisition Characteristics for Image Sensors
Authors: Seongsoo Lee, Jong-Bae Lee, Wookkang Lee, Duyen Hai Pham
Abstract:
In image sensors, the acquired image often differs from the real image in luminance or chrominance due to fabrication defects or nonlinear characteristics, which often lead to pixel defects or sensor failure. Therefore, the image acquisition characteristics of image sensors should be measured and tested before they are mounted on the target product. In this paper, standardized test and measurement methods for image sensors are introduced. A standard light source is applied to the image sensor under test, and the characteristics of the acquired image are compared with ideal values.
Keywords: Image Sensor, Image Acquisition Characteristics, Defect, Failure, Standard, Test, Measurement.
2983 A Comparative Study of Image Segmentation Algorithms
Authors: Mehdi Hosseinzadeh, Parisa Khoshvaght
Abstract:
In some applications, such as image recognition or compression, segmentation refers to the process of partitioning a digital image into multiple segments. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. It classifies or clusters an image into several parts (regions) according to image features, for example, the pixel values or the frequency response. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Several image segmentation algorithms have been proposed to segment an image before recognition or compression. Many image segmentation algorithms now exist and are extensively applied in science and daily life. According to their segmentation method, we can approximately categorize them into region-based segmentation, data clustering, and edge-based segmentation. In this paper, we give a study of several popular image segmentation algorithms that are available.
Keywords: Image segmentation, hierarchical segmentation, partitional segmentation, density estimation.
2982 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis
Authors: Gaoyong Luo
Abstract:
The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion introduced. However, by removing noise, high-frequency components belonging to edges are also removed, which leads to blurring of the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed with the advantages of being computationally efficient and minimizing boundary effects. Adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to the boundaries of objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and is suitable to be adapted to different classes of images and types of noise. The proposed algorithm provides a potential solution with parallel computation for real-time or embedded system applications.
Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.
2981 Image Restoration in Non-Linear Filtering Domain using MDB approach
Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, C. Ardil
Abstract:
This paper proposes a new technique based on a nonlinear Minmax Detector Based (MDB) filter for image restoration. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original image. Image degradation can be due to the addition of different types of noise to the original image. Image noise can be modeled in many ways, and impulse noise is one of them. Impulse noise generates pixels with gray values not consistent with their local neighborhood. It appears as a sprinkle of both light and dark, or only light, spots in the image. Filtering is a technique for enhancing the image. In linear filtering, the value of an output pixel is a linear combination of neighborhood values, which can produce blur in the image. Thus a variety of nonlinear smoothing techniques have been developed. The median filter is one of the most popular nonlinear filters. When considering a small neighborhood it is highly efficient, but for large windows and in the case of high noise it gives rise to more blurring of the image. The Centre Weighted Mean (CWM) filter has a better average performance than the median filter; however, corrupted original pixels may be retained, and noise reduction degrades under high noise conditions, so this technique also has a blurring effect on the image. To illustrate the superiority of the proposed approach, the proposed new scheme has been simulated along with the standard ones, and various restoration performance measures have been compared.
Keywords: Filtering, Minmax Detector Based (MDB), noise, centre weighted mean filter, PSNR, restoration.
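The abstract does not specify the MDB filter itself, so no attempt is made to reproduce it here; the sketch below only illustrates the two baseline filters it is compared against, a standard 3x3 median filter and a centre-weighted mean, on toy impulse noise. Noise density and the centre weight are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter, convolve

# Salt-and-pepper (impulse) noise on a toy gradient image
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean.copy()
mask = np.random.rand(*clean.shape) < 0.1          # 10% impulse noise
noisy[mask] = np.random.choice([0, 255], mask.sum())

# Standard 3x3 median filter
med = median_filter(noisy, size=3)

# Centre Weighted Mean: a 3x3 mean in which the centre pixel carries extra weight
kernel = np.ones((3, 3)); kernel[1, 1] = 3.0; kernel /= kernel.sum()
cwm = convolve(noisy, kernel, mode='reflect')

mse = lambda a: np.mean((a - clean) ** 2)
print(mse(noisy), mse(med), mse(cwm))              # median usually wins on impulse noise
```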
2980 Survey on Image Mining Using Genetic Algorithm
Authors: Jyoti Dua
Abstract:
One image is worth more than a thousand words. Images, if analyzed, can reveal useful information. Low-level image processing deals with the extraction of specific features from a single image. The question then arises: what technique should be used to extract patterns from a very large and detailed image database? The answer is image mining. Image mining deals with the extraction of image data relationships, implicit knowledge, and other patterns from collections of images or image databases. It is an extension of data mining. In the following paper, we not only scrutinize the current techniques of image mining but also present a new technique for mining images using a genetic algorithm.
Keywords: Image Mining, Data Mining, Genetic Algorithm.
2979 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference
Authors: Ayman A. Aly, Abdallah A. Alshnnaway
Abstract:
The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. Edges give the image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks. Since most noise filtering behaves like a low-pass filter, the blurring of edges and loss of detail seem a natural consequence. Techniques to remedy this inherent conflict often generate new noise due to the enhancement. In this work a new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of three stages: (1) fuzzy sets are defined in the input space to compute a fuzzy derivative for eight different directions; (2) a set of IF-THEN rules is constructed to perform fuzzy smoothing according to the contributions of neighboring pixel values; and (3) fuzzy sets are defined in the output space to obtain the filtered and edged image. Experimental results are obtained to show the feasibility of the proposed approach with two-dimensional objects.
Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 15682978 Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm
Authors: Mahmoud Saeidi, Khadijeh Saeidi, Mahmoud Khaleghi
Abstract:
In this paper, we propose a novel spatiotemporal fuzzy based algorithm for noise filtering of image sequences. Our proposed algorithm uses adaptive weights based on triangular membership functions, and a median filter is used to suppress noise. Experimental results show that, when the images are corrupted by high-density salt-and-pepper noise, our fuzzy based algorithm for noise filtering of image sequences is much more effective in suppressing noise and preserving edges than previously reported algorithms such as [1-7]. Indeed, the weights assigned to noisy pixels are highly adaptive, so that they make good use of the correlation between pixels. On the other hand, motion estimation methods are error-prone, and in high-density noise they may degrade the filter performance. Therefore, our proposed fuzzy algorithm does not need any estimation of motion trajectories. The proposed algorithm satisfactorily removes noise without any knowledge of the salt-and-pepper noise density.
Keywords: Image sequences, Noise reduction, Fuzzy algorithm, Triangular membership function.
2977 A New Approach to Steganography using Sinc-Convolution Method
Authors: Ahmad R. Naghsh-Nilchi, Latifeh Pourmohammadbagher
Abstract:
Both image steganography and image encryption have advantages and disadvantages. Steganography allows us to hide a desired image containing confidential information in a cover or host image, while image encryption transforms the desired image into a non-readable, non-comprehensible form. Encryption methods are usually much more robust than steganographic ones. However, they have high visibility and would provoke attackers easily, since it is usually obvious from an encrypted image that something is hidden. The combination of steganography and encryption covers both of their weaknesses and therefore increases security. In this paper an image encryption method based on sinc-convolution, along with an encryption key of 128-bit length, is introduced. Then, the encrypted image is hidden in a host image using a modified version of the JSteg steganography algorithm. This method can be applied to almost all image formats, including TIF, BMP, GIF and JPEG. The experimental results show that our method is able to hide a desired image with high security and low visibility.
Keywords: Sinc approximation, Image encryption, Sinc-convolution, Image steganography, JSteg.
2976 Effectiveness of Dominant Color Descriptor Technique in Medical Image Retrieval Application
Authors: Mohd Kamir Yusof
Abstract:
This paper presents a dominant color descriptor technique for medical image retrieval. The medical images are collected and stored in a medical database. The purpose of the dominant color descriptor (DCD) technique is to retrieve medical images and to display images similar to the queried image. First, the technique searches and retrieves a medical image based on a keyword entered by the user. After the image is found, the system assigns it as the queried image. The DCD technique calculates the dominant color value of the image. The system then searches and retrieves medical images again based on the dominant color value of the query image. Finally, the system displays the images similar to the queried image to the user. A simple application has been developed and tested using the dominant color descriptor. Results based on the experiments indicate that this technique is effective and can be used for medical image retrieval.
Keywords: Medical image retrieval, Dominant color descriptor.
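The abstract does not detail how the dominant color value is computed, so the sketch below uses the common k-means approach to dominant color extraction as an illustrative stand-in: cluster the pixel colors, then keep the centroids of the largest clusters with their coverage fractions, which can then be compared between the query image and database images. Cluster count and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb, n_colors=4):
    """Dominant colors of an image and the fraction of pixels each covers,
    via k-means clustering of the pixel values (illustrative stand-in for a DCD)."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_colors)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order], counts[order] / counts.sum()

query = np.random.randint(0, 256, (64, 64, 3))
colors, fractions = dominant_colors(query)
print(colors[0], fractions[0])    # most dominant color and its coverage
```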
2975 Blind Low Frequency Watermarking Method
Authors: Dimitar Taskovski, Sofija Bogdanova, Momcilo Bogdanov
Abstract:
We present a low frequency watermarking method adaptive to image content. The image content is analyzed and properties of the HVS are exploited to generate a visual mask of the same size as the approximation image. Using this mask we embed the watermark in the approximation image without degrading the image quality. Watermark detection is performed without using the original image. Experimental results show that the proposed watermarking method is robust against the most common image processing operations, can be easily implemented, and usually does not degrade the image quality.
Keywords: Blind, digital watermarking, low frequency, visual mask.
2974 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform
Authors: C. Hemasundara Rao, M. Madhavi Latha
Abstract:
Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on the selection of coefficients belonging to different transforms. This method allows: (1) codification of multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show a better performance for the selected regions than when image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly for images where region-of-interest coding is useful, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the kernel of the Hartley and Cosine transforms gives better performance than other models.
Keywords: VLSI, Discrete Cosine Transform, JPEG, Hartley transform, Radon transform.
2973 A Comparative Study of Image Segmentation using Edge-Based Approach
Authors: Rajiv Kumar, Arthanariee A. M.
Abstract:
Image segmentation is the process of segmenting a given image into several parts so that each of these parts can be further analyzed. There are numerous techniques of image segmentation available in the literature. In this paper, the authors analyze the edge-based approach to image segmentation. They have implemented the different edge operators, such as Prewitt, Sobel, LoG, and Canny, on the basis of their threshold parameter. The results of these operators are shown for various images.
Keywords: Edge Operator, Edge-based Segmentation, Image Segmentation, Matlab 10.4.
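The paper works in MATLAB; the sketch below shows roughly equivalent Python calls for three of the operators it compares (Prewitt, Sobel, LoG), thresholded on gradient magnitude as in the study. A Canny detector is available elsewhere (e.g. scikit-image's canny); the threshold value and toy image here are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel, prewitt, gaussian_laplace

def edge_map(img, operator=sobel, threshold=0.2):
    """Gradient-magnitude edge map with a simple threshold parameter."""
    gx, gy = operator(img, axis=0), operator(img, axis=1)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    return mag > threshold

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0    # toy image: a bright square
print(edge_map(img, sobel).sum(), edge_map(img, prewitt).sum())
log_response = gaussian_laplace(img, sigma=2.0)      # LoG response (zero crossings mark edges)
```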
2972 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been in widespread use for the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, however, developments in image mapping methods and the use of unmanned aerial vehicles (UAV) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to a 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
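A small sketch of the random reduction step described above, assuming the point cloud is an array of x, y, z rows; the subsequent Kriging interpolation and DTM comparison are separate steps not shown here, and the toy cloud and seed are illustrative.

```python
import numpy as np

def random_reduce(points, fraction, seed=0):
    """Randomly keep the given fraction of a point cloud (rows of x, y, z)."""
    rng = np.random.default_rng(seed)
    n_keep = int(len(points) * fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

cloud = np.random.rand(100000, 3) * [500.0, 500.0, 50.0]   # toy x, y, z point cloud
subsets = {f: random_reduce(cloud, f) for f in (0.75, 0.50, 0.25, 0.05)}
print({f: len(p) for f, p in subsets.items()})
# Each subset would then be interpolated to a DTM grid (e.g. by Kriging) and
# compared against the DTM built from the full cloud.
```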
2971 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques
Authors: Hossein Nezamabadi-pour, Saeid Saryazdi
Abstract:
In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. The proposed approach is object-based image indexing. For each 8*8 block in the DCT domain, a feature vector is extracted. Then, the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm. Each cluster represents a particular object of the image. We then select the clusters that have the most members after clustering. The centroids of the selected clusters are taken as the image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.
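A hedged sketch of the indexing pipeline described above: one DCT feature vector per 8x8 block, k-means over the block features, and the centroids of the largest clusters kept as the image signature. The particular DCT coefficients used, the number of clusters and the number of retained centroids are illustrative, not the paper's exact choices.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def index_image(img, n_clusters=8, n_keep=3, n_coeffs=4):
    """Object-based index: DCT features per 8x8 block, k-means over the blocks,
    and the centroids of the largest clusters as the stored signature."""
    h, w = (d - d % 8 for d in img.shape)
    blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
    feats = np.array([dctn(b, norm='ortho')[:n_coeffs, :n_coeffs].ravel() for b in blocks])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    largest = np.argsort(counts)[::-1][:n_keep]          # clusters with the most blocks
    return km.cluster_centers_[largest]                   # signature stored in the index

img = np.random.rand(128, 128)
signature = index_image(img)
print(signature.shape)     # (3, 16): three centroid feature vectors of length 16
```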
2970 Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages
Authors: Chutimon Thitipornvanid, Siripun Sanguansintukul
Abstract:
Choosing the right metadata is critical, as good information (metadata) attached to an image will facilitate its visibility among a pile of other images. The image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a technique that is simple but efficient for predicting a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become of great assistance to librarians, researchers and many others for automatically and efficiently identifying a set of human images out of a greater set of images.
Keywords: Metadata, Prediction, Multi-layer perceptron, Human facial image, Image mining.
2969 A Quantum Algorithm of Constructing Image Histogram
Authors: Yi Zhang, Kai Lu, Ying-hui Gao, Mo Wang
Abstract:
The histogram plays an important statistical role in digital image processing. However, the existing quantum image models are deficient for this kind of image statistical processing because different gray scales are not distinguishable. In this paper, a novel quantum image representation model is first proposed, in which pixels with different gray scales can be distinguished and operated on simultaneously. Based on the new model, a fast quantum algorithm for constructing the histogram of a quantum image is designed. A performance comparison reveals that the new quantum algorithm achieves an approximately quadratic speedup over its classical counterpart. The proposed quantum model and algorithm have significant implications for future research on quantum image processing.
Keywords: Quantum image representation, Quantum algorithm, Image histogram.