Search results for: Image Features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2819


2669 Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution

Authors: P. Joyce Beryl Princess, Y. Harold Robinson

Abstract:

Image processing is a branch of signal processing in which the input is an image and the output is either an image or a set of image parameters. Image resolution is frequently cited as one of the most important properties of an image, and resolution enhancement aims to produce a high-resolution image, with a high PSNR value, from a low-resolution input. In the proposed method, the Stationary Wavelet Transform (SWT) is used for edge detection and to minimize the information loss caused by downsampling in each of the DWT subbands, while the Inverse Discrete Wavelet Transform (IDWT) reconstructs the high-resolution image from the downsampled subbands. Because a noisy input would otherwise produce an output with a low PSNR value, a noise-robust resolution enhancement technique based on adaptive sub-band thresholding is applied. Combining image denoising with resolution enhancement in this way yields images with high PSNR values; the proposed method improves image resolution and reaches the optimized threshold.
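
As a rough illustration of the pipeline described in this abstract, the following Python sketch (NumPy, PyWavelets, OpenCV) corrects the interpolated DWT detail subbands with the corresponding SWT subbands and recombines them with the input image by IDWT to double the resolution. It is not the authors' implementation, it omits the adaptive sub-band thresholding step, and the function name and wavelet choice are illustrative assumptions.

# Illustrative DWT/SWT resolution-enhancement sketch (not the authors' code);
# assumes a single-channel image with even dimensions.
import numpy as np
import pywt
import cv2

def dwt_swt_upscale(img, wavelet="haar"):
    img = img.astype(np.float32)
    h, w = img.shape
    # Single-level DWT: approximation plus three half-size detail subbands.
    _, (LH, HL, HH) = pywt.dwt2(img, wavelet)
    # Single-level SWT: full-size detail subbands (no downsampling).
    (_, (sLH, sHL, sHH)), = pywt.swt2(img, wavelet, level=1)
    # Interpolate the DWT details to full size and correct them with the SWT details.
    up = lambda b: cv2.resize(b, (w, h), interpolation=cv2.INTER_CUBIC)
    LH, HL, HH = up(LH) + sLH, up(HL) + sHL, up(HH) + sHH
    # Using the input image itself as the approximation band makes the inverse
    # transform produce an image of twice the input size.
    return pywt.idwt2((img, (LH, HL, HH)), wavelet)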

Keywords: Image Processing, Inverse Discrete wavelet transform, PSNR.

PDF Downloads: 1749
2668 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck in this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

PDF Downloads: 634
2667 Union is Strength in Lossy Image Compression

Authors: Mario Mastriani

Abstract:

In this work, we present a comparison between different techniques of image compression. First, the image is divided into blocks, which are organized according to a certain scan order. Then, several compression techniques are applied, either combined or alone; these include wavelets (Haar's basis), the Karhunen-Loève Transform, and others. Simulations show that the combined versions perform best, with lower Mean Squared Error (MSE), higher Peak Signal-to-Noise Ratio (PSNR) and better image quality, even in the presence of noise.
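
For illustration, a minimal NumPy sketch of one of the techniques compared here, block-based Karhunen-Loève Transform compression, is given below; the block size, the number of retained components and the plain row-by-row block ordering are assumptions made for the example, not the paper's exact configuration.

# Block-based KLT compression sketch (illustrative); assumes a grayscale image.
import numpy as np

def klt_block_compress(img, block=8, keep=16):
    h, w = (d - d % block for d in img.shape)          # crop to whole blocks
    blocks = (img[:h, :w].astype(np.float64)
                 .reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2).reshape(-1, block * block))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # Eigenvectors of the block covariance matrix define the KLT basis.
    _, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    basis = vecs[:, ::-1][:, :keep]                    # keep leading components
    recon = centered @ basis @ basis.T + mean          # project and reconstruct
    out = (recon.reshape(h // block, w // block, block, block)
                .swapaxes(1, 2).reshape(h, w))
    mse = np.mean((img[:h, :w] - out) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)             # assumes 8-bit range
    return out, mse, psnr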

Keywords: Haar's basis, Image compression, Karhunen-Loève Transform, Morton's scan, row-rafter scan.

PDF Downloads: 1712
2666 Digital Image Watermarking in the Wavelet Transform Domain

Authors: Kamran Hameed, Adeel Mumtaz, S.A.M. Gilani

Abstract:

In this paper, we begin by characterizing the most important and distinguishing features of wavelet-based watermarking schemes, drawing on a study of the overwhelming number of algorithms proposed in the literature. The copyright protection application scenario is considered and, building on the experience gained, two distinguishing watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's technique [1] is more robust to standard noise attacks than Dote's technique [2].
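
To make the common pattern behind the surveyed schemes concrete, a generic additive wavelet-domain watermarking sketch in Python with PyWavelets is shown below; it is not Joo's or Dote's method, and the strength factor, wavelet and decomposition level are illustrative assumptions.

# Generic additive DWT watermarking sketch: a pseudo-random watermark is added
# to level-2 detail coefficients and later detected by correlation.
# Assumes a grayscale image with even dimensions.
import numpy as np
import pywt

def embed(img, key=42, alpha=4.0, wavelet="db2"):
    cA, (cH, cV, cD), lvl1 = pywt.wavedec2(img.astype(np.float32), wavelet, level=2)
    wm = np.random.default_rng(key).standard_normal(cH.shape)
    marked = pywt.waverec2([cA, (cH + alpha * wm, cV, cD), lvl1], wavelet)
    return marked, wm

def detect(marked, wm, wavelet="db2"):
    _, (cH, _, _), _ = pywt.wavedec2(marked.astype(np.float32), wavelet, level=2)
    # Normalized correlation between the suspect subband and the watermark.
    return float(np.corrcoef(cH.ravel(), wm.ravel())[0, 1])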

Keywords: Digital image, Copyright protection, Watermarking, Wavelet transform.

PDF Downloads: 2603
2665 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancement models for nonlinear anisotropic diffusion methods that greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested on various ultrasound images, including phantom and clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
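
The scatterer-density weighting itself is specific to this paper, but the gradient-weighted nonlinear diffusion it builds on can be sketched in a few lines of NumPy. The sketch below is a Perona-Malik style scalar diffusion, not the full tensor-based EE/CE formulation, and the iteration count, conduction parameter and step size are illustrative.

# Gradient-weighted nonlinear diffusion sketch (Perona-Malik style); the paper's
# SDWNAD additionally weights the diffusivity by an estimate of local scatterer
# density, which is omitted here.
import numpy as np

def nonlinear_diffusion(img, n_iter=30, kappa=20.0, dt=0.2):
    u = img.astype(np.float32).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping diffusivity: small where the gradient is large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u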

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

PDF Downloads: 1864
2664 Color Image Edge Detection using Pseudo-Complement and Matrix Operations

Authors: T. N. Janakiraman, P. V. S. S. R. Chandra Mouli

Abstract:

A color image edge detection algorithm is proposed in this paper using pseudo-complement and matrix rotation operations. First, the pseudo-complement method is applied to each channel of the image. Then, matrix operations are applied to the output image of the first stage. Dominant pixels are obtained by image differencing between the pseudo-complement image and the matrix-operated image. Median filtering is carried out to smooth the image, thereby removing isolated pixels. Finally, the dominant or core pixels occurring in at least two channels are selected. Plotting the selected edge pixels gives the final edge map of the given color image. The algorithm is also tested in the HSV and YCbCr color spaces. Experimental results on both synthetic and real-world images show that the accuracy of the proposed method is comparable to other color edge detectors. All the proposed procedures can be applied to any image domain and run in polynomial time.

Keywords: Color edge detection, dominant pixels, matrix rotation/shift operations, pseudo-complement.

PDF Downloads: 2292
2663 Multiscale Blind Image Restoration with a New Method

Authors: Alireza Mallahzadeh, Hamid Dehghani, Iman Elyasi

Abstract:

A new method, based on NormalShrink thresholding and a modified version of the Katsaggelos and Lay algorithm, is proposed for multiscale blind image restoration. The method deals with both noise and blur in images. It is shown that NormalShrink gives the highest S/N (signal-to-noise ratio) for the image denoising process. The multiscale blind image restoration is divided into two parts: the first part of this paper proposes NormalShrink for image denoising, and the second part proposes a modified version of the Katsaggelos and Lay algorithm for blur estimation; the two methods are then combined to achieve multiscale blind image restoration.

Keywords: Multiscale blind image restoration, image denoising, blur estimation.

PDF Downloads: 1683
2662 Digital Image Encryption Scheme using Chaotic Sequences with a Nonlinear Function

Authors: H. Ogras, M. Turk

Abstract:

In this study, a system of encryption based on chaotic sequences is described. The system is used to encrypt digital image data for the purpose of secure image transmission. An image secure communication scheme based on Logistic map chaotic sequences with a nonlinear function is proposed in this paper. Encryption and decryption keys are obtained from a one-dimensional Logistic map that generates the secret key used as the input of the nonlinear function. The receiver can recover the information from the received signal and the identical key sequences through the inverse system technique. The results of computer simulations indicate that the transmitted source image can be correctly and reliably recovered using the proposed scheme, even over a noisy channel. The performance of the system is discussed by evaluating the quality of the recovered image with and without channel noise.
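
A minimal sketch of the general idea (not the authors' exact scheme) is given below: a one-dimensional Logistic map generates a keystream that is passed through a nonlinear function and XOR-ed with the pixel values; the nonlinear function, the key values x0 and r, and the assumption of an 8-bit grayscale image are all illustrative.

# Logistic-map image encryption sketch; XOR is its own inverse, so calling the
# function again with the same key decrypts the image.
import numpy as np

def logistic_keystream(x0, r, n):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)        # logistic map iteration
        out[i] = x
    return out

def chaotic_xor(img, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, img.size)
    # Nonlinear mapping of the chaotic sequence to byte values (assumed form).
    key = np.floor(np.sin(np.pi * ks) * 255).astype(np.uint8)
    return (img.reshape(-1) ^ key).reshape(img.shape)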

Keywords: Digital image, Image encryption, Secure communication

PDF Downloads: 2191
2661 Face Recognition Using Discrete Orthogonal Hahn Moments

Authors: Fatima Akhmedova, Simon Liao

Abstract:

One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work we propose a set of Hahn moments as a new approach to feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy and ability to extract features both globally and locally. To assess the applicability of Hahn moments to face recognition, we conduct two experiments, on the Olivetti Research Laboratory (ORL) database and on the University of Notre Dame (UND) X1 biometric collection. A fusion of the global features with features from local facial regions is used as input to a conventional k-NN classifier. The method reaches an accuracy of 93% correctly recognized subjects on the ORL database and 94% on the UND database.

Keywords: Face Recognition, Hahn moments, Recognition-by-parts, Time-lapse.

PDF Downloads: 1738
2660 A Survey on Principal Aspects of Secure Image Transmission

Authors: Ali Soleymani, Zulkarnain Md Ali, Md Jan Nordin

Abstract:

This paper reviews the aspects and approaches involved in designing an image cryptosystem. A general introduction to cryptography and image encryption is given first, followed by a survey of the different techniques in image encryption and the related work for each technique. Finally, general security analysis methods for encrypted images are presented.

Keywords: Image, cryptography, encryption, security, analysis.

PDF Downloads: 2336
2659 Automatic Microaneurysm Quantification for Diabetic Retinopathy Screening

Authors: A. Sopharak, B. Uyyanonvara, S. Barman

Abstract:

Microaneurysms are a key indicator of diabetic retinopathy, which can potentially cause damage to the retina. Early detection and automatic quantification are the keys to preventing further damage. In this paper, which focuses on automatic microaneurysm detection in images acquired through non-dilated pupils, we present a series of experiments on feature selection and automatic microaneurysm pixel classification. We found that the best feature set is a combination of 10 features: the pixel's intensity in the shade-corrected image, the pixel hue, the standard deviation of the shade-corrected image, DoG4, the area of the candidate MA, the perimeter of the candidate MA, the eccentricity of the candidate MA, the circularity of the candidate MA, the mean intensity of the candidate MA in the shade-corrected image, and the ratio of the major to minor axis length of the candidate MA. The overall sensitivity, specificity, precision, and accuracy are 84.82%, 99.99%, 89.01%, and 99.99%, respectively.
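
The classification stage described above can be sketched with scikit-learn as follows; the feature extraction is omitted, and the feature matrix X (one row of 10 features per candidate pixel) and label vector y are assumed to have been prepared beforehand.

# Naive Bayes pixel classification sketch; X is (n_samples, 10), y holds 0/1 labels.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def classify_ma_pixels(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GaussianNB().fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return model, sensitivity, specificity, precision, accuracy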

Keywords: Diabetic retinopathy, microaneurysm, naive Bayes classifier

PDF Downloads: 2141
2658 Bridging Quantitative and Qualitative of Glaucoma Detection

Authors: Noor Elaiza Abdul Khalid, Noorhayati Mohamed Noor, Zamalia Mahmud, Saadiah Yahya, and Norharyati Md Ariff

Abstract:

Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. At present, manual diagnosis is expensive, tedious and time consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency of, and variability between, ophthalmologist opinion and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. A scientific investigation was conducted with three ophthalmologists who graded the images based on image quality. The images were then thresholded using multi-thresholding and graded in the same way as by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show that there is a small variability between the results of the ophthalmologists and of digital thresholding.
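
As a stand-in for the multi-thresholding step compared against the ophthalmologists' grading, the sketch below segments each of the four channels with multi-level Otsu thresholding using scikit-image; the choice of Otsu's method and the number of classes are assumptions, since the abstract does not specify the thresholding rule.

# Per-channel multi-thresholding sketch; assumes an 8-bit RGB fundus image.
import numpy as np
from skimage.filters import threshold_multiotsu

def multithreshold_channels(rgb, classes=3):
    gray = rgb.mean(axis=2).astype(np.uint8)
    out = {}
    for name, ch in zip(("gray", "red", "green", "blue"),
                        (gray, rgb[..., 0], rgb[..., 1], rgb[..., 2])):
        thresholds = threshold_multiotsu(ch, classes=classes)
        out[name] = np.digitize(ch, bins=thresholds)   # label map 0..classes-1
    return out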

Keywords: Digital Fundus Image, Glaucoma Detection, Multithresholding, Segmentation.

PDF Downloads: 1988
2657 Image Restoration in Non-Linear Filtering Domain using MDB approach

Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, C. Ardil

Abstract:

This paper proposes a new technique based on a nonlinear Minmax Detector Based (MDB) filter for image restoration. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original image. Image degradation can be due to the addition of different types of noise to the original image. Image noise can be modeled in many ways, and impulse noise is one of them. Impulse noise generates pixels with gray values that are not consistent with their local neighborhood; it appears as a sprinkle of both light and dark, or only light, spots in the image. Filtering is a technique for enhancing the image. In linear filtering, the value of an output pixel is a linear combination of the neighborhood values, which can produce blur in the image; thus a variety of nonlinear smoothing techniques have been developed. The median filter is one of the most popular nonlinear filters. It is highly efficient for a small neighborhood, but for a large window, and in the case of high noise, it introduces more blurring into the image. The Centre Weighted Mean (CWM) filter has a better average performance than the median filter; however, under high-noise conditions original pixels are corrupted and noise reduction deteriorates, so this technique also has a blurring effect on the image. To illustrate the superiority of the proposed approach, the proposed new scheme has been simulated alongside the standard ones, and various restoration performance measures have been compared.
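
In the spirit of the min-max detection idea described above, the following sketch replaces only the pixels that equal the local minimum or maximum of their window (the likely impulse-corrupted pixels) with the local median, leaving the remaining pixels untouched; the window size is an assumption, and this is an illustration of the detector-plus-median pattern rather than the paper's exact MDB filter.

# Min-max detector based filtering sketch using SciPy; assumes a grayscale image.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, median_filter

def minmax_detector_filter(img, size=3):
    img = img.astype(np.float32)
    lo = minimum_filter(img, size=size)
    hi = maximum_filter(img, size=size)
    med = median_filter(img, size=size)
    suspect = (img == lo) | (img == hi)     # likely impulse-corrupted pixels
    return np.where(suspect, med, img)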

Keywords: Filtering, Minmax Detector Based (MDB), noise, centre weighted mean filter, PSNR, restoration.

PDF Downloads: 2697
2656 Digital Image Forensics: Discovering the History of Digital Images

Authors: Gurinder Singh, Kulbir Singh

Abstract:

Digital multimedia content such as images, video, and audio can be easily tampered with due to the availability of powerful editing software. Multimedia forensics is devoted to analyzing such content using various digital forensic techniques in order to validate its authenticity. Digital image forensics is dedicated to investigating the reliability of digital images by analyzing the integrity of the data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.

Keywords: Computer forensics, multimedia forensics, image ballistics, camera source identification, forgery detection.

PDF Downloads: 1734
2655 Gray Level Image Encryption

Authors: Roza Afarin, Saeed Mozaffari

Abstract:

The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population in the GA consists of the rows and columns of the input image. Instead of a subjective selection of parents from this initial population, a random generator with a predefined key is utilized; this key is necessary to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image and the histogram uniformity, in the modification and diffusion phases, respectively. The randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.

Keywords: Correlation coefficients, Genetic algorithm, Image encryption, Image entropy.

PDF Downloads: 2199
2654 Improving Image Quality in Remote Sensing Satellites using Channel Coding

Authors: H. M. Behairy, M. S. Khorsheed

Abstract:

Among the factors that characterize satellite communication channels is their high bit error rate. We present a system for still image transmission over noisy satellite channels. The system couples image compression with error control codes to improve the received image quality while maintaining the bandwidth requirements. The proposed system is tested using high-resolution satellite imagery simulated over a Rician fading channel. Evaluation results show an improvement in the overall system, including image quality and bandwidth requirements, compared to similar systems with different coding schemes.

Keywords: Image Transmission, Image Compression, Channel Coding, Error-Control Coding, DCT, Convolution Codes, Viterbi Algorithm, PCGC.

PDF Downloads: 1811
2653 Algorithm for Bleeding Determination Based On Object Recognition and Local Color Features in Capsule Endoscopy

Authors: Yong-Gyu Lee, Jin Hee Park, Youngdae Seo, Gilwon Yoon

Abstract:

Automatic determination of blood in dim or noisy capsule endoscopic images is difficult due to the low S/N ratio, and in particular the analysis of such images may be inaccurate owing to external disturbances. Therefore, we propose detection methods that do not depend only on the color bands. In locating bleeding regions, the identification of object outlines in the frame and the features of their local colors were taken into consideration. The results showed that the capability of detecting bleeding was much improved.

Keywords: Endoscopy, object recognition, bleeding, image processing, RGB.

PDF Downloads: 1872
2652 Extracting Road Signs using the Color Information

Authors: Wen-Yen Wu, Tsung-Cheng Hsieh, Ching-Sung Lai

Abstract:

In this paper, we propose a method to extract road signs. First, the grabbed image is converted into the HSV color space to detect the road signs. Second, morphological operations are used to reduce noise. Finally, the road sign is extracted using its geometric properties. The feature extraction of the road sign is done using the color information. The proposed method has been tested in real situations. The experimental results show that the proposed method can extract road sign features effectively.
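
The three steps described above translate directly into a short OpenCV sketch: HSV conversion, colour thresholding, morphological noise removal, then geometric filtering of the remaining regions. The red-colour bounds and the area and aspect-ratio limits are illustrative assumptions, not values from the paper.

# Road sign extraction sketch for red signs; assumes an 8-bit BGR image.
import cv2
import numpy as np

def extract_red_signs(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so two ranges are combined.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep regions whose size and aspect ratio resemble a sign.
        if cv2.contourArea(c) > 200 and 0.5 < w / h < 2.0:
            boxes.append((x, y, w, h))
    return boxes, mask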

Keywords: Color information, image processing, road sign.

PDF Downloads: 2186
2651 Calculus Logarithmic Function for Image Encryption

Authors: Adil AL-Rammahi

Abstract:

When we want to make data secure against various attacks and to preserve data integrity, we must encrypt the data before it is transmitted or stored. This paper introduces a new, effective and lossless image encryption algorithm using a natural logarithmic function. The new algorithm encrypts an image through a three-stage process. In the first stage, a reference natural logarithmic function is generated as the foundation of the encrypted image. The image numeral matrix is then decomposed into five integer numbers, and the positions of these numbers are transformed into matrices. The advantage of this method is that it can efficiently encrypt a variety of digital images, such as binary images, gray-level images, and RGB images, without any quality loss. The principles of the presented scheme could be applied to provide complexity, and hence security, for a variety of data systems, such as images and others.

Keywords: Linear Systems, Image Encryption, Calculus.

PDF Downloads: 2352
2650 Better Perception of Low Resolution Images Using Wavelet Interpolation Techniques

Authors: Tarun Gulati, Kapil Gupta, Dushyant Gupta

Abstract:

High-resolution images are always desired, as they contain more information and can better represent the original data. Interpolation is therefore used to convert a low-resolution image into a high-resolution one. The quality of such a high-resolution image depends on the interpolation function and is assessed in terms of the sharpness of the image. This paper focuses on wavelet-based interpolation techniques, in which an input image is divided into subbands. Each subband is processed separately, and finally the processed subbands are combined to obtain the super-resolution image.

Keywords: SWT, DWTSR, DWTSWT, DWCWT.

PDF Downloads: 2135
2649 Block-Based 2D to 3D Image Conversion Method

Authors: S. Sowmyayani, V. Murugan

Abstract:

With the advent of three-dimensional (3D) technology, there has been a great deal of research on converting 2D images to 3D images. The main difference between 2D and 3D is the visual illusion of depth in 3D images. In recent years, many depth estimation techniques have been proposed. The objective of this paper is to convert 2D images to 3D images with less computation time. To this end, the input image is divided into blocks, from which the depth information is obtained. From this depth information, a depth map is generated. Then the 3D image is warped using the original image and the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets, and the experimental results are compared with other recent methods. The proposed method proved to work with less computation time and good accuracy.
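
Only the final warping step is sketched below: given the original image and a per-pixel depth map (however it was obtained), a stereo pair is synthesised by shifting pixels horizontally in proportion to depth. The maximum disparity is an illustrative parameter, and disocclusion filling is omitted.

# Depth-image-based rendering (warping) sketch; works for grayscale or colour images.
import numpy as np

def warp_to_stereo(img, depth, max_disp=16):
    h, w = depth.shape
    d = (depth.astype(np.float32) / max(depth.max(), 1e-6) * max_disp).astype(int)
    cols = np.arange(w)
    left, right = np.zeros_like(img), np.zeros_like(img)
    for y in range(h):
        lc = np.clip(cols - d[y], 0, w - 1)   # shift columns by per-pixel disparity
        rc = np.clip(cols + d[y], 0, w - 1)
        left[y, lc] = img[y]
        right[y, rc] = img[y]
    return left, right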

Keywords: Depth map, 3D image warping, image rendering, bilateral filter, minimum spanning tree.

PDF Downloads: 291
2648 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at certain predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of which equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, and the embedding is done using the cover image color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data for any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results confirm that the proposed technique is more secure compared with other traditional techniques.
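
The embedding idea can be sketched as follows, with a single grayscale channel standing in for the colour channels and a list of secret crop coordinates standing in for the scheme's key; the message is assumed to fit within the crops.

# Crop-plus-LSB embedding sketch; img is an 8-bit grayscale array and crops is a
# secret list of (y, x, height, width) tuples.
import numpy as np

def embed_lsb_in_crops(img, message, crops):
    stego = img.copy()
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    sections = np.array_split(bits, len(crops))   # one message section per crop
    for (y, x, h, w), sec in zip(crops, sections):
        region = stego[y:y + h, x:x + w].reshape(-1)
        region[:len(sec)] = (region[:len(sec)] & 0xFE) | sec   # overwrite the LSBs
        stego[y:y + h, x:x + w] = region.reshape(h, w)
    return stego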

Keywords: Steganography, stego, LSB, crop.

PDF Downloads: 1496
2647 A Dual Digital-Image Watermarking Technique

Authors: Maha Sharkas, Dahlia ElShafie, Nadder Hamdy

Abstract:

Image watermarking has become an important tool for intellectual property protection and authentication. In this paper a watermarking technique is suggested that incorporates two watermarks in a host image for improved protection and robustness. A watermark in the form of a PN sequence (called the secondary watermark) is embedded in the wavelet domain of a primary watermark before being embedded in the host image. The technique has been tested using the Lena image as the host and the cameraman image as the primary watermark. The embedded PN sequence was detectable through correlation among five other sequences, and a PSNR of 44.1065 dB was measured. Furthermore, to test the robustness of the technique, the watermarked image was exposed to four types of attacks, namely compression, low-pass filtering, salt-and-pepper noise and luminance change. In all cases the secondary watermark was easy to detect, even when the primary one was severely distorted.

Keywords: DWT, Image watermarking, watermarking techniques, wavelets.

PDF Downloads: 2663
2646 Relevant LMA Features for Human Motion Recognition

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make the learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.

Keywords: Human motion recognition, Discriminative LMA features, random forest, features reduction.

PDF Downloads: 719
2645 Speckle Reducing Contourlet Transform for Medical Ultrasound Images

Authors: P.S. Hiremath, Prema T. Akkasaligar, Sharan Badiger

Abstract:

Speckle noise affects all coherent imaging systems, including medical ultrasound. In medical images, noise suppression is a particularly delicate and difficult task: a tradeoff between noise reduction and the preservation of actual image features has to be made in a way that enhances the diagnostically relevant image content. Even though wavelets have been used extensively for denoising speckle images, we have found that denoising using contourlets gives much better performance in terms of SNR, PSNR, MSE, variance and correlation coefficient. The objective of the paper is to determine the number of levels of Laplacian pyramidal decomposition, the number of directional decompositions to perform on each pyramidal level, and the thresholding schemes that yield optimal despeckling of medical ultrasound images in particular. In the proposed method, the log-transformed original ultrasound image is subjected to the contourlet transform to obtain the contourlet coefficients. The transformed image is denoised by applying thresholding techniques to the individual band-pass subbands using a Bayes shrinkage rule. We quantify the achieved performance improvement.

Keywords: Contourlet transform, Despeckling, Pyramidal directional filter bank, Thresholding.

PDF Downloads: 2403
2644 Data Embedding Based on Better Use of Bits in Image Pixels

Authors: Rehab H. Alwan, Fadhil J. Kadhim, Ahmad T. Al-Taani

Abstract:

In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, a gray-level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit prior to the LSB represents the edged image after gray-level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, hiding, identifying and authenticating text embedded within the digital images. Image embedding is considered to be one of the good compression methods in terms of saving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of the three embedded images and the hiding of text information are discussed and illustrated in the following sections.

Keywords: Image embedding, Edge detection, gray level connectivity, information hiding, digital image compression.

PDF Downloads: 2099
2643 Improvement of Blood Detection Accuracy using Image Processing Techniques suitable for Capsule Endoscopy

Authors: Yong-Gyu Lee, Gilwon Yoon

Abstract:

Bleeding in the digestive tract is an important diagnostic parameter for patients. Blood in an endoscopic image can be identified by investigating the color tone of blood, which varies with the degree of oxygenation, under- or over-illumination, food debris and secretions, etc. However, we found that the way raw images obtained from the capsule detectors are pre-processed is very important. We applied various image processing methods suitable for capsule endoscopic images in order to remove noise and unbalanced sensitivities of the image pixels. The results showed that considerable improvement was achieved by applying these additional pre-processing techniques before the algorithm for determining bleeding areas.

Keywords: blood detection, capsule endoscopy, image processing.

PDF Downloads: 1823
2642 Blind Image Deconvolution by Neural Recursive Function Approximation

Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu

Abstract:

This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within the blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, the reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.

Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.

PDF Downloads: 2006
2641 Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking

Authors: Siraa Ben Ftima, Mourad Talbi, Tahar Ezzedine

Abstract:

In this paper, we present a technique for the secure watermarking of grayscale and color images. The technique consists of applying the Singular Value Decomposition (SVD) in the Lifting Wavelet Transform (LWT) domain in order to insert a grayscale watermark image into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images. The performance of this technique is demonstrated by computing the PSNR (Peak Signal-to-Noise Ratio), the MSE (Mean Square Error) and the SSIM (structural similarity).
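
A simplified sketch of the embedding side is given below; PyWavelets' discrete wavelet transform is used as a stand-in for the lifting wavelet transform, the watermark is simply added to the singular values of the approximation band rather than handled exactly as in the paper, and alpha and the grayscale-only setting are assumptions.

# SVD-in-wavelet-domain watermark embedding sketch (illustrative, not the
# paper's exact scheme); assumes grayscale host and watermark images.
import numpy as np
import pywt

def embed_svd_watermark(host, watermark, alpha=0.05, wavelet="haar"):
    LL, details = pywt.dwt2(host.astype(np.float32), wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    wm = np.resize(watermark.astype(np.float32).ravel(), S.shape)
    S_marked = S + alpha * wm                 # perturb the singular values
    LL_marked = (U * S_marked) @ Vt           # rebuild the approximation band
    return pywt.idwt2((LL_marked, details), wavelet)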

Keywords: Color image, grayscale image, singular values decomposition, lifting wavelet transform, image watermarking, watermark, secure.

PDF Downloads: 976
2640 Secure E-Pay System Using Steganography and Visual Cryptography

Authors: K. Suganya Devi, P. Srinivasan, M. P. Vaishnave, G. Arutperumjothi

Abstract:

Today's internet world is highly prone to various online attacks, of which the most harmful is phishing, where attackers host fake websites that look very similar to the legitimate ones. We propose an image-based authentication scheme using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true color (RGB) images and uses the Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside the cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares; the original image can be identified only when both qualified shares are simultaneously available, while an individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to detect by RS steganalysis.

Keywords: Image security, random LSB, steganography, visual cryptography.

PDF Downloads: 1332