Search results for: digital image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3356

3146 A Comparative Study of Image Segmentation using Edge-Based Approach

Authors: Rajiv Kumar, Arthanariee A. M.

Abstract:

Image segmentation is the process of partitioning a given image into several parts so that each part can be analyzed further. Numerous image segmentation techniques are available in the literature. In this paper, the authors analyze the edge-based approach to image segmentation. They implement different edge operators, namely Prewitt, Sobel, LoG, and Canny, on the basis of their threshold parameter. The results of these operators are shown for various images.
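
To make the comparison concrete, here is a minimal sketch of how these four operators might be run side by side in Python with OpenCV (the paper's own implementation is in MATLAB; the file name and threshold values below are illustrative assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Prewitt has no OpenCV built-in, so apply its kernels via filter2D.
kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
px = cv2.filter2D(img, cv2.CV_32F, kx)
py = cv2.filter2D(img, cv2.CV_32F, kx.T)
prewitt = cv2.magnitude(px, py)

# Sobel gradient magnitude.
sobel = cv2.magnitude(cv2.Sobel(img, cv2.CV_32F, 1, 0),
                      cv2.Sobel(img, cv2.CV_32F, 0, 1))

# Laplacian of Gaussian: smooth first, then take the Laplacian.
log = cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 0), cv2.CV_32F)

# Canny applies hysteresis thresholding internally.
canny = cv2.Canny(img, 100, 200)

# Binarize the gradient maps at an illustrative threshold.
edge_maps = {name: (np.abs(m) > 60).astype(np.uint8) * 255
             for name, m in [("Prewitt", prewitt), ("Sobel", sobel), ("LoG", log)]}
```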

Keywords: Edge Operator, Edge-based Segmentation, Image Segmentation, Matlab 10.4.

3145 Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology

Authors: Jagadish H. Pujar, Pallavi S. Gurjal, Shambhavi D. S, Kiran S. Kunnur

Abstract:

Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by an edge-detection stage, which promises to be a boon for medical image diagnosis. The main objective of the algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the noise type, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative, or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian, or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. Third, the edge-detected medical image is segmented by the method of normalized-cut eigenvectors. The method is validated through experiments on real images, simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast, and heavy noise, and that it has potential for practical use in medical image diagnosis.
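
A compact sketch of the noise-adaptive filtering stage followed by Canny edge detection might look as follows (OpenCV assumed; OpenCV has no Frost filter, so a bilateral filter stands in for the multiplicative-noise branch, and the noise label is taken as given rather than estimated from local histograms):

```python
import cv2

def denoise(img, noise_type):
    # Pair the identified noise model with the paper's filter choice; the
    # Frost filter for multiplicative speckle is not in OpenCV, so a
    # bilateral filter is used here as an illustrative stand-in.
    if noise_type == "impulsive":
        return cv2.medianBlur(img, 3)
    if noise_type == "additive":
        return cv2.GaussianBlur(img, (5, 5), 1.0)
    return cv2.bilateralFilter(img, 9, 75, 75)

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(denoise(img, "impulsive"), 50, 150)
```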

Keywords: Image Segmentation, Image smoothing, Edge Detection, Impulsive noise, Gaussian noise, Median filter, Canny edge, Eigen values, Eigen vector.

3144 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments; criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost 5% of the time, as the File Path Agent prescribed deleting 1,510 files while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

3143 A Feature-based Invariant Watermarking Scheme Using Zernike Moments

Authors: Say Wei Foo, Qi Dong

Abstract:

In this paper, a novel feature-based image watermarking scheme is proposed. Zernike moments, which have invariance properties, are adopted in the scheme. In the proposed scheme, feature points are first extracted from the host image and several circular patches centered on these points are generated. The patches are used as carriers of watermark information because they can be regenerated to locate watermark embedding positions even when watermarked images are severely distorted. The Zernike transform is then applied to the patches to calculate local Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments, followed by a false-alarm analysis. Experimental results show that the quality degradation of the watermarked image is visually transparent. The proposed scheme is very robust against image processing operations and geometric attacks.

Keywords: Image watermarking, Zernike moments, Feature point, Invariance, Robustness.

3142 Enhance Image Transmission Based on DWT with Pixel Interleaver

Authors: Muhanned Alfarras

Abstract:

The recent growth of multimedia transmission over wireless communication systems raises the challenge of protecting data from loss due to wireless channel effects. Images are corrupted by noise and fading when transmitted over a wireless channel. Over such a channel an image is transmitted block by block, and due to severe fading entire image blocks can be damaged. The aim of this paper arises from the need to enhance digital images at the wireless receiver side. A Boundary Interpolation (BI) algorithm using wavelets is adapted here to reconstruct a lost block of the image at the receiver, based on the correlation between the lost block and its neighbors. A new technique combining the BI algorithm with a pixel interleaver has been implemented: the pixel interleaver distributes the pixels of the original image to new positions before transmission, so a block lost in the wireless channel affects only isolated pixels. The lost pixels can then be recovered at the receiver side using the BI algorithm. The results showed that the proposed BI algorithm with pixel interleaving performs better in terms of MSE and PSNR.
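
The interleaving step itself is simple to sketch; the following is an illustrative keyed permutation in Python, not the authors' implementation (the seed plays the role of a shared interleaving key):

```python
import numpy as np

def interleave(img, seed=42):
    # Scatter pixels with a keyed pseudo-random permutation, so one lost
    # transmission block damages isolated pixels rather than a region.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def deinterleave(scrambled, perm):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

After de-interleaving at the receiver, a lost channel block shows up as scattered single pixels whose neighbors survive, which is exactly the situation the boundary interpolation step can repair.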

Keywords: Image Transmission, Wavelet, Pixel Interleaver, Boundary Interpolation Algorithm

3141 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques

Authors: Hossein Nezamabadi-pour, Saeid Saryazdi

Abstract:

In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. Our proposed approach is object-based image indexing. For each 8×8 block in the DCT domain, a feature vector is extracted. The feature vectors of all blocks of the image are then clustered into groups using the k-means algorithm, with each cluster representing a distinct object in the image. We then select the clusters with the largest membership after clustering; the centroids of the selected clusters are taken as image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups are reported for automatic image classification.
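
The indexing pipeline can be sketched in a few lines (scipy and scikit-learn assumed; the choice of 9 low-frequency coefficients, 8 clusters, and 4 retained clusters is illustrative, not the paper's configuration):

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.cluster import KMeans

def block_dct_features(img, n=8):
    # 2-D DCT of every non-overlapping 8x8 block; a few low-frequency
    # coefficients serve as the block's feature vector.
    h, w = img.shape
    feats = []
    for i in range(0, h - h % n, n):
        for j in range(0, w - w % n, n):
            block = img[i:i + n, j:j + n].astype(float)
            c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            feats.append(c[:3, :3].ravel())
    return np.array(feats)

img = np.random.randint(0, 256, (128, 128))      # stand-in for a test image
feats = block_dct_features(img)
km = KMeans(n_clusters=8, n_init=10).fit(feats)
largest = np.argsort(np.bincount(km.labels_))[::-1][:4]   # biggest clusters
index_vectors = km.cluster_centers_[largest]     # indexed into the database
```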

Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.

3140 Video Data Mining based on Information Fusion for Tamper Detection

Authors: Girija Chetty, Renuka Biswas

Abstract:

In this paper, we propose novel algorithmic models based on information fusion and feature transformation in a cross-modal subspace for different types of residue features extracted from several intra-frame and inter-frame pixel sub-blocks in video sequences, for detecting digital video tampering or forgery. An evaluation of the proposed residue features – the noise residue features and the quantization features – their transformation in the cross-modal subspace, and their multimodal fusion for an emulated copy-move tamper scenario shows a significant improvement in tamper detection accuracy compared to single-mode features without transformation in the cross-modal subspace.

Keywords: image tamper detection, digital forensics, correlation features, image fusion

3139 Proposed Developments of Elliptic Curve Digital Signature Algorithm

Authors: Sattar B. Sadkhan, Najlae Falah Hameed

Abstract:

The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of DSA; it is a digital signature scheme designed to produce a digital signature based on a secret number known only to the signer and on the actual message being signed. These digital signatures are considered the digital counterparts of handwritten signatures and are the basis for validating the authenticity of a connection. The security of these schemes rests on the infeasibility of computing the signature without the private key. In this paper we introduce a proposed development of the original ECDSA with increased complexity.
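
For reference, the baseline ECDSA sign/verify cycle that the proposal builds upon looks like this with the Python cryptography package (standard library usage, not the authors' modified scheme):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"message to be signed"

# Sign with the private key; the signature depends on the message digest
# and a per-signature secret nonce.
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can verify; verification raises
# InvalidSignature on failure rather than returning False.
try:
    private_key.public_key().verify(signature, message,
                                    ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```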

Keywords: Elliptic Curve Digital Signature Algorithm, DSA.

3138 A Robust Method for Encrypted Data Hiding Technique Based on Neighborhood Pixels Information

Authors: Ali Shariq Imran, M. Younus Javed, Naveed Sarfraz Khattak

Abstract:

This paper presents a novel method for data hiding that uses neighborhood pixel information to calculate the number of bits that can be used for substitution, together with a modified Least Significant Bit (LSB) technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. To determine the number of bits available for data hiding, the technique uses the green component of the image, as it is less sensitive to the human eye, making it practically impossible for a viewer to tell whether the image carries hidden data. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image for additional security. The overall process consists of three main modules, namely embedding, encryption, and extraction.
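
A rough sketch of variable-capacity LSB embedding in the green channel is given below; the range-based capacity rule is an assumption standing in for the paper's neighborhood-information measure, and the encryption stage is omitted:

```python
import numpy as np

def bits_per_pixel(green):
    # Local contrast in a 3x3 neighbourhood decides how many LSBs can be
    # replaced unnoticed; this range-based rule is an illustrative
    # stand-in for the paper's neighbourhood-information measure.
    pad = np.pad(green.astype(int), 1, mode="edge")
    h, w = green.shape
    caps = np.ones((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            caps[i, j] = min(3, 1 + (win.max() - win.min()) // 64)
    return caps

def embed(green, bitstring):
    caps, out = bits_per_pixel(green), green.copy()
    flat, k = out.ravel(), 0
    for p, n in enumerate(caps.ravel()):
        if k >= len(bitstring):
            break
        n = int(n)
        chunk = bitstring[k:k + n].ljust(n, "0")
        k += n
        # Clear the n lowest bits and write the payload chunk into them.
        flat[p] = (int(flat[p]) & ~((1 << n) - 1)) | int(chunk, 2)
    return out
```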

Keywords: Data hiding, image processing, information security, steganography.

3137 Detection and Pose Estimation of People in Images

Authors: Mousa Mojarrad, Amir Masoud Rahmani, Mehrab Mohebi

Abstract:

Detection, feature extraction, and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes, and the high dimensionality of articulated body models; they have also become important topics in image, signal, and vision computing in recent years. In this paper, four body types of people in 2D images are tested by the proposed system. The system extracts a person's size category (tall fat, short fat, tall thin, or short thin) from the image: fat or thin is determined from the human body extracted from the image. The system also extracts body measurements, such as height and width, and presents them in the output.

Keywords: Analysis of Image Processing, Canny Edge Detection, Human Body Recognition, Measurement, Pose Estimation, 2D Human Dimension.

3136 Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages

Authors: Chutimon Thitipornvanid, Siripun Sanguansintukul

Abstract:

Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility in a pile of other images. An image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a technique that is simple yet efficient for predicting a single human image from a website, using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great assist to librarians, researchers, and many others in automatically and efficiently identifying a set of human images out of a greater set of images.

Keywords: Metadata, Prediction, Multi-layer perceptron, Human facial image, Image mining.

3135 Statistical Texture Analysis

Authors: G. N. Srinivasan, G. Shobha

Abstract:

This paper presents an overview of the methodologies and algorithms for statistical texture analysis of 2D images. Methods for digital image texture analysis are reviewed based on the available literature and on research work either carried out or supervised by the authors.

Keywords: Image Texture, Texture Analysis, Statistical Approaches, Structural approaches, spectral approaches, Morphological approaches, Fractals, Fourier Transforms, Gabor Filters, Wavelet transforms.

3134 Edge Detection in Digital Images Using Fuzzy Logic Technique

Authors: Abdallah A. Alshennawy, Ayman A. Aly

Abstract:

The fuzzy technique is an operator introduced to simulate, at a mathematical level, the compensatory behavior found in processes of decision making or subjective evaluation. This paper introduces such operators by way of a computer vision application: a novel method based on a fuzzy logic reasoning strategy is proposed for edge detection in digital images without determining a threshold value. The proposed approach begins by segmenting the images into regions using a floating 3×3 binary matrix. The edge pixels are mapped to a range of values distinct from each other. To assess its robustness, the results of the proposed method for different captured images are compared with those obtained with the linear Sobel operator. The method gives a consistent improvement in the smoothness and straightness of straight lines and good roundness for curved lines; at the same time, corners become sharper and can be defined easily.

Keywords: Fuzzy logic, Edge detection, Image processing, computer vision, Mechanical parts, Measurement.

3133 Active Contours with Prior Corner Detection

Authors: U.A.A. Niroshika, Ravinda G.N. Meegama

Abstract:

Deformable active contours are widely used in computer vision and image processing applications for image segmentation, especially in biomedical image analysis. The active contour, or "snake", deforms towards a target object under the control of internal, image, and constraint forces. However, if the contour is initialized with too few control points, there is a high probability of surpassing the sharp corners of the object during deformation. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of significant corners of the object detected using the Harris operator. This reconstructed contour then deforms, attracting the snake towards the targeted object without missing the corners. Experimental results with several synthetic images show the ability of the new technique to deal with sharp corners with higher accuracy than traditional methods.
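
The construction of a corner-aware initial contour might be sketched as follows with scikit-image (coordinate conventions and defaults vary between versions; ordering corners by angle around their centroid is an illustrative simplification, not necessarily the paper's construction):

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.zeros((200, 200)); img[50:150, 50:150] = 1.0   # synthetic square

# Significant corners via the Harris operator.
corners = corner_peaks(corner_harris(img), min_distance=10)  # (row, col)

# Order the corners by angle around their centroid so the initial contour
# already passes near every sharp corner before any deformation starts.
center = corners.mean(axis=0)
order = np.argsort(np.arctan2(corners[:, 0] - center[0],
                              corners[:, 1] - center[1]))
init = corners[order].astype(float)

snake = active_contour(gaussian(img, 3), init, alpha=0.015, beta=10)
```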

Keywords: Active Contours, Image Segmentation, Harris Operator, Snakes

3132 A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms

Authors: Parisut Jitpakdee, Pakinee Aimmanee, Bunyarit Uyyanonvara

Abstract:

Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization, in which all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the k-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as k-means' convergence to local optima and the premature convergence of the firefly algorithm. Experiments on three commonly used images and the comparison results show that the proposed algorithm surpasses both the baseline k-means clustering technique and the original firefly algorithm.
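
The k-means baseline that the hybrid method improves upon can be sketched briefly (scikit-learn assumed; the firefly refinement of the centroids is not shown):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(img, n_colors=16):
    # Baseline CQ: cluster all RGB pixels with k-means, then replace each
    # pixel by its cluster centroid. The hybrid method would first refine
    # these centroids with the firefly search.
    pixels = img.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=4).fit(pixels)
    palette = np.clip(km.cluster_centers_, 0, 255).astype(np.uint8)
    return palette[km.labels_].reshape(img.shape)
```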

Keywords: Clustering, Color quantization, Firefly algorithm, K-means.

3131 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain

Authors: Harpreet Kaur, Sudipta Majumdar

Abstract:

In this paper, a method for image dehazing in the lifting wavelet transform domain is proposed. The lifting Daubechies (D4) wavelet is used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This region is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. Implementing the lifting wavelet transform has the advantages of auxiliary memory savings, fast implementation, and simplicity. The proposed method also deals with the near-white scene problem, the blue horizon issue, and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and of parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy, and execution time.
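
The dark channel prior at the core of the low-frequency processing can be sketched as follows (this shows only the generic DCP step, not the lifting-wavelet decomposition; `atmosphere` is an assumed per-channel airlight estimate):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a minimum filter
    # over a local patch: the dark channel of the DCP.
    return minimum_filter(img.min(axis=2), size=patch)

def transmission(img, atmosphere, omega=0.95, patch=15):
    # t(x) = 1 - omega * dark_channel(I(x) / A), per He et al.'s prior.
    return 1.0 - omega * dark_channel(img.astype(float) / atmosphere, patch)
```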

Keywords: Dark channel prior, image dehazing, lifting wavelet transform.

3130 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering

Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song

Abstract:

The photoacoustic images are obtained from a custom-developed linear-array photoacoustic tomography system. Biological specimens are imitated by conducting phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active-region-based contour filtering to remove the noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.

Keywords: Contour filtering, linear array, photoacoustic tomography, universal back projection.

3129 A Novel Reversible Watermarking Method based on Adaptive Thresholding and Companding Technique

Authors: Nisar Ahmed Memon

Abstract:

Embedding and extraction of secret information, as well as restoration of the original un-watermarked image, are highly desirable in sensitive applications like military, medical, and law enforcement imaging. This paper presents a novel reversible data-hiding method for digital images using an integer-to-integer wavelet transform and a companding technique, which can embed and recover the secret information and restore the image to its pristine state. The method takes advantage of block-based watermarking and iterative optimization of the companding threshold, which avoids histogram pre- and post-processing. Consequently, it reduces the associated overhead usually required in most reversible watermarking techniques and keeps the distortion between the marked and original images small. Experimental results show that the proposed method outperforms the existing reversible data-hiding schemes reported in the literature.

Keywords: Adaptive Thresholding, Companding Technique, Integer Wavelet Transform, Reversible Watermarking

3128 A Smart-Visio Microphone for Audio-Visual Speech Recognition “Vmike“

Authors: Y. Ni, K. Sebri

Abstract:

The practical implementation of audio-video coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronization. In this paper, we propose a solution based on a smart CMOS image sensor that simplifies these hardware integration difficulties. Using on-chip image processing, this smart sensor calculates the X/Y projections of the captured image in real time. This on-chip projection considerably reduces the volume of the output data, permitting transmission of the condensed visual information over the same audio channel using the stereophonic input available on most standard computing devices such as PCs, PDAs, and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realized in a standard 0.35 µm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a variety of applications such as biometrics, speech recognition in noisy environments, and vocal control for military applications or disabled persons.
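
The projection itself is trivial to express in software, which is what makes it attractive to compute on-chip; a sketch of the data-volume reduction (the frame here is a random stand-in, not sensor output):

```python
import numpy as np

def xy_projections(frame):
    # Column sums give the X projection and row sums the Y projection:
    # H + W values instead of H * W pixels.
    return frame.sum(axis=0), frame.sum(axis=1)

frame = np.random.randint(0, 256, (120, 160))  # stand-in for a sensor frame
proj_x, proj_y = xy_projections(frame)
print(frame.size, "pixels ->", proj_x.size + proj_y.size, "projection values")
```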

Keywords: Audio-Visual Speech recognition, CMOS smart sensor, On-chip image processing.

3127 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha

Abstract:

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features' dimensions, ranges, internal feature normalization, and the distance measure, and it can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches, showing significantly improved performance relative to the independently evaluated baselines of previously proposed feature fusion approaches.

Keywords: Feature fusion, image retrieval, membership function, normalization.

3126 Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Authors: Dr. H. B. Kekre, Sudeep D. Thepade

Abstract:

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a bigger mosaic image from a set of partial images of the desired view. This paper presents a solution to the problem of panorama formation where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale and fuse them into a grayscale panorama, but in a multihued world a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and applying them to the grayscale parts: first the grayscale image parts are colored with the help of the color image parts, and then all parts are fused to construct the panorama. The problem of coloring grayscale images has no exact solution. In the proposed technique, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. A color palette is prepared using pixel windows of a chosen size taken from the color image parts; the grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part, the palette is searched for equivalent color values, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search through the color palette is made faster than exhaustive search by using Kekre's fast search technique. After coloring the grayscale image pieces, the final job is fusion of all the pieces to obtain the panoramic view, with the correlation coefficient used for similarity estimation between partial images.

Keywords: Panoramic View, Similarity Estimate, Color Transfer, Color Palette, Kekre's Fast Search, Kekre's LUV

3125 Image Segmentation by Mathematical Morphology: An Approach through Linear, Bilinear and Conformal Transformation

Authors: Dibyendu Ghoshal, Pinaki Pratim Acharjya

Abstract:

An image segmentation process based on mathematical morphology is studied in this paper. It is established from the first principles of the morphological process that, although the entire segmentation is a nonlinear signal processing task, constituent-wise the intermediate steps are linear, bilinear, and conformal transformations, and they give rise to a nonlinear effect in a cumulative manner.

Keywords: Image segmentation, linear transform, bilinear transform, conformal transform, mathematical morphology.

3124 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of a Digital-Noiseless, Ultra-High-Speed Image Sensor

Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi

Abstract:

Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate to 100 Mfps and above. In order to suppress electromagnetic noise at such high frequencies, a digital-noiseless image transfer scheme has been developed that uses solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation and phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate phase delay through fundamental RC networks is obtained. The estimation error of both expressions is much smaller than in previous works, less than 2% in most cases. The framework of this analysis can be extended to address similar issues in other VLSI structures.
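
The first-order Elmore approximation used as the starting point can be stated compactly; the sketch below checks it against the closed form R·C·n(n+1)/2 for a uniform n-section ladder (illustrative values, not the sensor's actual RC parameters):

```python
def elmore_delay(R, C):
    # First-order Elmore delay at the end of an RC ladder:
    # tau = sum_i R_i * (total capacitance downstream of R_i).
    n = len(R)
    return sum(R[i] * sum(C[i:]) for i in range(n))

# Uniform n-section ladder: the closed form is R * C * n * (n + 1) / 2.
n, r, c = 10, 1.0, 1.0
assert abs(elmore_delay([r] * n, [c] * n) - r * c * n * (n + 1) / 2) < 1e-9
```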

Keywords: Dimensional Analysis, ISIS, Digital-noiseless, RC network, Attenuation, Phase Delay, Elmore model

3123 Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks

Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia

Abstract:

This paper presents a deep-learning mechanism for classifying computer-generated images and photographic images. The proposed method uses a convolutional layer capable of automatically learning correlations between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than on its structural features. The proposed layer is specifically designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was assessed on recent natural and computer-generated images and was found to perform better than the current state-of-the-art methods.

Keywords: Image forensics, computer graphics, classification, deep learning, convolutional neural networks.

3122 A Novel Multiresolution based Optimization Scheme for Robust Affine Parameter Estimation

Authors: J.Dinesh Peter

Abstract:

This paper describes a new method for affine parameter estimation between image sequences. Parameter estimation is usually carried out by least squares with a quadratic cost; however, this technique is sensitive to the presence of outliers, so parameter estimation techniques for image processing applications need to be robust enough to withstand their influence. Progressively, robust estimation functions demanding non-quadratic and perhaps non-convex potentials, adopted from the statistics literature, have been used to solve such problems. Addressing the optimization of the error function in a framework for finding a globally optimal solution, the minimization can begin with a convex estimator at the coarser level and gradually introduce non-convexity, i.e., move from soft to hard redescending non-convex estimators as the iteration reaches the finer levels of the multiresolution pyramid. A comparison has been made between the results of the proposed method and the results found individually using two different estimators.
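
The coarse-level convex stage can be illustrated with iteratively reweighted least squares under the Huber estimator (a generic sketch over an assumed design matrix A and observation vector b; the paper's scheme would swap in progressively harder redescending estimators at finer pyramid levels):

```python
import numpy as np

def irls_huber(A, b, delta=1.345, iters=20):
    # Iteratively reweighted least squares with the convex Huber weight
    # function: w(r) = 1 for |r| <= delta, delta / |r| otherwise.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = b - A @ x
        absr = np.maximum(np.abs(r), 1e-12)   # guard against divide-by-zero
        w = np.where(absr <= delta, 1.0, delta / absr)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```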

Keywords: Image Processing, Affine parameter estimation, Outliers, Robust Statistics, Robust M-estimators

3121 Modified Vector Quantization Method for Image Compression

Authors: K.Somasundaram, S.Domnic

Abstract:

A low-bit-rate still image compression scheme that compresses the indices of Vector Quantization (VQ) and generates a residual codebook is proposed. The VQ indices are compressed by exploiting the correlation among image blocks, which reduces the bits per index. A residual codebook, similar to the VQ codebook, is generated to represent the distortion produced by VQ. Using this residual codebook, the distortion in the reconstructed image is removed, thereby increasing the image quality. Our scheme combines these two methods. Experimental results on the standard image Lena show that our scheme can give a reconstructed image with a PSNR value of 31.6 dB at 0.396 bits per pixel. Our scheme is also faster than the existing VQ variants.
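
The two-stage codebook idea can be sketched as follows (scikit-learn's k-means stands in for whatever codebook training the authors used; `blocks` is an assumed N×d array of flattened image blocks):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_vq(blocks, n_codes=256):
    # Main codebook on the image blocks, then a residual codebook trained
    # on the quantization errors left over by the first stage.
    main = KMeans(n_clusters=n_codes, n_init=3).fit(blocks)
    residuals = blocks - main.cluster_centers_[main.labels_]
    res = KMeans(n_clusters=n_codes, n_init=3).fit(residuals)
    return main, res

def decode(main, res, idx_main, idx_res):
    # Adding the residual codeword back removes much of the VQ distortion.
    return main.cluster_centers_[idx_main] + res.cluster_centers_[idx_res]
```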

Keywords: Image compression, Vector Quantization, Residual Codebook.

3120 A Way of Converting Color Images to Gray Scale Ones for the Color Blinds -Reducing the Colors for Tokyo Subway Map-

Authors: Katsuhiro Narikiyo, Naoto Kobayakawa

Abstract:

We propose a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for color-blind people. We treat crisp color images like the Tokyo subway map, in which each color carries important information. For color-blind viewers, similar colors cannot be distinguished; if we can convert those colors to distinct gray values, they can be told apart.
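
A minimal sketch of the idea, assuming the map's legend colors are known in advance as a palette (the palette values below are hypothetical):

```python
import numpy as np

def recolor_to_gray(img, palette):
    # Snap every pixel to its nearest palette color (removing JPEG noise),
    # then map the palette entries to gray levels spread evenly over
    # 0..255 so each legend color stays distinguishable after conversion.
    pix = img.reshape(-1, 3).astype(int)
    d2 = ((pix[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    grays = np.linspace(0, 255, len(palette)).astype(np.uint8)
    return grays[labels].reshape(img.shape[:2])

# Hypothetical legend: a few subway line colors as RGB triples.
palette = np.array([[230, 0, 18], [0, 153, 68], [0, 104, 183], [241, 141, 0]])
```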

Keywords: Image processing, Color blind, JPEG

3119 Artifacts in Spiral X-ray CT Scanners: Problems and Solutions

Authors: Mehran Yazdi, Luc Beaulieu

Abstract:

Artifacts are one of the most important factors degrading CT image quality, and they play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause these artifacts, such as the patient, the equipment, and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.

Keywords: CT artifacts, Spiral CT, Artifact removal.

3118 Image Similarity: A Genetic Algorithm Based Approach

Authors: R. C. Joshi, Shashikala Tapaswi

Abstract:

The paper proposes a genetic algorithm based approach for computing region-based image similarity. An image is denoted by a set of segmented regions reflecting its color and texture properties, and is associated with a family of image features corresponding to those regions. The resemblance of two images is then defined as the overall similarity between two families of image features and is quantified by a similarity measure that integrates properties of all the regions in the images. A genetic algorithm is applied to decide the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images and is shown to produce good results.

Keywords: Image Features, color descriptor, segmented classes, texture descriptors, genetic algorithm.

3117 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these fast neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and then each one is tested separately using a single fast neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time, using the same number of fast neural networks. In contrast to using fast neural processors alone, the speed-up ratio increases with the size of the input image when fast neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process is increased further as the weight normalization is done off-line.
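
The frequency-domain cross correlation at the heart of the approach can be sketched in a few lines of numpy (a generic implementation, not the authors' neural formulation; the sub-image decomposition and weight normalization are not shown):

```python
import numpy as np

def xcorr_fft(image, template):
    # Cross correlation computed in the frequency domain: one forward FFT
    # per operand, a pointwise product with the conjugate, and one inverse
    # FFT -- far fewer operations than sliding the template spatially.
    H, W = image.shape
    F = np.fft.rfft2(image)
    T = np.fft.rfft2(template, s=(H, W))   # zero-pad template to image size
    return np.fft.irfft2(F * np.conj(T), s=(H, W))
```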

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
