Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Authors: S. Anna Durai, E. Anna Saro

Abstract:

Image compression with artificial neural networks is an area of active research in several directions, all aimed at a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which uses steepest-descent error minimization, are popular and widely adopted, and they can be applied directly to image compression. Much of this research is directed at achieving quick convergence of the network without loss of quality in the restored image. In practice, the images to be compressed are of different types, such as dark images and high-intensity images. When such images are compressed with a back-propagation network, convergence is slow. The reason is that the image may contain many distinct gray levels that differ only slightly from those of neighboring pixels. If the gray levels are remapped so that the difference between each pixel and its neighbors is minimized, both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map its pixels. When the mapped pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
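The core preprocessing step described above, mapping gray levels through the image's empirical cumulative distribution function, can be sketched as follows. This is a minimal illustration in NumPy (essentially the classic histogram-equalization transform for 8-bit images); the paper's exact mapping and the back-propagation network itself are not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def cdf_map(image: np.ndarray) -> np.ndarray:
    """Map 8-bit gray levels through the image's empirical CDF.

    Builds a lookup table lut[g] = round(255 * CDF(g)) and applies it,
    spreading the occupied gray levels across the full 0..255 range.
    """
    hist = np.bincount(image.ravel(), minlength=256)   # gray-level histogram
    cdf = np.cumsum(hist) / image.size                 # empirical CDF in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)         # CDF-based lookup table
    return lut[image]

# Illustrative low-contrast image: gray levels crowded into 100..129
rng = np.random.default_rng(0)
img = rng.integers(100, 130, size=(64, 64), dtype=np.uint8)
mapped = cdf_map(img)
```

After the mapping, the narrow band of gray levels is redistributed over the available dynamic range, which is the property the paper exploits before feeding pixel blocks to the back-propagation network.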

Keywords: Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1333098

