Search results for: image hash
1315 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, the Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: Active contour, Bayesian, Echocardiographic image, Feature vector.
1314 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code
Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang
Abstract:
This research studies the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an augmented AR image card. The new visual tag can deliver both the revealed data of the QR code and the instant data from the AR image card. A handheld communicating device is used to read and decode the new visual tag, so that the concealed data of the tag can be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the internet; the data revealed from the QR code are represented as text. The AR image card, in contrast, is normally designed to store the corresponding data in 3-dimensional or animation/video form. By exploiting the QR code's high fault tolerance, the new visual tag can access these two different types of data with a single handheld communicating device, and can therefore carry much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient area for the AR card augmented on the QR code is 9% of the total area of the new visual tag, and 2) the best location for the augmented AR image card is the bottom-right corner of the new visual tag.
Keywords: Augmented reality, QR code, Visual tag, Handheld communicating device
1313 Multi-Focus Image Fusion Using SFM and Wavelet Packet
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet. Next, each subband is partitioned into sub-blocks, and the clearer regions of each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet transform. The experimental results show that the proposed method outperforms traditional SFM-based methods in terms of both objective and subjective assessments.
Keywords: Multi-focus image fusion, Wavelet Packet, Spatial Frequency Measurement.
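For reference, the spatial frequency of an image block is commonly defined from its row and column frequencies; the following is a minimal numpy sketch of that standard measure (the block-selection helper and all names are illustrative, not taken from the paper):

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Standard spatial frequency measure: SF = sqrt(RF^2 + CF^2)."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency (horizontal gradients)
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency (vertical gradients)
    return float(np.hypot(rf, cf))

def select_clearer(block_a: np.ndarray, block_b: np.ndarray) -> np.ndarray:
    """Pick the block with the higher spatial frequency, i.e. the sharper one."""
    return block_a if spatial_frequency(block_a) >= spatial_frequency(block_b) else block_b
```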
1312 A Universal Model for Content-Based Image Retrieval
Authors: S. Nandagopalan, Dr. B. S. Adiga, N. Deepak
Abstract:
In this paper, a novel approach for generalized image retrieval based on semantic contents is presented. It combines three feature extraction methods, namely color, texture, and the edge histogram descriptor, and provision is made to add new features in the future for better retrieval efficiency. Any combination of these methods that is appropriate for the application can be used for retrieval; this choice is exposed through the User Interface (UI) in the form of relevance feedback. The image properties are analyzed using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence-matrix-based measures such as entropy and energy are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For the retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested with the Corel image database containing 1000 natural images and achieved better results.
Keywords: Content Based Image Retrieval (CBIR), Co-occurrence matrix, Feature vector, Edge Histogram Descriptor (EHD), Greedy strategy.
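As an illustration of the texture features mentioned above, here is a minimal numpy sketch of a gray-level co-occurrence matrix and its entropy and energy (the offset choice and quantization level are assumptions for the example, not the paper's settings):

```python
import numpy as np

def glcm(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalized co-occurrence matrix for the horizontal neighbor offset (0, 1)."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def texture_features(img: np.ndarray) -> tuple[float, float]:
    """Entropy and energy of the co-occurrence distribution of an 8-bit image."""
    p = glcm(img)
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    energy = float(np.sum(p ** 2))
    return entropy, energy
```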
1311 Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks
Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia
Abstract:
This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method introduces a convolutional layer capable of automatically learning the correlation between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than its structural features. The proposed layer is specifically designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was evaluated on recent natural and computer generated images, and it was concluded that it performs better than current state-of-the-art methods.
Keywords: Image forensics, computer graphics, classification, deep learning, convolutional neural networks.
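One common way to suppress scene content before learning, in the spirit described above, is to train on high-pass noise residuals. The sketch below is a generic residual extractor with a standard high-pass kernel; it is not the constrained layer the authors trained, just an illustration of the idea:

```python
import numpy as np
from scipy.ndimage import convolve

# A standard 3x3 zero-sum high-pass kernel, often used to expose noise-like structure.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float64) / 8.0

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Remove low-frequency scene content, keeping sensor-noise-like structure."""
    return convolve(img.astype(np.float64), HIGH_PASS, mode='reflect')
```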
1310 Feature Preserving Nonlinear Diffusion for Ultrasonic Image Denoising and Edge Enhancement
Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang, Yu Li
Abstract:
By utilizing the echo intensity and distribution from different organs and the local details of the human body, ultrasonic images can capture important medical pathological changes, which unfortunately may be affected by ultrasonic speckle noise. A feature preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is done by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details, and ultrasonic echoic bright strips during denoising.
Keywords: anisotropic diffusion, coordinate transformation, directional derivatives, edge enhancement, hyperbolic tangent function, image denoising.
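The paper's diffusion is governed by local coordinates and normal derivatives, which is not reproduced here; as a generic, runnable stand-in for edge-preserving anisotropic diffusion, here is the classic Perona-Malik scheme in numpy:

```python
import numpy as np

def perona_malik(img: np.ndarray, steps: int = 20, k: float = 10.0, dt: float = 0.2) -> np.ndarray:
    """Classic Perona-Malik diffusion: smooth flat regions, preserve strong edges."""
    u = img.astype(np.float64).copy()
    for _ in range(steps):
        # One-sided differences to the four neighbors.
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        # Edge-stopping function: small conductance across strong gradients.
        g = lambda d: np.exp(-(d / k) ** 2)
        u += dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u
```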
1309 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in the digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory which can be presented as evidence in court. This research paper aims to develop a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets, with experiments conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed 1,510 deletions while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.
1308 On-line Image Mosaicing of Live Stem Cells
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini
Abstract:
Image mosaicing is a technique that enlarges the field of view of a camera. For instance, it is employed to achieve panoramas with common cameras, or in scientific applications to obtain an image of a whole culture in microscopic imaging. Usually, a mosaic of cell cultures is achieved using automated microscopes. However, this is often performed in batch, through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, showing a low contrast that cannot be improved further. We present a method to estimate the flat field from live stem cell images, even in the case of 100% confluence, which permits building accurate mosaics on-line using high performance algorithms.
Keywords: Microscopy, image mosaicing, stem cells.
1307 Image Segmentation Based on Graph Theoretical Approach to Improve the Quality of Image Segmentation
Authors: Deepthi Narayan, Srikanta Murthy K., G. Hemantha Kumar
Abstract:
Graph based image segmentation techniques are considered among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real time applications. However, there is a need to improve the quality of the segmented images obtained from the earlier graph based methods. This paper proposes an improvement to the graph based image segmentation methods already described in the literature. We contribute to the existing method by proposing the use of a weighted Euclidean distance to calculate the edge weight, which is the key element in building the graph. We also propose a slight modification of the segmentation method already described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to existing methods, with a slight compromise in efficiency.
Keywords: Graph based image segmentation, threshold, Weighted Euclidean distance.
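A weighted Euclidean distance between neighboring pixels, as proposed above, might look like the following numpy sketch; the per-channel weights are placeholders, since the paper's actual weights are not given here:

```python
import numpy as np

# Hypothetical per-channel weights (e.g. emphasizing green, as human vision does).
CHANNEL_WEIGHTS = np.array([0.3, 0.5, 0.2])

def edge_weight(p: np.ndarray, q: np.ndarray) -> float:
    """Weighted Euclidean distance between two RGB pixels p and q."""
    d = p.astype(np.float64) - q.astype(np.float64)
    return float(np.sqrt(np.sum(CHANNEL_WEIGHTS * d ** 2)))

def right_neighbor_weights(img: np.ndarray) -> np.ndarray:
    """Edge weights between each pixel and its right neighbor, vectorized over the image."""
    d = img[:, :-1, :].astype(np.float64) - img[:, 1:, :].astype(np.float64)
    return np.sqrt(np.sum(CHANNEL_WEIGHTS * d ** 2, axis=-1))
```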
1306 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.
Abstract:
In this paper, an image adaptive, invisible digital watermarking algorithm based on an Orthogonal Polynomials based Transformation (OPT) is proposed for the copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low level image characteristics, such as textures, edges, and luminance of the cover image, in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression, and additive noise. Experimental results show that the quality of OPT domain watermarked images is better than that of their DCT counterparts.
Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.
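The general embedding pattern described above, additively inserting a key-seeded watermark into mid-frequency transform coefficients scaled by a perceptual mask, can be sketched as follows. The 8x8 block DCT stands in for the paper's OPT, and the flat strength value is a placeholder for the actual JND mask:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block(block: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Additively embed a key-seeded pseudo-random watermark in mid-frequency coefficients."""
    c = dctn(block.astype(np.float64), norm='ortho')
    rng = np.random.default_rng(key)           # the secret key seeds the watermark
    rows, cols = np.indices(c.shape)
    band = (rows + cols >= 3) & (rows + cols <= 9)   # mid-frequency AC band of an 8x8 block
    w = rng.choice([-1.0, 1.0], size=c.shape)        # pseudo-random +/-1 watermark
    c[band] += strength * w[band]                    # 'strength' stands in for the JND mask
    return idctn(c, norm='ortho')
```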
1305 A Review on Medical Image Registration Techniques
Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry
Abstract:
This paper discusses the current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was carried out, providing a platform for developing a good foundation for research in this field, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.
Keywords: Image registration techniques, medical images, neural networks, optimisation, transformation.
1304 Synthetic Transmit Aperture Method in Medical Ultrasonic Imaging
Authors: Ihor Trots, Andrzej Nowicki, Marcin Lewandowski
Abstract:
This work describes the use of a synthetic transmit aperture (STA), with a single element transmitting and all elements receiving, in medical ultrasound imaging. The STA technique is a novel alternative to today's commercial systems, in which an image is acquired sequentially, one image line at a time, which puts a strict limit on the frame rate and on the amount of data needed for high image quality. STA imaging acquires data simultaneously from all directions over a number of emissions, after which the full image can be reconstructed. In the experiments, a 32-element linear transducer array with 0.48 mm inter-element spacing was used, and a single-element transmit aperture was used to generate a spherical wave covering the full image region. 2D ultrasound images of a wire phantom, obtained using both STA and the commercial ultrasound scanner Antares, are presented to demonstrate the benefits of SA imaging.
Keywords: Ultrasound imaging, synthetic aperture, frame rate, beamforming.
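For context, STA images are typically reconstructed with delay-and-sum beamforming: for each image point, the echo samples whose round-trip delay matches the transmit-element-to-pixel-to-receive-element path are summed, and the results are accumulated over emissions. A minimal numpy sketch for one emission, under assumed 2D geometry (all names and parameters are illustrative):

```python
import numpy as np

def das_sta(rf, tx_pos, rx_pos, pixels, fs, c=1540.0):
    """Delay-and-sum for a single STA emission.
    rf:     (n_rx, n_samples) echo data for one transmitting element
    tx_pos: (2,) transmit element position [m]; rx_pos: (n_rx, 2) receiver positions
    pixels: (n_px, 2) image points [m]; fs: sampling rate [Hz]; c: sound speed [m/s]
    """
    d_tx = np.linalg.norm(pixels - tx_pos, axis=1)                    # transmit path
    d_rx = np.linalg.norm(pixels[:, None, :] - rx_pos[None], axis=2)  # receive paths
    idx = np.round((d_tx[:, None] + d_rx) / c * fs).astype(int)       # round-trip sample index
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    # Sum the correctly delayed sample from every receive channel.
    return rf[np.arange(rf.shape[0])[None, :], idx].sum(axis=1)
```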
1303 Color Constancy using Superpixel
Authors: Xingsheng Yuan, Zhengzhi Wang
Abstract:
Color constancy algorithms are generally based on simplified assumptions about the spectral distribution or the reflection attributes of the scene surface. In reality, however, these assumptions are too restrictive. A methodology is proposed that extends existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. In this paper, a method based on low-level image features using superpixels is proposed. Superpixel segmentation partitions an image into regions that are approximately uniform in size and shape. Instead of using the entire pixel set for estimating the illuminant, only the superpixels with the most valuable information are used. Based on large scale experiments on real-world scenes, the estimation is shown to be more accurate using superpixels than when using the entire image.
Keywords: color constancy, illuminant estimation, superpixel
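A local illuminant estimate over selected superpixels, in the spirit of the approach above, might be sketched as follows; the grey-world estimator and the contrast-based selection criterion are assumptions of this example, not the paper's definitions:

```python
import numpy as np
from skimage.segmentation import slic

def illuminant_from_superpixels(img: np.ndarray, n_segments: int = 200, keep: int = 50):
    """Grey-world illuminant estimate restricted to the most textured superpixels."""
    labels = slic(img, n_segments=n_segments, compactness=10)
    stats = []
    for lab in np.unique(labels):
        region = img[labels == lab].astype(np.float64)
        stats.append((region.std(), region.mean(axis=0)))  # (contrast proxy, mean RGB)
    stats.sort(key=lambda s: s[0], reverse=True)           # most informative regions first
    means = np.array([m for _, m in stats[:keep]])
    est = means.mean(axis=0)
    return est / np.linalg.norm(est)                       # unit-norm illuminant estimate
```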
1302 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golomb Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, realized in a simulation-oriented VLSI (Very Large Scale Integration) design. The goal is to analyze the performance of lossless image compression, to reduce the image size without losing image quality, and then to implement the FELICS algorithm in the VLSI domain. The FELICS algorithm employs a simplified adjusted binary code for image compression; the compressed image is converted into pixels and then implemented in the VLSI domain. These design choices are used to achieve high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular pipelined data flow across four stages. With two-level parallelism, consecutive pixels can be classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multilevel parallelism.
Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golomb Rice code, High Definition display, VLSI Implementation.
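For reference, the two codes named in the title work as follows: an adjusted binary code spends ceil(log2 n) bits on some symbols and one bit fewer on the rest when the alphabet size n is not a power of two, while a Golomb-Rice code with parameter k emits a unary quotient followed by k remainder bits. A minimal encoder-only sketch of both, returning bit strings for clarity (the plain truncated form is shown; FELICS additionally remaps codewords so the most probable, central values receive the shorter codes):

```python
import math

def adjusted_binary(value: int, n: int) -> str:
    """Encode value in [0, n), n >= 2: the first 2^k - n codewords get k-1 bits, the rest k."""
    k = math.ceil(math.log2(n))
    short = (1 << k) - n                     # number of shorter (k-1 bit) codewords
    if value < short:
        return format(value, f'0{k - 1}b') if k > 1 else ''
    return format(value + short, f'0{k}b')   # remaining values shifted into k-bit range

def golomb_rice(value: int, k: int) -> str:
    """Unary quotient (q ones and a zero) followed by the k low-order remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')
```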
1301 Quality of Non-Point Source Pollutant Identification using Digital Image and Remote Sensing Image
Authors: Riki Mukhaiyar
Abstract:
The integration of remote sensing technology, information from digital image data, and modeling technology for the simulation of water quality facilitates the observation of water quality changes on a river surface. An example is the Ciliwung River, which is contaminated with non-point source pollutants from household wastes, particularly on its downstream reach, indicating that the quality of water in this river is deteriorating. Land use for settlements and housing ranges between 62.84% and 81.26% on the downstream reach of the Ciliwung River, which gives a significant picture of the factors that affect the river's water quality.
Keywords: Digital Image, Digitize, Landuse, Non-Point Source Pollutant, Qual2e Simulation
1300 Unequal Error Protection of Facial Features for Personal ID Images Coding
Abstract:
This paper presents an approach for the unequal error protection of facial features in the coding of personal ID images. We consider unequal error protection (UEP) strategies for the efficient progressive transmission of embedded image codes over noisy channels. This new method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI); in this case, the ROI corresponds to the facial features within the personal ID image. The ROI technique is important in applications where different parts of the image differ in importance: in ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In our proposed method, the image is divided into two parts (ROI and BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has been shown to be well suited to low bit rate applications, producing better quality output for the ROI of the compressed image. The experimental results verify the effectiveness of the design, comparing the UEP of image transmission with a defined ROI containing facial features against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.
Keywords: Embedded zerotree wavelet (EZW), equal error protection (EEP), facial features, personal ID images, region of interest (ROI), unequal error protection (UEP)
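The MIB/LIB split can be illustrated with the simplest possible channel code, a rate-1/3 repetition code applied only to the important bytes; real systems use stronger FEC, and the byte-level split shown is an assumption of the example:

```python
def protect(mib: bytes, lib: bytes) -> bytes:
    """UEP toy example: triplicate the important bytes, send the rest unprotected."""
    return bytes(b for byte in mib for b in (byte, byte, byte)) + lib

def recover_mib(protected: bytes, n_mib: int) -> bytes:
    """Majority vote over each received triple (bitwise, to survive single bit flips)."""
    out = bytearray()
    for i in range(n_mib):
        a, b, c = protected[3 * i: 3 * i + 3]
        out.append((a & b) | (a & c) | (b & c))   # bitwise majority of the three copies
    return bytes(out)
```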
1299 Generation of Photo-Mosaic Images through Block Matching and Color Adjustment
Authors: Hae-Yeoun Lee
Abstract:
Mosaic refers to a technique that makes an image by gathering many small materials in various colors. This paper presents an automatic algorithm that makes a photo-mosaic image out of photos. The algorithm is composed of four steps: partition and feature extraction, block matching, redundancy removal, and color adjustment. The input image is partitioned into small blocks to extract features. Each block is matched to a similar photo in the database by comparing the Euclidean difference between blocks. The intensity of each block is adjusted to enhance the similarity of the image, by replacing its light and dark values with those of the relevant block. Further, the quality of the image is improved by minimizing the redundancy of tiles in adjacent blocks. Experimental results support that the proposed algorithm performs well in both quantitative and qualitative analyses.
Keywords: Photo-mosaic, Euclidean distance, Block matching, Intensity adjustment.
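A minimal sketch of the matching and intensity-adjustment steps described above, assuming pre-scaled tile images and a mean-shift as the intensity adjustment (the paper's exact adjustment rule is not given here):

```python
import numpy as np

def best_tile(block: np.ndarray, tiles: list[np.ndarray]) -> np.ndarray:
    """Pick the database tile with the smallest Euclidean distance to the block."""
    dists = [np.linalg.norm(block.astype(np.float64) - t.astype(np.float64)) for t in tiles]
    return tiles[int(np.argmin(dists))]

def adjust_intensity(tile: np.ndarray, block: np.ndarray) -> np.ndarray:
    """Shift the tile's mean intensity to match the block it replaces."""
    shift = block.astype(np.float64).mean() - tile.astype(np.float64).mean()
    return np.clip(tile.astype(np.float64) + shift, 0, 255).astype(np.uint8)
```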
1298 Adaptive Anisotropic Diffusion for Ultrasonic Image Denoising and Edge Enhancement
Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang, Yu Li
Abstract:
By utilizing the echo intensity and distribution from different organs and the local details of the human body, ultrasonic images can capture important medical pathological changes, which unfortunately may be affected by ultrasonic speckle noise. A feature preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is done by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details, and ultrasonic echoic bright strips during denoising.
Keywords: anisotropic diffusion, coordinate transformation, directional derivatives, edge enhancement, hyperbolic tangent function, image denoising.
1297 A Nonoblivious Image Watermarking System Based on Singular Value Decomposition and Texture Segmentation
Authors: Soroosh Rezazadeh, Mehran Yazdi
Abstract:
In this paper, a robust digital image watermarking scheme for copyright protection applications, based on the singular value decomposition (SVD), is proposed. In this scheme, an entropy masking model is applied to the host image for texture segmentation. Moreover, the local luminance and textures of the host image are considered in the watermark embedding procedure to increase the robustness of the watermarking scheme. In contrast to existing SVD-based watermarking systems, which have been designed to embed visual watermarks, our system uses a pseudo-random sequence as the watermark. We have tested the performance of our method using a wide variety of image processing attacks on different test images. A comparison is made between the results of our proposed algorithm and those of a wavelet-based method to demonstrate the superior performance of our algorithm.
Keywords: Watermarking, copyright protection, singular value decomposition, entropy masking, texture segmentation.
1296 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we tried the MPM estimate using two kinds of likelihood, both applied to grayscale images degraded by lossy JPEG compression: one is a deterministic model of the likelihood, and the other is a probabilistic one expressed by the Gaussian distribution. Then, using Monte Carlo simulation on grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate with a performance measure based on the mean square error. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction, due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, maximizer of the posterior marginal estimate
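For reference, the MPM estimate named above picks, for each pixel, the value that maximizes the posterior marginal (the notation below is the standard one, not copied from the paper):

```latex
% Per-pixel argmax of the posterior marginal, obtained by summing the
% joint posterior over all other pixels.
\hat{x}_i = \arg\max_{x_i} P(x_i \mid \mathbf{y}),
\qquad
P(x_i \mid \mathbf{y}) = \sum_{\{x_j \,:\, j \neq i\}} P(\mathbf{x} \mid \mathbf{y})
```

In practice the marginal is approximated by Monte Carlo sampling, which is where the Metropolis acceptance ratio mentioned above enters.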
1295 Highly Scalable, Reversible and Embedded Image Compression System
Authors: Federico Pérez González, Iñaki Goiricelaia Ordorika, Pedro Iriondo Bengoa
Abstract:
A new method for low complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding method is a continuous-tone still image compression system that combines lossy and lossless compression, making use of finite arithmetic reversible transforms; both the color space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system, depending on the application, which makes it possible to reconfigure the elements of the image and obtain different levels of importance, from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself encodes a compressed still image; moreover, the use of a packing system on the bit stream after the VBLm allows the generation of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.
Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.
1294 Reversible Medical Image Watermarking For Tamper Detection And Recovery With Run Length Encoding Compression
Authors: Siau-Chuin Liew, Siau-Way Liew, Jasni Mohd Zain
Abstract:
Digital watermarking in medical images can ensure the authenticity and integrity of the image. This design paper reviews some existing watermarking schemes and proposes a reversible tamper detection and recovery watermarking scheme. Watermark data from the ROI (Region Of Interest) are stored in the RONI (Region Of Non Interest). The embedded watermark allows tamper detection and tampered image recovery. The watermark is also reversible, and a data compression technique is used to allow higher embedding capacity.
Keywords: data compression, medical image, reversible, tamper detection and recovery, watermark.
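The run-length encoding named in the title is the standard byte-oriented scheme; a minimal sketch of an encoder and decoder pair:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of repeated bytes into (value, count) pairs."""
    if not data:
        return []
    runs, current, count = [], data[0], 1
    for b in data[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Expand (value, count) pairs back into the original byte sequence."""
    return bytes(b for value, count in runs for b in [value] * count)
```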
1293 Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications
Authors: Seddeq E. Ghrare, Salahaddin M. Shreef
Abstract:
Medical image modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have a huge file size and large storage requirements, so the size of those image files should be reduced to make them viable for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it is natural to raise the question of how much an image can be compressed while still preserving sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, three different MRI modalities, Brain, Spine, and Knee, have been compressed and reconstructed using the wavelet transform. Subjective and objective evaluations have been carried out to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB for Brain, 27.25 dB to 35.75 dB for Spine, and 26.93 dB to 34.93 dB for Knee. For the subjective evaluation test, the results show that a compression ratio of 40:1 was acceptable for the brain image, whereas for the spine and knee images 50:1 was acceptable.
Keywords: Medical Image, Magnetic Resonance Imaging, Image Compression, Discrete Wavelet Transform, Telemedicine.
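For reference, the PSNR figures quoted above follow from the standard definition; a minimal numpy version for 8-bit images:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```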
1292 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering
Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song
Abstract:
The photoacoustic images are obtained from a custom-developed linear array photoacoustic tomography system. The biological specimens are imitated by conducting phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active region based contour filtering to remove the noise and to accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.
Keywords: Contour filtering, linear array, photoacoustic tomography, universal back projection.
1291 A Real-Time Image Change Detection System
Authors: Madina Hamiane, Amina Khunji
Abstract:
Detecting changes in multiple images of the same scene has recently seen increased interest due to many contemporary applications, including smart security systems, smart homes, remote sensing, surveillance, medical diagnosis, weather forecasting, speed and distance measurement, and post-disaster forensics. These applications differ in the scale, nature, and speed of change. This paper presents an application of image processing techniques to implement a real-time change detection system. Change is identified by comparing the RGB representations of two consecutive frames captured in real time. The detection threshold can be controlled to account for various luminance levels. The comparison result is passed through a filter before decision making to reduce false positives, especially under lower luminance conditions. The system is implemented with a MATLAB Graphical User Interface, with several controls to manage its operation and performance.
Keywords: Image change detection, image processing, image filtering, thresholding, B/W quantization.
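The compare-threshold-filter pipeline described above can be sketched as follows; the median filter stands in for the paper's unspecified false-positive filter, and the threshold value is illustrative:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_change(frame_a: np.ndarray, frame_b: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Binary change mask from two consecutive RGB frames."""
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64)).max(axis=-1)
    mask = diff > threshold                 # per-pixel decision, tunable for luminance
    return median_filter(mask, size=3)      # suppress isolated false positives
```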
1290 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis
Authors: Gaoyong Luo
Abstract:
The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion introduced. However, in removing noise, high frequency components belonging to edges are also removed, which blurs the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 × 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of being computationally efficient and of minimizing boundary effects. Adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to the boundaries of objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and is suitable to be adapted for different classes of images and types of noise. The proposed algorithm provides a potential solution with parallel computation for real time or embedded system applications.
Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.
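A variance-adaptive wavelet soft-threshold of the kind this method builds on might look like the sketch below, using PyWavelets in place of the authors' custom lifting implementation; the BayesShrink-style per-subband threshold is an assumption of the example:

```python
import numpy as np
import pywt

def denoise(img: np.ndarray, wavelet: str = 'db2', level: int = 3) -> np.ndarray:
    """Soft-threshold detail subbands, with a threshold adapted to each subband's variance."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for details in coeffs[1:]:
        thresholded = []
        for band in details:
            # BayesShrink-style threshold: noise variance over this subband's signal std.
            sig = np.sqrt(max(band.var() - sigma ** 2, 1e-12))
            thresholded.append(pywt.threshold(band, sigma ** 2 / sig, mode='soft'))
        out.append(tuple(thresholded))
    return pywt.waverec2(out, wavelet)
```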
1289 Image Sensor Matrix High Speed Simulation
Authors: Z. Feng, V. Viswanathan, D. Navarro, I. O'Connor
Abstract:
This paper presents a new high speed simulation methodology to solve the long simulation time problem of a CMOS image sensor matrix. Generally, to integrate the pixel matrix in an SOC and simulate the system performance, designers model the pixel in various modeling languages such as VHDL-AMS, SystemC, or Matlab. We introduce a new alternative method based on a Spice model in the Cadence design platform to achieve accuracy and reduce simulation time. The simulation results indicate that the maximum error of the pixel output voltage is 0.7812%, and that the time consumption is reduced from 2.2 days to 13 minutes, achieving about a 240X speed-up for the 256 × 256 pixel matrix.
Keywords: CMOS image sensor, high speed simulation, image sensor matrix simulation.
1288 An Improved C-Means Model for MRI Segmentation
Authors: Ying Shen, Weihua Zhu
Abstract:
Medical images are important in identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step: it finds regions with similar color, intensity, or texture, so that diagnosis can then be carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy and evaluates the segmentation results while achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of doing. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
Keywords: Magnetic Resonance Image, C-means model, image segmentation, information entropy.
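The entropy used to evaluate a segmentation can be sketched as follows, treating the label histogram as a probability distribution; this reading of the measure is an assumption, since the paper may define it over region intensities instead:

```python
import numpy as np

def segmentation_entropy(labels: np.ndarray) -> float:
    """Shannon entropy of the segment-label distribution, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```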
1287 A New Image Psychovisual Coding Quality Measurement based Region of Interest
Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf
Abstract:
To model the human visual system (HVS) in the region of interest, we propose a new objective evaluation metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implemented in the DWT domain, based especially on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this region field, and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal to noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest ROI.
1286 Encryption Image via Mutual Singular Value Decomposition
Authors: Adil Al-Rammahi
Abstract:
Image or document encryption is needed in e-government databases. In this paper we introduce two matrix images: one public, and the second secret (the original). Each matrix is analyzed using the singular value decomposition, so each matrix is transformed into three matrices: a row orthogonal basis, a column orthogonal basis, and a spectral diagonal basis. The product of the two row bases is calculated, and similarly the product of the two column bases. Finally, the public image, the row product, and the column product are saved as files. In the decryption stage, the original image is deduced by the mutual method from the three public files.
Keywords: Image cryptography, Singular values decomposition.
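One plausible reading of the scheme, with the secret image's spectral diagonal absorbed into the row product so that only three files need to be stored, is sketched below; this interpretation is an assumption, not the paper's verified construction:

```python
import numpy as np

def encrypt(secret: np.ndarray, public: np.ndarray):
    """Hide 'secret' behind 'public' via mutual SVD bases (square images assumed)."""
    us, ss, vts = np.linalg.svd(secret.astype(np.float64))
    up, _, vtp = np.linalg.svd(public.astype(np.float64))
    row_product = up.T @ (us * ss)        # row bases mixed, spectrum absorbed (assumption)
    col_product = vts @ vtp.T             # column bases mixed
    return row_product, col_product       # stored alongside the public image

def decrypt(public: np.ndarray, row_product: np.ndarray, col_product: np.ndarray):
    """Recover the secret: up @ up.T = I and vtp.T @ vtp = I undo the mixing exactly."""
    up, _, vtp = np.linalg.svd(public.astype(np.float64))
    return up @ row_product @ col_product @ vtp
```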