Search results for: Content-based Image Retrieval System
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9647

9137 Cross-Search Technique and its Visualization of Peer-to-Peer Distributed Clinical Documents

Authors: Yong Jun Choi, Juman Byun, Simon Berkovich

Abstract:

One of the ubiquitous routines in medical practice is searching through voluminous piles of clinical documents. In this paper we introduce a distributed system to search and exchange clinical documents. Clinical documents are distributed peer-to-peer. Relevant information is found in multiple iterations of cross-searches between the clinical text and its domain encyclopedia.

Keywords: Clinical documents, cross-search, document exchange, information retrieval, peer-to-peer.

PDF Downloads: 1307
9136 Exploiting Query Feedback for Efficient Query Routing in Unstructured Peer-to-peer Networks

Authors: Iskandar Ishak, Naomie Salim

Abstract:

Unstructured peer-to-peer networks are popular due to their robustness and scalability. Query schemes used in unstructured peer-to-peer networks, such as flooding and interest-based shortcuts, suffer from problems such as large communication overhead and long response delay. The use of routing indices has been a popular approach for peer-to-peer query routing: it helps the query routing process learn routes based on the feedback collected. In an unstructured network where no global information is available, an efficient and low-cost routing approach is needed. In this paper, we propose a novel mechanism for query-feedback-oriented routing indices to achieve routing efficiency in unstructured networks at minimal cost. The approach also applies information retrieval techniques to interpret query content, so that routing is based not only on query hits but also on the content of the query. Experiments have shown that the proposed mechanism performs more efficiently than flooding-based routing.

Keywords: Unstructured peer-to-peer, Searching, Retrieval, Internet.

PDF Downloads: 1545
9135 Fuzzy Mathematical Morphology approach in Image Processing

Authors: Yee Yee Htun, Dr. Khaing Khaing Aye

Abstract:

Morphological operators transform the original image into another image through its interaction with another image of a certain shape and size, known as the structuring element. Mathematical morphology provides a systematic approach to analyzing the geometric characteristics of signals or images and has been applied widely to many applications such as edge detection, object segmentation, noise suppression and so on. Fuzzy mathematical morphology aims to extend the binary morphological operators to grey-level images. In order to define the basic morphological operations such as fuzzy erosion, dilation, opening and closing, a general method based upon fuzzy implication and inclusion grade operators is introduced. The fuzzy morphological operations extend the ordinary morphological operations by using fuzzy sets, where the union operation is replaced by a maximum operation and the intersection operation is replaced by a minimum operation. This work consists of two parts. In the first, fuzzy set theory, fuzzy mathematical morphology (which is based on fuzzy logic and fuzzy set theory), and fuzzy morphological operations and their properties are studied in detail. In the second part, the application of fuzziness in mathematical morphology to practical work such as image processing is discussed with illustrative problems.
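
As a rough illustration of the min/max formulation described above, the sketch below implements grayscale dilation and erosion (and the derived opening and closing) with a flat structuring element in Python. It is a minimal stand-in under stated assumptions, not the authors' fuzzy implication/inclusion-grade operators, and all function names are illustrative.

```python
import numpy as np

def gray_dilate(img, se):
    """Grayscale dilation: the union is replaced by a maximum over the
    structuring-element neighborhood ('se' is a boolean mask)."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + h, j:j + w]
            out[i, j] = window[se].max()     # max plays the role of fuzzy union
    return out

def gray_erode(img, se):
    """Grayscale erosion: the intersection is replaced by a minimum."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + h, j:j + w]
            out[i, j] = window[se].min()     # min plays the role of fuzzy intersection
    return out

# Opening and closing follow directly from erosion and dilation.
def gray_open(img, se):
    return gray_dilate(gray_erode(img, se), se)

def gray_close(img, se):
    return gray_erode(gray_dilate(img, se), se)
```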

Keywords: Binary morphology, Fuzzy sets, Grayscale morphology, Image processing, Mathematical morphology.

PDF Downloads: 3254
9134 Evaluation of Classifiers Based On I2C Distance for Action Recognition

Authors: Lei Zhang, Tao Wang, Xiantong Zhen

Abstract:

Naive Bayes Nearest Neighbor (NBNN) and its variants, i.e., local NBNN and the NBNN kernels, are local feature-based classifiers that have achieved impressive performance in image classification. By exploiting instance-to-class (I2C) distances (an instance being an image or a video in image/video classification), they avoid the quantization errors of local image descriptors in the bag-of-words (BoW) model. However, the performance of NBNN, local NBNN and the NBNN kernels has not been validated on video analysis. In this paper, we introduce these three classifiers into human action recognition and conduct comprehensive experiments on the benchmark KTH and the realistic HMDB datasets. The results show that these I2C-based classifiers consistently outperform the SVM classifier with the BoW model.
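
A minimal sketch of the I2C classification rule is given below, assuming local descriptors have already been extracted from the query video; the per-class nearest-neighbor search and the descriptor dimensions are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def nbnn_classify(query_descriptors, class_descriptors):
    """Naive Bayes Nearest Neighbor: for each class, the I2C distance is the
    sum of squared distances from every query descriptor to its nearest
    neighbor among that class's training descriptors.
    'class_descriptors' maps class label -> (n_c, d) array."""
    i2c = {}
    for label, descriptors in class_descriptors.items():
        tree = cKDTree(descriptors)
        dist, _ = tree.query(query_descriptors)   # NN distance per local descriptor
        i2c[label] = np.sum(dist ** 2)            # instance-to-class distance
    return min(i2c, key=i2c.get)                  # smallest I2C distance wins

# Usage with random descriptors standing in for real video features:
rng = np.random.default_rng(0)
train = {"walk": rng.normal(0.0, 1.0, (200, 64)),
         "run":  rng.normal(0.5, 1.0, (200, 64))}
video = rng.normal(0.5, 1.0, (50, 64))
print(nbnn_classify(video, train))
```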

Keywords: Instance-to-class distance, NBNN, Local NBNN, NBNN kernel.

PDF Downloads: 1667
9133 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

Authors: Ayman A. Aly, Abdallah A. Alshnnaway

Abstract:

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. Edges give the image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks. Since most noise filtering behaves like a low-pass filter, the blurring of edges and loss of detail seem a natural consequence, and techniques to remedy this inherent conflict often generate new noise during enhancement. In this work, a new fuzzy filter is presented for the reduction of additive noise in images. The filter consists of three stages: (1) define fuzzy sets in the input space to compute a fuzzy derivative for eight different directions, (2) construct a set of IF-THEN rules to perform fuzzy smoothing according to the contributions of neighboring pixel values, and (3) define fuzzy sets in the output space to obtain the filtered, edge-preserved image. Experimental results show the feasibility of the proposed approach with two-dimensional objects.

Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.

PDF Downloads: 1572
9132 Color Image Segmentation using Adaptive Spatial Gaussian Mixture Model

Authors: M.Sujaritha, S. Annadurai

Abstract:

An adaptive spatial Gaussian mixture model is proposed for clustering-based color image segmentation. A new clustering objective function which incorporates spatial information is introduced in the Bayesian framework. The weighting parameter controlling the importance of spatial information is made adaptive to the image content in order to augment smoothness towards piecewise-homogeneous regions and diminish the edge-blurring effect, hence the name adaptive spatial finite mixture model. The proposed approach is compared with the spatially variant finite mixture model for pixel labeling. Experimental results with synthetic images and the Berkeley dataset demonstrate that the proposed method is effective in improving segmentation and can be employed in different practical image content understanding applications.
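
For orientation, the sketch below clusters pixel colors with a plain (non-spatial) Gaussian mixture using scikit-learn; it omits the adaptive spatial weighting that is the paper's contribution and is only a baseline illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image, n_segments=4, seed=0):
    """Cluster pixel colors with a Gaussian mixture and return a label map.
    This is the plain (non-spatial) baseline; the paper additionally applies
    an adaptively weighted spatial smoothness term."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    gmm = GaussianMixture(n_components=n_segments, random_state=seed).fit(pixels)
    labels = gmm.predict(pixels)
    return labels.reshape(h, w)

# Usage with a synthetic RGB image:
img = np.random.rand(64, 64, 3)
segmentation = gmm_segment(img, n_segments=3)
```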

Keywords: Adaptive, Spatial, Mixture model, Segmentation, Color.

PDF Downloads: 2504
9131 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. A frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed, feature vectors are then extracted from the dominant-block image of each frame and arranged in a feature matrix, and Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
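
The rank-tracing idea can be sketched as follows, using plain gray-level histograms as stand-in frame features instead of the Faber-Schauder dominant blocks; the window size, rank tolerance, and jump criterion are illustrative assumptions.

```python
import numpy as np

def frame_features(frame, bins=64):
    """Stand-in feature vector: a normalized gray-level histogram.
    The paper instead builds features from the dominant blocks of the
    Faber-Schauder DWT."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def keyframe_indices(frames, window=5, rank_jump=1):
    """Slide a window over the per-frame feature matrix, track its numerical
    rank via SVD, and flag frames where the rank jumps (new content)."""
    feats = np.stack([frame_features(f) for f in frames])    # (n_frames, bins)
    keys, prev_rank = [0], None
    for start in range(len(frames) - window + 1):
        block = feats[start:start + window].T                # (bins, window)
        s = np.linalg.svd(block, compute_uv=False)
        rank = int(np.sum(s > 1e-3 * s[0]))                  # numerical rank
        if prev_rank is not None and rank - prev_rank >= rank_jump:
            keys.append(start + window - 1)                  # last frame of window
        prev_rank = rank
    return keys
```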

Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.

PDF Downloads: 2522
9130 Enhancing Multi-Frame Images Using Self-Delaying Dynamic Networks

Authors: Lewis E. Hibell, Honghai Liu, David J. Brown

Abstract:

This paper presents the use of a newly created network structure known as a Self-Delaying Dynamic Network (SDN) to create a high-resolution image from a set of time-stepped input frames. These SDNs are non-recurrent temporal neural networks which can process time-sampled data. SDNs can store input data for a lifecycle and feature dynamic logic-based connections between layers. Several low-resolution images and one high-resolution image of a scene were presented to the SDN during training by a Genetic Algorithm. The SDN was trained to process the input frames in order to recreate the high-resolution image. The trained SDN was then used to enhance a number of unseen noisy image sets. The quality of high-resolution images produced by the SDN is compared to that of high-resolution images generated using bi-cubic interpolation. The SDN-produced images are superior in several ways to the images produced using bi-cubic interpolation.

Keywords: Image Enhancement, Neural Networks, Multi-Frame.

PDF Downloads: 1202
9129 Hidden State Probabilistic Modeling for Complex Wavelet Based Image Registration

Authors: F. C. Calnegru

Abstract:

This article presents a computationally tractable probabilistic model for the relation between the complex wavelet coefficients of two images of the same scene. The two images are acquired at distinct moments in time, from distinct viewpoints, or by distinct sensors. By means of the introduced probabilistic model, we argue that the similarity between the two images is controlled not by the values of the wavelet coefficients, which can be altered by many factors, but by the nature of the wavelet coefficients, which we model with the help of hidden state variables. We integrate this probabilistic framework into the construction of a new image registration algorithm. This algorithm has sub-pixel accuracy and is robust to noise and to other variations such as local illumination changes. We present the performance of our algorithm on various image types.

Keywords: Complex wavelet transform, image registration, modeling using hidden state variables, probabilistic similarity measure.

PDF Downloads: 1386
9128 An Amalgam Approach for DICOM Image Classification and Recognition

Authors: J. Umamaheswari, G. Radhamani

Abstract:

This paper describes the process of recognition and classification of brain images as normal or abnormal based on PSO-SVM. Image classification is becoming increasingly important for the medical diagnosis process. In the medical field, classifying the patient's abnormality plays a great role in helping doctors diagnose the patient according to the severity of the disease. For DICOM images, optimal recognition and early detection of disease are very difficult. Our work focuses on recognition and classification of DICOM images based on a combined digital image processing approach. For optimal recognition and classification, Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Support Vector Machine (SVM) are used. The combined PSO-SVM approach gives high approximation capability and much faster convergence.

Keywords: Recognition, classification, Relaxed Median Filter, Adaptive thresholding, clustering and Neural Networks

PDF Downloads: 2269
9127 Application of LSB Based Steganographic Technique for 8-bit Color Images

Authors: Mamta Juneja, Parvinder S. Sandhu, Ekta Walia

Abstract:

Steganography is the process of hiding one file inside another such that others can neither identify the meaning of the embedded object nor even recognize its existence. Current trends favor using digital image files as the cover file to hide another digital file that contains the secret message or information. One of the most common methods of implementation is Least Significant Bit Insertion, in which the least significant bit of every byte is altered to form the bit string representing the embedded file. Altering the LSB will only cause minor changes in color and thus is usually not noticeable to the human eye. While this technique works well for 24-bit color image files, steganography has not been as successful with 8-bit color image files, due to limitations in color variations and the use of a colormap. This paper presents the results of research investigating the combination of image compression and steganography. The technique developed starts with a 24-bit color bitmap file, then compresses the file by organizing and optimizing an 8-bit colormap. After compression, a text message is hidden in the final, compressed image. Results indicate that the final technique has the potential to be useful in the steganographic world.
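
A minimal sketch of Least Significant Bit insertion on an 8-bit image array is shown below; it illustrates only the LSB embedding step, not the colormap compression stage described in the abstract, and the length header is an assumed convention.

```python
import numpy as np

def embed_lsb(cover, message: bytes):
    """Hide 'message' in the least significant bit of successive pixels.
    A 32-bit length header is embedded first so the message can be recovered."""
    flat = cover.astype(np.uint8).ravel().copy()
    header = len(message).to_bytes(4, "big")
    bits = np.unpackbits(np.frombuffer(header + message, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits      # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego):
    """Read the length header, then collect that many bytes from the LSBs."""
    flat = stego.astype(np.uint8).ravel()
    n = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    bits = flat[32:32 + 8 * n] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
stego = embed_lsb(cover, b"hidden text")
assert extract_lsb(stego) == b"hidden text"
```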

Keywords: Compression, Colormap, Encryption, Steganography and LSB Insertion.

PDF Downloads: 3007
9126 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Texture and lighting effects were applied in order to accurately represent the generation of rendered images with respect to their real-world counterparts. The application integrates image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation, and picture rendering with the open source 3D creation suite Blender.

Keywords: Simulation, visual navigation, mobile robot, data visualization.

PDF Downloads: 1052
9125 Review of the Software Used for 3D Volumetric Reconstruction of the Liver

Authors: P. Strakos, M. Jaros, T. Karasek, T. Kozubek, P. Vavra, T. Jonszta

Abstract:

In medical imaging, segmentation of different areas of the human body, such as bones, organs and tissues, is an important issue. Image segmentation allows isolating the object of interest for further processing, which can lead, for example, to 3D model reconstruction of whole organs. The difficulty of this procedure varies from trivial for bones to quite difficult for organs like the liver. The liver is considered one of the most difficult human organs to segment, mainly due to its complexity, shape variability, and proximity to other organs and tissues. Due to these facts, substantial user effort usually has to be applied to obtain satisfactory segmentation results, and the process deteriorates from automatic or semi-automatic to a fairly manual one. In this paper, an overview of selected available software applications that can handle semi-automatic image segmentation with subsequent 3D volume reconstruction of the human liver is presented. The applications are evaluated based on the segmentation results of several consecutive DICOM images covering the abdominal area of the human body.

Keywords: Image segmentation, semi-automatic, software, 3D volumetric reconstruction.

PDF Downloads: 4473
9124 Medical Imaging Fusion: A Teaching-Learning Simulation Environment

Authors: Cristina M. R. Caridade, Ana Rita F. Morais

Abstract:

The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with health care facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB using a Graphical User Interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application uses different algorithms and medical fusion techniques in real time, allowing users to view original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool, an innovative teaching and learning environment, provides a dynamic and motivating simulation through which biomedical engineering students can acquire knowledge about medical image fusion techniques, skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate and advance students' knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.

Keywords: Image fusion, image processing, teaching-learning simulation tool, biomedical engineering education.

PDF Downloads: 70
9123 Energy Efficient Transmission of Image over DWT-OFDM System

Authors: Lakshmi Pujitha Dachuri, Nalini Uppala

Abstract:

In many applications, retransmission of lost packets is not permitted. OFDM is a multi-carrier modulation scheme with excellent performance which allows overlapping in the frequency domain, and it offers a way of dealing with multipath using relatively simple DSP algorithms.

In this paper, an image frame is compressed using the DWT, and the compressed data are arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to obtain the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good sub-channels, so that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information available at the transmitter, indicating only whether each sub-channel is good or bad. For a good sub-channel, the instantaneous received power should be greater than a threshold Pth; otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. In order to reduce the system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information thus gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation, we analyze the performance of the proposed scheme in terms of system energy saving without compromising the received quality in terms of peak signal-to-noise ratio.
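
The priority mapping driven by one-bit channel state information can be sketched as follows; the sub-channel powers, the threshold, and the description labels are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def map_descriptions(descriptions, subchannel_power, p_th):
    """Assign descriptions (ordered most- to least-important) to sub-channels
    reported 'good' by the one-bit CSI (received power > p_th). Descriptions
    left over when good sub-channels run out are dropped at the transmitter."""
    good = np.flatnonzero(subchannel_power > p_th)     # indices of good sub-channels
    n_sent = min(len(descriptions), good.size)
    mapping = {int(good[i]): descriptions[i] for i in range(n_sent)}
    dropped = descriptions[n_sent:]                    # not transmitted, saving power
    return mapping, dropped

power = np.array([1.2, 0.3, 0.9, 0.1, 1.5])            # instantaneous received power
mapping, dropped = map_descriptions(["d0", "d1", "d2", "d3"], power, p_th=0.5)
# mapping -> {0: 'd0', 2: 'd1', 4: 'd2'}, dropped -> ['d3']
```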

Keywords: Binary channel state, Channel state feedback, DWT-OFDM system, Energy saving, Fading broadcast channel.

PDF Downloads: 2817
9122 Image Segmentation by Mathematical Morphology: An Approach through Linear, Bilinear and Conformal Transformation

Authors: Dibyendu Ghoshal, Pinaki Pratim Acharjya

Abstract:

An image segmentation process based on mathematical morphology is studied in this paper. It has been established from the first principles of the morphological process that, although the entire segmentation is a nonlinear signal processing task, the constituent intermediate steps are linear, bilinear and conformal transformations, and they give rise to a nonlinear effect in a cumulative manner.

Keywords: Image segmentation, linear transform, bilinear transform, conformal transform, mathematical morphology.

PDF Downloads: 2200
9121 An Analysis of Compression Methods and Implementation of Medical Images in Wireless Network

Authors: C. Rajan, K. Geetha, S. Geetha

Abstract:

The motivation of image compression techniques is to reduce the irrelevance and redundancy of image data in order to store or transmit the data efficiently from one place to another. Several types of compression methods are available. Without compression, the file size is noticeably larger, usually several megabytes, but with compression it is possible to reduce the file size to about 10% of the original without noticeable loss in quality. Image compression can be lossless or lossy, and compression techniques can be applied to images, audio, video and text data. This research work mainly concentrates on encoding methods, the DCT, compression methods, security, etc. Different methodologies and network simulations have been analyzed here. Various compression methodologies and their performance metrics have been investigated and presented in tabular form.

Keywords: Image compression techniques, encoding, DCT, lossy compression, lossless compression, JPEG.

PDF Downloads: 1193
9120 A Semi-Fragile Watermarking Scheme for Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of its 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; this space holds the authentication and recovery information. For verification, authentication and parity bits, denoted by 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded in six to eight bits depending on the channel selected. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D Torus Automorphism is implemented using a private key to provide a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable both subjectively and objectively. Our scheme is oblivious, correctly localizes tampering, and is able to recover the original work with probability near one.
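
The embedding-space idea can be illustrated on a single channel as below: the two LSBs of a 2×2 sub-block give eight bits that hold a 6-bit intensity mean plus an authentication and a parity bit. The YST transform, the exact bit definitions, and the torus-automorphism block mapping are not reproduced, so this is an assumed simplification rather than the authors' method.

```python
import numpy as np

def embed_block(block):
    """Embed recovery/authentication data into one 2x2 sub-block (uint8).
    Clearing the two LSBs of the four pixels creates an 8-bit embedding space:
    6 bits of the block's intensity mean + 1 authentication bit + 1 parity bit.
    The authentication bit here is a toy placeholder."""
    cleared = block & 0xFC                       # zero the two LSBs -> embedding space
    mean6 = int(cleared.mean()) >> 2             # intensity mean coded in 6 bits
    auth = mean6 & 1                             # illustrative authentication bit
    bits = [(mean6 >> k) & 1 for k in range(6)] + [auth]
    bits.append(sum(bits) % 2)                   # parity bit, 8 bits total
    out = cleared.ravel().copy()
    for i in range(4):                           # two bits per pixel
        out[i] |= (bits[2 * i] << 1) | bits[2 * i + 1]
    return out.reshape(2, 2)

blk = np.array([[120, 130], [125, 128]], dtype=np.uint8)
watermarked = embed_block(blk)
```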

Keywords: Image Authentication, YST Color Space, Intensity Mean, LSBs, PSNR.

PDF Downloads: 1839
9119 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, for lossy and lossless compression, through a wireless sensor network is presented. Images have a considerably higher storage requirement than text, and while transmitting multimedia content there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information may be damaged or lost due to noise, interference and congestion. To prevent valuable information from being lost, various retransmission schemes have been proposed. The proposed scheme uses QTD, an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error and bits per pixel of the compressed image, as well as analysis of the difficulties arising during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality, with the algorithm implemented in MATLAB 7.1 and the NS2 simulator.
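
A minimal quad tree decomposition, splitting a square block while its intensity range exceeds a homogeneity threshold, might look like the following sketch; the threshold and minimum block size are illustrative assumptions.

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None, threshold=10, min_size=2):
    """Recursively split the square region at (x, y) of side 'size' into four
    quadrants until each block is homogeneous (max - min <= threshold) or
    reaches the minimum block size. Returns a list of (x, y, size) leaves."""
    if size is None:
        size = img.shape[0]                       # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size <= min_size or int(block.max()) - int(block.min()) <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, x + dx, y + dy, half, threshold, min_size)
    return leaves

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
blocks = quadtree(img, threshold=40)
```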

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.

PDF Downloads: 2394
9118 Feature's Extraction of Human Body Composition in Images by Segmentation Method

Authors: Mousa Mojarrad, Mashallah Abbasi Dezfouli, Amir Masoud Rahmani

Abstract:

Detection and recognition of human body composition and the extraction of its measures (width and length of the human body) in images are a major issue in object detection and an important field in image, signal and vision computing in recent years. Finding people and extracting their features in images is a particularly important problem in object recognition, because people can have high variability in appearance. This variability may be due to the configuration of a person (e.g., standing vs. sitting vs. jogging), the pose (e.g., frontal vs. lateral view), clothing, and variations in illumination. In this study, the human body is first recognized in the image, and then its measures are extracted from the image.

Keywords: Analysis of image processing, canny edge detection, classification, feature extraction, human body recognition, segmentation.

PDF Downloads: 2779
9117 Low-MAC FEC Controller for JPEG2000 Image Transmission Over IEEE 802.15.4

Authors: Kyu-Yeul Wang, Sang-Seol Lee, Jea-Yeon Song, Jea-Young Choi, Seong-Seob Shin, Dong-Sun Kim, Duck-Jin Chung

Abstract:

In this paper, we propose a low-MAC FEC controller for the practical implementation of JPEG2000 image transmission over IEEE 802.15.4. The proposed low-MAC FEC controller has a very small hardware size and requires little computation to estimate the channel state. Because of this advantage, it is suitable for IEEE 802.15.4 devices, which must operate for more than one year on battery power. For image transmission, we integrate the low-MAC FEC controller and an RCPC coder in a sensor node of an LR-WPAN. The modified sensor node shows a 3% increase in hardware size compared to a conventional ZigBee sensor node.

Keywords: FEC, IEEE 802.15.4, JPEG2000, low-MAC.

PDF Downloads: 1953
9116 An Improved Switching Median filter for Uniformly Distributed Impulse Noise Removal

Authors: Rajoo Pandey

Abstract:

The performance of an image filtering system depends on its ability to detect the presence of noisy pixels in the image. Most of the impulse detection schemes assume the presence of salt and pepper noise in the images and do not work satisfactorily in case of uniformly distributed impulse noise. In this paper, a new algorithm is presented to improve the performance of switching median filter in detection of uniformly distributed impulse noise. The performance of the proposed scheme is demonstrated by the results obtained from computer simulations on various images.
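
A basic switching median filter of the kind being improved here can be sketched as follows: a pixel is flagged as an impulse when it deviates strongly from its local median, and only flagged pixels are replaced. The detection threshold is illustrative, and the paper's improved detector for uniformly distributed impulses is more elaborate than this baseline.

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, threshold=30, size=3):
    """Replace only the pixels detected as impulses, leaving the rest untouched.
    Detection rule: |pixel - local median| > threshold."""
    med = median_filter(img.astype(np.int16), size=size)
    impulses = np.abs(img.astype(np.int16) - med) > threshold
    out = img.copy()
    out[impulses] = med[impulses].astype(img.dtype)
    return out, impulses

noisy = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
restored, impulse_mask = switching_median(noisy, threshold=40)
```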

Keywords: Switching median filter, Impulse noise, Image filtering, Impulse detection.

PDF Downloads: 1962
9115 Optimizing Exposure Parameters in Digital Mammography: A Study in Morocco

Authors: Talbi Mohammed, Oustous Aziz, Ben Messaoud Mounir, Sebihi Rajaa, Khalis Mohammed

Abstract:

Background: Breast cancer is the leading cause of death for women around the world. Screening mammography is the reference examination, due to its sensitivity for detecting small lesions and micro-calcifications. Therefore, it is essential to ensure quality mammographic examinations with the most optimal dose; these conditions depend on the choice of exposure parameters. Clinically, practices must be evaluated in order to determine the most appropriate exposure parameters. Material and Methods: We performed our measurements on a mobile mammography unit (PLANMED Sofie-classic) in Morocco. A solid dosimeter (AGMS Radcal) and an MTM 100 phantom allow quantification of the delivered dose and the image quality. For image quality assessment, scores are defined by the rate of visible inserts in the MTM 100 phantom, obtained and compared for each acquisition. Results: The results show that the parameters of the mammography unit on which we made our measurements can be improved in order to offer a better compromise between image quality and breast dose. The latter can be reduced by 13.27% to 22.16% while preserving comparable image quality.

Keywords: Mammography, image quality, breast dose.

PDF Downloads: 797
9114 Advanced Image Analysis Tools Development for the Early Stage Bronchial Cancer Detection

Authors: P. Bountris, E. Farantatos, N. Apostolou

Abstract:

Autofluorescence (AF) bronchoscopy is an established method to detect dysplasia and carcinoma in situ (CIS). For this reason, the "Sotiria" Hospital uses the Karl Storz D-light system. However, in early tumor stages the visualization is not that obvious. With the help of a PC, we analyzed the color images we captured by developing certain tools in Matlab®. We used statistical methods based on texture analysis, signal processing methods based on Gabor models, and conversion algorithms between device-dependent color spaces. Our belief is that we reduced the error made by the naked eye. The tools we implemented improve the quality of patients' lives.

Keywords: Bronchoscopy, digital image processing, lung cancer, texture analysis.

PDF Downloads: 1440
9113 Analysis of Message Authentication in Turbo Coded Halftoned Images using Exit Charts

Authors: Andhe Dharani, P. S. Satyanarayana, Andhe Pallavi

Abstract:

Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is presented. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using some of the entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that found in OTPs (one-time pads).

Keywords: Halftoning, Turbo codes, security, operational lifetime, Turbo based stego system.

PDF Downloads: 1515
9112 An Improved Method to Watermark Images Sensitive to Blocking Artifacts

Authors: Afzel Noore

Abstract:

A new digital watermarking technique for images that are sensitive to blocking artifacts is presented. Experimental results show that the proposed MDCT based approach produces highly imperceptible watermarked images and is robust to attacks such as compression, noise, filtering and geometric transformations. The proposed MDCT watermarking technique is applied to fingerprints for ensuring security. The face image and demographic text data of an individual are used as multiple watermarks. An AFIS system was used to quantitatively evaluate the matching performance of the MDCT-based watermarked fingerprint. The high fingerprint matching scores show that the MDCT approach is resilient to blocking artifacts. The quality of the extracted face and extracted text images was computed using two human visual system metrics and the results show that the image quality was high.

Keywords: Digital watermarking, data hiding, modified discrete cosine transformation (MDCT).

PDF Downloads: 1610
9111 A Way of Converting Color Images to Gray Scale Ones for the Color Blinds -Reducing the Colors for Tokyo Subway Map-

Authors: Katsuhiro Narikiyo, Naoto Kobayakawa

Abstract:

We propose a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for color-blind viewers. We treat crisp color images such as the Tokyo subway map, in which each color carries important information. For color-blind viewers, however, similar colors cannot be distinguished; if we can convert those colors to different gray values, they can be distinguished.
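
One simple way to realize this idea is sketched below: the palette is coarsely quantized and each remaining color is mapped to an evenly spaced gray level ordered by luminance, so that colors a color-blind viewer may confuse still receive clearly separated gray values. This is an assumed illustration, not the authors' exact mapping, and the quantization step is a placeholder.

```python
import numpy as np

def palette_to_gray(img, levels_per_channel=8):
    """Quantize each channel to a few levels and assign each remaining palette
    entry an evenly spaced gray value, ordered by luminance, so that similar
    colors still end up with clearly separated gray levels."""
    step = 256 // levels_per_channel
    pixels = img.reshape(-1, 3)
    quant = (pixels // step) * step                              # crude color reduction
    palette, inverse = np.unique(quant, axis=0, return_inverse=True)
    luminance = palette @ np.array([0.299, 0.587, 0.114])        # order colors by brightness
    order = np.argsort(luminance)
    grays = np.zeros(len(palette))
    grays[order] = np.linspace(0, 255, len(palette))             # spread the gray levels
    return grays[inverse].reshape(img.shape[:2]).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
gray = palette_to_gray(img)
```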

Keywords: Image processing, Color blind, JPEG

PDF Downloads: 1407
9110 Particle Image Velocimetry for Measuring Water Flow Velocity

Authors: King Kuok Kuok, Po Chan Chiu

Abstract:

Floods are natural phenomena which may turn into disasters, causing widespread damage, health problems and even deaths. Nowadays, floods have become more serious and more frequent due to climatic change. During flooding, discharge measurements can still be taken by standing on a bridge across the river using a portable measurement instrument. However, it is too dangerous to get near the river, especially during high floods. Therefore, this study employs Particle Image Velocimetry (PIV) as a tool to measure the surface flow velocity. PIV is an image processing technique that tracks the movement of water from one point to another. The PIV codes are developed using Matlab. In this study, 18 ping pong balls were scattered over the surface of the drain and images were taken with a digital SLR camera. The images obtained were analyzed using the PIV code. Results show that PIV is able to produce the flow velocity by analyzing the series of captured images.
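
The core PIV step, locating the cross-correlation peak between an interrogation window in one frame and a search region in the next, can be sketched as follows; the window and search sizes are illustrative, and the window is assumed to lie at least 'search' pixels away from the image border.

```python
import numpy as np
from scipy.signal import correlate2d

def piv_displacement(frame_a, frame_b, y, x, win=32, search=16):
    """Estimate the displacement of the win x win window at (y, x) between two
    consecutive frames by locating the cross-correlation peak within a
    +/- 'search' pixel region of the second frame."""
    a = frame_a[y:y + win, x:x + win].astype(float)
    b = frame_b[y - search:y + win + search, x - search:x + win + search].astype(float)
    a -= a.mean()
    b -= b.mean()
    corr = correlate2d(b, a, mode='valid')          # (2*search+1, 2*search+1) map
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy - search, dx - search                 # displacement in pixels

# Surface velocity = displacement * (metres per pixel) / (time between frames).
```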

Keywords: Particle Image Velocimetry, flow velocity, surface flow.

PDF Downloads: 2855
9109 Adaptive Skin Segmentation Using Color Distance Map

Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae

Abstract:

In this paper, an effective approach for segmenting human skin regions in images taken in different environments is proposed. The proposed method uses a color distance map that is flexible enough to reliably detect skin regions even if the illumination conditions of the image vary. Local image conditions are also considered, which helps the technique adaptively detect differently illuminated skin regions of an image. Moreover, the use of local information also helps the skin detection process avoid picking up noisy pixels.
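
A minimal color-distance-map detector is sketched below: the Euclidean distance of each pixel to a reference skin color is thresholded. The reference color and threshold are placeholders, and the adaptive local refinement described in the abstract is not reproduced.

```python
import numpy as np

def skin_distance_map(img_rgb, ref_skin=(190, 140, 110)):
    """Euclidean distance of every pixel to a reference skin color (RGB).
    The reference value here is only a plausible placeholder."""
    diff = img_rgb.astype(float) - np.array(ref_skin, dtype=float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def skin_mask(img_rgb, threshold=60.0):
    """Global thresholding of the distance map; the paper additionally adapts
    the decision to local illumination conditions."""
    return skin_distance_map(img_rgb) < threshold

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
mask = skin_mask(img)
```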

Keywords: Color distance map, Reference skin color, Region growing, Skin segmentation.

PDF Downloads: 2014
9108 Comparative Study of Different Enhancement Techniques for Computed Tomography Images

Authors: C. G. Jinimole, A. Harsha

Abstract:

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods used for image enhancement in various applications in the medical field, and it helps to visualize and extract details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All these techniques are compared with each other to find out which enhancement provides better contrast for the CT image. For the comparison, the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest and best-quality image compared to all other techniques studied and achieved the highest PSNR value. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
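
Compact versions of most of the compared transformations, together with the MSE and PSNR metrics used for the comparison, are sketched below; the gamma and scaling constants are illustrative defaults and the Laplacian variant is omitted.

```python
import numpy as np

def power_law(img, gamma=0.5, c=1.0):
    """Power law (gamma) transformation of an 8-bit image."""
    return np.clip(255 * c * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)

def log_transform(img):
    """Logarithmic transformation scaled back to the 0-255 range."""
    c = 255.0 / np.log(1 + 255.0)
    return np.clip(c * np.log1p(img.astype(float)), 0, 255).astype(np.uint8)

def histogram_equalization(img):
    """Map each gray level through the normalized cumulative histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    return np.clip(255 * cdf[img], 0, 255).astype(np.uint8)

def contrast_stretch(img):
    """Linear stretch of the observed intensity range to 0-255."""
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

def mse(original, enhanced):
    return np.mean((original.astype(float) - enhanced.astype(float)) ** 2)

def psnr(original, enhanced):
    m = mse(original, enhanced)
    return float('inf') if m == 0 else 10 * np.log10(255.0 ** 2 / m)

ct = np.random.randint(0, 200, (128, 128), dtype=np.uint8)   # stand-in for a CT slice
print({"log PSNR": psnr(ct, log_transform(ct)),
       "HE PSNR": psnr(ct, histogram_equalization(ct))})
```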

Keywords: Computed tomography, enhancement techniques, increasing contrast, PSNR and MSE.

PDF Downloads: 1382