Search results for: Texture Image Segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1777

1507 Feature Extraction for Surface Classification – An Approach with Wavelets

Authors: Smriti H. Bhandari, S. M. Deshpande

Abstract:

Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include standard deviation, kurtosis and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We used the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures produced by three machining processes, namely milling, casting and shaping. The comparative study shows that DT-CWT outperforms DWT, achieving a correct classification rate of 91.27% with the Canberra distance metric.
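
The Canberra distance used for class comparison is simple to compute between wavelet-feature vectors; a minimal sketch, with illustrative feature values rather than data from the paper:

```python
import numpy as np

def canberra(x, y, eps=1e-12):
    """Canberra distance: sum of |x_i - y_i| / (|x_i| + |y_i|)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    denom = np.abs(x) + np.abs(y)
    return float(np.sum(np.abs(x - y) / (denom + eps)))

# Hypothetical per-subband features (std dev, kurtosis) for two surfaces
query   = np.array([4.1, 2.9, 7.3, 3.2])
milling = np.array([4.0, 3.1, 7.0, 3.5])
print(canberra(query, milling))
```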

Keywords: Dual-tree complex wavelet transform, surface metrology, surface roughness, texture classification.

1506 Combining an Optimized Closed Principal Curve-Based Method and Evolutionary Neural Network for Ultrasound Prostate Segmentation

Authors: Tao Peng, Jing Zhao, Yanqing Xu, Jing Cai

Abstract:

Ultrasound prostate segmentation is challenging due to missing or ambiguous boundaries between the prostate and neighboring structures, the presence of shadow artifacts, and the large variability in prostate shapes. To handle these issues, this paper develops a hybrid method for ultrasound prostate segmentation that combines an optimized closed principal curve-based method with an evolutionary neural network: the former can fit curves with great curvature and generates a contour composed of line segments connected by sorted vertices, while the latter expresses an appropriate map function (represented by the parameters of the evolutionary neural network) that generates a smooth prostate contour matching the ground truth contour. Both qualitative and quantitative experimental results show that the proposed method achieves accurate and robust performance.

Keywords: Ultrasound prostate segmentation, optimized closed polygonal segment method, evolutionary neural network, smooth mathematical model.

1505 Analysis and Comparison of Image Encryption Algorithms

Authors: İsmet Öztürk, İbrahim Soğukpınar

Abstract:

With the rapid growth of electronic data exchange, information security is becoming more important in data storage and transmission. Because images are widely used in industrial processes, it is important to protect confidential image data from unauthorized access. In this paper, we analyze current image encryption algorithms, and compression is added to two of them (mirror-like image encryption and visual cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of the analysis are given in this paper.

Keywords: image encryption, image cryptosystem, security, transmission

1504 Effective Implementation of Burst Segmentation Techniques in OBS Networks

Authors: A. Abid, F. M. Abbou, H. T. Ewe

Abstract:

Optical Burst Switching (OBS) is a relatively new optical switching paradigm. Contention and burst loss in OBS networks are major concerns. To resolve contentions, an interesting alternative to discarding the entire data burst is to drop the burst only partially. Partial burst dropping is based on the burst segmentation concept, whose implementation is constrained by several technical challenges, besides the complexity added to the algorithms and protocols at both edge and core nodes. In this paper, the burst segmentation concept is investigated, and an implementation scheme is proposed and evaluated. An appropriate dropping policy that effectively manages the size of the segmented data bursts is presented. The dropping policy is further supported by a new control packet format that provides constant transmission overhead.

Keywords: Burst length, Burst Segmentation, Optical Burst Switching.

1503 Functional Store Image and Corporate Social Responsibility Image: A Congruity Analysis on Store Loyalty

Authors: Jamaliah Mohd. Yusof, Rosidah Musa, Sofiah Abd. Rahman

Abstract:

Building on previous studies that examined the importance of functional store image and CSR, this study examines their effects within the self-congruity model in influencing store loyalty. In particular, this study developed and tested a structural model, grounded in self-congruity theory, in the context of the retailing industry. While much of the self-congruity literature has incorporated functional store image, there has been a lack of studies examining the social responsibility image of retail stores. Findings indicate that the influence of self-congruity on store loyalty was mediated by both functional store image and social responsibility image. The findings also show that social responsibility image has a stronger influence on store loyalty than functional store image. This study offers important findings and implications for future research, as it presents a new framework highlighting the importance of social responsibility image.

Keywords: Self-congruity, functional store image, social responsibility image, store loyalty

1502 Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems

Authors: Mario Mastriani

Abstract:

In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard used, so that the two compression rates are multiplied. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, we use a convolutive mask inside the decoder that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
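
The edge-restoring "convolutive mask" in the decoder acts like a sharpening convolution applied after decoding. A minimal sketch of that final step, assuming a generic 3x3 sharpening kernel (the paper's actual mask is not specified here):

```python
import numpy as np
from scipy.ndimage import convolve

def restore_edges(decoded, amount=1.0):
    """Apply a sharpening convolution to reduce upsampling blur."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)   # assumed mask
    sharp = convolve(decoded.astype(float), kernel, mode='reflect')
    # Blend original and sharpened output by 'amount'
    return np.clip((1 - amount) * decoded + amount * sharp, 0, 255)

frame = np.random.randint(0, 256, (64, 64)).astype(float)  # stand-in frame
print(restore_edges(frame).shape)
```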

Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.

1501 Springback Property and Texture Distribution of Grained Pure Copper

Authors: Takashi Sakai, Hitoshi Omata, Jun-Ichi Koyama

Abstract:

To improve the material characteristics of single- and poly-crystals of pure copper, the respective relationships between crystallographic orientations and microstructures, and the bending and mechanical properties, were examined, and the texture distribution was also analyzed. A grain refinement procedure was performed to obtain a grained structure. Furthermore, analytical results related to crystal direction maps, inverse pole figures, and textures were obtained from SEM-EBSD analyses. Results showed that these grained metallic materials have peculiar springback characteristics at various bending angles.

Keywords: Pure Copper, Grain Refinement, Environmental Materials, SEM-EBSD Analysis, Texture, Microstructure

1500 Scatterer Density in Nonlinear Diffusion for Speckle Reduction in Ultrasound Imaging: The Isotropic Case

Authors: Ahmed Badawi

Abstract:

This paper proposes a method for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. A nonlinear image diffusion method is proposed that incorporates a local image parameter, namely scatterer density, in addition to gradient, to weight the diffusion process. The method was tested for the isotropic case with a contrast-detail phantom and a variety of clinical ultrasound images, and then compared to linear and other diffusion enhancement methods. Different diffusion parameters were tested and tuned to best reduce speckle noise and preserve edges. The method showed superior performance, measured both quantitatively and qualitatively, when incorporating scatterer density into the diffusivity function. The proposed filter can be used as a preprocessing step for ultrasound image enhancement before applying automatic segmentation, automatic volumetric calculations, or 3D ultrasound volume rendering.
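
A minimal sketch of the underlying nonlinear diffusion iteration (Perona-Malik style), with the diffusivity additionally weighted by a scatterer-density map as the abstract describes; the `density` array and all parameter values are assumptions:

```python
import numpy as np

def diffuse(img, density, n_iter=20, k=15.0, lam=0.2):
    """Isotropic nonlinear diffusion with density-weighted diffusivity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u,  1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u,  1, 1) - u
        # Diffusivity decreases with gradient, modulated by scatterer density
        g = lambda d: np.exp(-(d / k) ** 2) * density
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

img = np.random.rand(64, 64) * 255
density = np.ones_like(img)          # placeholder scatterer-density estimate
print(diffuse(img, density).mean())
```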

Keywords: Ultrasound imaging, Nonlinear isotropic diffusion, Speckle noise, Scattering.

1499 Automatic Number Plate Recognition System Based on Deep Learning

Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used in safety, security, and commercial applications. Accordingly, several methods and techniques have been developed to achieve better accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS). In addition, it proposes an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. To recognize the number on the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed with four convolution layers, two max-pooling layers, and six fully connected layers. The model was trained on a number-image database on the Jetson TX2 NVIDIA target and achieved an accuracy of 95.84%.
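
The stated architecture (four convolution layers, two max-pooling layers, six fully connected layers) can be sketched in PyTorch; the channel widths, the 32x32 input size and the 36-class output are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class PlateCharCNN(nn.Module):
    def __init__(self, n_classes=36):        # assumed: digits 0-9 + A-Z
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(      # six fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                     # x: (N, 1, 32, 32)
        return self.classifier(self.features(x))

print(PlateCharCNN()(torch.randn(1, 1, 32, 32)).shape)  # (1, 36)
```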

Keywords: Automatic number plate recognition, character segmentation, convolutional neural network, CNN, deep learning, number plate localization.

1498 Combined Hashing/Watermarking Method for Image Authentication

Authors: Vlado Kitanovski, Dimitar Taskovski, Sofija Bogdanova

Abstract:

In this paper we present a combined hashing/watermarking method for image authentication. A robust image hash, invariant to legitimate modifications but fragile to illegitimate ones, is generated from the local image characteristics. To increase the security of the system, the watermark is generated using the image hash as a key. Quantization Index Modulation of DCT coefficients is used for watermark embedding, and watermark detection is performed without use of the original image. Experimental results demonstrate the effectiveness of the presented method in terms of robustness and fragility.
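
Quantization Index Modulation embeds one bit per DCT coefficient by snapping the coefficient to one of two interleaved quantizer lattices; detection picks the nearer lattice. A minimal sketch with an assumed step size:

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantize each coefficient to an even (bit 0) or odd (bit 1)
    multiple of delta/2."""
    c = np.asarray(coeffs, float)
    b = np.asarray(bits, float)
    return delta * np.round((c - b * delta / 2) / delta) + b * delta / 2

def qim_detect(coeffs, delta=8.0):
    """Recover bits by checking which lattice each coefficient is nearer to."""
    c = np.asarray(coeffs, float)
    d0 = np.abs(c - delta * np.round(c / delta))
    d1 = np.abs(c - (delta * np.round((c - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)

coeffs = np.array([13.2, -7.9, 41.5, 3.3])   # stand-in DCT coefficients
bits   = np.array([1, 0, 1, 0])
print(qim_detect(qim_embed(coeffs, bits)))   # -> [1 0 1 0]
```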

Keywords: authentication, blind watermarking, image hash, semi-fragile watermarking

1497 Neural Network based Texture Analysis of Liver Tumor from Computed Tomography Images

Authors: K. Mala, V. Sadasivam, S. Alagappan

Abstract:

Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet transform and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatic adenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train a Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.

Keywords: Fuzzy c means clustering, texture analysis, probabilistic neural network, LVQ neural network.

1496 Local Image Descriptor using VQ-SIFT for Image Retrieval

Authors: Qiu Chen, Feifei Lee, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we present a local image descriptor using VQ-SIFT for more effective and efficient image retrieval. Instead of SIFT's weighted orientation histograms, we apply a vector quantization (VQ) histogram as an alternative representation for SIFT features. Experimental results show that SIFT features with VQ-based local descriptors achieve better image retrieval accuracy than the conventional algorithm, while the computational cost is significantly reduced.
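
A minimal sketch of the VQ-histogram idea: cluster SIFT descriptors with k-means, then describe an image by the histogram of its descriptors' codeword assignments. The codebook size is an assumed parameter, and OpenCV/scikit-learn stand in for the paper's implementation:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def vq_histogram(gray, kmeans):
    """Describe an image by its SIFT-codeword occurrence histogram."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(kmeans.n_clusters)
    words = kmeans.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()                    # normalized VQ histogram

# Codebook trained on descriptors pooled from a training set (stand-in data)
train_desc = np.random.rand(5000, 128)
codebook = KMeans(n_clusters=64, n_init=10).fit(train_desc)

img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print(vq_histogram(img, codebook).shape)        # (64,)
```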

Keywords: SIFT feature, Vector quantization histogram, Local descriptor, Image retrieval.

1495 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most widely used techniques in today's digital world. CBIR, expanded as Content Based Image Retrieval, is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from digital images. In this paper, two research works using CBIR are presented. The first work provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as the Contourlet, Ridgelet and Shearlet are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier, labelling it as one of four classes, namely 1) normal brain, 2) benign tumour, 3) malignant tumour and 4) severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. These two approaches are a contribution to the medical field, giving better retrieval performance and enabling tumour stage identification.
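
A minimal scikit-learn sketch of the described train/test pattern, with a stand-in feature matrix in place of the Contourlet/Ridgelet/Shearlet texture features and the four classes named in the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in texture features and the four classes (1=normal .. 4=severe)
X = np.random.rand(200, 32)                 # hypothetical feature vectors
y = np.random.randint(1, 5, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", decision_function_shape="ovr"))]:
    clf.fit(X_tr, y_tr)                     # train phase
    print(name, clf.score(X_te, y_te))      # test phase (prediction accuracy)
```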

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

1494 Labeling Method in Steganography

Authors: H. Motameni, M. Norouzi, M. Jahandar, A. Hatami

Abstract:

This paper presents a method of hiding a text message in a gray-scale image (steganography). The method first finds the binary value of each character of the text message, and then finds the dark (black) regions of the gray image by converting the original image into a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB images in order to locate the dark places, since each run of gray levels maps onto an RGB color; if the gray image is very light, its histogram must be adjusted manually to isolate the dark places. In the final stage, each group of 8 pixels in the dark regions is treated as one byte, and the binary value of each character is placed in the low bit of each byte formed from the dark-place pixels, thereby increasing the security of the basic LSB steganography approach.
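
A minimal sketch of the final embedding stage: character bits are written into the low bits of dark-region pixels. Selecting dark pixels by a simple threshold is a simplification of the labeling procedure described, and the threshold value is assumed:

```python
import numpy as np

def embed_lsb(img, message, dark_thresh=50):
    """Hide message bits in the LSBs of dark pixels of a grayscale image."""
    out = img.copy()
    dark = np.argwhere(img < dark_thresh)          # simplified dark-region mask
    bits = [(ord(ch) >> i) & 1 for ch in message for i in range(7, -1, -1)]
    assert len(bits) <= len(dark), "not enough dark pixels"
    for bit, (r, c) in zip(bits, dark):
        out[r, c] = (out[r, c] & 0xFE) | bit       # replace the low bit
    return out

def extract_lsb(stego, n_chars, dark_thresh=50):
    """Recover n_chars characters using the same dark-pixel ordering."""
    dark = np.argwhere(stego < dark_thresh)
    bits = [stego[r, c] & 1 for r, c in dark[:n_chars * 8]]
    return ''.join(chr(int(''.join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(extract_lsb(embed_lsb(img, "hi"), 2))        # -> "hi"
```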

Keywords: Binary image, labeling, low bit, neighborhood, RGB image, steganography, threshold.

1493 Parallel Image Compression and Analysis with Wavelets

Authors: M. Kutila, J. Viitanen

Abstract:

This paper presents image compression with a wavelet-based method. The wavelet transform divides an image into low- and high-pass filtered parts. The traditional JPEG compression technique requires lower computation power with feasible losses when only compression is needed. However, there is an obvious need for wavelet-based methods in certain circumstances, namely applications in which image analysis is done in parallel with compression. For example, the high-frequency bands can be used to detect changes or edges. Wavelets also enable hierarchical analysis of the low-pass filtered sub-images: the first analysis can be done on a small image, and only if anything interesting is found is the whole image processed or reconstructed.
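
A minimal PyWavelets sketch of the described scheme: split the image into low- and high-pass bands, threshold the detail bands for compression, and inspect the small low-pass sub-image (or the detail bands) before reconstructing the full image; the threshold value is an assumption:

```python
import numpy as np
import pywt

img = np.random.rand(128, 128) * 255

# One-level 2-D DWT: approximation (low-pass) and detail (high-pass) bands
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# Compression step: discard small detail coefficients
thr = 20.0                                   # assumed threshold
cH, cV, cD = (pywt.threshold(c, thr, mode='hard') for c in (cH, cV, cD))

# Hierarchical analysis: the quarter-size cA can be analyzed first, and
# the detail bands scanned for edges/changes, before full reconstruction
interesting = np.abs(cH).max() > thr
if interesting:
    restored = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
    print(restored.shape)                    # (128, 128)
```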

Keywords: image compression, jpeg, wavelet, vlc

1492 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

To enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, resulting in overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within limited extents of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these areas are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images of varying contrast, and the results are compared to conventional approaches to show its superiority.
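
A minimal sketch of the core idea: split the gray range at brightness points and equalize each sub-histogram only within its own range. The two fixed split points are illustrative; the paper derives the segmentation points and further limits each equalization using the sub-histogram's mean and variance:

```python
import numpy as np

def sub_hist_equalize(img, splits=(85, 170)):
    """Equalize each brightness segment within its own gray-level range."""
    out = np.zeros_like(img)
    bounds = [0, *splits, 256]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        # Map each segment onto [lo, hi-1] only, avoiding over-equalization
        lut = (lo + cdf * (hi - 1 - lo)).astype(img.dtype)
        out[mask] = lut[img[mask] - lo]
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
eq = sub_hist_equalize(img)
print(eq.min(), eq.max())
```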

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise

1491 An Image Matching Method for Digital Images Using Morphological Approach

Authors: Pinaki Pratim Acharjya, Dibyendu Ghoshal

Abstract:

Image matching methods play a key role in establishing correspondence between two image scenes. This paper presents a method for matching digital images using mathematical morphology. The proposed method has been applied to real-life images, and the matching process has shown successful and promising results.

Keywords: Digital image, gradients, image matching, mathematical morphology.

1490 An Efficient Pixel Based Cervical Disc Localization

Authors: J. Preetha, S. Selvarajan

Abstract:

When neck pain is associated with pain, numbness, or weakness in the arm, shoulder, or hand, further investigation is needed, as these symptoms indicate pressure on one or more nerve roots. Evaluation necessitates a neurologic examination and imaging using an MRI/CT scan. A degenerating disc loses some thickness and is less flexible, causing the inter-vertebral space to narrow. A radiologist diagnoses Intervertebral Disc Degeneration (IDD) by localizing every inter-vertebral disc and identifying the pathology in a disc based on its geometry and appearance. Accurate localization is necessary to diagnose IDD pathology, but the underlying image signal is ambiguous: a disc's intensity overlaps the spinal nerve fibres, and the structure changes from case to case, with possible spinal column bending (scoliosis). Quantitative assessment of inter-vertebral disc pathology requires accurate localization of the cervical-region discs. In this work, the efficacy of a multilevel set segmentation model for segmenting cervical discs is investigated. The segmented images are annotated using a simple distance matrix.

Keywords: Intervertebral Disc Degeneration (IDD), Cervical Disc Localization, multilevel set segmentation.

1489 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition

Authors: Yalong Jiang, Zheru Chi

Abstract:

In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to better fit it to a task. Secondly, the number of independent functional units in a capsule network is adjusted to fit the training dataset. Thirdly, a method based on a Bayesian GAN is proposed to enrich the variances in the current dataset and increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor in determining the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.

Keywords: CNN, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation.

1488 Integral Image-Based Differential Filters

Authors: Kohei Inoue, Kenji Hara, Kiichi Urahama

Abstract:

We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related by simultaneous linear equations, where the numbers of unknowns and equations are equal; therefore, we can perform the integration and differentiation by solving the simultaneous equations. We applied this relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method.
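
The integration/differentiation relationship can be illustrated directly in numpy: an integral image is a double cumulative sum, the second mixed difference recovers the original pixels, and a box sum falls out of four lookups (a simple sketch, not the paper's exact derivation):

```python
import numpy as np

img = np.random.rand(6, 6)

# Integration: integral image via cumulative sums
ii = img.cumsum(0).cumsum(1)

# Differentiation: the second mixed difference recovers the original image,
# showing integration and differencing are inverse linear systems
pad = np.pad(ii, ((1, 0), (1, 0)))
recovered = pad[1:, 1:] - pad[:-1, 1:] - pad[1:, :-1] + pad[:-1, :-1]
print(np.allclose(recovered, img))                 # True

# Box-sum difference filter from four lookups (sum of img[1:4, 2:5])
r0, r1, c0, c1 = 1, 4, 2, 5
box = pad[r1, c1] - pad[r0, c1] - pad[r1, c0] + pad[r0, c0]
print(np.isclose(box, img[r0:r1, c0:c1].sum()))    # True
```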

Keywords: Integral images, differential images, differential filters, image fusion.

1487 Evaluating Content Based Image Retrieval Techniques with the One Million Images CLIC Test Bed

Authors: Pierre-Alain Moëllic, Patrick Hède, Gregory Grefenstette, Christophe Millet

Abstract:

Pattern recognition and image recognition methods are commonly developed and tested using testbeds, which contain known responses to a query set. Until now, testbeds available for image analysis and content-based image retrieval (CBIR) have been scarce and small-scale. Here we present the one million images CEA-List Image Collection (CLIC) testbed that we have produced, and report on our use of this testbed to evaluate image analysis merging techniques. This testbed will soon be made publicly available through the EU MUSCLE Network of Excellence.

Keywords: CBIR, CLIC, evaluation, image indexing and retrieval, testbed.

1486 Efficient CT Image Volume Rendering for Diagnosis

Authors: HaeNa Lee, Sun K. Yoo

Abstract:

Volume rendering is widely used in medical CT image visualization. Applying 3D image visualization to diagnostic applications can require accurate volume rendering at high resolution. Interpolation is important in medical image processing applications such as image compression and volume resampling. However, it can distort the original image data through edge blurring or blocking effects when image enhancement procedures are applied. In this paper, we propose an adaptive tension control method exploiting gradient information to achieve high-resolution medical image enhancement in volume visualization, where restored images remain as similar as possible to the original images. The experimental results show that the proposed method can improve image quality, demonstrating the efficacy of the adaptive tension control.

Keywords: Tension control, Interpolation, Ray-casting, Medical imaging analysis.

1485 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality

Authors: Sang-Chul Kim, Sung-Il Chien

Abstract:

The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous gray-scale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
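
A minimal Floyd-Steinberg-style sketch of the worm-artifact countermeasure: random perturbations added to the error-filter weights at each pixel. The edge-enhancing filter built from the quantization error and neighbour gradients is omitted, and the noise amplitude is an assumed parameter:

```python
import numpy as np

def error_diffuse(img, noise=0.1, seed=0):
    """Binary halftone via error diffusion with randomly perturbed weights."""
    rng = np.random.default_rng(seed)
    u = img.astype(float) / 255.0
    h, w = u.shape
    out = np.zeros((h, w))
    base = np.array([7.0, 3.0, 5.0, 1.0]) / 16.0   # Floyd-Steinberg weights
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if u[y, x] >= 0.5 else 0.0
            err = u[y, x] - out[y, x]
            # Perturb weights to break up worm-like pixel chains
            wts = base * (1 + rng.uniform(-noise, noise, 4))
            wts /= wts.sum()                        # still diffuse all of err
            if x + 1 < w:               u[y, x + 1]     += err * wts[0]
            if y + 1 < h and x > 0:     u[y + 1, x - 1] += err * wts[1]
            if y + 1 < h:               u[y + 1, x]     += err * wts[2]
            if y + 1 < h and x + 1 < w: u[y + 1, x + 1] += err * wts[3]
    return out

halftone = error_diffuse(np.random.randint(0, 256, (32, 32)))
print(halftone.mean())
```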

Keywords: Artifact suppression, Edge enhancement, Error diffusion method, Halftone image

1484 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and become a very active research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not yet been well solved, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. Given varying image capture conditions, the algorithms need to consider a variety of possible environmental factors, as colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set, acquired under various environmental conditions, for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
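
A minimal OpenCV sketch of the described pipeline: YCrCb conversion, adaptive (non-fixed) thresholding, erosion/dilation cleanup, and contour extraction. The kernel size, block size and minimum contour area are assumptions:

```python
import cv2
import numpy as np

def detect_objects(bgr):
    """Segment candidate object regions under uneven illumination."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]
    # Adaptive threshold avoids a single fixed global value
    mask = cv2.adaptiveThreshold(y, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)            # remove shadow speckle
    mask = cv2.dilate(mask, kernel)           # restore object extent
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 500]

img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
print(len(detect_objects(img)))
```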

Keywords: Image processing, Illumination equalization, Shadow filtering, Object detection, Colour models, Image segmentation.

1483 A Novel Approach to Image Compression of Colour Images by Plane Reduction Technique

Authors: K. Sowmyan, A. Siddarth, D. Menaka

Abstract:

Several methods have been proposed for color image compression, but the reconstructed images had a very low signal-to-noise ratio, which made them inefficient. This paper describes a lossy compression technique for color images that overcomes these drawbacks. The technique works in the spatial domain, where the pixel values of the RGB planes of the input color image are mapped onto two-dimensional planes. The proposed technique produced better results than JPEG2000 and 2DPCA, and a comparative study is reported based on image quality measures such as PSNR and MSE. Experiments on real images compare this methodology with previous ones and demonstrate its advantages.

Keywords: Color image compression, spatial domain, plane reduction, root mean square, image restoration

1482 A Trends Analysis of Image Processing in Unmanned Aerial Vehicle

Authors: Jae-Neung Lee, Keun-Chang Kwak

Abstract:

This paper analyzes domestic and international trends in image processing for data from UAVs (unmanned aerial vehicles) and also explains UAVs and quadcopters. Overseas examples of image processing using UAVs include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image processing using SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding. In Korea, many studies are underway, including visualization of new urban buildings.

Keywords: Image Processing, UAV, Quadcopter, Target detection.

1481 Robust Semi-Blind Digital Image Watermarking Technique in DT-CWT Domain

Authors: Samira Mabtoul, Elhassan Ibn Elhaj, Driss Aboutajdine

Abstract:

In this paper, a new robust digital image watermarking algorithm based on the complex wavelet transform is proposed. The technique embeds different parts of a watermark into different blocks of an image in the complex wavelet domain. To increase the security of the method, two chaotic maps are employed: one map determines the blocks of the host image for watermark embedding, and another map encrypts the watermark image. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm.
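
A minimal sketch of the two chaotic-map roles described: a logistic map keyed by an initial value selects host blocks, and a second map encrypts the watermark bits. The map parameter and keys are illustrative:

```python
import numpy as np

def logistic_seq(x0, n, r=3.99):
    """Chaotic logistic map x <- r*x*(1-x); highly sensitive to the key x0."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

n_blocks, n_used = 256, 64
# Map 1: key x0 selects which host-image blocks receive watermark bits
order = np.argsort(logistic_seq(0.3141, n_blocks))        # key = 0.3141
selected_blocks = order[:n_used]

# Map 2: a second key encrypts the watermark via XOR with a chaotic bitstream
wm_bits = np.random.randint(0, 2, n_used)
keystream = (logistic_seq(0.2718, n_used) > 0.5).astype(int)  # key = 0.2718
encrypted = wm_bits ^ keystream
print((encrypted ^ keystream == wm_bits).all())           # True: decryptable
```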

Keywords: Image watermarking, Chaotic map, DT-CWT.

1480 Fuzzy Hyperbolization Image Enhancement and Artificial Neural Network for Anomaly Detection

Authors: Sri Hartati, Agus Harjoko, Brad G. Nickerson

Abstract:

A prototype anomaly detection system was developed to automate the process of recognizing anomalies in roentgen images by utilizing fuzzy histogram hyperbolization image enhancement and a back-propagation artificial neural network. The system consists of image acquisition, a pre-processor, a feature extractor, a response selector and an output. Fuzzy histogram hyperbolization is chosen to improve the quality of the roentgen image; its steps consist of fuzzification, modification of the membership function values, and defuzzification. Image features are extracted after the quality of the image is improved, and the extracted features are input to the artificial neural network for detecting anomalies. The number of nodes in the proposed ANN layers was kept small. Experimental results indicate that the fuzzy histogram hyperbolization method can be used to improve the quality of the image, and that the system is capable of detecting anomalies in roentgen images.
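
A minimal sketch of fuzzy histogram hyperbolization in its common form: fuzzify gray levels to [0, 1] memberships, modify them with an exponent, then defuzzify through a hyperbolic (exponential) mapping. The beta parameter is an assumed contrast control:

```python
import numpy as np

def fuzzy_hyperbolization(img, beta=0.9, levels=256):
    """Fuzzify, modify memberships, defuzzify via a hyperbolic mapping."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin)              # fuzzification to [0, 1]
    mu_mod = mu ** beta                          # membership modification
    # Defuzzification: exponential gray-level mapping back to [0, levels-1]
    out = (levels - 1) / (np.exp(-1.0) - 1) * (np.exp(-mu_mod) - 1)
    return out.astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enhanced = fuzzy_hyperbolization(img)
print(enhanced.min(), enhanced.max())
```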

Keywords: Image processing, artificial neural network, anomaly detection.

1479 On the Performance of Information Criteria in Latent Segment Models

Authors: Jaime R. S. Fonseca

Abstract:

Notwithstanding the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key step in deriving latent segment structures, and it is desirable that the selection criteria used for this end are effective. In order to select among several information criteria which may support the selection of the correct number of segments, we conduct a simulation study. In particular, this study is intended to determine which information criteria are more appropriate for mixture model selection when considering data sets with only categorical segmentation base variables. The generation of mixtures of multinomial data supports the proposed analysis. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The criterion AIC3 shows the best performance: it indicates the correct number of segments in the simulated structure most often when referring to mixtures of multinomial segmentation base variables.
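
The criteria compared differ only in how they penalize model complexity; for a fitted mixture with maximized log-likelihood ln L and k free parameters, AIC3 charges 3 per parameter. A minimal sketch with illustrative numbers:

```python
import numpy as np

def information_criteria(log_lik, k, n):
    """AIC, AIC3 and BIC for a fitted finite mixture model.
    log_lik: maximized log-likelihood; k: free parameters; n: sample size."""
    return {
        "AIC":  -2 * log_lik + 2 * k,
        "AIC3": -2 * log_lik + 3 * k,     # the criterion favoured in the study
        "BIC":  -2 * log_lik + k * np.log(n),
    }

# Hypothetical fits with 2, 3 and 4 latent segments (illustrative values)
fits = {2: (-1520.4, 9), 3: (-1495.1, 14), 4: (-1492.8, 19)}
for segments, (ll, k) in fits.items():
    print(segments, information_criteria(ll, k, n=600))
```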

Keywords: Quantitative Methods, Multivariate Data Analysis, Clustering, Finite Mixture Models, Information Theoretical Criteria, Simulation experiments.

1478 On the Reduction of Side Effects in Tomography

Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan

Abstract:

As Computed Tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, Computed Tomography requires many projections for good quality reconstruction. In this paper, low variability of the particles in an object is exploited to obtain a good quality reconstruction. Though the reconstructed image and the original image have the same projections, in general they need not be identical. If, in addition to the projections, a priori information about the image is known, it is possible to obtain a good quality reconstructed image. This paper shows by experimental results why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its row and column projections, together with a known sub-image of the unknown image and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses the necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.

Keywords: Discrete Tomography, Image Reconstruction, Projection, Computed Tomography, Integral Max Flow Problem, Smooth Binary Image.
