Search results for: Architectural Image
1433 Composite Relevance Feedback for Image Retrieval
Authors: Pushpa B. Patil, Manesh B. Kokare
Abstract:
This paper presents a content-based image retrieval (CBIR) framework with relevance feedback (RF) based on the combined learning of support vector machines (SVM) and AdaBoost. The framework retains only the most relevant images returned by both learning algorithms. To speed up the system, it removes from the database the irrelevant images identified by the SVM learner, which is key to achieving effective retrieval in terms of both time and accuracy. The experimental results show that the framework yields a significant improvement in retrieval effectiveness and, ultimately, overall retrieval performance.
Keywords: Image retrieval, relevance feedback, wavelet transform.
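A minimal sketch of the kind of combined SVM/AdaBoost relevance-feedback round described above, assuming precomputed feature vectors and a binary relevant/irrelevant labelling from the user; the combination rule and all names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: combine SVM and AdaBoost relevance feedback (not the paper's exact code).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

def rf_round(features, labeled_idx, labels, top_k=20):
    """One relevance-feedback round: train both learners on the labeled
    images and keep only images ranked relevant by *both* of them."""
    X, y = features[labeled_idx], labels            # user-labeled examples (1=relevant, 0=irrelevant)
    svm = SVC(probability=True).fit(X, y)
    ada = AdaBoostClassifier(n_estimators=50).fit(X, y)

    p_svm = svm.predict_proba(features)[:, 1]       # relevance score from the SVM learner
    p_ada = ada.predict_proba(features)[:, 1]       # relevance score from the AdaBoost learner

    keep = np.where(p_svm >= 0.5)[0]                # drop images the SVM calls irrelevant (speed-up step)
    score = p_svm[keep] * p_ada[keep]               # an image must look relevant to both learners
    return keep[np.argsort(score)[::-1][:top_k]]    # indices of the top-k images to show next

# Toy usage: 200 database images with 32-dim features, 10 labeled by the user.
feats = np.random.rand(200, 32)
labeled = np.arange(10)
user_labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(rf_round(feats, labeled, user_labels)[:5])
```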
1432 Optimizing Exposure Parameters in Digital Mammography: A Study in Morocco
Authors: Talbi Mohammed, Oustous Aziz, Ben Messaoud Mounir, Sebihi Rajaa, Khalis Mohammed
Abstract:
Background: Breast cancer is the leading cause of death for women around the world. Screening mammography is the reference examination, due to its sensitivity for detecting small lesions and micro-calcifications. Therefore, it is essential to ensure quality mammographic examinations at the most optimal dose; these conditions depend on the choice of exposure parameters, and clinical practices must be evaluated in order to determine the most appropriate ones. Material and Methods: We performed our measurements on a mobile mammography unit (PLANMED Sofie-classic) in Morocco. A solid-state dosimeter (AGMS Radcal) and an MTM 100 phantom were used to quantify the delivered dose and the image quality. For image quality assessment, scores defined by the rate of visible inserts in the MTM 100 phantom were obtained and compared for each acquisition. Results: The results show that the parameters of the mammography unit on which we made our measurements can be improved in order to offer a better compromise between image quality and breast dose. The latter can be reduced by 13.27% to 22.16% while preserving comparable image quality.
Keywords: Mammography, image quality, breast dose.
1431 A Way of Converting Color Images to Gray Scale Ones for the Color Blinds -Reducing the Colors for Tokyo Subway Map-
Authors: Katsuhiro Narikiyo, Naoto Kobayakawa
Abstract:
We propose a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images into monochrome images for color-blind viewers. We treat crisp, color-coded images such as the Tokyo subway map, in which each color carries important information. For color-blind viewers, however, similar colors cannot be distinguished. If those colors are converted to distinct gray values, they become distinguishable.
Keywords: Image processing, Color blind, JPEG
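A hedged sketch of the basic idea: reduce the image to a small, known palette and then assign each palette color a well-separated gray level. The palette values and spacing rule below are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (assumed palette and spacing rule): map each line color of a
# transit-map-like image to a distinct, well-separated gray value.
import numpy as np

def posterize_to_palette(img, palette):
    """Assign every pixel to the nearest palette color (simple noise/color reduction)."""
    d = np.linalg.norm(img[:, :, None, :].astype(float) - palette[None, None, :, :], axis=3)
    return d.argmin(axis=2)                      # palette index per pixel

def palette_to_grays(n_colors):
    """Spread n_colors over 0..255 so similar hues end up as clearly distinguishable grays."""
    return np.linspace(0, 255, n_colors).astype(np.uint8)

# Toy usage: 4 subway-line colors on a 2x2 image.
palette = np.array([[230, 0, 18], [0, 104, 183], [0, 140, 95], [240, 131, 0]])  # red, blue, green, orange
img = np.array([[[231, 2, 20], [1, 103, 180]],
                [[3, 141, 96], [238, 130, 2]]], dtype=np.uint8)
idx = posterize_to_palette(img, palette)
gray = palette_to_grays(len(palette))[idx]
print(gray)
```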
1430 Particle Image Velocimetry for Measuring Water Flow Velocity
Authors: King Kuok Kuok, Po Chan Chiu
Abstract:
Floods are natural phenomena that may turn into disasters, causing widespread damage, health problems and even deaths. Nowadays, floods have become more serious and more frequent due to climate change. During flooding, discharge measurements can still be taken by standing on a bridge across the river with a portable measurement instrument, but it is too dangerous to get near the river, especially at high flood stages. Therefore, this study employs Particle Image Velocimetry (PIV) as a tool to measure surface flow velocity. PIV is an image processing technique that tracks the movement of water from one point to another. The PIV code was developed in Matlab. In this study, 18 ping pong balls were scattered over the surface of a drain and images were taken with a digital SLR camera. The images were then analyzed with the PIV code. Results show that PIV is able to estimate the flow velocity by analyzing the captured image sequence.
Keywords: Particle Image Velocimetry, flow velocity, surface flow.
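A minimal PIV-style sketch of the correlation step behind such tracking: estimate the displacement of one interrogation window between two frames via FFT cross-correlation. The window size, frame pair and tracer are toy assumptions; the paper's Matlab code is not reproduced.

```python
# FFT cross-correlation between two frames gives the pixel displacement of the tracers.
import numpy as np

def window_displacement(win_a, win_b):
    """Return (dy, dx) such that win_a is approximately win_b shifted by (dy, dx)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]   # unwrap circular shifts
    return shift                                           # pixels of motion between the two exposures

# Toy usage: a bright "tracer" moves 3 px down and 2 px right between frames.
frame1 = np.zeros((64, 64)); frame1[20:24, 30:34] = 1.0
frame2 = np.roll(np.roll(frame1, 3, axis=0), 2, axis=1)
dy, dx = window_displacement(frame2, frame1)
print(dy, dx)                                              # ~3, ~2; velocity = displacement / frame interval
```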
1429 Adaptive Skin Segmentation Using Color Distance Map
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
In this paper, an effective approach for segmenting human skin regions in images taken in different environments is proposed. The proposed method uses a color distance map that is flexible enough to reliably detect skin regions even when the illumination conditions of the image vary. Local image conditions are also taken into account, which helps the technique adaptively detect differently illuminated skin regions of an image. Moreover, the use of local information also helps the skin detection process avoid picking up noisy pixels.
Keywords: Color distance map, reference skin color, region growing, skin segmentation.
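A simplified sketch of the distance-map idea: compute a per-pixel chrominance distance to a reference skin color and threshold it, with a crude locally adaptive threshold standing in for the paper's use of local image conditions. The reference color, thresholds and color space are assumptions.

```python
# Simplified sketch (assumed reference color and thresholds; not the paper's exact distance map).
import numpy as np

def skin_mask(img_rgb, ref_cb_cr=(110.0, 150.0), base_thresh=18.0):
    """Return a boolean skin mask from an RGB image using a YCbCr-like color distance."""
    r, g, b = [img_rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b        # standard RGB -> Cb/Cr approximation
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    dist = np.hypot(cb - ref_cb_cr[0], cr - ref_cb_cr[1])  # color distance map

    # Locally adaptive threshold: relax it slightly where the local mean distance is already low
    # (a crude stand-in for the paper's locally adaptive behaviour).
    k = 15
    pad = np.pad(dist, k // 2, mode='edge')
    local = np.array([[pad[i:i + k, j:j + k].mean() for j in range(dist.shape[1])]
                      for i in range(dist.shape[0])])
    thresh = base_thresh + 0.25 * (base_thresh - np.minimum(local, base_thresh))
    return dist < thresh

# Toy usage on a 32x32 image: a flat skin-like patch next to a blue patch.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[:, :16] = (205, 150, 125)     # roughly skin-toned
img[:, 16:] = (30, 60, 200)       # clearly non-skin
print(skin_mask(img)[:, [0, -1]].mean(axis=0))   # ~1.0 for the skin column, ~0.0 for the blue column
```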
1428 Comparative Study of Different Enhancement Techniques for Computed Tomography Images
Authors: C. G. Jinimole, A. Harsha
Abstract:
One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better representation for further processing. Contrast enhancement is one of the accepted methods used for image enhancement in various medical applications, and it helps to visualize and extract details of brain infarctions, tumors, and cancers from the CT image. This paper presents a comparison of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. The techniques are compared with each other to find out which provides better contrast for CT images, using the Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) as comparison parameters. Logarithmic Transformation provided the clearest and best-quality image of all techniques studied and achieved the highest PSNR. The comparison identifies the better-suited approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
Keywords: Computed tomography, enhancement techniques, increasing contrast, PSNR and MSE.
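A hedged sketch of two of the compared operations: the logarithmic transformation applied to a low-contrast slice, plus the MSE/PSNR figures used for comparison. The constants and the synthetic image below are illustrative assumptions, not the paper's data or parameters.

```python
import numpy as np

def log_transform(img):
    """s = c * log(1 + r), scaled back to 0..255; expands dark-region contrast."""
    img = img.astype(float)
    c = 255.0 / np.log(1.0 + img.max())
    return np.clip(c * np.log1p(img), 0, 255).astype(np.uint8)

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

# Toy usage: a synthetic low-contrast "CT" image confined to a narrow gray band.
rng = np.random.default_rng(0)
ct = rng.integers(40, 90, size=(128, 128)).astype(np.uint8)
enhanced = log_transform(ct)
print(round(mse(ct, enhanced), 1), round(psnr(ct, enhanced), 2))
```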
1427 Edge Detection with the Parametric Filtering Method (Comparison with Canny Method)
Authors: Yacine Ait Ali Yahia, Abderazak Guessoum
Abstract:
In this paper, a new method of image edge detection and characterization is presented. The "Parametric Filtering" method uses a judiciously defined filter that preserves the correlation structure of the input signal in the autocorrelation of the output. This makes it possible to follow the evolution of the image correlation structure, as well as various distortion measures that quantify the deviation between two zones of the signal (the two Hamming signals), for the preservation of an image edge.
Keywords: Edge detection, parametrizable recursive filter, autocorrelation structure, distortion measurements.
1426 A Hyper-Domain Image Watermarking Method based on Macro Edge Block and Wavelet Transform for Digital Signal Processor
Authors: Yi-Pin Hsu, Shin-Yu Lin
Abstract:
To protect original data, watermarking is the first direction considered for digital information copyright. However, algorithms that achieve high image quality may be too computationally complex to run on an embedded system, even though most of today's algorithms need to target consumer products, where integrated circuits have made huge progress at low cost. In this paper, we propose a novel algorithm that efficiently inserts a watermark into a digital image and is very easy to implement on a digital signal processor. Furthermore, we select a general-purpose, low-cost digital signal processor made by Analog Devices to fit consumer applications. The experimental results show that the watermarked image quality reaches 46 dB, which is acceptable to human vision, and that the algorithm executes in real time on the digital signal processor.
Keywords: watermarking, digital signal processor, embedded system
1425 A New Approach to Image Segmentation via Fuzzification of Rényi Entropy of Generalized Distributions
Authors: Samy Sadek, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis, Usama Sayed
Abstract:
In this paper, we propose a novel approach for image segmentation via fuzzification of the Rényi Entropy of Generalized Distributions (REGD). The fuzzy REGD is used to precisely measure the structural information of the image and to locate the optimal threshold for segmentation. The proposed approach draws on the postulate that the optimal threshold coincides with the maximum information content of the distribution. The contributions of the paper are as follows. First, the fuzzy REGD is introduced as a measure of the spatial structure of an image. Then, an efficient entropic segmentation approach using the fuzzy REGD is proposed. Although the proposed approach belongs to the family of entropic segmentation approaches, which are commonly applied to grayscale images, it is adapted to be viable for segmenting color images. Lastly, diverse experiments on real images are carried out that show the superior performance of the proposed method.
Keywords: Entropy of generalized distributions, entropy fuzzification, entropic image segmentation.
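An illustrative sketch of plain (non-fuzzy) Rényi-entropy thresholding, which captures the underlying "maximize total entropy" idea mentioned above; the fuzzified REGD criterion and the color extension of the paper are not reproduced, and the order α and toy image are assumptions.

```python
import numpy as np

def renyi_threshold(gray, alpha=2.0):
    """Pick the gray level t that maximizes H_alpha(background) + H_alpha(foreground)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        h0 = np.log(np.sum((p[:t] / w0) ** alpha)) / (1.0 - alpha)   # Rényi entropy of the background
        h1 = np.log(np.sum((p[t:] / w1) ** alpha)) / (1.0 - alpha)   # Rényi entropy of the foreground
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Toy usage: bimodal image (dark background around 60, bright object around 190).
rng = np.random.default_rng(1)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000), rng.normal(190, 12, 3000)]), 0, 255).astype(np.uint8)
print(renyi_threshold(img.reshape(80, 100)))       # expected to fall between the two modes
```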
1424 An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization
Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun
Abstract:
This paper presents an optimal, unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialization. The method is original in that it uses the k-means clustering algorithm for an optimal initialization of the number of image classes on one hand, and exploits the Pearson system for an optimal assignment of statistical distributions to each considered class on the other. Satellite image exploitation requires different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches require the definition of several parameters, such as the number of image classes, the estimation of class variables, and generalized mixture distributions. The use of statistical image attributes gives convincing and promising results, provided that the initialization step assigns appropriate statistical distributions. The Pearson system, associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system assigns one distribution type according to several parameters, in particular the skewness β1 and the kurtosis β2. The adapted algorithms, namely the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm, are then applied to the satellite image segmentation problem. The efficiency of the combined algorithms was validated first with the Mean Quadratic Error (MQE) and then by visual inspection across several comparisons of the unsupervised image segmentations.
Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.
1423 Outdoor Anomaly Detection with a Spectroscopic Line Detector
Authors: O. J. G. Somsen
Abstract:
One of the tasks of optical surveillance is to detect anomalies in large amounts of image data. However, if the size of the anomaly is very small, limited information is available to distinguish it from the surrounding environment. Spectral detection provides a useful source of additional information and may help to detect anomalies with a size of a few pixels or less. Unfortunately, spectral cameras are expensive because of the difficulty of resolving two spatial dimensions in addition to one spectral dimension. We investigate the possibility of modifying a simple spectral line detector for outdoor detection. This may be especially useful if the area of interest forms a line, such as the horizon. We use a monochrome CCD that also enables detection into the near infrared. A simple camera is attached to the setup to determine which part of the environment is spectrally imaged. Our preliminary results indicate that sensitive detection of very small targets is indeed possible. Spectra could be taken from the various targets by averaging columns in the line image. By imaging a set of lines of various widths, we found narrow lines that could not be seen in the color image but remained visible in the spectral line image. A simultaneous analysis of the entire spectrum can produce better results than visual inspection of the spectral line image. We are presently developing calibration targets for spatial and spectral focusing and for alignment with the spatial camera. This will yield improved results and wider use in outdoor applications.
Keywords: Anomaly detection, spectroscopic line imaging, image analysis.
1422 Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan
Authors: Khunsa Fatima, Umar K. Khattak, Allah Bakhsh Kausar
Abstract:
Satellite image interpretation and analysis assist geologists by providing valuable information about the geology and minerals of an area to be surveyed. A test site in Fatejang, district Attock, has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five different supervised image classification techniques, namely maximum likelihood, parallelepiped, minimum distance to mean, Mahalanobis distance and spectral angle mapper, were applied to both satellite images to find the most suitable classification technique for lithological mapping in the study area. The results of these five classification techniques were compared with the geological map produced by the Geological Survey of Pakistan. The maximum likelihood classification applied to the ASTER satellite image had the highest correlation with the geological map, 0.66. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of the ASTER satellite image.
Keywords: ASTER, Landsat-ETM+, Satellite, Image classification.
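A hedged sketch of the maximum likelihood rule that performed best in the study: each pixel is assigned to the class with the highest Gaussian log-likelihood given per-class mean vectors and covariance matrices. The bands, classes and training statistics below are made up for illustration; they are not the Landsat/ASTER data.

```python
import numpy as np

def ml_classify(pixels, means, covs, priors=None):
    """Assign each pixel (n_bands vector) to the class with the highest Gaussian log-likelihood."""
    n_classes = len(means)
    priors = priors if priors is not None else np.full(n_classes, 1.0 / n_classes)
    scores = []
    for mu, cov, pr in zip(means, covs, priors):
        d = pixels - mu
        inv = np.linalg.inv(cov)
        maha = np.einsum('ij,jk,ik->i', d, inv, d)          # Mahalanobis distance term
        scores.append(np.log(pr) - 0.5 * np.log(np.linalg.det(cov)) - 0.5 * maha)
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Toy usage: two spectral classes in a 3-band space.
rng = np.random.default_rng(2)
means = [np.array([50.0, 80.0, 60.0]), np.array([120.0, 90.0, 150.0])]
covs = [np.eye(3) * 25.0, np.eye(3) * 40.0]
pixels = np.vstack([rng.multivariate_normal(means[0], covs[0], 100),
                    rng.multivariate_normal(means[1], covs[1], 100)])
labels = ml_classify(pixels, means, covs)
print((labels[:100] == 0).mean(), (labels[100:] == 1).mean())   # close to 1.0 for both
```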
1421 Image Indexing Using a Color Similarity Metric based on the Human Visual System
Authors: Angelo Nodari, Ignazio Gallo
Abstract:
The novelty proposed in this study is twofold: the development of a new color similarity metric based on the human visual system, and a new color indexing method based on a textual approach. The proposed color similarity metric is based on the color perception of the human visual system, so the results returned by the indexing system can fulfill user expectations as far as possible. We developed a web application to collect users' judgments about the similarities between colors, and these judgments are used to estimate the proposed metric. In order to index an image's colors, we used a text indexing engine to facilitate the integration of visual features into a database of text documents. The textual signature is built by weighting the image's colors according to their occurrence in the image. The use of a textual indexing engine provides a simple, fast and robust solution for indexing images. A typical use of the proposed system is the development of applications whose data are both visual and textual. To evaluate the proposed method, we chose a price comparison engine as a case study, collecting a series of commercial offers containing the textual description and the image representing each specific commercial offer.
Keywords: Color Extraction, Content-Based Image Retrieval, Indexing
1420 VDGMSISS: A Verifiable and Detectable Multi-Secret Images Sharing Scheme with General Access Structure
Authors: Justie Su-Tzu Juan, Ming-Jheng Li, Ching-Fen Lee, Ruei-Yu Wu
Abstract:
A secret image sharing scheme is a way to protect images. The main idea is to disperse the secret image into numerous shadow images. A scheme that can withstand impersonation attacks and achieve the highly practical property of multi-use is more practical. Therefore, this paper proposes a verifiable and detectable secret image sharing scheme called VDGMSISS to resist impersonation attacks and to achieve properties such as encrypting multiple secret images at one time and multi-use. Moreover, our scheme can also be used with any general access structure.
Keywords: Multi-secret image sharing scheme, verifiable, detectable, general access structure.
1419 Color Image Segmentation Using Competitive and Cooperative Learning Approach
Authors: Yinggan Tang, Xinping Guan
Abstract:
Color image segmentation can be considered a clustering procedure in feature space. k-means and its adaptive version, the competitive learning approach, are powerful tools for data clustering, but both suffer from several drawbacks, such as the dead-unit problem and the need to pre-specify the number of clusters. In this paper, we explore the use of a competitive and cooperative learning (CCL) approach to perform color image segmentation. In this approach, seed points not only compete with each other; the winner also dynamically selects several of its nearest competitors to form a cooperative team that adapts to the input together. As a result, the method can automatically select the correct number of clusters and avoid the dead-unit problem. Experimental results show that CCL obtains better segmentation results.
Keywords: Color image segmentation, competitive learning, cluster, k-means algorithm, competitive and cooperative learning.
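A simplified sketch of the competitive-and-cooperative update described above: the winning seed recruits its nearest competitors and they adapt to the input together. The learning rate, team size and seed count are illustrative assumptions, and the paper's automatic selection of the cluster number is not reproduced.

```python
import numpy as np

def ccl_segment(pixels, n_seeds=6, team=2, lr=0.05, iters=2000, seed=0):
    """Competitive learning where the winner recruits its `team` nearest seeds to adapt together."""
    rng = np.random.default_rng(seed)
    seeds = pixels[rng.choice(len(pixels), n_seeds, replace=False)].astype(float)
    for _ in range(iters):
        x = pixels[rng.integers(len(pixels))].astype(float)
        d = np.linalg.norm(seeds - x, axis=1)
        winner = d.argmin()
        helpers = np.argsort(np.linalg.norm(seeds - seeds[winner], axis=1))[1:team + 1]
        seeds[winner] += lr * (x - seeds[winner])            # winner adapts most
        seeds[helpers] += 0.5 * lr * (x - seeds[helpers])    # cooperative team adapts less strongly
    labels = np.linalg.norm(pixels[:, None, :] - seeds[None, :, :], axis=2).argmin(axis=1)
    return labels, seeds

# Toy usage: pixels drawn from two color clusters; extra seeds migrate rather than die.
rng = np.random.default_rng(3)
pix = np.vstack([rng.normal((200, 40, 40), 8, (300, 3)), rng.normal((40, 60, 200), 8, (300, 3))])
labels, seeds = ccl_segment(pix)
print(len(np.unique(labels)), seeds.round(0))
```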
1418 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework
Authors: T. P. Athira, Gibin Chacko George
Abstract:
This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same optimal estimation framework. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square estimation (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates along different directions and then fuse those directional estimates into a more accurate denoised LR image. The estimation parameters calculated during denoising can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
1417 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research on information hiding techniques. Traditionally, encryption is used to secure communication; however, important information is no longer protected once decoded. Steganography is the art and science of communicating in a way that hides the existence of the communication: important information is first hidden in host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features, combining cryptography based on a finite-state sequential machine with image-based steganography, is proposed for communicating information more securely between two locations. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cuts. Each part of the encoded secret information is embedded with the help of a novel image steganographic method, the Pixel Mapping Method (PMM), in different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and is transmitted to the receiver. At the receiving end, the opposite processes are run to recover the original secret message.
Keywords: Cover image, finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), stego image, NCUT.
1416 An Adaptive Mammographic Image Enhancement in Orthogonal Polynomials Domain
Authors: R. Krishnamoorthy, N. Amudhavalli, M.K. Sivakkolunthu
Abstract:
X-ray mammography is the most effective method for the early detection of breast diseases. However, the typical diagnostic signs, such as microcalcifications and masses, are difficult to detect because mammograms are low-contrast and noisy. In this paper, a new algorithm for image denoising and enhancement in the Orthogonal Polynomials Transformation (OPT) domain is proposed for radiologists to screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results to those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.
Keywords: Mammograms, image enhancement, orthogonal polynomials, contrast improvement.
1415 Using Self Organizing Feature Maps for Classification in RGB Images
Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami
Abstract:
Artificial neural networks have gained a lot of interest as empirical models because of their powerful representational capacity and multi-input, multi-output mapping characteristics. In fact, most feedforward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system operates on RGB images. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
Keywords: Classification, SOFM, neural network, RGB images.
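A minimal self-organizing feature map sketch trained on RGB vectors, illustrating the competitive, neighborhood-based update mentioned above. The grid size, learning schedule and toy data are assumptions; the paper's supervised labelling of map nodes is not reproduced.

```python
import numpy as np

def train_sofm(pixels, grid=(6, 6), iters=3000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D SOFM on RGB vectors; returns the weight grid of shape (gh, gw, 3)."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], 3)) * 255.0
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))].astype(float)
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5     # decaying rate and neighborhood
        d = np.linalg.norm(w - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)              # best-matching unit (competition)
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)                            # local-neighborhood update
    return w

# Toy usage: pixels from red-ish and blue-ish classes; nearby map nodes specialize per class.
rng = np.random.default_rng(4)
pix = np.vstack([rng.normal((220, 30, 30), 10, (400, 3)), rng.normal((30, 30, 220), 10, (400, 3))])
w = train_sofm(pix)
print(w.reshape(-1, 3).round(0)[:4])
```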
1414 Performance Improvement in the Bivariate Models by using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications
Authors: R. Senthilkumar
Abstract:
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, the wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting one of the popular image denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and it compares different bivariate models used for image denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with a modified marginal variance based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512x512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise at levels of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB is added to the six standard images, and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
Keywords: BiShrink, image denoising, PSNR, shrinkage function.
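A tiny sketch of the soft-thresholding ("shrinkage") step common to the VisuShrink/SureShrink family compared above; the threshold below is the universal (VisuShrink-style) choice, and other rules in the comparison only change how T is picked. The coefficients and noise level are toy assumptions.

```python
import numpy as np

def soft_threshold(coeffs, T):
    """Shrink wavelet coefficients toward zero by T (sign-preserving)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)

# Toy usage: noisy detail coefficients; T = sigma * sqrt(2 * ln(N)) (universal threshold).
rng = np.random.default_rng(5)
sigma = 10.0
coeffs = np.array([120.0, -80.0, 3.0, -2.0, 0.5]) + rng.normal(0, sigma, 5)
T = sigma * np.sqrt(2.0 * np.log(coeffs.size))
print(soft_threshold(coeffs, T).round(1))    # large (signal) coefficients survive, small ones go to 0
```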
1413 A Genetic Algorithm for Clustering on Image Data
Authors: Qin Ding, Jim Gasvoda
Abstract:
Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique normally has a long running time with respect to input set size. This paper proposes an efficient genetic algorithm for clustering on very large data sets, especially image data sets. The genetic algorithm uses the most time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of which are of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
Keywords: Clustering, data mining, genetic algorithm, image data.
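A compact sketch of a genetic algorithm for clustering in the spirit described above: each chromosome is a set of k centroids and the fitness is the negative within-cluster squared error. The encoding, rates and population size are illustrative assumptions, not the paper's tuned, preprocessed algorithm.

```python
import numpy as np

def sse(data, centroids):
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def ga_cluster(data, k=3, pop_size=20, gens=40, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = [data[rng.choice(len(data), k, replace=False)].astype(float) for _ in range(pop_size)]
    for _ in range(gens):
        fit = np.array([-sse(data, c) for c in pop])
        order = np.argsort(fit)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, k)
            child = np.vstack([parents[a][:cut], parents[b][cut:]])  # one-point crossover on centroids
            child += rng.normal(0, mut * data.std(), child.shape) * (rng.random(child.shape) < mut)
            children.append(child)                                   # sparse Gaussian mutation
        pop = parents + children
    return min(pop, key=lambda c: sse(data, c))

# Toy usage on three 2-D blobs.
rng = np.random.default_rng(6)
data = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in [(0, 0), (4, 0), (2, 4)]])
print(ga_cluster(data, k=3).round(2))
```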
1412 Improved Posterized Color Images based on Color Quantization and Contrast Enhancement
Authors: Oh-Yeol Kwon, Sung-Il Chien
Abstract:
A conventional image posterization method occasionally fails to preserve the shape and color of objects due to ineffective color reduction. This paper proposes a new image posterization method that uses modified color quantization to preserve the shape and color of objects, and color contrast enhancement to improve lightness contrast and saturation. Experimental results show that the proposed method provides visually more satisfactory posterization results than the conventional method.
Keywords: Color contrast enhancement, color quantization, color segmentation, image posterization.
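A hedged sketch of a posterization pipeline in the spirit described above; the "modified" quantization and the exact contrast-enhancement step of the paper are replaced by plain k-means color quantization plus a crude saturation gain, and all parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def posterize(img_rgb, n_colors=8, sat_gain=1.2):
    h, w, _ = img_rgb.shape
    pixels = img_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    quant = km.cluster_centers_[km.labels_]                 # each pixel -> its cluster color
    mean = quant.mean(axis=1, keepdims=True)
    boosted = mean + sat_gain * (quant - mean)              # crude saturation/contrast enhancement
    return np.clip(boosted, 0, 255).astype(np.uint8).reshape(h, w, 3)

# Toy usage: a random image reduced to at most 8 flat colors.
rng = np.random.default_rng(7)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
out = posterize(img)
print(len(np.unique(out.reshape(-1, 3), axis=0)))           # at most 8 distinct colors
```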
1411 Weld Defect Detection in Industrial Radiography Based Digital Image Processing
Authors: N. Nacereddine, M. Zelmat, S. S. Belaïfa, M. Tridi
Abstract:
Industrial radiography is a well-known technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and operating time, it does suffer from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, avoiding the presence of human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and in order for the detected defect region to represent the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.
Keywords: Digital image processing, global and local approaches, radiographic film, weld defect.
1410 Image Compression Using Multiwavelet and Multi-Stage Vector Quantization
Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, P. Navaneethan
Abstract:
Existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties, such as orthogonality, short support, linear phase symmetry, and a high order of approximation through vanishing moments, that are essential for signal processing. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multi-stage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
Keywords: Image compression, multiwavelets, multi-stage vector quantization.
1409 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security becomes an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding the watermark certainly influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the embedded watermark even though the watermarked image differs slightly from the original image. Because VQ imposes a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level or color images, and text can also be used as a watermark. To test the robustness of the system, we use Photoshop to sharpen, crop and alter the watermarked image and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
1408 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: A. Umemuro, M. Sato, M. Narita, S. Hori, S. Sakurai, T. Nakayama, A. Nakazawa, T. Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers.
1407 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm
Authors: Ali Ridho Barakbah, Yasushi Kiyoki
Abstract:
This paper presents a new approach to image segmentation based on the Pillar-Kmeans algorithm. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after optimization by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far from each other as possible to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. This algorithm is able to optimize K-means clustering for image segmentation in terms of precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previous centroids, and then selects the data point with the maximum distance as the new initial centroid, so that all initial centroids are distributed according to the maximum accumulated distance metric. This paper evaluates the proposed approach for image segmentation by comparing it with the K-means and Gaussian Mixture Model algorithms, using the RGB, HSV, HSL and CIELAB color spaces. The experimental results demonstrate the effectiveness of our approach in improving segmentation quality in terms of precision and computation time.
Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.
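A sketch of the Pillar-style initialization described above (accumulated-distance rule only; the paper's outlier handling and other refinements are omitted), followed by an ordinary k-means refinement. Data and parameters are toy assumptions.

```python
import numpy as np

def pillar_init(data, k, seed=0):
    """Pick initial centroids with maximum accumulated distance to all previously chosen centroids."""
    rng = np.random.default_rng(seed)
    centroids = [data[rng.integers(len(data))]]
    acc = np.zeros(len(data))
    for _ in range(k - 1):
        acc += np.linalg.norm(data - centroids[-1], axis=1)   # accumulate distance to the newest centroid
        centroids.append(data[acc.argmax()])
    return np.array(centroids, dtype=float)

def kmeans(data, centroids, iters=20):
    for _ in range(iters):
        labels = np.linalg.norm(data[:, None] - centroids[None], axis=2).argmin(axis=1)
        centroids = np.array([data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                              for j in range(len(centroids))])
    return labels, centroids

# Toy usage: three color blobs; the "pillars" land far apart, so k-means converges quickly.
rng = np.random.default_rng(8)
pix = np.vstack([rng.normal(m, 5, (200, 3)) for m in [(200, 30, 30), (30, 200, 30), (30, 30, 200)]])
labels, cents = kmeans(pix, pillar_init(pix, k=3))
print(cents.round(0))
```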
1406 Studying Iranian Religious Minority Architecture: Differences and Commonalities in Religious and National Architecture after Safavid
Authors: Saeideh Soltanmohammadlou, Pilar M Guerrieri, Amir Kianfar, Sara Sadeghian, Yasaman Nafezi, Emily Irvin
Abstract:
Architecture is rooted in the experiences of the residents of a place. Its foundations are based on the needs and circumstances of each territory in terms of climate, available materials, economics and governmental policies, and the cultural ideals and ideas of the people who live there. The architectural history of Iran echoes these origins and has revealed certain trends reflecting this territory and culture. However, in recent years, new architectural patterns are developing that diverge from what has previously been considered classic Iranian architecture. This article investigates the architectural elements that make up the architecture created by religious minorities after the Safavid dynasty, one of the most significant ruling dynasties of Iran (from 1501 to 1736), in the Iranian cities of Isfahan, Tabriz, Kerman, and Urmia. Similarities and differences are revealed between the architecture of the neighborhoods of religious minorities in Iran and common national architectural trends in each era after this dynasty. The Safavid dynasty serves as the point of reference because Islam was identified as the state religion of Iran during this era, a decision that changed the course of architecture in the country to incorporate religious motifs and meanings. The study associated with this article was conducted as a survey that sought to find links between the architecture of religious minorities and Iranian national architecture. Interestingly, a merging of architectural forms and trends occurs as immigrants interact with Iranian Islamic meanings. These observations are significant within the context of modern architecture around the world and within Western discourse, because what are considered religious minorities in Iran are the dominant religions in Western nations. This makes Iran's architecture particularly unique, as it creates a kind of inverse relationship, compared with Western nations, between religion and architectural history.
Keywords: Architecture, ethnic architecture, national architecture, religious architecture.
1405 Hit-or-Miss Transform as a Tool for Similar Shape Detection
Authors: Osama Mohamed Elrajubi, Idris El-Feghi, Mohamed Abu Baker Saghayer
Abstract:
This paper describes the identification of specific shapes within binary images using the morphological Hit-or-Miss Transform (HMT). The Hit-or-Miss transform is a general binary morphological operation that can be used to search for particular patterns of foreground and background pixels in an image. It is in fact a basic operation of binary morphology, since almost all other binary morphological operators are derived from it. The input of this method is a binary image and a structuring element (a template to be searched for in the binary image), while the output is another binary image. In this paper, a modification of the Hit-or-Miss transform is proposed, in which the accuracy of the algorithm is adjusted according to the similarity between the template and the candidate shape. The method has been implemented in the C language. The algorithm has been tested on several images, and the results show that this new method can be used for similar shape detection.
Keywords: Hit-or/and-Miss Operator/Transform, HMT, binary morphological operation, shape detection, binary image processing.
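An illustrative hit-or-miss example using SciPy for the exact-match case only; the paper's similarity-tolerant modification and its C implementation are not reproduced, and the image and structuring elements below are toy assumptions.

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

# Binary image containing two isolated 2x2 squares and one thin vertical bar.
img = np.zeros((12, 12), dtype=bool)
img[1:3, 1:3] = True                      # 2x2 square (should match)
img[6:8, 7:9] = True                      # another 2x2 square (should match)
img[8:11, 1:2] = True                     # 3x1 bar (should not match)

# Structuring elements: hit a 2x2 block of foreground, miss the one-pixel ring around it.
structure1 = np.zeros((4, 4), dtype=bool)
structure1[1:3, 1:3] = True
structure2 = ~structure1

hits = binary_hit_or_miss(img, structure1=structure1, structure2=structure2)
print(np.argwhere(hits))                  # one hit location per detected square
```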
1404 Error Effects on SAR Image Resolution using Range Doppler Imaging Algorithm
Authors: Su Su Yi Mon, Fang Jiancheng
Abstract:
Synthetic Aperture Radar (SAR) is a form of imaging radar that takes full advantage of the relative movement of the antenna with respect to the target. Through the simultaneous processing of the radar reflections over the movement of the antenna via the Range Doppler Algorithm (RDA), the superior resolution of a theoretically wider antenna, termed the synthetic aperture, is obtained. Therefore, SAR can achieve high-resolution two-dimensional imagery of the ground surface. In addition, two filtering steps in the range and azimuth directions provide sufficiently accurate results. This paper develops a simulation in which realistic SAR images can be generated, and the effect of velocity errors on the resulting image is investigated. Simulation results on image resolution are presented for several velocity errors. In most cases, algorithms need to be adjusted for particular datasets or particular applications.
Keywords: Synthetic Aperture Radar (SAR), Range Doppler Algorithm (RDA), Image Resolution.