Search results for: Colour image segmentation
1265 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with specific focus on the fusion of Infrared (IR) and Visible images (VI) for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on the Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. The research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the validity of the suggested method. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps yield accurate fused results, they also make the algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity.
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
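As an illustration of the MST fusion pipeline described above, the sketch below fuses a registered visible/IR pair with a wavelet decomposition and a simple maximum-absolute-coefficient rule. It is a minimal sketch, assuming the PyWavelets (pywt) package and grayscale inputs; the paper's region-based rule with consistency verification and its MATLAB implementation are not reproduced.
```python
# Hedged sketch: MST fusion of a visible and an IR image with a wavelet
# decomposition and a max-absolute-coefficient rule (a simplification of the
# paper's region-based rule with consistency verification).
import numpy as np
import pywt

def fuse_mst(visible, infrared, wavelet="db2", levels=3):
    """Fuse two registered grayscale images of equal size; returns the fused array."""
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
    fused = [(cv[0] + ci[0]) / 2.0]            # average the approximation band
    for dv, di in zip(cv[1:], ci[1:]):         # detail bands: keep larger |coefficient|
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)

# Example with random stand-ins for the two registered source images:
vis = np.random.rand(256, 256)
ir = np.random.rand(256, 256)
out = fuse_mst(vis, ir)
```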
1264 Some Physico-Chemical and Nutritional Properties of 'Musmula' Medlar (Mespilus germanica L.) Grown in Northeast Anatolia
Authors: Ismail Hakki Kalyoncu, Nilda Ersoy, Ayse Yalcin Elidemir, Inci Tolay
Abstract:
In this study, the physico-chemical and nutritional properties of the fruit and seed of 'Musmula' medlar (Mespilus germanica L.) grown in Northeast Anatolia were investigated. In the fruit, length, width, thickness, weight, total soluble solids, colour (1), colour (2) [L, a, b values], protein, crude ash, crude fiber, crude oil, texture and pH were determined as 4.34 cm, 4.22 cm, 3.67 cm, 38.36 g, 23.97 %, S60O60Y41, [53.85, 17.15, 33.75], 1.06 %, 0.79 %, 4.24 %, 0.005 %, 1.21 kg/cm2 and 4.26, respectively. Also, the pulp ratio, seed ratio and pulp/seed ratio were found to be 92.88 %, 7.11 % and 14.07 %, respectively. In addition, the mineral composition of medlar fruit grown in Northeast Anatolia was studied. In the fruit, 23 minerals were analyzed and 19 were present at detectable levels. The medlar fruit was richest in potassium (6962 ppm), calcium (1186.378 ppm), magnesium (1070.08 ppm) and phosphorus (763.425 ppm).
Keywords: Fruits, Mespilus germanica L., mineral compounds, physico-chemical properties.
1263 HSV Image Watermarking Scheme Based on Visual Cryptography
Authors: Rawan I. Zaghloul, Enas F. Al-Rawashdeh
Abstract:
In this paper a simple watermarking method for color images is proposed. The proposed method is based on watermark embedding for the histograms of the HSV planes using visual cryptography watermarking. The method has been proved to be robust for various image processing operations such as filtering, compression, additive noise, and various geometrical attacks such as rotation, scaling, cropping, flipping, and shearing.
Keywords: Histogram, HSV image, Visual Cryptography, Watermark.
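Schemes of this kind rest on the basic visual-cryptography share construction. The sketch below shows the generic (2,2) share expansion for a binary watermark; it is an assumed illustration of the underlying primitive only, not the authors' full HSV-histogram embedding.
```python
# Hedged sketch: the (2,2) visual-cryptography primitive. Each binary watermark
# pixel expands into a 2x2 block in two shares; stacking (OR-ing) the shares
# reveals the watermark.
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = black

def make_shares(watermark, rng=None):
    """watermark: 2-D array of 0/1 (1 = black). Returns two share images."""
    rng = rng or np.random.default_rng()
    h, w = watermark.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # white pixel: identical blocks; black pixel: complementary blocks
            s2[2*i:2*i+2, 2*j:2*j+2] = p if watermark[i, j] == 0 else 1 - p
    return s1, s2

wm = (np.random.rand(16, 16) > 0.5).astype(np.uint8)
share1, share2 = make_shares(wm)
revealed = np.maximum(share1, share2)   # stacking the two shares
```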
1262 Automated 3D Segmentation System for Detecting Tumor and Its Heterogeneity in Patients with High Grade Ovarian Epithelial Cancer
Authors: D. A. Binas, M. Konidari, C. Bourgioti, L. Angela Moulopoulou, T. L. Economopoulos, G. K. Matsopoulos
Abstract:
High grade ovarian epithelial cancer (OEC) is the most fatal gynecological cancer and poor prognosis of this entity is closely related to considerable intratumoral genetic heterogeneity. By examining imaging data, it is possible to assess the heterogeneity of tumorous tissue. This study presents a methodology for aligning, segmenting and finally visualizing information from various magnetic resonance imaging series, in order to construct 3D models of heterogeneity maps from the same tumor in OEC patients. The proposed system may be used as an adjunct digital tool by health professionals for personalized medicine, as it allows for an easy visual assessment of the heterogeneity of the examined tumor.
Keywords: K-means segmentation, ovarian epithelial cancer, quantitative characteristics, registration, tumor visualization.
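A minimal sketch of the K-means clustering step named in the keywords, applied to the voxel intensities of a registered MR volume (assuming scikit-learn); the registration and 3D heterogeneity-map visualization stages of the pipeline are not shown.
```python
# Hedged sketch: intensity-based K-means labelling of an MR volume.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(volume, n_clusters=4):
    """volume: 3-D array of voxel intensities; returns a label volume."""
    voxels = volume.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(voxels)
    return labels.reshape(volume.shape)

vol = np.random.rand(32, 64, 64)        # stand-in for a registered MR series
segmentation = kmeans_segment(vol)
```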
1261 Grouping and Indexing Color Features for Efficient Image Retrieval
Authors: M. V. Sudhamani, C. R. Venugopal
Abstract:
Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
Keywords: Content-based, indexing, cluster, region.
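The sketch below illustrates the first two stages described above: extracting representative region colors with mean shift and ranking database colors by Euclidean distance to a query color. It assumes scikit-learn; the R*-tree spatial index is replaced by a brute-force search for brevity.
```python
# Hedged sketch: mean-shift region colors as image representatives plus a
# simple Euclidean nearest-color search (stand-in for the R*-tree index).
import numpy as np
from sklearn.cluster import MeanShift

def representative_colors(image_rgb, bandwidth=20.0):
    """image_rgb: H x W x 3 array; returns the cluster-mode colors (K x 3)."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)
    return ms.cluster_centers_

def query_by_color(query_color, db_colors):
    """Return indices of database colors sorted by Euclidean distance."""
    d = np.linalg.norm(db_colors - np.asarray(query_color, float), axis=1)
    return np.argsort(d)

img = np.random.randint(0, 256, (64, 64, 3))
modes = representative_colors(img)
ranking = query_by_color([120, 80, 200], modes)
```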
1260 Pulsed Multi-Layered Image Filtering: A VLSI Implementation
Authors: Christian Mayr, Holger Eisenreich, Stephan Henker, René Schüffny
Abstract:
Image convolution similar to the receptive fields found in mammalian visual pathways has long been used in conventional image processing in the form of Gabor masks. However, no VLSI implementation of parallel, multi-layered pulsed processing has been brought forward which would emulate this property. We present a technical realization of such a pulsed image processing scheme. The discussed IC also serves as a general testbed for VLSI-based pulsed information processing, which is of interest especially with regard to the robustness of representing an analog signal in the phase or duration of a pulsed, quasi-digital signal, as well as the possibility of direct digital manipulation of such an analog signal. The network connectivity and processing properties are reconfigurable so as to allow adaptation to various processing tasks.
Keywords: Neural image processing, pulse computation application, pulsed Gabor convolution, VLSI pulse routing.
1259 A Sub Pixel Resolution Method
Authors: S. Khademi, A. Darudi, Z. Abbasi
Abstract:
One of the main limitations on the resolution of optical instruments is the size of the sensor's pixels. In this paper we introduce a new sub-pixel resolution algorithm to enhance the resolution of images. The method is based on the analysis of multiple images that are rapidly recorded during fine relative motion between the image and the pixel array of the CCD. It is shown that, by applying this method to a sample noise-free image, the resolution can be enhanced with an error on the order of 10^-14.
Keywords: Sub Pixel Resolution, Moving Pixels, CCD, Image, Optical Instrument.
1258 A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms
Authors: Parisut Jitpakdee, Pakinee Aimmanee, Bunyarit Uyyanonvara
Abstract:
Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization: all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as the local-optima convergence problem of K-means and the early convergence of the firefly algorithm. Experiments on three commonly used images show that the proposed algorithm surpasses both the baseline K-means clustering and the original firefly algorithm.
Keywords: Clustering, Color quantization, Firefly algorithm, K-means.
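For reference, the sketch below shows the plain K-means half of such a hybrid quantizer (assuming scikit-learn): pixels are clustered in RGB space and each pixel is mapped to its cluster center. In the paper the firefly algorithm supplies or refines these centers to escape poor local optima; that step is not reproduced here.
```python
# Hedged sketch: K-means color quantization only (no firefly refinement).
import numpy as np
from sklearn.cluster import KMeans

def quantize(image_rgb, n_colors=16):
    """Map every pixel to the nearest of n_colors cluster centers."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=4).fit(pixels)
    palette = km.cluster_centers_
    return palette[km.labels_].reshape(image_rgb.shape).astype(np.uint8)

img = np.random.randint(0, 256, (128, 128, 3))
quantized = quantize(img, n_colors=8)
```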
1257 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that estimates a user's position from a single camera using a pre-built image database. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they suffer from large error ranges due to signal interference. A camera sensor offers a solution, but a single camera has difficulty obtaining relative position data directly, and a stereo camera has difficulty providing real-time position data because of the large amount of image data involved. First, we build an image database, using a single camera, of the space in which the positioning service is to be provided. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, the user's position is determined from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method has a wide positioning range and can determine not only the user's position but also the direction.
Keywords: Positioning, Distance, Camera, Features, SURF (Speeded-Up Robust Features), Database, Estimation.
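A minimal sketch of the image-matching step at the heart of this retrieval-based positioning, using OpenCV. ORB features stand in for the SURF features listed in the keywords, since SURF is not available in default OpenCV builds; database construction and the position lookup are only indicated in a comment.
```python
# Hedged sketch: score a query photo against one database image by counting
# ratio-test feature matches (ORB substituted for SURF).
import cv2

def match_score(query_gray, db_gray, ratio=0.75):
    """Return the number of ratio-test matches between two grayscale images."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, dq = orb.detectAndCompute(query_gray, None)
    _, dd = orb.detectAndCompute(db_gray, None)
    if dq is None or dd is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(dq, dd, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good)

# The user's position is then taken from the database image with the best score:
# best = max(database, key=lambda entry: match_score(query, entry.image))
```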
1256 Performance Evaluation of Compression Algorithms for Developing and Testing Industrial Imaging Systems
Authors: Daniel F. Garcia, Julio Molleda, Francisco Gonzalez, Ruben Usamentiaga
Abstract:
The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, due to the size or the temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only after a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection be accomplished under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for the acquisition and processing of the images. This article presents a systematic performance evaluation of the image compression algorithms currently available to implement a real-time recording service. The results allow a trade-off to be established between the reduction of the image size and the CPU time required to reach that compression level.
Keywords: Lossless image compression, codec performance evaluation, grayscale codec comparison, real-time image recording.
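The sketch below shows the kind of size-versus-CPU-time measurement such an evaluation relies on, encoding one grayscale frame with Pillow. The PNG encoder at several compression levels is used purely as a stand-in; the paper benchmarks its own set of lossless codecs on real industrial imagery.
```python
# Hedged sketch: compressed size vs. encode time for one grayscale frame.
import io, time
import numpy as np
from PIL import Image

def benchmark_png(frame, compress_level):
    """Return (compressed bytes, encode seconds) for one grayscale frame."""
    buf = io.BytesIO()
    t0 = time.perf_counter()
    Image.fromarray(frame).save(buf, format="PNG", compress_level=compress_level)
    return buf.tell(), time.perf_counter() - t0

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in frame
for level in (1, 6, 9):
    size, secs = benchmark_png(frame, level)
    print(f"compress_level={level}: {size} bytes in {secs * 1000:.1f} ms")
```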
1255 Face Recognition Using Double Dimension Reduction
Authors: M. A Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from the low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity and illumination.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
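A minimal sketch of the two reduction steps described above: simple decimation of the face image followed by a 2-D DCT, from which only a low/mid-frequency block of coefficients is kept as the feature vector. It assumes SciPy; the decimation factor and coefficient count are placeholder values, not the tuned ones from the paper.
```python
# Hedged sketch: decimation + 2-D DCT feature extraction for face recognition.
import numpy as np
from scipy.fft import dctn

def face_features(face, decimation=2, keep=12):
    """face: 2-D grayscale array; returns a (keep*keep,) feature vector."""
    small = face[::decimation, ::decimation].astype(float)   # simple decimation
    coeffs = dctn(small, norm="ortho")                       # 2-D DCT
    return coeffs[:keep, :keep].ravel()                      # low/mid-frequency block

face = np.random.rand(112, 92)                               # ORL-sized stand-in
vec = face_features(face)
```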
1254 Analysis of Patterns in TV Commercials that Recognize NGO Image
Authors: J. Areerut, F. Samuel
Abstract:
The purpose of this research is to analyze the patterns of television commercials and how they help non-governmental organizations build their image in Thailand. The study recognizes how public relations can affect an organization's image: poor public relations management can hurt a reputation, while even a small amount of public relations work can help an organization become broadly recognized and, eventually, even more widely accepted. The main idea of this paper is to study and analyze which patterns of television commercials have the greatest impact on non-governmental organizations' images. The research uses questionnaires and content analysis to summarize results. The findings show which patterns of television commercials are suited to non-governmental organization work in Thailand. They will be useful for any non-governmental organization that wishes to build its image through television commercials, and for further work based on this research.
Keywords: Television Commercial (TVC), Organization Image, Non-Governmental Organization: NGO, Public Relation.
1253 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously; in this paper we compare the results when only a single chroma component is downsampled. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled, whereas the peak signal-to-noise ratio is higher when chrominance red is downsampled. In particular, we use hats.jpg to demonstrate JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference under either method.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio, Compression Ratio.
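The sketch below isolates the comparison described above: convert RGB to YCbCr, downsample one chroma plane, and measure the PSNR of that plane against the original. It is a minimal sketch of the chroma-subsampling step only; the DCT, quantization and entropy-coding stages of a full JPEG pipeline are omitted.
```python
# Hedged sketch: downsample Cb or Cr and measure the per-plane PSNR.
import numpy as np

def rgb_to_ycbcr(rgb):
    rgb = rgb.astype(float)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def subsample(plane):
    """Keep every second sample, then replicate back to full size."""
    small = plane[::2, ::2]
    return np.kron(small, np.ones((2, 2)))[:plane.shape[0], :plane.shape[1]]

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = np.random.randint(0, 256, (256, 256, 3))
y, cb, cr = rgb_to_ycbcr(img)
print("downsample Cb, PSNR of Cb plane:", psnr(cb, subsample(cb)))
print("downsample Cr, PSNR of Cr plane:", psnr(cr, subsample(cr)))
```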
1252 National Image in the Age of Mass Self-Communication: An Analysis of Internet Users' Perception of Portugal
Authors: L. Godinho, N. Teixeira
Abstract:
Nowadays, the massification of Internet access represents one of the major challenges to the traditional powers of the State, among them the power to control its external image. The virtual world has also sparked the interest of the social sciences, which consider it a new field of study, an immense open text where sense is expressed. In this paper, that immense text has been accessed in order to understand the perception that Internet users from all over the world have of Portugal. Ours is a quantitative and qualitative approach, resorting to buzz, thematic and category analysis. The results confirm the predominance of the sea stereotype in others' vision of the Portuguese people, and show that the national image has adapted to network communication through processes of individuation and paganization.
Keywords: Internet, national image, perception, web analytics.
1251 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts the original image into a blurred image using an average filter and equalizes the histogram of that image (lighting normalization). A bi-cubic interpolation function is applied to the equalized image to resize it. The resized image is a low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects, respectively, with a time delay of 0.0934 s per image.
Keywords: Average filtering, Bicubic Interpolation, Neurons, Vectorization.
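A minimal sketch of the preprocessing chain described above, using OpenCV: average filtering, histogram equalization and bicubic resizing to a small working resolution. The target size is a placeholder; the single three-layer network that consumes the resulting vector is not shown.
```python
# Hedged sketch: average filter -> histogram equalization -> bicubic resize.
import cv2
import numpy as np

def preprocess(face_gray, size=(32, 32)):
    blurred = cv2.blur(face_gray, (3, 3))                 # average filter
    equalized = cv2.equalizeHist(blurred)                 # lighting normalization
    small = cv2.resize(equalized, size, interpolation=cv2.INTER_CUBIC)
    return small.astype(np.float32).ravel() / 255.0       # input vector for the net

face = np.random.randint(0, 256, (112, 92), dtype=np.uint8)   # ORL-sized stand-in
x = preprocess(face)
```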
1250 Image Transmission via Iterative Cellular-Turbo System
Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan
Abstract:
To compress, improve the bit error performance of, and also enhance 2D images, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel's neighborhood. The scheme gives highly satisfactory results in both Bit Error Rate (BER) and image enhancement for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS: less memory storage is used and the data rate is increased by up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
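A minimal sketch of the bit-plane split and re-assembly that IC-TS operates on, in NumPy. The Turbo encoding, AWGN channel and iterative ICIPA/decoder loop are omitted; keeping only the most significant planes illustrates the compression-by-bit-slices idea mentioned above.
```python
# Hedged sketch: bit-plane decomposition and re-assembly of an 8-bit image.
import numpy as np

def to_bit_planes(image, n_bits=8):
    """image: uint8 array; returns a list of n_bits binary planes (MSB first)."""
    return [(image >> (n_bits - 1 - b)) & 1 for b in range(n_bits)]

def from_bit_planes(planes):
    """Re-assemble an image; dropping trailing planes trades resolution for rate."""
    n_bits = len(planes)
    img = np.zeros_like(planes[0], dtype=np.uint16)
    for b, p in enumerate(planes):
        img |= p.astype(np.uint16) << (n_bits - 1 - b)
    return (img << (8 - n_bits)).astype(np.uint8)   # rescale when planes are dropped

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
planes = to_bit_planes(img)
compressed = from_bit_planes(planes[:4])            # keep only the 4 MSB planes
```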
1249 Extraction of Semantic Digital Signatures from MRI Photos for Image-Identification Purposes
Authors: Marios Poulos, George Bokos
Abstract:
This paper attempts to solve the problem of searching for and retrieving similar MRI photos via Internet services using morphological features derived from the original image, and is intended as an additional tool alongside existing search and retrieval methods. Until now, the main searching mechanism has been syntactic, based on keywords. The proposed technique aims to serve the new requirements of libraries, one of which is the development of computational tools for the control and preservation of the intellectual property of digital objects, and especially of digital images. For this purpose, this paper proposes the use of a serial number extracted by a previously tested semantic-properties method. This method, centered on the multi-layers of a set of arithmetic points, assures two properties: the uniqueness of the final extracted number, and the semantic dependence of this number on the image used as the method's input. The major advantage of this method is that it can control the authentication of a published image, or its partial modification, to a reliable degree. It also improves on the known hash functions used by digital signature schemes, producing alphanumeric strings for authentication checking and for measuring the degree of similarity between an unknown image and an original image.
Keywords: Computational Geometry, MRI photos, Image processing, Pattern Recognition.
1248 Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection
Authors: Khalid A. Kaabneh
Abstract:
This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries on image data. This research describes a new indexing approach that depends on linear modeling of signals using bases. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation, which was conducted to optimize (1) the choice of the basis matrix (B), and (2) the size of the index A (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
Keywords: Axial Projection, images, indexing, multimedia database, searching.
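A minimal sketch of the least-squares modeling described above: an image is approximated as a linear combination of N chosen basis images, and the coefficient vector serves as its index. The basis choice and the index size N are exactly the parameters the evaluation above optimizes; random arrays stand in for real images here.
```python
# Hedged sketch: least-squares coefficients of an image w.r.t. N basis images.
import numpy as np

def build_index(image, basis_images):
    """Return the least-squares coefficients of image against the basis images."""
    B = np.stack([b.ravel().astype(float) for b in basis_images], axis=1)  # pixels x N
    coeffs, *_ = np.linalg.lstsq(B, image.ravel().astype(float), rcond=None)
    return coeffs

basis = [np.random.rand(64, 64) for _ in range(8)]     # stand-in basis matrix B
query = np.random.rand(64, 64)
index_vector = build_index(query, basis)               # length-N index for the image
```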
1247 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
Authors: Adnan A. Y. Mustafa
Abstract:
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
Keywords: Big images, binary images, similarity, matching.
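In the spirit of the estimate described above, the sketch below approximates the similarity of two equal-size binary images by comparing only a small random sample of pixel positions. It illustrates why only a minute percentage of pixels needs to be compared; the PMMBI mapping model itself is not reproduced.
```python
# Hedged sketch: similarity estimate from a small random sample of pixels.
import numpy as np

def sampled_similarity(img_a, img_b, sample_fraction=0.01, rng=None):
    """Fraction of agreeing pixels over a random sample of positions."""
    rng = rng or np.random.default_rng()
    n = img_a.size
    idx = rng.choice(n, size=max(1, int(sample_fraction * n)), replace=False)
    return np.mean(img_a.ravel()[idx] == img_b.ravel()[idx])

a = np.random.rand(1024, 1024) > 0.5
b = a.copy()
b[:100, :100] ^= True                                   # perturb a small region
print(sampled_similarity(a, b))                         # close to the true similarity
```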
1246 Evaluation of Sensory Attributes of Snack from Maize-Moringa Seed Flour Blends
Authors: O. Aluko, M. R. Brai, A. O. Adelore
Abstract:
A healthy snack (cookie) was produced from blends of corn flour and moringa seed flour. The samples were mixed in various proportions and analysed for proximate composition and functional characteristics. The healthy snack was evaluated for the sensory parameters of colour, crispness, taste, aroma and overall acceptability. The proximate analysis of the flours obtained from the different proportions showed that proximate composition increased with increasing substitution level of moringa seed flour, especially for protein, fat and crude fibre. The protein content of the samples ranged from 1.75 to 6.58, fat from 0.60 to 6.80, and fibre from 0.85 to 2.06. There was no significant difference in the functional properties of the blends when compared with 100% corn flour. Sensory evaluation results show a significant difference in colour, taste, crispness, aroma and overall acceptability of the healthy snack samples from the different blends at the 5% significance level.
Keywords: Healthy snack, moringa.
1245 The Mechanistic Deconvolutive Image Sensor Model for an Arbitrary Pan–Tilt Plane of View
Authors: S. H. Lim, T. Furukawa
Abstract:
This paper presents a generalized form of the mechanistic deconvolution technique (GMD) for modeling image sensors, applicable to arbitrary pan–tilt planes of view. The mechanistic deconvolution technique (UMD) is modified with the given angles of a pan–tilt plane of view to formulate constraint parameters and characterize distortion effects, and thereby determine the corrected image data. As a result, no experimental setup or calibration is required. Due to the mechanistic nature of the sensor model, the sensor image plane no longer needs to be orthogonal to its z-axis, and the dependency on image data is reduced. An experiment was constructed to evaluate the accuracy of a model created by GMD and its insensitivity to changes in sensor properties and in pan and tilt angles. This was compared with a pre-calibrated model and a model created by UMD, using two sensors with different specifications. GMD achieved similar accuracy with one-seventh the number of iterations, and attained a lower mean error by a factor of 2.4 compared to the pre-calibrated and UMD models, respectively. The model has also shown itself to be robust and, in comparison to the pre-calibrated and UMD models, improved the accuracy significantly.
Keywords: Image sensor modeling, mechanistic deconvolution, calibration, lens distortion.
1244 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code
Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang
Abstract:
This research studies the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag which contains a QR code and an augmented AR image card. The new visual tag can provide both the revealed data of the QR code and the instant data from the AR image card. A handheld communicating device is used to read and decode the new visual tag, and the concealed data of the tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the Internet; the data revealed from the QR code are represented as text. The AR image card is normally designed to store the corresponding data in 3-dimensional or animation/video form. By exploiting the QR code's high fault tolerance, the new visual tag can access these two different types of data with a handheld communicating device, and thus carries much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient area for the AR card augmented on the QR code is a coverage of 9% of the total area of the new visual tag, and 2) the best location for the augmented AR image card on the QR code is the bottom-right corner of the new visual tag.
Keywords: Augmented reality, QR code, Visual tag, Handheld communicating device.
1243 Multi-Focus Image Fusion Using SFM and Wavelet Packet
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the wavelet packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the wavelet packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions in each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet transform. The experimental results show that the proposed method outperforms traditional SFM-based methods in terms of both objective and subjective assessments.
Keywords: Multi-focus image fusion, Wavelet Packet, Spatial Frequency Measurement.
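A minimal sketch of the SFM-driven block selection described above, in NumPy: the block with the higher spatial frequency (the sharper one) is copied into the fused result. For brevity the selection is done directly in the spatial domain; the sixteen-subband wavelet-packet decomposition and reconstruction are omitted.
```python
# Hedged sketch: block-wise multi-focus fusion driven by spatial frequency.
import numpy as np

def spatial_frequency(block):
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_sfm(img_a, img_b, block=16):
    fused = img_a.astype(float).copy()
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i+block, j:j+block].astype(float)
            b = img_b[i:i+block, j:j+block].astype(float)
            if spatial_frequency(b) > spatial_frequency(a):
                fused[i:i+block, j:j+block] = b          # keep the sharper block
    return fused

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
out = fuse_sfm(a, b)
```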
1242 A Universal Model for Content-Based Image Retrieval
Authors: S. Nandagopalan, Dr. B. S. Adiga, N. Deepak
Abstract:
In this paper a novel approach to generalized image retrieval based on semantic content is presented, using a combination of three feature extraction methods: color, texture, and the edge histogram descriptor. There is provision to add new features in the future for better retrieval efficiency, and any combination of these methods that is most appropriate for the application can be used for retrieval; this is provided through the User Interface (UI) in the form of relevance feedback. The image properties analyzed in this work are computed using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence-matrix-based entropy, energy, and related measures are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested on the Corel image database containing 1000 natural images and achieved better results.
Keywords: Content Based Image Retrieval (CBIR), Co-occurrence matrix, Feature vector, Edge Histogram Descriptor (EHD), Greedy strategy.
1241 Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks
Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia
Abstract:
This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method uses a convolutional layer capable of automatically learning correlations between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than on its structural properties. The proposed layer is designed to suppress an image's content and robustly learn sensor pattern noise features (usually introduced by in-camera image processing) as well as the statistical properties of images. The method was assessed on recent natural and computer generated images and was found to perform better than current state-of-the-art methods.
Keywords: Image forensics, computer graphics, classification, deep learning, convolutional neural networks.
1240 Feature Preserving Nonlinear Diffusion for Ultrasonic Image Denoising and Edge Enhancement
Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang, Yu Li
Abstract:
Utilizing echo intensity and distribution from different organs and local details of the human body, ultrasonic images can capture important pathological changes, but they are unfortunately affected by speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by an optimum smoothing time. In this scheme, the anisotropic diffusion is governed by a local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is performed by a hyperbolic tangent function. Experiments on real ultrasonic images show that our scheme effectively preserves edges, local details and bright echoic strips while denoising.
Keywords: Anisotropic diffusion, coordinate transformation, directional derivatives, edge enhancement, hyperbolic tangent function, image denoising.
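For orientation, the sketch below implements a basic Perona–Malik anisotropic diffusion step in NumPy, the family of smoothing that the scheme above builds on; the coordinate-transformation-based diffusion and the hyperbolic-tangent edge enhancement term of the paper are not reproduced.
```python
# Hedged sketch: classic Perona-Malik anisotropic diffusion (not the paper's
# exact scheme). Boundaries wrap via np.roll, which is acceptable for a demo.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance (smaller across strong edges)
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

noisy = np.random.rand(128, 128) * 255
denoised = anisotropic_diffusion(noisy)
```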
1239 On-line Image Mosaicing of Live Stem Cells
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini
Abstract:
Image mosaicing is a technique that enlarges the field of view of a camera. It is employed, for instance, to produce panoramas with common cameras or, in scientific applications, to obtain the image of a whole culture in microscopy. Usually, a mosaic of cell cultures is achieved using automated microscopes; however, this is often performed in batch, through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, which shows a low contrast that cannot be improved further. We present a method to estimate the flat field from live stem cell images even at 100% confluence, which permits building accurate mosaics on-line using high-performance algorithms.
Keywords: Microscopy, image mosaicing, stem cells.
1238 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.
Abstract:
In this paper, an image adaptive, invisible digital watermarking algorithm with Orthogonal Polynomials based Transformation (OPT) is proposed, for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions like filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT domain watermarked images is better than that of its DCT counterpart.
Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.
1237 A Review on Medical Image Registration Techniques
Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry
Abstract:
This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was carried out, providing a platform for developing a good foundation for research in this field, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques could be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.
Keywords: Image registration techniques, medical images, neural networks, optimisation, transformation.
1236 Synthetic Transmit Aperture Method in Medical Ultrasonic Imaging
Authors: Ihor Trots, Andrzej Nowicki, Marcin Lewandowski
Abstract:
This work describes the use of a synthetic transmit aperture (STA), with a single element transmitting and all elements receiving, in medical ultrasound imaging. The STA technique is a novel alternative to today's commercial systems, in which an image is acquired sequentially one image line at a time, which puts a strict limit on the frame rate and on the amount of data needed for high image quality. STA imaging allows data to be acquired simultaneously from all directions over a number of emissions, after which the full image can be reconstructed. In the experiments, a 32-element linear transducer array with 0.48 mm inter-element spacing was used. A single-element transmit aperture was used to generate a spherical wave covering the full image region. 2D ultrasound images of a wire phantom obtained using STA and the commercial ultrasound scanner Antares are presented to demonstrate the benefits of SA imaging.
Keywords: Ultrasound imaging, synthetic aperture, frame rate, beamforming.