Search results for: Image retrieval
1422 MTSSM - A Framework for Multi-Track Segmentation of Symbolic Music
Authors: Brigitte Rafael, Stefan M. Oertl
Abstract:
Music segmentation is a key issue in music information retrieval (MIR) as it provides an insight into the internal structure of a composition. Structural information about a composition can improve several tasks related to MIR, such as searching and browsing large music collections, visualizing musical structure, lyric alignment, and music summarization. The authors of this paper present the MTSSM framework, a two-layer framework for the multi-track segmentation of symbolic music. The strength of this framework lies in the combination of existing methods for local track segmentation with the application of global structure information spanning multiple tracks. The first layer of the MTSSM uses various string matching techniques to detect the best candidate segmentations for each track of a multi-track composition independently. The second layer combines all single-track results and determines the best segmentation for each track with respect to the global structure of the composition.
Keywords: Pattern Recognition, Music Information Retrieval, Machine Learning.
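As a rough illustration of the string-matching step in the first MTSSM layer, the sketch below (not the authors' implementation) collects repeated substrings of a symbolic track, encoded here as one character per note, as candidate segments; the function name and the encoding are assumptions for the example.

```python
from collections import defaultdict

def repeated_segments(track, min_len=4):
    """Collect substrings of a symbolic track (e.g. a pitch string) that occur
    more than once; their patterns and start positions are candidate segments
    for the kind of local, per-track analysis in the first MTSSM layer."""
    occurrences = defaultdict(list)
    n = len(track)
    for length in range(min_len, n // 2 + 1):
        for start in range(n - length + 1):
            occurrences[track[start:start + length]].append(start)
    return {pat: pos for pat, pos in occurrences.items() if len(pos) > 1}

# toy melody encoded as one character per note
candidates = repeated_segments("abcdabcdxyabcd")
print(candidates)  # {'abcd': [0, 4, 10]}
```

In a full system, candidates like these would then be scored per track and reconciled across tracks by the second layer.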
1421 Face Recognition Using Double Dimension Reduction
Authors: M. A Anjum, M. Y. Javed, A. Basit
Abstract:
In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction down to the resolution level that provides the best recognition results. Due to its better computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL database, the Yale database and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) appropriate DCT coefficients are retained to achieve the best recognition results with varying image poses, intensity and illumination levels.
Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
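A minimal sketch of the two reduction steps described above, decimation followed by retention of a low-to-mid frequency DCT block; the decimation factor, the retained block size and the image size are illustrative assumptions, not the values tuned in the paper.

```python
import numpy as np
from scipy.fftpack import dct

def face_features(img, decimation=2, n_coeffs=8):
    """Decimate a grayscale face image, apply a 2-D DCT and keep a
    low-to-mid frequency block of coefficients as the feature vector."""
    small = img[::decimation, ::decimation].astype(float)    # simple decimation
    coeffs = dct(dct(small, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:n_coeffs, :n_coeffs].ravel()               # retained coefficient subset

img = np.random.rand(112, 92)        # ORL-sized stand-in for a face image
print(face_features(img).shape)       # (64,) feature vector fed to the classifier
```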
1420 A Fuzzy Implementation for Optimization of Storage Locations in an Industrial AS/RS
Authors: C. Senanayake, S. Veera Ragavan
Abstract:
Warehousing is commonly used in factories for the storage of products until the delivery of orders. As the number of stored products increases, manual handling becomes tedious. In recent years, manual storage has been converted into fully or partially computer-controlled systems, also known as Automated Storage and Retrieval Systems (AS/RS). This paper discusses an AS/RS system designed such that the best storage location for the products is determined by a fuzzy control system. The design maintains records of the products to be stored or already in store and the storage/retrieval times, along with the availability status of the storage locations. The paper discusses the maintenance of the above-mentioned records and the use of fuzzy logic to determine the optimum storage location for the products. It further discusses the dynamic splitting and merging of the storage locations depending on product sizes.
Keywords: ASRS, fuzzy control systems, MySQL database, dynamic splitting and merging.
1419 Analysis of Patterns in TV Commercials that Recognize NGO Image
Authors: J. Areerut, F. Samuel
Abstract:
The purpose of this research is to analyze the patterns of television commercials and how they encourage non-governmental organizations to build their image in Thailand. It considers how public relations can affect an organization's image. Poor public relations management can hurt a reputation; on the other hand, even a small amount of public relations work can help an organization become broadly recognized and eventually more widely accepted. The main idea in this paper is to study and analyze the patterns of television commercials that could strengthen a non-governmental organization's image. This research uses questionnaires and content analysis to summarize results. The findings show which patterns of television commercials are suited to non-governmental organization work in Thailand. They will be useful for any non-governmental organization that wishes to build its image through television commercials, and also for further work based on this research.
Keywords: Television Commercial (TVC), Organization Image, Non-Governmental Organization: NGO, Public Relation.
1418 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene, relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these data representations reduce the cost for the correct correspondence relative to other possible matches.
Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
1417 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and between applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously; in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled, while the peak signal-to-noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible differences under both methods.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.
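A small numpy sketch of the comparison described above: convert to YCbCr, 2x-subsample only one chroma channel, reconstruct, and compare PSNR for the Cb and Cr cases. The conversion constants are the standard JPEG/BT.601 ones; the random stand-in image and the function names are assumptions for illustration.

```python
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def downsample_one_chroma(rgb, channel='Cb'):
    """Convert to YCbCr, 2x-subsample only the chosen chroma channel,
    upsample it back and return the reconstructed RGB image."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    c = cb if channel == 'Cb' else cr
    c_sub = c[::2, ::2].repeat(2, axis=0).repeat(2, axis=1)[:c.shape[0], :c.shape[1]]
    cb, cr = (c_sub, cr) if channel == 'Cb' else (cb, c_sub)
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in for hats.jpg
print(psnr(img, downsample_one_chroma(img, 'Cb')),
      psnr(img, downsample_one_chroma(img, 'Cr')))
```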
1416 Fuzzy Based Visual Texture Feature for Psoriasis Image Analysis
Authors: G. Murugeswari, A. Suruliandi
Abstract:
This paper proposes a rotational invariant texture feature based on the roughness property of the image for psoriasis image analysis. In this work, we have applied this feature for image classification and segmentation. The fuzzy concept is employed to overcome the imprecision of roughness. Since the psoriasis lesion is modeled by a rough surface, the feature is extended for calculating the Psoriasis Area Severity Index value. For classification and segmentation, the Nearest Neighbor algorithm is applied. We have obtained promising results for identifying affected lesions by using the roughness index and severity level estimation.
Keywords: Fuzzy texture feature, psoriasis, roughness feature, skin disease.
1415 National Image in the Age of Mass Self-Communication: An Analysis of Internet Users' Perception of Portugal
Authors: L. Godinho, N. Teixeira
Abstract:
Nowadays, the massification of Internet access represents one of the major challenges to the traditional powers of the State, among which is the power to control its external image. The virtual world has also sparked the interest of the social sciences, which consider it a new field of study, an immense open text where sense is expressed. In this paper, that immense text has been accessed so as to understand the perception Internet users from all over the world have of Portugal. Ours is a quantitative and qualitative approach, as we have resorted to buzz, thematic and category analysis. The results confirm the predominance of the sea stereotype in others' vision of the Portuguese people, and show that the national image has adapted to network communication through processes of individuation and paganization.
Keywords: Internet, national image, perception, web analytics.
1414 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts the original image into a blurry image using an average filter and equalizes the histogram of that image (lighting normalization). A bi-cubic interpolation function is then applied to the equalized image to obtain a resized, low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function used in this work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide an accuracy of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 sec per image.
Keywords: Average filtering, Bicubic Interpolation, Neurons, Vectorization.
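The preprocessing chain above (average-filter blur, histogram equalisation, bicubic resize) can be sketched with scipy as below; the 3x3 filter size and the 28x23 target resolution are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def preprocess_face(img, target=(28, 23)):
    """Average-filter blur, histogram equalisation and bicubic resize to a
    low resolution, as in the described preprocessing stage."""
    blurred = uniform_filter(img.astype(float), size=3)          # average filter
    hist, bins = np.histogram(blurred.ravel(), 256, (0, 255))
    cdf = hist.cumsum() / hist.sum()
    equalised = np.interp(blurred.ravel(), bins[:-1], cdf * 255).reshape(img.shape)
    factors = (target[0] / img.shape[0], target[1] / img.shape[1])
    return zoom(equalised, factors, order=3)                     # bicubic interpolation

face = np.random.rand(112, 92) * 255      # ORL-sized stand-in image
low_res = preprocess_face(face)
print(low_res.shape)                       # (28, 23) low-resolution input for the network
```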
1413 Image Transmission via Iterative Cellular-Turbo System
Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan
Abstract:
To compress, improve bit error performance and also enhance 2D images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationship of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. It yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS systems: less memory storage is used and the data rate is increased up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system will be a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
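The bit-plane partitioning at the heart of IC-TS can be illustrated with a few lines of numpy; the Turbo encoding/decoding and ICIPA stages are omitted here, and keeping only the top four planes is just an example of trading resolution for data rate.

```python
import numpy as np

def to_bit_planes(img, n_bits=8):
    """Split an n_bits-per-pixel image into its bit planes; in IC-TS each
    plane would be Turbo-encoded and transmitted separately."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(n_bits)]

def from_bit_planes(planes):
    """Reassemble pixels from the bit planes; zeroing the lowest planes
    trades resolution for data rate, as described in the abstract."""
    return sum(plane.astype(np.uint16) << b for b, plane in enumerate(planes))

img = (np.random.rand(8, 8) * 256).astype(np.uint8)
planes = to_bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)         # lossless round trip
coarse = from_bit_planes([np.zeros_like(planes[0])] * 4 + planes[4:])  # keep top 4 planes only
```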
1412 Extraction of Semantic Digital Signatures from MRI Photos for Image-Identification Purposes
Authors: Marios Poulos, George Bokos
Abstract:
This paper attempts to solve the problem of searching for and retrieving similar MRI photos via Internet services using morphological features sourced from the original image. The study aims to serve as an additional tool alongside existing search and retrieval methods. Until now, the main searching mechanism has been syntactic, based on keywords. The technique proposed here aims to serve the new requirements of libraries. One of these is the development of computational tools for the control and preservation of the intellectual property of digital objects, and especially of digital images. For this purpose, this paper proposes the use of a serial number extracted by a previously tested semantic properties method. This method, centered on the multiple layers of a set of arithmetic points, assures the following two properties: the uniqueness of the final extracted number and the semantic dependence of this number on the image used as the method's input. The major advantage of this method is that it can control the authentication of a published image, or its partial modification, to a reliable degree. It also improves on the known hash functions used by digital signature schemes, producing alphanumeric strings for authentication checking and for measuring the degree of similarity between an unknown image and an original image.
Keywords: Computational Geometry, MRI photos, Image processing, Pattern Recognition.
1411 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the most suitable incidence angle for retrieval of soil moisture content; the 25° incidence angle was found most suitable. Support vector regression analysis was then used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986 and 0.9214 respectively at HH-polarization. At VV-polarization, the values of %Bias, RMSE and NSE were found to be 3.6186, 0.9373 and 0.9428 respectively.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
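A brief sketch, using scikit-learn's SVR on synthetic stand-in data, of the regression and the three performance indices named above (%Bias, RMSE, NSE); the kernel and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
sigma0 = np.sort(rng.uniform(-25, -5, 60))             # scattering coefficient (dB), stand-in data
moisture = 0.9 * (sigma0 + 25) + rng.normal(0, 1, 60)  # synthetic soil moisture (%)

model = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(sigma0.reshape(-1, 1), moisture)
est = model.predict(sigma0.reshape(-1, 1))

rmse = np.sqrt(np.mean((moisture - est) ** 2))
bias_pct = 100 * np.sum(est - moisture) / np.sum(moisture)     # %Bias
nse = 1 - np.sum((moisture - est) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(rmse, bias_pct, nse)
```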
1410 Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection
Authors: Khalid A. Kaabneh
Abstract:
This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries on image data. This research describes a new indexing approach that depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme and presents the findings of our extensive evaluation conducted to optimize (1) the choice of the basis matrix (B) and (2) the size of the index A (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme achieves significantly higher performance.
Keywords: Axial Projection, images, indexing, multimedia database, searching.
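The indexing step described above reduces to an ordinary least-squares fit, as in this sketch; the basis images, their number N = 8 and the image size are illustrative assumptions. Searching then amounts to comparing coefficient vectors, for example by nearest-neighbour distance in the N-dimensional index space.

```python
import numpy as np

def build_index(image, basis_images):
    """Least-squares coefficients of the image as a linear combination of
    the basis images; the coefficient vector serves as the index entry."""
    B = np.column_stack([b.ravel() for b in basis_images])    # basis matrix B
    coeffs, *_ = np.linalg.lstsq(B, image.ravel(), rcond=None)
    return coeffs

basis = [np.random.rand(32, 32) for _ in range(8)]   # N = 8 chosen basis images
query = np.random.rand(32, 32)
print(build_index(query, basis))                      # 8-element index vector
```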
1409 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
Authors: Adnan A. Y. Mustafa
Abstract:
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
Keywords: Big images, binary images, similarity, matching.
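A simplified stand-in for the idea, not the PMMBI model itself: similarity is estimated from a small random sample of pixels addressed by relative coordinates, which also makes the comparison image-size invariant. The sample size, the disk test shapes and the function name are assumptions for the example.

```python
import numpy as np

def quick_similarity(img_a, img_b, n_samples=500, seed=0):
    """Estimate the similarity of two binary images by comparing only a random
    sample of pixels, addressed by relative coordinates so that images of
    different sizes can be compared."""
    rng = np.random.default_rng(seed)
    u, v = rng.random(n_samples), rng.random(n_samples)
    a = img_a[(u * img_a.shape[0]).astype(int), (v * img_a.shape[1]).astype(int)]
    b = img_b[(u * img_b.shape[0]).astype(int), (v * img_b.shape[1]).astype(int)]
    return np.mean(a == b)

def disk(n):
    """Binary test image: a filled disk on an n x n grid."""
    y, x = np.mgrid[:n, :n]
    return (x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 3) ** 2

print(quick_similarity(disk(1000), disk(250)))    # near 1.0: same shape, different sizes
print(quick_similarity(disk(1000), ~disk(1000)))  # near 0.0: complementary images
```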
1408 A Survey of Response Generation of Dialogue Systems
Authors: Yifan Fan, Xudong Luo, Pingping Lin
Abstract:
An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push forward research on this topic, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite state machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, vast knowledge base methods, and some other methods. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out in which ways these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.
Keywords: Retrieval, generative, deep learning, response generation, knowledge.
1407 Signed Approach for Mining Web Content Outliers
Authors: G. Poonkuzhali, K.Thiagarajan, K.Sarukesi, G.V.Uma
Abstract:
The emergence of the Internet has brought about a revolution in information storage and retrieval. As most of the data on the web is unstructured and contains a mix of text, video, audio, etc., there is a need to mine information to cater to the specific needs of users without loss of important hidden information. Thus, developing user-friendly and automated tools for providing relevant information quickly becomes a major challenge in web mining research. Most of the existing web mining algorithms have concentrated on finding frequent patterns while neglecting the less frequent ones that are likely to contain outlying data such as noise and irrelevant and redundant data. This paper focuses on the Signed approach and full-word matching on an organized domain dictionary for mining web content outliers. The Signed approach yields the relevant web documents as well as the outlying web documents. As the dictionary is organized based on the number of characters in a word, searching and retrieval of documents takes less time and less space.
Keywords: Outliers, Relevant document, Signed Approach, Web content mining, Web documents.
1406 The Mechanistic Deconvolutive Image Sensor Model for an Arbitrary Pan–Tilt Plane of View
Authors: S. H. Lim, T. Furukawa
Abstract:
This paper presents a generalized form of the mechanistic deconvolution technique (GMD) for modeling image sensors applicable to various pan–tilt planes of view. The mechanistic deconvolution technique (UMD) is modified with the given angles of a pan–tilt plane of view to formulate constraint parameters and characterize distortion effects, and thereby determine the corrected image data. As a result, no experimental setup or calibration is required. Due to the mechanistic nature of the sensor model, the necessity for the sensor image plane to be orthogonal to its z-axis is eliminated, and the dependency on image data is reduced. An experiment was constructed to evaluate the accuracy of a model created by GMD and its insensitivity to changes in sensor properties and in pan and tilt angles. This was compared with a pre-calibrated model and a model created by UMD using two sensors with different specifications. The GMD model achieved similar accuracy with one-seventh the number of iterations, and attained a mean error lower by a factor of 2.4 when compared to the pre-calibrated and UMD models, respectively. The model has also shown itself to be robust and, in comparison to the pre-calibrated and UMD models, improved the accuracy significantly.
Keywords: Image sensor modeling, mechanistic deconvolution, calibration, lens distortion.
1405 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code
Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang
Abstract:
This research studies the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an augmented AR image card. The new visual tag allows reading both the revealed data of the QR code and the instant data from the AR image card. Furthermore, a handheld communicating device is used to read and decode the new visual tag, and the concealed data of the new visual tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the Internet. The data revealed from the QR code are represented as text. Normally, the AR image card is designed to store the corresponding data in 3-dimensional or animation/video form. By using the QR code's high fault tolerance, the new visual tag can access these two different types of data through a handheld communicating device. The new visual tag has the advantage of carrying much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient area for the designed AR card augmented on the QR code is 9% coverage of the total new visual tag's area, and 2) the best location for the augmented AR image card on the QR code is the bottom-right corner of the new visual tag.
Keywords: Augmented reality, QR code, Visual tag, Handheld communicating device.
1404 Multi-Focus Image Fusion Using SFM and Wavelet Packet
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a multi-focus image fusion method using Spatial Frequency Measurements (SFM) and the Wavelet Packet transform is proposed. In the proposed fusion approach, the two source images are first transformed and decomposed into sixteen subbands using the Wavelet Packet transform. Next, each subband is partitioned into sub-blocks, and the clearer regions in each block are identified using the Spatial Frequency Measurement (SFM). Finally, the fused image is reconstructed by performing the inverse wavelet transform. The experimental results show that the proposed method outperforms traditional SFM-based methods in terms of both objective and subjective assessments.
Keywords: Multi-focus image fusion, Wavelet Packet, Spatial Frequency Measurement.
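The SFM selection rule can be written in a few lines; below is a sketch under the assumption that it is applied to sub-blocks (the wavelet packet decomposition is omitted here), with row and column frequencies combined in the usual way.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial Frequency Measurement of a block: combines row and column
    frequency; the block with the higher value is taken as the clearer one."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_blocks(block_a, block_b):
    """Keep whichever source block has the higher spatial frequency."""
    return block_a if spatial_frequency(block_a) >= spatial_frequency(block_b) else block_b

sharp = np.random.rand(8, 8)
blurred = np.full((8, 8), sharp.mean())   # defocused stand-in: almost no detail
fused = fuse_blocks(sharp, blurred)        # selects the sharper block
```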
1403 Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks
Authors: Chaitanya Chawla, Divya Panwar, Gurneesh Singh Anand, M. P. S Bhatia
Abstract:
This paper presents a deep-learning mechanism for classifying computer-generated images and photographic images. The proposed method includes a convolutional layer capable of automatically learning correlations between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than the structural features of the image. The proposed layer is particularly designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was assessed on recent natural and computer-generated images, and it was concluded that it performs better than current state-of-the-art methods.
Keywords: Image forensics, computer graphics, classification, deep learning, convolutional neural networks.
1402 Feature Preserving Nonlinear Diffusion for Ultrasonic Image Denoising and Edge Enhancement
Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang, Yu Li
Abstract:
Utilizing the echoic intensity and distribution from different organs and local details of the human body, ultrasonic images can capture important medical pathological changes, which unfortunately may be affected by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is done by the hyperbolic tangent function. Experiments on real ultrasonic images indicate effective preservation of edges, local details and ultrasonic echoic bright strips during denoising by our scheme.
Keywords: Anisotropic diffusion, coordinate transformation, directional derivatives, edge enhancement, hyperbolic tangent function, image denoising.
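The diffusion term in such schemes is often illustrated with a generic Perona-Malik iteration; the sketch below is that generic form, not the coordinate-transformation and tanh-based scheme of the paper, and its parameters (kappa, step size, iteration count) are arbitrary example values.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, lam=0.2):
    """Generic Perona-Malik diffusion: smooths homogeneous (speckled)
    regions while the conduction function preserves strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences towards the four neighbours (np.roll wraps at borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: small across edges, large in flat regions
        cn, cs, ce, cw = [np.exp(-(d / kappa) ** 2) for d in (dn, ds, de, dw)]
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.rand(64, 64) * 50 + 100   # stand-in for a speckled ultrasound patch
denoised = anisotropic_diffusion(noisy)
```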
1401 On-line Image Mosaicing of Live Stem Cells
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini
Abstract:
Image mosaicing is a technique that permits enlarging the field of view of a camera. For instance, it is employed to obtain panoramas with common cameras or, in scientific applications, to obtain the image of a whole culture in microscopic imaging. Usually, a mosaic of cell cultures is achieved using automated microscopes. However, this is often performed in batch, through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, showing a low contrast that cannot be improved further. We present a method to estimate the flat field from live stem cell images, even in the case of 100% confluence, permitting accurate mosaics to be built on-line using high-performance algorithms.
Keywords: Microscopy, image mosaicing, stem cells.
1400 Image Segmentation Based on Graph Theoretical Approach to Improve the Quality of Image Segmentation
Authors: Deepthi Narayan, Srikanta Murthy K., G. Hemantha Kumar
Abstract:
Graph-based image segmentation techniques are considered to be among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real-time applications. However, there is a need to improve the quality of the segmented images obtained from earlier graph-based methods. This paper proposes an improvement to the graph-based image segmentation methods already described in the literature. We contribute to the existing method by proposing the use of a weighted Euclidean distance to calculate the edge weight, which is the key element in building the graph. We also propose a slight modification of the segmentation method already described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to existing methods, with a slight compromise in efficiency.
Keywords: Graph based image segmentation, threshold, Weighted Euclidean distance.
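A small sketch of the edge-weighting idea: neighbouring pixels are connected by edges whose weight is a weighted Euclidean distance over the colour channels. The per-channel weights used here are illustrative (luminance-like) values, not the ones proposed in the paper, and the 4-connected grid and edge sorting follow the usual graph-based segmentation setup.

```python
import numpy as np

def edge_weight(pixel_p, pixel_q, channel_weights=(0.30, 0.59, 0.11)):
    """Weighted Euclidean distance between two RGB pixels, used as the edge
    weight when building the segmentation graph."""
    diff = np.asarray(pixel_p, float) - np.asarray(pixel_q, float)
    return float(np.sqrt(np.sum(np.asarray(channel_weights) * diff ** 2)))

def grid_graph_edges(img):
    """4-connected grid graph: one weighted edge per neighbouring pixel pair,
    sorted by ascending weight for the usual merge-based segmentation pass."""
    h, w, _ = img.shape
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append(((y, x), (y, x + 1), edge_weight(img[y, x], img[y, x + 1])))
            if y + 1 < h:
                edges.append(((y, x), (y + 1, x), edge_weight(img[y, x], img[y + 1, x])))
    return sorted(edges, key=lambda e: e[2])

img = np.random.rand(4, 4, 3) * 255
print(len(grid_graph_edges(img)))   # 24 edges for a 4x4 grid
```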
1399 A Combination of Similarity Ranking and Time for Social Research Paper Searching
Authors: P. Jomsri
Abstract:
Nowadays, social media are important tools for web resource discovery. The performance and capabilities of web searches are vital, especially for search results from social research paper bookmarking. This paper proposes a new ranking algorithm, CSTRank, which combines similarity ranking with the paper's posted time. The posted time is a static ranking component used to improve search results. In this study, the posted time is combined with similarity ranking to produce a better ranking than other methods such as similarity ranking alone or SimRank. The retrieval performance of the combination rankings is evaluated using mean values of NDCG. The experimental evaluation indicates that CSTRank, using a weight score at a ratio of 90:10, can improve the efficiency of research paper searching on social bookmarking websites.
Keywords: Combination ranking, information retrieval, time, similarity ranking, static ranking, weight score.
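A toy sketch of the 90:10 combination described above; how CSTRank actually computes and normalises the similarity and time scores is not detailed here, so the linear age normalisation, the dates and the function name are assumptions for illustration.

```python
from datetime import datetime

def cst_score(similarity, posted_time, newest, oldest, w_sim=0.9):
    """Combined score: similarity ranking blended with a static score from the
    paper's posted time at a 90:10 ratio (the ratio reported to work best)."""
    age_span = (newest - oldest).total_seconds() or 1.0
    time_score = (posted_time - oldest).total_seconds() / age_span   # newer -> closer to 1
    return w_sim * similarity + (1 - w_sim) * time_score

papers = [  # (title, similarity to the query, posted time) - toy data
    ("paper A", 0.82, datetime(2010, 1, 5)),
    ("paper B", 0.80, datetime(2012, 6, 1)),
]
oldest, newest = datetime(2009, 1, 1), datetime(2013, 1, 1)
ranked = sorted(papers, key=lambda p: cst_score(p[1], p[2], newest, oldest), reverse=True)
print([p[0] for p in ranked])   # the newer, almost-as-similar paper overtakes the older one
```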
1398 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.
Abstract:
In this paper, an image-adaptive, invisible digital watermarking algorithm based on the Orthogonal Polynomials based Transformation (OPT) is proposed for the copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT counterparts.
Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.
1397 A Review on Medical Image Registration Techniques
Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry
Abstract:
This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was done, providing a platform for developing a good foundation for research in this field, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. The literature review aims to provide a solid theoretical foundation for research endeavours in image registration techniques, which is possible through a methodological analysis and synthesis of existing contributions. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.
Keywords: Image registration techniques, medical images, neural networks, optimisation, transformation.
1396 Synthetic Transmit Aperture Method in Medical Ultrasonic Imaging
Authors: Ihor Trots, Andrzej Nowicki, Marcin Lewandowski
Abstract:
The work describes the use of a synthetic transmit aperture (STA), with a single element transmitting and all elements receiving, in medical ultrasound imaging. The STA technique is a novel alternative to today's commercial systems, where an image is acquired sequentially one image line at a time, which puts a strict limit on the frame rate and the amount of data needed for high image quality. STA imaging allows data to be acquired simultaneously from all directions over a number of emissions, and the full image can then be reconstructed. In the experiments, a 32-element linear transducer array with 0.48 mm inter-element spacing was used. A single-element transmit aperture was used to generate a spherical wave covering the full image region. 2D ultrasound images of a wire phantom, obtained using the STA method and the commercial ultrasound scanner Antares, are presented to demonstrate the benefits of STA imaging.
Keywords: Ultrasound imaging, synthetic aperture, frame rate, beamforming.
1395 Color Constancy using Superpixel
Authors: Xingsheng Yuan, Zhengzhi Wang
Abstract:
Color constancy algorithms are generally based on simplifying assumptions about the spectral distribution or the reflection attributes of the scene surface. However, in reality, these assumptions are too restrictive. A methodology is proposed to extend existing algorithms by applying color constancy locally to image patches rather than globally to the entire image. In this paper, a method based on low-level image features using superpixels is proposed. Superpixel segmentation partitions an image into regions that are approximately uniform in size and shape. Instead of using the entire pixel set for estimating the illuminant, only the superpixels with the most valuable information are used. Based on large-scale experiments on real-world scenes, it is shown that the estimation is more accurate using superpixels than when using the entire image.
Keywords: Color constancy, illuminant estimation, superpixel.
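One way to realise this locally is sketched below with SLIC superpixels and a grey-world estimate per region; the "most valuable information" criterion is approximated here by per-region standard deviation, which is an assumption of this example rather than the paper's selection rule.

```python
import numpy as np
from skimage.segmentation import slic

def illuminant_per_superpixel(img, n_segments=200):
    """Grey-world illuminant estimate computed per SLIC superpixel; the
    global estimate is taken from the most varied regions rather than
    from the entire pixel set."""
    labels = slic(img, n_segments=n_segments, compactness=10.0)
    estimates, scores = [], []
    for lab in np.unique(labels):
        region = img[labels == lab]                 # pixels of one superpixel, shape (k, 3)
        mean_rgb = region.mean(axis=0)
        estimates.append(mean_rgb / (np.linalg.norm(mean_rgb) + 1e-8))
        scores.append(region.std())                 # crude "information" score (assumption)
    best = np.argsort(scores)[-10:]                 # keep the 10 most varied superpixels
    return np.mean([estimates[i] for i in best], axis=0)

img = np.random.rand(120, 160, 3)                   # stand-in scene image in [0, 1]
print(illuminant_per_superpixel(img))               # estimated illuminant colour direction
```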
1394 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golumb Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, implemented in simulation-oriented VLSI (Very Large Scale Integration). The aim is to analyse the performance of lossless image compression, reduce the image size without losing image quality, and realise the result in the VLSI-based FELICS algorithm. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is processed pixel by pixel and then implemented in the VLSI domain. This is used to achieve high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Color-difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular pipelined data flow and four-stage parallelism. With two-level parallelism, consecutive pixels can be classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multi-level parallelism.
Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golumb Rice code, High Definition display, VLSI Implementation.
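Of the two entropy codes named in the title, the Golomb-Rice code (used by FELICS for out-of-range prediction errors) is simple enough to sketch in software; the rice parameter k = 3 and the test values are arbitrary, and the adjusted binary code for in-range values is not shown.

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code: unary-coded quotient followed by the k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

def golomb_rice_decode(bits, k):
    """Inverse mapping: count leading ones for the quotient, then read k remainder bits."""
    q = bits.index('0')
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for v in (0, 5, 37):
    code = golomb_rice_encode(v, k=3)
    assert golomb_rice_decode(code, k=3) == v
    print(v, code)   # e.g. 37 -> '11110101'
```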
1393 Quality of Non-Point Source Pollutant Identification using Digital Image and Remote Sensing Image
Authors: Riki Mukhaiyar
Abstract:
The integration of remote sensing technology, information from digital image data, and modeling technology for the simulation of water quality makes it easier to observe changes in water quality on the river surface. An example is the Ciliwung River, which is contaminated with non-point source pollutants from household waste, particularly downstream. This indicates that the water quality of the river is getting worse. Land use for settlements and housing ranges between 62.84% and 81.26% downstream of the Ciliwung River, which gives a significant picture of the factors that affect the water quality of the Ciliwung River.
Keywords: Digital Image, Digitize, Land use, Non-Point Source Pollutant, Qual2e Simulation.