Search results for: Image Processing.
2639 Volterra Filter for Color Image Segmentation
Authors: M. B. Meenavathi, K. Rajesh
Abstract:
Color image segmentation plays an important role in computer vision and image processing areas. In this paper, the features of Volterra filter are utilized for color image segmentation. The discrete Volterra filter exhibits both linear and nonlinear characteristics. The linear part smoothes the image features in uniform gray zones and is used for getting a gross representation of objects of interest. The nonlinear term compensates for the blurring due to the linear term and preserves the edges which are mainly used to distinguish the various objects. The truncated quadratic Volterra filters are mainly used for edge preserving along with Gaussian noise cancellation. In our approach, the segmentation is based on K-means clustering algorithm in HSI space. Both the hue and the intensity components are fully utilized. For hue clustering, the special cyclic property of the hue component is taken into consideration. The experimental results show that the proposed technique segments the color image while preserving significant features and removing noise effects.
Keywords: Color image segmentation, HSI space, K-means clustering, Volterra filter.
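For illustration only, the Python sketch below shows one way the cyclic property of hue mentioned above could be handled inside a K-means loop. It is not the authors' implementation; the wrap-around distance, the circular mean and all parameter values are assumptions.

```python
import numpy as np

def cyclic_hue_distance(hue, centers):
    """Angular distance between hue values and cluster centers (degrees)."""
    diff = np.abs(hue[:, None] - centers[None, :])
    return np.minimum(diff, 360.0 - diff)

def kmeans_hue(hue, k=3, iters=50, seed=0):
    """K-means on the hue channel using the cyclic (angular) distance."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(hue, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(cyclic_hue_distance(hue, centers), axis=1)
        for j in range(k):
            members = hue[labels == j]
            if members.size:
                # circular mean keeps the cyclic property of hue
                ang = np.deg2rad(members)
                centers[j] = np.rad2deg(
                    np.arctan2(np.sin(ang).mean(), np.cos(ang).mean())) % 360.0
    return labels, centers

# Usage: hue values in degrees, e.g. from an HSI conversion of the image
labels, centers = kmeans_hue(np.random.default_rng(1).uniform(0, 360, 1000), k=3)
```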
2638 A New Approach to Steganography using Sinc-Convolution Method
Authors: Ahmad R. Naghsh-Nilchi, Latifeh Pourmohammadbagher
Abstract:
Both image steganography and image encryption have advantages and disadvantages. Steganography allows us to hide a desired image containing confidential information in a cover or host image, while image encryption transforms the desired image into a non-readable, incomprehensible form. Encryption methods are usually much more robust than steganographic ones; however, they have high visibility and can easily provoke attackers, since it is usually obvious from an encrypted image that something is hidden. Combining steganography and encryption covers the weaknesses of both and therefore increases security. In this paper, an image encryption method based on sinc-convolution with a 128-bit encryption key is introduced. The encrypted image is then covered by a host image using a modified version of the JSteg steganography algorithm. This method can be applied to almost all image formats, including TIF, BMP, GIF and JPEG. The experimental results show that our method is able to hide a desired image with high security and low visibility.
Keywords: Sinc approximation, image encryption, sinc-convolution, image steganography, JSteg.
2637 Mining Image Features in an Automatic Two-Dimensional Shape Recognition System
Authors: R. A. Salam, M.A. Rodrigues
Abstract:
The number of features required to represent an image can be very large. Using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. Shape outline readings are put through a normalization and dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and a small degree of distortion.
Keywords: Image mining, feature selection, shape recognition, peak measures.
2636 Cost Effective Real-Time Image Processing Based Optical Mark Reader
Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar
Abstract:
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQs). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of the OMR sheet requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than a normal sheet. The available process is uneconomical and dependent on paper thickness, scanning quality, paper orientation, special hardware and customized software. This study tackles the problem of evaluating the OMR sheet without any special hardware, making the whole process economical. We propose an image processing based algorithm which can be used to read and evaluate scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets; responses recorded on a normal sheet are sufficient for evaluation. The proposed system handles color, brightness, rotation and small imperfections in the OMR sheet images.
Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding.
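As a rough illustration of the Hough circle transform and binary thresholding named in the keywords, the OpenCV sketch below locates bubbles and flags those that are mostly dark. It is not the authors' algorithm, and the parameter values (fill ratio, radii, Hough thresholds) are placeholders.

```python
import cv2
import numpy as np

def find_marked_bubbles(path, fill_ratio=0.5):
    """Detect answer bubbles with the Hough circle transform and flag filled ones."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Binary threshold: marked (dark) pixels become white in the mask
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=80, param2=30, minRadius=8, maxRadius=20)
    marked = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            x0, y0 = max(x - r, 0), max(y - r, 0)
            roi = mask[y0:y + r, x0:x + r]
            if roi.size and (roi > 0).mean() > fill_ratio:
                marked.append((x, y, r))   # bubble mostly dark -> treated as marked
    return marked
```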
2635 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Authors: Shahriar Farzam, Maryam Rastgarpour
Abstract:
Image denoising plays an extremely important role in digital image processing. Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a method for contrast enhancement of cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed through the Unequally Spaced Fast Fourier Transform (USFFT). This transform returns a table of curvelet coefficients indexed by a scale parameter, an orientation and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies the two-dimensional FDCT via the unequally spaced fast Fourier transform to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Keywords: Curvelet transform, image enhancement, CBCT, image denoising.
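The coefficient-modification step described above can be mimicked with any multiscale transform. The sketch below uses PyWavelets as a stand-in for the FDCT-USFFT actually used in the paper, and applies simple hard thresholding to the detail coefficients; the wavelet choice, decomposition level and threshold are illustrative assumptions only.

```python
import numpy as np
import pywt

def enhance_by_coefficient_thresholding(image, wavelet="db4", level=3, thr=10.0):
    """Decompose, hard-threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = [
        tuple(pywt.threshold(band, thr, mode="hard") for band in level_bands)
        for level_bands in details
    ]
    return pywt.waverec2([approx] + new_details, wavelet)
```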
2634 Using Electrical Impedance Tomography to Control a Robot
Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi
Abstract:
Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to reach the exact location of the target object. The data set for impedance imaging is obtained by repeated current injections and voltage measurements between all electrode pairs. After performing the necessary calculations to obtain the impedance, the information is transmitted to the computer. These data are then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated using the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the robotic arm moves to the desired tissue on the user's command.
Keywords: Electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography.
2633 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision
Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari
Abstract:
In this study, a machine vision and singulation system was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice (in the image), it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB software. Then the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for the paddy husking and rice breakage percentages, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
Keywords: Computer vision, rice kernel, husking, breakage.
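A minimal NumPy sketch of the blue-to-red ratio rule with the reported 0.75 threshold is given below. The background mask and the assignment of which side of the threshold is paddy versus rice are assumptions, not taken from the paper.

```python
import numpy as np

def classify_kernel(rgb_image, threshold=0.75):
    """Label a kernel image from its mean blue-to-red ratio (sketch only)."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    # ignore dark background pixels (assumed heuristic, not from the paper)
    kernel = pixels[pixels.sum(axis=1) > 60]
    ratio = kernel[:, 2].mean() / (kernel[:, 0].mean() + 1e-6)  # B / R
    # which class falls on which side of 0.75 is an assumption here
    return ("rice" if ratio > threshold else "paddy"), ratio
```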
2632 White Blood Cells Identification and Counting from Microscopic Blood Image
Authors: Lorenzo Putzu, Cecilia Di Ruberto
Abstract:
The counting and analysis of blood cells allow the evaluation and diagnosis of a vast number of diseases. In particular, the analysis of white blood cells (WBCs) is a topic of great interest to hematologists. Nowadays the morphological analysis of blood cells is performed manually by skilled operators. This involves numerous drawbacks, such as slowness of the analysis and non-standard accuracy, dependent on the operator's skill. In the literature there are only a few examples of automated systems for analyzing white blood cells, most of which are only partial. This paper presents a complete and fully automatic method for white blood cell identification from microscopic images. The proposed method first identifies the white blood cells, from which the nucleus and cytoplasm are subsequently extracted. The whole work has been developed in the MATLAB environment, in particular with the Image Processing Toolbox.
Keywords: Automatic detection, biomedical image processing, segmentation, white blood cell analysis.
2631 Building Gabor Filters from Retinal Responses
Authors: Johannes Partzsch, Christian Mayr, Rene Schuffny
Abstract:
Starting from a biologically inspired framework, Gabor filters were built up from retinal filters via LMSE algorithms. A subset of retinal filter kernels was chosen to form a particular Gabor filter by using a weighted sum. One-dimensional optimization approaches were shown to be inappropriate for the problem. All model parameters were fixed with biological or image processing constraints. Detailed analysis of the optimization procedure led to the introduction of a minimization constraint. Finally, quantization of the weighting factors was investigated. This resulted in an optimized cascaded structure of a Gabor filter bank implementation with lower computational cost.
Keywords: Gabor filter, image processing, optimization
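A minimal NumPy sketch of the weighted-sum idea is given below: a target Gabor kernel is approximated as a least-squares (LMSE) combination of basis kernels. The difference-of-Gaussians "retinal" kernels and all parameters are stand-ins, not the paper's biologically derived filters.

```python
import numpy as np

def gabor_kernel(size=21, sigma=3.0, theta=0.0, freq=0.2):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def dog_kernel(size=21, s1=1.0, s2=2.0):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(s1) - g(s2)

# Basis of shifted difference-of-Gaussians kernels (stand-in for retinal filters)
size = 21
basis = []
for dx in range(-6, 7, 3):
    for dy in range(-6, 7, 3):
        k = np.roll(np.roll(dog_kernel(size), dx, axis=1), dy, axis=0)
        basis.append(k.ravel())
A = np.stack(basis, axis=1)                               # columns are basis kernels
target = gabor_kernel(size).ravel()
weights, *_ = np.linalg.lstsq(A, target, rcond=None)      # LMSE fit of the weights
approx = (A @ weights).reshape(size, size)
error = np.linalg.norm(approx.ravel() - target) / np.linalg.norm(target)
```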
2630 Tool Condition Monitoring of Ceramic Inserted Tools in High Speed Machining through Image Processing
Authors: Javier A. Dominguez Caballero, Graeme A. Manson, Matthew B. Marshall
Abstract:
Cutting tools with ceramic inserts are often used in the machining of many types of superalloy, mainly due to their high strength and thermal resistance. Nevertheless, during the cutting process, the plastic flow wear generated in these inserts enhances and propagates cracks due to high temperature and high mechanical stress. This leads to highly variable failure of the cutting tool. This article explores the relationship between the continuous wear that ceramic SiAlON (solid solutions based on the Si3N4 structure) inserts experience during a high-speed machining process and the evolution of the sparks created during the same process. These sparks were analysed through pictures of the cutting process recorded using an SLR camera. Features relating to the intensity and area of the cutting sparks were extracted from the individual pictures using image processing techniques. These features were then related to the ceramic insert's crater wear area.
Keywords: Ceramic cutting tools, high speed machining, image processing, tool condition monitoring, tool wear.
2629 An Images Monitoring System based on Multi-Format Streaming Grid Architecture
Authors: Yi-Haur Shiau, Sun-In Lin, Shi-Wei Lo, Hsiu-Mei Chou, Yi-Hsuan Chen
Abstract:
This paper proposes a novel multi-format stream grid architecture for a real-time image monitoring system. The system, based on a three-tier architecture, includes a stream receiving unit, a stream processor unit, and a presentation unit. It is a distributed-computing, loosely coupled architecture. The benefit is that the number of required servers can be adjusted depending on the load of the image monitoring system. The stream receiving unit supports multiple capture source devices and multi-format stream compression encoders. The stream processor unit includes three modules: a stream clipping module, an image processing module and an image management module. The presentation unit can display image data on several different platforms. We verified the proposed grid architecture with an actual image monitoring test. We used a fast image matching method with adjustable parameters for different monitoring situations. A background subtraction method is also implemented in the system. Experimental results showed that the proposed architecture is robust, adaptive, and powerful for image monitoring.
Keywords: Motion detection, grid architecture, image monitoring system, background subtraction.
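For the background subtraction step mentioned in the abstract, a minimal OpenCV sketch is shown below. MOG2 is one standard choice and is not necessarily the method used in the described system; the history, threshold and blob-size values are placeholders.

```python
import cv2

def run_motion_detection(source=0, min_area=500):
    """Flag frames containing motion using MOG2 background subtraction."""
    cap = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # keep only confident foreground, then look for sizable blobs
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > min_area for c in contours):
            print("motion detected")
    cap.release()
```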
2628 Image Enhancement using α-Trimmed Mean ε-Filters
Authors: Mahdi Shaneh, Arash Golibagh Mahyari
Abstract:
Image enhancement is one of the most important and challenging preprocessing steps for almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested. It has been shown that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise; in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter which utilizes the α-trimmed mean. We argue that this new method gives better outcomes compared to previous ones, and the experimental results confirm this claim.
Keywords: Image enhancement, median filter, ε-filter, α-trimmed mean filter.
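A minimal NumPy sketch of the combination as read from the abstract: within each window, only pixels whose difference from the centre is at most ε take part, and those are combined with an α-trimmed mean. The window size, α and ε are arbitrary placeholders, and this is not the authors' code.

```python
import numpy as np

def alpha_trimmed_epsilon_filter(image, win=3, alpha=0.2, eps=20):
    """ε-filter whose averaging step is an α-trimmed mean (illustrative sketch)."""
    pad = win // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win].ravel()
            center = image[i, j]
            # ε-filter idea: only pixels close to the centre take part in the average
            close = window[np.abs(window - center) <= eps]
            if close.size == 0:
                continue
            close = np.sort(close)
            t = int(alpha * close.size / 2)          # trim t values from each end
            trimmed = close[t:close.size - t] if close.size > 2 * t else close
            out[i, j] = trimmed.mean()
    return out
```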
2627 Nature Inspired Metaheuristic Algorithms for Multilevel Thresholding Image Segmentation - A Survey
Authors: C. Deepika, J. Nithya
Abstract:
Segmentation is one of the essential tasks in image processing, and thresholding is one of the simplest techniques for performing it. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values. Various techniques have been proposed to achieve multilevel thresholding. Here, a study of some nature-inspired metaheuristic algorithms for multilevel thresholding image segmentation is conducted: the particle swarm optimization (PSO), artificial bee colony (ABC), ant colony optimization (ACO) and cuckoo search (CS) algorithms.
Keywords: Ant colony optimization, Artificial bee colony optimization, Cuckoo search algorithm, Image segmentation, Multilevel thresholding, Particle swarm optimization.
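All of the surveyed metaheuristics optimize some thresholding criterion. The sketch below shows one common choice, the between-class variance for a set of thresholds, with a simple random search standing in for PSO/ABC/ACO/CS; it is purely illustrative and not taken from the survey.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective for multilevel thresholding (higher is better)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    edges = [0] + sorted(thresholds) + [256]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

def random_search_thresholds(gray, k=3, iters=2000, seed=0):
    """Stand-in optimizer; a metaheuristic such as PSO would replace this loop."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    rng = np.random.default_rng(seed)
    best, best_val = None, -1.0
    for _ in range(iters):
        cand = sorted(rng.choice(np.arange(1, 255), size=k, replace=False).tolist())
        val = between_class_variance(hist, cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val
```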
2626 Effectiveness of Dominant Color Descriptor Technique in Medical Image Retrieval Application
Authors: Mohd Kamir Yusof
Abstract:
This paper presents a dominant color descriptor technique for medical image retrieval. The medical images are collected and stored in a medical database. The purpose of the dominant color descriptor (DCD) technique is to retrieve medical images and to display images similar to a queried image. First, the technique searches for and retrieves a medical image based on a keyword entered by the user. Once the image is found, the system assigns it as the query image. The DCD technique then calculates the dominant color value of this image. Next, the system searches for and retrieves medical images based on the dominant color value of the query image. Finally, the system displays the images similar to the query image to the user. A simple application has been developed and tested using the dominant color descriptor. Experimental results indicate that this technique is effective and can be used for medical image retrieval.
Keywords: Medical image retrieval, dominant color descriptor.
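A rough sketch of the dominant-color idea described above: k-means extracts the dominant color of each image, and a simple distance between dominant colors ranks the results. The cluster count and the Euclidean distance are assumptions, not the paper's exact DCD formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(rgb_image, n_clusters=4, seed=0):
    """Return the centroid of the largest color cluster as the dominant color."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[counts.argmax()]

def retrieve_similar(query_image, database_images, top_k=5):
    """Rank database images by distance between dominant colors."""
    q = dominant_color(query_image)
    dists = [np.linalg.norm(dominant_color(img) - q) for img in database_images]
    order = np.argsort(dists)[:top_k]
    return order, [dists[i] for i in order]
```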
2625 Parallel Double Splicing on Iso-Arrays
Authors: V. Masilamani, D.K. Sheena Christy, D.G. Thomas
Abstract:
Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images and, to study the generating power of the system, we define the class of languages it generates. In this paper, an image is referred to as an array. We use a primitive called an iso-array to synthesize the image/array. The operation is double splicing on iso-arrays. The double splicing operation is used in DNA computing, and we use it to synthesize images. A comparison of the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays with the existing family of local iso-picture languages is made. Certain closure properties such as union, concatenation and rotation are studied for the family of languages generated by the proposed model.
Keywords: DNA computing, splicing system, iso-picture languages, iso-array double splicing system, iso-array self splicing.
2624 Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion
Authors: Bing-Fei Wu, Yen-Lin Chen, Chung-Cheng Chiu
Abstract:
In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects with various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
Keywords: image segmentation, multilevel thresholding, clustering, discriminant analysis
2623 A Comparative Study of Image Segmentation using Edge-Based Approach
Authors: Rajiv Kumar, Arthanariee A. M.
Abstract:
Image segmentation is the process of partitioning a given image into several parts so that each part can be further analyzed. There are numerous image segmentation techniques available in the literature. In this paper, the authors analyze the edge-based approach to image segmentation. They implement different edge operators, namely Prewitt, Sobel, LoG, and Canny, on the basis of their threshold parameters. The results of these operators are shown for various images.
Keywords: Edge Operator, Edge-based Segmentation, Image Segmentation, Matlab 10.4.
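A short OpenCV/SciPy sketch comparing the four operators named above on a single image is given below; the thresholds and kernel sizes are illustrative choices, not those used in the study (which was carried out in MATLAB).

```python
import cv2
import numpy as np
from scipy import ndimage

def edge_maps(gray, thresh=60):
    """Return binary edge maps for the Prewitt, Sobel, LoG and Canny operators."""
    g = gray.astype(float)
    prewitt = np.hypot(ndimage.prewitt(g, axis=0), ndimage.prewitt(g, axis=1))
    sobel = np.hypot(cv2.Sobel(g, cv2.CV_64F, 1, 0), cv2.Sobel(g, cv2.CV_64F, 0, 1))
    log = np.abs(ndimage.gaussian_laplace(g, sigma=2.0))
    return {
        "Prewitt": prewitt > thresh,
        "Sobel": sobel > thresh,
        "LoG": log > log.mean() + 2 * log.std(),
        "Canny": cv2.Canny(gray, 50, 150) > 0,
    }

# Usage: maps = edge_maps(cv2.imread("image.png", cv2.IMREAD_GRAYSCALE))
```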
2622 Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology
Authors: Jagadish H. Pujar, Pallavi S. Gurjal, Shambhavi D. S, Kiran S. Kunnur
Abstract:
Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by edge detection, which is a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a median, Gaussian or Frost filter. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. Third, the edge-detected medical image is segmented by the method of normalized-cut eigenvectors. The method is validated through experiments on real images, and the proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective, can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast and considerable noise, and has potential for practical use in medical image diagnosis.
Keywords: Image Segmentation, Image smoothing, Edge Detection, Impulsive noise, Gaussian noise, Median filter, Canny edge, Eigen values, Eigen vector.
2621 A Feature-based Invariant Watermarking Scheme Using Zernike Moments
Authors: Say Wei Foo, Qi Dong
Abstract:
In this paper, a novel feature-based image watermarking scheme is proposed. Zernike moments, which have invariance properties, are adopted in the scheme. In the proposed scheme, feature points are first extracted from the host image and several circular patches centered on these points are generated. The patches are used as carriers of the watermark information because they can be regenerated to locate watermark embedding positions even when watermarked images are severely distorted. The Zernike transform is then applied to the patches to calculate local Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments, followed by false alarm analysis. Experimental results show that the quality degradation of the watermarked image is visually transparent. The proposed scheme is very robust against image processing operations and geometric attacks.
Keywords: Image watermarking, Zernike moments, feature point, invariance, robustness.
2620 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques
Authors: Hossein Nezamabadi-pour, Saeid Saryazdi
Abstract:
In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. Our proposed approach is object-based image indexing. For each 8x8 block in the DCT domain, a feature vector is extracted. Then, the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm. Each cluster represents a particular object in the image. We then select the clusters that have the largest membership after clustering. The centroids of the selected clusters are taken as image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.
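A compact sketch of the block-wise DCT feature extraction and k-means grouping described above; keeping a few low-frequency coefficients per block and the cluster counts are assumptions for illustration, not the paper's exact feature definition.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def dct_block_features(gray, block=8, n_coeffs=9):
    """One feature vector per 8x8 block: a few low-frequency 2-D DCT coefficients."""
    h, w = gray.shape[0] // block * block, gray.shape[1] // block * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(gray[i:i + block, j:j + block].astype(float), norm="ortho")
            feats.append(c[:3, :3].ravel()[:n_coeffs])   # low-frequency corner
    return np.array(feats)

def index_vectors(gray, n_clusters=6, n_selected=3, seed=0):
    """Cluster block features and keep centroids of the most populated clusters."""
    feats = dct_block_features(gray)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(feats)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    top = np.argsort(counts)[::-1][:n_selected]
    return km.cluster_centers_[top]      # index these vectors in the database
```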
2619 A Robust Method for Encrypted Data Hiding Technique Based on Neighborhood Pixels Information
Authors: Ali Shariq Imran, M. Younus Javed, Naveed Sarfraz Khattak
Abstract:
This paper presents a novel method for data hiding based on neighborhood pixel information to calculate the number of bits that can be used for substitution, together with a modified Least Significant Bit technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. To find the number of bits that can be used for data hiding, the technique uses the green component of the image, as it is less sensitive to the human eye, making it practically impossible for the human eye to tell whether the image carries hidden data or not. For further security, the application encrypts the data using a custom-designed algorithm before embedding the bits into the image. The overall process consists of three main modules, namely embedding, encryption and extraction.
Keywords: Data hiding, image processing, information security, steganography.
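A simplified sketch of deciding the embedding capacity from neighborhood information in the green channel and then substituting LSBs accordingly. The variance-based rule, the 3x3 neighborhood and the bit limits are assumptions, and the paper's custom encryption stage is omitted.

```python
import numpy as np

def bits_per_pixel(green, max_bits=4):
    """More LSBs are used where the local green-channel variation is high."""
    g = green.astype(float)
    pad = np.pad(g, 1, mode="edge")
    local_std = np.zeros_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            local_std[i, j] = pad[i:i + 3, j:j + 3].std()
    # map local variation to 1..max_bits (assumed rule, not the paper's formula)
    scaled = local_std / (local_std.max() + 1e-9) * max_bits
    return np.clip(scaled.astype(int), 1, max_bits)

def embed(green, payload_bits):
    """Substitute the allowed number of LSBs of each green pixel with payload bits."""
    capacity = bits_per_pixel(green)
    out = green.copy()
    k = 0
    for idx, n in np.ndenumerate(capacity):
        if k >= len(payload_bits):
            break
        chunk = payload_bits[k:k + n]
        k += len(chunk)
        value = int("".join(map(str, chunk)), 2)
        out[idx] = (int(out[idx]) & ~((1 << len(chunk)) - 1)) | value
    return out
```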
2618 Detection and Pose Estimation of People in Images
Authors: Mousa Mojarrad, Amir Masoud Rahmani, Mehrab Mohebi
Abstract:
Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also become an important field in image, signal and vision computing in recent years. In this paper, four body types of people in 2D images are proposed and tested. The system extracts their size and body type (tall fat, short fat, tall thin and short thin) from the image. Fat or thin is determined from the measurements of the human body extracted from the image. The system also extracts body dimensions such as length and width and shows them in the output.
Keywords: Analysis of image processing, Canny edge detection, human body recognition, measurement, pose estimation, 2D human dimension.
2617 Advanced Image Analysis Tools Development for the Early Stage Bronchial Cancer Detection
Authors: P. Bountris, E. Farantatos, N. Apostolou
Abstract:
Autofluorescence (AF) bronchoscopy is an established method to detect dysplasia and carcinoma in situ (CIS). For this reason the "Sotiria" Hospital uses the Karl Storz D-light system. However, in early tumor stages the visualization is not that obvious. With the help of a PC, we analyzed the captured color images by developing certain tools in Matlab®. We used statistical methods based on texture analysis, signal processing methods based on Gabor models and conversion algorithms between device-dependent color spaces. Our belief is that we reduced the error made by the naked eye. The tools we implemented improve the patients' quality of life.
Keywords: Bronchoscopy, digital image processing, lung cancer, texture analysis.
2616 Active Contours with Prior Corner Detection
Authors: U.A.A. Niroshika, Ravinda G.N. Meegama
Abstract:
Deformable active contours are widely used in computer vision and image processing applications for image segmentation, especially in biomedical image analysis. The active contour or "snake" deforms towards a target object by controlling the internal, image and constraint forces. However, if the contour is initialized with a small number of control points, there is a high probability of missing the sharp corners of the object during deformation of the contour. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of significant corners of the object detected using the Harris operator. This reconstructed contour then begins to deform, attracting the snake towards the targeted object without missing the corners. Experimental results with several synthetic images show the ability of the new technique to deal with sharp corners with higher accuracy than traditional methods.
Keywords: Active contours, image segmentation, Harris operator, snakes.
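A scikit-image sketch of the idea of seeding the snake with Harris corners: detected corner points are merged into the initial contour before deformation. The parameters are placeholders, and the crude angular ordering used here to keep the contour closed is far simpler than the paper's construction.

```python
import numpy as np
from skimage import color, filters
from skimage.feature import corner_harris, corner_peaks
from skimage.segmentation import active_contour

def snake_with_corner_prior(image_rgb, n_points=80):
    gray = color.rgb2gray(image_rgb)
    corners = corner_peaks(corner_harris(gray), min_distance=10)   # (row, col) points
    # initial circle around the image centre, plus the detected corners
    cy, cx = np.array(gray.shape) / 2.0
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    circle = np.column_stack([cy + 0.4 * gray.shape[0] * np.sin(t),
                              cx + 0.4 * gray.shape[1] * np.cos(t)])
    init = np.vstack([circle, corners.astype(float)])
    # order points by angle around the centre so the contour stays a closed curve
    ang = np.arctan2(init[:, 0] - cy, init[:, 1] - cx)
    init = init[np.argsort(ang)]
    return active_contour(filters.gaussian(gray, 3), init, alpha=0.01, beta=1.0)
```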
2615 Prediction of a Human Facial Image by ANN using Image Data and its Content on Web Pages
Authors: Chutimon Thitipornvanid, Siripun Sanguansintukul
Abstract:
Choosing the right metadata is critical, as good information (metadata) attached to an image facilitates its visibility among a pile of other images. The image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a technique that is simple but efficient for predicting a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may become a great aid to librarians, researchers and many others for automatically and efficiently identifying a set of human images out of a greater set of images.
Keywords: Metadata, prediction, multi-layer perceptron, human facial image, image mining.
2614 A Smart-Visio Microphone for Audio-Visual Speech Recognition "Vmike"
Abstract:
The practical implementation of audio-visual coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronisation. In this paper, we propose a solution based on a smart CMOS image sensor in order to reduce the hardware integration difficulties. By using on-chip image processing, this smart sensor can calculate the X/Y projections of the captured image in real time. This on-chip projection considerably reduces the volume of the output data. The data-volume reduction permits transmission of the condensed visual information via the same audio channel, using the stereophonic input available on most standard computing devices such as PCs, PDAs and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realised using standard 0.35 um CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a large variety of applications such as biometrics, speech recognition in noisy environments, and vocal control for military personnel or disabled persons.
Keywords: Audio-visual speech recognition, CMOS smart sensor, on-chip image processing.
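The on-chip X/Y projection described above amounts to a row sum and a column sum of each captured frame; the NumPy sketch below shows what the sensor computes and how much the data volume shrinks (the frame size is illustrative only).

```python
import numpy as np

def xy_projections(frame):
    """Row and column sums of a grayscale frame, as computed on-chip by the sensor."""
    frame = frame.astype(np.uint32)
    proj_x = frame.sum(axis=0)   # one value per column
    proj_y = frame.sum(axis=1)   # one value per row
    return proj_x, proj_y

frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
px, py = xy_projections(frame)
# data volume drops from 120*160 samples to 120+160 per frame,
# small enough to send over an audio (stereo line-in) channel
```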
2613 A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms
Authors: Parisut Jitpakdee, Pakinee Aimmanee, Bunyarit Uyyanonvara
Abstract:
Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization; all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as the local optima convergence problem of K-means and the early convergence of the firefly algorithm. Experiments on three commonly used images and the comparison results show that the proposed algorithm surpasses both the baseline k-means clustering technique and the original firefly algorithm.
Keywords: Clustering, color quantization, firefly algorithm, K-means.
2612 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain
Authors: Harpreet Kaur, Sudipta Majumdar
Abstract:
In this paper, a method for image dehazing in the lifting wavelet transform domain is proposed. The lifting Daubechies (D4) wavelet is used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This part is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. The lifting implementation of the wavelet transform has the advantages of auxiliary memory savings, fast implementation and simplicity. Also, the proposed method deals with the near-white scene problem, the blue horizon issue and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy and execution time.
Keywords: Dark channel prior, image dehazing, lifting wavelet transform.
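A minimal sketch of applying the dark channel prior to the wavelet approximation image, using PyWavelets' Daubechies-4 decomposition in place of the paper's lifting implementation. The patch size, ω and the atmospheric-light estimate are common DCP defaults, not necessarily the paper's values, and the fast guided filter refinement is omitted.

```python
import numpy as np
import pywt
from scipy.ndimage import minimum_filter

def dark_channel(rgb, patch=15):
    """Minimum over color channels followed by a local minimum filter."""
    return minimum_filter(rgb.min(axis=2), size=patch)

def dehaze_approximation(rgb, omega=0.95, patch=15):
    """Dark-channel-prior dehazing of the wavelet approximation image (sketch)."""
    # D4 decomposition of each channel; keep the approximation, remember the details
    coeffs = [pywt.dwt2(rgb[..., c].astype(float), "db4") for c in range(3)]
    approx = np.stack([c[0] for c in coeffs], axis=2)
    scale = approx.max() + 1e-9
    approx_norm = approx / scale
    dark = dark_channel(approx_norm, patch)
    # atmospheric light: mean of the brightest dark-channel pixels
    A = approx_norm.reshape(-1, 3)[np.argsort(dark.ravel())[-10:]].mean(axis=0)
    transmission = 1.0 - omega * dark_channel(approx_norm / A, patch)
    t = np.clip(transmission, 0.1, 1.0)[..., None]
    dehazed_approx = (approx_norm - A) / t + A
    # recombine with the untouched detail sub-bands
    out = [pywt.idwt2((dehazed_approx[..., c] * scale, coeffs[c][1]), "db4")
           for c in range(3)]
    return np.stack(out, axis=2)
```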
2611 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering
Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song
Abstract:
The photoacoustic images are obtained from a custom-developed linear array photoacoustic tomography system. The biological specimens are imitated by conducting phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active region-based contour filtering to remove noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is analyzed by evaluating the signal-to-noise ratio and comparing it with other filtering methods.
Keywords: Contour filtering, linear array, photoacoustic tomography, universal back projection.
2610 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases
Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha
Abstract:
Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of the features' dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted with any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and the Oliva and Torralba dataset) and compared with existing approaches. The effectiveness of the proposed approach is confirmed by its significantly improved performance in comparison with the independently evaluated baselines of previously proposed feature fusion approaches.
Keywords: Feature fusion, image retrieval, membership function, normalization.
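A rough sketch of distance-level fusion with per-feature weights computed at query time from the distance distributions; the min-max normalization and the spread-based weighting below are illustrative assumptions, not necessarily the weighting used in the paper.

```python
import numpy as np

def fuse_distances(per_feature_distances):
    """Combine per-feature distance lists into one ranking score per database image.

    per_feature_distances: dict mapping feature name -> 1-D array of distances
    from the query to every database image under that feature.
    """
    fused = None
    for dists in per_feature_distances.values():
        d = np.asarray(dists, dtype=float)
        # normalize each feature's distances into [0, 1] (removes range/dimension effects)
        d_norm = (d - d.min()) / (d.max() - d.min() + 1e-9)
        # weight features whose distance distribution is more spread out (assumed rule)
        weight = d_norm.std()
        fused = weight * d_norm if fused is None else fused + weight * d_norm
    return np.argsort(fused), fused   # ranked database indices, fused scores

# Usage with two hypothetical features:
ranking, scores = fuse_distances({
    "color_hist": np.random.rand(100),
    "texture": np.random.rand(100),
})
```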