Search results for: satellite images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1303

943 Online Optic Disk Segmentation Using Fractals

Authors: Srinivasan Aruchamy, Partha Bhattacharjee, Goutam Sanyal

Abstract:

Optic disk segmentation plays a key role in the mass screening of individuals with diabetic retinopathy and glaucoma. An efficient hardware-based algorithm for optic disk localization and segmentation would aid in developing an automated retinal image analysis system for real-time applications. Herein, a pixel-intensity-based fractal analysis algorithm, implemented on a TMS320C6416DSK DSP board, for automatic localization and segmentation of the optic disk is reported. The experiments were performed on color and fluorescein angiography retinal fundus images. Initially, the images were pre-processed to reduce noise and enhance quality. The retinal vascular tree of the image was then extracted using the Canny edge detection technique. Finally, a pixel-intensity-based fractal analysis was performed to segment the optic disk by tracing the origin of the vascular tree. The proposed method was evaluated on three publicly available retinal image data sets as well as a data set obtained from an eye clinic. The average accuracy achieved is 96.2%. To the best of our knowledge, this is the first work reporting the use of a TMS320C6416DSK DSP board and a pixel-intensity-based fractal analysis algorithm for automatic localization and segmentation of the optic disk. This paves the way for developing devices for the detection of retinal diseases in the future.
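
A minimal Python/OpenCV sketch of the localization idea described in the abstract (not the authors' DSP implementation): noise reduction, Canny extraction of the vascular tree, and a box-counting fractal measure scored over sliding windows. The window size, Canny thresholds and combined scoring rule are illustrative assumptions.

```python
import cv2
import numpy as np

def box_counting_dimension(patch, sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary patch."""
    counts = []
    for s in sizes:
        h, w = patch.shape
        boxes = 0
        for y in range(0, h, s):
            for x in range(0, w, s):
                if patch[y:y+s, x:x+s].any():
                    boxes += 1
        counts.append(max(boxes, 1))
    # Slope of log(count) vs log(1/size) approximates the fractal dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

def locate_optic_disk(fundus_bgr, win=64):
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                      # noise reduction
    gray = cv2.createCLAHE(clipLimit=2.0).apply(gray)   # contrast enhancement
    edges = cv2.Canny(gray, 50, 150)                    # vascular tree edges
    best_score, best_xy = -1.0, (0, 0)
    for y in range(0, gray.shape[0] - win, win // 2):
        for x in range(0, gray.shape[1] - win, win // 2):
            patch = edges[y:y+win, x:x+win]
            # Vessels converge at the disk: combine the fractal dimension of the
            # local edge pattern with the brightness of the (typically bright) disk.
            score = box_counting_dimension(patch) + gray[y:y+win, x:x+win].mean() / 255.0
            if score > best_score:
                best_score, best_xy = score, (x + win // 2, y + win // 2)
    return best_xy  # approximate optic disk centre
```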

Keywords: Color retinal fundus images, Diabetic retinopathy, Fluorescein angiography retinal fundus images, Fractal analysis.

942 Skew Detection Technique for Binary Document Images based on Hough Transform

Authors: Manjunath Aradhya V N, Hemantha Kumar G, Shivakumara P

Abstract:

Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method considers selected characters of the text, which are subjected to thinning and the Hough transform to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, such as English documents, journals, textbooks, documents in different languages, documents with different fonts, and documents with different resolutions, to demonstrate the robustness of the proposed method. The experimental results show that the proposed method is more accurate than well-known existing methods.
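
A small sketch of the core skew-estimation step, assuming a binarized document image; the character-selection and thinning stages are omitted and the Hough parameters are illustrative.

```python
import cv2
import numpy as np

def estimate_skew_angle(binary_doc):
    """binary_doc: uint8 image, text pixels = 255 on a black background."""
    edges = cv2.Canny(binary_doc, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)
    if lines is None:
        return 0.0
    angles = []
    for rho, theta in lines[:, 0]:
        angle = np.degrees(theta) - 90.0   # 0 degrees = horizontal text line
        if abs(angle) < 45:                # keep near-horizontal candidates
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0

def deskew(binary_doc):
    angle = estimate_skew_angle(binary_doc)
    h, w = binary_doc.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(binary_doc, M, (w, h))
```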

Keywords: Optical Character Recognition, Skew angle, Thinning, Hough transform, Document processing

941 Low Resolution Face Recognition Using Mixture of Experts

Authors: Fatemeh Behjati Ardakani, Fatemeh Khademian, Abbas Nowzari Dalini, Reza Ebrahimpour

Abstract:

Human activity is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where, for many reasons, often only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low-resolution face recognition system based on mixture of experts neural networks. In order to produce the low-resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using the nearest neighbor interpolation method; applying the bicubic interpolation method then yields enhanced images, which are given to the Principal Component Analysis feature extractor. Comparison with some of the most closely related methods indicates that the proposed model yields excellent recognition rates in low-resolution face recognition: 100% for the training set and 96.5% for the test set.
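
A short sketch of the input preparation and PCA feature extraction described above (the mixture-of-experts classifier itself is omitted); the image sizes follow the abstract, while the number of principal components is an assumption.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

def prepare_low_res(face48):
    """48x48 grayscale face -> simulated low-res input, then bicubic enhancement."""
    low = cv2.resize(face48, (12, 12), interpolation=cv2.INTER_NEAREST)   # degrade
    enhanced = cv2.resize(low, (48, 48), interpolation=cv2.INTER_CUBIC)   # enhance
    return enhanced

def pca_features(train_faces, n_components=40):
    """train_faces: list of 48x48 arrays; returns the fitted PCA and projected features."""
    X = np.stack([prepare_low_res(f).ravel() for f in train_faces]).astype(np.float64)
    pca = PCA(n_components=n_components)
    feats = pca.fit_transform(X)
    return pca, feats
```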

Keywords: Low resolution face recognition, Multilayered neural network, Mixture of experts neural network, Principal component analysis, Bicubic interpolation, Nearest neighbor interpolation.

940 Morphing Human Faces: Automatic Control Points Selection and Color Transition

Authors: Stephen Karungaru, Minoru Fukumi, Norio Akamatsu

Abstract:

In this paper, we propose a morphing method by which face color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face detection neural network, edge detection and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is done using a color Gaussian model that calculates the color for each particular frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and to speed up the morphing. This method results in "very smooth" morphs and is fast to process.

Keywords: color transition, genetic algorithms, morphing, warping

939 Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient

Authors: F. F. Howard, C. B. Boye, I. Yakubu, J. S. Y. Kuma

Abstract:

Remote sensing, through a procedure known as image classification, is one of the ways in which land use and land cover maps can be produced. Numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using Landsat 8 satellite images acquired in December 2020 covering the study area. The Landsat image was downloaded from the USGS. The Landsat image, with 30 m resolution, was geo-referenced to the WGS_84 datum and the Universal Transverse Mercator (UTM) Zone 30N coordinate projection system. A radiometric correction was applied to the image to reduce noise. This study consists of two sections: the Land Use/Land Cover (LULC) classification and the accuracy assessment using the confusion and contingency matrices and the Kappa coefficient. The LULC classes were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). An overall accuracy of 97.87% and a Kappa coefficient (K) of 97.3% were obtained for the confusion matrix, while an overall accuracy of 95.7% and a Kappa coefficient of 0.947 were obtained for the contingency matrix. The Kappa coefficients were rated as substantial; hence, the classified image is fit for further research.
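
For reference, a worked sketch of the two accuracy measures named above, computed directly from an error matrix; the matrix values are illustrative, not the study's data.

```python
import numpy as np

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Example 3-class confusion matrix (rows = reference, columns = classified).
cm = np.array([[50,  2,  1],
               [ 3, 40,  2],
               [ 1,  1, 45]])
print(overall_accuracy(cm), kappa_coefficient(cm))
```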

Keywords: Confusion matrix, contingency matrix, kappa coefficient, land use/land cover, accuracy assessment.

938 A Novel Steganographic Method for Gray-Level Images

Authors: Ahmad T. Al-Taani, Abdullah M. AL-Issa

Abstract:

In this work we propose a novel steganographic method for hiding information within the spatial domain of a grayscale image. The proposed approach works by dividing the cover image into blocks of equal size and then embedding the message in the edge regions of each block, depending on the number of ones in the left four bits of the pixel. The approach is tested on a database consisting of 100 different images. Experimental results, compared with other methods, show that the proposed approach hides a larger amount of information and gives stego-images of good visual quality to the human eye.
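
For context, a minimal sketch of plain least-significant-bit substitution in the spatial domain of a grayscale image; the block- and edge-adaptive rule described in the abstract is more selective, and this only illustrates the basic embedding mechanism.

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """cover: uint8 grayscale array; message_bits: iterable of 0/1 values."""
    stego = cover.copy().ravel()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit      # overwrite the least significant bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return [int(p & 1) for p in stego.ravel()[:n_bits]]
```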

Keywords: Data Embedding, Cryptography, Watermarking, Steganography, Least Significant Bit, Information Hiding.

937 Evolved Strokes in Non Photo–Realistic Rendering

Authors: Ashkan Izadi, Vic Ciesielski

Abstract:

We describe work on an evolutionary computing algorithm for non-photorealistic rendering of a target image. The renderings are produced by genetic programming. We have used two different types of strokes, "empty triangle" and "filled triangle", rendered in color. We compare empty and filled triangular strokes to find which one generates more aesthetically pleasing images. We found that filled triangular strokes have better fitness and generate more aesthetic images than empty triangular strokes.

Keywords: Artificial intelligence, Evolutionary programming, Genetic programming, Non-photorealistic rendering.

936 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method

Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri

Abstract:

Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost the image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed, considering the gradient variations, for individual pixels of the image. Subjective and objective image quality assessments are used to compare the performance of the proposed method both with the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method has a better performance in comparison to the other existing methods.
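
A hedged sketch of an adaptive unsharp mask in this spirit: the per-pixel gain is scaled by the local gradient magnitude so that flat regions receive less sharpening. The gain mapping and parameter values are assumptions, not the paper's exact rule.

```python
import cv2
import numpy as np

def adaptive_unsharp(img, sigma=2.0, max_gain=1.5):
    img = img.astype(np.float64)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    highpass = img - blurred                       # high-frequency components
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    grad = np.sqrt(gx**2 + gy**2)
    gain = max_gain * grad / (grad.max() + 1e-9)   # per-pixel gain factor
    out = img + gain * highpass
    return np.clip(out, 0, 255).astype(np.uint8)
```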

Keywords: Unsharp masking, blurred image, sub-region gradient, image enhancement.

935 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize these regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at the boundaries of the blocks. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram with a limited extent of equalization, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. This paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared to conventional approaches to show its superiority.
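
A minimal sketch of mean-based sub-histogram equalization: the histogram is split at the image mean and each part is equalized only within its own intensity range, which limits over-equalization. A single split point is used here as a simplification of the multi-segment scheme described above.

```python
import numpy as np

def sub_histogram_equalize(img):
    img = img.astype(np.uint8)
    m = int(img.mean())
    out = np.empty_like(img)
    for lo, hi, mask in ((0, m, img <= m), (m + 1, 255, img > m)):
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / vals.size
        # Map each sub-histogram only onto its own [lo, hi] range.
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[vals - lo]
    return out
```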

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise

934 Performance Improvement in the Bivariate Models by using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications

Authors: R. Senthilkumar

Abstract:

Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting one of the popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and also compares different bivariate models used for image-denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512x512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise at powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB is added to the six standard images, and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
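
As a baseline for the shrinkage functions compared above, a sketch of VisuShrink-style soft thresholding with the universal threshold; the bivariate/BiShrink models additionally exploit parent-child coefficient dependencies, which this sketch does not.

```python
import numpy as np
import pywt

def visushrink_denoise(noisy, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Estimate the noise sigma from the finest diagonal subband (median rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))        # universal threshold
    new_coeffs = [coeffs[0]]                                # keep the approximation
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thr, mode='soft') for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)
```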

Keywords: BiShrink, Image-Denoising, PSNR, Shrinkage function

933 A General Segmentation Scheme for Contouring Kidney Region in Ultrasound Kidney Images using Improved Higher Order Spline Interpolation

Authors: K. Bommanna Raja, M.Madheswaran, K.Thyagarajah

Abstract:

A higher-order spline interpolated contour, obtained by up-sampling homogeneously distributed coordinates, for segmentation of the kidney region in different classes of ultrasound kidney images is developed and presented in this paper. The performance of the proposed method is measured and compared with a modified snake model contour, a Markov random field contour and an expert-outlined contour. The method is validated against the expert-outlined contour using maximum coordinate distance, Hausdorff distance and mean radial distance metrics. The results obtained reveal that the proposed scheme provides an optimal contour that agrees well with the expert-outlined contour. Moreover, this technique helps to preserve the pixels of interest, which specifically define the functional characteristics of the kidney. This opens various possibilities for implementing a computer-aided diagnosis system exclusively for ultrasound kidney images.

Keywords: Ultrasound kidney image, kidney segmentation, active contour, Markov random field, higher order spline interpolation.

932 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been established under the administration of the Chinese government. The city's planning paradigm is primarily affected by its spatial structure and human behavior. The urban agglomeration center is divided into several groups and sub-centers, and under this arrangement the city's own laws of development tend to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been used to improve data cleaning for sources such as business data, traffic data and population data. Before data mining was available, government data were collected by traditional means and then analyzed through city-relationship research, delaying the timeliness of urban studies; this matters especially for the contemporary city, where data are internet-based and updated very quickly. The city's points of interest (POIs) extracted through data mining serve as a data source affecting city design, while satellite remote sensing is used as a reference; analyzing the city in both directions breaks the administrative paradigm of government and restores urban research. Therefore, the use of data mining in urban analysis is very important. Satellite remote sensing data of Shenzhen from July 2018, acquired by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified data on Shenzhen using data crawler technology. Shenzhen heat island data and points of interest were simulated and analyzed in a GIS platform to discover the main features of the distribution of functional areas. Shenzhen extends in an east-west direction, and the city's main streets follow the direction of city development; therefore, the functional areas of the city are also distributed in the east-west direction. The urban heat island can be expressed as a heat map over the functional urban areas, and the regional POIs correspond to it. The results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of urban climate. Using urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so the distribution of the population corresponds with the distribution of the urban heat island.

Keywords: POI, satellite remote sensing, the population distribution, urban heat island thermal map.

931 Machine Learning Approach for Identifying Dementia from MRI Images

Authors: S. K. Aruna, S. Chitra

Abstract:

This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing morphological changes helps in understanding disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and that is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters at orientations of 0, 30, 60 and 90 degrees, and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features. Independent Component Analysis (ICA) selects the features. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. The results show that the proposed feature-fusion classifier achieves higher classification accuracy.
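
A sketch of the feature extraction and classification chain outlined above, using scikit-image GLCM properties, Gabor responses at the stated orientations, and an SVM; the GLCM distances, Gabor frequency and kernel choice are assumptions, and the ICA selection step is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor
from sklearn.svm import SVC

def slice_features(img_uint8):
    feats = []
    # Gabor filter responses at 0, 30, 60 and 90 degrees.
    for theta in (0, np.pi / 6, np.pi / 3, np.pi / 2):
        real, _ = gabor(img_uint8, frequency=0.2, theta=theta)
        feats += [real.mean(), real.var()]
    # GLCM texture properties.
    glcm = graycomatrix(img_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ('contrast', 'correlation', 'energy', 'homogeneity'):
        feats.append(graycoprops(glcm, prop)[0, 0])
    return np.array(feats)

def train_classifier(images, labels):
    X = np.stack([slice_features(im) for im in images])
    return SVC(kernel='rbf').fit(X, labels)
```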

Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.

930 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on Infrared (IR) and Visible Image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images capture reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on the Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In developing our image fusion approach, we observed several challenges with popular image fusion methods: while the high computational cost and complex processing steps of such algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper offer good results with minimal time complexity.
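
An illustrative Python sketch (the paper's implementation is in MATLAB) of a basic multi-scale transform fusion rule for a registered visible/IR pair: average the approximation coefficients and keep the larger-magnitude detail coefficient at each position. The region-based selection and consistency-verification steps described above are omitted.

```python
import numpy as np
import pywt

def mst_fuse(visible, infrared, wavelet='db2', level=3):
    cv_ = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    ci_ = pywt.wavedec2(infrared.astype(float), wavelet, level=level)
    fused = [(cv_[0] + ci_[0]) / 2.0]                  # approximation: average
    for dv, di in zip(cv_[1:], ci_[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))   # details: max-abs rule
    return pywt.waverec2(fused, wavelet)
```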

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

929 Weld Defect Detection in Industrial Radiography Based Digital Image Processing

Authors: N. Nacereddine, M. Zelmat, S. S. Belaïfa, M. Tridi

Abstract:

Industrial radiography is a well-known technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and operating time, it does suffer from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, avoiding reliance on human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the images considered, and so that the detected defect region represents the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.

Keywords: Digital image processing, global and local approaches, radiographic film, weld defect.

928 Effect of Geographical Co-Ordinates on the Parameters in the Rain Rate Model for Radio Propagation Applications

Authors: Olatinwo M. O., Oyeleke Olaosebikan, David Henry O.

Abstract:

Rain attenuation plays a major role in the design of satellite and terrestrial microwave radio links; hence, a good knowledge of its effect is of great interest to engineers and scientists, since a high level of accuracy is often required of the rain-rate distribution that expresses rain rate from the lowest value to the highest. This study proposes a model to express the rain-rate parameters alpha (α) and beta (β) as a function of geographical location at 0.01% of the time. The tropical locations used in developing the model were Ilorin, Ile-Ife, Douala, Dar es Salaam, Nairobi, Lusaka, and Brasilia.

This expression clearly confirms the variability of rainfall from place to place. When a consistency test was carried out using the expression to generate rain rates for each location examined, the results obtained were reliable for rain intensities between 5 mm/h and 200 mm/h. The variability of α and β with latitude also shows that different latitudes have different cumulative rain distributions. The model proposed in this study would be a useful tool for radio engineers, since the effect of precipitation is among the factors to consider when designing satellite and terrestrial microwave radio links and communication systems.

Keywords: Rain rate, attenuation, geographical location.

927 Enhancement of Stereo Video Pairs Using SDNs To Aid In 3D Reconstruction

Authors: Lewis E. Hibell, Honghai Liu, David J. Brown

Abstract:

This paper presents the results of enhancing the images of a left-right stereo pair in order to increase the resolution of a 3D representation of a scene generated from that same pair. A new neural network structure known as a Self Delaying Dynamic Network (SDN) has been used to perform the enhancement. The advantage of SDNs over existing techniques such as bicubic interpolation is their ability to cope with motion and noise effects. SDNs are used to generate two high-resolution images, one based on frames taken from the left view of the subject and one based on the frames from the right. This new high-resolution stereo pair is then processed by a disparity map generator. The disparity map generated is compared to two other disparity maps generated from the same scene: the first is a map generated from an original high-resolution stereo pair, and the second is a map generated using a stereo pair that has been enhanced using bicubic interpolation. The maps generated using the SDN-enhanced pairs match the target maps more closely. The addition of extra noise to the input images is less problematic for the SDN system, which is still able to outperform bicubic interpolation.

Keywords: Genetic Evolution, Image Enhancement, Neural Networks, Stereo Vision

926 Face Detection in Color Images using Color Features of Skin

Authors: Fattah Alizadeh, Saeed Nalousi, Chiman Savari

Abstract:

Because of increasing demands for security in today's society, and because of growing attention to machine vision, biometric research, pattern recognition and data retrieval in color images, face detection has found more applications. In this article we present a scientific approach to modeling human skin color, and also offer an algorithm that detects faces within color images by combining skin features with a threshold determined in the model. The proposed model is based on statistical data in different color spaces. The algorithm, using the specified color threshold, first divides image pixels into two groups, skin pixels and non-skin pixels, and then, based on some geometric features of the face, decides which areas belong to a face. Two main results were obtained from this research: first, the proposed model can easily be applied to different databases and color spaces to establish a proper threshold; second, our algorithm can adapt itself to runtime conditions, and its results demonstrate desirable progress in comparison with similar approaches.
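
A minimal sketch of threshold-based skin segmentation in the YCrCb space; the Cr/Cb ranges below are common rule-of-thumb values rather than the statistically derived thresholds of the paper, and the geometric face-verification step is omitted.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)            # skin vs. non-skin pixels
    # Clean small holes and specks before geometric analysis of candidate regions.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```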

Keywords: face detection, skin color modeling, color, colorful images, face recognition.

925 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner

Authors: A. Umemuro, M. Sato, M. Narita, S. Hori, S. Sakurai, T. Nakayama, A. Nakazawa, T. Ogura

Abstract:

Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.

Keywords: EEG scanner, eye-detector, mammography, observers.

924 Improving Similarity Search Using Clustered Data

Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong

Abstract:

This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement for a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset, which comes from a different domain, causing limited accuracy in measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly small amount of data: clustered data, in which each cluster contains a set of visually similar images. In order to measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in evaluations for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can greatly reduce the amount of training data, by an order of magnitude, while providing a reliable similarity metric.

Keywords: Visual search, deep learning, convolutional neural network, machine learning.

923 A Novel Prostate Segmentation Algorithm in TRUS Images

Authors: Ali Rafiee, Ahad Salimi, Ali Reza Roosta

Abstract:

Prostate cancer is one of the most frequent cancers in men and is a major cause of mortality in most countries. In many diagnostic and treatment procedures for prostate disease, accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images is required. This is a challenging and difficult task due to weak prostate boundaries, speckle noise and the short range of gray levels. In this paper, a novel method for automatic prostate segmentation in TRUS images is presented. The method involves preprocessing (edge-preserving noise reduction and smoothing) and prostate segmentation. Speckle reduction has been achieved using a stick filter, and the top-hat transform has been implemented for smoothing. A feed-forward neural network and local binary patterns are used together to find a point inside the prostate object. Finally, the prostate boundary is extracted using the inside point and an active contour algorithm. A number of experiments were conducted to validate this method, and the results show that the new algorithm extracts the prostate boundary with an MSE of less than 4.6% relative to the boundary provided manually by physicians.

Keywords: Prostate segmentation, stick filter, neural network, active contour.

922 Igbo Art: A Reflection of the Igbo’s Visual Culture

Authors: David Osa-Egonwa

Abstract:

Visual culture is the expression of the norms and social behavior of a society in visual images. A reflection simply shows you how you look when you stand before a mirror, clear water or a stream. The mirror does not alter, improve or distort your original appearance, nor does it show a caricature of what stands before it; this is the case with visual images created by a tribe or society. The 'uli' is a hand-drawn body design done on Igbo women and speaks of a culture of body adornment, a practice that is appreciated by that tribe. The use of the pattern of the gliding python snake, 'ije eke' or 'ijeagwo', for wall painting speaks of Igbo culture as one that appreciates wall paintings based on these patterns. Modern life brought a lot of change to the Igbo-speaking people of Nigeria. Change cloaked in the garment of Westernization has influenced the culture of the Igbos. This has resulted in a break in cultural practice that has also affected art produced by the Igbos. Before the colonial masters arrived and changed the established culture practiced by the Igbos, visual images were created that retained the culture of this people. To bring this point to light, this paper adopts a historical method. A large number of works produced during the pre- and post-colonial eras, ranging from sculptural pieces to paintings and other artifacts, were studied carefully, and it was discovered that these visual images hold the culture, or aspects of the culture, of the Igbos in their renditions and can rightly serve as a mirror of Igbo visual culture.

Keywords: Artistic renditions, historical method, Igbo visual culture, changes.

921 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and applications of procedures in various fields. One such field is the biomedical field, for prognosis as well as diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also causes confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among other techniques existing in the field, our proposed system tends to be more effective, as it depends on new methodologies that have been proven to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent rate of detection that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. During preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation and Bone Mineral Density (BMD) are then computed and stored in a database. This dataset is trained with inflamed and non-inflamed values; with the help of a neural network, all new images are checked for their status, and rough sets are implemented for further reduction.

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.

920 An Efficient Clustering Technique for Copy-Paste Attack Detection

Authors: N. Chaitawittanun, M. Munlin

Abstract:

Due to the rapid advancement of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. Many digital images are edited for a specific purpose, making them more difficult to distinguish from their originals. We propose a clustering method to detect copy-move image forgery in JPEG, BMP, TIFF, and PNG images. The process starts with reducing the color of the photos. Then, we use the clustering technique to partition the measured data by Hausdorff distance. The results show that the proposed method is capable of inspecting the image file and correctly identifying the forgery.
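
A small sketch of the Hausdorff distance used when comparing candidate regions; the color-reduction and clustering steps are omitted and the inputs are illustrative point sets.

```python
import numpy as np

def hausdorff_distance(A, B):
    """A, B: arrays of shape (n, d) and (m, d) of feature points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two nearly identical blocks yield a small distance, suggesting a copy-paste pair.
block1 = np.array([[10, 12], [11, 14], [13, 15]], dtype=float)
block2 = block1 + 0.5
print(hausdorff_distance(block1, block2))
```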

Keywords: Image detection, forgery image, copy-paste.

919 3D Brain Tumor Segmentation Using Level-Sets Method and Meshes Simplification from Volumetric MR Images

Authors: K. Aloui, M. S. Naceur

Abstract:

The main objective of this paper is to provide an efficient tool for delineating brain tumors in three-dimensional magnetic resonance images. To achieve this goal, we essentially use a level-sets approach to delineate three-dimensional brain tumors. We then introduce a compression scheme for 3D brain structures based on mesh simplification, adapted to the specific needs of telemedicine and to the capacities restricted by network communication. We present here the main stages of our system, along with preliminary results that are very encouraging for clinical practice.

Keywords: Medical imaging, level-sets, compression, mesh simplification, telemedicine.

918 Efficient Mean Shift Clustering Using Exponential Integral Kernels

Authors: S. Sutor, R. Röhr, G. Pujolle, R. Reda

Abstract:

This paper presents a highly efficient algorithm for detecting and tracking humans and objects in video surveillance sequences. Mean shift clustering is applied on background-differenced image sequences. For efficiency, all calculations are performed on integral images. Novel corresponding exponential integral kernels are introduced to allow the application of non-uniform kernels for clustering, which dramatically increases robustness without giving up the efficiency of the integral data structures. Experimental results demonstrating the power of this approach are presented.
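
A brief sketch of the integral-image idea that keeps the kernel computations cheap: any rectangular sum is obtained from four lookups, independent of the rectangle size.

```python
import numpy as np

def integral_image(img):
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from the integral image ii (four lookups)."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total
```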

Keywords: Clustering, Integral Images, Kernels, Person Detection, Person Tracking, Intelligent Video Surveillance.

917 Image Magnification Using Adaptive Interpolationby Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available. Therefore, high-resolution images are needed in order to produce high-quality display images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is that of preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts. A definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that tries to take into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest neighbor (NN), bilinear (BL) and bicubic (BC) interpolation. The quantitative results are competitive and consistent with NN, BL, BC and the others.

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.

916 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.

Keywords: Computer vision, deep learning, object detection, semiconductor.

915 A New Voting Approach to Texture Defect Detection Based on Multiresolutional Decomposition

Authors: B. B. M. Moasheri, S. Azadinia

Abstract:

Wavelets have provided researchers with significant positive results in the texture defect detection domain. The weak point of wavelets is that they are one-dimensional by nature, so they are not efficient enough to describe and analyze two-dimensional functions. In this paper we present a new method to detect defects in texture images by using the curvelet transform. Simulation results of the proposed method on a set of standard texture images confirm its correctness. Comparison of the results obtained indicates the superior ability of the curvelet transform, relative to the wavelet transform, in describing discontinuities in two-dimensional functions.

Keywords: Curvelet, Defect detection, Wavelet.

914 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

This study aimed to determine a procedure for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired in the 2013 production season for Path/Row 181/32 was used. Four different seasonal images were generated utilizing the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed, and the results of these maps were compared with the Turkish Statistical Institute's (TurkSTAT; TSI) actual rice cultivation area records. Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherency with the TSI results. Additionally, rice areas on slopes over 4° were considered mis-classified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August, and September separately), to test whether they could be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and accuracy assessment results, mis-classified pixels were eliminated from this map. According to the results, 83151.5 ha of rice areas exist within the study area. However, this result is higher than the TSI records by 12702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in the Meric district. Using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal given the relatively low resolution of the images. Thus, employing images with higher spectral, spatial, temporal and radiometric resolutions may provide more reliable results.
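
A short sketch of the spectral indices used above, computed from Landsat 8 reflectance bands (B4 = red, B5 = NIR, B6 = SWIR1 in the standard OLI designations); the inputs are assumed to be pre-scaled reflectance arrays, and the min-max thresholding mirrors the raster-calculator step described in the abstract.

```python
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)

def lswi(nir, swir1):
    return (nir - swir1) / (nir + swir1 + 1e-9)

def rice_mask(red, nir, swir1, ndvi_range, lswi_range):
    """Flag pixels whose indices fall inside the min-max ranges sampled
    from known rice zones."""
    v, w = ndvi(red, nir), lswi(nir, swir1)
    return ((v >= ndvi_range[0]) & (v <= ndvi_range[1]) &
            (w >= lswi_range[0]) & (w <= lswi_range[1]))
```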

Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.
