Search results for: binary images
857 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program
Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata
Abstract:
The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed in MATLAB and includes a user-friendly graphical interface with several tools, such as image property adjustment and results display. The quadtree decomposition technique was used as the image segmentation algorithm to create optimal geometries from Computed Tomography (CT) images for proton-beam dose calculations. The technique yields a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell-reduction algorithm can be used to merge neighboring cells of the same material. The method was validated in two ways: first, against experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, against data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while a region of interest is selected manually, and plots the proton-beam dose distribution superimposed on the CT images.
Keywords: Monte Carlo, CT images, quadtree decomposition, interface program, proton beam.
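A minimal sketch of the quadtree decomposition step (not the authors' MATLAB implementation); the intensity-range split criterion, the threshold, and the minimum cell size are assumptions:

```python
import numpy as np

def quadtree(img, x, y, size, thresh, min_size, leaves):
    """Recursively split a square region until it is nearly homogeneous.

    A region whose intensity range is below `thresh` (or whose side
    reaches `min_size`) becomes one leaf cell; otherwise it is split
    into four quadrants.
    """
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        leaves.append((x, y, size))          # one homogeneous cell
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(img, x + dx, y + dy, half, thresh, min_size, leaves)

# Toy CT slice: homogeneous background with one dense insert.
ct = np.zeros((256, 256))
ct[96:160, 96:160] = 1000.0
cells = []
quadtree(ct, 0, 0, 256, thresh=50.0, min_size=4, leaves=cells)
print(len(cells), "cells instead of", 256 * 256, "voxels")
```

Fine cells cluster around the dense insert's edges while the homogeneous background collapses into a few large cells, which is exactly the cell-count reduction the abstract describes.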
856 An Additive Watermarking Technique in Gray Scale Images Using Discrete Wavelet Transformation and Its Analysis on Watermark Strength
Authors: Kamaldeep Joshi, Rajkumar Yadav, Ashok Kumar Yadav
Abstract:
Digital watermarking is a procedure to prevent unauthorized access to and modification of personal data. It ensures that communication between two parties remains secure and undetected. This paper investigates the effect of watermark strength in grayscale images using an additive Discrete Wavelet Transformation (DWT) technique. In this method, the grayscale host image is decomposed into four sub-bands: LL (Low-Low), HL (High-Low), LH (Low-High) and HH (High-High), and the watermark is inserted into the LL sub-band using the DWT. Since the image is divided into four sub-bands, a watermark equal in size to the LL sub-band is inserted, and the results are discussed. LL represents the average component of the host image and contains most of its information. Two kinds of experiments are performed: in the first, the same watermark is embedded in different images; in the second, the strength of the watermark is varied by a scaling factor s (s = 10, 20, 30, 40, 50) and it is inserted into the same image.
Keywords: Watermarking, discrete wavelet transform, scaling factor, steganography.
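A minimal sketch of the additive LL-band embedding described above, using PyWavelets; the Haar wavelet is an assumption, since the abstract does not name the mother wavelet:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in host image
LL, (LH, HL, HH) = pywt.dwt2(host, "haar")                   # one-level 2D DWT

# Watermark the size of the LL sub-band, embedded additively with strength s.
watermark = rng.standard_normal(LL.shape)
s = 10.0                                                     # scaling factor from the paper
marked = pywt.idwt2((LL + s * watermark, (LH, HL, HH)), "haar")
```

Increasing s strengthens the watermark at the cost of visible distortion, which is the trade-off the second experiment measures.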
855 A Unified Robust Algorithm for Detection of Human and Non-human Object in Intelligent Safety Application
Authors: M A Hannan, A. Hussain, S. A. Samad, K. A. Ishak, A. Mohamed
Abstract:
This paper presents a general trainable framework for fast and robust detection and verification of upright human faces and non-human objects in static images. To enhance the performance of the detection process, the technique we develop combines a fast neural network (FNN) and a classical neural network (CNN). In the FNN, a useful correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy. This enables the use of the Fourier transform, which significantly speeds up detection. The CNN is responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, feeding false detections back into the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method obtains a high detection rate and a low false positive rate in detecting both human faces and non-human objects.
Keywords: Algorithm, detection of human and non-human object, FNN, CNN, image training.
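A sketch of the Fourier-domain trick behind the FNN's speed: correlating fixed hidden-neuron weights with the whole image in the frequency domain rather than window by window. The array sizes are illustrative assumptions:

```python
import numpy as np

def fft_correlate(image, kernel):
    """Correlate `kernel` with every location of `image` via the FFT.

    Frequency-domain correlation replaces an O(N^2 k^2) sliding window
    with O(N^2 log N) work, which is the speed-up exploited when fixed
    hidden-neuron weights scan an entire image.
    """
    F = np.fft.rfft2(image)
    K = np.fft.rfft2(kernel, s=image.shape)       # zero-pad kernel to image size
    return np.fft.irfft2(F * np.conj(K), s=image.shape)

image = np.random.rand(256, 256)
weights = np.random.rand(20, 20)                  # stand-in hidden-neuron weights
response = fft_correlate(image, weights)
print(np.unravel_index(response.argmax(), response.shape))  # strongest response
```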
854 The SAFRS System: A Case-Based Reasoning Training Tool for Capturing and Re-Using Knowledge
Authors: Souad Demigha
Abstract:
The paper aims to specify and build a learning-support system in radiology-senology (breast radiology) dedicated to assisting junior radiologist-senologists in their radiology-senology activity, based on the experience of expert radiologist-senologists. This system is named SAFRS (system supporting the training of radiologist-senologists). It is based on the exploitation of radiologic-senologic images (primarily mammograms, but also echographic images and MRI) and their related clinical files. The aim of such a system is to support education in breast cancer screening. In order to acquire this expert radiologist-senologist knowledge, we have used the case-based reasoning (CBR) approach. The SAFRS system will promote the evolution of teaching in radiology-senology by offering junior radiologist trainees an advanced pedagogical product. It will strengthen knowledge together with a very elaborate presentation of results; the know-how derives from all of these factors.
Keywords: Learning support, radiology-senology, training, education, CBR, accumulated experience.
853 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is therefore adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text is encrypted using any enciphering technique, adding more difficulty for attackers. Experiments showed an embedding speed of more than double that of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: Steganography, watermarking, private keys, time complexity measurements.
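A sketch of the YIQ round trip that frames the embedding step (the paper embeds text elements as noise in this space); the conversion matrix is the common NTSC definition, not taken from the paper:

```python
import numpy as np

# One common RGB -> YIQ matrix (NTSC); coefficients vary slightly by source.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """Convert an (H, W, 3) RGB image to YIQ with one matrix multiply per pixel."""
    return img @ RGB2YIQ.T

def yiq_to_rgb(img):
    return img @ np.linalg.inv(RGB2YIQ).T

rgb = np.random.rand(64, 64, 3)
yiq = rgb_to_yiq(rgb)    # text-watermark bits would be embedded here as low-power noise
back = yiq_to_rgb(yiq)
print(np.abs(back - rgb).max())   # round-trip error is at floating-point level
```

The speed claim in the abstract rests on this conversion being a single linear map, cheaper per pixel than bit-plane manipulation in each color channel.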
852 Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery
Authors: Evans Belly, Imdad Rizvi, M. M. Kadam
Abstract:
Satellite imagery is one of the emerging technologies extensively utilized in applications such as detection/extraction of man-made structures, monitoring of sensitive areas, and creating graphic maps. The approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building and non-building regions (roads, vegetation etc.) are investigated, with the focus on building extraction. Once all landscape regions are collected, a trimming process eliminates regions arising from non-building objects. Finally, the label method is used to extract the building regions; the label method may be altered for efficient building extraction. The images used for the analysis come from sensors with a resolution below 1 meter (VHR). This method provides an efficient way to produce good results. The additional overhead of mid-processing is eliminated, without compromising the quality of the output, to ease the processing steps required and the time consumed.
Keywords: Building detection, shadow detection, landscape generation, label, partitioning, very high resolution satellite imagery.
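A minimal sketch of the labeling step, with SciPy's connected-component labeling standing in for the paper's label method; the mask and the size threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

# Binary mask left after shadow/vegetation trimming (illustrative stand-in).
mask = np.zeros((128, 128), dtype=bool)
mask[10:40, 10:50] = True      # one building footprint
mask[70:90, 60:110] = True     # another; arbitrary shapes work the same way

labels, n = ndimage.label(mask)                  # label connected regions
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 200))  # drop tiny non-building blobs
print(n, "regions,", int(keep.sum()), "pixels kept")
```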
851 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots
Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar
Abstract:
Machine vision has been widely used in agriculture in recent years as a tool to promote the automation of processes and increase levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot in-between tree rows. The proposed algorithm was developed in MATLAB and uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows in an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made it possible to construct a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different characteristics of quality, and the results showed that the proposed method can successfully detect a path in different types of environments.
Keywords: Agricultural mobile robot, image processing, path recognition, Hough transform.
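A sketch of the edge-plus-Hough core of the pipeline using scikit-image in place of the authors' MATLAB code; the synthetic rows and the mid-line rule are assumptions:

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

# Synthetic stand-in for a thresholded, eroded orchard image:
# two bright "tree row" bands with a dark corridor between them.
img = np.zeros((200, 200))
img[:, 40:60] = 1.0
img[:, 140:160] = 1.0

edges = canny(img)                         # edge map of the row boundaries
h, theta, d = hough_line(edges)            # accumulate votes for straight lines
for _, angle, dist in zip(*hough_line_peaks(h, theta, d)):
    print(f"line: angle={np.degrees(angle):.1f} deg, offset={dist:.1f} px")
# The mid-line between the two strongest row edges gives the path to follow.
```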
850 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry
Authors: Naoto Suzuki
Abstract:
In Japan, the most common causes of eyesight loss are glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with fundus camera, fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose the variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the optical engineering software OpTaliX-LT 10.8 from the German company Optenso. We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. The artificial eye was made of a plano-convex lens, a black spacer, and a hemispherical cup; the cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created with C++Builder 10.2 to display the examination target on the monitor and save the examination data. The retinal fundus was displayed on the monitor at a length and width of 1 mm, with resolutions of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with white in the central area, and was rotated 10 degrees from one side to the other; the movement was captured five times as motion videos. Three static images were extracted from one of the captured videos, showing the artificial eye facing the center, right, and left. The three images were processed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, we added a gravity method to the program to compute the centroid of connected components. From the three images, the processing could calculate the center position.
Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.
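A sketch of the centroid ("gravity") step in Python with SciPy rather than Scilab; the synthetic frame and the threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

# Stand-in for one extracted video frame of the artificial eye
# (dark fundus marker on a bright background).
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[40:80, 60:100] = 30

binary = frame < 128                       # binarization
binary = ndimage.binary_opening(binary)    # morphological cleanup
labels, n = ndimage.label(binary)          # connected components
cy, cx = ndimage.center_of_mass(binary, labels, index=1)  # gravity (centroid) method
print(f"fundus center estimate: x={cx:.1f}, y={cy:.1f}")
```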
849 An Optical Flow Based Segmentation Method for Objects Extraction
Abstract:
This paper describes a segmentation algorithm based on the cooperation of an optical flow estimation method with edge detection and region growing procedures. The proposed method has been developed as a pre-processing stage for methodologies and tools for video/image indexing and retrieval by content. The problem addressed consists of extracting whole objects from the background to produce images of single complete objects from videos or photos. The extracted images are used to calculate the object visual features necessary for both the indexing and retrieval processes. The first task of the algorithm exploits cues from motion analysis for moving-area detection (a simplified stand-in for this cue is sketched below). Objects and background are then refined using edge detection and region growing procedures, respectively. These tasks are performed iteratively until objects and background are completely resolved. The developed method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear on variously textured backgrounds.
Keywords: Motion detection, object extraction, optical flow, segmentation.
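The paper's motion cue comes from a true optical flow estimator; the sketch below substitutes plain frame differencing as a simplified stand-in, producing the moving-area seeds that the edge-detection and region-growing stages would refine:

```python
import numpy as np

def moving_area_mask(prev_frame, curr_frame, thresh=0.1):
    """Mark pixels whose temporal intensity change is large, as seeds
    for the object/background separation that follows."""
    return np.abs(curr_frame - prev_frame) > thresh

prev_frame = np.zeros((100, 100))
curr_frame = np.zeros((100, 100))
curr_frame[30:60, 30:60] = 1.0           # an object has moved into this area
seeds = moving_area_mask(prev_frame, curr_frame)
print(seeds.sum(), "candidate moving pixels")
```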
848 Land Use Change Detection Using Remote Sensing and GIS
Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi
Abstract:
In recent decades, rapid and improper changes in land-use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land-use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land-use changes in the Haraz basin, an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was first evaluated, and various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into 9 classes using a supervised classification method with the maximum likelihood algorithm, and the changes were finally extracted in a GIS environment. The results are an alarming signal for the future status of the Haraz basin: 27% of the area has changed, involving the conversion of rangeland to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential areas.
Keywords: Haraz basin, change detection, land-use, satellite data.
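A sketch of the post-classification comparison behind the 27% figure; the class maps here are synthetic stand-ins for the classified LANDSAT scenes:

```python
import numpy as np

# Stand-in class maps for the two dates (1996, 2011); codes 0..8 for 9 classes.
rng = np.random.default_rng(1)
map_1996 = rng.integers(0, 9, size=(500, 500))
map_2011 = map_1996.copy()
flip = rng.random(map_1996.shape) < 0.27          # redraw ~27% of the pixels
map_2011[flip] = rng.integers(0, 9, size=flip.sum())

changed = map_1996 != map_2011                    # post-classification comparison
print(f"changed area: {100 * changed.mean():.1f}%")

# A 9x9 cross-tabulation shows which class turned into which (e.g. range to bare land).
transitions = np.zeros((9, 9), dtype=int)
np.add.at(transitions, (map_1996[changed], map_2011[changed]), 1)
```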
847 On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion
Authors: Tijani Delleji, Mourad Zribi, Ahmed Ben Hamida
Abstract:
This paper discusses the combination of the EM algorithm and the bootstrap approach applied to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then the EM algorithm is used in conjunction with the bootstrap approach to develop the bootstrap EM fusion algorithm, producing the fused target image. In this research we propose estimating the statistical parameters in the iterative equations of the EM algorithm from a reference set of representative bootstrap samples of the images, with sample sizes determined by a new criterion called the 'hybrid criterion'. The results of our work show that using bootstrap EM (BEM) in image fusion improves the accuracy of the estimated parameters, which in turn improves the quality of the fused image, and reduces the computing time of the fusion process.
Keywords: Satellite image fusion, Bayesian segmentation, bootstrap approach, EM algorithm.
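A sketch of the bootstrap-EM idea using scikit-learn's EM-fitted Gaussian mixture; the paper's hybrid criterion for choosing the sample size is not reproduced here, so the fixed size is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for co-registered sensor intensities, one sample per pixel.
pixels = np.concatenate([rng.normal(0.2, 0.05, 40000),
                         rng.normal(0.7, 0.10, 60000)]).reshape(-1, 1)

# Bootstrap-EM idea: run EM on a resampled-with-replacement subset instead of
# the full image, trading a little variance for a large cut in computing time.
sample = pixels[rng.integers(0, len(pixels), size=5000)]   # sample size: an assumption
gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
print(gmm.means_.ravel(), gmm.weights_)
```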
846 A Hybrid Distributed Vision System for Robot Localization
Authors: Hsiang-Wen Hsieh, Chin-Chia Wu, Hung-Hsiu Yu, Shu-Fan Liu
Abstract:
Localization is one of the critical issues in the field of robot navigation. With an accurate estimate of the robot pose, robots can navigate the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The approach integrates odometry data from the robot with images captured by overhead cameras installed in the environment, reducing the possibility of failed localization due to illumination effects, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
Keywords: Distributed Vision System, localization, measurement model, motion model.
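A sketch of an HSV histogram measurement model of the kind used to weight pose hypotheses; the hue-only histogram and Bhattacharyya weighting are assumptions:

```python
import numpy as np

def hsv_hist(patch, bins=16):
    """Normalized hue histogram of an HSV patch (H channel scaled to [0, 1))."""
    h, _ = np.histogram(patch[..., 0], bins=bins, range=(0.0, 1.0), density=True)
    return h / bins

def likelihood(reference, observed):
    """Bhattacharyya coefficient: 1 for identical histograms, 0 for disjoint."""
    return np.sum(np.sqrt(reference * observed))

rng = np.random.default_rng(0)
robot_model = hsv_hist(rng.random((32, 32, 3)))    # appearance learned beforehand
camera_patch = hsv_hist(rng.random((32, 32, 3)))   # patch under a pose hypothesis
print(f"measurement weight: {likelihood(robot_model, camera_patch):.3f}")
```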
845 Use of Fuzzy Edge Image in Block Truncation Coding for Image Compression
Authors: Amarunnishad T.M., Govindan V.K., Abraham T. Mathew
Abstract:
An image compression method has been developed that uses a fuzzy edge image with the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image was validated against classical edge detectors, with the well-known Canny edge detector as reference, prior to its use in the proposed method. The bit plane generated by the conventional BTC method is replaced with a fuzzy bit plane generated by the logical OR of the fuzzy edge image and the corresponding conventional BTC bit plane. The input image is encoded with the block mean, the standard deviation and the fuzzy bit plane. The proposed method has been tested with 8 bits/pixel test images of size 512×512 and found to be superior, with a better Peak Signal-to-Noise Ratio (PSNR), when compared to the conventional BTC and adaptive bit plane selection BTC (ABTC) methods. The raggedness, jagged appearance and ringing artifacts at sharp edges are greatly reduced in images reconstructed by the proposed method with the fuzzy bit plane.
Keywords: Image compression, edge detection, ground truth image, peak signal-to-noise ratio.
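A sketch of conventional BTC, the baseline the paper modifies; in the proposed method the bit plane below would be OR-ed with the fuzzy edge image before the two reconstruction levels are computed. The 4×4 block size is an assumption:

```python
import numpy as np

def btc_block(block):
    """Encode one block as (mean, std, bit plane); decode to two levels.

    Pixels above the block mean map to 1, the rest to 0; the two
    reconstruction levels preserve the block mean and standard deviation.
    """
    m, s = block.mean(), block.std()
    bits = block > m                      # conventional BTC bit plane
    q = bits.sum()
    n = block.size
    if q in (0, n):
        return np.full_like(block, m)     # flat block: one level suffices
    low = m - s * np.sqrt(q / (n - q))    # level assigned to 0-bits
    high = m + s * np.sqrt((n - q) / q)   # level assigned to 1-bits
    return np.where(bits, high, low)

img = np.random.rand(512, 512) * 255
out = np.empty_like(img)
for y in range(0, 512, 4):
    for x in range(0, 512, 4):
        out[y:y + 4, x:x + 4] = btc_block(img[y:y + 4, x:x + 4])
```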
844 Methods of Geodesic Distance in Two-Dimensional Face Recognition
Authors: Rachid Ahdid, Said Safi, Bouzid Manaut
Abstract:
In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing geodesic distances between points of the facial surface and between facial curves. In this study the gray-level image is represented as a 2D surface in a 3D space, with the third coordinate proportional to the intensity values of the pixels. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under varying pose and sample size, and the YaleB database was used to examine the performance of the systems under varying facial expressions and lighting.
Keywords: 2D face recognition, Geodesic distance, Iso-Geodesic Curves, Geodesic-Intensity Histogram, facial surface, Neural Networks, K-Nearest Neighbor, Support Vector Machines.
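A sketch of geodesic distance on the intensity surface described above, computed with Dijkstra's algorithm on a 4-neighbor grid graph; the intensity scale factor and source point are assumptions:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distances(gray, source, scale=1.0):
    """Geodesic distance on the surface (x, y, scale*I(x, y)) from one pixel.

    4-neighbor edges are weighted by their 3D length, so paths crossing
    steep intensity changes cost more than paths over flat regions.
    """
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, wts = [], [], []
    for dy, dx in ((0, 1), (1, 0)):
        a = idx[: h - dy, : w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        dz = scale * (gray[dy:, dx:] - gray[: h - dy, : w - dx]).ravel()
        length = np.sqrt(1.0 + dz ** 2)
        rows += [a, b]; cols += [b, a]; wts += [length, length]
    g = csr_matrix((np.concatenate(wts), (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w))
    return dijkstra(g, indices=idx[source]).reshape(h, w)

face = np.random.rand(32, 32)                  # stand-in gray-level face image
d = geodesic_distances(face, source=(16, 16))  # distances from a reference landmark
```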
843 Pre-Analysis of Printed Circuit Boards Based On Multispectral Imaging for Vision Based Recognition of Electronics Waste
Authors: Florian Kleber, Martin Kampel
Abstract:
The increasing demand for gallium, indium and rare-earth elements for the production of electronics, e.g. solid-state lighting, photovoltaics, integrated circuits, and liquid crystal displays, will exceed the worldwide supply according to current forecasts. Recycling systems to reclaim these materials are not yet in place, which challenges the sustainability of these technologies. This paper proposes a multispectral imaging system as the basis for a vision-based recognition system for valuable components of electronics waste. Multispectral images are intended to enhance the contrast of images of printed circuit boards (single components as well as labels) for further analysis, such as optical character recognition and entire printed circuit board recognition. The results show that a higher contrast is achieved in the near infrared compared to ultraviolet and visible light.
Keywords: Electronic Waste, Recycling, Multispectral Imaging, Printed Circuit Boards, Rare-Earth Elements.
842 Exploiting Global Self Similarity for Head-Shoulder Detection
Authors: Lae-Jeong Park, Jung-Ho Moon
Abstract:
People detection from images has a variety of applications, such as video surveillance and driver assistance systems, but it remains a challenging task, and more difficult in crowded environments such as shopping malls, where occlusion of the lower parts of the human body often occurs. The lack of full-body information requires more effective features than common ones such as HOG. In this paper, new features are introduced that exploit the global self-symmetry (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of color histograms and oriented-gradient histograms between two vertically symmetric blocks. These domain-specific features are fast to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset that, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features are effective in marginally reducing false alarms, and that the gradient GSS features are preferred more often than the color GSS ones in feature selection.
Keywords: Pedestrian detection, cascade of rejecters, feature extraction, self-symmetry, HOG.
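A sketch of the symmetric-block comparison behind a GSS feature, using plain intensity histograms and L1 similarity; the real features use color and oriented-gradient histograms with integral-image acceleration:

```python
import numpy as np

def gss_feature(gray, top, left, block, bins=16):
    """Similarity between two vertically symmetric blocks.

    Compares the intensity histogram of a block with that of its
    horizontally mirrored counterpart across the pattern's vertical axis.
    """
    a = gray[top:top + block, left:left + block]
    b = gray[top:top + block, -(left + block):gray.shape[1] - left]
    ha, _ = np.histogram(a, bins=bins, range=(0, 1)); ha = ha / ha.sum()
    hb, _ = np.histogram(b[:, ::-1], bins=bins, range=(0, 1)); hb = hb / hb.sum()
    return 1.0 - 0.5 * np.abs(ha - hb).sum()   # 1 = identical, 0 = disjoint

patch = np.random.rand(64, 64)                  # stand-in head-shoulder window
print(gss_feature(patch, top=8, left=4, block=16))
```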
841 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis is adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, in order to obtain an original document with complete information. If the captured document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image storage format is used as a tool in converting the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.
840 RoboWeedSupport-Sub Millimeter Weed Image Acquisition in Cereal Crops with Speeds up till 50 Km/H
Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Mads Dyrmann, Robert Poulsen
Abstract:
For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings offered by a decision support system and the weed inspections it requires. In order to automate the weed inspections, it is desirable to generate a map of the weed species present within the field; to generate the map, images must be captured at sample points covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution for an existing tractor, a cost of 2.4 EUR/ha is estimated to be obtainable under equal conditions.
Keywords: Weed mapping, integrated weed management, weed recognition.
839 Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power
Authors: Padmanabhan Balasubramanian, C. Hari Narayanan
Abstract:
This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF] = {mi}[{Mi}], where for every mj[Mj] ∈ {mi}[{Mi}] there exists another mk[Mk] ∈ {mi}[{Mi}] such that their Hamming distance HD(mj, mk) = HD(Mj, Mk) = O(n), where n represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix; the binary value corresponding to each candidate pair is then correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to reduce the literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to reduce the problem to one of O(n-1), O(n-2), ..., and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, circuits synthesized using our algorithm achieved net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic of 45.57%, 41.78% and 41.78%, respectively, across the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35 micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%; with respect to AND-EXOR logic, it yielded power savings of 31.88%; and in comparison with AND-OR-EXOR level networks, average power savings of 33.23% were obtained.
Keywords: AOI logic, ESOP, AND-OR-EXOR, incidence matrix, Hamming distance.
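A sketch of the Hamming-distance test that drives the pairing of minterms; encoding minterms as integers is an illustrative convention, not the paper's data structure:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two minterms given as bit vectors."""
    return bin(a ^ b).count("1")

# Two minterms of a 4-input function, as used in the pairing step:
m_j, m_k = 0b1010, 0b0101
print(hamming(m_j, m_k))   # 4 = O(n) for n = 4 inputs, so the pair qualifies
```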
838 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification
Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian
Abstract:
Image retrieval is a widely used technique in today's digital world. CBIR, commonly expanded as Content Based Image Retrieval, is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from the digital images. In this paper, two research works are presented using CBIR. The first provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. In feature extraction, image transforms such as the Contourlet, Ridgelet and Shearlet are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier, labelling it as one of four classes: 1) normal brain, 2) benign tumour, 3) malignant tumour or 4) severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. This paper presents these two approaches as a contribution to the medical field, giving better retrieval performance and tumour stage identification.
Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.
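A sketch of the train/test stage with a multi-class SVM from scikit-learn; the feature vectors are random stand-ins, since the Contourlet/Ridgelet/Shearlet transforms themselves are outside the scope of this sketch:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in texture feature vectors with the paper's four labels:
# 1 normal, 2 benign, 3 malignant, 4 severe.
rng = np.random.default_rng(0)
X = rng.random((400, 64))
y = rng.integers(1, 5, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # multi-class SVM, one of the paper's classifiers
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```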
837 Characterization of a Pure Diamond-Like Carbon Film Deposited by Nanosecond Pulsed Laser Deposition
Authors: Camilla G. Goncalves, Benedito Christ, Walter Miyakawa, Antonio J. Abdalla
Abstract:
This work investigates the properties and microstructure of a diamond-like carbon (DLC) film deposited by pulsed laser deposition, ablating a graphite target in a vacuum chamber onto a steel substrate. The equipment was mounted to provide one laser beam. The high-purity graphite target and the steel substrate were polished. The mechanical and tribological properties of the film were characterized using Raman spectroscopy, nanoindentation tests, scratch tests, roughness profiles, a tribometer, optical microscopy and SEM images. It was concluded that the pulsed laser deposition (PLD) technique, combined with a low-pressure chamber and a graphite target, provides a good fraction of sp3 bonding; that process variables such as surface polishing and laser parameters have a great influence on the tribological properties and on adherence test performance; and that the optical microscopy images are effective in identifying the metallurgical bond.
Keywords: Characterization, diamond-like carbon, DLC, mechanical properties, pulsed laser deposition.
836 Optical Flow Technique for Supersonic Jet Measurements
Authors: H. D. Lim, Jie Wu, T. H. New, Shengxian Shi
Abstract:
This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed with an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus is on methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Despite these challenges, this supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer.
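A minimal sketch of the Horn-Schunck iteration underlying the adapted algorithm, run on a toy image pair rather than Schlieren frames; the derivative kernels and parameters are simplified assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    # Spatial and temporal derivatives (simple difference approximations).
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = im2 - im1
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighborhood mean
    u = np.zeros_like(im1); v = np.zeros_like(im1)
    for _ in range(iters):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        d = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = u_bar - Ix * d, v_bar - Iy * d   # classic HS update
    return u, v

# Toy pair: a bright blob shifted one pixel to the right between frames.
f1 = np.zeros((64, 64)); f1[30:34, 30:34] = 1.0
f2 = np.roll(f1, 1, axis=1)
u, v = horn_schunck(f1, f2)
print(u[31, 31], v[31, 31])   # u should come out positive (rightward motion)
```

The global smoothness weight alpha is the key tuning knob: large values regularize the flow across the shear layer, small values preserve its fine structure.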
835 Journey on Image Clustering Based on Color Composition
Authors: Achmad Nizar Hidayanto, Elisabeth Martha Koeanan
Abstract:
Image clustering is the process of grouping images based on their similarity, usually using color, texture, edge or shape components, or a mixture of them. This research explores image clustering using color composition. To perform such clustering, three main components must be considered: the color space, the image representation (feature extraction), and the clustering method itself. We explore which combination of these factors produces the best clustering results by combining various techniques from the three components. The color spaces are RGB, HSV, and L*a*b*; the image representations are the histogram and the Gaussian Mixture Model (GMM); and the clustering methods are K-Means and the Agglomerative Hierarchical Clustering algorithm. The experiments show that the GMM representation combines better with the RGB and L*a*b* color spaces, whereas the histogram combines better with HSV. The experiments also show that K-Means outperforms Agglomerative Hierarchical Clustering for image clustering.
Keywords: Image clustering, feature extraction, RGB, HSV, L*a*b*, Gaussian Mixture Model (GMM), histogram, Agglomerative Hierarchical Clustering (AHC), K-Means, Expectation-Maximization (EM).
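A sketch of one cell of the study's grid (RGB histogram features clustered with K-Means); the bin count and cluster count are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(img, bins=8):
    """Flattened 3D color histogram of an (H, W, 3) image, normalized to sum 1."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 1),) * 3)
    return (h / h.sum()).ravel()

rng = np.random.default_rng(0)
images = [rng.random((32, 32, 3)) for _ in range(20)]      # stand-in image set
features = np.stack([color_histogram(im) for im in images])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(labels)
```

Swapping `color_histogram` for GMM parameters fitted per image, or converting the pixels to HSV or L*a*b* first, reproduces the other cells of the comparison.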
834 SEM Image Classification Using CNN Architectures
Authors: G. Türkmen, Ö. Tekin, K. Kurtuluş, Y. Y. Yurtseven, M. Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope used mainly in nanoscience and nanotechnology. Automatic image recognition and classification are among its general areas of application. In line with these usages, this paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-v2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on accuracy, since it overlaps other categories in the data set and confuses the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy to up to 96.5%.
Keywords: Convolutional Neural Networks, deep learning, image classification, scanning electron microscope.
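A hedged Keras sketch of the fine-tuning setup described above; the head layers, dropout rate, and freeze schedule are assumptions, not the authors' published configuration:

```python
import tensorflow as tf

# Fine-tuning sketch: ImageNet-pretrained Inception-ResNet-v2 backbone with a
# new 8-way head (nine SEM categories minus the removed coated-surface class).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                       # freeze the backbone for the first stage
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(8, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # data pipeline not shown
```

In a typical fine-tuning schedule, the backbone is later unfrozen and trained at a much lower learning rate once the new head has converged.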
833 Optimization of Solar Tracking Systems
Authors: A. Zaher, A. Traore, F. Thiéry, T. Talbert, B. Shaer
Abstract:
In this paper, an intelligent approach is proposed to optimize the orientation of continuous solar tracking systems on cloudy days. Under a clear sky, direct sunlight matters more than diffuse radiation, so the panel is always pointed towards the sun. Under an overcast sky, the solar beam is close to zero and the panel is placed horizontally to receive the maximum diffuse radiation. Under partly covered conditions, the panel must be pointed towards the source emitting the maximum solar energy, which may be anywhere in the sky dome. The idea of our approach is therefore to analyze images captured by a ground-based sky camera system in order to detect the zone of the sky dome that constitutes the optimal source of energy under cloudy conditions. The proposed approach is implemented using an experimental setup developed at the PROMES-CNRS laboratory in Perpignan, France. Under overcast conditions, the results were very satisfactory, and the intelligent approach provided efficiency gains of up to 9% relative to conventional continuous sun tracking systems.
Keywords: Clouds detection, fuzzy inference systems, images processing, sun trackers.
832 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain
Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.
Abstract:
In this paper, an image-adaptive, invisible digital watermarking algorithm with an Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics, such as textures, edges and luminance, of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, an unauthorized user cannot extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT counterparts.
Keywords: Orthogonal Polynomials based Transformation, digital watermarking, copyright protection, visual model.
831 Comparative Survey of Object Serialization Techniques and the Programming Supports
Authors: Kazuaki Maeda
Abstract:
This paper compares six approaches to object serialization from qualitative and quantitative aspects: object serialization in Java, IDL, XStream, Protocol Buffers, Apache Avro, and MessagePack. Using each approach, a common example is serialized to a file and the size of the file is measured. The qualitative comparison examines whether a schema definition is required, whether a schema compiler is required, whether the serialization is ASCII- or binary-based, and which programming languages are supported. It is clear that there is no single best solution; each works well in the context for which it was developed.
Keywords: Structured data, serialization, programming.
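A sketch of the quantitative protocol with two Python standard-library serializers standing in for the six approaches surveyed (the paper itself measures Java serialization, IDL, XStream, Protocol Buffers, Avro, and MessagePack):

```python
import json
import pickle

# Serialize one common record with each approach and compare the byte sizes.
record = {"id": 1234, "name": "sensor-7", "samples": [0.5, 0.25, 0.125] * 10}

encoded = {
    "json (ascii)":    json.dumps(record).encode("utf-8"),
    "pickle (binary)": pickle.dumps(record),
}
for name, blob in encoded.items():
    print(f"{name:16s} {len(blob)} bytes")
```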
830 Ultrasonic Investigation of Molecular Interaction in Binary Liquid Mixture of Polyethylene Glycol with Ethanol
Authors: S. Grace Sahaya Sheba, R. Omegala Priakumari
Abstract:
Polyethylene glycol (PEG) is a condensation polymer of ethylene oxide and water. It is soluble in water and in many organic solvents, and is used to make emulsifying agents, detergents, soaps, plasticizers, ointments, etc. Ethanol (C2H5OH), also known as ethyl alcohol, is a well-known organic compound with wide applications in the chemical industry: as a solvent for paint and varnish, in preserving biological specimens, as a fuel mixed with petrol, and more. Although their chemical and physical properties are already well studied, their importance in daily life motivates the study of further physical properties, such as ultrasonic velocity and hence adiabatic compressibility, free length, and related quantities. A detailed study of such properties, and of excess parameters such as excess adiabatic compressibility and excess free volume, in liquid mixtures of these two compounds, with PEG as solute and ethanol as solvent at various mole fractions, may shed light on the molecular interaction between solute and solvent, complemented by NMR, IR, etc. The present work therefore reports ultrasonic and allied studies on these liquid mixtures. Ultrasonic velocity (U), density (ρ) and viscosity (η) were measured by the authors at room temperature and at ethanol-in-PEG mole fractions from 0 to 0.055. Acoustical parameters such as adiabatic compressibility (β), free volume (Vf), acoustic impedance (Z), internal pressure (πi), intermolecular free length (Lf) and relaxation time (τ) were calculated from the experimental data, along with excess parameters such as excess adiabatic compressibility (βE), excess internal pressure (πiE), excess free length (LfE) and excess acoustic impedance (ZE). The excess compressibility is positive with a maximum around a mole fraction of 0.007; at the same mole fraction the excess internal pressure is negative with its maximum magnitude and the free length is longer. From the analysis it may be concluded that the molecular interactions between the solute and the solvent are not strong, and may in fact be weak. Appropriate graphs are drawn.
Keywords: Adiabatic compressibility, binary mixture, induced dipole, polarizability, ultrasonic.
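A sketch of the standard relations used to derive the acoustical parameters from the measured quantities; the numerical inputs are illustrative (not the paper's data), and unit conventions for Jacobson's constant K vary across papers:

```python
import math

def acoustic_parameters(U, rho, T=298.15):
    """Standard acoustic parameters from ultrasonic velocity and density.

    beta = 1 / (U^2 * rho)   adiabatic compressibility
    Lf   = K * sqrt(beta)    intermolecular free length (Jacobson's K)
    Z    = rho * U           acoustic impedance
    """
    beta = 1.0 / (U ** 2 * rho)
    K = (93.875 + 0.375 * T) * 1e-8        # Jacobson temperature-dependent constant
    Lf = K * math.sqrt(beta)
    Z = rho * U
    return beta, Lf, Z

# Illustrative values only: U in m/s, rho in kg/m^3.
beta, Lf, Z = acoustic_parameters(U=1200.0, rho=1100.0)
print(f"beta={beta:.3e}, Lf={Lf:.3e}, Z={Z:.3e}")
```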
829 String Searching in Dispersed Files using MDS Convolutional Codes
Authors: A. S. Poornima, R. Aparna, B. B. Amberker, Prashant Koulgi
Abstract:
In this paper, we propose the use of convolutional codes for file dispersal. The proposed method is comparable in complexity to the Information Dispersal Algorithm proposed by M. Rabin and, for particular choices of (non-binary) convolutional codes, is almost as efficient as that algorithm in terms of controlling expansion in the total storage. Furthermore, our proposed dispersal method allows string search.
Keywords: Convolutional codes, file dispersal, file reconstruction, Information Dispersal Algorithm, string search.
828 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a noise-reduction method for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we applied the MPM estimate with two kinds of likelihood, each modeling grayscale images degraded by lossy JPEG compression. One is a deterministic model of the likelihood; the other is a probabilistic one expressed by a Gaussian distribution. Using Monte Carlo simulation on grayscale images, such as the 256-grayscale standard image "Lena" with 256 × 256 pixels, we examined the performance of the MPM estimate with a performance measure based on the mean square error. We found that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction, due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, maximizer of the posterior marginal estimate.