Search results for: Document image extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2348


1658 Capture Zone of a Well Field in an Aquifer Bounded by Two Parallel Streams

Authors: S. Nagheli, N. Samani, D. A. Barry

Abstract:

In this paper, the velocity potential and stream function of the capture zone for a well field in an aquifer bounded by two parallel streams, with or without a uniform regional flow in any direction, are presented. The well field may include any number of extraction or injection wells, or a combination of both types, with arbitrary pumping rates. To delineate the capture envelope, the potential and streamline equations are derived by the conformal mapping method, which relaxes the constraints of earlier approaches. The equations can be applied as useful tools to design in-situ groundwater remediation systems, to evaluate surface–subsurface water interaction, and to manage water resources.

Keywords: Complex potential, conformal mapping, groundwater remediation, image well theory, Laplace’s equation, superposition principle.
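
As a rough illustration of the superposition and image-well ideas behind this solution (not the paper's closed-form conformal-mapping result), the sketch below evaluates a truncated image-well series for wells between two parallel constant-head streams; the stream spacing, well list and uniform-flow parameters are hypothetical.

```python
# Minimal sketch: complex potential for wells in a strip aquifer bounded by two
# parallel constant-head streams at y = 0 and y = d, approximated by a truncated
# image-well series plus a uniform regional flow term.
import numpy as np

def complex_potential(z, wells, d, U=0.0, alpha=0.0, n_images=50):
    """z: complex points; wells: iterable of (x, y, Q), Q > 0 for extraction;
    d: stream spacing; U, alpha: uniform-flow magnitude and direction
    (illustrative, superposed without re-imposing the boundary condition)."""
    W = -U * np.exp(-1j * alpha) * z
    for x0, y0, Q in wells:
        for n in range(-n_images, n_images + 1):
            zp = x0 + 1j * (y0 + 2 * n * d)       # images with the well's sign
            zm = x0 + 1j * (2 * n * d - y0)       # opposite-sign images
            W += Q / (2 * np.pi) * (np.log(z - zp) - np.log(z - zm))
    return W  # Re(W): velocity potential, Im(W): stream function

# Example: one extraction well midway between streams 100 m apart.
x, y = np.meshgrid(np.linspace(-200, 200, 201), np.linspace(0.5, 99.5, 100))
W = complex_potential(x + 1j * y, wells=[(0.0, 50.0, 100.0)], d=100.0, U=1e-3)
phi, psi = W.real, W.imag   # contour psi to delineate the capture envelope
```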

1657 Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study

Authors: Negar Riazifar, Mehran Yazdi

Abstract:

The Discrete Wavelet Transform (DWT) has proven far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in the DWT does not propagate globally as it does in the DCT. Moreover, the DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, a two-dimensional extension of the wavelet transform that uses nonseparable and directional filter banks, compared to the DWT. This is mostly due to the optimality of the contourlet in representing edges that are smooth curves. In this work, we investigate this behaviour for medical images, especially CT images, which has not been reported before. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained on different types of computed tomography images show that the DWT still performs well at lower CRs, whereas the contourlet transform performs better at higher CRs.

Keywords: Computed Tomography (CT), DWT, Discrete Contourlet Transform, Image Compression.
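
For readers who want to reproduce a PSNR-versus-compression-ratio comparison of the kind described above, here is a minimal, hypothetical sketch using PyWavelets; it approximates compression by keeping only the largest DWT coefficients (the contourlet branch and the authors' actual quantizer are not reproduced), and the keep_ratio parameter is an assumption standing in for the CR.

```python
# Illustrative sketch (not the authors' codec): keep only the largest DWT
# coefficients, reconstruct, and measure PSNR against the original image.
import numpy as np
import pywt

def dwt_compress(img, keep_ratio=0.05, wavelet="bior4.4", level=3):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(arr.size * keep_ratio))            # coefficients to keep
    thresh = np.partition(np.abs(arr).ravel(), -k)[-k]
    arr[np.abs(arr) < thresh] = 0.0                   # crude stand-in for quantization
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet)
    return rec[:img.shape[0], :img.shape[1]]

def psnr(orig, rec, peak=255.0):
    mse = np.mean((orig.astype(float) - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# img = <2-D CT slice as a uint8 array>
# print(psnr(img, dwt_compress(img, keep_ratio=0.02)))
```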

1656 Clustering Unstructured Text Documents Using Fading Function

Authors: Pallav Roxy, Durga Toshniwal

Abstract:

Clustering unstructured text documents is an important problem in the data mining community and has a number of applications, such as document archive filtering, document organization, topic detection and subject tracing. In the real world, some already clustered documents may lose their importance while new documents of greater significance may appear. Most work done so far on clustering unstructured text documents overlooks this aspect. This paper addresses the issue by using a Fading Function. The unstructured text documents are clustered, and for each cluster a statistical structure called a Cluster Profile (CP) is maintained. The cluster profile incorporates the Fading Function, which tracks the time-dependent importance of the cluster. The work proposes a novel algorithm, the Clustering n-ary Merge Algorithm (CnMA), for unstructured text documents that uses the Cluster Profile and Fading Function. Experimental results illustrating the effectiveness of the proposed technique are also included.

Keywords: Clustering, Text Mining, Unstructured Text Documents, Fading Function.
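
A minimal sketch of how a fading function might be attached to a cluster profile is given below; the exponential half-life decay and the CP fields are illustrative assumptions, not the CnMA specification.

```python
# Sketch of a time-decay ("fading") weight inside a cluster profile.
import time

class ClusterProfile:
    def __init__(self, half_life=3600.0):
        self.half_life = half_life      # seconds until importance halves (assumed)
        self.weight = 0.0               # faded document count
        self.last_update = time.time()

    def fade(self, now=None):
        now = now or time.time()
        dt = now - self.last_update
        self.weight *= 2.0 ** (-dt / self.half_life)   # fading function
        self.last_update = now
        return self.weight

    def add_document(self, now=None):
        self.fade(now)                  # decay old importance first
        self.weight += 1.0              # then count the new document

# Clusters whose faded weight drops below a threshold can be merged or dropped.
```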

1655 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar

Abstract:

Machine vision has been widely used in agriculture in recent years as a tool to promote the automation of processes and increase productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot between tree rows. The proposed algorithm was developed in MATLAB and uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows in an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made it possible to construct a method capable of identifying paths between trees of different heights and appearances. The algorithm was evaluated on several images of varying quality, and the results showed that the proposed method can successfully detect a path in different types of environments.

Keywords: Agricultural mobile robot, image processing, path recognition, Hough transform.
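
A rough OpenCV equivalent of the processing chain named in the abstract (histogram equalization, thresholding, morphological erosion and the Hough transform) is sketched below; the parameter values are illustrative, not the authors' MATLAB settings.

```python
# Illustrative pipeline: equalize, threshold, erode, detect edge lines with Hough.
import cv2
import numpy as np

def detect_row_lines(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                         # histogram equalization
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8))     # morphological erosion
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    return lines  # edge lines along the tree rows; their midline gives the path
```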

1654 An Implicit Region-Based Deformable Model with Local Segmentation Applied to Weld Defects Extraction

Authors: Y. Boutiche, N. Ramou, M. Ben Gharsallah

Abstract:

This paper presents and discusses a model that performs local segmentation using the statistical information of a given image. It is based on the Chan-Vese model, curve evolution, partial differential equations and the binary level set method. The proposed model uses the piecewise-constant approximation of the Chan-Vese model to compute a Signed Pressure Force (SPF) function, which attracts the curve to the true object boundaries. The implemented model is used to extract weld defects from weld radiographic images in order to calculate the perimeters and surface areas of those defects; encouraging results are obtained on synthetic and real radiographic images.

Keywords: Active contour, Chan-Vese Model, local segmentation, weld radiographic images.
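
The SPF term built from the piecewise-constant Chan-Vese means can be sketched in a few lines of NumPy, as below; this is a minimal illustration of the definition, not the full curve-evolution scheme of the paper.

```python
# Signed Pressure Force built from the inside/outside means of a binary level set.
import numpy as np

def spf(image, phi):
    """image: 2-D float array; phi: binary level set (1 inside, 0 outside)."""
    inside, outside = phi > 0.5, phi <= 0.5
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    force = image - (c1 + c2) / 2.0
    return force / (np.abs(force).max() + 1e-12)   # values in [-1, 1]

# In the evolution loop, phi is updated in the direction of spf(image, phi),
# which pulls the contour toward the defect boundaries.
```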

1653 Implementation of RC5 Block Cipher Algorithm for Image Cryptosystems

Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah

Abstract:

This paper examines the implementation of the RC5 block cipher for digital images, along with a detailed security analysis. A complete specification of the method for applying the RC5 block cipher to digital images is given. The security of RC5 for digital images against entropy, brute-force, statistical and differential attacks is explored from a strict cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, demonstrating that the RC5 block cipher is highly secure for real-time image encryption.

Keywords: Image encryption, security analysis.
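
For illustration, a compact pure-Python sketch of RC5-32/12/16 (key expansion plus one-block encryption) is given below; it follows the published algorithm but has not been reviewed for production use, and the image-handling comments describe one possible way to apply it, not necessarily the authors' exact procedure.

```python
# RC5-32/12/16 sketch: 32-bit words, 12 rounds, 16-byte key, one 64-bit block.
WBITS, ROUNDS, KEYLEN = 32, 12, 16
MASK = (1 << WBITS) - 1
P32, Q32 = 0xB7E15163, 0x9E3779B9        # RC5 magic constants

def _rotl(x, s):
    s %= WBITS
    return ((x << s) | (x >> (WBITS - s))) & MASK

def rc5_expand_key(key: bytes):
    c = KEYLEN // 4
    L = [int.from_bytes(key[4 * i:4 * i + 4], "little") for i in range(c)]
    t = 2 * (ROUNDS + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = _rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = _rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def rc5_encrypt_block(pt: bytes, S):
    A = (int.from_bytes(pt[:4], "little") + S[0]) & MASK
    B = (int.from_bytes(pt[4:], "little") + S[1]) & MASK
    for r in range(1, ROUNDS + 1):
        A = (_rotl(A ^ B, B) + S[2 * r]) & MASK
        B = (_rotl(B ^ A, A) + S[2 * r + 1]) & MASK
    return A.to_bytes(4, "little") + B.to_bytes(4, "little")

# Image use (one option): flatten pixel bytes, pad to a multiple of 8 bytes,
# encrypt each 8-byte block, then reshape the ciphertext for display/analysis.
# S = rc5_expand_key(b"0123456789abcdef")
# ct = rc5_encrypt_block(b"\x00" * 8, S)
```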

1652 Text Summarization for Oil and Gas News Article

Authors: L. H. Chong, Y. Y. Chen

Abstract:

Information volumes are increasing; companies are so overloaded with information that they may lose track of what they actually need, and scanning through every lengthy document is time consuming. A shorter version of a document containing only the gist is preferable for most information seekers. Therefore, in this paper, we implement a text summarization system that produces a summary containing the gist of oil and gas news articles. The summaries are intended to help oil and gas companies monitor their competitors' behaviour and formulate business strategies. The system integrates a statistical approach with three underlying features: keyword occurrence, the title of the news article, and the location of the sentence. The generated summaries were compared with human-generated summaries from an oil and gas company, and precision and recall ratios were used to evaluate their accuracy. Based on the experimental results, the system is able to produce an effective summary with an average recall of 83% at a compression rate of 25%.

Keywords: Information retrieval, text summarization, statistical approach.
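
A minimal sketch of sentence scoring with the three statistical cues mentioned above (keyword occurrence, title overlap and sentence location) follows; the weights and the regular-expression tokenizer are assumptions for demonstration.

```python
# Crude extractive summarizer: score each sentence, keep the top fraction.
import re
from collections import Counter

def summarize(title, article, compression=0.25):
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    words = re.findall(r"[a-z']+", article.lower())
    freq = Counter(words)                       # keyword occurrence counts (no stop-word removal)
    title_words = set(re.findall(r"[a-z']+", title.lower()))
    scores = []
    for idx, sent in enumerate(sentences):
        toks = re.findall(r"[a-z']+", sent.lower())
        kw = sum(freq[t] for t in toks) / (len(toks) or 1)
        tt = len(title_words & set(toks))       # overlap with the article title
        loc = 1.0 / (idx + 1)                   # earlier sentences weigh more
        scores.append(kw + 2.0 * tt + 3.0 * loc)   # assumed weights
    n = max(1, int(len(sentences) * compression))
    keep = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:n])
    return " ".join(sentences[i] for i in keep)
```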

1651 Pineapple Maturity Recognition Using RGB Extraction

Authors: J. I. Asnor, S. Rosnah, Z. W. H. Wan, H. A. B. Badrul

Abstract:

Pineapples can be classified using an index with seven maturity levels based on the green and yellow color of the skin: as the pineapple ripens, the skin changes from pale green to a golden or yellowish color. A common problem in agriculture today is that farmers are unable to distinguish the pineapple maturity indexes correctly and consistently. There are several reasons why farmers cannot properly follow the guideline provided by the Federal Agriculture Marketing Authority (FAMA); one is that grading relies on manual inspection by experts, whose differing knowledge, experience and points of view when sorting pineapples mean there is no specific, universal guideline for farmers to adopt. An automatic system would therefore help farmers identify pineapple maturity effectively and could serve as a universal indicator.

Keywords: Artificial Neural Network, Image Processing, Index of Maturity, Pineapple
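
A hypothetical sketch of RGB-based maturity scoring is shown below: the fraction of yellow pixels in the fruit region is mapped to an index level. The color thresholds and cut points are purely illustrative and are not FAMA's official index boundaries.

```python
# Map the yellow-pixel fraction of the fruit region to a 1-7 maturity index.
import numpy as np

def maturity_index(bgr_image, fruit_mask):
    img = bgr_image.astype(float)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    fruit = fruit_mask > 0
    # "Yellow" skin: red and green high and similar, blue comparatively low.
    yellow = (r > 100) & (g > 100) & (b < 0.7 * np.minimum(r, g))
    ratio = np.count_nonzero(yellow & fruit) / max(1, np.count_nonzero(fruit))
    levels = [0.05, 0.15, 0.3, 0.5, 0.7, 0.85]     # assumed cut points for levels 1-7
    return 1 + sum(ratio > t for t in levels)
```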

1650 Adaptive Block State Update Method for Separating Background

Authors: Youngsuck Ji, Youngjoon Han, Hernsoo Hahn

Abstract:

In this paper, we propose a robust moving-object detection method for night street images affected by artificial lighting, based on a block-wise updated reference background model and block-state analysis. The test data are colour video sequences acquired from a stationary camera. When artificial illumination such as street lights or sign lights suddenly appears, the reference background model is updated with this information. Natural illumination generally changes gradually over time, whereas artificial illumination appears abruptly, so the method uses a two-stage process to detect it accurately. First, the current image is compared block by block with the reference background to identify changed blocks. Second, the edge map of the current image is compared with the edge map of the reference background, which makes it possible to estimate the illumination in each block. This information allows objects and artificial illumination to be detected precisely and yields a cleaner reference background. Blocks are classified by block-state analysis into four states: transient, stationary, background and artificial illumination; the characteristics of each state are illustrated in Fig. 1 [1]. Experimental results show that the presented approach works well in the presence of illumination variance.

Keywords: Block-state, Edge component, Reference background, Artificial illumination.

1649 Region Segmentation based on Gaussian Dirichlet Process Mixture Model and its Application to 3D Geometric Structure Detection

Authors: Jonghyun Park, Soonyoung Park, Sanggyun Kim, Wanhyun Cho, Sunworl Kim

Abstract:

Image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours, so it is important to segment the region of interest (ROI) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image usually lie close together on a smooth surface, so the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method that uses a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method labels regions of the input image into coarse categories, "ground", "sky" and "vertical", for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D use.

Keywords: Region segmentation, tensor voting, image-based 3D, geometric structure, Gaussian Dirichlet process mixture model
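
A rough sketch of the nonparametric (Dirichlet-process) Gaussian mixture clustering step, using scikit-learn's BayesianGaussianMixture on simple color-plus-position pixel features, is given below; the tensor-voting saliency and the mapping to ground/sky/vertical labels are omitted, and the feature choice is an assumption.

```python
# Dirichlet-process Gaussian mixture clustering of pixel features.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_segment(rgb_image, max_components=10):
    h, w, _ = rgb_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([rgb_image.reshape(-1, 3).astype(float) / 255.0,
                             yy.ravel() / h, xx.ravel() / w])
    # For large images, subsample feats before fitting to keep runtime reasonable.
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full", max_iter=200, random_state=0)
    labels = dpgmm.fit_predict(feats)
    return labels.reshape(h, w)   # coarse regions, later mapped to ground/sky/vertical
```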

1648 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is tested separately by a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time with the same number of faster neural networks. In contrast to using faster neural processors alone, the speed-up ratio increases with the size of the input image when faster neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases further when the weight normalization is done offline.

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
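
The core idea of frequency-domain cross correlation applied independently to sub-images can be sketched as follows; the block size, threshold rule and zero-mean template trick are illustrative assumptions, not the paper's neural formulation.

```python
# Cross correlation in the frequency domain, evaluated per sub-image so that
# the sub-images can be processed in parallel.
import numpy as np

def xcorr_fft(sub_image, template):
    """Cross-correlate a zero-mean template with a sub-image via the FFT."""
    t = template - template.mean()
    f_img = np.fft.fft2(sub_image, s=sub_image.shape)
    f_tpl = np.fft.fft2(t, s=sub_image.shape)     # template assumed <= block size
    return np.real(np.fft.ifft2(f_img * np.conj(f_tpl)))

def detect(image, template, block=64, thresh=0.9):
    hits = []
    peak_ref = np.sum((template - template.mean()) ** 2)   # exact-match peak value
    for y in range(0, image.shape[0] - block + 1, block):
        for x in range(0, image.shape[1] - block + 1, block):
            c = xcorr_fft(image[y:y + block, x:x + block].astype(float), template)
            if c.max() >= thresh * peak_ref:
                hits.append((y, x))
    return hits   # blocks likely to contain the character
```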

1647 Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding

Authors: K. Somasundaram, I. Kaspar Raj

Abstract:

In this paper we propose three-stage and two-stage still gray-scale image compressors based on Block Truncation Coding (BTC). In our schemes, we employ a combination of four techniques to reduce the bit rate: quad-tree segmentation, bit plane omission, bit plane coding using 32 visual patterns, and interpolative bit plane coding. The experimental results show that the proposed schemes achieve an average bit rate of 0.46 bits per pixel (bpp) for standard gray-scale images with an average PSNR value of 30.25 dB, which is better than the results of existing similar methods based on BTC.

Keywords: Bit plane, Block Truncation Coding, Image compression, lossy compression, quad tree segmentation
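
For reference, a minimal sketch of the underlying Absolute Moment Block Truncation Coding (AMBTC) step for a single 4x4 block is given below; the quad-tree segmentation, bit plane omission and pattern coding stages of the proposed schemes are not reproduced.

```python
# AMBTC for one block: store a low mean, a high mean and a bit plane.
import numpy as np

def ambtc_encode(block):
    m = block.mean()
    bitplane = block >= m
    hi = block[bitplane].mean() if bitplane.any() else m
    lo = block[~bitplane].mean() if (~bitplane).any() else m
    return lo, hi, bitplane

def ambtc_decode(lo, hi, bitplane):
    return np.where(bitplane, hi, lo)

# block = image[y:y+4, x:x+4].astype(float)
# lo, hi, bits = ambtc_encode(block)    # two means + 16 bits instead of 16 pixels
# approx = ambtc_decode(lo, hi, bits)
```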

1646 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, which negates potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network; security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment each HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.

Keywords: Human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics.
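
A rough sketch of the OCR step with Tesseract (via the pytesseract wrapper) on a cropped HMI region is shown below; the crop coordinates, preprocessing and character whitelist are placeholders for a real HMI layout, and the Google Vision path is not shown.

```python
# Read one streaming-data region of an HMI from a camera frame with Tesseract.
import cv2
import pytesseract

def read_hmi_value(frame, roi=(100, 200, 60, 240)):
    """roi = (y, x, height, width) of a streaming-data region on the HMI (assumed)."""
    y, x, h, w = roi
    crop = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    _, binar = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binar, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
    return text.strip()   # post-process against the region's known value range
```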

1645 FPGA Implementation of a Vision-Based Blind Spot Warning System

Authors: Yu Ren Lin, Yu Hong Li

Abstract:

Vision-based intelligent vehicle applications often require large amounts of memory to handle video streaming and image processing, which in turn increases the complexity of hardware and software. This paper presents an FPGA implementation of a vision-based blind spot warning system. Using video frames, the information in the blind-spot area is reduced to one-dimensional information, and analysis of the estimated image entropy allows an object to be detected in time. This idea has been implemented on the XtremeDSP video starter kit. The blind spot warning system uses only 13% of its logic resources and 95 kbits of block memory, and its frame rate is over 30 frames per second (fps).

Keywords: blind-spot area, image, FPGA
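
The entropy cue can be sketched in a few lines; the alarm policy mentioned in the trailing comment is a hypothetical illustration, not the paper's FPGA decision logic.

```python
# Grey-level entropy of the blind-spot region of interest.
import numpy as np

def image_entropy(gray_roi):
    hist, _ = np.histogram(gray_roi, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A vehicle entering the blind-spot region changes the grey-level distribution;
# an alarm could be raised when the ROI entropy deviates from a running baseline
# by more than an empirically chosen margin (assumed policy).
```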

1644 Automatic Extraction of Features and Opinion-Oriented Sentences from Customer Reviews

Authors: Khairullah Khan, Baharum B. Baharudin, Aurangzeb Khan, Fazal-e-Malik

Abstract:

Opinion extraction about products from customer reviews is becoming an interesting area of research. Customer reviews of products are nowadays available on blogs and review sites, and tools are being developed to extract opinions from these reviews to help users as well as merchants identify the most suitable product. Efficient methods and techniques are therefore needed to extract opinions from reviews and blogs. Since product reviews mostly contain discussion of features, functions and services, efficient techniques are required to extract user comments about these desired aspects. In this paper we propose a novel idea for finding product features from user reviews in an efficient way. Our focus is on obtaining the features and opinion-oriented words about products from text through the auxiliary verbs (AV) {is, was, are, were, has, have, had}. From our experimental results we found that 82% of features and 85% of opinion-oriented sentences include AVs. Thus these AVs are good indicators of features and opinion orientation in customer reviews.

Keywords: Classification, Customer Reviews, Helping Verbs, Opinion Mining.
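
A minimal sketch of the auxiliary-verb cue is given below: sentences containing one of the listed AVs are kept as candidate feature/opinion sentences. The tokenizer and the absence of any part-of-speech analysis are simplifying assumptions.

```python
# Keep sentences that contain one of the auxiliary verbs listed in the abstract.
import re

AUX_VERBS = {"is", "was", "are", "were", "has", "have", "had"}

def candidate_sentences(review_text):
    sentences = re.split(r"(?<=[.!?])\s+", review_text)
    picked = []
    for sent in sentences:
        tokens = re.findall(r"[A-Za-z']+", sent.lower())
        if AUX_VERBS & set(tokens):
            picked.append(sent)           # likely feature/opinion-bearing sentence
    return picked

# candidate_sentences("The battery life is excellent. I bought it last week.")
# -> ["The battery life is excellent."]
```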

1643 Motion Detection Techniques Using Optical Flow

Authors: A. A. Shafie, Fadhlan Hafiz, M. H. Ali

Abstract:

Motion detection is very important in image processing, and one way of detecting motion is by using optical flow. Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point while the flow velocity has two components, so a second constraint is needed. The method used for finding the optical flow in this project assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. This technique is then used to develop motion detection software capable of carrying out four types of motion detection. The software can also highlight the motion region, count the motion level and count the number of objects. Many objects, such as vehicles and humans, can be recognized in video streams by applying the optical flow technique.

Keywords: Background modeling, Motion detection, Optical flow, Velocity smoothness constraint, motion trajectories.
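
A rough OpenCV sketch of dense-flow-based motion detection follows; Farneback's method is used here as a convenient stand-in for the smoothness-constrained (Horn-Schunck style) flow described above, and the magnitude threshold is an assumption.

```python
# Dense optical flow, motion mask, motion level and a rough object count.
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, mag_thresh=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)           # per-pixel flow magnitude
    mask = (mag > mag_thresh).astype(np.uint8)   # highlighted motion region
    motion_level = float(mag.mean())             # overall motion level
    return mask, motion_level

# Connected components of `mask` give a rough object count:
# n_objects = cv2.connectedComponents(mask)[0] - 1
```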

1642 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information, where the feature information used for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover it with high quality.

Keywords: thresholding technique, tamper proofing, tamper recovery

1641 Medical Image Segmentation and Detection of MR Images Based on Spatial Multiple-Kernel Fuzzy C-Means Algorithm

Authors: J. Mehena, M. C. Adhikary

Abstract:

In this paper, a spatial multiple-kernel fuzzy C-means (SMKFCM) algorithm is introduced for the segmentation problem. A linear combination of multiple kernels with spatial information is used in the kernel FCM (KFCM), and the updating rules for the linear coefficients of the composite kernels are derived as well. Fuzzy C-means (FCM) based techniques have been widely used in medical image segmentation due to their simplicity and fast convergence. The proposed SMKFCM algorithm provides a flexible new vehicle for fusing different pixel information in the segmentation and detection of medical MR images. To evaluate the robustness of the proposed segmentation algorithm in a noisy environment, we added noise to medical brain tumor MR images and calculated the success rate and segmentation accuracy. The experimental results make clear that the proposed algorithm performs better than other FCM based techniques on noisy medical MR images.

Keywords: Clustering, fuzzy C-means, image segmentation, MR images, multiple kernels.

1640 Extracting Human Body based on Background Estimation in Modified HLS Color Space

Authors: Jang-Hee Yoo, Doosung Hwang, Jong-Wook Han, Ki-Young Moon

Abstract:

The ability to recognize humans and their activities by computer vision is a very important task with many potential applications. The study of human motion analysis is related to several research areas of computer vision, such as motion capture and the detection, tracking and segmentation of people. In this paper, we describe a segmentation method for extracting the human body contour in a modified HLS color space. To estimate the background, the modified HLS color space is proposed, and the background features are estimated using the HLS color components. A large human dataset collected from DV cameras is pre-processed, and the human body and its contour are successfully extracted from the image sequences.

Keywords: Background Subtraction, Human Silhouette Extraction, HLS Color Space, and Object Segmentation
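
An illustrative sketch of background estimation and subtraction on HLS components is given below; the running-average background model, thresholds and morphological clean-up are assumptions, not the authors' exact estimator.

```python
# Background subtraction on HLS components with a running-average background.
import cv2
import numpy as np

def update_background(bg, frame_hls, alpha=0.02):
    return (1 - alpha) * bg + alpha * frame_hls.astype(float)

def foreground_mask(bg, frame_bgr, thresh=25.0):
    hls = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HLS).astype(float)
    diff = np.abs(hls - bg)
    # Hue wraps around (0..179 in OpenCV), so take the circular difference.
    diff[..., 0] = np.minimum(diff[..., 0], 180.0 - diff[..., 0])
    mask = (diff.max(axis=2) > thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask, hls   # the contour of the largest blob approximates the silhouette
```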

1639 Dual Pyramid of Agents for Image Segmentation

Authors: K. Idir, H. Merouani, Y. Tlili.

Abstract:

An effective method for the early detection of breast cancer is mammographic screening, and one of the most important signs of early breast cancer is the presence of microcalcifications. For the detection of microcalcifications in a mammography image, we propose a multi-agent system based on a dual irregular pyramid. An initial segmentation is obtained by an incremental approach, and the result represents level zero of the pyramid. The edge information obtained by applying the Canny filter is taken into account to refine the segmentation. The edge agents and region agents cooperate level by level in the pyramid, exploiting its various characteristics to make the segmentation process converge.

Keywords: Dual Pyramid, Image Segmentation, Multi-agent System, Region/Edge Cooperation.

1638 The Water Level Detection Algorithm Using the Accumulated Histogram with Band Pass Filter

Authors: Sangbum Park, Namki Lee, Youngjoon Han, Hernsoo Hahn

Abstract:

In this paper, we propose a robust water level detection method based on an accumulated histogram, designed for the small frame-to-frame changes in images acquired from a water level surveillance camera. General surveillance systems detect and recognize intrusion by searching for large changes between sequential images. In a water level detection system, however, these general surveillance techniques are not suitable because the changes on the water surface are small. The algorithm therefore introduces an accumulated histogram that emphasizes changes of the water surface across sequential images. The accumulated histogram is built with respect to the current frame by accumulating the differences between previous images and the current image. Since these differences also appear in the land region, a band pass filter is applied to remove noise from the accumulated histogram, after which the algorithm clearly separates the water and land regions. Finally, the detected water level in image space is converted to the real water level in real space using a calibration table and is sent to the host computer together with the current image. To evaluate the proposed algorithm, we use test images from various situations.

Keywords: accumulated histogram, water level detection, band pass filter.
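
A minimal sketch of the accumulated-histogram idea follows: frame differences are accumulated per image row, a crude band-pass (smooth minus baseline) removes noise and slow variation, and the strongest-response row is taken as the water line. The decay factor and filter lengths are assumptions.

```python
# Row-wise accumulated frame differences with a crude band-pass filter.
import numpy as np

class WaterLevelDetector:
    def __init__(self, shape, decay=0.95):
        self.acc = np.zeros(shape[0])          # one bin per image row
        self.prev = None
        self.decay = decay

    def update(self, gray_frame):
        if self.prev is not None:
            diff = np.abs(gray_frame.astype(float) - self.prev).mean(axis=1)
            self.acc = self.decay * self.acc + diff   # accumulated histogram
        self.prev = gray_frame.astype(float)
        smooth = np.convolve(self.acc, np.ones(9) / 9.0, mode="same")
        baseline = np.convolve(self.acc, np.ones(61) / 61.0, mode="same")
        response = smooth - baseline           # band-pass-like response
        row = int(np.argmax(response))         # image-space water line
        return row   # map to a real level via the calibration table
```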

1637 Deficiencies of Lung Segmentation Techniques using CT Scan Images for CAD

Authors: Nisar Ahmed Memon, Anwar Majid Mirza, S.A.M. Gilani

Abstract:

Segmentation is an important step in medical image analysis and classification for radiological evaluation or computer aided diagnosis. This paper presents the problem of inaccurate lung segmentation as observed in algorithms published by researchers working in the area of medical image analysis. The different lung segmentation techniques have been tested using a dataset of 19 patients consisting of a total of 917 images; we obtained datasets of 11 patients from Ackron University, USA, and of 8 patients from AGA Khan Medical University, Pakistan. After testing the algorithms against these datasets, the deficiencies of each algorithm have been highlighted.

Keywords: Computer Aided Diagnosis (CAD), Mathematical Morphology, Medical Image Analysis, Region Growing, Segmentation, Thresholding.

1636 The Use of Complex Contourlet Transform on Fusion Scheme

Authors: Dipeng Chen, Qi Li

Abstract:

Image fusion aims to enhance the perception of a scene by combining important information captured by different sensors. The Dual-Tree Complex Wavelet Transform (DT-CWT) has been thoroughly investigated for image fusion, since it offers approximate shift invariance and directional selectivity, but it can only handle limited directional information. To allow a more flexible directional expansion for images, we propose a novel fusion scheme, referred to as the complex contourlet transform (CCT), which incorporates directional filter banks (DFB) into the DT-CWT. As a result, it deals efficiently with images containing contours and textures while retaining the property of shift invariance. Experimental results demonstrate that the method delivers high-quality fusion performance and can facilitate many image processing applications.

Keywords: Complex contourlet transform, Complex wavelet transform, Fusion.

1635 Precious and Rare Metals in Overburden Carbonaceous Rocks: Methods of Extraction

Authors: Tatyana Alexandrova, Alexandr Alexandrov, Nadezhda Nikolaeva

Abstract:

The development of complex mineral resources is an urgent priority aimed at realizing processes for their ecologically safe exploitation; one of its components is revealing the influence of the forms of element compounds in raw materials and in the processing products. In view of the depletion of precious metal reserves at traditional deposits, in the XXI century large open-cast deposits localized in black shale strata have begun to play the leading role. Carbonaceous (black) shales carry a heightened metallogenic potential, and black shales with a high carbon content are widely distributed within the Bureinsky massif. According to data from Academician Hanchuk, the black shales of the Sutirskaya series generally contain PGEs in native form. The presence in the crude ore of gold and PGE compounds with a high affinity for carbonaceous matter reduces the extraction of valuable components because of their sorption onto dispersed carbonaceous matter.

Keywords: Carbonaceous rocks, bitumens, precious metals, concentration, extraction.

1634 Robust Camera Calibration using Discrete Optimization

Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck

Abstract:

Camera calibration is an indispensable step for augmented reality or image guided applications where quantitative information must be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Such a procedure involves typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics and, finally, optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm, along with a deterministic extension, that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.

Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.
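
A hedged sketch of the Monte Carlo subset-selection idea on top of OpenCV's calibrateCamera is shown below; obj_pts and img_pts are per-image correspondences assumed to be prepared elsewhere, and the subset size, trial count and lowest-RMS selection rule are illustrative simplifications of the paper's procedure.

```python
# Randomly sample image subsets and keep the calibration with the lowest RMS error.
import random
import cv2

def calibrate_subset(obj_pts, img_pts, image_size, idx):
    rms, K, dist, _, _ = cv2.calibrateCamera(
        [obj_pts[i] for i in idx], [img_pts[i] for i in idx], image_size, None, None)
    return rms, K, dist

def monte_carlo_calibration(obj_pts, img_pts, image_size,
                            subset_size=10, trials=200, seed=0):
    rng = random.Random(seed)
    best = (float("inf"), None, None, None)
    for _ in range(trials):
        idx = rng.sample(range(len(obj_pts)), subset_size)
        rms, K, dist = calibrate_subset(obj_pts, img_pts, image_size, idx)
        if rms < best[0]:                      # keep the subset with lowest error
            best = (rms, K, dist, idx)
    return best   # (RMS reprojection error, intrinsics, distortion, image subset)
```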

1633 Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm

Authors: Mahmoud Saeidi, Khadijeh Saeidi, Mahmoud Khaleghi

Abstract:

In this paper, we propose a novel spatiotemporal fuzzy-based algorithm for noise filtering of image sequences. The proposed algorithm uses adaptive weights based on triangular membership functions, and a median filter is used to suppress noise. Experimental results show that when the images are corrupted by high-density salt-and-pepper noise, our fuzzy-based algorithm is much more effective in suppressing noise and preserving edges than previously reported algorithms [1-7]. Indeed, the weights assigned to noisy pixels are highly adaptive, so they make good use of the correlation between pixels. Motion estimation methods, on the other hand, are error-prone and in high-density noise may degrade the filter performance; therefore, our proposed fuzzy algorithm does not need any estimation of the motion trajectory. The proposed algorithm removes noise effectively without any knowledge of the salt-and-pepper noise density.

Keywords: Image Sequences, Noise Reduction, fuzzy algorithm, triangular membership function

1632 Near-Lossless Image Coding based on Orthogonal Polynomials

Authors: Krishnamoorthy R, Rajavijayalakshmi K, Punidha R

Abstract:

In this paper, a near-lossless image coding scheme based on the Orthogonal Polynomials Transform (OPT) is presented. The polynomial operators and polynomial basis operators are obtained from a set of orthogonal polynomial functions for the proposed transform coding. The image is partitioned into a number of distinct square blocks, and the proposed transform coding is applied to each of these individually. After applying the proposed transform coding, the transformed coefficients are rearranged into a sub-band structure, and the Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes and its performance is compared with the existing Discrete Cosine Transform (DCT) coding scheme.

Keywords: Near-lossless Coding, Orthogonal Polynomials Transform, Embedded Zerotree Coding

1631 A Deep Learning Framework for Polarimetric SAR Change Detection Using Capsule Network

Authors: Sanae Attioui, Said Najah

Abstract:

The Earth's surface is constantly changing through forces of nature and human activities. Reliable, accurate and timely change detection is critical to environmental monitoring, resource management and planning activities. Recently, interest in deep learning algorithms, especially convolutional neural networks, has increased in the field of image change detection due to their powerful ability to extract multi-level image features automatically. However, these networks suffer from drawbacks that limit their applications, namely their inability to capture spatial relationships between image instances without a large amount of training data. As an alternative, the Capsule Network has been proposed to overcome these shortcomings. Although its effectiveness in remote sensing image analysis has been experimentally verified, its application to change detection tasks remains very sparse. Motivated by its greater robustness and improved hierarchical object representation, this study applies a capsule network to PolSAR image change detection. The experimental results demonstrate that the proposed change detection method can yield a significantly higher detection rate than methods based on convolutional neural networks.

Keywords: Change detection, capsule network, deep network, Convolutional Neural Networks, polarimetric synthetic aperture radar images, PolSAR images.

1630 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured across a wide range of the electromagnetic spectrum for each pixel in the image; hence, a pixel in an HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution. Semantic interpretation is therefore a challenging task in HSI analysis. In this paper we focus on object classification as HSI semantic interpretation. However, HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we propose to introduce a dimensionality reduction process to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships with different weights for the spatial neighbors of each centroid pixel. This semi-supervised band selection is designed to select bands useful for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods, and the experimental results demonstrate its efficacy relative to many existing dimensionality reduction methods for HSI classification.

Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.

1629 Bridging Quantitative and Qualitative of Glaucoma Detection

Authors: Noor Elaiza Abdul Khalid, Noorhayati Mohamed Noor, Zamalia Mahmud, Saadiah Yahya, and Norharyati Md Ariff

Abstract:

Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time consuming, and a number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency and variability between ophthalmologists' opinions and a digital thresholding technique. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. In a scientific investigation, three ophthalmologists graded the images based on image quality. The images were then thresholded using multithresholding and graded in the same way as by the ophthalmologists, and the grades from the ophthalmologists and from thresholding were compared. The results show only a small variability between the ophthalmologists' results and the digital threshold.

Keywords: Digital Fundus Image, Glaucoma Detection, Multithresholding, Segmentation.
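
An illustrative sketch of the channel-wise multithresholding step with scikit-image is given below; the number of classes and the remark about approximating the optic disc are assumptions for demonstration, not the paper's grading protocol.

```python
# Multi-Otsu thresholding applied separately to the gray, red, green and blue channels.
import numpy as np
from skimage.filters import threshold_multiotsu

def channel_regions(fundus_rgb, classes=3):
    gray = fundus_rgb.mean(axis=2).astype(np.uint8)
    channels = {"gray": gray,
                "red": fundus_rgb[..., 0],
                "green": fundus_rgb[..., 1],
                "blue": fundus_rgb[..., 2]}
    regions = {}
    for name, ch in channels.items():
        thresholds = threshold_multiotsu(ch, classes=classes)
        regions[name] = np.digitize(ch, bins=thresholds)   # labels 0..classes-1
    return regions   # the brightest class in the red/gray channels roughly marks the disc
```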
