Search results for: motion history image
2488 Timescape-Based Panoramic View for Historic Landmarks
Authors: H. Ali, A. Whitehead
Abstract:
Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents the unique challenge of dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building such a panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising historic images and modern images of the digital age. These images have to be classified for subset selection to keep the images best suited to chronologically documenting a landmark's history. Processing historic images, captured with older analog technology under a variety of conditions, is a major challenge when they have to be used alongside modern digital images. Successful processing of historic images to prepare them for the subsequent steps of temporal panorama creation is an active contribution to cultural heritage preservation and fulfills one of UNESCO's goals: preserving and displaying famous landmarks worldwide.
Keywords: Cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes.
2487 Labeling Method in Steganography
Authors: H. Motameni, M. Norouzi, M. Jahandar, A. Hatami
Abstract:
In this paper, a method for hiding a text message in a grayscale image (steganography) is presented. The binary value of each character of the text message is found first; in the next stage, the dark (black) regions of the grayscale image are located by converting the original image to a binary image and labeling each object under 8-connectivity. These images are then converted to RGB images in order to isolate the dark regions, since each run of gray levels maps to an RGB color and the dark level of the grayscale image can thus be identified; if the grayscale image is very light, the histogram must be adjusted manually so that only dark regions are selected. In the final stage, each group of 8 dark-region pixels is treated as a byte, and the binary value of each character is placed in the low bit of each byte formed from the dark-region pixels, increasing the security of the basic LSB steganography method.
Keywords: Binary image, labeling, low bit, neighborhood, RGB image, steganography, threshold.
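A minimal Python sketch of the LSB embedding step described in this abstract, assuming the dark-region pixels have already been located and flattened into a 1-D array (the object labeling and manual histogram adjustment are not reproduced; the pixel values below are placeholders):

```python
import numpy as np

def embed_lsb(dark_pixels, message):
    """Embed each character of `message` into the least significant bit
    of the given pixel values (one bit per pixel, 8 pixels per character)."""
    bits = []
    for ch in message:
        bits.extend(int(b) for b in format(ord(ch), "08b"))
    if len(bits) > dark_pixels.size:
        raise ValueError("message too long for the available dark pixels")
    stego = dark_pixels.copy()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(stego_pixels, n_chars):
    """Recover `n_chars` characters from the LSBs of the pixel values."""
    bits = stego_pixels[: n_chars * 8] & 1
    chars = [chr(int("".join(map(str, bits[i:i + 8])), 2))
             for i in range(0, n_chars * 8, 8)]
    return "".join(chars)

# Example usage with hypothetical dark-region pixel values:
pixels = np.array([12, 7, 3, 9, 5, 14, 2, 8, 6, 11, 4, 13, 1, 10, 3, 7], dtype=np.uint8)
stego = embed_lsb(pixels, "Hi")
print(extract_lsb(stego, 2))   # -> "Hi"
```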
2486 Image Segmentation and Contour Recognition Based on Mathematical Morphology
Authors: Pinaki Pratim Acharjya, Esha Dutta
Abstract:
In image segmentation, contour detection is one of the important pre-processing steps. Contours characterize boundaries, and contour detection is one of the most difficult tasks in image processing; hence, it is a problem of fundamental importance. Contour detection of an image decreases the volume of data considerably and removes redundant information, while the structural properties of the image remain the same. In this research, a robust and effective contour detection technique using mathematical morphology is proposed. Three different contour detection results are obtained using morphological dilation and erosion, and a comparative analysis of the three results is presented.
Keywords: Image segmentation, contour detection, mathematical morphology.
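As an illustration of contour extraction with dilation and erosion, a small sketch using SciPy's grayscale morphology operators; the three specific variants compared in the paper are not reproduced, and what is shown are the standard symmetric, external, and internal morphological gradients:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_contours(img, size=3):
    """Return three contour maps obtained from dilation and erosion."""
    dil = grey_dilation(img, size=(size, size))
    ero = grey_erosion(img, size=(size, size))
    gradient = dil - ero          # symmetric morphological gradient
    external = dil - img          # boundary just outside objects
    internal = img - ero          # boundary just inside objects
    return gradient, external, internal

# Hypothetical usage on a synthetic grayscale image:
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
grad, ext, inner = morphological_contours(img)
```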
2485 Parallel Image Compression and Analysis with Wavelets
Authors: M. Kutila, J. Viitanen
Abstract:
This paper presents a wavelet-based image compression method. The wavelet transform divides an image into low-pass and high-pass filtered parts. The traditional JPEG compression technique requires less computational power and incurs acceptable losses when only compression is needed; however, wavelet-based methods are clearly needed in certain circumstances, namely in applications where image analysis is performed in parallel with compression. Furthermore, the high-frequency bands can be used to detect changes or edges. Wavelets enable hierarchical analysis of the low-pass filtered sub-images: a first analysis can be carried out on a small image, and only if anything of interest is found is the whole image processed or reconstructed.
Keywords: Image compression, JPEG, wavelet, VLC.
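A minimal sketch of the low-/high-pass subband split and a simple threshold on the detail coefficients, using the PyWavelets package; the paper's parallel analysis pipeline and VLC coding stage are not reproduced:

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet="haar", threshold=10.0):
    """Single-level 2D DWT, discard small detail coefficients, reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)        # low- and high-pass parts
    details = tuple(np.where(np.abs(d) < threshold, 0.0, d) for d in (cH, cV, cD))
    # the high-frequency bands cH, cV, cD could also be scanned here for edges/changes
    return pywt.idwt2((cA, details), wavelet), cA

# Hypothetical usage on a random test image:
img = np.random.rand(128, 128) * 255
reconstructed, approx = wavelet_compress(img)
```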
2484 Seismic Time History Analysis for Cable-Stayed Bridge Considering Different Geometrical Configuration For Near Field Earthquakes
Authors: Atul K. Desai
Abstract:
To increase the maximum span of cable-stayed bridges, Uwe Starossek has developed a modified statical system. The basic idea of this new concept is the use of pairs of inclined pylon legs that spread out longitudinally from the foundation base or from the girder level. Compared with traditional cable-stayed bridges, the spread-pylon cable-stayed bridge has distinct advantages, such as reduced cable sag and reduced cable oscillation during earthquakes. The spread pylon also improves the seismic performance of the deck during strong ground motion.
Keywords: Different geometry of cable stayed bridge, seismic time history analysis, earthquake displacement ratio, response mode shape.
2483 An Image Matching Method for Digital Images Using Morphological Approach
Authors: Pinaki Pratim Acharjya, Dibyendu Ghoshal
Abstract:
Image matching methods play a key role in establishing correspondence between two image scenes. This paper presents a method for matching digital images using mathematical morphology. The proposed method has been applied to real-life images, and the matching process has shown successful and promising results.
Keywords: Digital image, gradients, image matching, mathematical morphology.
2482 Using Rao-Blackwellised Particle Filter Track 3D Arm Motion based on Hierarchical Limb Model
Authors: XueSong Yu, JiaFeng Liu, XiangLong Tang, JianHua Huang
Abstract:
To improve the efficiency of 3D human tracking, we present an algorithm for tracking 3D arm motion. First, the Hierarchical Limb Model (HLM) is proposed based on the human 3D skeleton model. Second, via graph decomposition, the arm motion state space modeled by the HLM is decomposed into two low-dimensional subspaces: root nodes and leaf nodes. Finally, a Rao-Blackwellised particle filter is used to estimate the 3D arm motion. Experimental results show that our algorithm improves computational efficiency.
Keywords: Hierarchical Limb Model, Rao-Blackwellised particle filter, 3D tracking.
2481 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition (HAR) modeling is a critical task in machine learning. HAR systems require better techniques for recognizing body parts and selecting optimal features from vision sensors in order to identify complex action patterns efficiently. Considerable gaps and challenges remain between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection, using human and outer-shape detection techniques. Next, we extract valuable information in terms of cues, using two distinct features: fuzzy local binary patterns and sequence representation. We then apply a greedy randomized adaptive search procedure for data optimization and dimensionality reduction, and a random forest for classification. We tested the model on two benchmark datasets, AAMAZ and KTH Multi-view Football. Our HAR framework significantly outperforms other state-of-the-art approaches, achieving recognition rates of 91% and 89.6% on the AAMAZ and KTH Multi-view Football datasets, respectively.
Keywords: Computer vision, human motion analysis, random forest, machine learning.
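A hedged sketch of the feature-plus-classifier stage: a plain (not fuzzy) local binary pattern histogram per frame fed to a random forest. The fuzzy LBP, sequence representation, and GRASP-based optimization from the paper are not reproduced, and the frames and labels below are random placeholders:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram used as a simple texture descriptor."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: grayscale frames and action labels.
frames = [(np.random.rand(120, 160) * 255).astype(np.uint8) for _ in range(40)]
labels = np.random.randint(0, 4, size=40)

X = np.array([lbp_histogram(f) for f in frames])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```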
2480 Integral Image-Based Differential Filters
Authors: Kohei Inoue, Kenji Hara, Kiichi Urahama
Abstract:
We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related by simultaneous linear equations in which the numbers of unknowns and equations are equal; therefore, integration and differentiation can be carried out by solving the simultaneous equations. We apply this relationship to an image fusion problem and experimentally verify the effectiveness of the proposed method.
Keywords: Integral images, differential images, differential filters, image fusion.
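A small sketch of the underlying idea that box sums taken from an integral image can realize difference (derivative-like) filters in constant time per pixel; the paper's derivation via simultaneous equations and the image fusion application are not reproduced:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def horizontal_difference(img, half=2):
    """Right box minus left box around each pixel: a simple horizontal
    derivative filter built from two box sums."""
    ii = integral_image(img)
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for r in range(h):
        for c in range(half, w - half):
            left = box_sum(ii, r, c - half, r + 1, c)
            right = box_sum(ii, r, c, r + 1, c + half)
            out[r, c] = right - left
    return out

img = np.tile(np.arange(16, dtype=float), (8, 1))   # hypothetical ramp image
print(horizontal_difference(img)[4, 8])              # constant positive response
```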
2479 Evaluating Content Based Image Retrieval Techniques with the One Million Images CLIC Test Bed
Authors: Pierre-Alain Moëllic, Patrick Hède, Gregory Grefenstette, Christophe Millet
Abstract:
Pattern recognition and image recognition methods are commonly developed and tested using testbeds, which contain known responses to a query set. Until now, the testbeds available for image analysis and content-based image retrieval (CBIR) have been scarce and small-scale. Here we present the one-million-image CEA-List Image Collection (CLIC) testbed that we have produced and report on our use of this testbed to evaluate image analysis merging techniques. The testbed will soon be made publicly available through the EU MUSCLE Network of Excellence.
Keywords: CBIR, CLIC, evaluation, image indexing and retrieval, testbed.
2478 Techniques for Video Mosaicing
Authors: P. Saravanan, Narayanan C. K., P. V. S. S. Prakash, Prabhakara Rao G. V.
Abstract:
Video mosaicing is the stitching of selected frames of a video by estimating the camera motion between frames and thereby registering successive frames to arrive at the mosaic. Different techniques have been proposed in the literature for video mosaicing. Despite the large number of papers dealing with mosaic generation, only a few authors have investigated the conditions under which these techniques yield good estimates of the motion parameters. In this paper, these techniques are studied on different videos and the reasons for failure are identified. We propose algorithms that incorporate outlier removal for better estimation of the motion parameters.
Keywords: Motion parameters, outlier removal algorithms, registration, video mosaicing.
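A hedged sketch of inter-frame camera motion estimation with RANSAC-based outlier removal, using OpenCV feature matching and homography fitting; ORB is used here merely as a convenient detector, and the frame file names are placeholders:

```python
import cv2
import numpy as np

def estimate_motion(frame1, frame2):
    """Estimate the homography mapping frame1 onto frame2, with RANSAC
    rejecting outlier matches, and warp frame1 into frame2's coordinates."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame1, None)
    k2, d2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(frame1, H, (frame2.shape[1], frame2.shape[0]))
    return H, warped

# Hypothetical usage with two consecutive grayscale frames:
# f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# H, registered = estimate_motion(f1, f2)
```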
2477 Effect of Fill Material Density under Structures on Ground Motion Characteristics Due to Earthquake
Authors: Ahmed T. Farid, Khaled Z. Soliman
Abstract:
Due to limited areas and the excessive cost of land for projects, backfilling has become necessary. Backfilling is also used to overcome uneven depths or to raise the level of a construction site, especially near the sea. Therefore, backfill soil materials used under the foundations of structures should be investigated with regard to their effect on ground motion characteristics, especially in regions subjected to earthquakes. In this research, a 60-meter thickness of sandy fill material above a fixed 240-meter layer of natural clayey soil underlain by rock formation was used to predict the modified ground motion characteristics at the foundation level. The effects of three different compaction levels of the fill material on the recorded earthquake response are compared in terms of peak ground acceleration, time history, and spectral acceleration values. The three densities of compacted fill material studied were very loose, medium dense, and very dense sand deposits. The SHAKE computer program was used to perform the study. Strong earthquake records with a peak ground acceleration (PGA) of 0.35 g were used in the analysis. It was found that higher compaction of the fill material has a significant effect on reducing the earthquake ground motion at the surface of the fill layer, near the foundation level. It is recommended that fill material characteristics be considered in the design of foundations subjected to seismic motions. Future studies should analyze different fill and natural soil deposits under different seismic conditions.
Keywords: Fill, material, density, compaction, earthquake, PGA.
2476 Prediction of Seismic Damage Using Scalar Intensity Measures Based On Integration of Spectral Values
Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou
Abstract:
A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic intensity measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate the damage potential of ground motions, yet none has been proven able to predict seismic damage adequately. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on the integration of spectral values over a range of periods, in an attempt to account for the information provided by the shape of the acceleration, velocity, or displacement spectrum. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. For this purpose, three R/C buildings symmetric in plan are studied. The buildings are subjected to 59 bidirectional earthquake ground motions, with the two horizontal accelerograms of each ground motion applied along the structural axes. The response is determined by nonlinear time history analysis, and the structural damage is expressed in terms of the maximum interstory drift as well as an overall structural damage index. The values of these damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results reveals that the structure-specific IMs show higher correlation with the seismic damage of the three buildings; however, the adequacy of the IMs for estimating structural damage depends on the response parameter adopted. Furthermore, it is confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: Damage measures, Bidirectional excitation, Spectral based IMs, R/C buildings.
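An illustrative sketch of one spectrum-integral IM, defined here, as an assumption rather than the paper's exact definition, as the integral of the pseudo-acceleration spectrum over a period band around the fundamental period T1:

```python
import numpy as np

def spectral_integral_im(periods, Sa, T1, lo=0.2, hi=1.5):
    """Integrate Sa(T) from lo*T1 to hi*T1 with the trapezoidal rule.
    `periods` and `Sa` describe the 5%-damped response spectrum of a record."""
    T_band = np.linspace(lo * T1, hi * T1, 200)
    Sa_band = np.interp(T_band, periods, Sa)
    dT = np.diff(T_band)
    return float(np.sum(0.5 * (Sa_band[1:] + Sa_band[:-1]) * dT))

# Hypothetical spectrum values for one ground motion record:
periods = np.linspace(0.05, 4.0, 80)
Sa = 0.8 * np.exp(-periods)            # placeholder spectral shape (g)
print(spectral_integral_im(periods, Sa, T1=0.9))
```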
2475 Efficient CT Image Volume Rendering for Diagnosis
Authors: HaeNa Lee, Sun K. Yoo
Abstract:
Volume rendering is widely used in medical CT image visualization. Applying 3D image visualization to diagnostic applications can require accurate volume rendering at high resolution. Interpolation is important in medical image processing applications such as image compression and volume resampling; however, it can distort the original image data through edge blurring or blocking effects when image enhancement procedures are applied. In this paper, we propose an adaptive tension control method that exploits gradient information to achieve high-resolution medical image enhancement in volume visualization, in which the restored images are as similar to the original images as possible. The experimental results show that the proposed method improves image quality through the adaptive tension control.
Keywords: Tension control, interpolation, ray-casting, medical imaging analysis.
2474 RBF modeling of Incipient Motion of Plane Sand Bed Channels
Authors: Gopu Sreenivasulu, Bimlesh Kumar, Achanta Ramakrishna Rao
Abstract:
To define or predict incipient motion in an alluvial channel, most investigators use a standard or modified form of the Shields diagram. The Shields diagram does provide a procedure to determine the incipient motion parameters, but an iterative one. To design without iteration, another equation for resistance is needed, and the absence of a universal resistance equation magnifies the difficulty of defining the model. Neural network techniques, which are particularly useful for modeling complex processes, are presented as a tool complementary to modeling incipient motion. The present work develops a neural network model employing an RBF network to predict the average velocity u and water depth y based on experimental data at the incipient condition. Based on the model, design curves are presented for field application.
Keywords: Incipient motion, prediction error, radial basis function, sediment transport, Shields diagram.
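A small sketch of RBF-based prediction from experimental incipient-motion data, using SciPy's RBFInterpolator as a stand-in for the paper's RBF network; the input variables and sample values are placeholders, not the paper's data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical experimental table: (sediment size d [mm], bed slope S) -> (u [m/s], y [m])
X = np.array([[0.5, 0.001], [0.5, 0.002], [1.0, 0.001],
              [1.0, 0.002], [2.0, 0.001], [2.0, 0.002]])
Y = np.array([[0.30, 0.20], [0.35, 0.15], [0.40, 0.25],
              [0.46, 0.18], [0.52, 0.30], [0.60, 0.22]])

model = RBFInterpolator(X, Y, kernel="thin_plate_spline")
print(model(np.array([[1.5, 0.0015]])))   # predicted (u, y) at an unseen condition
```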
2473 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality
Authors: Sang-Chul Kim, Sung-Il Chien
Abstract:
The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous grayscale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradients of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
Keywords: Artifact suppression, edge enhancement, error diffusion method, halftone image.
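For reference, a plain Floyd-Steinberg error diffusion sketch; the paper's edge-enhancing filter and the adaptive noise added to the error-filter weights are not reproduced:

```python
import numpy as np

def floyd_steinberg(gray):
    """Standard error diffusion halftoning of a grayscale image in [0, 255]."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = int(new)
            err = old - new               # diffuse the quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.tile(np.linspace(0, 255, 64), (64, 1)))
```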
2472 A Novel Approach to Image Compression of Colour Images by Plane Reduction Technique
Authors: K. Sowmyan, A. Siddarth, D. Menaka
Abstract:
Several methods have been proposed for color image compression, but the reconstructed images have had very low signal-to-noise ratios, which made them inefficient. This paper describes a lossy compression technique for color images that overcomes these drawbacks. The technique works in the spatial domain, where the pixel values of the RGB planes of the input color image are mapped onto two-dimensional planes. The proposed technique produces better results than JPEG2000 and 2DPCA; a comparative study is reported based on image quality measures such as PSNR and MSE. Experiments on real images compare this methodology with previous ones and demonstrate its advantages.
Keywords: Color image compression, spatial domain, plane reduction, root mean square, image restoration.
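A quick sketch of the MSE and PSNR quality measures mentioned above, for comparing an original image with its reconstruction:

```python
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    err = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(err ** 2)
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr

# Hypothetical usage with two uint8 RGB arrays of the same shape:
a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255).astype(np.uint8)
print(mse_psnr(a, b))
```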
2471 Extracting Human Body based on Background Estimation in Modified HLS Color Space
Authors: Jang-Hee Yoo, Doosung Hwang, Jong-Wook Han, Ki-Young Moon
Abstract:
The ability to recognize humans and their activities by computer vision is a very important task with many potential applications. The study of human motion analysis is related to several research areas of computer vision, such as motion capture and the detection, tracking, and segmentation of people. In this paper, we describe a segmentation method for extracting the human body contour in a modified HLS color space. To estimate the background, the modified HLS color space is proposed and the background features are estimated using the HLS color components. A large human dataset, collected from DV cameras, is pre-processed, and the human body and its contour are successfully extracted from the image sequences.
Keywords: Background subtraction, human silhouette extraction, HLS color space, object segmentation.
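A minimal sketch of background estimation and silhouette extraction in the standard (not modified) HLS color space with OpenCV; the modified HLS formulation of the paper is not reproduced, and the video path is a placeholder:

```python
import cv2
import numpy as np

def extract_silhouettes(video_path, n_bg_frames=30, thresh=25):
    """Estimate a background from the first frames, then threshold the
    per-pixel HLS difference of each subsequent frame."""
    cap = cv2.VideoCapture(video_path)
    bg_frames = []
    for _ in range(n_bg_frames):
        ok, frame = cap.read()
        if not ok:
            break
        bg_frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2HLS).astype(np.float32))
    background = np.mean(bg_frames, axis=0)

    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS).astype(np.float32)
        diff = np.abs(hls - background).max(axis=2)   # largest channel difference
        mask = (diff > thresh).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        masks.append(mask)
    cap.release()
    return masks

# silhouettes = extract_silhouettes("walking_sequence.avi")  # hypothetical file
```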
2470 A Trends Analysis of Image Processing in Unmanned Aerial Vehicle
Authors: Jae-Neung Lee, Keun-Chang Kwak
Abstract:
This paper analyzes domestic and international trends in image processing for UAV (unmanned aerial vehicle) data and also describes UAVs and quadcopters. Overseas examples of image processing using UAVs include counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image processing using SIFT (scale-invariant feature transform) matching, and the application of median filtering and thresholding. In Korea, many studies are underway, including the visualization of new urban buildings.
Keywords: Image Processing, UAV, Quadcopter, Target detection.
2469 Robust Semi-Blind Digital Image Watermarking Technique in DT-CWT Domain
Authors: Samira Mabtoul, Elhassan Ibn Elhaj, Driss Aboutajdine
Abstract:
In this paper, a new robust digital image watermarking algorithm based on the complex wavelet transform is proposed. The technique embeds different parts of a watermark into different blocks of an image in the complex wavelet domain. To increase the security of the method, two chaotic maps are employed: one to determine the blocks of the host image used for watermark embedding, and another to encrypt the watermark image. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm.
Keywords: Image watermarking, chaotic map, DT-CWT.
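A small sketch of the chaotic-map idea: a logistic map generates a key-dependent permutation of block indices that could select host-image blocks for embedding. The DT-CWT embedding itself and the watermark encryption map are not reproduced; the key parameters are illustrative:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_block_order(n_blocks, key=(0.3141, 3.9999)):
    """Key-dependent ordering of block indices derived from the chaotic sequence."""
    seq = logistic_sequence(key[0], key[1], n_blocks)
    return np.argsort(seq)          # permutation of 0..n_blocks-1

print(chaotic_block_order(16))      # blocks visited in this secret order
```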
2468 Fuzzy Hyperbolization Image Enhancement and Artificial Neural Network for Anomaly Detection
Authors: Sri Hartati, Agus Harjoko, Brad G. Nickerson
Abstract:
A prototype anomaly detection system was developed to automate the process of recognizing anomalies in roentgen images by utilizing fuzzy histogram hyperbolization image enhancement and a back-propagation artificial neural network. The system consists of image acquisition, a pre-processor, a feature extractor, a response selector, and output. Fuzzy histogram hyperbolization is chosen to improve the quality of the roentgen image; its steps consist of fuzzification, modification of the membership function values, and defuzzification. Image features are extracted after the quality of the image has been improved, and the extracted features are input to the artificial neural network for anomaly detection. The number of nodes in the proposed ANN layers was kept small. Experimental results indicate that the fuzzy histogram hyperbolization method can be used to improve image quality and that the system is capable of detecting anomalies in roentgen images.
Keywords: Image processing, artificial neural network, anomaly detection.
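A hedged sketch of fuzzy histogram hyperbolization following a commonly cited formulation (linear membership function, exponent beta, hyperbolic mapping); the exact membership modification used in the paper may differ:

```python
import numpy as np

def fuzzy_histogram_hyperbolization(gray, beta=0.9, levels=256):
    """Enhance a grayscale image by hyperbolizing fuzzified gray levels."""
    g = gray.astype(np.float64)
    g_min, g_max = g.min(), g.max()
    mu = (g - g_min) / (g_max - g_min + 1e-12)       # fuzzification
    mu = mu ** beta                                  # membership modification
    out = (levels - 1) / (np.exp(-1.0) - 1.0) * (np.exp(-mu) - 1.0)  # defuzzification
    return np.clip(out, 0, levels - 1).astype(np.uint8)

# enhanced = fuzzy_histogram_hyperbolization(roentgen_image)  # hypothetical input
```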
2467 On the Reduction of Side Effects in Tomography
Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan
Abstract:
Because computed tomography (CT) normally requires hundreds of projections to reconstruct an image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, computed tomography requires many projections for good-quality reconstruction. In this paper, the low variability of the particles in an object is exploited to obtain good-quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be identical; if, in addition to the projections, a priori information about the image is known, a good-quality reconstructed image can be obtained. This paper shows experimentally why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its projections along rows and columns, together with a known sub-image of the unknown image and smoothness constraints, by reducing the reconstruction problem to an integral max flow problem. The paper also discusses necessary and sufficient conditions for uniqueness and the extension of 2D bi-level image reconstruction to 3D bi-level image reconstruction.
Keywords: Discrete tomography, image reconstruction, projection, computed tomography, integral max flow problem, smooth binary image.
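The paper reduces reconstruction with a known sub-image and smoothness constraints to integral max flow; for intuition only, here is the classical greedy (Gale-Ryser) reconstruction of a binary image from its row and column projections, without those extra constraints:

```python
import numpy as np

def reconstruct_binary(row_sums, col_sums):
    """Greedy Gale-Ryser reconstruction of a binary matrix whose row and
    column sums match the given projections; returns None if none is found."""
    m, n = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    A = np.zeros((m, n), dtype=int)
    cols = np.array(col_sums, dtype=int)
    for i in sorted(range(m), key=lambda k: -row_sums[k]):   # largest rows first
        r = row_sums[i]
        order = np.argsort(-cols, kind="stable")[:r]          # fullest columns first
        if r > n or (r > 0 and cols[order[-1]] <= 0):
            return None
        A[i, order] = 1
        cols[order] -= 1
    return A if np.all(cols == 0) else None

print(reconstruct_binary([2, 1, 3], [2, 2, 2]))
```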
2466 Salient Points Reduction for Content-Based Image Retrieval
Authors: Yao-Hong Tsai
Abstract:
Salient points are frequently used to represent local properties of an image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. The algorithm recursively reduces the continuous point set by the corresponding saliency values in a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. The experiments show that our method is robust and that the extracted salient points provide better retrieval performance than other point detectors.
Keywords: Barnard detector, content-based image retrieval, points reduction, salient point.
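A quick sketch of the Hausdorff distance used to compare salient point sets, via SciPy; the point coordinates are placeholders:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

query_pts = np.array([[10, 12], [40, 55], [70, 20]], dtype=float)
db_pts = np.array([[11, 13], [42, 50], [68, 25], [90, 90]], dtype=float)
print(hausdorff(query_pts, db_pts))
```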
2465 Medical Image Segmentation Using Deformable Model and Local Fitting Binary: Thoracic Aorta
Authors: B. Bagheri Nakhjavanlo, T. S. Ellis, P. Raoofi, Sh. Ziari
Abstract:
This paper presents an application of level sets to the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting the aorta is the need to overcome problems associated with intensity inhomogeneities. Level sets belong to an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A kernel function in the level set formulation aids the suppression of noise in the extracted regions of interest and then guides the motion of the evolving contour toward weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets, and the method is shown to be more effective than other approaches in coping with intensity inhomogeneities. The Courant-Friedrichs-Lewy (CFL) condition is applied as the stability criterion for the algorithm.
Keywords: Image segmentation, level sets, local fitting binary, thoracic aorta.
2464 Blur and Ringing Artifact Measurement in Image Compression using Wavelet Transform
Authors: Madhuri Khambete, Madhuri Joshi
Abstract:
Quality evaluation of an image is an important task in image processing applications. In image compression, the quality of the decompressed image is also the criterion for evaluating a given coding scheme. In the compression-decompression process, various artifacts such as blocking, blur, and ringing (edge) artifacts are observed; however, quantifying these artifacts is a difficult task. We propose a novel method to quantify blur and ringing artifacts in an image.
Keywords: Blur, Compression, Objective Quality assessment, Ringing artifact.
2463 Secure Image Retrieval Based On Orthogonal Decomposition under Cloud Environment
Authors: Yanyan Xu, Lizhi Xiong, Zhengquan Xu, Li Jiang
Abstract:
In order to protect data privacy, images with sensitive or private information need to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in this paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, the cloud server can extract features from an encrypted image directly and compare them with the features of the query images, so that the user can obtain the desired image. Unlike other methods, the proposed method places no special requirements on the encryption algorithm. Experimental results show that the proposed method achieves better security and better retrieval precision.
Keywords: Secure image retrieval, secure search, orthogonal decomposition, secure cloud computing.
2462 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. This paper adopts an approach to degraded document image analysis without optical character recognition (OCR). The technique involves document imaging methods such as image fusing and Speeded-Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image storage format is used as a tool to convert grayscale images to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.
Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.
2461 Image Rotation Using an Augmented 2-Step Shear Transform
Authors: Hee-Choul Kwon, Heeyong Kwon
Abstract:
Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is implemented as a rotation matrix multiplication, which requires many floating-point arithmetic operations and trigonometric calculations and therefore takes a long time to execute. There has thus been a need for a high-speed image rotation algorithm that avoids these two time-consuming operations. However, the rotated image then suffers from a drawback, namely distortions. We solve this problem using an augmented two-step shear transform. We compare the presented algorithm with conventional rotation on images of various sizes. Experimental results show that the presented algorithm is superior to conventional rotation.
Keywords: High speed rotation operation, image rotation, transform matrix, image processing, pattern recognition.
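For context, a sketch of the classic three-pass shear rotation (Paeth's method), in which each pass shifts whole rows or columns so no trigonometric rotation-matrix multiply is applied per pixel; the paper's augmented two-step variant is not reproduced here:

```python
import numpy as np

def rotate_by_shears(img, angle_deg):
    """Rotate a grayscale image with three shear passes instead of a
    rotation-matrix multiply (classic three-shear method)."""
    a = np.deg2rad(angle_deg)
    alpha, beta = -np.tan(a / 2.0), np.sin(a)
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.indices(img.shape)
    y, x = ys - cy, xs - cx
    x1 = np.round(x + alpha * y).astype(int)        # horizontal shear
    y2 = np.round(y + beta * x1).astype(int)        # vertical shear
    x3 = np.round(x1 + alpha * y2).astype(int)      # horizontal shear
    r, c = y2 + cy, x3 + cx
    out = np.zeros_like(img)
    ok = (r >= 0) & (r < h) & (c >= 0) & (c < w)
    out[r[ok], c[ok]] = img[ys[ok], xs[ok]]
    return out

square = np.zeros((64, 64)); square[24:40, 24:40] = 1.0
rotated = rotate_by_shears(square, 30)
```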
2460 Medical Image Segmentation Using Deformable Models and Local Fitting Binary
Authors: B. Bagheri Nakhjavanlo, T. J. Ellis, P. Raoofi, J. Dehmeshki
Abstract:
This paper presents a customized deformable model for the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting aortic aneurysms is the need to overcome problems associated with intensity inhomogeneities and image noise. Level sets belong to an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A Gaussian kernel function in the level set formulation, which extracts the local intensity information, aids the suppression of noise in the extracted regions of interest and then guides the motion of the evolving contour toward weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets. The results indicate that the method is more effective than other approaches in coping with intensity inhomogeneities.
Keywords: Abdominal and thoracic aortic aneurysms, intensity inhomogeneity, level sets, local fitting binary.
2459 Image Mapping with Cumulative Distribution Function for Quick Convergence of Counter Propagation Neural Networks in Image Compression
Authors: S. Anna Durai, E. Anna Saro
Abstract:
In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a counter propagation neural network, convergence takes a long time. The reason is that a given image may contain a number of distinct gray levels separated only narrowly from those of their neighboring pixels. If the gray levels of the pixels in an image and of their neighbors are mapped in such a way that the difference in gray level between a pixel and its neighbors is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the counter propagation neural network yields a high compression ratio and converges quickly.
Keywords: Correlation, counter propagation neural networks, cumulative distribution function, image compression.
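A minimal sketch of the CDF-based gray-level remapping step (essentially histogram-equalization-style mapping); the counter propagation network training itself is not reproduced:

```python
import numpy as np

def cdf_map(gray):
    """Remap gray levels through the image's empirical cumulative
    distribution function so neighboring levels become more uniform."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                       # normalize to [0, 1]
    mapped = np.round(cdf[gray] * 255).astype(np.uint8)
    return mapped, cdf

# Hypothetical dark input image:
dark = np.clip(np.random.normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
mapped, cdf = cdf_map(dark)
```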