Search results for: Image dehazing
845 FPGA Implement of a Vision Based Lane Departure Warning System
Authors: Yu Ren Lin, Yi Feng Su
Abstract:
Vision-based solutions for intelligent vehicle applications often require large amounts of memory to handle the video stream and image processing, which increases the complexity of both hardware and software. In this paper, we present an FPGA implementation of a vision-based lane departure warning system. For each video frame, line gradients are estimated and the lane marks are located. By analyzing the positions of the lane marks, departure of the vehicle is detected in time. The design has been implemented on a Xilinx Spartan-6 FPGA. The lane departure warning system uses 39% of the device's logic resources and none of its memory. The average availability is 92.5%, and the frame rate exceeds 30 frames per second (fps).
Keywords: Lane departure warning system, image, FPGA.
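For readers who want to prototype the same idea in software before committing to hardware, the sketch below detects lane-mark line segments from edge gradients and flags a drift toward the image centre. It is a minimal OpenCV/NumPy illustration, not the authors' FPGA pipeline; the function names, thresholds, and the simple departure test are assumptions.

```python
# Minimal software sketch of gradient-based lane mark detection on one frame.
import cv2
import numpy as np

def detect_lane_marks(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    roi = gray[gray.shape[0] // 2:, :]           # keep the lower half (road region)
    edges = cv2.Canny(roi, 50, 150)              # gradient-based edge map
    # Probabilistic Hough transform groups edge pixels into line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

def departure_warning(lines, frame_width, margin=0.15):
    # Very rough departure test: a steep (near-vertical) lane mark drifting into
    # the central band of the image suggests the vehicle is crossing it.
    center_lo = frame_width * (0.5 - margin)
    center_hi = frame_width * (0.5 + margin)
    for x1, y1, x2, y2 in lines:
        if abs(x2 - x1) < abs(y2 - y1):          # steep segment = likely lane mark
            x_mid = 0.5 * (x1 + x2)
            if center_lo < x_mid < center_hi:
                return True
    return False
```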
844 Rotation Invariant Face Recognition Based on Hybrid LPT/DCT Features
Authors: Rehab F. Abdel-Kader, Rabab M. Ramadan, Rawya Y. Rizk
Abstract:
The recognition of human faces, especially those with different orientations, is a challenging and important problem in image analysis and classification. This paper proposes an effective scheme for rotation-invariant face recognition using combined Log-Polar Transform and Discrete Cosine Transform features. The rotation-invariant feature extraction for a given face image involves applying the log-polar transform to eliminate the rotation effect and to produce a row-shifted log-polar image. The discrete cosine transform is then applied to eliminate the row-shift effect and to generate the low-dimensional feature vector. A PSO-based feature selection algorithm is utilized to search the feature vector space for the optimal feature subset. Evolution is driven by a fitness function defined in terms of maximizing the between-class separation (scatter index). Experimental results, based on the ORL face database with test sets of images at different orientations, show that the proposed system outperforms other face recognition methods. The overall recognition rate for the rotated test images is 97%, demonstrating that the extracted feature vector is an effective rotation-invariant feature set with a minimal number of selected features.
Keywords: Discrete Cosine Transform, Face Recognition, Feature Extraction, Log Polar Transform, Particle Swarm Optimization.
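The core of the feature extraction above can be sketched in a few lines: the log-polar resampling turns an image rotation into a row shift, and a 2-D DCT of the resampled image is used to reduce the effect of that shift. The snippet below is a minimal illustration assuming scikit-image and SciPy; the PSO-based feature selection stage and the exact feature-block size are not reproduced.

```python
import numpy as np
from skimage.transform import warp_polar
from scipy.fft import dctn

def lpt_dct_features(gray_image, n_low_freq=8):
    # Log-polar transform about the image centre: rotation becomes a row shift.
    lp = warp_polar(gray_image.astype(float), scaling='log')
    # 2-D DCT; keep only the low-frequency top-left block as the feature vector.
    coeffs = dctn(lp, norm='ortho')
    return np.abs(coeffs[:n_low_freq, :n_low_freq]).ravel()
```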
843 Study of Natural Patterns on Digital Image Correlation Using Simulation Method
Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish
Abstract:
Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide only very restricted coverage and are expensive to deploy widely, DIC provides full-field coverage and relatively high accuracy with an inexpensive and simple experimental setup. Studying the effect of natural patterns on DIC is important because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. The subset size, a parameter of DIC, can affect the processing and accuracy of DIC and can even cause it to fail. Regarding the correlation coefficient, higher similarity between two subsets can cause the DIC process to fail and make the result less accurate. Pictures of good and bad quality for DIC are presented and, more importantly, the work provides a systematic way to evaluate the quality of pictures with natural patterns before the measurement devices are installed.
Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.
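The matching step at the heart of DIC can be illustrated with a brute-force integer-pixel search: a reference subset is compared against candidate subsets in the deformed image using the zero-normalised cross-correlation (ZNCC) coefficient discussed above. This is a plain NumPy sketch under assumed subset size and search range; practical DIC codes add sub-pixel interpolation and shape functions.

```python
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_subset(ref_img, def_img, top_left, subset_size=21, search=10):
    """Return the integer displacement (du, dv) that maximises ZNCC."""
    y0, x0 = top_left
    ref = ref_img[y0:y0 + subset_size, x0:x0 + subset_size].astype(float)
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            y, x = y0 + dv, x0 + du
            if (y < 0 or x < 0 or y + subset_size > def_img.shape[0]
                    or x + subset_size > def_img.shape[1]):
                continue
            cand = def_img[y:y + subset_size, x:x + subset_size].astype(float)
            score = zncc(ref, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv, best   # displacement in pixels and its correlation coefficient
```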
842 Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems
Authors: Mario Mastriani
Abstract:
In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard used. In this context, the two compression rates are multiplied. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask inside the decoder restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.
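The scheme can be pictured with a CPU-only toy version: spatially down-sample before the standard codec, then up-sample and apply an edge-restoring convolution mask at the decoder. The JPEG codec, scale factor and unsharp-mask weights below are assumptions, and the GPGPU texture-memory implementation described in the abstract is not reproduced.

```python
import cv2
import numpy as np

def super_encode(frame, scale=0.5):
    # Spatial down-sampling before the standard (here JPEG) codec.
    small = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    ok, payload = cv2.imencode('.jpg', small, [cv2.IMWRITE_JPEG_QUALITY, 80])
    return payload                      # bytes to transmit or store

def super_decode(payload, out_size):
    # out_size is (width, height) of the original frame.
    small = cv2.imdecode(payload, cv2.IMREAD_COLOR)
    up = cv2.resize(small, out_size, interpolation=cv2.INTER_CUBIC)
    # Convolutive mask that restores edges (unsharp masking).
    blur = cv2.GaussianBlur(up, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(up, 1.5, blur, -0.5, 0)
```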
841 A New Algorithm for Enhanced Robustness of Copyright Mark
Authors: Harsh Vikram Singh, S. P. Singh, Anand Mohan
Abstract:
This paper discusses a new heavy-tailed distribution based data hiding scheme that embeds data into the discrete cosine transform (DCT) coefficients of an image, providing statistical security as well as robustness against steganalysis attacks. Unlike other data hiding algorithms, the proposed technique does not noticeably alter the stego-image's DCT coefficient probability plots, making the presence of hidden data statistically undetectable. In addition, the proposed method does not compromise hiding capacity. Compared to the generic block-DCT based data-hiding scheme, our method is found to be more robust against a variety of image manipulation attacks such as filtering, blurring, JPEG compression, etc.
Keywords: Information Security, Robust Steganography, Steganalysis, Pareto Probability Distribution function.
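For reference, the generic block-DCT embedding that the proposal is compared against can be sketched as follows: one payload bit per 8x8 block, hidden by quantising a mid-frequency coefficient (quantisation-index modulation). This is not the authors' heavy-tailed distribution model; the block size, coefficient position and quantisation step are assumptions, and extraction assumes no attack beyond pixel rounding.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(gray, bits, coeff=(3, 4), step=12.0):
    """Hide one bit per 8x8 block by quantising one mid-frequency DCT coefficient."""
    img = gray.astype(float).copy()
    h, w = img.shape
    bit_iter = iter(bits)
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            bit = next(bit_iter, None)
            if bit is None:
                return np.clip(img, 0, 255).astype(np.uint8)
            block = dctn(img[y:y + 8, x:x + 8], norm='ortho')
            q = int(np.round(block[coeff] / step))
            if q % 2 != bit:                 # move to the nearest slot of the right parity
                q += 1 if block[coeff] >= q * step else -1
            block[coeff] = q * step
            img[y:y + 8, x:x + 8] = idctn(block, norm='ortho')
    return np.clip(img, 0, 255).astype(np.uint8)

def extract_bits(stego, n_bits, coeff=(3, 4), step=12.0):
    """Recover the embedded bits (assuming no attack beyond rounding)."""
    bits = []
    h, w = stego.shape
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            if len(bits) == n_bits:
                return bits
            block = dctn(stego[y:y + 8, x:x + 8].astype(float), norm='ortho')
            bits.append(int(np.round(block[coeff] / step)) % 2)
    return bits
```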
840 Timescape-Based Panoramic View for Historic Landmarks
Authors: H. Ali, A. Whitehead
Abstract:
Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge: dealing with images that span a long period, from the 1800s up to the present. This work presents the concept of a temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Building this panorama requires a collection of hundreds of thousands of landmark images from the Internet, comprising both historic images and modern images of the digital age. These images have to be classified for subset selection, keeping the most suitable images that chronologically document a landmark's history. Processing historic images, captured with older analog technology under widely varying conditions, is a major challenge when they have to be combined with modern digital images. Successful processing of historic images in preparation for the subsequent steps of temporal panorama creation is an active contribution to cultural heritage preservation, fulfilling one of UNESCO's goals of preserving and displaying famous worldwide landmarks.
Keywords: Cultural heritage, image registration, image subset selection, registered image similarity, temporal panorama, timescapes.
839 Adaptive Total Variation Based on Feature Scale
Authors: Jianbo Hu, Hongbao Wang
Abstract:
The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, because a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of de-noising in each region of the image according to the local feature scale. Experimental results demonstrate the efficiency of the proposed algorithm: compared with standard Total Variation, it better preserves smaller-scale features and shows better performance.
Keywords: Adaptive, de-noising, feature scale, regularization parameter, Total Variation.
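To make the idea concrete, the sketch below runs an explicit TV-flow iteration with a per-pixel fidelity weight lambda(x) that is larger where the local gradient suggests fine features, so those regions are smoothed less. The weight map is a simple gradient-based proxy, not the paper's feature-scale definition; the step size, iteration count and lambda range are assumptions.

```python
import numpy as np

def adaptive_tv_denoise(f, n_iter=100, dt=0.1, eps=1e-3, lam_lo=0.05, lam_hi=0.5):
    """Explicit TV flow with a spatially varying fidelity weight lambda(x)."""
    f = f.astype(float)
    gy, gx = np.gradient(f)
    mag = np.hypot(gx, gy)
    # Larger lambda on small-scale features -> they are preserved; smaller lambda
    # in flat regions -> stronger de-noising there.
    lam = lam_lo + (lam_hi - lam_lo) * (mag / (mag.max() + 1e-12))
    u = f.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        dxx = np.gradient(ux / norm, axis=1)
        dyy = np.gradient(uy / norm, axis=0)
        curvature = dxx + dyy                     # div(grad u / |grad u|)
        u = u + dt * (curvature - lam * (u - f))  # TV flow + weighted fidelity
    return u
```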
838 An Improved Sub-Nyquist Sampling Jamming Method for Deceiving Inverse Synthetic Aperture Radar
Authors: Yanli Qi, Ning Lv, Jing Li
Abstract:
The Sub-Nyquist sampling jamming method (SNSJ) is a well-known deception jamming method for inverse synthetic aperture radar (ISAR). However, the SNSJ method is relatively easy to counter, since the amplitudes of the false-target images are weaker than that of the real-target image, the false-target images always lag behind the real-target image, and all targets are located in the same cross-range. In order to overcome these drawbacks, a simple modulation based on SNSJ (M-SNSJ) is presented in this paper. The method first uses an amplitude modulation factor to make the amplitudes of the false-target images consistent with the real-target image, then uses a down-range modulation factor and a cross-range modulation factor to make the false-target images move freely in down-range and cross-range, respectively, thus improving the capacity for deception. Finally, simulation results for the six available combinations of the three modulation factors are given to illustrate our conclusion.
Keywords: Inverse synthetic aperture radar, ISAR, deceptive jamming, Sub-Nyquist sampling jamming method, SNSJ, modulation based on Sub-Nyquist sampling jamming method, M-SNSJ.
837 Image Thresholding for Weld Defect Extraction in Industrial Radiographic Testing
Authors: Nafaâ Nacereddine, Latifa Hamami, Djemel Ziou
Abstract:
In non-destructive testing by radiography, accurate knowledge of the weld defect shape is an essential step in assessing the quality of the weld and deciding on its acceptance or rejection. Because of the complex nature of the images considered, and so that the detected defect region represents the real defect as accurately as possible, the choice of thresholding method must be made judiciously. In this paper, performance criteria are used to conduct a comparative study of thresholding methods based on the gray level histogram, the 2-D histogram, and a locally adaptive approach for weld defect extraction in radiographic images.
Keywords: 1D and 2D histogram, locally adaptive approach, performance criteria, radiographic image, thresholding, weld defect.
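As an illustration of the first and third families compared above, the snippet below applies a global histogram-based threshold (Otsu) and a locally adaptive threshold to the same radiograph using scikit-image. The 2-D histogram methods and the paper's performance criteria are not reproduced, and the mask polarity and block size are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_local

def threshold_comparison(radiograph):
    """Defect masks from a global histogram-based threshold (Otsu) and a locally
    adaptive threshold. Depending on the radiograph, defects may be darker than
    the surrounding weld; flip the comparison operator if so."""
    img = radiograph.astype(float)
    global_mask = img > threshold_otsu(img)
    local_mask = img > threshold_local(img, block_size=35, offset=0)
    return global_mask, local_mask
```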
836 Surface Defects Detection for Ceramic Tiles Using Image Processing and Morphological Techniques
Authors: H. Elbehiery, A. Hefnawy, M. Elewa
Abstract:
Quality control in ceramic tile manufacturing is hard and labor intensive, and it is performed in a harsh industrial environment with noise, extreme temperature, and humidity. It can be divided into color analysis, dimension verification, and surface defect detection, the last of which is the main purpose of our work. Defect detection is still based on the judgment of human operators, while most other manufacturing activities are automated. Our work therefore enhances quality control by integrating a visual control stage, using image processing and morphological operation techniques, before the packing operation in order to improve the homogeneity of the batches received by final users.
Keywords: Quality control, Defects detection, Visual control, Image processing, Morphological operation
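A minimal version of such a visual control stage is sketched below: a morphological opening/closing estimates the defect-free tile surface, and large deviations from it are kept as candidate defects. The kernel size and residual threshold are placeholders that would need tuning per tile type.

```python
import cv2
import numpy as np

def detect_surface_defects(tile_gray, kernel_size=15, thresh=30):
    """Highlight local deviations from the smooth tile surface (8-bit gray input)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening then closing estimates the defect-free background appearance.
    background = cv2.morphologyEx(tile_gray, cv2.MORPH_OPEN, kernel)
    background = cv2.morphologyEx(background, cv2.MORPH_CLOSE, kernel)
    residual = cv2.absdiff(tile_gray, background)
    _, mask = cv2.threshold(residual, thresh, 255, cv2.THRESH_BINARY)
    # Remove isolated noise pixels before reporting defects.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    return mask
```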
835 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection
Authors: Hussin K. Ragb, Vijayan K. Asari
Abstract:
In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of the signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor achieves better detection performance than a set of state-of-the-art feature extraction methodologies.
Keywords: Pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor.
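The texture half of the descriptor is easy to sketch: CSLBP compares the four centre-symmetric neighbour pairs around each pixel, giving a 4-bit code and a compact 16-bin histogram. The NumPy sketch below assumes images scaled to [0, 1] and radius 1; the phase congruency half and the block-wise concatenation into the full FST descriptor are not shown.

```python
import numpy as np

def cslbp(gray, threshold=0.01):
    """4-bit Center-Symmetric LBP codes (0..15) for the inner pixels of a
    grayscale image in [0, 1], using the 8-neighbourhood at radius 1."""
    g = gray.astype(float)
    inner = g[1:-1, 1:-1]
    # The four centre-symmetric neighbour pairs around each pixel.
    pairs = [
        (g[1:-1, 2:], g[1:-1, :-2]),   # east vs west
        (g[2:, 2:],   g[:-2, :-2]),    # south-east vs north-west
        (g[2:, 1:-1], g[:-2, 1:-1]),   # south vs north
        (g[2:, :-2],  g[:-2, 2:]),     # south-west vs north-east
    ]
    code = np.zeros_like(inner, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code

def cslbp_histogram(gray, bins=16):
    hist, _ = np.histogram(cslbp(gray), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```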
834 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map
Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo
Abstract:
Recently, technologies based on three-dimensional (3D) space information are being developed and quality of life is improving as a result. Research on real-time digital map (RDM) is being conducted now to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.
Keywords: RDM, multi-source data, big data, U-City.
833 Edge Segmentation of Satellite Image using Phase Congruency Model
Authors: Ahmed Zaafouri, Mounir Sayadi, Farhat Fnaiech
Abstract:
In this paper, we present a method for edge segmentation of satellite images based on the 2-D Phase Congruency (PC) model. The proposed approach is composed of two steps: a contextual non-linear smoothing algorithm (CNLS) is used to smooth the input images; then a 2-D stretched Gabor filter (S-G filter), based on a proposed angular variation, is developed in order to avoid the multiple responses of previous work. An assessment of the proposed method's performance is provided in terms of the accuracy of satellite image edge segmentation, and the method is compared with other known approaches.
Keywords: Edge segmentation, Phase congruency model, Satellite images, Stretched Gabor filter
832 Development of Algorithms for the Study of the Image in Digital Form for Satellite Applications: Extraction of a Road Network and Its Nodes
Authors: Z. Nougrara
Abstract:
In this paper we propose a novel methodology for extracting a road network and its nodes from satellite images of Algeria. The technique extends our previous research work. It is founded on information theory and mathematical morphology, which are combined to extract and link road segments into a road network and its nodes. We therefore define objects as sets of pixels and study the shape of these objects and the relations that exist between them. In this approach, geometric and radiometric features of roads are integrated through a cost function and a set of selected points at road crossings. The performance of the method was tested on satellite images of Algeria.
Keywords: Satellite image, road network, nodes.
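A morphology-only fragment of such a pipeline is sketched below: once a binary road mask has been obtained, skeletonization yields the one-pixel-wide network, and junction nodes fall out as skeleton pixels with three or more skeleton neighbours. The cost function and the information-theoretic segment linking described above are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def road_nodes(road_mask):
    """Thin a binary road mask to a one-pixel skeleton and report junction nodes
    as skeleton pixels with three or more skeleton neighbours."""
    skel = skeletonize(road_mask.astype(bool))
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbour_count = convolve(skel.astype(int), kernel, mode='constant')
    nodes = skel & (neighbour_count >= 3)
    return skel, np.argwhere(nodes)   # skeleton image and (row, col) node coordinates
```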
831 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel
Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi
Abstract:
Any signal transmitted over a channel is corrupted by noise and interference, and a host of channel coding techniques has been proposed to alleviate their effects. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia content, which involves large amounts of data, is best protected by Turbo codes. A Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which all the data in an information message is protected uniformly. Some applications require Unequal Error Protection (UEP), in which important information bits receive a higher level of protection than the other bits. In this work, the traditional Log-MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error-correcting performance with UEP in an Additive White Gaussian Noise (AWGN) channel and under Rayleigh fading is analyzed for the transmission of an image with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log-MAP, Modified Log-MAP (MlogMAP), and Enhanced Log-MAP (ElogMAP) algorithms for image transmission. The MlogMAP algorithm is found to be best at lower Eb/N0 values, but at higher Eb/N0 the ElogMAP algorithm performs better with optimized scaling factors. The performance comparison between the AWGN channel and the fading channel indicates the robustness of the proposed algorithm. According to the performance of the three message classes, class 3 is more protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for image transmission, compared to the Log-MAP and MlogMAP decoding algorithms.
Keywords: AWGN, BER, DCT, Fading, MAP, UEP.
830 Using Satellite Images Datasets for Road Intersection Detection in Route Planning
Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever
Abstract:
Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks, and understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of constructing and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of intersection detection in satellite images is evaluated.
Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.
829 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard Graph Cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using patch-based similarity measurements instead of the simple intensity measurements of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, and the graph is created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.
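The modified boundary term can be sketched directly: instead of comparing two raw intensities, the weight between neighbouring pixels is computed from the squared differences between their surrounding patches. The NumPy sketch below does this for horizontal neighbours only; vertical neighbours are analogous, the patch radius and sigma are placeholders, and the min-cut/max-flow step itself (e.g., via a max-flow library) is not shown.

```python
import numpy as np

def patch_boundary_weights(img, patch_radius=2, sigma=10.0):
    """Boundary-term weights between each pixel and its right-hand neighbour,
    computed from the mean squared difference between the two pixels' patches:
    w = exp(-MSD / (2 * sigma^2))."""
    img = img.astype(float)
    r = patch_radius
    padded = np.pad(img, r, mode='reflect')
    h, w = img.shape
    weights = np.zeros((h, w - 1))
    for y in range(h):
        for x in range(w - 1):
            p = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]           # patch around (y, x)
            q = padded[y:y + 2 * r + 1, x + 1:x + 2 * r + 2]       # patch around (y, x+1)
            msd = np.mean((p - q) ** 2)
            weights[y, x] = np.exp(-msd / (2.0 * sigma ** 2))
    return weights
```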
828 An Efficient Gaussian Noise Removal Image Enhancement Technique for Gray Scale Images
Authors: V. Murugan, R. Balasubramanian
Abstract:
Image enhancement is a challenging issue in many applications, and various filters have been developed over the last two decades. This paper proposes a novel method for removing Gaussian noise from gray scale images. The proposed technique is compared with the Enhanced Fuzzy Peer Group Filter (EFPGF) at various noise levels. Experimental results show that the proposed filter achieves a better peak signal-to-noise ratio (PSNR) than the existing techniques, with a gain of 1.736 dB in PSNR over the EFPGF technique.
Keywords: Gaussian noise, adaptive bilateral filter, fuzzy peer group filter, switching bilateral filter, PSNR
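A baseline for this kind of comparison is easy to reproduce: add synthetic Gaussian noise, denoise with a standard edge-preserving filter, and measure PSNR against the clean image. The sketch below uses OpenCV's bilateral filter as that baseline (it is not the proposed filter or EFPGF); the noise level and filter parameters are assumptions.

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

def denoise_and_score(clean_gray, noise_sigma=15):
    """Add synthetic Gaussian noise to an 8-bit gray image, denoise with a
    bilateral filter, and report PSNR before/after."""
    noise = np.random.normal(0, noise_sigma, clean_gray.shape)
    noisy = np.clip(clean_gray.astype(float) + noise, 0, 255).astype(np.uint8)
    denoised = cv2.bilateralFilter(noisy, d=9, sigmaColor=50, sigmaSpace=50)
    return (peak_signal_noise_ratio(clean_gray, noisy),
            peak_signal_noise_ratio(clean_gray, denoised))
```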
827 Calibration of Parallel Multi-View Cameras
Authors: M. Ali-Bey, N. Manamanni, S. Moughamir
Abstract:
This paper focuses on the calibration of a multi-view shooting system designed for the production of 3D content for auto-stereoscopic visualization. The multi-view camera considered is characterized by coplanar image sensors that are decentered with respect to their corresponding optical axes. Based on the Faugeras and Toscani calibration approach, a calibration method is proposed for the case of a multi-view camera with parallel and decentered image sensors. First, the geometrical model of the shooting system is recalled and some industrial prototypes with shooting simulations are presented. Next, the development of the proposed calibration method is detailed. Finally, some simulation results are presented before ending with conclusions about this work.
Keywords: Auto-stereoscopic display, camera calibration, multi-view cameras, visual servoing
826 Detecting Subsurface Circular Objects from Low Contrast Noisy Images: Applications in Microscope Image Enhancement
Authors: Soham De, Nupur Biswas, Abhijit Sanyal, Pulak Ray, Alokmay Datta
Abstract:
Particle detection in very noisy and low-contrast images is an active field of research in image processing. In this article, a method is proposed for the efficient detection and sizing of subsurface spherical particles, and it is applied to the processing of softly fused Au nanoparticles. Transmission Electron Microscopy (TEM) is used to image the nanoparticles, and the proposed algorithm has been tested on the two-dimensional projected TEM images obtained. Results are compared with data obtained by transmission optical spectroscopy, as well as with conventional circular object detection algorithms.
Keywords: Transmission Electron Microscopy, Circular Hough Transform, Au Nanoparticles, Median Filter, Laplacian Sharpening Filter, Canny Edge Detection
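The conventional baseline referenced in the keywords (median filtering followed by a circular Hough transform) can be sketched with OpenCV as shown below. The radius range and Hough parameters are placeholders that would need tuning for a given TEM magnification; this is not the authors' proposed detector.

```python
import cv2
import numpy as np

def detect_particles(tem_gray, min_r=5, max_r=50):
    """Median filtering to suppress noise, then a circular Hough transform to
    detect and size roughly circular particles in an 8-bit gray image.
    Returns an array of (x, y, radius) triples."""
    smoothed = cv2.medianBlur(tem_gray, 5)
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_r, param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return np.zeros((0, 3), dtype=int)
    return np.round(circles[0]).astype(int)
```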
825 Object Alignment for Military Optical Surveillance
Authors: Oscar J.G. Somsen, Fok Bolderheij
Abstract:
Electro-optical devices are increasingly used in military sea, land, and air applications to detect, recognize, and track objects. Typically, these devices produce video information that is presented to an operator. However, with the increasing availability of electro-optical devices, the data volume is becoming very large, creating a rising need for automated analysis. In a military setting, this typically involves detecting and recognizing objects at a large distance, i.e., when they are difficult to distinguish from background and noise. One may consider combining multiple images from a video stream into a single enhanced image that provides more information for the operator. In this paper we investigate a simple algorithm to enhance simulated images from a military context and investigate how the enhancement is affected by various types of disturbance.
Keywords: Electro-Optics, Automated Image alignment
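One simple way to combine multiple frames into a single enhanced image, in the spirit of the abstract above, is translation-only alignment followed by averaging. The sketch below uses phase cross-correlation from scikit-image for the shift estimate; it is an illustrative stand-in, not necessarily the algorithm investigated in the paper.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_and_stack(frames):
    """Align each frame to the first one by its estimated translation and average
    the aligned stack into a single, less noisy image."""
    ref = frames[0].astype(float)
    stack = [ref]
    for frame in frames[1:]:
        offset, _, _ = phase_cross_correlation(ref, frame.astype(float))
        stack.append(nd_shift(frame.astype(float), offset))
    return np.mean(stack, axis=0)
```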
824 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime
Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic light recognition method for the daytime. First, a Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces binary images for green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings; the noise removal criteria are based on blob properties of the binary image, such as length, area, and bounding-box area. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the Adaboost algorithm. For evaluation, the method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In the tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter
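The first two stages (colour segmentation plus a blob-level shape filter) can be sketched as follows with OpenCV. The YCrCb thresholds, area limits and aspect-ratio bounds are rough placeholders rather than the paper's tuned values, and the Haar/Adaboost classification stage is omitted.

```python
import cv2
import numpy as np

def traffic_light_candidates(frame_bgr, min_area=20, max_area=2000):
    """Rough daytime candidate detector: colour segmentation in YCrCb for red and
    green blobs, followed by a simple shape filter on blob area and aspect ratio."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV orders it Y, Cr, Cb
    red_mask = cv2.inRange(ycrcb, (60, 170, 0), (255, 255, 130))
    green_mask = cv2.inRange(ycrcb, (60, 0, 0), (255, 110, 120))
    candidates = []
    for colour, mask in (('red', red_mask), ('green', green_mask)):
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                               # label 0 is the background
            x, y, w, h, area = stats[i]
            aspect = w / float(h)
            if min_area <= area <= max_area and 0.5 <= aspect <= 2.0:
                candidates.append((colour, x, y, w, h))
    return candidates
```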
823 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks and the central 2×2 sub-blocks are selected. Depending on the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication, and recovery information. The size and position of the sub-blocks are important for correct localization, enhanced security, and fast computation. Since YS ⊥ T, the T channel is suitable for holding the recovery information apart from the ownership and authentication information; therefore the 4×4 blocks of the T channel, together with the ownership information, are processed by SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may lead to the condition H(m)=H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded with up to eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160
822 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by a hemoglobin level below the normal level. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images, and a machine learning approach was adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC, and the training set of features is analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifiers yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and RBFNN neural network, respectively. Classification was evaluated in terms of sensitivity, specificity, and the kappa statistic. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.
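A compact version of the classification stage (PCA for feature reduction followed by one of the classifiers reported above, here K-NN) can be written with scikit-learn as shown below. The feature matrix is assumed to hold the textural and geometrical measurements per cell; the number of components, the 70/30 split and k=3 are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def classify_rbc_features(features, labels, n_components=10):
    """features: (n_cells, n_features) array of textural/geometrical measurements;
    labels: class per cell (e.g., anemic vs. normal). PCA reduces the feature set
    before a K-NN classifier; accuracy and Cohen's kappa are reported."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0, stratify=labels)
    pca = PCA(n_components=n_components).fit(X_train)
    clf = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_train), y_train)
    pred = clf.predict(pca.transform(X_test))
    return accuracy_score(y_test, pred), cohen_kappa_score(y_test, pred)
```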
821 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines
Authors: Mrs.K.Kavitha, S.Arivazhagan
Abstract:
A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run-length feature extraction is employed along with principal components and independent components. A hyperspectral image of the Indiana site acquired by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty selected bands, and from the GLRLMs the run-length features for individual pixels are calculated. Principal components are calculated for another forty bands, and independent components for the remaining forty bands. As principal and independent components have the ability to represent the textural content of pixels, they are treated as features. The combination of the run-length features, principal components, and independent components forms the combined features used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.
820 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results
Authors: C. Villegas-Quezada, J. Climent
Abstract:
Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper, a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy, and several other pixel characteristics of the image, generating a set of "p" variables. The multivariate data set is approximated with polynomials that minimize the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the problem of dimensionality inherent to higher-degree polynomial approximations is circumvented. The GA yields the degree and the coefficient values of the set of polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials with each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
819 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences
Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng
Abstract:
Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events. In summary, we have explored the idea of a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.
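The detection step of such a pipeline can be sketched with OpenCV's adaptive background subtractor: for each frame, foreground blobs are extracted and their centroids collected, which is the raw input a Kalman filter would then smooth into per-object trajectories. The video path, area threshold and morphological cleanup are assumptions; the interaction analysis itself is not shown.

```python
import cv2
import numpy as np

def moving_object_centroids(video_path, min_area=500):
    """Adaptive background subtraction (MOG2) per frame, then centroids of the
    large foreground blobs in each frame."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (127)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        n, _, stats, centroids = cv2.connectedComponentsWithStats(fg)
        per_frame.append([tuple(centroids[i]) for i in range(1, n)
                          if stats[i, cv2.CC_STAT_AREA] >= min_area])
    cap.release()
    return per_frame
```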
818 Science School Was Burned: A Case Study of Crisis Management in Thailand
Authors: Proud Arunrangsiwed
Abstract:
This study analyzes the crisis management and image repair strategies used during the crisis of the Mahidol Wittayanusorn School (MWIT) library burning. The library of this school was burned by a 16-year-old male student on June 6th, 2010. The student blamed the school, saying that the lessons were difficult and that other students were selfish. Although no one was in the building during the fire, it caused around 130 million baht (4.4 million USD) of damage to the building, books, and electronic supplies. This event gave rise to much public debate about the education system and morality. The strategies used during the crisis were denial, shifting the blame, bolstering, minimization, and uncertainty reduction. The results of using these strategies appeared after the crisis: the number of new students who registered for the entrance examination to this school in later years remained the same.
Keywords: School, crisis management, violence, image repair strategies, uncertainty, burn.
817 Retrieval of Relevant Visual Data in Selected Machine Vision Tasks: Examples of Hardware-based and Software-based Solutions
Authors: Andrzej Śluzek
Abstract:
To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used complementarily to achieve various functionalities for different specifications of "relevant data". First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.
Keywords: Relevant visual data, gated imaging, intrusion detection, image matching.
816 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures
Abstract:
The image segmentation method described in this paper has been developed as a pre-processing stage for methodologies and tools for video/image indexing and retrieval by content. The method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and retrieval processes. The segmentation algorithm is based on the cooperation of an optical flow evaluation method, edge detection, and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieving good results in the presence of noise without the need for a pre-filtering stage, and it includes a specialized model for moving object detection. The first task of the presented method exploits cues from motion analysis for moving-area detection. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear against variously textured backgrounds.
Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow
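The moving-area detection step can be illustrated with a dense optical flow field between two consecutive frames, thresholded on flow magnitude. The sketch below uses OpenCV's Farnebäck estimator as a stand-in for the paper's differential estimator; the parameters and the magnitude threshold are assumptions, and the edge-detection and region-growing refinements are not shown.

```python
import cv2
import numpy as np

def moving_area_mask(prev_gray, curr_gray, mag_thresh=1.0):
    """Dense optical flow between two consecutive gray frames; pixels whose flow
    magnitude exceeds a threshold form the moving-area mask that would seed the
    later refinement stages."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.hypot(flow[..., 0], flow[..., 1])
    return (magnitude > mag_thresh).astype(np.uint8) * 255
```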