Search results for: Medical image registration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2058

1278 Rotation Invariant Face Recognition Based on Hybrid LPT/DCT Features

Authors: Rehab F. Abdel-Kader, Rabab M. Ramadan, Rawya Y. Rizk

Abstract:

The recognition of human faces, especially those with different orientations, is a challenging and important problem in image analysis and classification. This paper proposes an effective scheme for rotation-invariant face recognition using combined Log-Polar Transform and Discrete Cosine Transform features. Rotation-invariant feature extraction for a given face image involves applying the log-polar transform to eliminate the rotation effect and to produce a row-shifted log-polar image. The discrete cosine transform is then applied to eliminate the row-shift effect and to generate a low-dimensional feature vector. A PSO-based feature selection algorithm is utilized to search the feature vector space for the optimal feature subset. Evolution is driven by a fitness function defined in terms of maximizing the between-class separation (scatter index). Experimental results on the ORL face database, using test sets of images with different orientations, show that the proposed system outperforms other face recognition methods. The overall recognition rate for the rotated test images is 97%, demonstrating that the extracted feature vector is an effective rotation-invariant feature set with a minimal number of selected features.

Keywords: Discrete Cosine Transform, Face Recognition, Feature Extraction, Log Polar Transform, Particle Swarm Optimization.
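
To make the feature-extraction pipeline described in the abstract concrete, the following is a minimal Python sketch (not the authors' implementation), assuming OpenCV and SciPy are available; the output size, number of retained DCT coefficients, and file name are illustrative choices only.

```python
import cv2
import numpy as np
from scipy.fft import dctn

def lpt_dct_features(gray, out_size=(64, 64), n_coeffs=32):
    """Rotation-tolerant features: log-polar transform, then 2-D DCT.

    A rotation of the input becomes a row (angular) shift in log-polar
    space; keeping low-frequency DCT magnitudes suppresses that shift."""
    h, w = gray.shape
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)
    lp = cv2.warpPolar(gray.astype(np.float32), out_size, center,
                       max_radius, cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
    coeffs = dctn(lp, norm="ortho")
    feat = np.abs(coeffs[:n_coeffs, :n_coeffs]).ravel()
    return feat / (np.linalg.norm(feat) + 1e-9)

# Hypothetical usage:
# img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# f = lpt_dct_features(img)
```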

1277 Study of Natural Patterns on Digital Image Correlation Using Simulation Method

Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish

Abstract:

Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide only very restricted coverage and are expensive to deploy widely, the DIC technique provides full-field coverage with relatively high accuracy using an inexpensive and simple experimental setup. Studying the effect of natural patterns on the DIC technique is important because preparing artificial speckle patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. The subset size parameter used in DIC affects the processing and accuracy of DIC and can even cause it to fail. Regarding the image parameter (correlation coefficient), higher similarity between two subsets can cause the DIC process to fail and make the result less accurate. Images of good and bad quality for DIC methods are presented and, more importantly, a systematic way is given to evaluate the quality of images with natural patterns before the measurement devices are installed.

Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.
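
The core subset-matching step of DIC can be illustrated with a zero-normalized cross-correlation (ZNCC) search over integer displacements. This is a simplified sketch of the general technique, not the paper's simulation framework; the subset size and search range below are arbitrary, and sub-pixel refinement is omitted.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_subset(ref, deformed, top_left, subset=21, search=10):
    """Find the integer displacement of one reference subset by exhaustive
    ZNCC search in the deformed image."""
    y, x = top_left
    tpl = ref[y:y + subset, x:x + subset]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + subset > deformed.shape[0]
                    or xx + subset > deformed.shape[1]):
                continue
            c = zncc(tpl, deformed[yy:yy + subset, xx:xx + subset])
            if c > best:
                best, best_d = c, (dy, dx)
    return best_d, best  # displacement (dy, dx) and its correlation score
```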

1276 Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems

Authors: Mario Mastriani

Abstract:

In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard in use. In this context, the two compression rates are multiplied. In fact, supercompression is based on super-resolution; that is, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask is applied inside the decoder to restore the edges and eliminate the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.

Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.

1275 A New Algorithm for Enhanced Robustness of Copyright Mark

Authors: Harsh Vikram Singh, S. P. Singh, Anand Mohan

Abstract:

This paper discusses a new heavy-tailed-distribution-based data hiding scheme operating on the discrete cosine transform (DCT) coefficients of an image, which provides statistical security as well as robustness against steganalysis attacks. Unlike other data hiding algorithms, the proposed technique does not noticeably alter the stego image's DCT coefficient probability plots, thus making the presence of hidden data statistically undetectable. In addition, the proposed method does not compromise hiding capacity. Compared to the generic block-DCT-based data-hiding scheme, our method was found to be more robust against a variety of image manipulation attacks such as filtering, blurring, and JPEG compression.

Keywords: Information Security, Robust Steganography, Steganalysis, Pareto Probability Distribution function.
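
For context, here is a minimal sketch of the generic block-DCT data-hiding baseline that the abstract compares against (not the proposed Pareto-model embedding): one bit per 8×8 block is embedded by quantizing a mid-frequency DCT coefficient. The coefficient position and quantization step are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(gray, bits, pos=(4, 3), step=12.0):
    """Embed one bit per 8x8 block by forcing a mid-frequency DCT
    coefficient onto an even/odd multiple of `step` (generic baseline)."""
    out = gray.astype(np.float64).copy()
    h, w = out.shape
    idx = 0
    for by in range(0, h - 7, 8):
        for bx in range(0, w - 7, 8):
            if idx >= len(bits):
                break
            block = dctn(out[by:by + 8, bx:bx + 8], norm="ortho")
            q = int(np.round(block[pos] / step))
            if q % 2 != bits[idx]:          # force quantizer parity to match the bit
                q += 1
            block[pos] = q * step
            out[by:by + 8, bx:bx + 8] = idctn(block, norm="ortho")
            idx += 1
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

def extract_bits(stego, n_bits, pos=(4, 3), step=12.0):
    bits = []
    h, w = stego.shape
    for by in range(0, h - 7, 8):
        for bx in range(0, w - 7, 8):
            if len(bits) >= n_bits:
                return bits
            block = dctn(stego[by:by + 8, bx:bx + 8].astype(np.float64), norm="ortho")
            bits.append(int(np.round(block[pos] / step)) % 2)
    return bits
```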

1274 Adaptive Total Variation Based on Feature Scale

Authors: Jianbo Hu, Hongbao Wang

Abstract:

The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, because a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of de-noising in each region of the image, according to the relative information of the local feature scale. Experimental results demonstrate the efficiency of the proposed algorithm. Compared with standard Total Variation, our algorithm better preserves smaller-scale features and shows better performance.

Keywords: Adaptive, de-noising, feature scale, regularization parameter, Total Variation.
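
A minimal sketch of TV de-noising with a spatially varying regularization weight, illustrating the idea of adapting the parameter locally (a generic gradient-descent formulation, not the authors' exact scheme; deriving the weight from the smoothed gradient magnitude is an assumption for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_tv_denoise(f, n_iter=200, tau=0.1, eps=1e-2):
    """Gradient descent on  sum lam(x)*(u-f)^2/2 + |grad u|_eps.

    lam(x) is larger where the (smoothed) gradient is large, so edges and
    small-scale features are de-noised less and preserved better."""
    f = f.astype(np.float64)
    gy, gx = np.gradient(gaussian_filter(f, 2.0))
    gmag = np.hypot(gx, gy)
    lam = 0.05 + 0.5 * gmag / (gmag.max() + 1e-12)   # adaptive fidelity weight
    u = f.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux * ux + uy * uy + eps * eps)
        # divergence of the normalized gradient ~ TV curvature term
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u = np.clip(u + tau * (div - lam * (u - f)), 0, 255)
    return u
```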

1273 An Improved Sub-Nyquist Sampling Jamming Method for Deceiving Inverse Synthetic Aperture Radar

Authors: Yanli Qi, Ning Lv, Jing Li

Abstract:

The Sub-Nyquist sampling jamming method (SNSJ) is a well-known deception jamming method for inverse synthetic aperture radar (ISAR). However, countering the SNSJ method is easy because the amplitudes of the false-target images are weaker than that of the real-target image, the false-target images always lag behind the real-target image, and all targets are located in the same cross-range. In order to overcome the drawbacks mentioned above, a simple modulation based on SNSJ (M-SNSJ) is presented in this paper. The method first uses an amplitude modulation factor to make the amplitude of the false-target images consistent with that of the real-target image, and then uses a down-range modulation factor and a cross-range modulation factor to make the false-target images move freely in down-range and cross-range, respectively, thus improving the capacity for deception. Finally, simulation results for the six available combinations of the three modulation factors are given to illustrate our conclusion.

Keywords: Inverse synthetic aperture radar, ISAR, deceptive jamming, Sub-Nyquist sampling jamming method, SNSJ, modulation based on Sub-Nyquist sampling jamming method, M-SNSJ.

1272 Image Thresholding for Weld Defect Extraction in Industrial Radiographic Testing

Authors: Nafaâ Nacereddine, Latifa Hamami, Djemel Ziou

Abstract:

In non-destructive testing by radiography, perfect knowledge of the weld defect shape is an essential step in assessing the quality of the weld and deciding on its acceptance or rejection. Because of the complex nature of the considered images, and in order that the detected defect region represents the real defect as accurately as possible, the choice of thresholding method must be made judiciously. In this paper, performance criteria are used to conduct a comparative study of thresholding methods based on the gray-level histogram, the 2-D histogram, and a locally adaptive approach for weld defect extraction in radiographic images.

Keywords: 1D and 2D histogram, locally adaptive approach, performance criteria, radiographic image, thresholding, weld defect.
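
As an example of one standard gray-level-histogram thresholding method of the kind compared in such studies (not necessarily one of the exact criteria evaluated in the paper), here is a small self-contained Otsu implementation for 8-bit images:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing the between-class variance (Otsu).
    Expects an 8-bit single-channel image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability
    mu = np.cumsum(prob * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan              # avoid division by zero
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

# Usage: defect candidates are pixels on one side of the threshold,
# e.g. mask = gray > otsu_threshold(gray)
```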

1271 GPU-Based Volume Rendering for Medical Imagery

Authors: Hadjira Bentoumi, Pascal Gautron, Kadi Bouatouch

Abstract:

We present a method for fast volume rendering using graphics hardware (GPU). To our knowledge, it is the first implementation of this kind on the GPU. Based on the Shear-Warp algorithm, our GPU-based method provides real-time frame rates and outperforms the CPU-based implementation. When the number of slices is not sufficient, we add in-between slices computed by interpolation, which improves the quality of the rendered images. We have also implemented the ray marching algorithm on the GPU. The results generated by the three algorithms (CPU-based and GPU-based Shear-Warp, GPU-based Ray Marching) for two test models show that the ray marching algorithm outperforms the Shear-Warp methods in terms of speed-up and image quality.

Keywords: Volume rendering, graphics processors

1270 Surface Defects Detection for Ceramic Tiles Using Image Processing and Morphological Techniques

Authors: H. Elbehiery, A. Hefnawy, M. Elewa

Abstract:

Quality control in ceramic tile manufacturing is hard and labor intensive, and it is performed in a harsh industrial environment with noise, extreme temperature, and humidity. It can be divided into color analysis, dimension verification, and surface defect detection, which is the main purpose of our work. Defect detection is still based on the judgment of human operators, while most of the other manufacturing activities are automated. Our work is therefore a quality control enhancement: a visual control stage using image processing and morphological operation techniques is integrated before the packing operation to improve the homogeneity of batches received by final users.

Keywords: Quality control, Defects detection, Visual control, Image processing, Morphological operation
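
A minimal sketch of the kind of visual control stage described above, assuming OpenCV is available: the defect is isolated by comparing the tile image against a morphologically smoothed background estimate. The structuring-element size and threshold are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def detect_surface_defects(gray, kernel_size=15, thresh=20):
    """Flag local surface anomalies on a roughly uniform ceramic tile.

    Morphological opening followed by closing removes small structures,
    giving a defect-free background estimate; large absolute deviations
    from that background are kept as defect candidates."""
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, k)
    background = cv2.morphologyEx(background, cv2.MORPH_CLOSE, k)
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # remove isolated pixels from the defect mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)]   # (x, y, w, h) per blob
    return mask, boxes
```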

1269 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection

Authors: Hussin K. Ragb, Vijayan K. Asari

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on combining local phase information with texture features. Since the phase of a signal conveys more structural information than its magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless nature of phase congruency and the robustness of the CSLBP operator on flat image regions, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance than a set of state-of-the-art feature extraction methodologies.

Keywords: Pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor.
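
To make the texture half of the descriptor concrete, here is a small NumPy sketch of the CSLBP code for the 8-neighborhood; the comparison threshold is an illustrative parameter, and the phase-congruency half and the block-histogram concatenation of the full FST descriptor are omitted.

```python
import numpy as np

def cslbp(gray, t=3.0):
    """Center-Symmetric LBP: compare the 4 opposite neighbor pairs of the
    8-neighborhood; each interior pixel gets a 4-bit code in [0, 15].
    t is for 8-bit images; use ~0.01 for images normalized to [0, 1]."""
    g = gray.astype(np.float64)
    c = g[1:-1, 1:-1]  # interior pixels (sets the output shape)
    # 8 neighbors, ordered so that pairs (0,4), (1,5), (2,6), (3,7) are opposite
    n = [g[0:-2, 1:-1], g[0:-2, 2:],  g[1:-1, 2:],  g[2:, 2:],
         g[2:,  1:-1],  g[2:,  0:-2], g[1:-1, 0:-2], g[0:-2, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for i in range(4):
        code |= ((n[i] - n[i + 4]) > t).astype(np.uint8) << i
    return code

def cslbp_histogram(gray, t=3.0):
    """Normalized 16-bin CSLBP histogram of one image region (one descriptor block)."""
    hist = np.bincount(cslbp(gray, t).ravel(), minlength=16).astype(np.float64)
    return hist / (hist.sum() + 1e-12)
```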

1268 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) space information are being developed and quality of life is improving as a result. Research on real-time digital map (RDM) is being conducted now to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City.

1267 Differences in Enhancing Enrollment System between Web Application and Mobile Application for Rangsit University, Thailand

Authors: Thossaporn Thossansin, Auttapon Pomsathit

Abstract:

This paper presents a comparison between using a desktop web application and a mobile application for students enrolling in courses at Rangsit University, Thailand. Rangsit University has enhanced the enrollment process by leveraging its information systems, which allow students to enroll in courses online. In order to use the system, students must provide their identification and personal documents for registration. The reason for having a mobile application is to support students' ability to access the system at any time and from any place. The objectives of this paper were to: 1. evaluate the success of developing a user-friendly mobile system, and 2. measure user interest in future mobile development.

Keywords: Mobile Application, Web Application and Enrollment System.

1266 Edge Segmentation of Satellite Image using Phase Congruency Model

Authors: Ahmed Zaafouri, Mounir Sayadi, Farhat Fnaiech

Abstract:

In this paper, we present a method for edge segmentation of satellite images based on the 2-D Phase Congruency (PC) model. The proposed approach is composed of two steps: first, the contextual non-linear smoothing algorithm (CNLS) is used to smooth the input images; then, a 2-D stretched Gabor filter (S-G filter) based on the proposed angular variation is developed in order to avoid the multiple responses of previous work. An assessment of the proposed method's performance is provided in terms of the accuracy of satellite image edge segmentation. The proposed method is compared with other known approaches.

Keywords: Edge segmentation, Phase congruency model, Satellite images, Stretched Gabor filter

1265 Development of Algorithms for the Study of the Image in Digital Form for Satellite Applications: Extraction of a Road Network and Its Nodes

Authors: Z. Nougrara

Abstract:

In this paper, we propose a novel methodology for extracting a road network and its nodes from satellite images of Algeria. The developed technique is a continuation of our previous research work. It is founded on information theory and mathematical morphology, which are combined to extract and link road segments to form a road network and its nodes. We therefore have to define objects as sets of pixels and to study the shape of these objects and the relations that exist between them. In this approach, geometric and radiometric features of roads are integrated by a cost function and a set of selected points of a road crossing. Its performance was tested on satellite images of Algeria.

Keywords: Satellite image, road network, nodes.

1264 Soft Computing based Retrieval System for Medical Applications

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

With the increasing amount of data in medical databases, medical data retrieval is growing in popularity. Part of this analysis includes inducing propositional rules from databases using various soft computing techniques and then using these rules in an expert system. Diagnostic rules and information on features are extracted from clinical databases on diseases of congenital anomaly. This paper explains the latest soft computing techniques and some adaptive techniques, encompassing an extensive group of methods that have been applied in the medical domain and that are used for the discovery of data dependencies, the importance of features, patterns in sample data, and feature space dimensionality reduction. These approaches pave the way for new and interesting avenues of research in medical imaging and represent an important challenge for researchers.

Keywords: CBIR, GA, Rough sets, CBMIR, SVM.

1263 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel

Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi

Abstract:

Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia elements, which are associated with an ample amount of data, are best protected by Turbo codes. The Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo-coded systems employ Equal Error Protection (EEP), in which the protection of all the data in an information message is uniform. Some applications involve Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional Log-MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error-correcting performance in the presence of UEP in an Additive White Gaussian Noise (AWGN) channel and Rayleigh fading is analyzed for the transmission of an image, with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log-MAP, Modified Log-MAP (MlogMAP), and Enhanced Log-MAP (ElogMAP) algorithms used for image transmission. The MlogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 the ElogMAP performs better with optimized scaling factors. The performance comparison of the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three different message classes, class 3 is more protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for the transmission of an image, compared to the Log-MAP and MlogMAP decoding algorithms.

Keywords: AWGN, BER, DCT, Fading, MAP, UEP.

1262 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer the state of the art in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of constructing and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

1261 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, unlike the standard method.

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.
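
A small sketch of the patch-based boundary term: the n-link weight between neighboring pixels is computed from the SSD between the patches centered on them rather than from the two intensities alone. The Gaussian weighting and patch radius are illustrative; building the graph and running min-cut/max-flow (e.g. with a library such as PyMaxflow) is omitted.

```python
import numpy as np

def patch_boundary_weights(img, radius=2, sigma=0.1):
    """Right- and down-neighbor n-link weights from patch similarity.

    weight = exp(-SSD(patch_p, patch_q) / (2*sigma^2)); low weight where
    patches differ (a likely boundary), so the cut prefers to pass there."""
    g = img.astype(np.float64)
    g = (g - g.min()) / (g.ptp() + 1e-12)            # normalize to [0, 1]
    pad = np.pad(g, radius, mode="reflect")
    h, w = g.shape
    k = 2 * radius + 1
    # stack all shifted views: shape (h, w, k*k), one patch per pixel
    patches = np.stack([pad[dy:dy + h, dx:dx + w]
                        for dy in range(k) for dx in range(k)], axis=-1)

    def ssd(a, b):
        d = a - b
        return (d * d).sum(axis=-1) / (k * k)

    w_right = np.exp(-ssd(patches[:, :-1, :], patches[:, 1:, :]) / (2 * sigma ** 2))
    w_down = np.exp(-ssd(patches[:-1, :, :], patches[1:, :, :]) / (2 * sigma ** 2))
    return w_right, w_down   # to be fed into a min-cut/max-flow solver
```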

1260 An Efficient Gaussian Noise Removal Image Enhancement Technique for Gray Scale Images

Authors: V. Murugan, R. Balasubramanian

Abstract:

Image enhancement is a challenging issue in many applications. In the last two decades, various filters have been developed. This paper proposes a novel method that removes Gaussian noise from gray-scale images. The proposed technique is compared with the Enhanced Fuzzy Peer Group Filter (EFPGF) for various noise levels. Experimental results show that the proposed filter achieves a better Peak Signal-to-Noise Ratio (PSNR) than the existing techniques, with a gain of 1.736 dB in PSNR over the EFPGF technique.

Keywords: Gaussian noise, adaptive bilateral filter, fuzzy peer group filter, switching bilateral filter, PSNR
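
As an illustration of the kind of edge-preserving Gaussian-noise filtering and PSNR evaluation discussed above (the bilateral filter appears among the keywords; the filter parameters below are illustrative, not the paper's):

```python
import cv2
import numpy as np

def psnr(ref, test):
    """Peak Signal-to-Noise Ratio for 8-bit images, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# Hypothetical usage on a clean 8-bit grayscale image `clean`:
# noisy = np.clip(clean + np.random.normal(0, 15, clean.shape), 0, 255).astype(np.uint8)
# denoised = cv2.bilateralFilter(noisy, 9, 40, 7)   # d, sigmaColor, sigmaSpace
# print(psnr(clean, noisy), psnr(clean, denoised))
```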

1259 Calibration of Parallel Multi-View Cameras

Authors: M. Ali-Bey, N. Manamanni, S. Moughamir

Abstract:

This paper focuses on the calibration problem of a multi-view shooting system designed for the production of 3D content for auto-stereoscopic visualization. The considered multi-view camera is characterized by coplanar image sensors that are decentered with respect to the corresponding optical axes. Based on Faugéras and Toscani's calibration approach, a calibration method is proposed here for the case of a multi-view camera with parallel and decentered image sensors. First, the geometrical model of the shooting system is recalled and some industrial prototypes with shooting simulations are presented. Next, the development of the proposed calibration method is detailed. Finally, some simulation results are presented before ending with conclusions about this work.

Keywords: Auto-stereoscopic display, camera calibration, multi-view cameras, visual servoing

1258 Detecting Subsurface Circular Objects from Low Contrast Noisy Images: Applications in Microscope Image Enhancement

Authors: Soham De, Nupur Biswas, Abhijit Sanyal, Pulak Ray, Alokmay Datta

Abstract:

Particle detection in very noisy and low-contrast images is an active field of research in image processing. In this article, a method is proposed for the efficient detection and sizing of subsurface spherical particles, which is applied to the processing of softly fused Au nanoparticles. Transmission Electron Microscopy (TEM) is used for imaging the nanoparticles, and the proposed algorithm has been tested on the two-dimensional projected TEM images obtained. Results are compared with the data obtained by transmission optical spectroscopy, as well as with conventional circular object detection algorithms.

Keywords: Transmission Electron Microscopy, Circular Hough Transform, Au Nanoparticles, Median Filter, Laplacian Sharpening Filter, Canny Edge Detection
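
A short sketch, assuming OpenCV, of the median-filter plus circular Hough transform pipeline suggested by the keywords; the parameter values are illustrative and would need tuning to the scale and contrast of the TEM images.

```python
import cv2
import numpy as np

def detect_circular_particles(gray, r_min=5, r_max=40):
    """Detect roughly circular particles in a noisy, low-contrast 8-bit image."""
    smoothed = cv2.medianBlur(gray, 5)          # suppress impulse-like noise
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, 1.2, 2 * r_min,
                               param1=80, param2=25,
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return []
    # each detection is (x_center, y_center, radius)
    return [(int(x), int(y), int(r))
            for x, y, r in np.round(circles[0]).astype(int)]
```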

1257 Object Alignment for Military Optical Surveillance

Authors: Oscar J.G. Somsen, Fok Bolderheij

Abstract:

Electro-optical devices are increasingly used for military sea-, land- and air applications to detect, recognize and track objects. Typically, these devices produce video information that is presented to an operator. However, with increasing availability of electro-optical devices the data volume is becoming very large, creating a rising need for automated analysis. In a military setting, this typically involves detecting and recognizing objects at a large distance, i.e. when they are difficult to distinguish from background and noise. One may consider combining multiple images from a video stream into a single enhanced image that provides more information for the operator. In this paper we investigate a simple algorithm to enhance simulated images from a military context and investigate how the enhancement is affected by various types of disturbance.

Keywords: Electro-Optics, Automated Image alignment
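
A simple sketch of the enhancement idea described above (aligning consecutive frames and averaging them). Translation is estimated here with OpenCV's phase correlation, which is one possible alignment method, not necessarily the one used by the authors; the sign convention of the estimated shift should be verified against the OpenCV version in use.

```python
import cv2
import numpy as np

def align_and_average(frames):
    """Register each frame to the first by translation, then average.

    Averaging aligned frames suppresses temporal noise, which helps when
    the object of interest is small and barely above the background."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    for frame in frames[1:]:
        cur = frame.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, cur)   # sub-pixel shift of cur vs. ref
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])   # translate cur back onto ref
        acc += cv2.warpAffine(cur, m, (ref.shape[1], ref.shape[0]))
    return acc / len(frames)
```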

1256 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; and finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are then trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.

Keywords: CAD system, difference of feature, Fuzzy c means, Liver segmentation.
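
A compact sketch of the contrasting feature-difference idea, with simple intensity statistics standing in for the paper's full intensity and texture feature set; the feature list and the random-forest classifier are illustrative stand-ins (the paper does not specify an implementation library).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_stats(pixels):
    """Simple intensity features of one region (stand-in for richer texture features)."""
    p = pixels.astype(np.float64)
    return np.array([p.mean(), p.std(), np.median(p),
                     np.percentile(p, 10), np.percentile(p, 90)])

def lesion_descriptor(image, lesion_mask, surround_mask):
    """Contrasting feature difference: lesion features minus those of the
    surrounding normal tissue, reducing dependence on per-patient or
    per-scanner intensity ranges."""
    return region_stats(image[lesion_mask]) - region_stats(image[surround_mask])

# Hypothetical training loop over cases = [(image, lesion_mask, surround_mask, label), ...]:
# X = np.vstack([lesion_descriptor(im, lm, sm) for im, lm, sm, _ in cases])
# y = np.array([label for *_, label in cases])          # 0 = benign, 1 = malignant
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```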

1255 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces binary images of green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. At this stage, the noise removal criteria are based on properties of the blobs in the binary image: length, area, area of the bounding box, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the AdaBoost algorithm. For the experiments, the method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. Through these tests, our method was compared with standard object-recognition learning processes and shown to reach a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of our proposed method is 15 ms.

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter
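
A minimal sketch of the color segmentation and blob (shape) filtering stages, assuming OpenCV. The chroma thresholds and the area/aspect limits are illustrative guesses, not the paper's calibrated values, and the Haar-like feature + AdaBoost classification stage is omitted.

```python
import cv2
import numpy as np

def candidate_lights(bgr, min_area=30, max_area=2000):
    """Return bounding boxes of red/green blob candidates from a YCrCb mask."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
    _, cr, cb = cv2.split(ycrcb)
    red_mask = ((cr > 160) & (cb < 120)).astype(np.uint8) * 255    # illustrative thresholds
    green_mask = ((cr < 110) & (cb < 130)).astype(np.uint8) * 255
    boxes = []
    for mask, color in ((red_mask, "red"), (green_mask, "green")):
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):
            x, y, w, h, area = stats[i]
            aspect = w / float(h)
            # shape filter: keep compact, roughly round blobs of plausible size
            if min_area <= area <= max_area and 0.5 <= aspect <= 2.0:
                boxes.append((color, int(x), int(y), int(w), int(h)))
    return boxes
# The surviving boxes would then be passed to the classifier stage.
```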

1254 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with selection of the central 2×2 sub-blocks. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication, and recovery information. The size and position of the sub-block are important for correct localization, enhanced security, and fast computation. As YS ⊥ T, it is suitable to embed the recovery information apart from the ownership and authentication information; therefore, the 4×4 block of the T channel along with the ownership information is processed by SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may raise the condition H(m) = H(m'). For recovery, the intensity mean of the 4×4 block of each channel is computed and encoded in eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with a probability near one.

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

1253 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

Authors: Nadir A. Carreón, Christa Sonderer, Aakarsh Rao, Roman Lysecky

Abstract:

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly because of their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The Common Vulnerability Scoring System (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices, it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical device-oriented vulnerability scoring systems and the foundational CVSS.

Keywords: Common vulnerability system, medical devices, medical device security, vulnerabilities.

1252 Development of a Secured Telemedical System Using Biometric Feature

Authors: O. Iyare, A. H. Afolayan, O. T. Oluwadare, B. K. Alese

Abstract:

Access to advanced medical services has been one of the medical challenges faced by our present society, especially in distant geographical locations which may be inaccessible. Hence the need for telemedicine, through which live video of a doctor can be streamed to a patient located anywhere in the world at any time. Patients' medical records contain very sensitive information which should not be made accessible to unauthorized people, in order to protect privacy, integrity, and confidentiality. This research work focuses on a more robust security measure, namely biometrics (fingerprint), as a form of access control to patients' data by the medical specialist/practitioner.

Keywords: Biometrics, telemedicine, privacy, patient information.

1251 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images

Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia is a lack of RBCs and is characterized by the hemoglobin level compared to the normal level. In this study, an image-processing-based methodology was developed to localize and extract RBCs from microscopic images. A machine learning approach was then adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and RBFNN neural network, respectively. Classification was also evaluated with sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.

Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.
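
A compact sketch of the PCA-plus-classifier stage on a pre-computed feature matrix, assuming scikit-learn (the paper does not specify an implementation library); the number of components and of neighbors are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def train_rbc_classifier(features, labels, n_components=10, k=3):
    """features: (n_cells, n_features) textural/geometrical features per RBC;
    labels: class label (e.g. anemic vs. normal) per cell."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=k))
    scores = cross_val_score(model, features, labels, cv=5)   # quick sanity check
    model.fit(features, labels)
    return model, scores.mean()
```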

1250 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines

Authors: Mrs. K. Kavitha, S. Arivazhagan

Abstract:

A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. The classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is considered. Run-length feature extraction is employed along with principal components and independent components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the selected forty bands, and from the GLRLMs the run-length features for individual pixels are calculated. The principal components are calculated for another forty bands, and the independent components are calculated for the remaining forty bands. As principal and independent components have the ability to represent the textural content of pixels, they are treated as features. The combination of run-length features, principal components, and independent components forms the combined features, which are used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image. Results are validated with ground truth, and accuracies are calculated.

Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.

1249 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Authors: C. Villegas-Quezada, J. Climent

Abstract:

Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper, a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy, and several other characteristics of the image pixels, generating a set of p variables. The multivariate data set is approximated with different polynomials minimizing the data fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA) it is possible to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding a family of characteristic polynomials in several variables (pixel characteristics) for each face Fi in the database through a resampling process. A face F is recognized by finding its characteristic polynomials and applying an AdaBoost classifier from F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.

Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
