Search results for: correlation features image fusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3857


3107 Study of Features for Hand-printed Recognition

Authors: Satish Kumar

Abstract:

The feature extraction methods used to recognize hand-printed characters play an important role in ICR applications. In order to achieve a high recognition rate, the choice of a feature set that suits the given script is certainly an important task. Even when a new feature has to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. The Devanagari script is used in various Indian languages besides Hindi, the mother tongue of the majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in the context of hand-printed Devanagari script. The study is conducted theoretically and experimentally on more than 10 feature extraction methods. The feature extraction methods have been evaluated on a hand-printed Devanagari database comprising more than 25000 characters belonging to 43 alphabets. The recognition ability of the features has been evaluated using three classifiers: k-NN, MLP and SVM.
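
As a rough illustration of the kind of evaluation described above (not the authors' code), the sketch below scores one pre-extracted feature set with the three classifiers named in the abstract; the feature matrix, labels and classifier settings are placeholder assumptions.

```python
# Illustrative sketch only: placeholder features and labels, hypothetical sizes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))      # placeholder feature vectors for one method
y = rng.integers(0, 43, size=600)   # placeholder labels for the 43 classes

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(128,), max_iter=300),
    "SVM": SVC(kernel="rbf", C=10.0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold recognition rate
    print(f"{name}: mean accuracy = {acc:.3f}")
```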

Keywords: Features, Hand-printed, Devanagari, Classifier, Database

3106 Region Segmentation based on Gaussian Dirichlet Process Mixture Model and its Application to 3D Geometric Structure Detection

Authors: Jonghyun Park, Soonyoung Park, Sanggyun Kim, Wanhyun Cho, Sunworl Kim

Abstract:

In general, image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours. It is therefore important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and a Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that pixels of a homogeneous region in an image usually lie close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method labels regions of the input image into coarse categories, "ground", "sky", and "vertical", for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
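
A minimal sketch of the nonparametric Bayesian clustering stage, using scikit-learn's Dirichlet-process Gaussian mixture as a stand-in; the tensor-voting step and the authors' exact features are omitted, and the image and feature choice are illustrative assumptions.

```python
# Illustrative sketch: DP Gaussian mixture on per-pixel colour + position features.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
img = rng.random((90, 120, 3))          # placeholder RGB outdoor image
h, w, _ = img.shape
yy, xx = np.mgrid[0:h, 0:w]
# Colour plus normalized position; position roughly separates ground/sky/vertical layers.
feats = np.column_stack([img.reshape(-1, 3), (yy / h).ravel(), (xx / w).ravel()])

dpgmm = BayesianGaussianMixture(
    n_components=10,                    # upper bound; the DP prior prunes unused components
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full", max_iter=200, random_state=0)
labels = dpgmm.fit_predict(feats).reshape(h, w)
print("regions found:", len(np.unique(labels)))
```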

Keywords: Region segmentation, tensor voting, image-based 3D, geometric structure, Gaussian Dirichlet process mixture model

3105 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer has become a pressing issue in medical science, and the rising incidence of such lesions is seriously affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains disturbances such as those around the lesion center. The proposed approach first locates the relevant part of the extracted skin image, and an image-partitioning model is applied to remove the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and classification is then carried out between the training and test data to evaluate the image and help doctors reach the right prediction. To improve on the existing system, we set our objectives through an analysis: the efficiency of the natural-selection process and histogram enhancement are essential in this respect, and the GA is applied to reduce the false-positive rate while maintaining accuracy. Conclusions: The objective of this work is to improve effectiveness, and the GA performs well in bringing down the false-positive rate. The concluding part of the paper relates to the combination of deep learning and medical image processing, which provides superior accuracy, and comparable processing enables reuse without errors.

Keywords: Computer-aided system, detection, image segmentation, morphology.

3104 Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding

Authors: K.Somasundaram, I.Kaspar Raj

Abstract:

In this paper, we propose three- and two-stage still gray-scale image compressors based on Block Truncation Coding (BTC). In our schemes, we employ a combination of four techniques to reduce the bit rate: quad-tree segmentation, bit-plane omission, bit-plane coding using 32 visual patterns, and interpolative bit-plane coding. The experimental results show that the proposed schemes achieve an average bit rate of 0.46 bits per pixel (bpp) for standard gray-scale images with an average PSNR value of 30.25 dB, which is better than the results from existing similar methods based on BTC.
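
For context, the sketch below shows plain AMBTC on 4x4 blocks of a toy image; the paper's full scheme adds quad-tree segmentation, bit-plane omission and pattern/interpolative bit-plane coding on top of this core, which is not reproduced here.

```python
# Illustrative sketch of the AMBTC core on a toy 8x8 image.
import numpy as np

def ambtc_encode(block):
    """One block -> bitmap plus two reconstruction levels."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, hi, lo

def ambtc_decode(bitmap, hi, lo):
    return np.where(bitmap, hi, lo)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
out = np.zeros_like(img)
for r in range(0, 8, 4):
    for c in range(0, 8, 4):
        bmp, hi, lo = ambtc_encode(img[r:r + 4, c:c + 4])
        out[r:r + 4, c:c + 4] = ambtc_decode(bmp, hi, lo)
mse = np.mean((img - out) ** 2)
print("PSNR of plain AMBTC on the toy image:", round(10 * np.log10(255 ** 2 / mse), 2), "dB")
```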

Keywords: Bit plane, Block Truncation Coding, Image compression, lossy compression, quad tree segmentation

3103 FPGA Implementation of a Vision-Based Blind Spot Warning System

Authors: Yu Ren Lin, Yu Hong Li

Abstract:

Vision-based intelligent vehicle applications often require large amounts of memory to handle video streaming and image processing, which in turn increases the complexity of hardware and software. This paper presents an FPGA implementation of a vision-based blind spot warning system. Using video frames, the information in the blind-spot area is reduced to one-dimensional information, and analysis of the estimated image entropy allows an object to be detected in time. This idea has been implemented on the XtremeDSP video starter kit. The blind spot warning system uses only 13% of the device's logic resources and 95 kbits of block memory, and its frame rate is over 30 frames per second (fps).
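
A software analogue of the entropy idea (not the FPGA design itself): estimate the entropy of the blind-spot region per frame and flag an object when it changes markedly. The region coordinates, threshold and synthetic frames are illustrative assumptions.

```python
# Illustrative sketch: entropy of the blind-spot window per frame, synthetic video.
import numpy as np

def region_entropy(gray_roi, bins=256):
    hist, _ = np.histogram(gray_roi, bins=bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(60, 240, 320))     # placeholder video frames
frames[30:40, 150:239, 230:319] = 128                  # simulate a vehicle entering the area
roi = (slice(120, 240), slice(200, 320))               # assumed blind-spot window
baseline = region_entropy(frames[0][roi])
for t in range(1, len(frames)):
    e = region_entropy(frames[t][roi])
    if abs(e - baseline) > 0.5:                        # illustrative decision threshold
        print(f"frame {t}: possible object in blind spot (entropy {e:.2f})")
```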

Keywords: blind-spot area, image, FPGA

3102 Analyzing the Changing Pattern of Nigerian Vegetation Zones and Its Ecological and Socio-Economic Implications Using Spot-Vegetation Sensor

Authors: B. L. Gadiga

Abstract:

This study assesses the major ecological zones in Nigeria with a view to understanding the spatial pattern of vegetation zones and the implications for conservation over a period of sixteen (16) years. Satellite images used for this study were acquired from SPOT-VEGETATION between 1998 and 2013. The annual NDVI images selected for this study were derived from the SPOT-4 sensor and were acquired within the same season (November) in order to reduce differences in spectral reflectance due to seasonal variations. The images were sliced into five classes based on the literature and knowledge of the area (i.e. <0.16 non-vegetated areas; 0.16-0.22 Sahel Savannah; 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah; and >0.47 Forest Zone). Classification of the 1998 and 2013 images into forested and non-forested areas showed that the forested area decreased from 511,691 km2 in 1998 to 478,360 km2 in 2013. A differencing change detection method was applied to the 1998 and 2013 NDVI images to identify areas of ecological concern. The result shows that areas undergoing vegetation degradation cover 73,062 km2, while areas witnessing some form of restoration cover 86,315 km2. The result also shows that there is a weak correlation between rainfall and the vegetation zones: the non-vegetated areas have a correlation coefficient (r) of 0.0088, the Sahel Savannah belt 0.1988, the Sudan Savannah belt -0.3343, the Guinea Savannah belt 0.0328 and the Forest belt 0.2635. The low correlation can be associated with the encroachment of the Sudan Savannah belt into the forest belt of the south-eastern part of the country, as revealed by the image analysis. The degradation of the forest vegetation is therefore responsible for the serious erosion problems witnessed in the South-east. The study recommends constant monitoring of vegetation and strict enforcement of environmental laws in the country.
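
A minimal sketch of the NDVI slicing and differencing steps described above, applying the class boundaries quoted in the abstract to toy arrays (the rasters and change thresholds are placeholders, not the study's data).

```python
# Illustrative sketch: NDVI slicing into the five classes and image differencing.
import numpy as np

bounds = [0.16, 0.22, 0.40, 0.47]
names = ["Non-vegetated", "Sahel Savannah", "Sudan Savannah", "Guinea Savannah", "Forest"]

def classify_ndvi(ndvi):
    # np.digitize maps each NDVI value to a class index 0..4 using the thresholds above
    return np.digitize(ndvi, bounds)

rng = np.random.default_rng(0)
ndvi_1998 = rng.uniform(0.0, 0.8, size=(100, 100))                          # toy NDVI rasters
ndvi_2013 = np.clip(ndvi_1998 + rng.normal(0, 0.05, size=(100, 100)), 0, 1)

cls98, cls13 = classify_ndvi(ndvi_1998), classify_ndvi(ndvi_2013)
diff = ndvi_2013 - ndvi_1998                                                # differencing change detection
print("degraded pixels:", int((diff < -0.05).sum()), " restored pixels:", int((diff > 0.05).sum()))
for k, name in enumerate(names):
    print(f"{name}: 1998 = {int((cls98 == k).sum())} px, 2013 = {int((cls13 == k).sum())} px")
```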

Keywords: Vegetation, NDVI, SPOT-vegetation, ecology, degradation.

3101 Motion Detection Techniques Using Optical Flow

Authors: A. A. Shafie, Fadhlan Hafiz, M. H. Ali

Abstract:

Motion detection is very important in image processing. One way of detecting motion is using optical flow. Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components; a second constraint is needed. The method used for finding the optical flow in this project assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. This technique is then used in developing motion detection software that can carry out four types of motion detection. The software can also highlight motion regions, measure the motion level, and count the number of objects. Many objects, such as vehicles and humans, can be recognized in video streams by applying the optical flow technique.
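
A minimal sketch of an optical-flow estimate under the smoothness assumption mentioned above (a Horn-Schunck-style iteration); the frames and parameters are toy values and this is not the project's software.

```python
# Illustrative sketch of a Horn-Schunck-style flow estimate on toy frames.
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_flow(frame1, frame2, alpha=1.0, n_iter=100):
    f1, f2 = frame1.astype(float), frame2.astype(float)
    Ix, Iy = np.gradient(f1, axis=1), np.gradient(f1, axis=0)
    It = f2 - f1
    u, v = np.zeros_like(f1), np.zeros_like(f1)
    for _ in range(n_iter):
        u_avg, v_avg = uniform_filter(u, 3), uniform_filter(v, 3)
        # Update enforcing brightness constancy plus the smoothness constraint.
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = u_avg - Ix * common, v_avg - Iy * common
    return u, v

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=1, axis=1)          # simulate motion to the right
u, v = smooth_flow(frame1, frame2)
motion_mask = np.hypot(u, v) > 0.1                 # highlight the moving region
print("pixels flagged as moving:", int(motion_mask.sum()))
```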

Keywords: Background modeling, Motion detection, Optical flow, Velocity smoothness constraint, motion trajectories.

3100 Fragile Watermarking for Color Images Using Thresholding Technique

Authors: Kuo-Cheng Liu

Abstract:

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme so that the tampered region of the color image can be recovered with high quality while the proofing result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information for recovery is computed by the thresholding technique. In the proofing process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high-quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered region with a high detection rate and can recover the tampered region with high quality.

Keywords: thresholding technique, tamper proofing, tamper recovery

3099 Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm

Authors: B. Thiagarajan, R. Bremananth

Abstract:

A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types and have different characteristics and treatments. A brain tumor is inherently serious and life-threatening because of its character within the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of the treatment of a brain tumor. This segmentation task requires classification of each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field use Markov Random Fields (MRF) for segmentation of MR images. Even though the segmentation quality is good, computing the probabilities and estimating the parameters is difficult. In order to overcome these issues, a Conditional Random Field (CRF) is used in this paper for segmentation, along with a modified artificial bee colony optimization and a modified fuzzy possibilistic c-means (MFPCM) algorithm. This work mainly focuses on reducing the computational complexity found in existing methods and on achieving higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.

Keywords: Conditional random field, Magnetic resonance, Markov random field, Modified artificial bee colony.

3098 Medical Image Segmentation and Detection of MR Images Based on Spatial Multiple-Kernel Fuzzy C-Means Algorithm

Authors: J. Mehena, M. C. Adhikary

Abstract:

In this paper, a spatial multiple-kernel fuzzy C-means (SMKFCM) algorithm is introduced for the segmentation problem. A linear combination of multiple kernels with spatial information is used in the kernel FCM (KFCM), and the updating rules for the linear coefficients of the composite kernels are derived as well. Fuzzy c-means (FCM) based techniques have been widely used in medical image segmentation due to their simplicity and fast convergence. The proposed SMKFCM algorithm provides a new flexible vehicle to fuse different pixel information in medical image segmentation and detection of MR images. To evaluate the robustness of the proposed segmentation algorithm in a noisy environment, we add noise to medical brain tumor MR images and calculate the success rate and segmentation accuracy. From the experimental results, it is clear that the proposed algorithm performs better than other FCM-based techniques for noisy medical MR images.
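
For orientation, a minimal sketch of plain fuzzy C-means on 1-D pixel intensities; the paper's spatial multiple-kernel variant adds composite kernels and neighborhood information on top of this and is not reproduced here. The data are synthetic.

```python
# Illustrative sketch of plain FCM on 1-D pixel intensities.
import numpy as np

def fcm(x, n_clusters=3, m=2.0, n_iter=100, eps=1e-9):
    """x: (n_samples, n_features). Returns cluster centers and fuzzy memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, len(x)))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)          # weighted means
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + eps
        u = 1.0 / d ** (2.0 / (m - 1.0))                          # inverse-distance memberships
        u /= u.sum(axis=0)
    return centers, u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.5, 0.05, 500),
                         rng.normal(0.8, 0.05, 500)]).reshape(-1, 1)
centers, u = fcm(pixels, n_clusters=3)
labels = u.argmax(axis=0)                                         # hard segmentation labels
print("estimated intensity-class centers:", np.sort(centers.ravel()).round(2))
```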

Keywords: Clustering, fuzzy C-means, image segmentation, MR images, multiple kernels.

3097 Dual Pyramid of Agents for Image Segmentation

Authors: K. Idir, H. Merouani, Y. Tlili.

Abstract:

An effective method for the early detection of breast cancer is mammographic screening. One of the most important signs of early breast cancer is the presence of microcalcifications. For the detection of microcalcifications in a mammography image, we propose a multi-agent system based on a dual irregular pyramid. An initial segmentation is obtained by an incremental approach; the result represents level zero of the pyramid. The edge information obtained by applying the Canny filter is taken into account to refine the segmentation. The edge agents and region agents cooperate level by level in the pyramid, exploiting its various characteristics to make the segmentation process converge.

Keywords: Dual Pyramid, Image Segmentation, Multi-agent System, Region/Edge Cooperation.

3096 The Water Level Detection Algorithm Using the Accumulated Histogram with Band Pass Filter

Authors: Sangbum Park, Namki Lee, Youngjoon Han, Hernsoo Hahn

Abstract:

In this paper, we propose a robust water level detection method based on an accumulated histogram, for images with small changes acquired from a water level surveillance camera. A general surveillance system detects and recognizes intrusion by searching for areas that change strongly across sequential images. In the case of a water level detection system, however, these general surveillance techniques are not suitable because the changes on the water surface are small. Therefore, the algorithm introduces an accumulated histogram that emphasizes the changes of the water surface in sequential images. The accumulated histogram is based on the current image frame and accumulates the differences between previous images and the current image. However, these differences also appear in the land region, and a band pass filter is used to remove this noise from the accumulated histogram. Finally, the algorithm clearly separates the water and land regions. After these steps, the algorithm converts the water level value in image space to the real water level in real space using a calibration table. The detected water level is sent to the host computer together with the current image. To evaluate the proposed algorithm, we use test images from various situations.
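
A minimal sketch of the accumulated-difference idea on synthetic frames: per-row change energy is accumulated over time, a crude band-pass stand-in removes the slowly varying baseline, and the land/water boundary is read off the profile. The data, filter and threshold are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch on synthetic frames: static land above, rippling water below.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
h, w, n_frames = 120, 160, 50
frames = np.zeros((n_frames, h, w))
frames[:, :70, :] = 0.6                                           # static land region
frames[:, 70:, :] = 0.3 + 0.05 * rng.random((n_frames, 50, w))    # slightly changing water

profile = np.zeros(h)
for prev, cur in zip(frames[:-1], frames[1:]):
    profile += np.abs(cur - prev).sum(axis=1)                     # accumulate per-row change
# Band-pass stand-in: smooth, then subtract the slowly varying baseline.
bandpassed = gaussian_filter1d(profile, 2) - gaussian_filter1d(profile, 15)
water_rows = np.where(bandpassed > 0)[0]
boundary = int(water_rows.min()) if water_rows.size else None
print("estimated land/water boundary at image row:", boundary)
```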

Keywords: accumulated histogram, water level detection, band pass filter.

3095 Harmonic Parameters with HHT and Wavelet Transform for Automatic Sleep Stages Scoring

Authors: Wei-Chih Tang, Shih-Wei Lu, Chih-Mong Tsai, Cheng-Yan Kao, Hsiu-Hui Lee

Abstract:

Previously, harmonic parameters (HPs) have been selected as features extracted from EEG signals for automatic sleep scoring. However, in previous studies only one HP parameter was used, which was directly extracted from the whole epoch of the EEG signal. In this study, two different transformations were applied to extract HPs from EEG signals: the Hilbert-Huang transform (HHT) and the wavelet transform (WT). EEG signals are decomposed by the two transformations, and features are extracted from the different components. Twelve parameters (four sets of HPs) were extracted, some of which are highly diverse among the different stages. Afterward, the HPs from the two transformations were used to build a rough sleep stage scoring model using an SVM classifier. The performance of this model is about 78% using the features obtained by our proposed extractions. Our results suggest that these features may be useful for automatic sleep stage scoring.

Keywords: EEG, harmonic parameter, Hilbert-Huang transform, sleep stages, wavelet transform.

3094 Deficiencies of Lung Segmentation Techniques using CT Scan Images for CAD

Authors: Nisar Ahmed Memon, Anwar Majid Mirza, S.A.M. Gilani

Abstract:

Segmentation is an important step in medical image analysis and classification for radiological evaluation or computer aided diagnosis. This paper presents the problem of inaccurate lung segmentation as observed in algorithms presented by researchers working in the area of medical image analysis. The different lung segmentation techniques have been tested using the dataset of 19 patients consisting of a total of 917 images. We obtained datasets of 11 patients from Ackron University, USA and of 8 patients from AGA Khan Medical University, Pakistan. After testing the algorithms against datasets, the deficiencies of each algorithm have been highlighted.

Keywords: Computer Aided Diagnosis (CAD), Mathematical Morphology, Medical Image Analysis, Region Growing, Segmentation, Thresholding.

3093 Voice Features as the Diagnostic Marker of Autism

Authors: Elena Lyakso, Olga Frolova, Yuri Matveev

Abstract:

The aim of the study is to determine the acoustic features of the voice and speech of children with autism spectrum disorders (ASD) as a possible additional diagnostic criterion. The participants in the study were 95 children with ASD aged 5-16 years, 150 typically developing (TD) children, and 103 adults who listened to the children's speech samples. Three types of experimental methods for speech analysis were used: spectrographic analysis, perceptual evaluation by listeners, and automatic recognition. In the speech of children with ASD, the pitch values, the pitch range, and the frequency and intensity of the third (emotional) formant, which lead to an "atypical" spectrogram of vowels, are higher than the corresponding parameters in the speech of TD children. High values of the vowel articulation index (VAI) are specific to the speech signals of children with ASD. These acoustic features can be considered a diagnostic marker of autism. The ability of humans and of automatic recognition to determine the psychoneurological state of children via their speech is also assessed.

Keywords: Autism spectrum disorders, biomarker of autism, child speech, voice features.

3092 Distinguishing Innocent Murmurs from Murmurs caused by Aortic Stenosis by Recurrence Quantification Analysis

Authors: Christer Ahlstrom, Katja Höglund, Peter Hult, Jens Häggström, Clarence Kvart, Per Ask

Abstract:

It is sometimes difficult to differentiate between innocent murmurs and pathological murmurs during auscultation. In these difficult cases, an intelligent stethoscope with decision support abilities would be of great value. In this study, using a dog model, phonocardiographic recordings were obtained from 27 boxer dogs with various degrees of aortic stenosis (AS) severity. As a reference for severity assessment, continuous wave Doppler was used. The data were analyzed with recurrence quantification analysis (RQA) with the aim of finding features able to distinguish innocent murmurs from murmurs caused by AS. Four out of eight investigated RQA features showed significant differences between innocent murmurs and pathological murmurs. Using a plain linear discriminant analysis classifier, the best pair of features (recurrence rate and entropy) resulted in a sensitivity of 90% and a specificity of 88%. In conclusion, RQA provides valid features which can be used for differentiation between innocent murmurs and murmurs caused by AS.
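
A minimal sketch of one RQA feature, the recurrence rate, computed from a time-delay embedding of synthetic murmur-like segments; the embedding parameters, threshold and signals are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: recurrence rate of embedded (synthetic) murmur segments.
import numpy as np

def recurrence_rate(signal, dim=3, delay=2, eps=0.1):
    n = len(signal) - (dim - 1) * delay
    # Time-delay embedding: each row is one reconstructed state vector.
    emb = np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return float((dists < eps).mean())   # fraction of recurrent point pairs

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 800)
regular = 0.5 * np.sin(2 * np.pi * 30 * t) + 0.05 * rng.normal(size=t.size)   # more structured signal
noisy = 0.5 * rng.normal(size=t.size)                                         # less structured signal
print("recurrence rate, structured segment:", round(recurrence_rate(regular), 3))
print("recurrence rate, noisy segment     :", round(recurrence_rate(noisy), 3))
```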

Keywords: Bioacoustics, murmur, phonocardiographic signal, recurrence quantification analysis.

3091 An Improved Single Point Closure Model Based on Dissipation Anisotropy for Geophysical Turbulent Flows

Authors: A. P. Joshi, H. V. Warrior, J. P. Panda

Abstract:

This paper is a continuation of the work carried out by various turbulence modelers in oceanography on the topic of oceanic turbulent mixing. It evaluates the evolution of ocean water temperature and salinity by appropriate modeling of turbulent mixing, using a proper prescription of the eddy viscosity. Many modelers in the past have suggested including terms like shear, buoyancy and vorticity as the parameters that determine the slow pressure-strain correlation. We add to this the fact that dissipation anisotropy also modifies the correlation through the eddy viscosity parameterization. This recalibrates the established correlation constants slightly and gives improved results. The anisotropization of dissipation implies that the critical Richardson number increases well beyond unity (to 1.66) to accommodate enhanced mixing, as is seen in reality. The model is run for a couple of test cases in the General Ocean Turbulence Model (GOTM) and the results are presented here.

Keywords: Anisotropy, GOTM, pressure-strain correlation, Richardson Critical number.

3090 Robust Camera Calibration using Discrete Optimization

Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck

Abstract:

Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information should be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from 3D world coordinates to 2D image coordinates. Such a procedure therefore exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper, we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.
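
A minimal sketch (not the authors' algorithm) of Monte Carlo image-subset selection with OpenCV: calibrate on random subsets of views and keep the subset with the smallest RMS reprojection error. Synthetic correspondences stand in for detected calibration marks, and the scenario parameters are assumptions.

```python
# Illustrative sketch: pick the image subset that minimizes the RMS reprojection error.
import numpy as np
import cv2

rng = np.random.default_rng(0)
objp = np.zeros((9 * 6, 3), np.float32)                 # synthetic 9x6 planar target, 25 mm pitch
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0
K_true = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)
img_size = (640, 480)

obj_pts, img_pts = [], []
for _ in range(12):                                     # 12 simulated calibration views
    rvec = rng.normal(0, 0.2, (3, 1)).astype(np.float32)
    tvec = (np.array([[-100.0], [-60.0], [600.0]]) + rng.normal(0, 20, (3, 1))).astype(np.float32)
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K_true, np.zeros(5, np.float32))
    proj = proj + rng.normal(0, 0.3, proj.shape)        # feature localization noise
    obj_pts.append(objp)
    img_pts.append(proj.astype(np.float32))

best_err, best_subset = np.inf, None
for _ in range(30):                                     # Monte Carlo subset trials
    idx = rng.choice(len(obj_pts), size=8, replace=False)
    err, K, _, _, _ = cv2.calibrateCamera(
        [obj_pts[i] for i in idx], [img_pts[i] for i in idx], img_size, None, None)
    if err < best_err:
        best_err, best_subset = err, sorted(idx.tolist())
print("best image subset:", best_subset, " RMS reprojection error:", round(best_err, 3))
```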

Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.

3089 Implication of the Exchange-Correlation on Electromagnetic Wave Propagation in Single-Wall Carbon Nanotubes

Authors: A. Abdikian

Abstract:

Using the linearized quantum hydrodynamic model (QHD) and considering the role of the quantum parameter (Bohm's potential) and the electron exchange-correlation potential in conjunction with Maxwell's equations, electromagnetic wave propagation in single-walled carbon nanotubes was studied, and the electronic excitations are described. By solving the mentioned equations with appropriate boundary conditions and by assuming low-frequency electromagnetic waves, two general expressions for the dispersion relations are derived for the transverse magnetic (TM) and transverse electric (TE) modes, respectively. The dispersion relations are analyzed numerically, and it was found that the dependence of the dispersion curves on the exchange-correlation effects (which have been ignored in previous works) is limited at low frequencies. Moreover, it was found that the asymptotic behaviors of the TE and TM modes are similar in single-wall carbon nanotubes (SWCNTs). The results show that adding the electron exchange-correlation potential contributes to these phenomena and extends the validity range of the QHD model. The results can be important in the study of collective phenomena in nanostructures.

Keywords: Transverse magnetic, transverse electric, quantum hydrodynamic model, electron exchange-correlation potential, single-wall carbon nanotubes.

3088 Near-Lossless Image Coding based on Orthogonal Polynomials

Authors: Krishnamoorthy R, Rajavijayalakshmi K, Punidha R

Abstract:

In this paper, a near-lossless image coding scheme based on the Orthogonal Polynomials Transform (OPT) is presented. The polynomial operators and polynomial basis operators are obtained from a set of orthogonal polynomial functions for the proposed transform coding. The image is partitioned into a number of distinct square blocks, and the proposed transform coding is applied to each of these individually. After applying the proposed transform coding, the transformed coefficients are rearranged into a sub-band structure. The Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes, and the performance is compared with the existing Discrete Cosine Transform (DCT) coding scheme.

Keywords: Near-lossless Coding, Orthogonal Polynomials Transform, Embedded Zerotree Coding

3087 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features

Authors: Rabab M. Ramadan, Elaraby A. Elgallad

Abstract:

With their wide application, unimodal biometric systems face a variety of problems such as signal and background noise, distortion, and differences in environment. Therefore, multimodal biometric systems have been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for presence in multimodal recognition systems. In this research, we introduce an algorithm for combining palm print and iris features extracted using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.

Keywords: Iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform.

3086 Classification Influence Index and its Application for k-Nearest Neighbor Classifier

Authors: Sejong Oh

Abstract:

Classification is an important topic in machine learning and bioinformatics. Many datasets have been introduced for classification tasks. A dataset contains multiple features, and the quality of the features influences the classification accuracy on the dataset. The classification power of each feature differs. In this study, we suggest the Classification Influence Index (CII) as an indicator of the classification power of each feature. CII enables evaluation of the features in a dataset and improves classification accuracy through transformation of the dataset. By conducting experiments using CII and the k-nearest neighbor classifier to analyze real datasets, we confirmed that the proposed index provides a meaningful improvement of the classification accuracy.
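
The CII formula itself is not reproduced in the abstract; as a rough stand-in for the idea of per-feature classification power, the sketch below scores each feature by the cross-validated accuracy of a k-NN classifier trained on that feature alone.

```python
# Illustrative stand-in: per-feature k-NN accuracy as a feature-influence score.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
scores = []
for j in range(X.shape[1]):
    # Accuracy of k-NN using only feature j, estimated by 5-fold cross-validation.
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, [j]], y, cv=5).mean()
    scores.append(acc)
ranking = np.argsort(scores)[::-1]
print("feature ranking (most influential first):", ranking.tolist())
print("per-feature accuracies:", np.round(scores, 3).tolist())
```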

Keywords: accuracy, classification, dataset, data preprocessing

3085 Efficient Antenna Array Beamforming with Robustness against Random Steering Mismatch

Authors: Ju-Hong Lee, Ching-Wei Liao, Kun-Che Lee

Abstract:

This paper deals with the problem of using antenna sensors for adaptive beamforming in the presence of random steering mismatch. We present an efficient adaptive array beamformer with robustness to deal with the considered problem. The robustness of the proposed beamformer comes from the efficient designation of the steering vector. Using the received array data vector, we construct an appropriate correlation matrix associated with the received array data vector and a correlation matrix associated with signal sources. Then, the eigenvector associated with the largest eigenvalue of the constructed signal correlation matrix is designated as an appropriate estimate of the steering vector. Finally, the adaptive weight vector required for adaptive beamforming is obtained by using the estimated steering vector and the constructed correlation matrix of the array data vector. Simulation results confirm the effectiveness of the proposed method.
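
A minimal sketch of the idea: take the principal eigenvector of a signal-dominated correlation matrix as the steering-vector estimate and form MVDR-type weights from it. The array scenario and the way the correlation matrices are built here are simplifying assumptions, not the paper's exact construction.

```python
# Illustrative sketch: eigenvector-based steering estimate plus MVDR-type weights.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 2000                                     # sensors, snapshots
d = 0.52                                           # assumed spacing/wavelength (with mismatch)
theta = np.deg2rad(20.0)
a_true = np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(theta))

s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)        # unit-power source
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = np.outer(a_true, s) + noise                    # received array data

R = X @ X.conj().T / N                             # sample correlation matrix of the data
eigvals, eigvecs = np.linalg.eigh(R)
a_hat = eigvecs[:, -1] * np.sqrt(M)                # principal eigenvector as steering estimate

Rinv = np.linalg.inv(R + 1e-3 * np.eye(M))         # small diagonal loading for stability
w = Rinv @ a_hat / (a_hat.conj() @ Rinv @ a_hat)   # MVDR-type adaptive weight vector
print("|w^H a_true| (close to 1 means the source direction is preserved):",
      round(float(np.abs(w.conj() @ a_true)), 3))
```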

Keywords: Adaptive beamforming, antenna array, linearly constrained minimum variance, robustness, steering vector.

3084 Robot Map Building from Sonar and Laser Information using DSmT with Discounting Theory

Authors: Xinde Li, Xinhan Huang, Min Wang

Abstract:

In this paper, a new method of information fusion, DSmT (Dezert and Smarandache Theory), is introduced to manage and deal with the uncertain information arising in robot map building. Here we build a grid map from sonar sensors and a laser range finder (LRF). The uncertainty mainly comes from the sonar sensors and the LRF. To address the uncertainty in a static environment, we propose the Classic DSm (DSmC) model for the sonar sensors and the laser range finder, and construct the general basic belief assignment function (gbbaf) for each. Generally speaking, evidence sources are unreliable in a physical system, so we must consider discounting theory before applying DSmT. Finally, a Pioneer II mobile robot serves as the simulation and experimental platform. We build a 3D grid map of the belief layout and then compare the effect of building the map using DSmT and DST. This simulation experiment proves that DSmT is successful and valid, especially in dealing with highly conflicting information. In short, this study not only presents a new method for building maps in a static environment, but also supplies a theoretical foundation for further applying Hybrid DSmT (DSmH) to dynamic unknown environments and to multi-robot collaborative map building.

Keywords: Map building, DSmT, DST, uncertainty, information fusion.

3083 Face Recognition using Radial Basis Function Network based on LDA

Authors: Byung-Joo Oh

Abstract:

This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm has achieved a recognition rate of 93.5%.

Keywords: Face recognition, linear discriminant analysis, radial basis function network.

3082 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object in a computer in advance and matching the observation against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks, a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3%, and the regression accuracy was about 98.9%.

Keywords: AlexNet, Deep learning, image recognition, 6D posture estimation.

3081 A New Method for Detection of Artificial Objects and Materials from Long Distance Environmental Images

Authors: H. Dujmic, V. Papic, H. Turic

Abstract:

The article presents a new method for the detection of artificial objects and materials in images of environmental (non-urban) terrain. Our approach uses the hue and saturation (or Cb and Cr) components of the image as the input to a segmentation module based on the mean shift method. The clusters obtained as the output of this stage are processed by a decision-making module in order to find the regions of the image with a significant possibility of representing a human. Although this method will detect various non-natural objects, it is primarily intended and optimized for the detection of humans, i.e. for search and rescue purposes in non-urban terrain where, in normal circumstances, non-natural objects shouldn't be present. Real-world images are used for the evaluation of the method.
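
A minimal sketch of the segmentation stage: cluster the hue/saturation plane with mean shift on a synthetic image. The simple saturation-based rule at the end is an illustrative stand-in for the authors' decision-making module, and all scene values are assumptions.

```python
# Illustrative sketch: mean shift clustering of the hue/saturation plane.
import numpy as np
import cv2
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
img = np.full((60, 80, 3), (80, 110, 75), np.uint8)          # greenish terrain (BGR)
img += rng.integers(0, 15, img.shape, dtype=np.uint8)        # natural variation
img[20:35, 30:45] = (30, 30, 200)                            # saturated red (artificial) patch

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hs = hsv[:, :, :2].reshape(-1, 2).astype(float)              # hue and saturation features
bw = estimate_bandwidth(hs, quantile=0.25, n_samples=500)
labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(hs).reshape(60, 80)

# Simple decision stand-in: flag clusters with unusually high mean saturation.
for lab in np.unique(labels):
    sat = hsv[:, :, 1][labels == lab].mean()
    tag = "<- candidate artificial region" if sat > 180 else ""
    print(f"cluster {lab}: {int((labels == lab).sum())} px, mean saturation {sat:.0f} {tag}")
```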

Keywords: Landscape surveillance, mean shift algorithm, image segmentation, target detection.

3080 Automatic Moment-Based Texture Segmentation

Authors: Tudor Barbu

Abstract:

An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Then, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validity indexes. From the resulting pixel classes, one can easily determine the desired texture regions of the image.
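
A minimal sketch of the pipeline: per-pixel moment-like features from a local window (here just the local mean and second central moment), clustered with K-means, with the number of clusters chosen by a validity index (the silhouette score as a stand-in for the paper's indexes). The image and window size are assumptions.

```python
# Illustrative sketch: local moment features, K-means, cluster count by silhouette.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, :32] = rng.normal(0.3, 0.02, (64, 32))         # smooth texture
img[:, 32:] = rng.normal(0.6, 0.15, (64, 32))         # rough texture

win = 7
m0 = uniform_filter(img, win)                         # local mean (order-0 normalized moment)
m2 = uniform_filter(img ** 2, win) - m0 ** 2          # local second central moment
feats = np.column_stack([m0.ravel(), m2.ravel()])     # per-pixel feature vectors

best_k, best_score, best_labels = None, -1.0, None
sample = rng.choice(len(feats), 1500, replace=False)  # subsample for the validity index
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    score = silhouette_score(feats[sample], labels[sample])
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels
segmentation = best_labels.reshape(img.shape)
print("chosen number of texture classes:", best_k)
```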

Keywords: Image segmentation, moment-based texture analysis, automatic classification, validity indexes.

3079 Development of Sleep Quality Index Using Heart Rate

Authors: Dongjoo Kim, Chang-Sik Son, Won-Seok Kang

Abstract:

Adequate sleep affects many aspects of one's overall physical and mental life. As one method of determining the appropriate amount of sleep, this research presents a heart rate based sleep quality index. In order to evaluate sleep quality using the heart rate, sleep data from 280 subjects taken over one month are used. Their sleep data are categorized by a three-part heart rate range. After categorization, several features are extracted, and the statistical significance of these features is verified. The results show that some features of this sleep quality index model are statistically significant. Thus, this heart rate based sleep quality index may be a useful discriminator of sleep quality.

Keywords: Sleep, sleep quality, heart rate, statistical analysis.

3078 A New Technique for Progressive ECG Transmission using Discrete Radon Transform

Authors: Amine Naït-Ali

Abstract:

The aim of this paper is to present a new method which can be used for the progressive transmission of electrocardiogram (ECG) signals. The idea consists of transforming any ECG signal into an image containing one beat in each row. In the first step, the beats are synchronized in order to reduce the high frequencies due to inter-beat transitions. The obtained image is then transformed using a discrete version of the Radon Transform (DRT). Hence, transmitting the ECG amounts to transmitting the most significant energy of the transformed image in the Radon domain. For decoding purposes, the receiver needs to use the inverse Radon transform as well as the two synchronization frames. The presented protocol can be adapted for lossy to lossless compression systems. In lossy mode, we show that the compression ratio can be multiplied by an average factor of 2 for an acceptable quality of the reconstructed signal. These results have been obtained on real signals from the MIT database.
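
A minimal sketch of the transform step using scikit-image: synchronized beats are stacked as image rows, the discrete Radon transform is taken, only the highest-energy coefficients are kept (a crude stand-in for progressive transmission), and the image is inverted. Synthetic beats and the 10% budget are illustrative assumptions.

```python
# Illustrative sketch: beat-per-row image, Radon transform, keep top coefficients, invert.
import numpy as np
from skimage.transform import radon, iradon

rng = np.random.default_rng(0)
beat_len, n_beats = 64, 64
t = np.linspace(0, 1, beat_len)
template = np.exp(-((t - 0.5) ** 2) / 0.002)               # crude QRS-like pulse
ecg_image = template + 0.02 * rng.normal(size=(n_beats, beat_len))   # one synchronized beat per row

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(ecg_image, theta=theta, circle=False)     # discrete Radon transform

keep = 0.10                                                # "transmit" only the top 10% of coefficients
thresh = np.quantile(np.abs(sinogram), 1 - keep)
sinogram_tx = np.where(np.abs(sinogram) >= thresh, sinogram, 0.0)

recon = iradon(sinogram_tx, theta=theta, circle=False, output_size=beat_len)
err = np.linalg.norm(recon - ecg_image) / np.linalg.norm(ecg_image)
print("relative reconstruction error with 10% of coefficients:", round(float(err), 3))
```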

Keywords: Discrete Radon Transform, ECG compression, synchronization.
