Search results for: Illumination Normalization
Paper Count: 172

172 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho

Abstract:

Robust face recognition under various illumination environments is difficult and must be achieved for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. The illumination normalization algorithm based on anisotropic smoothing is known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and blurs edges. The proposed method improves the previous anisotropic smoothing-based illumination normalization method so that it increases the intensity contrast and enhances edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method yield more distinctive feature vectors (Gabor feature vectors) for face recognition. Through face recognition experiments based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
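The following minimal sketch illustrates the general idea behind smoothing-based illumination normalization: a large-scale illumination field is estimated by smoothing, divided out to obtain reflectance, and the result is contrast-stretched. Gaussian smoothing stands in for the anisotropic smoothing used in the paper, and the function name and parameters are illustrative only.

```python
# Hedged sketch: smoothing-based illumination normalization (not the authors'
# exact algorithm). A large-scale illumination field L is estimated by smoothing,
# the reflectance R = I / L is taken as the normalized image, and a simple
# contrast stretch restores the intensity range.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(image, sigma=15.0, eps=1e-6):
    img = image.astype(np.float64)
    illumination = gaussian_filter(img, sigma=sigma)   # large-scale lighting estimate
    reflectance = img / (illumination + eps)           # divide out the illumination
    lo, hi = reflectance.min(), reflectance.max()      # contrast stretch back to [0, 255]
    return (255.0 * (reflectance - lo) / (hi - lo + eps)).astype(np.uint8)
```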

Keywords: Illumination Normalization, Face Recognition, Anisotropic smoothing, Gabor feature vector.

171 Face Localization and Recognition in Varied Expressions and Illumination

Authors: Hui-Yu Huang, Shih-Hang Hsu

Abstract:

In this paper, we propose a robust scheme for face alignment and recognition under varying influences. For face representation, illumination and variable expressions are the most important factors affecting the accuracy of facial localization and face recognition. To address these factors, we propose a robust approach consisting of two phases. The first phase preprocesses face images with the proposed illumination normalization method; the locations of facial features can then be fitted more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we improve the active shape model (called IASM) to locate the face shape more precisely, which increases the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.

Keywords: Gabor filter, improved active shape model (IASM), principal component analysis (PCA), face alignment, face recognition, support vector machine (SVM)

170 Adaptive Score Normalization: A Novel Approach for Multimodal Biometric Systems

Authors: Anouar Ben Khalifa, Sami Gazzah, Najoua Essoukri BenAmara

Abstract:

Multimodal biometric systems integrate the data presented by multiple biometric sources, hence offering better performance than systems based on a single biometric modality. Although the coupling of biometric systems can be done at different levels, fusion at the score level is the most common since it has proven more effective than fusion at the other levels. However, the scores from different modalities are generally heterogeneous, so a normalization step is needed to transform them into a common domain before combining them. In this paper, we study the performance of several normalization techniques with various fusion methods in the context of fusing three unimodal systems based on the face, the palmprint and the fingerprint. We also propose a new adaptive normalization method that takes into account the distributions of client scores and impostor scores. Experiments conducted on a database of 100 people show that the performance of a multimodal system depends on the choice of the normalization method and the fusion technique. The proposed normalization method gives the best results.
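For context, a brief sketch of score-normalization rules commonly compared in such studies (min-max, z-score, tanh) together with a simple sum-rule fusion is given below; the adaptive method proposed in the paper is not reproduced.

```python
# Sketch of common score-normalization rules and a sum-rule fusion of
# per-modality scores; names and the tanh scale factor are illustrative.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    # tanh-estimator style mapping into (0, 1)
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def sum_rule_fusion(score_lists, norm=min_max):
    # average of normalized scores across modalities
    return np.mean([norm(s) for s in score_lists], axis=0)
```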

Keywords: Multibiometrics, Fusion, Score level, Score normalization, Adaptive normalization.

169 Geometric Modeling of Illumination on the TFT-LCD Panel using Bezier Surface

Authors: Kyong-min Lee, Moon Soo Chang, PooGyeon Park

Abstract:

In this paper, we propose a geometric model of the illumination on a patterned image containing etched transistors, captured by a commercial camera during the inspection of a TFT-LCD panel. Defect inspection is an important process in the production of LCD panels, but regional differences in brightness, caused by an uneven illumination environment, have a negative effect on the inspection. To solve this problem, we present a geometric illumination model consisting of interpolation using the least squares method and 3D modeling using a Bezier surface. By using a sampling method, our computational time is shorter than that of previous methods. Moreover, the model can be further used to correct brightness in every patterned image.
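As a rough illustration of the modeling step, the sketch below fits a Bezier (Bernstein-basis) surface to sampled brightness values by least squares; sample coordinates are assumed to be scaled to [0, 1], and the degree and function names are illustrative.

```python
# Hedged sketch: least-squares fit of a Bezier patch to sampled brightness,
# as a stand-in for the illumination model described above.
import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def fit_bezier_surface(u, v, z, degree=3):
    """Least-squares control points of a degree x degree Bezier patch."""
    n = degree
    A = np.stack([bernstein(n, i, u) * bernstein(n, j, v)
                  for i in range(n + 1) for j in range(n + 1)], axis=1)
    ctrl, *_ = np.linalg.lstsq(A, z, rcond=None)
    return ctrl.reshape(n + 1, n + 1)

def eval_bezier_surface(ctrl, u, v):
    n = ctrl.shape[0] - 1
    basis = np.stack([bernstein(n, i, u) * bernstein(n, j, v)
                      for i in range(n + 1) for j in range(n + 1)], axis=1)
    return basis @ ctrl.ravel()
```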

Keywords: Bezier, defect, geometric modeling, illumination, inspection, LCD, panel.

168 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Authors: Seema Biday, Udhav Bhosle

Abstract:

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence of non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e. the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
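A minimal sketch of the relative normalization step, assuming the set of non-changing (pseudo-invariant) pixels has already been identified, is shown below; it fits a per-band linear gain and offset and reports R2 and MSE. The function name and interface are illustrative.

```python
# Hedged sketch: relative radiometric normalization of one band against a
# reference band using only the pixels in an invariant mask.
import numpy as np

def radiometric_normalize(subject_band, reference_band, invariant_mask):
    x = subject_band[invariant_mask].astype(float)
    y = reference_band[invariant_mask].astype(float)
    gain, offset = np.polyfit(x, y, 1)                 # least-squares line y = gain*x + offset
    normalized = gain * subject_band.astype(float) + offset
    pred = gain * x + offset
    mse = np.mean((y - pred) ** 2)
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    return normalized, r2, mse
```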

Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction

167 Applying Spanning Tree Graph Theory for Automatic Database Normalization

Authors: Chetneti Srisa-an

Abstract:

In the field of knowledge and data engineering, the relational database is the standard repository for storing real-world data and has been in use around the world for decades. Normalization is the most important process in the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion, and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is still rarely performed automatically rather than manually. Moreover, for today's large and complex databases, doing it manually is even harder. This paper presents a new, fully automated relational database normalization method. It first produces a directed graph and a spanning tree, and then proceeds to generate the 2NF, 3NF and BCNF normal forms. The benefit of this new algorithm is that it can cope with a large set of complex functional dependencies.
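The paper's spanning-tree algorithm is not reproduced here, but the sketch below shows the basic building block of any automatic normalization tool: computing the closure of an attribute set under a set of functional dependencies, which is how candidate keys and normal-form violations are tested. The relation and dependencies in the example are hypothetical.

```python
# Hedged sketch: attribute closure under functional dependencies.
def attribute_closure(attrs, fds):
    """fds: iterable of (lhs_set, rhs_set) functional dependencies."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

# Example: R(A, B, C, D) with A -> B and B -> CD; {A} determines every
# attribute, so {A} is a candidate key.
fds = [({"A"}, {"B"}), ({"B"}, {"C", "D"})]
assert attribute_closure({"A"}, fds) == {"A", "B", "C", "D"}
```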

Keywords: Relational Database, Functional Dependency, Automatic Normalization, Primary Key, Spanning tree.

166 Adaptive Block State Update Method for Separating Background

Authors: Youngsuck Ji, Youngjoon Han, Hernsoo Hahn

Abstract:

In this paper, we propose a robust moving object detection method for night street images that handles lighting effects by updating a block-based reference background model using block-state analysis. The experimental images are color video sequences acquired from a stationary camera. When artificial illumination such as street lights or sign lights suddenly appears, the reference background model is updated with this information. Natural illumination generally changes gradually over time, whereas artificial illumination appears suddenly, so the method detects artificial illumination exactly using a two-step process. The first step compares the difference between the current image and the reference background block by block, identifying the changed blocks. The second step compares the edge map of the current image with the edge map of the reference background image, which makes it possible to estimate the illumination in each block. This information allows objects and artificial illumination to be detected exactly and produces a cleaner reference background. Blocks are classified by block-state analysis into four states (transient, stationary, background, and artificial illumination); Fig. 1 shows the characteristics of each block state [1]. Experimental results show that the presented approach works well in the presence of illumination variance.

Keywords: Block-state, Edge component, Reference background, Artificial illumination.

165 Pose Normalization Network for Object Classification

Authors: Bingquan Shen

Abstract:

Convolutional Neural Networks (CNNs) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. For the problem where one has only limited viewpoints of a particular object for classification, we present a pose normalization architecture that transforms the object to viewpoints existing in the training dataset before classification, to yield better classification performance. We demonstrate that this Pose Normalization Network (PNN) can capture the style of the target object and re-render it at a desired viewpoint. Moreover, we show that the PNN improves the classification results on the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, compared to a CNN baseline.

Keywords: Convolutional neural networks, object classification, pose normalization, viewpoint invariant.

164 Electromagnetic Imaging of Inhomogeneous Dielectric Cylinders Buried in a Slab Medium by TE Wave Illumination

Authors: Chung-Hsin Huang, Chien-Ching Chiu, Chun Jen Lin

Abstract:

The electromagnetic imaging of inhomogeneous dielectric cylinders buried in a slab medium under transverse electric (TE) wave illumination is investigated. Dielectric cylinders of unknown permittivities are buried in the second space and illuminated by a group of unrelated waves incident from the first space, where the scattered field is recorded. By proper arrangement of the various unrelated incident fields, the difficulties of ill-posedness and nonlinearity are circumvented, and the permittivity distribution can be reconstructed through simple matrix operations. The algorithm is based on the moment method and the unrelated illumination method. Numerical results are given to demonstrate the capability of the inverse algorithm. Good reconstruction is obtained even in the presence of additive Gaussian random noise in the measured data. In addition, the effect of noise on the reconstruction result is investigated.

Keywords: Slab Medium, Unrelated Illumination Method, TE Wave Illumination, Inhomogeneous Cylinders.

163 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

Authors: Shagufta Tabassum

Abstract:

The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperation and molecular dynamics of H-bonded systems. Here we discuss the basic calibration and normalization procedures for TDR measurements. Our aim is to explain the different types of errors that occur during TDR measurements and how to minimize them.

Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique

162 Photograph Based Pair-matching Recognition of Human Faces

Authors: Min Yao, Kota Aoki, Hiroshi Nagahashi

Abstract:

In this paper, a novel system for pair-matching recognition of human faces from different color photographs is proposed. It mainly consists of face detection, normalization and recognition. A method combining Haar-like face detection, skin color segmentation and region-based histogram stretching (RHST) is proposed to achieve more accurate performance than using Haar-like features alone. Apart from an effective angle normalization, a side-face (pose) normalization, which might be important and beneficial for the preprocessing, is introduced. Histogram-based and photometric normalization methods are then investigated, and adaptive single-scale retinex (ASR) is selected for its satisfactory illumination normalization. Finally, a weighted multi-block local binary pattern with three distance measures is applied for pair-matching recognition. Experimental results show its advantageous performance compared with PCA and multi-block LBP.

Keywords: Face detection, pair-matching recognition, pose normalization, skin color segmentation.

161 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images and each one is tested separately using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using faster neural processors alone, the speed-up ratio increases with the size of the input image when faster neural processors and image decomposition are used together. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases because the normalization of weights is done off-line.
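A minimal sketch of frequency-domain cross correlation between an image and a (normalized) template is shown below; it illustrates the operation that such fast neural processors compute, not the paper's full decomposition and parallelization scheme. The template is assumed to be smaller than the image.

```python
# Hedged sketch: circular cross-correlation via FFT; peaks in the output
# indicate candidate locations of the template (weight) pattern.
import numpy as np

def fft_cross_correlation(image, template):
    img = image.astype(float)
    tpl = template.astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)     # template/weight normalization
    pad = np.zeros_like(img)
    pad[:tpl.shape[0], :tpl.shape[1]] = tpl            # zero-pad to image size
    # correlation = IFFT( FFT(image) * conj(FFT(template)) )
    return np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(pad))).real
```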

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

160 2D Spherical Spaces for Face Relighting under Harsh Illumination

Authors: Amr Almaddah, Sadi Vural, Yasushi Mae, Kenichi Ohara, Tatsuo Arai

Abstract:

In this paper, we propose a robust face relighting technique that uses spherical space properties, with the aim of reducing illumination effects on face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients. First, an internal training illumination database is generated by computing face albedo and face normals from 2D images under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap by using pre-generated tiles. In this work, practical real-time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for the training process and takes a single 2D image as input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions, with significant improvements in face recognition rates.
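For reference, the nine real spherical harmonic basis functions up to order two, evaluated on unit surface normals, can be sketched as follows; the constants follow the usual real-SH convention used in relighting work, and the least-squares comment is an illustrative assumption rather than the authors' procedure.

```python
# Hedged sketch: order-2 real spherical harmonic basis on unit normals.
import numpy as np

def sh_basis_9(normals):
    """normals: (N, 3) array of unit normals -> (N, 9) SH basis matrix."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # Y_00
        0.488603 * y,                    # Y_1-1
        0.488603 * z,                    # Y_10
        0.488603 * x,                    # Y_11
        1.092548 * x * y,                # Y_2-2
        1.092548 * y * z,                # Y_2-1
        0.315392 * (3.0 * z**2 - 1.0),   # Y_20
        1.092548 * x * z,                # Y_21
        0.546274 * (x**2 - y**2),        # Y_22
    ], axis=1)

# Illumination coefficients could then be estimated by least squares, e.g.
# coeffs, *_ = np.linalg.lstsq(albedo[:, None] * sh_basis_9(normals), pixels, rcond=None)
```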

Keywords: Face synthesis and recognition, Face illumination recovery, 2D spherical spaces, Vision for graphics.

159 Maximizing Nitrate Absorption of Agricultural Waste Water in a Tubular Microalgae Reactor by Adapting the Illumination Spectrum

Authors: J. Martin, A. Dannenberg, G. Detrell, R. Ewald, S. Fasoulas

Abstract:

Microalgae-based photobioreactors (PBR) for Life Support Systems (LSS) are currently being investigated for future space missions such as a crewed base on planets or moons. Biological components may help reduce resupply masses by closing material mass flows with the help of regenerative components. Via photosynthesis, the microalgae use CO2, water, light and nutrients to provide oxygen and biomass for the astronauts. These capabilities could have synergies with Earth applications that tackle current problems, and the developed technologies can be transferred. For example, a widely discussed current issue is the increasing nitrate and phosphate pollution of ground water from agricultural waste water. To investigate the potential use of a biological system based on the ability of microalgae to extract and use nitrate and phosphate for the treatment of polluted ground water from agricultural applications, a scalable test stand is being developed. This test stand investigates the maximization of nitrate intake rates and quantifies the produced biomass and oxygen. To minimize the energy required for the uptake of nitrate from artificial waste water (AWW), the Flashing Light Effect (FLE) and the adaptation of the illumination spectrum were realized. This paper describes the composition of the AWW, the development of the illumination unit and the possibility of non-invasive process optimization and control via the adaptation of the illumination spectrum and illumination cycles. The findings show a doubling of the energy-related growth rate by adapting the illumination settings.

Keywords: Microalgae, illumination, nitrate uptake, flashing light effect.

158 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks

Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó

Abstract:

One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for precaution. Here we present a new tool that relies on the structural properties of citation networks to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the indicator normalization process is shown for a few indicators such as the citation number, the P-index and a local version of the PageRank indicator. The fat-tailed distribution of the article indicators enables us to perform the indicator normalization process successfully.
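Once each publication has been assigned to a local cluster, the normalization itself can be as simple as dividing an indicator by its cluster average; a hedged sketch (not the paper's procedure for detecting the clusters) is given below. The dictionary interface is an illustrative assumption.

```python
# Hedged sketch: cross-field normalization of a citation indicator by the
# mean value within each publication's cluster.
from collections import defaultdict

def normalize_by_cluster(citations, clusters):
    """citations: {paper_id: count}, clusters: {paper_id: cluster_id}."""
    totals, sizes = defaultdict(float), defaultdict(int)
    for pid, c in citations.items():
        totals[clusters[pid]] += c
        sizes[clusters[pid]] += 1
    means = {k: totals[k] / sizes[k] for k in totals}
    return {pid: (c / means[clusters[pid]] if means[clusters[pid]] > 0 else 0.0)
            for pid, c in citations.items()}
```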

Keywords: Citation networks, scientometric indicator, cross-field normalization, local cluster detection.

157 Normalization and Constrained Optimization of Measures of Fuzzy Entropy

Authors: K.C. Deshmukh, P.G. Khot, Nikhil

Abstract:

In the information theory literature, there is a need to compare the different measures of fuzzy entropy, which in turn gives rise to the need for normalizing measures of fuzzy entropy. In this paper, we discuss this need and develop some normalized measures of fuzzy entropy. It is also desirable to maximize entropy and to minimize directed divergence or distance. With this idea in mind, we explain the method of optimizing different measures of fuzzy entropy.
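As a concrete example, one standard measure (the De Luca-Termini fuzzy entropy) and its normalized form, obtained by dividing by the maximum value n·ln 2, can be sketched as follows; the paper's other measures are normalized analogously.

```python
# Hedged sketch: De Luca-Termini fuzzy entropy and its normalized version.
import numpy as np

def fuzzy_entropy(mu):
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1.0 - 1e-12)
    return -np.sum(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))

def normalized_fuzzy_entropy(mu):
    n = len(mu)
    return fuzzy_entropy(mu) / (n * np.log(2.0))   # lies in [0, 1]

print(normalized_fuzzy_entropy([0.5, 0.5, 0.5]))   # 1.0 (maximal fuzziness)
print(normalized_fuzzy_entropy([0.0, 1.0, 1.0]))   # ~0.0 (crisp set)
```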

Keywords: Fuzzy set, Uncertainty, Fuzzy entropy, Normalization, Membership function

156 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field; however, dynamic backgrounds and illumination variations are the problems that occur most frequently in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporated with the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and to eliminate remaining false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
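A minimal sketch of the LBP texture feature used in the first layer (an 8-neighbour code and a normalized block histogram) is given below; the two-layer codebook model itself is not reproduced, and the function names are illustrative.

```python
# Hedged sketch: basic 8-neighbour LBP code and a per-block histogram.
import numpy as np

def lbp_image(gray):
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # eight neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def block_lbp_histogram(gray_block):
    hist, _ = np.histogram(lbp_image(gray_block), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)                   # normalized histogram
```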

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.

155 Normalizing Logarithms of Realized Volatility in an ARFIMA Model

Authors: G. L. C. Yap

Abstract:

Realized volatility computed from high-frequency returns is popular, as it is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies the parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms the existing model in terms of statistical and economic evaluations.

Keywords: Long-memory, Gaussian process, Whittle estimator, normalization, volatility, value-at-risk.

154 A Robust Image Watermarking Scheme using Image Moment Normalization

Authors: Latha Parameswaran, K. Anbumani

Abstract:

Multimedia security is a significant area of concern. A number of papers on robust digital watermarking have been presented, but no standards have been defined so far, so multimedia security remains an open problem. The aim of this paper is to design a robust image-watermarking scheme that can withstand a diverse set of attacks. The proposed scheme provides a robust solution integrating image moment normalization, a content-dependent watermark and the discrete wavelet transform. Moment normalization makes it possible to recover the watermark even under geometric attacks. Content-dependent watermarks are a powerful means of authentication, as the data is watermarked with its own features, and discrete wavelet transforms describe image features well. The proposed scheme finds its place in validating identification cards and financial instruments.
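The sketch below illustrates only the translation and scale part of image moment normalization (centring on the centroid and approximately rescaling to a fixed zeroth-order moment); the target mass, interpolation choices and function names are illustrative assumptions, and shear/rotation normalization used in full schemes is omitted.

```python
# Hedged sketch: translation/scale moment normalization of a grayscale image.
import numpy as np
from scipy.ndimage import shift, zoom

def moment_normalize(image, target_mass=1.0e6):
    img = image.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00
    # translate the centroid to the geometric centre of the frame
    centred = shift(img, ((img.shape[0] - 1) / 2 - cy, (img.shape[1] - 1) / 2 - cx))
    # rescale so the total mass is approximately the target value
    alpha = np.sqrt(target_mass / m00)
    return zoom(centred, alpha)
```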

Keywords: Watermarking, moments, wavelets, content-based, benchmarking.

153 Illumination Invariant Face Recognition using Supervised and Unsupervised Learning Algorithms

Authors: Shashank N. Mathur, Anil K. Ahlawat, Virendra P. Vishwakarma

Abstract:

In this paper, a comparative study of supervised and unsupervised learning algorithms applied to illumination-invariant face recognition is carried out. Supervised learning is implemented with a bi-layered artificial neural network having one input, two hidden and one output layer, trained with the gradient descent with momentum and adaptive learning rate back-propagation algorithm; both the inputs and the corresponding outputs are provided during training, so there is an inherent clustering and optimized learning of weights which provides efficient results. Unsupervised learning is implemented with a modified counterpropagation network, which involves clustering followed by application of the outstar rule to obtain the recognized face. The face recognition system is developed for recognizing faces with varying illumination intensities, where the database images vary in lighting with respect to the angle of illumination in the horizontal and vertical planes. The supervised and unsupervised learning algorithms have been implemented and tested exhaustively, with and without histogram equalization, to obtain efficient results.

Keywords: Artificial Neural Networks, back propagation, Counterpropagation networks, face recognition, learning algorithms.

152 Using Automated Database Reverse Engineering for Database Integration

Authors: M. R. Abbasifard, M. Rahgozar, A. Bayati, P. Pournemati

Abstract:

One important problem in today's organizations is the existence of non-integrated information systems, inconsistency and a lack of suitable correlation between legacy and modern systems. One main solution is to transfer the local databases into a global one. In this regard, we need to extract the data structures from the legacy systems and integrate them with the new technology systems. In legacy systems, huge amounts of data are stored in legacy databases. These require particular attention, since they need more effort to be normalized, reformatted and moved to modern database environments. Designing the new integrated (global) database architecture and applying reverse engineering requires data normalization. This paper proposes the use of database reverse engineering to integrate legacy and modern databases in organizations. The suggested approach consists of methods and techniques for generating the data transformation rules needed for data structure normalization.

Keywords: Reverse Engineering, Database Integration, System Integration, Data Structure Normalization

151 Cartoon Effect and Ambient Illumination Based Depth Perception Assessment of 3D Video

Authors: G. Nur

Abstract:

Monitored 3-Dimensional (3D) video experience can be utilized as "feedback information" to fine-tune the service parameters and provide a better service to demanding 3D service customers. The 3D video experience, which includes both video quality and depth perception, is influenced by several contextual and content-related factors (e.g., ambient illumination conditions, content characteristics, etc.) due to the complex nature of 3D video. Therefore, the relevant factors should be taken into account when assessing this experience. In this paper, structural information of the depth map sequences of the 3D video is considered as a content-related factor affecting depth perception assessment. A cartoon-like filter is utilized to abstract the significant depth levels in the depth map sequences and determine the structural information. Moreover, subjective experiments are conducted using 3D videos associated with cartoon-like depth map sequences to investigate the effectiveness of the ambient illumination condition, a contextual factor, on depth perception. Using the knowledge gained through this study, 3D video experience metrics can be developed to deliver a better service to 3D video service users.

Keywords: 3D Video, Ambient Illumination, Cartoon Effect, Depth Perception.

150 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

Authors: C. Ardil

Abstract:

This paper presents an original application of multiple criteria decision making analysis theory to the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in aircraft selection. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be treated as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering multiple criteria utility theory and maximal regret minimization methods, as well as aircraft technical, economic, and environmental aspects. Multiple criteria decision making analysis uses different normalization techniques to allow criteria with qualitative and quantitative data to be aggregated, so selecting a suitable normalization technique for the model is itself a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were used to evaluate the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process by enabling the decision maker to: (i) use higher-level knowledge regarding the selection of criteria weights and the proposed technique, (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft was considered to illustrate the proposed approach. The results of the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to changes in criteria weights was also performed to assess the sensitivity of the alternative rankings obtained with the different data normalization techniques and the proposed approach.
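The four normalization techniques named above can be sketched as follows for a decision matrix of benefit criteria (rows are alternatives, columns are criteria); cost criteria would be inverted first, and the example matrix is hypothetical.

```python
# Hedged sketch: vector, linear (sum), linear (max), and linear (max-min)
# normalization of a decision matrix X (alternatives x criteria).
import numpy as np

def vector_norm(X):
    return X / np.sqrt((X ** 2).sum(axis=0))       # divide by column Euclidean norm

def linear_sum_norm(X):
    return X / X.sum(axis=0)                        # divide by column sum

def linear_max_norm(X):
    return X / X.max(axis=0)                        # divide by column maximum

def linear_max_min_norm(X):
    cmin, cmax = X.min(axis=0), X.max(axis=0)
    return (X - cmin) / (cmax - cmin)               # classic max-min scaling

X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0,  8.0],
              [300.0, 32.0, 16.0]])
for f in (vector_norm, linear_sum_norm, linear_max_norm, linear_max_min_norm):
    print(f.__name__, f(X).round(3), sep="\n")
```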

Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

149 Enhanced Approaches to Rectify the Noise, Illumination and Shadow Artifacts

Authors: M. Sankari, C. Meena

Abstract:

Enhancing the quality of two-dimensional signals is one of the most important tasks in the fields of video surveillance and computer vision. In real-life video surveillance, false detections usually occur due to the presence of random noise, illumination and shadow artifacts, and detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations in the detected foreground objects are corrected using a combination of techniques such as homomorphic decomposition, the curvelet transform and a gamma adjustment operator. Shadow is removed using a chromaticity estimator with a local relation estimator. Results are compared with existing methods and show high robustness in video surveillance.

Keywords: Chromaticity Estimator, Curvelet Transformation, Denoising, Gamma correction, Homomorphic, Neighborhood Assessment.

148 Face Texture Reconstruction for Illumination Variant Face Recognition

Authors: Pengfei Xiong, Lei Huang, Changping Liu

Abstract:

In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial detail, because the lighting template is discarded. To improve on this, a novel approach for realistic facial texture reconstruction that combines the original image and the albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. According to the projections in the texture subspaces, facial texture under normal lighting can be synthesized. Because the original image is included in the combination, facial details can be preserved along with the face albedo. In addition, image partitioning is applied to improve the synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.

Keywords: texture reconstruction, illumination, face recognition, subspaces

147 A Hybrid Method for Eyes Detection in Facial Images

Authors: Muhammad Shafi, Paul W. H. Chung

Abstract:

This paper proposes a hybrid method for eye localization in facial images. The novelty lies in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, high edge density and low illumination compared to other parts of the face. The first step of the method is to extract connected regions from facial images using colour, edge density and illumination cues separately. Some of the regions are then removed by applying rules based on the general geometry and shape of eyes. The remaining connected regions obtained through the three cues are combined in a systematic way to enhance the identification of candidate eye regions. The geometry and shape based rules are then applied again to remove further false eye regions. The proposed method was tested using images from the PICS facial image database and achieved 93.7% and 87% accuracy for initial blob extraction and final eye detection, respectively.

Keywords: Erosion, dilation, Edge-density

146 A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT

Authors: Say Wei Foo, Qi Dong

Abstract:

Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme which encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks, and SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction mainly involves the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
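A partial sketch of the pipeline up to the SVD-DCT blocks is shown below (block-wise SVD, concatenation of first singular values, one DCT per SV block); the watermark embedding rule, frequency mask and extraction are omitted, and the block sizes and function names are illustrative.

```python
# Hedged sketch: first singular values of image blocks and DCT of SV blocks.
import numpy as np
from scipy.fftpack import dct

def first_singular_values(image, block=8):
    h, w = (image.shape[0] // block) * block, (image.shape[1] // block) * block
    img = image[:h, :w].astype(float)
    svs = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            svs.append(np.linalg.svd(img[r:r + block, c:c + block], compute_uv=False)[0])
    return np.array(svs)

def svd_dct_coefficients(image, block=8, sv_block=64):
    sv = first_singular_values(image, block)
    sv = sv[:(len(sv) // sv_block) * sv_block].reshape(-1, sv_block)
    return dct(sv, norm="ortho", axis=1)   # one DCT per SV block
```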

Keywords: Image watermarking, Image normalization, Singularvalue decomposition, Discrete cosine transform, Robustness.

145 A Weighted Approach to Unconstrained Iris Recognition

Authors: Yao-Hong Tsai

Abstract:

This paper presents a weighted approach to unconstrained iris recognition. Nowadays, commercial systems are usually characterized by strong acquisition constraints based on the subject's cooperation; however, such constraints are not always achievable in real scenarios in daily life. Researchers have therefore focused on reducing these constraints while maintaining system performance through new techniques. With large variations in the environment, there are two main improvements in the proposed iris recognition system. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features; detection of the iris image is based on the Adaboost algorithm. Secondly, a weighting scheme is designed using Gaussian functions according to the distance to the center of the iris, and a local binary pattern (LBP) histogram is then applied to texture classification with these weights. Experiments showed that the proposed system provides users with a more flexible and feasible way to interact with the verification system through iris recognition.

Keywords: Authentication, iris recognition, Adaboost, local binary pattern.

144 Player Number Localization and Recognition in Soccer Video using HSV Color Space and Internal Contours

Authors: Matko Šaric, Hrvoje Dujmic, Vladan Papic, Nikola Rožic

Abstract:

Detection of player identity is a challenging task in sports video content analysis. In the case of soccer video, player number recognition is an effective and precise solution. Jersey numbers can be considered as scene text, and difficulties in localization and recognition appear due to variations in orientation, size, illumination, motion, etc. This paper proposes a new method for player number localization and recognition. By observing the hue, saturation and value of 50 different jersey examples, we noticed that a combination of low- and high-saturated pixels is most often what separates the number and jersey regions. An image segmentation method based on this observation is introduced. Then, a novel method for player number localization based on internal contours is proposed. False number candidates are filtered using area and aspect ratio. Before OCR processing, the extracted numbers are enhanced using image smoothing and rotation normalization.

Keywords: player number, soccer video, HSV color space

143 A Simple Adaptive Algorithm for Norm-Constrained Optimization

Authors: Hyun-Chool Shin

Abstract:

In this paper we propose a simple adaptive algorithm that iteratively solves the unit-norm constrained optimization problem. Instead of the conventional normalization based on the parameter norm, the proposed algorithm incorporates a scalar normalization which is computationally much simpler. An analysis of the stationary points is presented to show that the proposed algorithm indeed solves the constrained optimization problem. The simulation results illustrate that the proposed algorithm performs as well as conventional ones while being computationally simpler.
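The sketch below shows the general idea of a unit-norm constrained adaptive update: a gradient step followed by renormalization of the weight vector. It is illustrative of the constrained-optimization setting and not the authors' scalar-normalization rule; the step size, dimensions and target vector are assumptions.

```python
# Hedged sketch: LMS-style update followed by projection onto the unit sphere.
import numpy as np

def constrained_lms_step(w, x, d, mu=0.01):
    """One update of w under ||w|| = 1 for desired response d and input x."""
    e = d - w @ x                    # estimation error
    w = w + mu * e * x               # ordinary LMS gradient step
    return w / np.linalg.norm(w)     # enforce the unit-norm constraint

rng = np.random.default_rng(0)
w = rng.normal(size=4); w /= np.linalg.norm(w)
target = np.array([0.5, 0.5, 0.5, 0.5])          # unit-norm target weights
for _ in range(2000):
    x = rng.normal(size=4)
    w = constrained_lms_step(w, x, target @ x)
print(np.round(w, 3))                             # converges toward the target
```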

Keywords: constrained optimization, unit-norm, LMS, principal component analysis.
