Search results for: image decomposition.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1815


1575 Image Authenticity and Perceptual Optimization via Genetic Algorithm and a Dependence Neighborhood

Authors: Imran Usman, Asifullah Khan, Rafiullah Chamlawi, Abdul Majid

Abstract:

Information hiding for authenticating and verifying the content integrity of multimedia has been exploited extensively in the last decade. We propose the idea of using a genetic algorithm and a non-deterministic dependence neighborhood that involves the un-watermarkable coefficients for digital image authentication. A genetic algorithm is used to intelligently select coefficients for watermarking in a DCT-based image authentication scheme, which also implicitly watermarks all the un-watermarkable coefficients in order to thwart different attacks. Experimental results show that such intelligent selection improves the imperceptibility of the watermarked image, and that implicit watermarking of all the coefficients improves security against attacks such as cover-up, vector quantization and transplantation.

Keywords: Digital watermarking, fragile watermarking, genetic algorithm, image authentication.
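
The abstract does not give the embedding or the fitness function in detail; the sketch below is a toy illustration only, assuming a parity-quantization embedding in selected 8x8 DCT coefficients and PSNR as the imperceptibility fitness maximized by a minimal mutation-only GA. The dependence neighborhood and the implicit watermarking of the un-watermarkable coefficients described by the authors are omitted.

```python
import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.default_rng(0)

def embed(block, indices, bits, step=16.0):
    """Toy embedding: force the parity of the quantized selected DCT coefficients to the bits."""
    c = dct(dct(block.T, norm="ortho").T, norm="ortho")   # 2-D DCT of the 8x8 block
    flat = c.ravel()
    for k, b in zip(indices, bits):
        q = np.floor(flat[k] / step)
        if int(q) % 2 != int(b):          # move one quantization level to match the bit parity
            q += 1
        flat[k] = q * step
    return idct(idct(flat.reshape(8, 8).T, norm="ortho").T, norm="ortho")

def imperceptibility(block, indices, bits):
    """Fitness of a coefficient selection: PSNR of the watermarked block."""
    mse = np.mean((block - embed(block, indices, bits)) ** 2) + 1e-12
    return 10 * np.log10(255.0 ** 2 / mse)

block = rng.integers(0, 256, (8, 8)).astype(float)  # stand-in image block
bits = rng.integers(0, 2, 4)                        # stand-in watermark bits

# Minimal mutation-only GA over which 4 of the 64 DCT coefficients to watermark.
population = [rng.choice(64, size=4, replace=False) for _ in range(20)]
for _ in range(50):
    population.sort(key=lambda ind: imperceptibility(block, ind, bits), reverse=True)
    parents = population[:10]
    children = []
    for p in parents:
        child = p.copy()
        pos = rng.integers(len(child))
        child[pos] = rng.choice(np.setdiff1d(np.arange(64), child))  # mutate one selected index
        children.append(child)
    population = parents + children
best = max(population, key=lambda ind: imperceptibility(block, ind, bits))
print("selected coefficients:", np.sort(best))
```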

1574 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible ways to obtain image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry, so they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies like rotation and scaling are bound to happen. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce missing regions in the vista. To correct this error, that is, to fill the missing regions, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method of missing view regeneration generates the missing view of the vista using the information present in the vista itself.

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.
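
The inpainting step used for missing view regeneration is not specified beyond the abstract; as a rough stand-in, the sketch below fills a missing region of a stitched vista with OpenCV's generic inpainting (the mosaic and mask here are synthetic placeholders, not the authors' data).

```python
import cv2
import numpy as np

# Placeholder mosaic and missing-view mask (non-zero = missing); in practice
# these come from the rotation-invariant stitching stage.
vista = np.full((200, 300, 3), 128, dtype=np.uint8)
cv2.circle(vista, (150, 100), 60, (90, 160, 220), -1)      # some image content
missing_mask = np.zeros((200, 300), dtype=np.uint8)
missing_mask[80:120, 140:180] = 255                         # region lost after rotation/cropping

# Fill the missing region from the surrounding vista content.
filled = cv2.inpaint(vista, missing_mask, 3, cv2.INPAINT_TELEA)
```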

1573 Fast Depth Estimation with Filters

Authors: Yiming Nie, Tao Wu, Xiangjing An, Hangen He

Abstract:

Fast depth estimation from binocular vision is often desired for autonomous vehicles, but most algorithms cannot easily be put into practice because of their high computational cost. We present an image-processing technique that can quickly estimate a depth image from binocular vision images. The depth is estimated by finding the lines that represent the best-matched areas in the disparity space image. When detecting these lines, an edge-emphasizing filter is used. The final depth estimate is produced after a smoothing filter. Our method is a compromise between local methods and global optimization.

Keywords: Depth estimation, image filters, stereo match.
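
As a hedged illustration of fast disparity/depth estimation from a rectified stereo pair, the sketch below uses plain OpenCV block matching followed by a smoothing filter; it is not the authors' method of fitting lines in the disparity space image after an edge-emphasizing filter, and the input pair is synthetic.

```python
import cv2
import numpy as np

# Placeholder rectified stereo pair; the right view is the left view shifted,
# which yields a roughly constant disparity.
left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.roll(left, -8, axis=1)

# Plain block matching over the disparity space image; the paper instead fits
# lines in the DSI after an edge-emphasizing filter.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Smooth the raw disparity map, loosely mirroring the final smoothing filter.
disparity = cv2.medianBlur(disparity, 5)
```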

1572 Decomposition Method for Neural Multiclass Classification Problem

Authors: H. El Ayech, A. Trabelsi

Abstract:

In this article we discuss the improvement of the multiclass classification problem using a multilayer perceptron. The considered approach consists in breaking down the n-class problem into two-class subproblems. Each two-class subproblem is trained independently; in the test phase, the vector to be classified is confronted with all the two-class models, and the elected class is the strongest one, i.e. the one that does not lose any competition against the other classes. Recognition rates obtained with the multiclass approach by two-class decomposition are clearly better than those obtained by the simple multiclass approach.

Keywords: Artificial neural network, letter recognition, multiclass classification, multilayer perceptron.
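
A minimal sketch of the described two-class decomposition, assuming scikit-learn's one-vs-one wrapper around a multilayer perceptron and a stand-in dataset in place of the letter-recognition data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from sklearn.neural_network import MLPClassifier

# Any labelled multiclass dataset stands in for the letter-recognition task here.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One two-class MLP per pair of classes; the winning class is the one that
# loses no pairwise "competition" (majority of pairwise votes).
clf = OneVsOneClassifier(MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
clf.fit(X_train, y_train)
print("pairwise-decomposition accuracy:", clf.score(X_test, y_test))
```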

1571 A Stereo Image Processing System for Visually Impaired

Authors: G. Balakrishnan, G. Sainarayanan, R. Nagarajan, Sazali Yaacob

Abstract:

This paper presents a review of vision-aided systems and proposes an approach for visual rehabilitation using stereo vision technology. The proposed system utilizes stereo vision, image processing methodology and a sonification procedure to support blind navigation. The developed system includes a wearable computer, stereo cameras as the vision sensor and stereo earphones, all moulded into a helmet. The image of the scene in front of the visually handicapped person is captured by the vision sensors. The captured images are processed to enhance the important features in the scene for navigation assistance. The image processing is designed as a model of human vision by identifying the obstacles and their depth information. The processed image is mapped onto musical stereo sound for the blind user's understanding of the scene in front. The developed method has been tested in indoor and outdoor environments, and the proposed image processing methodology is found to be effective for object identification.

Keywords: Blind navigation, stereo vision, image processing, object preference, music tones.

1570 Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization

Authors: Wan-Hyun Cho, In-Seop Na, Seong-Chae Seo, Sang-Kyoon Kim, Soon-Young Park

Abstract:

In this paper, we propose a variational approach to solve the single image defogging problem. In the inference process of the atmospheric veil, we define a new functional for the atmospheric veil that satisfies the edge-preserving regularization property. By using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil that characterizes the extrema of the given functional. This equation can be solved by using a gradient descent method and a time parameter. We then obtain the estimated atmospheric veil and conduct the image restoration using the inferred veil. Finally, we improve the contrast of the restored image by various histogram equalization methods. The experimental results show that the proposed method achieves rather good defogging results.

Keywords: Image defogging, Image restoration, Atmospheric veil, Transmission, Variational approach, Euler-Lagrange equation, Image enhancement.
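
The functional itself is not reproduced in the abstract; a generic edge-preserving form consistent with the description, with v the atmospheric veil, w its initial estimate, φ an edge-preserving penalty and λ a weight (all notation assumed here), is
$$E(v)=\int_\Omega \Big[\big(v(x)-w(x)\big)^2+\lambda\,\phi\big(|\nabla v(x)|\big)\Big]\,dx,$$
whose Euler-Lagrange equation
$$2\,(v-w)-\lambda\,\operatorname{div}\!\left(\frac{\phi'(|\nabla v|)}{|\nabla v|}\,\nabla v\right)=0$$
can be driven to a solution by gradient descent over an artificial time parameter, $v^{t+1}=v^{t}-\Delta t\,\Big[2(v^{t}-w)-\lambda\,\operatorname{div}\!\big(\tfrac{\phi'(|\nabla v^{t}|)}{|\nabla v^{t}|}\nabla v^{t}\big)\Big]$.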

1569 Subjective Assessment about Super Resolution Image Resolution

Authors: Seiichi Gohshi, Hiroyuki Sekiguchi, Yoshiyasu Shimizu, Takeshi Ikenaga

Abstract:

Super resolution (SR) technologies are now being applied to video to improve resolution, and some TV sets are now equipped with SR functions. However, it is not known whether super resolution image reconstruction (SRR) for TV really works or not. Super resolution with non-linear signal processing (SRNL) has recently been proposed. SRR and SRNL are the only methods for processing video signals in real time. The results from subjective assessments of SRR and SRNL are described in this paper. SRR video was produced in simulations with quarter-precision motion vectors and 100 iterations, which are ideal conditions for SRR. We found that the image quality of SRNL is better than that of SRR even though SRR was processed under ideal conditions.

Keywords: Super Resolution Image Reconstruction, Super Resolution with Non-Linear Signal Processing, Subjective Assessment, Image Quality

1568 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level 4 decomposition for both the approximation and the horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level 4 Haar wavelet channel were found to be useful predictors for delineating the normal and the abnormal groups.

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.
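
A minimal sketch of the level-4 Haar decomposition and a few of the derived statistics, assuming Python with PyWavelets and a placeholder array in place of the segmented primary-compressive region:

```python
import numpy as np
import pywt

# 'roi' stands in for the digitized primary-compressive trabecular region (2-D grayscale).
roi = np.random.rand(256, 256)

# Level-4 2-D Haar decomposition; coeffs = [cA4, (cH4, cV4, cD4), ..., (cH1, cV1, cD1)].
coeffs = pywt.wavedec2(roi, "haar", level=4)
approx4 = coeffs[0]       # level-4 approximation coefficients
horiz4 = coeffs[1][0]     # level-4 horizontal detail coefficients

def stats(c):
    c = c.ravel()
    return {
        "mean": np.mean(c),
        "median": np.median(c),
        "std": np.std(c),
        "mad_mean": np.mean(np.abs(c - np.mean(c))),        # mean absolute deviation
        "mad_median": np.median(np.abs(c - np.median(c))),  # median absolute deviation
    }

print(stats(approx4))
print(stats(horiz4))
```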

1567 Nature Inspired Metaheuristic Algorithms for Multilevel Thresholding Image Segmentation - A Survey

Authors: C. Deepika, J. Nithya

Abstract:

Segmentation is one of the essential tasks in image processing, and thresholding is one of the simplest techniques for performing it. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values. Various techniques have been proposed to achieve multilevel thresholding. A study of some nature-inspired metaheuristic algorithms for multilevel thresholding in image segmentation is conducted. Here, we study the particle swarm optimization (PSO) algorithm, artificial bee colony optimization (ABC), the ant colony optimization (ACO) algorithm and the cuckoo search (CS) algorithm.

Keywords: Ant colony optimization, Artificial bee colony optimization, Cuckoo search algorithm, Image segmentation, Multilevel thresholding, Particle swarm optimization.
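
The fitness that these metaheuristics typically optimize is not spelled out in the abstract; a common choice is Otsu's between-class variance generalized to several thresholds, sketched below (the metaheuristic search itself is left out; PSO, ABC, ACO or CS would simply maximize this function over candidate threshold vectors).

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective for a set of thresholds applied to a grey-level histogram."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    bounds = [0] + sorted(thresholds) + [len(p)]
    total_mean = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2   # weighted spread of class means
    return var

# Example: score a candidate pair of thresholds on a random 8-bit image.
img = np.random.randint(0, 256, (64, 64))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(between_class_variance(hist, [85, 170]))
```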

1566 Spreading Japan's National Image through China during the Era of Mass Tourism: The Japan National Tourism Organization’s Use of Sina Weibo

Authors: Abigail Qian Zhou

Abstract:

Since China has entered an era of mass tourism, there has been a fundamental change in the way Chinese people approach and perceive the image of other countries. With the advent of the new media era, social networking sites such as Sina Weibo have become a tool for many foreign governmental organizations to spread and promote their national image. Among them, the Japan National Tourism Organization (JNTO) was one of the first foreign official tourism agencies to register with Sina Weibo and actively implement communication activities. Due to historical and political reasons, cognition of Japan's national image by the Chinese has always been complicated and contradictory. However, since 2015, China has become the largest source of tourists visiting Japan. This clearly indicates that the broadening of Japan's national image in China has been effective and has value worthy of reference in promoting a positive Chinese perception of Japan and encouraging Japanese tourism. Within this context and using the method of content analysis in media studies through content mining software, this study analyzed how JNTO’s Sina Weibo accounts have constructed and spread Japan's national image. This study also summarized the characteristics of its content and form, and finally revealed the strategy of JNTO in building its international image. The findings of this study not only add a tourism-based perspective to traditional national image communications research, but also provide some reference for the effective international dissemination of national image in the future.

Keywords: National image, tourism, international communication, Japan, China.

1565 Parallel Double Splicing on Iso-Arrays

Authors: V. Masilamani, D.K. Sheena Christy, D.G. Thomas

Abstract:

Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images, and to study the generating power of the system we define the class of languages it generates. In this paper we refer to an image as an array. We use a primitive called an iso-array to synthesize an image/array. The operation is double splicing on iso-arrays. The double splicing operation is used in DNA computing, and we use it here to synthesize images. A comparison of the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays with the existing family of local iso-picture languages is made. Certain closure properties such as union, concatenation and rotation are studied for the family of languages generated by the proposed model.

Keywords: DNA computing, splicing system, iso-picture languages, iso-array double splicing system, iso-array self splicing.

1564 Gaussian Density and HOG with Content Based Image Retrieval System – A New Approach

Authors: N. Shanmugapriya, R. Nallusamy

Abstract:

Content-based image retrieval (CBIR) uses the visual contents of images to characterize and retrieve them. This paper focuses on retrieving images by separating each image into its three color channels R, G and B, to which the Discrete Wavelet Transform is applied. A wavelet-based Generalized Gaussian Density (GGD) model is then used to model the coefficients of the wavelet transform. The result is then passed to the Histogram of Oriented Gradients (HOG) to extract the characteristic vectors, and a relevance feedback technique is used. The performance of this approach is measured by precision, and the results confirm that the method is well suited for image retrieval.

Keywords: Content-Based Image Retrieval (CBIR), Relevant Feedback, Histogram of Oriented Gradient (HOG), Generalized Gaussian Density (GGD).
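
As a hedged illustration of the HOG stage only (the wavelet/GGD modeling and relevance feedback are omitted), assuming scikit-image and illustrative cell/block sizes:

```python
import numpy as np
from skimage.feature import hog

# One color channel (e.g. the R plane) after preprocessing; placeholder data.
channel = np.random.rand(128, 128)

# HOG descriptor of the channel; the cell/block sizes are illustrative only.
features = hog(channel, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```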

1563 Optimal Placement of Piezoelectric Actuators on Plate Structures for Active Vibration Control Using Modified Control Matrix and Singular Value Decomposition Approach

Authors: Deepak Chhabra, Gian Bhushan, Pankaj Chandna

Abstract:

The present work deals with the optimal placement of piezoelectric actuators on a thin plate using a Modified Control Matrix and Singular Value Decomposition (MCSVD) approach. The problem has been formulated with the finite element method using ten piezoelectric actuators on a simply supported plate to suppress the first six modes. The sizes of the ten actuators are combined into a single actuator by adding the ten columns of the control matrix to form a column matrix. The singular value of this column control matrix is taken as the fitness function, and the optimal positions of the actuators are obtained by maximizing it with a GA. Vibration suppression has been studied for the simply supported plate with piezoelectric patches in the optimal positions using a Linear Quadratic Regulator (LQR) scheme. It is observed that the MCSVD approach places the patches adjacent to each other, symmetric about the centre axis, and gives greater vibration suppression than previously published results based on SVD.

Keywords: Closed loop Average dB gain, Genetic Algorithm (GA), LQR Controller, MCSVD, Optimal positions, Singular Value Decomposition (SVD) Approaches.
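
A minimal sketch of the fitness described above, i.e. summing the actuator columns of the control matrix into one column and taking its singular value, with a random placeholder control matrix:

```python
import numpy as np

def mcsvd_fitness(control_matrix):
    """Fitness as described in the abstract: sum the actuator columns into a
    single column matrix and take its (only) singular value."""
    column = control_matrix.sum(axis=1, keepdims=True)   # combine the actuator columns
    return np.linalg.svd(column, compute_uv=False)[0]

# Hypothetical control matrix for 6 modes and 10 candidate actuator positions.
B = np.random.rand(6, 10)
print(mcsvd_fitness(B))   # a GA would move the actuators to maximize this value
```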

1562 A Review on Image Segmentation Techniques and Performance Measures

Authors: David Libouga Li Gwet, Marius Otesteanu, Ideal Oscar Libouga, Laurent Bitjoka, Gheorghe D. Popa

Abstract:

Image segmentation is a method to extract regions of interest from an image. It remains a fundamental problem in computer vision. The increasing diversity and complexity of segmentation algorithms have led us, firstly, to review and classify segmentation techniques, secondly, to identify the most used measures of segmentation performance, and thirdly, to discuss segmentation philosophy in depth in order to help the choice of adequate segmentation techniques for given applications. To justify the relevance of our analysis, recent segmentation algorithms are presented through the proposed classification.

Keywords: Classification, image segmentation, measures of performance.

1561 Branding Urban Spaces as an Approach for City Branding - Case Study: Cairo City, Egypt

Authors: Mohammad R. M. Abdelaal, Reeman M. R. Hussein

Abstract:

With the beginning of the new century, people still face many challenges in shaping and developing their urban environment. To meet these challenges, many cities have tried to develop their visual image by transforming their urban environment into a branded visual image at the level of squares, main roads, borders, and landmarks. In this realm, the paper aims at activating the role of branded urban spaces as an approach for developing the visual image of cities, especially in Egypt. It concludes with the need to recognize the importance of developing the visual image in Egypt by directing urban planners to the important role of such spaces in achieving sustainability.

Keywords: Urban branded spaces, brand image, sustainable development, Cairo.

1560 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash.A.M, K.S.Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is modeled in MATLAB, and the VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
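
A small software analogue of the row-column 2D DCT/IDCT described above (two 1-D transforms plus a transposition), assuming SciPy and an arbitrary 8x8 block; it illustrates the transform only, not the shared-computation hardware datapath:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # Row-column decomposition: 1-D DCT, transpose, 1-D DCT again, mirroring
    # the two 1-D DCT blocks plus transposition memory in hardware.
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def idct2(block):
    return idct(idct(block.T, norm="ortho").T, norm="ortho")

# Hypothetical 8x8 pixel block.
block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)
reconstructed = idct2(coeffs)
print(np.allclose(block, reconstructed))  # True: IDCT recovers the block from its DCT
```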

1559 Medical Image Registration by Minimizing Divergence Measure Based on Tsallis Entropy

Authors: Shaoyan Sun, Liwei Zhang, Chonghui Guo

Abstract:

As the use of registration packages spreads, the number of the aligned image pairs in image databases (either by manual or automatic methods) increases dramatically. These image pairs can serve as a set of training data. Correspondingly, the images that are to be registered serve as testing data. In this paper, a novel medical image registration method is proposed which is based on the a priori knowledge of the expected joint intensity distribution estimated from pre-aligned training images. The goal of the registration is to find the optimal transformation such that the distance between the observed joint intensity distribution obtained from the testing image pair and the expected joint intensity distribution obtained from the corresponding training image pair is minimized. The distance is measured using the divergence measure based on Tsallis entropy. Experimental results show that, compared with the widely-used Shannon mutual information as well as Tsallis mutual information, the proposed method is computationally more efficient without sacrificing registration accuracy.

Keywords: Multimodality images, image registration, Shannon entropy, Tsallis entropy, mutual information, Powell optimization.
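
For reference, the Tsallis entropy of a discrete distribution p with entropic index q is $S_q(p)=\frac{1}{q-1}\big(1-\sum_i p_i^{\,q}\big)$, which recovers the Shannon entropy as $q \to 1$; the registration objective above minimizes a divergence of this family between the observed and the expected joint intensity distributions (the exact divergence form used by the authors is not given in the abstract).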

1558 Content-Based Image Retrieval Using HSV Color Space Features

Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari

Abstract:

In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database using the visual content of a query image in order to retrieve similar images. With the aim of simulating the sensitivity of the human visual system to an image's edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in the HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases, Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.

Keywords: Content-based image retrieval, color difference histogram, efficient features selection, entropy, correlation.
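
As a hedged illustration of the HSV color-histogram feature only (the color difference histogram and feature selection steps are omitted), assuming OpenCV and illustrative bin counts:

```python
import cv2
import numpy as np

# Placeholder BGR query image; in practice this is an image from Corel/UKBench.
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Quantized H-S-V histogram used as the color-histogram feature; the 8x4x4
# bin counts are illustrative, not the authors' settings.
hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 4, 4], [0, 180, 0, 256, 0, 256])
hist = cv2.normalize(hist, hist).flatten()
```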

1557 A Robust Data Hiding Technique based on LSB Matching

Authors: Emad T. Khalaf, Norrozila Sulaiman

Abstract:

Many researchers are working on information hiding techniques, using different ideas and areas to hide their secret data. This paper introduces a robust technique for hiding secret data in an image based on LSB insertion and the RSA encryption technique. The key idea of the proposed technique is to encrypt the secret data. The encrypted data is then converted into a bit stream and divided into a number of segments, and the cover image is divided into the same number of segments. Each segment of data is compared with each segment of the image to find the best matching segment, in order to create a new random sequence of segments to be inserted into the cover image. Experimental results show that the proposed technique has a high security level and produces better stego-image quality.

Keywords: Steganography, LSB matching, RSA encryption, data segments.
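
A minimal sketch of plain sequential LSB insertion into a grayscale cover, assuming NumPy; the RSA encryption and the best-matching segment reordering described above are omitted.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write a bit stream into the least significant bits of a grayscale cover
    image (plain sequential LSB insertion, for illustration only)."""
    flat = cover.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
bits = np.random.randint(0, 2, 100)                           # stand-in encrypted bit stream
stego = embed_lsb(cover, bits)
```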

1556 A Study of Gaps in CBMIR Using Different Methods and Prospective

Authors: Pradeep Singh, Sukhwinder Singh, Gurjinder Kaur

Abstract:

In recent years, rapid advances in software and hardware in the field of information technology, along with a digital imaging revolution in the medical domain, have facilitated the generation and storage of large collections of images by hospitals and clinics. Searching these large image collections effectively and efficiently poses significant technical challenges, and it raises the necessity of constructing intelligent retrieval systems. Content-based image retrieval (CBIR) consists of retrieving the most visually similar images to a given query image from a database of images [5]. Medical CBIR applications pose unique challenges but at the same time offer many new opportunities; for example, while one can easily understand news or sports videos, a medical image is often completely incomprehensible to untrained eyes.

Keywords: Classification, clustering, content-based image retrieval (CBIR), relevance feedback (RF), statistical similarity matching, support vector machine (SVM).

1555 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because of the diffraction-limited system, the pixel values can change slowly near the edges of the image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image block is checked before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed by using a dictionary-learning algorithm, and the final segmentation results are obtained, which are very close to the gradient target in the original image. Both the experimental results and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.

Keywords: Gradient image, segmentation and extract, mean-shift algorithm, dictionary learning.
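
A minimal sketch of the adaptive-bandwidth Mean-Shift stage, assuming scikit-learn and simple per-pixel (position, intensity) features in place of the authors' electromagnetic gradient images; the merging, overlap check and dictionary-learning refinement are omitted.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# 'img' stands in for the electromagnetic gradient image (grayscale array).
img = np.random.rand(64, 64)

# Cluster pixels on (row, col, intensity) with an adaptively estimated bandwidth,
# giving preliminary regions that would then be merged and refined.
rows, cols = np.indices(img.shape)
features = np.column_stack([rows.ravel(), cols.ravel(), 100 * img.ravel()])
bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=500)
labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
segments = labels.reshape(img.shape)
```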

1554 Building Facade Study in Lahijan City, Iran: The Impact of Facade's Visual Elements on Historical Image

Authors: N. Utaberta, A. Jalali, S. Johar, M. Surat, A. I. Che-Ani

Abstract:

Buildings are a significant part of cities and play a main role in the organization and arrangement of the city's appearance, which in turn affects its image. Building facades, as a connective element between inner and outer space, have a main role in the city image, and people evaluate them as rich or poor images according to the visual architectural and urban elements of the facades. The buildings on Karimi Street in Lahijan city, which lies in the north of Iran, contain a variety of facade types that have formed the city image in the historical part of Lahijan while reflecting the identity of Iranian cities. The study attempts to identify the architectural and urban elements that influence the image of building facades in the historical area, based on public evaluation. A quantitative method was used and the data were collected through a questionnaire survey. The results show that architectural style, color, shape, and design were evaluated by people as the most important factors, which should be taken into account in future development. In fact, a rich architectural style with a strong design makes a strong city image, while a weak design makes a poor city image.

Keywords: Building's facade, historical area.

1553 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade. It has therefore been noticed that monitoring meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important to predict the solar energy production potential of a given geographical area. In this sense, the present work compares two computational tools that are capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. In order to achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and diffuse horizontal components, and model the devices of a photovoltaic system (solar modules and inverters) for the energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.

Keywords: Energy production, meteorological data, irradiance decomposition, solar photovoltaic system.
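
As a hedged illustration of the irradiance decomposition step, the sketch below uses the Python port of PVlib (the authors used the MATLAB PVlib toolbox) with the Erbs correlation, placeholder site coordinates and constant placeholder GHI values; function names and signatures refer to the Python package and may differ between versions.

```python
import pandas as pd
import pvlib

# Hypothetical site and one day of hourly GHI measurements (W/m^2).
times = pd.date_range("2020-01-01", periods=24, freq="H", tz="America/Sao_Paulo")
latitude, longitude = -22.9, -47.06        # placeholder coordinates
ghi = pd.Series(400.0, index=times)        # placeholder measured GHI

# Decompose global horizontal irradiance into direct-normal (DNI) and
# diffuse-horizontal (DHI) components with the Erbs correlation.
solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)
components = pvlib.irradiance.erbs(ghi, solpos["zenith"], times)
dni, dhi = components["dni"], components["dhi"]
```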

1552 A New Categorization of Image Quality Metrics Based On a Model of Human Quality Perception

Authors: Maria Grazia Albanesi, Riccardo Amadeo

Abstract:

This study presents a new model of the human image quality assessment process. The aim is to highlight the foundations of the image quality metrics proposed in the literature by identifying the cognitive/physiological or mathematical principles of their development and their relation to the actual human quality assessment process. The model allows the creation of a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most used or most effective objective metrics in the literature and, for each of them, underlines its main characteristics with reference to the rationale of the proposed model and categorization. From the results of this operation, we underline a problem that affects all the presented metrics: the fact that many aspects of human biases are not taken into account at all. We then propose a possible methodology to address this issue.

Keywords: Eye-Tracking, image quality assessment metric, MOS, quality of user experience, visual perception.

1551 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abbas, Joon Pyo Hong, Jung-Ryun Lee, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e. sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that, in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.

Keywords: Computed tomography, computed laminography, compressive sensing, low dose.

1550 Adaptive Bidirectional Flow for Image Interpolation and Enhancement

Authors: Shujun Fu, Qiuqi Ruan, Wenqia Wang

Abstract:

Image interpolation is a common problem in imaging applications. However, most existing interpolation algorithms suffer, to some extent, from the visual effects of blurred edges and jagged artifacts in the image. This paper presents an adaptive, feature-preserving bidirectional flow process, where an inverse diffusion is performed to sharpen edges along the directions normal to the isophote lines (edges), while a normal diffusion is done to remove artifacts ("jaggies") along the tangent directions. In order to preserve image features such as edges, corners and textures, the nonlinear diffusion coefficients are locally adjusted according to the directional derivatives of the image. Experimental results on synthetic images and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation methods.

Keywords: anisotropic diffusion, bidirectional flow, directional derivatives, edge enhancement, image interpolation, inverse flow, shock filter.
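
Written in local coordinates aligned with the isophotes (notation assumed here: η the gradient/normal direction, ξ the tangent direction, and c_η, c_ξ ≥ 0 locally adapted coefficients), a bidirectional flow of the kind described above takes the form
$$u_t = -\,c_\eta\,u_{\eta\eta} + c_\xi\,u_{\xi\xi},$$
i.e. an inverse (sharpening) diffusion across edges combined with a forward (smoothing) diffusion along them.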

1549 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Authors: Sumathi Poobal, G. Ravindran

Abstract:

Image compression is one of the most important applications of digital image processing. Advanced medical imaging requires the storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion, while in lossy compression the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of the image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different imaging modalities such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of the compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results it can be concluded that the DCT gives higher PSNR values and FIC gives a higher compression ratio. Hence, in medical image compression, the DCT can be used wherever picture quality is preferred, and FIC wherever compression of images for storage and transmission is the priority, without losing diagnostic picture quality.

Keywords: DCT, FIC, PIFS, PSNR.
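
For 8-bit images the PSNR quoted above is computed from the mean squared error between the original image $I$ and the reconstruction $\hat{I}$:
$$\mathrm{PSNR}=10\log_{10}\frac{255^{2}}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j)-\hat{I}(i,j)\big)^{2}.$$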

1548 Adomian Decomposition Method Associated with Boole's Integration Rule for Goursat Problem

Authors: Mohd Agos Salim Nasir, Ros Fadilah Deraman, Siti Salmah Yasiran

Abstract:

The Goursat partial differential equation arises in linear and nonlinear partial differential equations with mixed derivatives. This equation is a second-order hyperbolic partial differential equation which occurs in various fields of study such as engineering, physics, and applied mathematics. Many approaches have been suggested to approximate the solution of the Goursat partial differential equation. However, all of the suggested methods traditionally focused on numerical differentiation approaches, including forward and central differences, in deriving the scheme. An innovation has been made in deriving the Goursat partial differential equation scheme by involving numerical integration techniques. In this paper we have developed a new scheme to solve the Goursat partial differential equation based on the Adomian decomposition method (ADM) associated with Boole's integration rule to approximate the integration terms. The new scheme can easily be applied to many linear and nonlinear Goursat partial differential equations and is capable of reducing the size of the computational work. The accuracy of the results reveals the advantage of this new scheme over existing numerical methods.

Keywords: Goursat problem, partial differential equation, Adomian decomposition method, Boole's integration rule.
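
Boole's rule referred to above approximates an integral over four equal subintervals of width $h$ as
$$\int_{x_0}^{x_0+4h} f(x)\,dx \approx \frac{2h}{45}\big(7f_0+32f_1+12f_2+32f_3+7f_4\big),\qquad f_k=f(x_0+kh),$$
and in the proposed scheme it is used to approximate the integration terms arising from the Adomian decomposition of the Goursat equation.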

1547 Efficient Feature Fusion for Noise Iris in Unconstrained Environment

Authors: Yao-Hong Tsai

Abstract:

This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in our daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of the verification system through iris recognition.

Keywords: Image fusion, iris recognition, local binary pattern, wavelet.
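
A minimal sketch of the LBP texture feature stage, assuming scikit-image, a uniform-pattern LBP and a placeholder array in place of the wavelet-fused iris image (the AdaBoost detection and the weighting scheme are omitted):

```python
import numpy as np
from skimage.feature import local_binary_pattern

# 'fused_iris' stands in for the wavelet-fused, normalized iris image (grayscale).
fused_iris = np.random.rand(64, 256)

# Uniform LBP codes and their histogram, used as the texture feature.
P, R = 8, 1
codes = local_binary_pattern(fused_iris, P, R, method="uniform")
hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)  # P+2 uniform bins
```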

1546 Image Dehazing Using Dark Channel Prior and Fast Guided Filter in Daubechies Lifting Wavelet Transform Domain

Authors: Harpreet Kaur, Sudipta Majumdar

Abstract:

In this paper a method for image dehazing is proposed in the lifting wavelet transform domain. The lifting Daubechies (D4) wavelet has been used to obtain the approximation image and the detail images. As the haze is contained in the low-frequency part, only the approximation image is used for further processing. This region is processed by a dehazing algorithm based on the dark channel prior (DCP). The dehazed approximation image is then recombined with the detail images using the inverse lifting wavelet transform. Implementation of the lifting wavelet transform has the advantages of auxiliary memory saving, fast implementation and simplicity. The proposed method also deals with the near-white scene problem, the blue horizon issue and localized light sources in a way that enhances image quality and makes the algorithm robust. Simulation results show improvement in terms of visual quality and of parameters such as root mean square (RMS) contrast, structural similarity index (SSIM), entropy and execution time.

Keywords: Dark channel prior, image dehazing, lifting wavelet transform.
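
A minimal sketch of the dark channel computation applied to the low-frequency approximation image, assuming NumPy/SciPy and a placeholder array; the atmospheric-light estimation, transmission refinement with the fast guided filter, and the lifting-wavelet recombination are omitted.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over the color channels, followed
    by a local minimum filter over a patch."""
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

# 'approx' stands in for the Daubechies-4 approximation image, scaled to [0, 1].
approx = np.random.rand(128, 128, 3)
dark = dark_channel(approx)
# The atmospheric light and transmission map would then be estimated from 'dark'
# before restoring the scene radiance.
```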
