Search results for: gray image
2473 Open-Source YOLO CV For Detection of Dust on Solar PV Surface
Authors: Jeewan Rai, Kinzang, Yeshi Jigme Choden
Abstract:
Accumulation of dust on solar panels impacts their overall efficiency and the amount of energy they produce. While various techniques exist for detecting dust to schedule cleaning, many of these methods rely on MATLAB image processing tools and other licensed software, which can be financially burdensome. This study investigates the efficiency of a free, open-source computer vision library using the YOLO algorithm. The proposed approach has been tested on images of solar panels with varying dust levels through an experimental setup. The findings illustrate the effectiveness of the YOLO-based image classification method and of the overall dust detection approach, with an accuracy of 90% in distinguishing between clean and dusty panels. This open-source solution provides a cost-effective and accessible alternative to commercial image processing tools, offering a way to optimize solar panel maintenance and enhance energy production.
Keywords: YOLO, OpenCV, dust detection, solar panels, computer vision, image processing
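A minimal sketch of the kind of open-source pipeline the abstract describes, assuming the Ultralytics YOLO classification variant (the abstract does not name the exact YOLO version or library) and a hypothetical folder-per-class dataset layout:

```python
# Hedged sketch: assumes the Ultralytics YOLOv8 classification variant and a
# hypothetical dataset folder "panels" with "clean" and "dusty" subfolders.
from ultralytics import YOLO

# Load a small classification backbone and fine-tune on the two classes.
model = YOLO("yolov8n-cls.pt")
model.train(data="panels", epochs=50, imgsz=224)  # folder-per-class dataset

# Predict on a new panel image; results[0].probs holds class probabilities.
results = model("test_panel.jpg")                 # hypothetical file name
print(results[0].probs.top1, results[0].probs.top1conf)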
Procedia PDF Downloads 30
2472 Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network
Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao
Abstract:
The lack of data is a pressing problem in medical image analysis using convolutional neural networks (CNNs). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as the negative, histogram equalization, power law, sharpening, averaging, and Gaussian blurring help to generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, a collection of knee images from openly available online sources. The enhanced SSD is used for detection and localization of the knee joint in a given X-ray image. It is a modified version of the well-known SSD network that reduces the number of prediction boxes at the output side, and it consists of a classification network (VGGNet) and an auxiliary detection network. Performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. The localization boundary is also comparatively more refined and closer to the ground truth with spatial augmentation, giving better detection and localization of knee joints.
Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations
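The listed transforms map directly onto standard OpenCV/NumPy calls; a sketch of an augmentation helper along those lines (the file name, kernel sizes, and gamma value are illustrative):

```python
# Sketch of the spatial transforms named above (negative, histogram
# equalization, power law, sharpening, averaging, Gaussian blurring).
import cv2
import numpy as np

def augment(gray):  # gray: uint8 single-channel X-ray image
    kernel_sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    gamma = 0.5  # illustrative power-law exponent
    return {
        "negative": 255 - gray,
        "hist_eq": cv2.equalizeHist(gray),
        "power_law": np.uint8(255 * (gray / 255.0) ** gamma),
        "sharpen": cv2.filter2D(gray, -1, kernel_sharpen),
        "average": cv2.blur(gray, (5, 5)),
        "gaussian": cv2.GaussianBlur(gray, (5, 5), 0),
    }

img = cv2.imread("knee.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
samples = augment(img)  # six extra training samples from one image
```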
Procedia PDF Downloads 152
2471 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-grayscale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmosphere parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by regression, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and low point (LL) of the dynamic range are then LH = G × DNH + B and LL = B, respectively, where DNH = 2^n − 1 (n is the quantization bit depth of the payload). Meanwhile, the sensor's response linearity (δ) is taken as the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR: the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1, and the radiometric resolution is calculated as about 0.1845 W·sr−1m−2µm−1. Moreover, to validate the result, the measured radiance is compared with the radiative-transfer-code prediction over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively. The relative error of the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
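The calibration chain described above reduces to a line fit plus a few derived quantities. A sketch with illustrative values chosen to be consistent with the quoted coefficients (not the paper's data); n = 12 bits is an assumption that matches the quoted high point:

```python
# Sketch of the in-flight calibration step: fit L = G * DN + B from simulated
# at-sensor radiances and image DNs, then derive dynamic range and linearity.
import numpy as np

dn = np.array([520.0, 1700.0, 3350.0])   # mean DNs over the 3 gray targets (toy)
L = np.array([0.8, 10.6, 24.3])          # MODTRAN-simulated radiances (toy)

G, B = np.polyfit(dn, L, 1)              # least-squares line fit
n = 12                                   # assumed quantization bit depth
L_high = G * (2**n - 1) + B              # high point of dynamic range
L_low = B                                # low point (DN = 0)
linearity = np.corrcoef(dn, L)[0, 1]     # response linearity
print(G, B, L_low, L_high, linearity)
```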
Procedia PDF Downloads 269
2470 Foggy Image Restoration Using Neural Network
Authors: Khader S. Al-Aidmat, Venus W. Samawi
Abstract:
Blurred vision in a misty atmosphere is an essential problem that needs to be resolved. To solve it, we developed a technique to restore the original scene from its foggy, degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between the foggy scene and its corresponding original scene. Seven different approaches are suggested based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each of these approaches comes with its own BP-NN architecture depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are recognized. The system proved its effectiveness and success in restoring moderately foggy images.
Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration
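A minimal sketch of the foggy-to-clear patch mapping idea, with scikit-learn's MLPRegressor standing in for the back-propagation network and SciPy's DCT supplying the spatial-frequency feature variant; the toy degradation model and patch size are assumptions:

```python
# Sketch: learn a "fog filter" as a patch-to-patch regression, assuming paired
# foggy/clean training images (random toys here, not real scenes).
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPRegressor

def patches(img, size=8):
    h, w = img.shape
    return np.array([img[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
foggy = 0.6 * clean + 0.4                 # toy degradation for illustration

X = patches(foggy)                        # spatial-domain features
X_dct = np.array([dctn(p.reshape(8, 8), norm="ortho").ravel() for p in X])
y = patches(clean)

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X, y)
restored = net.predict(patches(foggy))    # apply the learned fog filter
```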
Procedia PDF Downloads 380
2469 Visual Intelligence: Perception, Image and Manipulation in Visual Communication
Authors: Poojitha Vemula
Abstract:
This work examines how image manipulation is used to communicate through an audience's perceptions, and how visual intelligence is conceived. With the use of software and high-end skills, designers have developed a third eye that combines two different visuals to create the desired image, using Photoshop and similar tools. The purpose of visual intelligence is to convey a message to the targeted audience. For instance, images of models are retouched on their skin to make them more convincing and draw attention from the audience. There are many ways of manipulating an image, such as double exposure, retouching with photography inks or paint, airbrushing and piecing photos together, or enhancing brightness and contrast. To understand visual intelligence, a questionnaire survey as well as research was conducted on how image manipulation is used by both the audience and the designers. This depends on the message that needs to be conveyed by the brands. For instance, Fair & Lovely, a brightening cream, uses a lot of retouching and effects to show the dramatic change the cream produces on dark or dusky faces. The designer's role is thus to use this third eye to incorporate the message into visuals. The research and questionnaire survey establish the perceptions and manipulations used in visual communication, with the aim of effortless communication between the designer and the audience through the skills of the designer and the features provided by the software. Conveying a message through visual intelligence requires the audience's perception and understanding of the visuals created by artists or designers. Visual intelligence determines how technical skills are used to retouch and manipulate an image so that the message reaches the targeted audience clearly; it thereby bridges the communication between the brand and the audience.
Keywords: graphic design, visual communication, convey messages, photoshop, image manipulation
Procedia PDF Downloads 216
2468 A Note on the Fractal Dimension of Mandelbrot Set and Julia Sets in Misiurewicz Points
Authors: O. Boussoufi, K. Lamrini Uahabi, M. Atounti
Abstract:
The main purpose of this paper is to calculate the fractal dimension of some Julia sets and of the Mandelbrot set at Misiurewicz points. Using MATLAB to generate the Julia set images that correspond to the Misiurewicz points, and using fractal analysis software, we were able to find different measures that characterize those fractals in texture and other features. We focus on the fractal dimension and on the error calculated by the software. The box-counting method is applied to the entire image, from which the software obtains the regression equation, i.e., the log-log slope of the image; the chosen settings are available in the FracLac program. Finally, a comparison is made for each image corresponding to the area (boundary) where the Misiurewicz point is located.
Keywords: box counting, FracLac, fractal dimension, Julia sets, Mandelbrot set, Misiurewicz points
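A sketch of the box-counting estimate itself, i.e., the log-log regression slope over dyadic box sizes; the test pattern is a toy stand-in, not a Julia set:

```python
# Box-counting fractal dimension: count occupied boxes at several scales and
# take the slope of log(count) versus log(1/box_size).
import numpy as np

def box_counting_dimension(binary):          # binary: 2D boolean array
    n = binary.shape[0]                      # assume square, power-of-two side
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)
    counts = []
    for s in sizes:
        m = n - n % s                        # crop to a multiple of s
        view = binary[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

img = np.zeros((256, 256), dtype=bool)
img[::2, :] = True                           # toy pattern, not a Julia set
print(box_counting_dimension(img))
```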
Procedia PDF Downloads 214
2467 Effect of Threshold Configuration on Accuracy in Upper Airway Analysis Using Cone Beam Computed Tomography
Authors: Saba Fahham, Supak Ngamsom, Suchaya Damrongsri
Abstract:
Objective: To determine the optimal threshold of Romexis software for airway volume and minimum cross-sectional area (MCA) analysis, using ImageJ as the gold standard. Materials and Methods: A total of ten cone-beam computed tomography (CBCT) images were collected. The airway volume and MCA of each patient were analyzed using the automatic airway segmentation function in the CBCT DICOM viewer (Romexis). Airway volume and MCA measurements were conducted on each CBCT sagittal view with fifteen different threshold values in the Romexis software, ranging from 300 to 1000. Duplicate DICOM files, in axial view, were imported into ImageJ for concurrent airway volume and MCA analysis as the gold standard. The airway volume and MCA measured with Romexis and ImageJ were compared using a t-test with Bonferroni correction, and statistical significance was set at p<0.003. Results: Concerning airway volume, thresholds of 600 to 850, as well as 1000, exhibited results that were not significantly different from those obtained through ImageJ. Regarding MCA, thresholds from 400 to 850 in Romexis Viewer showed no significant difference from ImageJ. Notably, within the threshold range of 600 to 850, no statistically significant differences were observed in either the airway volume or the MCA analyses in comparison to ImageJ. Conclusion: This study demonstrated that using Planmeca Romexis Viewer 6.4.3.3 within the threshold range of 600 to 850 yields airway volume and MCA measurements with no statistically significant variance in comparison to measurements obtained through ImageJ. This outcome holds implications for diagnosing upper airway obstructions and for post-orthodontic surgical monitoring.
Keywords: airway analysis, airway segmentation, cone beam computed tomography, threshold
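A sketch of the statistical comparison: paired t-tests of the Romexis measurements at each threshold against the ImageJ gold standard, with the Bonferroni-corrected level 0.05/15 ≈ 0.003 quoted above; the measurement values are simulated placeholders, not the study's data:

```python
# Paired t-test per threshold with Bonferroni correction across 15 thresholds.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
imagej = rng.normal(20.0, 3.0, 10)            # gold-standard volumes (n = 10 CBCTs)
thresholds = list(range(300, 1050, 50))       # 15 threshold settings
alpha = 0.05 / len(thresholds)                # Bonferroni: 0.05 / 15 = 0.003

for t in thresholds:
    romexis = imagej + rng.normal(0.0, 0.5, 10)   # placeholder measurements at t
    _, p = ttest_rel(romexis, imagej)
    verdict = "differs" if p < alpha else "no significant difference"
    print(t, round(p, 4), verdict)
```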
Procedia PDF Downloads 43
2466 A Gradient Orientation Based Efficient Linear Interpolation Method
Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar
Abstract:
This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-resolution video/image to a high-resolution video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames is possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging, and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they are far from achieving real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing
Procedia PDF Downloads 258
2465 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose an optimization-based extreme learning machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned each time. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, which is used for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance at very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
Keywords: BER, DWT, extreme learning machine (ELM), PSNR
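A minimal sketch of the ELM core the abstract relies on: fixed random hidden-layer parameters and a closed-form, regularized least-squares output layer. The feature and target arrays below are placeholders for the DWT-block features and watermark data, not the paper's setup:

```python
# ELM: random untuned hidden layer, analytic output weights.
import numpy as np

def elm_train(X, T, n_hidden=100, C=1e3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed input weights
    b = rng.standard_normal(n_hidden)                 # fixed biases
    H = np.tanh(X @ W + b)                            # hidden-layer feature map
    # Regularized least squares: beta = (H'H + I/C)^-1 H'T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.rand(200, 8)    # placeholder: DWT block features of the B-channel
T = np.random.rand(200, 1)    # placeholder: watermark values to regress
W, b, beta = elm_train(X, T)
print(elm_predict(X, W, b, beta).shape)
```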
Procedia PDF Downloads 310
2464 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training
Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li
Abstract:
Remotely sensed imagery with high spatial and temporal resolution, which is hard to acquire with current land observation satellites, has been considered a key factor for monitoring environmental changes at both global and local scales. On the basis of the limited high spatial-resolution observations, studies known as spatiotemporal fusion have been developed to generate high-spatiotemporal images by employing auxiliary low spatial-resolution data with high-frequency observations. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy, or inefficient performance. Although spatiotemporal fusion via sparse representation theory has advantages in capturing reflectance changes, stability, and execution efficiency (even more so when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still pending issues. In this paper, we employ additional image pairs (each image pair here comprises a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only in the coupled dictionary training process based on the K-SVD (K-means singular value decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that more eligible image pairs are likely to yield a more accurate overcomplete dictionary, which generally indicates a better image representation and then contributes to effective fusion performance, provided that the added image pair has seasonal aspects and spatial structure features similar to the original image pair. It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high spatial-resolution images based on limited acquisitions.
Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning
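A sketch of pooling patches from several image pairs into one dictionary training run; scikit-learn's MiniBatchDictionaryLearning is used here as a stand-in for the K-SVD solver the paper uses, and the images are random placeholders:

```python
# Train one overcomplete dictionary on patches pooled from several image pairs.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def extract_patches(img, size=7):
    h, w = img.shape
    return np.array([img[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size, size)
                     for j in range(0, w - size, size)])

pairs = [np.random.rand(140, 140) for _ in range(3)]  # Landsat/MODIS stand-ins
X = np.vstack([extract_patches(p) for p in pairs])    # pooled training patches

dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                   transform_algorithm="omp").fit(X)
D = dico.components_        # the learned overcomplete dictionary (256 atoms)
codes = dico.transform(X)   # sparse codes used downstream in fusion
```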
Procedia PDF Downloads 258
2463 Biophysical Modeling of Anisotropic Brain Tumor Growth
Authors: Mutaz Dwairy
Abstract:
Solid tumors have high interstitial fluid pressure (IFP), high mechanical stress, and low oxygen levels. Solid stresses may induce apoptosis, stimulate the invasiveness and metastasis of cancer cells, and lower their proliferation rate, while oxygen concentration may affect the response of cancer cells to treatment. Although tumors grow in a nonhomogeneous environment, many existing theoretical models assume homogeneous growth and uniform tissue mechanical properties. The brain, for example, consists of three primary materials: white matter, gray matter, and cerebrospinal fluid (CSF). Therefore, tissue inhomogeneity should be considered in the analysis. This study establishes a physical model based on convection-diffusion equations and continuum mechanics principles. The model accounts for the geometrical inhomogeneity of the brain by including the three different matters in the analysis: white matter, gray matter, and CSF. It also considers fluid-solid interaction and explicitly describes the effects of mechanical factors, e.g., solid stresses and IFP, chemical factors, e.g., oxygen concentration, and biological factors, e.g., cancer cell concentration, on growing tumors. In this article, we apply the model to a brain tumor positioned within the white matter, considering the brain's inhomogeneity, to estimate solid stresses, IFP, cancer cell concentration, oxygen concentration, and the deformation of the tissues within the neoplasm and its surroundings. Tumor size is estimated at different time points. This model might be clinically crucial for cancer detection and treatment planning by measuring mechanical stresses, IFP, and oxygen levels in the tissue.
Keywords: biomechanical model, interstitial fluid pressure, solid stress, tumor microenvironment
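The paper's full coupled model is not reproduced here; as a minimal stand-in for its inhomogeneous growth component, the sketch below advances a Fisher-KPP reaction-diffusion equation for cancer cell concentration with tissue-dependent diffusivity. All coefficients and the region layout are illustrative assumptions:

```python
# Minimal stand-in: explicit reaction-diffusion steps for cell concentration c,
# with diffusivity D differing between white matter, gray matter, and CSF.
import numpy as np

n, dx, dt, rho = 128, 0.1, 0.001, 0.012      # grid, step sizes, growth rate (toy)
D = np.full((n, n), 0.13)                    # white matter diffusivity (toy)
D[:, :40] = 0.013                            # gray matter region (toy)
D[:, 110:] = 0.0                             # CSF: no infiltration (toy)

c = np.zeros((n, n)); c[64, 64] = 1.0        # seed tumor in white matter
for _ in range(1000):
    flux_x = D[:, :-1] * np.diff(c, axis=1) / dx
    flux_y = D[:-1, :] * np.diff(c, axis=0) / dx
    div = np.zeros_like(c)
    div[:, 1:-1] += np.diff(flux_x, axis=1) / dx
    div[1:-1, :] += np.diff(flux_y, axis=0) / dx
    c += dt * (div + rho * c * (1 - c))      # diffusion + logistic growth
print(c.max(), (c > 0.01).sum())             # rough tumor extent
```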
Procedia PDF Downloads 45
2462 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric
Authors: Geetika Barman, B. S. Daya Sagar
Abstract:
In this article, we propose a pixel-wise classification of hyperspectral images using mathematical morphology operator-based distance metrics called the "dilation distance" and the "erosion distance". The method measures the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the "dilation distance" is the maximum distance a pixel can be moved without changing its classification, whereas the "erosion distance" is the maximum distance a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. A dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to precisely separate each class. This is illustrated using both toy data and real datasets. Furthermore, we investigate the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. For validation, we compared the proposed approach to other existing methods and demonstrated empirically that the mathematical-morphology-based distance metric classification provides competitive results and outperforms some of them.
Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology
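The paper's band-wise spectral construction is not reproduced here, but the underlying morphological distances can be sketched on binary sets: counting unit dilations of one set until it covers another, and unit erosions until coverage is lost. This is an illustrative reading of the two metrics, not the authors' exact definition:

```python
# Morphological dilation/erosion distances between binary sets (sketch).
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dilation_distance(a, b, max_iter=100):
    """Number of unit dilations of a needed until it covers b."""
    cur = a.copy()
    for k in range(max_iter + 1):
        if not (b & ~cur).any():
            return k
        cur = binary_dilation(cur)
    return np.inf

def erosion_distance(a, b, max_iter=100):
    """Number of unit erosions of a until it no longer covers b."""
    cur = a.copy()
    for k in range(max_iter + 1):
        if (b & ~cur).any():
            return k
        cur = binary_erosion(cur)
    return np.inf

a = np.zeros((32, 32), bool); a[8:24, 8:24] = True
b_in = np.zeros((32, 32), bool); b_in[12:20, 12:20] = True    # inside a
b_out = np.zeros((32, 32), bool); b_out[14:18, 20:28] = True  # partly outside a
print(dilation_distance(a, b_out), erosion_distance(a, b_in))
```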
Procedia PDF Downloads 83
2461 Multiple Images Stitching Based on Gradually Changing Matrix
Authors: Shangdong Zhu, Yunzhou Zhang, Jie Zhang, Hang Hu, Yazhou Zhang
Abstract:
Image stitching is a very important branch in the field of computer vision, especially for panoramic maps. In order to eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for horizontally captured images. For images captured horizontally, this paper assumes that only a translational operation is involved in image stitching. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, etc. in the image transformation. The paper adopts matrix approximation to obtain the minimum of the energy function, so that the shape distortion in the regions governed by the homography is minimized. The proposed method can avoid failures in stitching multiple horizontal images caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
Keywords: image stitching, gradually changing matrix, horizontal direction, matrix approximation, homography matrix
Procedia PDF Downloads 315
2460 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots
Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar
Abstract:
Machine vision has been widely used in agriculture in recent years as a tool to promote the automation of processes and increase levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization, and the Hough transform, to find edge lines along tree rows in an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different quality characteristics, and the results showed that the proposed method can successfully detect a path in different types of environments.
Keywords: agricultural mobile robot, image processing, path recognition, Hough transform
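A sketch of the same pipeline in OpenCV rather than MATLAB (histogram equalization, threshold detection, morphological erosion, Hough transform); the file name and all parameter values are illustrative:

```python
# Detect candidate row-border lines in an orchard image (sketch).
import cv2
import numpy as np

gray = cv2.imread("orchard.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
gray = cv2.equalizeHist(gray)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.erode(mask, np.ones((5, 5), np.uint8))
edges = cv2.Canny(mask, 50, 150)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=20)
lines = [] if lines is None else lines[:, 0]   # each row: x1, y1, x2, y2

# Split lines by slope sign (left vs right tree row); the midline between the
# averaged borders would serve as the robot path.
left = [l for l in lines if l[2] != l[0] and (l[3] - l[1]) / (l[2] - l[0]) < 0]
right = [l for l in lines if l[2] != l[0] and (l[3] - l[1]) / (l[2] - l[0]) > 0]
print(len(left), len(right))
```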
Procedia PDF Downloads 146
2459 Video Stabilization Using Feature Point Matching
Authors: Shamsundar Kulkarni
Abstract:
Video capturing by non-professionals leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. First, salient points in each frame of the input video are identified and processed, followed by optimization and stabilization of the video. Optimization accounts for the quality of the video stabilization. This method has shown good results in terms of stabilization, and it removed distortion from output videos recorded in different circumstances.
Keywords: video stabilization, point feature matching, salient points, image quality measurement
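A sketch of the standard feature-point stabilization loop the abstract outlines: track salient points, accumulate per-frame rigid motion, smooth the trajectory, and keep the difference as the per-frame correction. The input file, window size, and other parameters are illustrative, and the sketch assumes each frame yields trackable features:

```python
# Estimate per-frame motion from tracked salient points, then smooth it.
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")                    # hypothetical input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 30)   # salient points
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    m, _ = cv2.estimateAffinePartial2D(pts[status == 1], nxt[status == 1])
    dx, dy, da = m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])
    transforms.append([dx, dy, da])
    prev_gray = gray

T = np.array(transforms)                 # per-frame dx, dy, dangle
traj = np.cumsum(T, axis=0)              # accumulated camera trajectory
kernel = np.ones(31) / 31.0              # moving-average smoothing window
smooth = np.column_stack([np.convolve(traj[:, i], kernel, mode="same")
                          for i in range(3)])
corrected = T + (smooth - traj)          # motion to apply via cv2.warpAffine
```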
Procedia PDF Downloads 310
2458 Experimental Characterization of Composite Material with Non Contacting Methods
Authors: Nikolaos Papadakis, Constantinos Condaxakis, Konstantinos Savvakis
Abstract:
The aim of this paper is to determine the elastic properties (elastic modulus and Poisson ratio) of a composite material based on non-contacting imaging methods. More specifically, the significantly reduced cost of digital cameras has opened the way to reliable, low-cost strain measurement. This paper uses the open-source platform Ncorr, which implements the method of digital image correlation (DIC). Measuring strain with DIC involves random speckle preparation on the surface of the gauge area, image acquisition, and post-processing of the image correlation to obtain the displacement and strain fields on the surface under study. Technical issues relating to the quality of the obtained results are discussed. [0]8 fabric glass/epoxy composite specimens were prepared and tested at orientations of 0°, 30°, 45°, 60°, and 90°. Each test was recorded with the camera at a constant frame rate and under constant lighting conditions. The recorded images were processed using the image processing software, and the parameters of the tests are reported. The strain map output obtained through strain measurement with Ncorr is validated by (a) comparing the elastic properties with expected values from classical laminate theory and (b) finite element analysis.
Keywords: composites, Ncorr, strain map, videoextensometry
Procedia PDF Downloads 143
2457 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization
Authors: Christoph Linse, Thomas Martinetz
Abstract:
Recent findings have shown that neural networks also generalize in over-parametrized regimes with zero training error. This is surprising, since it runs completely against traditional machine learning wisdom. In our empirical study, we fortify these findings in the domain of fine-grained image classification. We show that very large convolutional neural networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet018, ResNet101, and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102, and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19 with 140 million weights learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class.
Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets
Procedia PDF Downloads 87
2456 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer is now a central concern in the field of medical science, and its spread is drastically affecting health and well-being across the global village. Methods: The extracted image of the skin tumor cannot be used directly for diagnosis, as the stored image contains disturbances. The proposed approach locates the region of interest in the extracted appearance of the skin, and image segmentation models are presented to sort out the disturbance in the picture. Results: After segmentation, feature extraction is performed using a genetic algorithm (GA), and finally, classification is performed between the training and test data to evaluate the image, which helps doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis; the efficiency of the natural-selection process and histogram enrichment are essential in that respect. The GA is applied to reduce the false-positive rate while maintaining accuracy. Conclusions: The objective of this task is to improve effectiveness, and the GA accomplishes its task of bringing down the false-positive rate. The work combines deep learning and medical image processing, which provides superior accuracy, and the processing is designed for reusability without errors.
Keywords: computer-aided system, detection, image segmentation, morphology
Procedia PDF Downloads 148
2455 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are so widely distributed that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of the paper's authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on implementing watermarking in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
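A minimal sketch of DWT-domain embedding, one of the two transforms in the title, using PyWavelets; the embedding strength and the non-blind extraction (original cover available) are assumptions, not the paper's exact scheme:

```python
# Embed a watermark in the approximation (LL) sub-band of a one-level DWT.
import numpy as np
import pywt

cover = np.float64(np.random.randint(0, 256, (256, 256)))  # stand-in image
wm = np.random.rand(128, 128)                              # stand-in watermark
alpha = 8.0                                                # embedding strength

LL, (LH, HL, HH) = pywt.dwt2(cover, "haar")
LL_marked = LL + alpha * wm                                # embed in LL band
marked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Extraction with the original cover available (non-blind):
LL2, _ = pywt.dwt2(marked, "haar")
recovered = (LL2 - LL) / alpha
print(np.allclose(recovered, wm, atol=1e-8))
```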
Procedia PDF Downloads 421
2454 PET Image Resolution Enhancement
Authors: Krzysztof Malczewski
Abstract:
PET is a widely applied scanning procedure in medical-imaging-based research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious, and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving image resolution in clinical positron emission tomography (PET) scanners. The issue of motion artifacts is well known in PET studies as a side effect. PET images are acquired over a limited period of time, and as patients cannot hold their breath during data gathering, spatial blurring and motion artifacts are the usual result; these may lead to a wrong diagnosis. It is shown that the presented approach improves PET spatial resolution in cases when compressed sensing (CS) sequences are used. Compressed sensing aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan-time reductions, with visible benefits for patients and health care economics. In this study, the goal is to combine a super-resolution image enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error.
Keywords: PET, super-resolution, image reconstruction, pattern recognition
Procedia PDF Downloads 369
2453 Image Segmentation Using Active Contours Based on Anisotropic Diffusion
Authors: Shafiullah Soomro
Abstract:
Active contours are one family of image segmentation techniques, whose goal is to capture required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitializing the level-set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better qualitatively and quantitatively, yielding results with higher accuracy compared to other state-of-the-art methods.
Keywords: active contours, anisotropic diffusion, level-set, partial differential equations
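A sketch of the Perona-Malik feature-enhancement step named above, in its standard explicit four-neighbour form; the iteration count, conductance parameter, and step size are illustrative:

```python
# Perona-Malik anisotropic diffusion: smooth flat regions, preserve edges.
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance (option 1)
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u            # four neighbour differences
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.rand(64, 64) * 10
noisy[:, 32:] += 100                          # step edge that should survive
print(perona_malik(noisy).std())
```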
Procedia PDF Downloads 157
2452 Mutual Information Based Image Registration of Satellite Images Using PSO-GA Hybrid Algorithm
Authors: Dipti Patra, Guguloth Uma, Smita Pradhan
Abstract:
Registration is a fundamental task in image processing. It is used to transform different sets of data into one coordinate system, where the data are acquired at different times, from different viewing angles, and/or by different sensors. Registration geometrically aligns two images (the reference and target images). Registration techniques are used with satellite images, where they are important for comparing or integrating the data obtained from different measurements. In this work, mutual information is considered as the similarity metric for registration of satellite images, and the transformation is assumed to be rigid. An attempt has been made here to optimize the transformation function. The proposed hybrid PSO-GA registration technique incorporates the notions of particle swarm optimization and the genetic algorithm and is used to find the best values of the transformation parameters. Experiments on satellite images show that the proposed hybrid PSO-GA algorithm outperforms the other algorithms in terms of mutual information and registration accuracy.
Keywords: image registration, genetic algorithm, particle swarm optimization, hybrid PSO-GA algorithm, mutual information
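A sketch of the similarity metric being optimized: mutual information computed from the joint histogram of the reference and transformed target images. The PSO-GA search itself is not reproduced here, and the bin count is illustrative:

```python
# Mutual information between two images from their joint histogram.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

ref = np.random.rand(128, 128)
tgt = np.roll(ref, 3, axis=1) + 0.05 * np.random.rand(128, 128)
print(mutual_information(ref, ref), mutual_information(ref, tgt))
```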
Procedia PDF Downloads 406
2451 Robust Medical Image Watermarking Using Frequency Domain and Least Significant Bits Algorithms
Authors: Volkan Kaya, Ersin Elbasi
Abstract:
Watermarking and steganography have been gaining importance recently because of copyright protection and authentication. In watermarking, we embed a stamp, logo, noise, or image into multimedia elements such as images, video, audio, animation, and text. Several works have been done in watermarking for different purposes. In this research work, we used watermarking techniques to embed patient information into medical magnetic resonance (MR) images. Two classes of methods have been used: frequency domain (discrete wavelet transform (DWT), discrete cosine transform (DCT), and discrete Fourier transform (DFT)) and spatial domain (least significant bits (LSB)). Experimental results show that embedding in the frequency domain resists one group of attacks, while embedding in the spatial domain resists another group of attacks. Peak signal-to-noise ratio (PSNR) and similarity ratio (SR) are the two measures used for testing, and both give very promising results for information hiding in medical MR images.
Keywords: watermarking, medical image, frequency domain, least significant bits, security
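A minimal sketch of the spatial-domain (LSB) branch: patient-information bits replace the least significant bit of the first pixels of the image. The sample payload and image are hypothetical stand-ins:

```python
# LSB embedding and extraction of patient information in an MR image.
import numpy as np

def lsb_embed(image, bits):
    flat = image.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(image.shape)

def lsb_extract(image, n_bits):
    return image.ravel()[:n_bits] & 1

mr = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in MR slice
info = np.frombuffer(b"Patient: Jane Doe, ID 0042", dtype=np.uint8)  # hypothetical
bits = np.unpackbits(info)

marked = lsb_embed(mr, bits)
print(np.packbits(lsb_extract(marked, bits.size)).tobytes())
```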
Procedia PDF Downloads 286
2450 Film Therapy on Adolescent Body Image: A Pilot Study
Authors: Sonia David, Uma Warrier
Abstract:
Background: Film therapy is the use of commercial or non-commercial films to enhance healing for therapeutic purposes. Objectives: This mixed-method study aims to evaluate the effect of film-based counseling on body image dissatisfaction among adolescents and to precisely ascertain the cause of the alteration in body image dissatisfaction due to the intervention. Method: The one-group pre-test post-test study, using inferential statistics and thematic analysis, was conducted on 44 school-going adolescents between 13 and 17. The Body Shape Questionnaire (BSQ-34) was used as the pre-test and post-test measure, and the film-based counseling intervention model was delivered through individual counseling sessions. A paired-sample t-test was used to examine the data quantitatively, and thematic analysis was used to evaluate the qualitative data. Findings: The results indicated a significant difference between the pre-test and post-test means. Since t(44) = 9.042 is significant at a 99% confidence level, it is ascertained that the film-based counseling intervention reduces body image dissatisfaction. The five distinct themes from the thematic analysis are "acceptance, awareness, empowered to change, empathy, and reflective." Novelty: The paper makes an original contribution to the repertoire of research on film therapy as a successful counseling intervention for addressing the challenges of body image dissatisfaction. This study also opens avenues for altering teaching pedagogy to include video-based learning in various subjects.
Keywords: body image dissatisfaction, adolescents, film-based counselling, film therapy, acceptance and commitment therapy
Procedia PDF Downloads 294
2449 Implementation of Achterbahn-128 for Images Encryption and Decryption
Authors: Aissa Belmeguenai, Khaled Mansouri
Abstract:
In this work, an efficient implementation of Achterbahn-128 for image encryption and decryption is introduced. The implementation for this simulated project is written in MATLAB 7.5. First, two different original images are used to validate the proposed design, and our program transforms the original image data into an image-digits file. Finally, the implemented program is used to encrypt and decrypt the image data. Several tests are done to prove the design's performance, including visual tests and security analysis; we discuss the security analysis of the proposed image encryption scheme, covering important aspects such as key sensitivity analysis, key space analysis, and statistical attacks.
Keywords: Achterbahn-128, stream cipher, image encryption, security analysis
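Achterbahn-128 itself combines nonlinear feedback shift registers and is not reproduced here; the sketch below only illustrates the surrounding procedure the abstract describes — serializing image data and XOR-ing it with a keystream — using a seeded PRNG as a stand-in keystream generator:

```python
# Stream-cipher-style image encryption: XOR image bytes with a keystream.
import numpy as np

def keystream(seed, n):                      # stand-in generator, NOT Achterbahn
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, n, dtype=np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
ks = keystream(0xC0FFEE, img.size)

cipher = (img.ravel() ^ ks).reshape(img.shape)    # encryption
plain = (cipher.ravel() ^ ks).reshape(img.shape)  # decryption is the same XOR
print(np.array_equal(plain, img))
```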
Procedia PDF Downloads 530
2448 A Study on Real-Time Fluorescence-Photoacoustic Imaging System for Mouse Thrombosis Monitoring
Authors: Sang Hun Park, Moung Young Lee, Su Min Yu, Hyun Sang Jo, Ji Hyeon Kim, Chul Gyu Song
Abstract:
A near-infrared light source used in a fluorescence imaging system is suitable for real-time use during operations, since it does not interfere with the surgical view. However, fluorescence images carry no depth information. In this paper, we describe a device built upon research on molecular imaging systems for monitoring thrombi using fluorescence and photoacoustic imaging. Fluorescence imaging was performed in a phantom experiment to find the exact location, and photoacoustic imaging was performed to detect the depth. When the fluorescence image obtained in the current phantom experiments was evaluated, it was confirmed to look sharper at a contrast-agent concentration of 25 µg/ml. The phantom experiment has shown the feasibility of combined fluorescence and photoacoustic imaging using an indocyanine green contrast agent. For early diagnosis of cardiovascular diseases, more active research on the fusion of different molecular imaging devices is required.
Keywords: fluorescence, photoacoustic, indocyanine green, carotid artery
Procedia PDF Downloads 600
2447 Reactive and Concurrency-Based Image Resource Management Module for iOS Applications
Authors: Shubham V. Kamdi
Abstract:
This paper aims to serve as an introduction to image resource caching techniques for iOS mobile applications. It explains how developers can break multiple image-downloading tasks into concurrent work using state-of-the-art iOS frameworks, namely Swift Concurrency and Combine, and how they can leverage SwiftUI to develop reactive view components using declarative coding patterns. Developers will learn to bypass built-in image caching systems by curating the procedure to implement a Swift-based LRU cache system. The paper provides a full architectural overview of such a system, helping readers understand how mobile applications are designed professionally. It also covers technical discussion of the low-level details of threads and how to switch between them, as well as the significance of the main and background threads for requesting HTTP services in mobile applications.
Keywords: main thread, background thread, reactive view components, declarative coding
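The article targets Swift, but the LRU eviction logic it curates is language-independent; a Python sketch of the same idea using an ordered map (a stand-in for the Swift implementation, with hypothetical URLs):

```python
# LRU image cache: most recently used entries move to the end of an
# OrderedDict; the oldest entry is evicted first when capacity is exceeded.
from collections import OrderedDict

class LRUImageCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self._store = OrderedDict()          # url -> image bytes

    def get(self, url):
        if url not in self._store:
            return None
        self._store.move_to_end(url)         # mark as most recently used
        return self._store[url]

    def put(self, url, data):
        self._store[url] = data
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUImageCache(capacity=2)
cache.put("a.png", b"...")
cache.put("b.png", b"...")
cache.get("a.png")                           # refresh "a.png"
cache.put("c.png", b"...")                   # evicts "b.png"
print(cache.get("b.png"))                    # None
```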
Procedia PDF Downloads 22
2446 A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry
Authors: Dongxu Chen, Yipeng Li
Abstract:
This paper presents an information-geometry nonlocal means (NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches in Euclidean space. In this work, every pixel is a Poisson distribution locally estimated by maximum likelihood (ML), and all the distributions constitute a statistical manifold. An NLM denoising algorithm is conducted on this statistical manifold, where the Fisher information matrix can be used for computing geodesic distances between distributions, which serve as the similarity between patches. This approach is demonstrated to be competitive with related state-of-the-art methods.
Keywords: image denoising, Poisson noise, information geometry, nonlocal means
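For the one-parameter Poisson family the Fisher information is 1/λ, so the Fisher-Rao geodesic distance has the closed form d(λ1, λ2) = 2|√λ1 − √λ2|. A sketch that uses it as the NLM patch similarity for a single pixel; the patch size and filtering parameter are illustrative, and the full denoiser is not reproduced:

```python
# NLM weights from Fisher-Rao distances between patches of Poisson rates.
import numpy as np

def fisher_rao_poisson(p, q):
    """Sum of squared pointwise geodesic distances, 2|sqrt(p) - sqrt(q)|."""
    return np.sum((2.0 * (np.sqrt(p) - np.sqrt(q))) ** 2)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(5, 50, 8), (40, 1))      # toy intensity image
noisy = rng.poisson(clean).astype(float)             # ML rate estimate per pixel

ref = noisy[0:3, 0:3]                                # reference patch
h = 10.0                                             # filtering parameter
weights = np.array([np.exp(-fisher_rao_poisson(ref, noisy[i:i+3, 0:3]) / h**2)
                    for i in range(0, 37)])
estimate = np.sum(weights * noisy[1:38, 1]) / weights.sum()
print(estimate, clean[0, 1])                         # denoised vs true value
```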
Procedia PDF Downloads 284
2445 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System
Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng
Abstract:
Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It can provide more depth information than a traditional 2D X-ray single projection, and the radiation dose is lower than in computed tomography (CT). Because of the limited angular range of the scan, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate it using the reconstructed images. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine, which allows the X-ray tube to move in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom containing lesions of three different sizes (3 mm, 5 mm, and 8 mm diameter) was used to evaluate image quality. Traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction; non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed with an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the reconstructed tomosynthesis images; the quantitative method was to calculate the peak-to-valley ratio (PVR), the intensity ratio of the lesion to the background, which we used to evaluate lesion contrast. The qualitative results showed that in the reconstructed image from non-coplanar scanning, anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were discovered. In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results show no significant PVR differences between non-coplanar and traditional tomosynthesis, with the PVRs of the non-coplanar technique slightly higher for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced without decreasing the PVRs of the lesions, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging
Procedia PDF Downloads 365
2444 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as linear combinations of only a few pure spectral signatures (endmembers) from an available spectral library. However, sparse unmixing still faces a great challenge in finding the optimal subset of endmembers for the observed data in a large standard spectral library, especially when spatial information is not considered. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation, is used to exploit the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
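A sketch of the sparse-regression core of unmixing — approximating one pixel's spectrum by a sparse, non-negative combination of library signatures. NLSSU's non-local patch grouping and NLM regularizer are not reproduced, and the library here is a random placeholder:

```python
# Sparse unmixing of a single pixel against a spectral library via Lasso.
import numpy as np
from sklearn.linear_model import Lasso

bands, n_sigs = 50, 120
rng = np.random.default_rng(0)
A = rng.random((bands, n_sigs))                 # spectral library (columns)
x_true = np.zeros(n_sigs)
x_true[[7, 42]] = [0.6, 0.4]                    # two active endmembers
y = A @ x_true + 0.001 * rng.standard_normal(bands)

model = Lasso(alpha=1e-3, positive=True, max_iter=50000)
model.fit(A, y)
abundances = model.coef_
print(np.nonzero(abundances > 0.01)[0])         # recovered endmember indices
```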
Procedia PDF Downloads 244