Search results for: Image Texture
1016 Beta-spline Surface Fitting to Multi-slice Images
Authors: Normi Abdul Hadi, Arsmah Ibrahim, Fatimah Yahya, Jamaludin Md. Ali
Abstract:
The Beta-spline is built on G2 continuity, which guarantees the smoothness of curves and surfaces generated with it. This curve is usually preferred for object design rather than reconstruction. This study, however, employs the Beta-spline to reconstruct a 3-dimensional G2 image of the Stanford Rabbit. The original data consist of multi-slice binary images of the rabbit. The result is then compared with related works using other techniques.
Keywords: Beta-spline, multi-slice image, rectangular surface, 3D reconstruction.
1015 Medical Image Segmentation Using Deformable Model and Local Fitting Binary: Thoracic Aorta
Authors: B. Bagheri Nakhjavanlo, T. S. Ellis, P. Raoofi, Sh. Ziari
Abstract:
This paper presents an application of level sets for the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting aortic aneurysms is the need to overcome problems associated with intensity inhomogeneities. Level sets are part of an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A kernel function in the level set formulation aids the suppression of noise in the extracted regions of interest and guides the motion of the evolving contour for the detection of weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets, and the method is shown to be more effective than other approaches in coping with intensity inhomogeneities. We have applied the Courant-Friedrichs-Lewy (CFL) condition as the stability criterion for our algorithm.
Keywords: Image segmentation, level sets, local fitting binary, thoracic aorta.
1014 Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images
Authors: Mario Mastriani, Alberto E. Giraldez
Abstract:
In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, apply the Kalman filter to the high-frequency subbands, and reconstruct a despeckled image from the modified detail coefficients. Experimental results demonstrate that our method compares favorably to several other despeckling methods on test synthetic aperture radar (SAR) images.
Keywords: Kalman filter, shrinkage, speckle, wavelets.
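The subband pipeline this abstract describes (decompose, shrink the detail coefficients, reconstruct) can be sketched with PyWavelets. The Kalman-derived shrinkage rule itself is specific to the paper's pdf model, so the soft threshold below is a hypothetical stand-in, not the authors' filter:

```python
import numpy as np
import pywt

def despeckle_wavelet(image, wavelet="db2", levels=2, k=1.5):
    """Generic wavelet-shrinkage despeckling sketch.

    The log transform turns multiplicative speckle into additive noise;
    the soft threshold on the detail subbands is a placeholder for the
    paper's Kalman-derived shrinkage rule.
    """
    log_img = np.log1p(image.astype(np.float64))
    coeffs = pywt.wavedec2(log_img, wavelet, level=levels)
    shrunk = [coeffs[0]]  # keep the approximation subband untouched
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            sigma = np.median(np.abs(band)) / 0.6745  # robust noise estimate
            bands.append(pywt.threshold(band, k * sigma, mode="soft"))
        shrunk.append(tuple(bands))
    return np.expm1(pywt.waverec2(shrunk, wavelet))

# Example: despeckle a synthetic gamma-speckled image
clean = np.full((128, 128), 100.0)
speckled = clean * np.random.gamma(4.0, 0.25, clean.shape)
result = despeckle_wavelet(speckled)
```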
1013 Digital Image Watermarking in the Wavelet Transform Domain
Authors: Kamran Hameed, Adeel Mumtaz, S.A.M. Gilani
Abstract:
In this paper, we begin by characterizing the most important and distinguishing features of wavelet-based watermarking schemes, drawing on a study of the large number of algorithms proposed in the literature. The copyright protection application scenario is considered and, building on the experience gained, two distinguishing watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique.
Keywords: Digital image, copyright protection, watermarking, wavelet transform.
1012 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related applications. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful application. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors that is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images. It then localizes eyes in an incoming face image by exploiting the fact that the true eye coordinates are most likely to be very close to the position whose Gabor jet has the best similarity match with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in Elastic Bunch Graph Matching (EBGM). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, which imposes a much greater computational burden. In this paper, we propose a multi-scale approach with only a small increase in computational burden: eyes are first localized from Gabor feature vectors in a coarse face image obtained by down-sampling the original, and the coarse-scale eye coordinates are then used as initial points for localization in the original-resolution face image. Several experiments and comparisons with other eye localization methods reported in the literature show the efficiency of our proposed method.
Keywords: Eye localization, Gabor features, multi-scale, Gabor wavelets.
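A minimal sketch of the jet-matching idea, assuming a numpy-only filter bank; the bank sizes, frequencies, and the normalized-dot-product similarity below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Complex Gabor kernel: oriented complex sinusoid under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * rot)

def gabor_jet(image, x, y, freqs=(0.1, 0.2, 0.3), n_orient=8, size=21):
    """Stack filter-response magnitudes at one pixel into a jet."""
    half = size // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    jet = [np.abs(np.sum(patch * gabor_kernel(size, f, t, sigma=half / 2)))
           for f in freqs
           for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.asarray(jet)

def jet_similarity(j1, j2):
    """Normalized dot product of jet magnitudes (cosine similarity)."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

# Coarse-to-fine idea: score candidate positions around an initial point
img = np.random.rand(120, 120)
model_jet = gabor_jet(img, 60, 60)          # stand-in for an Eye Model Bunch jet
best = max(((x, y) for x in range(55, 66) for y in range(55, 66)),
           key=lambda p: jet_similarity(model_jet, gabor_jet(img, *p)))
```

In the multi-scale scheme, the same search would first run on a down-sampled image and `best` would seed the search window at full resolution.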
1011 On Musical Information Geometry with Applications to Sonified Image Analysis
Authors: Shannon Steinmetz, Ellen Gethner
Abstract:
In this paper, a theoretical foundation is developed to segment, analyze and associate patterns within audio. We explore this on imagery by applying sonified audio to our segmentation framework. The approach involves a geodesic estimator within the statistical manifold, parameterized by musical centricity. We demonstrate viability by processing a database of random imagery to produce statistically significant clusters of similar imagery content.
Keywords: Sonification, musical information geometry, image content extraction, automated quantification, audio segmentation, pattern recognition.
1010 A New Approach for Counting Passersby Utilizing Space-Time Images
Authors: A. Elmarhomy, S. Karungaru, K. Terada
Abstract:
Understanding the number of people and the flow of persons is useful for efficient promotion of institution management and a company's sales improvement. This paper introduces an automated method for counting passersby using virtual-vertical measurement lines. Passersby are recognized from an image sequence obtained from a USB camera. A space-time image represents the human regions, which are treated using a segmentation process. To handle the problem of mismatching, different color spaces are used to perform template matching, automatically choosing the best match to determine passerby direction and speed. A relation between passerby speed and the human-pixel area is used to distinguish one passerby from two. In the experiment, the camera is fixed at the entrance door of a hall in a side-viewing position. Experimental results verify the effectiveness of the presented method, which correctly detects passersby and counts them by direction with an accuracy of 97%.
Keywords: Counting passersby, virtual-vertical measurement line, passerby speed, space-time image.
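The core data structure here, a space-time image built by stacking the pixels under a virtual measurement line across frames, can be sketched as follows; the line position and synthetic frame source are illustrative assumptions:

```python
import numpy as np

def space_time_image(frames, line_x):
    """Stack the vertical measurement line at column line_x across frames.

    frames: iterable of (H, W) grayscale frames.
    Returns an (H, T) image whose columns are successive time samples;
    a passerby crossing the line appears as a slanted blob whose width
    relates to speed and whose area relates to the number of people.
    """
    return np.stack([f[:, line_x] for f in frames], axis=1)

# Example with synthetic frames: a bright block drifting across the line
T, H, W = 60, 100, 160
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 40:70, (2 * t) % W:(2 * t) % W + 12] = 1.0  # moving "person"

sti = space_time_image(frames, line_x=80)
mask = sti > 0.5                        # segment human regions
print("line-crossing pixels:", int(mask.sum()))
```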
1009 Robust Image Transmission Over Time-varying Channels using Hierarchical Joint Source Channel Coding
Authors: Hatem Elmeddeb, Noureddine Hamdi, Ammar Bouallègue
Abstract:
In this paper, a joint source-channel coding (JSCC) scheme for time-varying channels is presented. The proposed scheme uses a hierarchical framework for both the source encoder and transmission via QAM modulation. Hierarchical joint source-channel codes with hierarchical QAM constellations are designed to track channel variations, which yields a higher throughput by adapting certain receiver parameters to the channel variation. We consider the problem of still image transmission over time-varying channels with channel state information (CSI) available at (1) the receiver only and (2) both the transmitter and the receiver. We describe an algorithm that optimizes hierarchical source codebooks by minimizing the distortion due to the source quantizer and channel impairments. Simulation results, based on image representation, show that the proposed hierarchical system outperforms conventional schemes based on a single modulator and channel-optimized source coding.
Keywords: Channel-optimized VQ (COVQ), joint optimization, QAM, hierarchical systems.
1008 A Comparison of Real Valued Transforms for Image Compression
Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori
Abstract:
In this paper, we present simulation results for the application of a bandwidth-efficient (mapping) algorithm to an image transmission system. The system considers three different real-valued transforms to generate energy-compact coefficients. Results are first presented for gray-scale and color image transmission in the absence of noise. The system performs best when the discrete cosine transform is used. The performance is also dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise: varying the bit error rate has very little impact on the performance of the algorithm. Optimum results are obtained with an 8x8 transform block, transmitting 15 coefficients from each block using 8 bits each.
Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.
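The best-performing configuration reported above, 8x8 DCT blocks with 15 retained coefficients, can be sketched as below; keeping the largest-magnitude coefficients is an illustrative choice, since the abstract does not say how the 15 are selected:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=15):
    """2-D DCT of one block, keeping only the `keep` largest coefficients
    by magnitude (a stand-in for the paper's coefficient selection)."""
    coeffs = dctn(block, norm="ortho")
    cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return idctn(coeffs, norm="ortho")

def blockwise_dct_codec(image, block=8, keep=15):
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            out[i:i + block, j:j + block] = compress_block(
                image[i:i + block, j:j + block].astype(np.float64), keep)
    return out

img = np.random.rand(64, 64) * 255
rec = blockwise_dct_codec(img)
psnr = 10 * np.log10(255.0**2 / np.mean((img - rec) ** 2))
```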
1007 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks to retrain the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during new task learning, while the second module learns to replicate the data of previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.
Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.
1006 Medical Image Segmentation Using Deformable Models and Local Fitting Binary
Authors: B. Bagheri Nakhjavanlo, T. J. Ellis, P. Raoofi, J. Dehmeshki
Abstract:
This paper presents a customized deformable model for the segmentation of abdominal and thoracic aortic aneurysms in CTA datasets. An important challenge in reliably detecting aortic aneurysms is the need to overcome problems associated with intensity inhomogeneities and image noise. Level sets are part of an important class of methods that utilize partial differential equations (PDEs) and have been extensively applied in image segmentation. A Gaussian kernel function in the level set formulation, which extracts local intensity information, aids the suppression of noise in the extracted regions of interest and guides the motion of the evolving contour for the detection of weak boundaries. The speed of curve evolution has been significantly improved, with a resulting decrease in segmentation time compared with previous implementations of level sets. The results indicate the method is more effective than other approaches in coping with intensity inhomogeneities.
Keywords: Abdominal and thoracic aortic aneurysms, intensity inhomogeneity, level sets, local fitting binary.
1005 Wrap-around View Equipped on Mobile Robot
Authors: Sun Lim, Sewoong Jun, Il-Kyun Jung
Abstract:
This paper presents a wrap-around view system built from four smart camera modules, together with a remote motion control scheme for a mobile robot equipped with this system. A two-level scheme for remote motion control with a smart pad (iPad) is introduced. At the low level, the wrap-around view system is operated to keep the reference points lying on the top-view image plane. At the higher level, an image-based motion controller drives the mobile platform to reach the desired position or track the desired motion plan through image feature feedback. The designed wrap-around view system offers the following advantages: (1) a satisfactory solution to the field-of-view and affine problems; (2) freedom from complex constraints on the robot pose. The performance of the wrap-around view system for remote mobile robot control is proven by experimental results.
Keywords: Four smart cameras, wrap-around view, remote mobile robot control.
1004 A Real-Time Tracking System Developed for an Interactive Stage Performance
Authors: S. Hu, J. Mortensen, Bernard F. Buxton
Abstract:
A real-time tracking system was built to track performers on an interactive stage. Using an ordinary, up-to-date desktop workstation, the performers' silhouettes were segmented from the background and parameterized by calculating the normalized central image moments. In the stage system, the silhouette moments were then sent to a parallel workstation, which used them to generate corresponding 3D virtual geometry and projected the generated graphics back onto the stage.
Keywords: Image moment, interactive stage, real-time, silhouette.
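Normalized central moments, the silhouette parameterization mentioned above, are a standard construction and easy to sketch in numpy; the binary-mask input is an assumption:

```python
import numpy as np

def normalized_central_moments(mask, max_order=3):
    """Normalized central moments eta_pq of a binary silhouette mask.

    mu_pq  = sum of (x - xc)^p (y - yc)^q over foreground pixels,
    eta_pq = mu_pq / mu_00^(1 + (p + q) / 2), which is scale-invariant.
    """
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    xc, yc = xs.mean(), ys.mean()          # silhouette centroid
    eta = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p + q < 2:
                continue                   # orders 0 and 1 are trivial
            mu = np.sum((xs - xc) ** p * (ys - yc) ** q)
            eta[(p, q)] = mu / m00 ** (1 + (p + q) / 2)
    return eta

# Example: moments of a filled rectangular blob
mask = np.zeros((120, 160), dtype=bool)
mask[40:80, 50:120] = True
print(normalized_central_moments(mask))
```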
1003 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and judging the ownership of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is hence adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed improvement of more than double the speed of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: Steganography, watermarking, private keys, time complexity measurements.
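The RGB-to-YIQ conversion that makes the embedding fast is a fixed linear transform. The sketch below embeds text bits as small perturbations of the Y channel; this embedding rule is hypothetical, since the abstract does not give the paper's exact insertion scheme:

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix and its inverse
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def embed_text(rgb, text, strength=2.0, seed=42):
    """Toy embedding: nudge the Y channel by +/- strength at pseudo-random
    pixel positions according to the text's bits."""
    yiq = rgb.astype(np.float64) @ RGB2YIQ.T
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    rng = np.random.default_rng(seed)              # seed acts as a private key
    idx = rng.choice(yiq.shape[0] * yiq.shape[1], len(bits), replace=False)
    ys, xs = np.unravel_index(idx, yiq.shape[:2])
    yiq[ys, xs, 0] += np.where(bits == 1, strength, -strength)
    return np.clip(yiq @ YIQ2RGB.T, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_text(img, "owner:alice")
```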
1002 Design of a DCT-based Image Compression with Efficient Enhancement Filter
Authors: Yen-Yu Chen, Pao-Ching Chu, Ya-Ling Tsai
Abstract:
The algorithm represents the DCT coefficients so as to concentrate signal energy, and proposes combination and dictator operations to eliminate the correlation within the same-level subband when encoding DCT-based images. This work adopts the DCT and modifies the SPIHT algorithm to encode the DCT coefficients. The proposed algorithm also provides an enhancement function at low bit rates in order to improve perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of PSNR, with perceptual results close to JPEG2000 at the same bit rate.
Keywords: JPEG 2000, enhancement filter
1001 Persian/Arabic Document Segmentation Based On Pyramidal Image Structure
Authors: Seyyed Yasser Hashemi, Khalil Monfaredi
Abstract:
Automatic transformation of paper documents into electronic documents requires document segmentation as a first stage. However, restrictions on parameters such as variations in character font size, differing text line spacing, and non-uniform document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years, and most previously reported methods inevitably depend on these parameters. The problem is especially acute for Persian/Arabic documents: since Persian/Arabic scripts differ considerably from English scripts, most methods proposed for English do not give good results on Persian. In this paper, we present a novel parameter-free method for segmenting Persian/Arabic document images which also works well for English scripts. The method segments the document image into maximal homogeneous regions and identifies them as text and non-text based on a pyramidal image structure. In other words, the proposed method can segment documents without considering character font sizes, text line spacing, or document layout structure. The algorithm was examined on 150 Arabic/Persian and English documents, and document segmentation was performed successfully for 96 percent of them.
Keywords: Persian/Arabic document, document segmentation, Pyramidal Image Structure, skew detection and correction.
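A pyramidal decomposition of the kind the method relies on — coarsen the page, then test cells for homogeneity — can be sketched like this; the 2x2 mean pooling and variance test are illustrative assumptions, not the paper's exact rules:

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Successive 2x2 mean-pooling of a grayscale page image."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        pyr.append(p[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def homogeneous_regions(img, cell=16, var_thresh=500.0):
    """Mark cells with low intensity variance as homogeneous (background
    or solid regions); mixed cells likely contain text."""
    mask = np.zeros((img.shape[0] // cell, img.shape[1] // cell), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            block = img[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            mask[i, j] = block.var() < var_thresh
    return mask

page = np.full((256, 256), 255.0)
page[100:140, 40:220] = np.random.randint(0, 80, (40, 180))  # a "text" band
coarse = build_pyramid(page, levels=3)[-1]   # 64x64 view for quick tests
print(homogeneous_regions(page).astype(int))
```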
1000 The Effects of Applying Wash and Green-A Syrups as Substitution of Sugar on Dough and Cake Properties
Authors: Banafsheh Aghamohammadi, Masoud Honarvar, Babak Ghiassi Tarzi
Abstract:
The use of different components to improve the quality and nutritional properties of cakes has received attention in recent years. The effects of applying sweeteners instead of sugar have been evaluated in cakes and many bread formulas, but there has been no research on using by-products of sugar factories, such as Wash and Green-A Syrups, in cake formulas. In this research, the effects of substituting 25%, 50%, 75% and 100% of the sugar with Wash and Green-A Syrups on dough and cake properties such as pH, viscosity, density, volume, weight loss, moisture, water activity, texture, staling, color and sensory evaluation are studied. The results showed that pH values did not differ significantly among the cake batters or among most of the cake samples. Although the differences in viscosity and specific gravity among treatments were variously significant and insignificant, these two parameters produced a higher volume in all samples than in the control. The differences in weight loss, moisture content and water activity of the samples were insignificant. Texture evaluation showed that the softness of most samples increased and staling decreased. Crumb color and sensory evaluations were also affected by the replacement of sucrose with Wash and Green-A Syrups. According to the results, the shelf life, quality and nutritional value of cake can be improved by using these syrups in the formulation.
Keywords: Cake, green-A syrup, quality tests, sensory evaluation, wash syrup.
999 Extracting Road Signs using the Color Information
Authors: Wen-Yen Wu, Tsung-Cheng Hsieh, Ching-Sung Lai
Abstract:
In this paper, we propose a method to extract road signs. First, the grabbed image is converted into the HSV color space to detect the road signs. Second, morphological operations are used to reduce noise. Finally, the road sign is extracted using its geometric properties. The feature extraction of the road sign is done using the color information. The proposed method has been tested in real situations, and the experimental results show that it can extract road sign features effectively.
Keywords: Color information, image processing, road sign.
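A minimal OpenCV sketch of this pipeline — HSV thresholding, morphological cleanup, geometric filtering — with hue ranges for red signs chosen as illustrative values and a hypothetical input file name:

```python
import cv2
import numpy as np

def extract_red_signs(bgr):
    """HSV threshold -> morphology -> geometric filter (toy red-sign range)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges (assumed bounds)
    lo = cv2.inRange(hsv, (0, 90, 60), (10, 255, 255))
    hi = cv2.inRange(hsv, (170, 90, 60), (180, 255, 255))
    mask = cv2.morphologyEx(lo | hi, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))  # noise reduction
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    signs = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Geometric property: signs are compact, roughly square regions
        if cv2.contourArea(c) > 400 and 0.7 < w / float(h) < 1.4:
            signs.append((x, y, w, h))
    return signs

frame = cv2.imread("road_scene.jpg")          # hypothetical input image
if frame is not None:
    print(extract_red_signs(frame))
```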
998 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurement was conducted by digital image correlation on compact normal and shear specimens, made of an acrylic homogeneous material, subjected to mixed-mode loading. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress intensity factor at the free surface is discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were taken with a CMOS camera, and we developed a system enabling real-time stress analysis based on digital image correlation and inverse problem analysis. The greater portion of this system's processing time was spent on displacement analysis, so we worked to speed up this stage. For a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress intensity factor. The 9-point elliptic paraboloid approximation could not analyze displacements of submicron order with high accuracy; the analysis accuracy was improved considerably by introducing the Newton-Raphson method, which takes the deformation of a subset into consideration. The stress intensity factor was evaluated with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
997 Robust Minutiae Watermarking in Wavelet Domain for Fingerprint Security
Authors: Rajlaxmi Chouhan, Pritee Khanna
Abstract:
In this manuscript, a wavelet-based blind watermarking scheme is proposed as a means of securing the authenticity of a fingerprint. The information used for identification or verification of a fingerprint mainly lies in its minutiae. By robustly watermarking the minutiae into the fingerprint image itself, the useful information can be extracted accurately even if the fingerprint is severely degraded. The minutiae are converted into a binary watermark, and embedding these watermarks in the detail regions increases the robustness of the watermarking at little to no additional cost in image quality. It has been experimentally shown that when the minutiae are embedded into the wavelet detail coefficients of a fingerprint image in spread-spectrum fashion using a pseudorandom sequence, robustness responds proportionally, and perceptual invisibility inversely, to the amplification factor K. The DWT-based technique is found to be very robust against noise, geometric distortion, filtering and JPEG compression attacks, and also gives remarkably better performance than a DCT-based technique in terms of correlation coefficient and the number of erroneous minutiae.
Keywords: Fingerprint watermarking, minutiae, discrete wavelet transform, PN sequence.
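The spread-spectrum embedding described here — add a PN-modulated bit sequence to the DWT detail coefficients, scaled by K — can be sketched with PyWavelets. The additive rule, single-level Haar decomposition, and the correlation detector (which uses the original image only to keep the demo short) are simplifying assumptions:

```python
import numpy as np
import pywt

def embed_minutiae(image, bits, K=2.0, seed=7):
    """Additive spread-spectrum embedding of watermark bits into the
    horizontal detail subband, scaled by amplification factor K."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), "haar")
    rng = np.random.default_rng(seed)                 # seed = shared secret
    pn = rng.choice([-1.0, 1.0], size=(len(bits),) + cH.shape)
    for b, chip in zip(bits, pn):
        cH += K * (1 if b else -1) * chip             # one chip image per bit
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def detect_bits(marked, original, n_bits, seed=7):
    """Detect bits by correlating the detail-band difference with each
    PN chip image; cross terms average out near zero."""
    _, (cHm, _, _) = pywt.dwt2(marked, "haar")
    _, (cHo, _, _) = pywt.dwt2(original.astype(np.float64), "haar")
    diff = cHm - cHo
    rng = np.random.default_rng(seed)
    pn = rng.choice([-1.0, 1.0], size=(n_bits,) + diff.shape)
    return [(diff * chip).sum() > 0 for chip in pn]

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
bits = [1, 0, 1, 1]
marked = embed_minutiae(img, bits)
print(detect_bits(marked, img, len(bits)))  # -> [True, False, True, True]
```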
996 Feature Vector Fusion for Image Based Human Age Estimation
Authors: D. Karthikeyan, G. Balakrishnan
Abstract:
Human faces, as important visual signals, express a significant amount of nonverbal information for use in human-to-human communication, and age is among the more significant of these properties. Human age estimation from facial image analysis is an automated method with numerous potential real-world applications. In this paper, an automated age estimation framework is presented. A Support Vector Regression (SVR) strategy is utilized for age prediction. The paper describes feature extraction based on the Gray Level Co-occurrence Matrix (GLCM), which can be utilized for a robust face recognition framework: GLCM operations are applied to extract facial texture features, and Active Appearance Models (AAMs) are used to estimate the human age from the image. A fused feature technique and SVR with GA optimization are proposed to lessen the error in age estimation.
Keywords: Support vector regression, feature extraction, gray level co-occurrence matrix, active appearance models.
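The GLCM-plus-SVR part of this pipeline can be sketched with scikit-image and scikit-learn; the chosen distances, angles, and texture properties are illustrative, the training data are synthetic, and the AAM and GA stages are omitted:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVR

def glcm_features(gray):
    """Texture features from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Toy training set: random "face" patches with made-up ages
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (40, 64, 64), dtype=np.uint8)
ages = rng.uniform(18, 70, 40)

X = np.array([glcm_features(f) for f in faces])
model = SVR(kernel="rbf", C=10.0).fit(X, ages)
print("predicted age:", model.predict(X[:1])[0])
```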
995 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function
Authors: Anupama Pande, Vishik Goel
Abstract:
A complex-valued neural network is a neural network whose inputs and/or weights and/or thresholds and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems, and one of their most important applications is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function that has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper presents exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and stated various observations, such as the effect of learning rates, the ranges of randomly selected initial weights, the error functions used, and the number of iterations needed for convergence of the error in the model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex-valued neural network, radial basis function, image recognition.
994 Video Matting based on Background Estimation
Authors: J.-H. Moon, D.-O Kim, R.-H. Park
Abstract:
This paper presents a video matting method that extracts the foreground and alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas do not move through all frames of an image sequence, so we accumulate moving regions through the sequence. The boundaries of moving regions are found by the Canny edge detector, and the foreground region is separated in each frame of the sequence. The remaining regions are defined as background regions, and the backgrounds extracted from each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than a threshold are defined as foreground regions, the boundaries of foreground regions are defined as unknown regions, and the rest are defined as background. Segmentation information that classifies an image into foreground, background, and unknown regions is called a trimap. The matting process can extract an alpha matte in the unknown region using pixel information from the foreground and background regions, and estimate the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original one.
Keywords: Background estimation, object segmentation, block matching algorithm, video matting.
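The background-estimation and trimap steps can be sketched in numpy; using a per-pixel temporal median to integrate the background and a dilation band for the unknown region are simplifying assumptions in place of the SBMA/Canny machinery:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def estimate_background(frames):
    """Per-pixel temporal median: the stationary background dominates."""
    return np.median(frames, axis=0)

def make_trimap(frame, background, fd_thresh=25.0, band=3):
    """0 = background, 255 = foreground, 128 = unknown boundary band."""
    fg = np.abs(frame - background) > fd_thresh        # frame difference
    boundary = binary_dilation(fg, iterations=band) & ~fg
    trimap = np.zeros(frame.shape, dtype=np.uint8)
    trimap[fg] = 255
    trimap[boundary] = 128
    return trimap

# Synthetic sequence: static backdrop plus a moving bright square
T, H, W = 20, 80, 80
frames = np.full((T, H, W), 50.0)
for t in range(T):
    frames[t, 20:40, 3 * t:3 * t + 15] += 150

bg = estimate_background(frames)
trimap = make_trimap(frames[10], bg)
```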
993 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users in order to make the most effective use of them. In the present study, a new phantom was used to evaluate dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed-mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The doses and image noise distribution were more uniform when the ATCM system was used than with the fixed-mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower-dose options: this limit prevented the tube current from being reduced further, so the lower-dose ATCM setting resembled a fixed-mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
992 Scatterer Density in Nonlinear Diffusion for Speckle Reduction in Ultrasound Imaging: The Isotropic Case
Authors: Ahmed Badawi
Abstract:
This paper proposes a method for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. A nonlinear image diffusion method is proposed that incorporates a local image parameter, namely scatterer density, in addition to gradient, to weight the nonlinear diffusion process. The method was tested for the isotropic case with a contrast-detail phantom and a variety of clinical ultrasound images, and then compared to linear and other diffusion enhancement methods. Different diffusion parameters were tested and tuned to best reduce speckle noise and preserve edges. The method showed superior performance, measured both quantitatively and qualitatively, when incorporating scatterer density into the diffusivity function. The proposed filter can be used as a preprocessing step for ultrasound image enhancement before applying automatic segmentation, automatic volumetric calculations, or 3D ultrasound volume rendering.
Keywords: Ultrasound imaging, nonlinear isotropic diffusion, speckle noise, scattering.
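A minimal sketch of gradient-weighted isotropic nonlinear diffusion (Perona-Malik style), with a hook where a scatterer-density map could further weight the diffusivity; the density term shown is a placeholder, not the paper's estimator:

```python
import numpy as np

def nonlinear_diffusion(img, n_iter=20, k=15.0, dt=0.2, density=None):
    """Perona-Malik style diffusion: g = 1 / (1 + (|grad I| / k)^2).

    `density` (same shape as img, values in [0, 1]) is an optional
    placeholder for a scatterer-density weight on the diffusivity.
    """
    u = img.astype(np.float64).copy()
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)          # edge-stopping function
    for _ in range(n_iter):
        # Differences to the four neighbours (np.roll = periodic borders,
        # acceptable for a sketch)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        flux = g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw
        if density is not None:
            flux *= density          # diffuse more where scatterers are dense
        u += dt * flux
    return u

noisy = 100.0 + 20.0 * np.random.randn(128, 128)
smoothed = nonlinear_diffusion(noisy)
```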
991 Medical Imaging Techniques in Clinical Medicine
Authors: Sharan Badiger, Prema T. Akkasaligar
Abstract:
Medical imaging technology has experienced a dramatic change in the last few years. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for various clinical purposes such as medical procedures and diagnosis or medical science including the study of normal anatomy and function. With the growth of computers and image technology, medical imaging has greatly influenced the medical field. The diagnosis of a health problem is now highly dependent on the quality and the credibility of the image analysis. This paper deals with the various aspects and types of medical imaging.
Keywords: Computed Tomography, Echocardiography, Medical Imaging, Magnetic Resonance, Ultrasound Imaging.
990 Performance Evaluation of Iris Region Detection and Localization for Biometric Identification System
Authors: Chit Su Htwe, Win Htay
Abstract:
Iris recognition technology is the most accurate, fastest and least invasive compared with other biometric techniques using, for example, fingerprints, face, retina, hand geometry, voice or signature patterns. The system developed in this study has the potential to play a key role in areas of high-risk security, giving organizations a fast and secure way to grant access to such areas only to authorized personnel. The aim of this paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, an easy and efficient tool for image processing with good performance. The system includes two main parts: the first preprocesses the iris images using Canny edge detection and segments the iris region from the rest of the image; the second determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes of the CASIA iris database.
Keywords: Canny, C#, Hough transform, image preprocessing.
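The Canny-plus-Hough pipeline translates directly to OpenCV (shown here in Python rather than the paper's C#); the parameter values, radius bounds, and input file name are illustrative assumptions:

```python
import cv2
import numpy as np

def localize_iris(gray):
    """Canny edges, then circular Hough transforms for the iris boundaries."""
    blurred = cv2.GaussianBlur(gray, (7, 7), 2)       # suppress eyelash noise
    edges = cv2.Canny(blurred, 40, 80)
    # Outer (limbic) boundary: larger radii; inner (pupil): smaller radii
    outer = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=80, param2=30, minRadius=80, maxRadius=150)
    inner = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=80, param2=30, minRadius=20, maxRadius=60)
    return edges, outer, inner

eye = cv2.imread("casia_eye.bmp", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if eye is not None:
    edges, outer, inner = localize_iris(eye)
    if outer is not None and inner is not None:
        print("outer circle:", outer[0][0], "pupil circle:", inner[0][0])
```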
989 Localization of Anatomical Landmarks in Head CT Images for Image to Patient Registration
Authors: M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs
Abstract:
The use of anatomical landmarks as a basis for image to patient registration is appealing because the registration may be performed retrospectively. We have previously proposed the use of two anatomical soft tissue landmarks of the head, the canthus (corner of the eye) and the tragus (a small, pointed, cartilaginous flap of the ear), as a registration basis for an automated CT image to patient registration system, and described their localization in patient space using close range photogrammetry. In this paper, the automatic localization of these landmarks in CT images, based on their curvature saliency and using a rule-based system that incorporates prior knowledge of their characteristics, is described. Existing approaches to landmark localization in CT images are predominantly semi-automatic and primarily target internal landmarks. To validate our approach, the positions of the landmarks localized automatically and manually in near-isotropic CT images of 102 patients were compared. The average difference was 1.2 mm (std = 0.9 mm, max = 4.5 mm) for the medial canthus and 0.8 mm (std = 0.6 mm, max = 2.6 mm) for the tragus. The medial canthus and tragus can thus be automatically localized in CT images with performance comparable to manual localization.
Keywords: Anatomical landmarks, CT, localization.
988 RoboWeedSupport - Sub Millimeter Weed Image Acquisition in Cereal Crops at Speeds up to 50 km/h
Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Mads Dyrmann, Robert Poulsen
Abstract:
For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings of a decision support system and the required weed inspections. In order to automate the weed inspections, it is desired to generate a map of the weed species present within the field, which requires images captured at sample points covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution for an existing tractor, a cost of 2.4 EUR/ha is estimated to be obtainable under equal conditions.
Keywords: Weed mapping, integrated weed management, weed recognition.
987 Iterative Image Reconstruction for Sparse-View Computed Tomography via Total Variation Regularization and Dictionary Learning
Authors: XianYu Zhao, JinXu Guo
Abstract:
Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing the radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the Penalized Weighted Least Squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL) to improve reconstruction performance. We call this method "PWLS-TV-DL". To evaluate the PWLS-TV-DL method, we performed experiments on digital and physical phantoms. The experimental results show that our method is superior to other methods in image quality and computational efficiency, which confirms its potential for low-dose CT imaging.
Keywords: Low dose computed tomography, penalized weighted least squares, total variation, dictionary learning.
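A minimal sketch of the PWLS-plus-TV part of such a scheme, on a toy linear system; the dictionary-learning term and a real CT projector are omitted, and the gradient-descent solver and all parameters are illustrative:

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of (smoothed) isotropic total variation of a 2-D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def pwls_tv(A, y, w, shape, lam=0.05, step=2e-4, n_iter=300):
    """Minimize (Ax - y)^T W (Ax - y) + lam * TV(x) by gradient descent.

    A: (M, N) system matrix (stand-in for a CT projector),
    y: (M,) noisy measurements, w: (M,) statistical weights.
    """
    x = np.zeros(int(np.prod(shape)))
    for _ in range(n_iter):
        data_grad = 2 * A.T @ (w * (A @ x - y))
        x -= step * (data_grad + lam * tv_grad(x.reshape(shape)).ravel())
    return x.reshape(shape)

# Toy problem: recover a small blocky image from weighted random projections
rng = np.random.default_rng(1)
truth = np.zeros((16, 16))
truth[4:12, 5:11] = 1.0
A = rng.normal(size=(300, 256))
y = A @ truth.ravel() + 0.05 * rng.normal(size=300)
recon = pwls_tv(A, y, w=np.ones(300), shape=(16, 16))
```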