Search results for: object-based image analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28611

28281 The "Street Less Traveled": Body Image and Its Relationship with Eating Attitudes, Influence of Media and Self-Esteem among College Students

Authors: Aditya Soni, Nimesh Parikh, R. A. Thakrar

Abstract:

Background: This cross-sectional study focused on body image satisfaction, a hitherto under-investigated area in our setting. The study additionally examined the relationship of body image with body mass index, influence of media, and self-esteem. Our second objective was to assess whether there was any relationship between body image dissatisfaction and gender. Methods: A cross-sectional study using body image satisfaction described in words was undertaken, which also explored its relationship with body mass index (BMI), influence of media, self-esteem, and other selected co-variables such as socio-demographic details, overall satisfaction in life and particularly in academic/professional life, and current health status, using 5-item Likert scales. Convenience sampling was used to select 303 participants of both genders aged 17 to 32. Results: Body image satisfaction had a significant relationship with body mass index (P<0.001), eating attitude (P<0.001), influence of media (P<0.001), and self-esteem (P<0.001). Underweight students had a significantly higher prevalence of body image satisfaction, while overweight students had a significantly higher prevalence of dissatisfaction (P<0.001). Females showed more concern about body image than males. Conclusions: Overall, this study reveals that eating attitude, influence of the media, and self-esteem are significantly related to body image. Body image satisfaction therefore needs to be maintained for the overall mental and healthy development of young people. Proactive preventive measures could be initiated in institutions, addressing personality development and acceptance of self and of individual differences, while maintaining an ideal weight and an active lifestyle.

Keywords: body image, body mass index, media, self-esteem

Procedia PDF Downloads 551
28280 Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing

Authors: S. Vignesh, K. S. Rangasamy

Abstract:

The paper deals with a device capable of detecting and disabling digital cameras. The system locates the camera and then neutralizes it. Every digital camera has an image sensor known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, which is invisible to the human eye, at a distance of about 20 feet. It then collects video of these reflections with a camcorder. The video of the reflections is transferred to a computer connected to the device, where it is run through image processing algorithms that pick out the infrared light bouncing back. Once the camera is detected, the device projects an invisible infrared laser into the camera's lens, thereby overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras but pose neither a health risk to humans nor physical damage to cameras. We also discuss a simplified design of the above device that can be used in theatres to prevent piracy. The domains covered here are optics and image processing.

Keywords: CCD, optics, image processing, D3CIP

Procedia PDF Downloads 335
28279 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement

Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao

Abstract:

Surface roughness is an important index for evaluating surface quality and needs to be accurately measured to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select appropriate features. However, such feature analysis is performed independently and cannot fully utilize the information in the data. Besides, blindly reducing features loses a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to extract the texture features of machined images effectively. Then, principal component analysis (PCA) is used to fuse all extracted features into a new feature set, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively resolve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness.
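
Illustrative only: the abstract does not publish code, but a minimal sketch of this kind of pipeline in Python, assuming grayscale machined-surface images and measured roughness values, could look as follows (scikit-image and scikit-learn assumed available; all names and parameter values are illustrative).

```python
# Sketch of the feature-fusion pipeline described above (not the authors' code):
# GLCM texture features are extracted, fused with PCA, and regressed with an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def glcm_features(img_u8):
    """Statistical + GLCM texture features from one 8-bit grayscale image."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).ravel()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    stats = [img_u8.mean(), img_u8.std(), img_u8.min(), img_u8.max()]
    return np.hstack(props + [stats])

def train_roughness_model(images, roughness, n_components=3):
    """images: list of 2-D uint8 arrays; roughness: measured Ra values (hypothetical data)."""
    X = np.vstack([glcm_features(im) for im in images])
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),  # fuse redundant features
                          SVR(kernel="rbf", C=10.0))
    model.fit(X, roughness)
    return model
```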

Keywords: feature analysis, machine vision, PCA, surface roughness, SVM

Procedia PDF Downloads 186
28278 Identification of How Pre-Service Physics Teachers Understand Image Formations through Virtual Objects in the Field of Geometric Optics and Development of a New Material to Exploit Virtual Objects

Authors: Ersin Bozkurt

Abstract:

The aim of the study is to develop materials for understanding image formation by virtual objects in geometric optics. The images in physics course books are formed using real objects. This results in mistakes about the features of images because of generalizations, which lead to conceptual misunderstandings in learning. In this study, it was intended to identify pre-service physics teachers' misconceptions arising from such false generalizations. Focus group interviews were used as a qualitative method. The findings of the study show that students hold several misconceptions, such as "the image in a plane mirror is always virtual", whereas a real image can in fact be formed in a plane mirror. To explain image formation by a virtual object in a more understandable way, an overhead projector and an episcope and their designs are illustrated. The illustrations are original, and several computer simulations will be suggested.

Keywords: computer simulations, geometric optics, physics education, students' misconceptions in physics

Procedia PDF Downloads 377
28277 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising Common Carotid Artery (CCA) B-mode ultrasound images by a decomposition approach to curvelet thresholding, followed by automatic segmentation of the intima-media thickness and adventitia boundary. Through decomposition, the local geometry of the image and the direction of its gradients are well preserved. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise from the image. The denoised image is segmented by an active contour without specifying seed points. Combined with level set theory, this provides sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A curve or a surface under constraints is evolved from the image so that it is pulled onto the salient features of the image. Region-based and boundary-based information are integrated to obtain the contour. The method accounts for the multiplicative speckle noise in objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.
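
Curvelet transforms are not part of the common Python imaging libraries, so the following is only a rough analogue of the decompose-and-threshold step described above, using a wavelet decomposition with soft thresholding (PyWavelets assumed installed; threshold rule and wavelet choice are illustrative, not the authors' method).

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db4", level=3, thr=None):
    """Decompose, soft-threshold the detail coefficients, and reconstruct.
    A stand-in sketch for the curvelet thresholding step described above."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    if thr is None:
        # universal threshold estimated from the finest diagonal detail band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```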

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 312
28276 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Image of Jakarta Metropolitan City, Indonesia

Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze

Abstract:

Jakarta Metropolitan City is located on the northwest coast of West Java province, with a geographical location between 106º33’00”-107º00’00”E longitude and 5º48’30”-6º24’00”S latitude. The Jakarta urban area has suffered from land subsidence in several land use types, such as trading, industrial, and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta. This hazard is caused by intensive human activities in groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvial fan sediment. The objective of this research is to analyze Jakarta urban land use types within land subsidence zones. Producing safer land use and settlements in the land subsidence areas is very important, and spatial distribution maps of detected land subsidence are a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned using X-band satellite image data from 2010 to 2013 and land use mapping data. Our land use analysis shows that land subsidence occurred in the northern part of Jakarta Metropolitan City, varying from 7.5 to 17.5 cm/year, mainly in industrial and settlement land use areas.

Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image

Procedia PDF Downloads 253
28275 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation, a time-consuming calculation performed on each pixel value that leads to high computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement. In the proposed scheme, vector differences between the vectors are taken as the new-generation vectors, or new codebook vectors. The codebook is updated by comparing each new-generation vector against a threshold value, keeping those with minimum error with respect to the parent vector. The minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is used to suppress the degraded portion of the image, thereby enhancing the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
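
For context, the sketch below shows the conventional minimum-distance codebook training that the abstract sets out to modify; the vector-difference update itself is specific to the paper and is not reproduced here (array shapes and parameters are illustrative).

```python
import numpy as np

def train_codebook(vectors, k=16, iters=20, rng=np.random.default_rng(0)):
    """Conventional minimum-distance (LBG / k-means style) codebook training,
    i.e. the baseline vector quantization the abstract sets out to modify.
    vectors: 2-D array of shape (N, D), e.g. flattened 4x4 image blocks."""
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # nearest codeword per vector
        for j in range(k):                     # centroid update
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels
```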

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 237
28274 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using Pixel Difference Percentage metrics.
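
NCD itself is straightforward to compute with standard-library compressors; the following sketch, with LZMA or BZIP2 as in the paper, illustrates the definition (the threshold used to flag scene alterations is left to the caller).

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Consecutive video frames (as raw bytes) whose NCD exceeds a chosen threshold
# are flagged as scene alterations; lzma.compress can be swapped in for bz2.
```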

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 304
28273 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
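
As a hedged illustration of the saddle-point test described above (not the authors' code), one can least-squares fit a quadratic surface to a small window around each candidate corner and check the sign of the Hessian determinant; the window size and names below are illustrative.

```python
import numpy as np

def fit_quadratic_patch(patch):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    over a small window centred on a candidate corner."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() - w // 2
    y = yy.ravel() - h // 2
    z = patch.ravel().astype(float)
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def is_saddle(coeffs):
    """An X-corner behaves like a saddle point: the Hessian [[2a, c], [c, 2b]]
    has a negative determinant."""
    a, b, c = coeffs[0], coeffs[1], coeffs[2]
    return 4 * a * b - c * c < 0

def saddle_location(coeffs):
    """Sub-pixel stationary point from grad = 0 (relative to the patch centre)."""
    a, b, c, d, e, _ = coeffs
    H = np.array([[2 * a, c], [c, 2 * b]])
    return np.linalg.solve(H, [-d, -e])
```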

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 376
28272 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which time the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, regions of the myocardium overlapped by high liver accumulation were not diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
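
A minimal sketch of the time-subtraction arithmetic, assuming the ten per-minute reconstructions are available as NumPy arrays; the frame ranges follow the abstract, while the scaling factor and clipping are illustrative assumptions.

```python
import numpy as np

def time_subtraction(per_minute_volumes, k=1.0):
    """Sketch of the time-subtraction idea: estimate a liver-dominated image
    from the early frames, then subtract it from the later image that contains
    both liver and myocardial uptake.
    per_minute_volumes: list of 10 SPECT volumes, one per minute (assumed)."""
    vols = np.asarray(per_minute_volumes, dtype=float)
    early = vols[0:3].sum(axis=0)        # ~0.5-2.5 min: liver uptake dominates
    late = vols[5:10].sum(axis=0)        # 5-10 min: liver + myocardium
    liver_only = np.clip(early - k * late, 0, None)
    corrected = np.clip(late - liver_only, 0, None)
    return corrected
```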

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 301
28271 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample-size situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 305
28270 Binarized-Weight Bilateral Filter for Low Computational Cost Image Smoothing

Authors: Yu Zhang, Kohei Inoue, Kiichi Urahama

Abstract:

We propose a simplified bilateral filter with binarized coefficients to accelerate its computation. Its computational cost is further decreased by sampling pixels. This computationally inexpensive filter is useful for smoothing or denoising images on mobile devices with limited computational power.
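
A hedged sketch of the idea in NumPy: range weights are binarized to 0/1 by a threshold on the intensity difference, and the window is subsampled to reduce cost (window size, threshold, and sampling step are illustrative; this is not the authors' implementation).

```python
import numpy as np

def binarized_bilateral(img, radius=4, range_thr=20, step=2):
    """Bilateral-style smoothing with binarized weights: a neighbour contributes
    weight 1 if |I(q) - I(p)| <= range_thr (and lies on the sampling grid),
    otherwise 0. 'step' subsamples the window to cut the cost further."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    acc = np.zeros_like(img)
    cnt = np.zeros_like(img)
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            w = (np.abs(shifted - img) <= range_thr).astype(float)  # binarized weight
            acc += w * shifted
            cnt += w
    return acc / np.maximum(cnt, 1)  # the centre pixel always contributes, so cnt >= 1
```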

Keywords: bilateral filter, binarized-weight bilateral filter, image smoothing, image denoising, pixel sampling

Procedia PDF Downloads 448
28269 Image Processing on Geosynthetic Reinforced Layers to Evaluate Shear Strength and Variations of the Strain Profiles

Authors: S. K. Khosrowshahi, E. Güler

Abstract:

This study investigates the reinforcement function of geosynthetics on the shear strength and strain profile of sand. By conducting a series of simple shear tests, the shearing behavior of the samples under static and cyclic loads was evaluated. Three different types of geosynthetics, including a geotextile and geonets, were used as the reinforcement materials. An image processing analysis based on the optical flow method was performed to measure the lateral displacements and estimate the shear strains. It is shown that besides improving the shear strength, the geosynthetic reinforcement leads to a remarkable reduction in the shear strains. The improved layer reduces the required thickness of the soil layer to resist shear stresses. Consequently, geosynthetic reinforcement can be considered a proper approach for sustainable design, especially in projects with a large amount of geotechnical work, such as the subgrades of pavements, roadways, and railways.
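
As a rough illustration of the optical-flow step (not the authors' code), dense Farneback flow between two frames of the specimen can be differentiated with depth to give an approximate shear-strain profile; OpenCV is assumed available and all parameters are illustrative.

```python
import cv2
import numpy as np

def shear_strain_profile(frame_before, frame_after):
    """Estimate lateral (horizontal) displacements with dense optical flow and
    differentiate them with depth to get an approximate shear-strain profile.
    Inputs: 8-bit grayscale images of the sand specimen before/after shearing."""
    flow = cv2.calcOpticalFlowFarneback(frame_before, frame_after, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)
    u = flow[..., 0]                    # horizontal displacement field (pixels)
    du_dy = np.gradient(u, axis=0)      # d(u)/d(depth) ~ engineering shear strain
    return du_dy.mean(axis=1)           # average strain at each depth
```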

Keywords: image processing, soil reinforcement, geosynthetics, simple shear test, shear strain profile

Procedia PDF Downloads 198
28268 Review of the Software Used for 3D Volumetric Reconstruction of the Liver

Authors: P. Strakos, M. Jaros, T. Karasek, T. Kozubek, P. Vavra, T. Jonszta

Abstract:

In medical imaging, segmentation of different areas of the human body, such as bones, organs, and tissues, is an important issue. Image segmentation allows isolating the object of interest for further processing, which can lead, for example, to 3D model reconstruction of whole organs. The difficulty of this procedure varies from trivial for bones to quite difficult for organs like the liver. The liver is considered one of the most difficult human organs to segment, mainly due to its complexity, shape variability, and proximity to other organs and tissues. Due to these facts, substantial user effort usually has to be applied to obtain satisfactory segmentation results, and the image segmentation process then deteriorates from an automatic or semi-automatic one into a fairly manual one. In this paper, an overview of selected available software applications that can handle semi-automatic image segmentation with subsequent 3D volume reconstruction of the human liver is presented. The applications are evaluated based on the segmentation results of several consecutive DICOM images covering the abdominal area of the human body.

Keywords: image segmentation, semi-automatic, software, 3D volumetric reconstruction

Procedia PDF Downloads 268
28267 Performance Analysis of New Types of Reference Targets Based on Spaceborne and Airborne SAR Data

Authors: Y. S. Zhou, C. R. Li, L. L. Tang, C. X. Gao, D. J. Wang, Y. Y. Guo

Abstract:

The triangular trihedral corner reflector (CR) has been widely used as a point target for synthetic aperture radar (SAR) calibration and image quality assessment. The additional “tip” of the triangular plate does not contribute to the reflector’s theoretical RCS, and if it interacts with a perfectly reflecting ground plane, it will yield an increase of RCS at the radar bore-sight and decrease the accuracy of SAR calibration and image quality assessment. To address this problem, two types of CRs were manufactured. One was the hexagonal trihedral CR, a self-illuminating CR with relatively small plate edge length, whereas a large edge length usually introduces unexpected edge diffraction error. The other was the triangular trihedral CR with an extended bottom plate, which accounts for the effect of the ‘tip’ in the total RCS. In order to assess the performance of the two types of new CRs, a flight campaign over the National Calibration and Validation Site for High Resolution Remote Sensors was carried out. Six hexagonal trihedral CRs and two bottom-extended trihedral CRs, as well as several traditional triangular trihedral CRs, were deployed. A KOMPSAT-5 X-band SAR image was acquired for the performance analysis of the hexagonal trihedral CRs, and C-band airborne SAR images were acquired for the performance analysis of the bottom-extended trihedral CRs. The analysis results showed that the impulse response functions of both the hexagonal trihedral CRs and the bottom-extended trihedral CRs were much closer to the ideal sinc function than those of the traditional triangular trihedral CRs. The flight campaign results validated the advantages of the new types of CRs, and they may be useful in future SAR calibration missions.

Keywords: synthetic aperture radar, calibration, corner reflector, KOMPSAT-5

Procedia PDF Downloads 252
28266 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video

Authors: Nidhal K. Azawi, John M. Gauch

Abstract:

Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.

Keywords: colonoscopy classification, feature extraction, image alignment, machine learning

Procedia PDF Downloads 228
28265 Texture-Based Image Forensics from Video Frame

Authors: Li Zhou, Yanmei Fang

Abstract:

With current technology, images and videos can be obtained more easily than ever. It is easy to manipulate this digital multimedia information once obtained, and the content or source of an image or video can be easily tampered with. In this paper, we propose to identify images and video frames by a texture-based approach, namely Markov Transition Probability (MTP) features computed in the spatial, DCT, and DWT domains, respectively. In the experiment, an image and video frame database is constructed and used to train and test a Support Vector Machine (SVM) classifier. Experimental results show that the texture-based approach has good performance. In order to verify the experimental results and test the universality and robustness of the algorithm, we build a random testing dataset; the random testing results are consistent with the above experiments.
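
A hedged sketch of one spatial-domain MTP feature, computed from thresholded horizontal pixel differences; the same construction can be applied to DCT or DWT coefficients, and the threshold T and classifier settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def mtp_features(arr2d, T=4):
    """Markov transition probabilities of thresholded horizontal differences,
    a common texture feature in forensics (spatial-domain variant sketched here)."""
    d = np.clip(np.diff(arr2d.astype(int), axis=1), -T, T)
    m = np.zeros((2 * T + 1, 2 * T + 1))
    src = d[:, :-1].ravel() + T            # current difference value
    dst = d[:, 1:].ravel() + T             # next difference value
    np.add.at(m, (src, dst), 1)            # count transitions
    row_sums = m.sum(axis=1, keepdims=True)
    return (m / np.maximum(row_sums, 1)).ravel()

# X = np.vstack([mtp_features(img) for img in samples])
# clf = SVC(kernel="rbf").fit(X, labels)   # labels: camera image vs. video frame
```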

Keywords: multimedia forensics, video frame, LBP, MTP, SVM

Procedia PDF Downloads 402
28264 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis

Authors: R. Periyasamy, Deepak Joshi, Sneh Anand

Abstract:

Foot posture assessment is important for evaluating foot types that cause gait and postural defects in all age groups. Although different methods are used for the classification of foot arch type in clinical/research examinations, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for evaluation of foot type, as a clinical decision-making aid for the diagnosis of flat and normal arches, based on the Arch Index (AI) and a foot pressure distribution parameter, the Power Ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from the foot pressure intensity image using the portable PedoPowerGraph system, and the image was analyzed in the frequency domain to obtain the foot pressure distribution parameter PR. From our results, we obtain 100% classification accuracy for normal and flat feet using the linear discriminant analysis method. We observe no misclassification of foot types because foot pressure distribution data are incorporated instead of the arch index (AI) alone. We found that the mid-foot pressure distribution ratio and the arch index (AI) value correlate well with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system accurately and easily determines foot arch type from the arch index (AI) together with the mid-foot pressure distribution ratio, rather than from the physical area of contact alone. Hence, such a computational-tool-based system can help clinicians assess foot structure and cross-check their diagnosis of flat foot from the mid-foot pressure distribution.
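
For illustration, a simplified Arch Index computation from a binary contact mask, followed by LDA on an [AI, mid-foot pressure ratio] feature vector, might look as follows (the PedoPowerGraph acquisition and the frequency-domain Power Ratio are assumed to be computed upstream; all names are illustrative).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def arch_index(pressure_img, thr=0.0):
    """Arch Index from a static pressure-intensity image: contact area of the
    middle third of the foot divided by the total contact area (toes assumed
    excluded upstream).  A simplified sketch of the standard definition."""
    contact = pressure_img > thr
    rows = np.where(contact.any(axis=1))[0]
    top, bottom = rows.min(), rows.max() + 1
    third = (bottom - top) // 3
    hind = contact[top:top + third].sum()
    mid = contact[top + third:top + 2 * third].sum()
    fore = contact[top + 2 * third:bottom].sum()
    return mid / float(hind + mid + fore)

# features per subject: [AI, mid-foot power ratio]; labels: 0 = normal, 1 = flat
# clf = LinearDiscriminantAnalysis().fit(np.array(features), np.array(labels))
```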

Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis

Procedia PDF Downloads 476
28263 Development of Intelligent Construction Management System Using Web-Camera Image and 3D Object Image

Authors: Hyeon-Seung Kim, Bit-Na Cho, Tae-Woon Jeong, Soo-Young Yoon, Leen-Seok Kang

Abstract:

Recently, construction projects have become larger in size and more complicated in site work. Web-cameras are used to manage the construction sites of such large projects. They can be used to monitor the construction schedule by comparing the actual work image with the planned work schedule. In particular, because 4D CAD systems, in which the construction appearance is continually simulated as a 3D CAD object according to the work schedule, are widely applied to construction projects, a system that compares the real image of the actual work appearance from a web-camera with the simulated image of the planned work appearance from the 3D CAD object can serve as an intelligent construction schedule management system (ICON). Activities delayed relative to the planned schedule can be displayed in red in the ICON as virtual reality objects. This study developed the ICON, and it was verified in a real bridge construction project in Korea. To verify the developed system, a web-camera was installed and operated in the case project for a month. Because the angle and zoom of the web-camera can be operated over the Internet, a project manager can easily monitor the site and take corrective action.

Keywords: 4D CAD, web-camera, ICON (intelligent construction schedule management system), 3D object image

Procedia PDF Downloads 483
28262 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulation. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulations of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Texturing and lighting techniques were used so that the rendered images accurately represent their real-world counterparts. The application integrates image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.

Keywords: simulation, visual navigation, mobile robot, data visualization

Procedia PDF Downloads 227
28261 A Comparison between Different Segmentation Techniques Used in Medical Imaging

Authors: Ibtihal D. Mustafa, Mawia A. Hassan

Abstract:

Tumor segmentation from MRI images is an important task for medical imaging experts. It is particularly challenging because of the highly varied appearance of tumor tissue among different patients. MRI is an advanced medical imaging modality because it gives richer information about human soft tissue. There are different segmentation techniques for detecting brain tumors in MRI. In this paper, different segmentation methods are used to segment brain tumors, and the segmentation results are compared using correlation and the structural similarity index (SSIM) to analyze and identify the best technique to apply to MRI images.
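
For reference, the two comparison measures named above can be computed per segmentation result with a few lines of Python (scikit-image assumed available; inputs are a candidate mask and a reference mask).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare_segmentation(result, reference):
    """Score one segmentation result against a reference mask using the two
    measures named in the abstract: Pearson correlation and SSIM."""
    r = np.corrcoef(result.ravel().astype(float),
                    reference.ravel().astype(float))[0, 1]
    dr = float(reference.max() - reference.min())
    s = ssim(result.astype(float), reference.astype(float),
             data_range=dr if dr > 0 else 1.0)
    return r, s
```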

Keywords: MRI, segmentation, correlation, structural similarity

Procedia PDF Downloads 380
28260 Lab Bench for Synthetic Aperture Radar Imaging System

Authors: Karthiyayini Nagarajan, P. V. Ramakrishna

Abstract:

Radar imaging techniques provide extensive applications in the field of remote sensing, most notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward effective and realizable signal generation and processing for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit, and a display screen. The real radio channel is replaced by a mathematical model based on the optical image, which is used to calculate a reflected signal model in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time, to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
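
As a hedged sketch of the matched-filtering (range compression) step at the heart of the signal processing unit, the following correlates raw echoes with a linear-FM chirp reference in the frequency domain; the chirp parameters are illustrative and not taken from the paper.

```python
import numpy as np

def range_compress(raw_echoes, fs=100e6, pulse_len=10e-6, bandwidth=50e6):
    """Matched-filter range compression of raw SAR echoes against a linear-FM
    chirp reference (parameter values are illustrative, not from the paper).
    raw_echoes: complex array with fast-time (range) samples along the last axis."""
    t = np.arange(0, pulse_len, 1 / fs)
    k = bandwidth / pulse_len                           # chirp rate
    ref = np.exp(1j * np.pi * k * (t - pulse_len / 2) ** 2)
    n = raw_echoes.shape[-1]
    ref_f = np.conj(np.fft.fft(ref, n))                 # matched filter in frequency
    return np.fft.ifft(np.fft.fft(raw_echoes, n, axis=-1) * ref_f, axis=-1)
```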

Keywords: synthetic aperture radar, radio reflection model, lab bench, imaging engineering

Procedia PDF Downloads 460
28259 Review of Ultrasound Image Processing Techniques for Speckle Noise Reduction

Authors: Kwazikwenkosi Sikhakhane, Suvendi Rimer, Mpho Gololo, Khmaies Oahada, Adnan Abu-Mahfouz

Abstract:

Medical ultrasound imaging is a crucial diagnostic technique due to its affordability and non-invasiveness compared to other imaging methods. However, the presence of speckle noise, which is a form of multiplicative noise, poses a significant obstacle to obtaining clear and accurate images in ultrasound imaging. Speckle noise reduces image quality by decreasing contrast, resolution, and signal-to-noise ratio (SNR). This makes it difficult for medical professionals to interpret ultrasound images accurately. To address this issue, various techniques have been developed to reduce speckle noise in ultrasound images, which improves image quality. This paper aims to review some of these techniques, highlighting the advantages and disadvantages of each algorithm and identifying the scenarios in which they work most effectively.
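
As an example of one classical despeckling technique that reviews of this kind typically cover, a Lee filter (local linear minimum-mean-square-error estimate under multiplicative noise) can be sketched as follows; the window size and the global noise-variance estimate are illustrative simplifications.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7):
    """Classical Lee filter: local linear MMSE estimate under multiplicative
    speckle, one of the standard techniques such reviews compare."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    var = sq_mean - mean * mean
    noise_var = var.mean()                      # crude global speckle-variance estimate
    gain = var / np.maximum(var + noise_var, 1e-12)
    return mean + gain * (img - mean)
```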

Keywords: image processing, noise, speckle, ultrasound

Procedia PDF Downloads 70
28258 Design and Implementation of a Lab Bench for Synthetic Aperture Radar Imaging System

Authors: Karthiyayini Nagarajan, P. V. RamaKrishna

Abstract:

Radar imaging techniques provide extensive applications in the field of remote sensing, most notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward effective and realizable signal generation and processing for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit, and a display screen. The real radio channel is replaced by a mathematical model based on the optical image, which is used to calculate a reflected signal model in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time, to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.

Keywords: synthetic aperture radar, radio reflection model, lab bench

Procedia PDF Downloads 439
28257 Large-Capacity Image Information Reduction Based on Single-Cue Saliency Map for Retinal Prosthesis System

Authors: Yili Chen, Xiaokun Liang, Zhicheng Zhang, Yaoqin Xie

Abstract:

In an effort to restore visual perception in retinal diseases, an electronic retinal prosthesis with thousands of electrodes has been developed. The image processing strategy of a retinal prosthesis system converts the original images from the camera into a stimulus pattern that can be interpreted by the brain. In practice, the original images have a much higher resolution (256x256) than the stimulus pattern (e.g., 25x25), which poses the technical image processing challenge of large-capacity image information reduction. In this paper, we focus on developing an efficient stimulus pattern extraction algorithm that uses a single-cue saliency map to extract salient objects in the image with an optimal trimming threshold. Experimental results showed that the proposed stimulus pattern extraction algorithm performs quite well for different scenes in terms of the resulting stimulus pattern. In the algorithm performance experiment, our proposed SCSPE algorithm achieved almost five times the score of Boyle’s algorithm. Through experiments, we suggest that when there are salient objects in the scene (such as when a blind user meets or talks with people), the trimming threshold should be set around 0.4max; in other situations, the trimming threshold can be set between 0.2max and 0.4max to give a satisfactory stimulus pattern.
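
A hedged sketch of a single-cue saliency pipeline with a trimming threshold expressed as a fraction of the map maximum, mirroring the 0.2max-0.4max rule above; the spectral-residual cue and block-averaging downsampling are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(gray):
    """Single-cue saliency map via the spectral-residual method (one common
    choice of single cue; the paper's exact cue may differ)."""
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - uniform_filter(log_amp, 3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    return gaussian_filter(sal, 3)

def stimulus_pattern(gray, out_shape=(25, 25), frac=0.3):
    """Trim the saliency map at frac * max (0.2-0.4 suggested in the abstract),
    then downsample to the electrode-array resolution by block averaging."""
    sal = spectral_residual_saliency(gray)
    sal = np.where(sal >= frac * sal.max(), sal, 0.0)
    h, w = gray.shape
    bh, bw = h // out_shape[0], w // out_shape[1]
    trimmed = sal[:bh * out_shape[0], :bw * out_shape[1]]
    return trimmed.reshape(out_shape[0], bh, out_shape[1], bw).mean(axis=(1, 3))
```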

Keywords: retinal prosthesis, image processing, region of interest, saliency map, trimming threshold selection

Procedia PDF Downloads 220
28256 The Impact of Upward Social Media Comparisons on Body Image and the Role of Physical Appearance Perfectionism and Cognitive Coping

Authors: Lauren Currell, Gemma Hurst

Abstract:

Introduction: The present study experimentally investigated the impact of attractive Instagram images on females' body image. It also examined whether physical appearance perfectionism and cognitive coping predicted body image following upward comparisons to idealised bodies on Instagram. Methods: One hundred and fifty-eight females (mean age 24.35 years) were randomly assigned to an experimental condition (where they compared their bodies to those of Instagram models) or a control condition (where they critiqued landscape paintings). All participants completed measures of physical appearance perfectionism and cognitive coping, and pre- and post-measures of body image. Results: Comparing one's body to idealised bodies on Instagram resulted in increased appearance and weight dissatisfaction and decreased confidence, compared to the control condition. Physical appearance perfectionism and cognitive coping both predicted body image outcomes in the experimental condition. Discussion: Clinical implications, such as the prevention and treatment of body dissatisfaction, are discussed. Strengths and limitations of the current study are also noted, and suggestions for future research are provided.

Keywords: perfectionism, cognitive coping, body image, social media

Procedia PDF Downloads 64
28255 Artificial Intelligence Based Analysis of Magnetic Resonance Signals for the Diagnosis of Tissue Abnormalities

Authors: Kapila Warnakulasuriya, Walimuni Janaka Mendis

Abstract:

In this study, an artificial intelligence-based approach is developed to diagnose abnormal tissues in human or animal bodies by analyzing magnetic resonance signals. As opposed to the conventional method of generating an image from the magnetic resonance signals, which is then evaluated by a radiologist for the diagnosis of abnormalities, in the discussed approach the magnetic resonance signals are analyzed by an artificial intelligence algorithm without having to generate or analyze an image. The AI-based program compares the magnetic resonance signals with millions of possible magnetic resonance waveforms that can be generated by various types of normal tissue. Waveforms generated by abnormal tissues are then identified, and images of the abnormal tissues are generated, with their possible locations in the body, for further diagnostic tests.

Keywords: magnetic resonance, artificial intelligence, magnetic waveform analysis, abnormal tissues

Procedia PDF Downloads 59
28254 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision systems are intensely required in industrial environments for autonomous purposes, image recognition techniques have become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for an automatic object gripping task. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robot controller. Finally, the six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with a confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
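
The detection network itself is standard YOLOv2; as a hedged sketch of the contour-based orientation step described above, the crop inside a (hypothetical) detected bounding box can be thresholded and fitted with a minimum-area rectangle using OpenCV.

```python
import cv2
import numpy as np

def grasp_pose_from_roi(gray_roi):
    """Given the 8-bit grayscale crop inside a detector's bounding box, find the
    dominant contour and return its center and orientation angle, as described
    for the gripping step (the detector itself is omitted; the ROI is assumed given)."""
    _, mask = cv2.threshold(gray_roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # center, size, rotation angle
    return (cx, cy), angle
```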

Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 215
28253 A Calibration Method for Temperature Distribution Measurement of Thermochromic Liquid Crystal Based on Mathematical Morphology of Hue Image

Authors: Risti Suryantari, Flaviana

Abstract:

The aim of this research is to design a calibration method for Thermochromic Liquid Crystal (TLC) temperature distribution measurement based on the mathematical morphology of hue images. A glass of water is placed on the surface of the sample TLC R25C5W at a given temperature, and a scanner is used for image acquisition. The true-color images in RGB format are converted to HSV (hue, saturation, value), and the hue channel is taken without the saturation and value channels. The hue images are then processed using mathematical morphology in MATLAB 2013a to obtain better images. There are differences in the final images after processing at each temperature, based on visual observation and statistical values: the maximum and mean values increase with rising temperature. These could serve as parameters for identifying the temperature of human body surfaces, such as the hand or foot.
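
Although the paper uses MATLAB, the hue-extraction and morphological-cleaning steps can be sketched equivalently in Python with OpenCV; the structuring element and statistics below are illustrative choices.

```python
import cv2

def hue_morphology(bgr_img, kernel_size=5):
    """Take the hue channel of the scanned TLC image and clean it with a
    morphological opening followed by closing (a typical smoothing choice;
    the paper's exact structuring element is not specified), then return the
    maximum and mean hue values used as temperature indicators."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(hue, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return cleaned, float(cleaned.max()), float(cleaned.mean())
```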

Keywords: thermochromic liquid crystal, TLC, mathematical morphology, hue image

Procedia PDF Downloads 450
28252 A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images

Authors: Revoti Prasad Bora, Nikita Katyal, Saurabh Yadav

Abstract:

Pre-processing plays an important role in various image processing applications. Most of the time, due to the similar nature of the images, a particular pre-processing step or set of steps is sufficient to produce the desired results. However, in the education domain there is a wide variety of images in various respects, such as images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence, a single pre-processing step or a fixed set of steps may not yield good results. Therefore, a deep learning based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier that detects hidden patterns in the images and predicts the relevant pre-processing technique needed for each image. This approach was tested on an image similarity matching problem, but it can be adapted to other use cases as well. Experimental results showed a significant improvement in average similarity ranking with the proposed method as compared to static pre-processing techniques.
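
A hedged Keras sketch of such a pre-processing selector: a small CNN that maps an input image to one of several pre-processing labels (the architecture, input size, and class set are illustrative assumptions, not the authors' model).

```python
import tensorflow as tf

NUM_PREPROC_CLASSES = 4   # e.g. binarize / deskew / denoise / none (illustrative)

def build_preproc_selector(input_shape=(128, 128, 1)):
    """Small CNN that predicts which pre-processing technique to apply to an
    image, standing in for the classifier described in the abstract."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_PREPROC_CLASSES, activation="softmax"),
    ])

# model = build_preproc_selector()
# model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
```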

Keywords: deep-learning, classification, pre-processing, computer vision, image processing, educational data mining

Procedia PDF Downloads 120