Search results for: image processing of electrical impedance tomography
7981 An Analysis of the Relations between Aggregates’ Shape and Mechanical Properties throughout the Railway Ballast Service Life
Authors: Daianne Fernandes Diogenes
Abstract:
Railway ballast aggregates’ shape properties and size distribution can be directly affected by several factors, such as traffic, fouling, and maintenance processes, which cause breakage and wearing, leading to the accumulation of fine particles through the ballast layer. This research aims to analyze the influence of traffic, the tamping process, and sleepers’ stiffness on aggregates' shape and mechanical properties, using traditional and digital image processing (DIP) techniques and cyclic tests, such as resilient modulus (RM) and permanent deformation (PD). Aggregates were collected in different phases of the railway service life: (i) right after the crushing process; (ii) after construction, for the aggregates positioned below the sleepers; and (iii) after 5 years of operation. An increase in the percentage of cubic particles was observed for materials (ii) and (iii), providing better interlocking, increasing stiffness, and reducing axial deformation after 5 years of service, when compared to the initial conditions.
Keywords: digital image processing, mechanical behavior, railway ballast, shape properties
Procedia PDF Downloads 122
7980 Secure E-Pay System Using Steganography and Visual Cryptography
Authors: K. Suganya Devi, P. Srinivasan, M. P. Vaishnave, G. Arutperumjothi
Abstract:
Today’s internet world is highly prone to various online attacks, of which the most harmful is phishing. The attackers host fake websites that look very similar to the genuine ones. We propose an image-based authentication using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true color (RGB) images and uses the Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside the cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares. The original image can be identified only when both qualified shares are simultaneously available; an individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to detect by RS steganalysis.
Keywords: image security, random LSB, steganography, visual cryptography
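As an illustration of the share-decomposition idea described above, the following Python sketch implements a simple two-share XOR scheme: either share alone is indistinguishable from noise, and the original image is recovered only when both shares are combined. The abstract does not specify the authors' exact share-generation algorithm, so this is a generic stand-in, not their method.

```python
import numpy as np

def make_shares(secret: np.ndarray, rng=None):
    """Decompose an 8-bit image into two shares; either share alone is random noise."""
    rng = rng or np.random.default_rng(0)
    share1 = rng.integers(0, 256, size=secret.shape, dtype=np.uint8)  # pure noise
    share2 = np.bitwise_xor(secret, share1)                           # noise XOR secret
    return share1, share2

def combine_shares(share1: np.ndarray, share2: np.ndarray) -> np.ndarray:
    """Only when both qualified shares are available is the original recovered."""
    return np.bitwise_xor(share1, share2)

secret = (np.arange(64, dtype=np.uint8).reshape(8, 8) * 4)  # toy 8x8 "image"
s1, s2 = make_shares(secret)
assert np.array_equal(combine_shares(s1, s2), secret)
```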
Procedia PDF Downloads 330
7979 Noise Detection Algorithm for Skin Disease Image Identification
Authors: Minakshi Mainaji Sonawane, Bharti W. Gawali, Sudhir Mendhekar, Ramesh R. Manza
Abstract:
People's lives and health are severely impacted by skin diseases. This study proposes an effective method for identifying different forms of skin disease. Image denoising is a technique for improving image quality after it has been degraded by noise. The proposed technique is based on the wavelet transform, which is well suited for analyzing the image due to its ability to split the image into sub-bands, used here to estimate the noise ratio in the noisy image. According to the experimental results, the proposed method yields the best values of MSE, PSNR, and entropy for the denoised images. Also, by using different types of wavelet transform filters, the proposed approach obtains the best results of 23.13, 20.08, and 50.7 for the image denoising process.
Keywords: MSE, PSNR, entropy, Gaussian filter, DWT
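A minimal sketch of sub-band wavelet denoising together with the MSE and PSNR metrics reported above, assuming the PyWavelets library; the wavelet ('db4'), decomposition level, and threshold value are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt

def denoise_dwt(img: np.ndarray, wavelet="db4", level=2, thresh=20.0) -> np.ndarray:
    """Split the image into wavelet sub-bands and soft-threshold the detail coefficients."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl) for lvl in details]
    return pywt.waverec2([approx] + details, wavelet)

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

noisy = np.clip(np.full((64, 64), 128.0) + np.random.normal(0, 25, (64, 64)), 0, 255)
clean = denoise_dwt(noisy)[:64, :64]
print(round(mse(noisy, clean), 2), round(psnr(noisy, clean), 2))
```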
Procedia PDF Downloads 215
7978 A Comparison between Underwater Image Enhancement Techniques
Authors: Ouafa Benaida, Abdelhamid Loukil, Adda Ali Pacha
Abstract:
In recent years, the growing interest of scientists in the processing and analysis of underwater images and videos has been strengthened by the emergence of new underwater exploration techniques, such as autonomous underwater vehicles and underwater image sensors, which facilitate the exploration of underwater mineral resources as well as the search for new species of aquatic life by biologists. Indeed, underwater images and videos have several defects and must be preprocessed before their analysis. Underwater landscapes are usually darkened due to the interaction of light with the marine environment: light is absorbed as it travels through deep waters, depending on its wavelength. Additionally, light does not follow a linear direction but is scattered due to its interaction with microparticles in water, resulting in low contrast, low brightness, color distortion, and restricted visibility. Improvement of the underwater image is therefore more than necessary in order to facilitate its analysis. The research presented in this paper aims to implement and evaluate a set of classical techniques used to improve the quality of underwater images in several color representation spaces. These methods have the particularity of being simple to implement and do not require prior knowledge of the physical model at the origin of the degradation.
Keywords: underwater image enhancement, histogram normalization, histogram equalization, contrast limited adaptive histogram equalization, single-scale retinex
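Of the techniques listed in the keywords, contrast limited adaptive histogram equalization (CLAHE) is the easiest to demonstrate. A hedged OpenCV sketch applying CLAHE to the lightness channel of the LAB space (one of several possible color representation spaces), so the chrominance channels are left untouched; the clip limit and tile size are illustrative defaults.

```python
import cv2

def enhance_underwater(bgr):
    """Apply CLAHE on the lightness channel only, so colors are not further distorted."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# img = cv2.imread("underwater.png")
# out = enhance_underwater(img)
```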
Procedia PDF Downloads 89
7977 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric
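The core change relative to the standard algorithm is the boundary term. Below is a sketch of a patch-based weight between two neighboring pixels, assuming a Gaussian penalty on the mean squared patch difference (the abstract does not give the patch size or penalty function, so both are assumptions); building the full graph and running min-cut/max-flow would additionally require a library such as PyMaxflow.

```python
import numpy as np

def patch_weight(img, p, q, radius=2, sigma=10.0):
    """Boundary penalty between neighbors p and q from patch (not single-pixel) similarity."""
    def patch(c):
        y, x = c
        return img[max(y - radius, 0): y + radius + 1,
                   max(x - radius, 0): x + radius + 1].astype(float)
    a, b = patch(p), patch(q)
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    d = np.mean((a[:h, :w] - b[:h, :w]) ** 2)   # patch-based distance
    return np.exp(-d / (2.0 * sigma ** 2))      # high weight: likely the same region

img = np.random.randint(0, 255, (32, 32)).astype(np.uint8)
print(patch_weight(img, (10, 10), (10, 11)))
```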
Procedia PDF Downloads 169
7976 Real-Time Compressive Strength Monitoring for NPP Concrete Construction Using an Embedded Piezoelectric Self-Sensing Technique
Authors: Junkyeong Kim, Seunghee Park, Ju-Won Kim, Myung-Sug Cho
Abstract:
Recently, demand for the construction of Nuclear Power Plants (NPP) using high strength concrete (HSC) has increased. However, HSC might be susceptible to brittle fracture if the curing process is inadequate. To prevent unexpected collapse during and after the construction of HSC structures, it is essential to confirm the strength development of HSC during the curing process. However, several traditional strength-measuring methods are neither effective nor practical. In this study, a novel method to estimate the strength development of HSC based on electromechanical impedance (EMI) measurements using an embedded piezoelectric sensor is proposed. The EMI of an NPP concrete specimen was tracked to monitor the strength development. In addition, the cross-correlation coefficient was applied in sequence to examine the trend of the impedance variations more quantitatively. The results confirmed that the proposed technique can be applied successfully to monitor the strength development during the curing process of HSC structures.
Keywords: concrete curing, embedded piezoelectric sensor, high strength concrete, nuclear power plant, self-sensing impedance
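A sketch of the cross-correlation step on synthetic impedance signatures: as the concrete gains strength, the EMI signature shifts, and the correlation coefficient against the baseline drifts away from 1. The frequency band and signature shapes below are invented for illustration and are not the study's measurements.

```python
import numpy as np

def emi_correlation(baseline: np.ndarray, current: np.ndarray) -> float:
    """Cross-correlation coefficient between two impedance signatures."""
    return float(np.corrcoef(baseline, current)[0, 1])

freqs = np.linspace(10e3, 400e3, 200)                        # hypothetical sweep band
day0 = np.sin(freqs / 3.0e4) + 0.05 * np.random.randn(200)   # baseline signature
day7 = np.sin(freqs / 3.2e4) + 0.05 * np.random.randn(200)   # shifted as concrete cures
print(emi_correlation(day0, day7))                           # drifts away from 1.0
```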
Procedia PDF Downloads 516
7975 Red Green Blue Image Encryption Based on Paillier Cryptographic System
Authors: Mamadou I. Wade, Henry C. Ogworonjo, Madiha Gul, Mandoye Ndoye, Mohamed Chouikha, Wayne Patterson
Abstract:
In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (Red Green Blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each channel's pixel intensity values. Next, the encrypted image is combined and compressed, if necessary, before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses on the recovered images in order to verify their robustness against security attacks. The results show that the proposed image encryption scheme produces highly secure encrypted images.
Keywords: image encryption, Paillier cryptographic system, RGB image encryption, Paillier
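Since the Paillier cryptosystem is public, the encryption of a single pixel intensity can be sketched directly. The toy primes below are far too small for real security and only demonstrate the arithmetic; a real deployment would use large random primes.

```python
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def paillier_keygen(p=293, q=433):
    """Toy key sizes for illustration; real use needs large random primes."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                                          # standard simplification
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)     # inverse of L(g^lam mod n^2)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n      # L(c^lam mod n^2) * mu mod n

pub, priv = paillier_keygen()
pixel = 173                                            # one channel intensity value
assert decrypt(pub, priv, encrypt(pub, pixel)) == pixel
```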
Procedia PDF Downloads 238
7974 Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network
Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao
Abstract:
The lack of data is a pressing problem in medical image analysis using convolutional neural networks (CNN). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as the negative, histogram equalization, power law, sharpening, averaging, and Gaussian blurring help to generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, a collection of knee images from openly available online sources. The CNN called enhanced single shot detector (SSD) is utilized for the detection and localization of the knee joint from a given X-ray image. It is an enhanced version of the well-known SSD network, modified in such a way that it reduces the number of prediction boxes at the output side. It consists of a classification network (VGGNet) and an auxiliary detection network. The performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. It is also seen that the localization boundary is comparatively more refined and closer to the ground truth with spatial augmentation, giving better detection and localization of knee joints.
Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations
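A hedged sketch of the listed spatial transforms as an augmentation generator, using OpenCV and NumPy; the kernel sizes and gamma value are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def augment(gray: np.ndarray):
    """Generate extra training samples from one image via simple spatial transforms."""
    negative = 255 - gray
    equalized = cv2.equalizeHist(gray)                     # histogram equalization
    gamma = np.uint8(255 * (gray / 255.0) ** 0.5)          # power-law (gamma) transform
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)   # Gaussian blurring
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    sharpened = cv2.filter2D(gray, -1, kernel)             # sharpening
    averaged = cv2.blur(gray, (5, 5))                      # averaging
    return [negative, equalized, gamma, blurred, sharpened, averaged]

samples = augment(np.random.randint(0, 255, (128, 128), dtype=np.uint8))
print(len(samples), "augmented variants from one image")
```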
Procedia PDF Downloads 154
7973 An Object-Based Image Resizing Approach
Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai
Abstract:
Common methods for resizing images include scaling and cropping. However, these two approaches have some quality problems for reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely, an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
Keywords: energy map, visual saliency, gradient map, seam carving
Procedia PDF Downloads 476
7972 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration
Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef
Abstract:
Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate but must also respond quickly to a moving-picture input. The high performance demanded by video in motion games or distance control constrains the program's overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which yields a one-dimensional hand representation, to enumerate the number of fingers. For both methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.
Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab
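A sketch of the radial distance method as described: the hand contour is reduced to a one-dimensional signature of distances from the palm centroid, and fingertips show up as peaks. The peak-height and spacing thresholds below are illustrative, and the sketch uses SciPy's peak finder rather than whatever Matlab routine the authors used.

```python
import numpy as np
import cv2
from scipy.signal import find_peaks

def count_fingers(mask: np.ndarray) -> int:
    """Radial-distance method: distance from palm centroid to contour; peaks = fingertips."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)            # largest blob = hand
    m = cv2.moments(cnt)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # palm centroid
    pts = cnt.squeeze()
    radial = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)   # 1-D hand representation
    radial = radial / radial.max()
    peaks, _ = find_peaks(radial, height=0.7, distance=len(radial) // 20)
    return len(peaks)

# mask = cleaned, auto-cropped binary hand image from the pre-processing stage
# print(count_fingers(mask))
```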
Procedia PDF Downloads 382
7971 Heuristic Spatial-Spectral Hyperspectral Image Segmentation Using Bands Quartile Box Plot Profiles
Authors: Mohamed A. Almoghalis, Osman M. Hegazy, Ibrahim F. Imam, Ali H. Elbastawessy
Abstract:
This paper presents a new hyperspectral image segmentation scheme that accounts for both spatial and spectral contexts. The scheme uses the 8-pixel spatial pattern to build a weight structure that holds the number of outlier bands for each pixel among its neighborhood windows in different directions. The number of outlier bands for a pixel is obtained using the bands' quartile box plot profiles among spatial 8-pixel pattern windows. The quartile box plot weight structure represents the spatial-spectral context in the image. Instead of starting the segmentation process from single pixels, the proposed methodology starts from pixel groups that have proved to share the same spectral features with respect to their spatial context. As a result, the segmentation scheme starts with jigsaw pieces that build a mosaic image. The following step builds a model for each jigsaw piece in the mosaic image. Each jigsaw piece is then merged with another jigsaw piece using KNN applied to their bands' quartile box plot profiles. The scheme iterates until the required number of segments is reached. Experiments use two data sets obtained from the Earth Observing-1 (EO-1) sensor for Egypt and France. Qualitative analysis of the initial results showed encouraging agreement with the ground truth. Quantitative analysis of the results will be included in the final paper.
Keywords: hyperspectral image segmentation, image processing, remote sensing, box plot
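A sketch of the outlier-band count that underlies the weight structure, assuming the standard 1.5 × IQR box-plot whisker rule over a pixel's 8-neighborhood; the abstract does not spell out the exact windowing or whisker rule, so this is one plausible interpretation.

```python
import numpy as np

def outlier_band_count(cube: np.ndarray, y: int, x: int, win: int = 1) -> int:
    """Count how many of a pixel's bands fall outside the box-plot whiskers
    built from its spatial neighborhood (the 8-pixel pattern for win=1)."""
    bands = cube.shape[2]
    neigh = cube[max(y - win, 0): y + win + 1, max(x - win, 0): x + win + 1, :]
    neigh = neigh.reshape(-1, bands)
    q1 = np.percentile(neigh, 25, axis=0)
    q3 = np.percentile(neigh, 75, axis=0)
    iqr = q3 - q1
    pixel = cube[y, x, :]
    outliers = (pixel < q1 - 1.5 * iqr) | (pixel > q3 + 1.5 * iqr)
    return int(outliers.sum())   # low count: pixel spectrally agrees with its neighbors

cube = np.random.rand(20, 20, 50)   # toy hyperspectral cube: rows x cols x bands
print(outlier_band_count(cube, 10, 10))
```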
Procedia PDF Downloads 605
7970 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
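The stated intensity pre-processing (HU clipping to (−1000, 400), normalization, zero-centering) can be sketched in a few lines of NumPy. Spatial resampling to 128 × 128 × 60 at 1 mm isotropic resolution is omitted here, since it depends on each scan's geometry; the axis order below is an assumption.

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
    """Clip to the stated HU window, normalize to [0, 1], then zero-center."""
    vol = np.clip(volume_hu.astype(np.float32), -1000.0, 400.0)
    vol = (vol + 1000.0) / 1400.0    # min-max normalization to [0, 1]
    return vol - vol.mean()          # zero-centering

volume = np.random.randint(-1200, 600, size=(60, 128, 128)).astype(np.float32)
print(preprocess_ct(volume).shape)   # toy stand-in for a 128 x 128 x 60 scan
```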
Procedia PDF Downloads 88
7969 Mobile Microscope for the Detection of Pathogenic Cells Using Image Processing
Authors: P. S. Surya Meghana, K. Lingeshwaran, C. Kannan, V. Raghavendran, C. Priya
Abstract:
One of the most basic and powerful tools in all of science and medicine is the light microscope, the fundamental device for laboratory as well as research purposes. With improving technology, the need for portable, economical, and user-friendly instruments is in high demand. The conventional microscope fails to live up to this emerging trend. Also, adequate access to healthcare is not widely available, especially in developing countries. The most basic step towards curing a malady is the diagnosis of the disease itself. The main aim of this paper is to diagnose malaria with the most common device, the cell phone, which has proved to be an immediate solution for many modern-day needs as the development of wireless infrastructure allows computing and communication on the move. This has opened up the opportunity to develop novel imaging, sensing, and diagnostic platforms using mobile phones as an underlying platform to address the global demand for accurate, sensitive, cost-effective, and field-portable measurement devices for use in remote and resource-limited settings around the world.
Keywords: cellular, hand-held, health care, image processing, malarial parasites, microscope
Procedia PDF Downloads 267
7968 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model
Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka
Abstract:
The process of spray-cloud formation and the flow kinematics produced by breaking-wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different wave characteristics were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the process of spray-cloud formation. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for post-processing. Various pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free-surface profile at different locations on the model and wave tank, respectively. Droplet sizes and velocities were measured using BIV and IP techniques, tracing bubbles and droplets and correlating the texture in the images to obtain their velocities and sizes. The impact pressure and droplet size distributions were compared to several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is demonstrated. Due to the highly transient process of spray formation, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and different Reynolds numbers, was calculated based on the ensemble average method. From the experimental results, the slant model produces less spray in comparison with the vertical model, and the droplets generated from the wave impact with the slant model have a lower velocity compared with the vertical model.
Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing
Procedia PDF Downloads 300
7967 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models
Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel
Abstract:
In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction by image aligning methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse lymphatic endothelial cell spheroids behaviour. The study of cell spatial distribution in spheroid models enables to detect interactions between cells and to identify invasion hierarchy and guidance patterns. Global measurements such as volume, length, and density of lymphatic vessels are measured in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structure complexity. Those properties combined with vessel spatial distribution are evaluated in order to determine lymphangiogenesis extent. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions enables to identify lymphangiogenic agents and to better comprehend their roles in the lymphangiogenesis process. The proposed methodology is validated by its application on the three presented models.
Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids
Procedia PDF Downloads 379
7966 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap-mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impacts the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch, and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction) are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also addresses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water, and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
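One standard way to carry out the Doppler centroid estimation stage mentioned above is the pulse-pair (average phase increment) estimator; whether the authors use this exact estimator is not stated, so the sketch below is illustrative. The PRF and centroid values are invented, and the azimuth signal is synthetic.

```python
import numpy as np

def doppler_centroid(azimuth: np.ndarray, prf: float) -> float:
    """Pulse-pair estimate: mean phase increment between consecutive azimuth samples."""
    acc = np.sum(np.conj(azimuth[:-1]) * azimuth[1:])   # lag-1 autocorrelation
    return np.angle(acc) * prf / (2.0 * np.pi)          # centroid in Hz

prf = 1500.0                                            # hypothetical PRF
true_fdc = 120.0                                        # simulated centroid shift
n = np.arange(4096)
signal = np.exp(2j * np.pi * true_fdc * n / prf) + 0.3 * (
    np.random.randn(4096) + 1j * np.random.randn(4096))
print(doppler_centroid(signal, prf))                    # close to 120 Hz
```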
Procedia PDF Downloads 218
7965 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences
Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng
Abstract:
Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during the interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events. In summary, we have explored the idea of a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
Keywords: motion detection, motion tracking, trajectory analysis, video surveillance
Procedia PDF Downloads 548
7964 Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection
Authors: Nadia Ben Youssef, Aicha Bouzid
Abstract:
Gradient estimation is one of the most fundamental tasks in the field of image processing in general, and for color images in particular, since research on color image gradients remains limited. The most widely used gradient method is Di Zenzo's gradient operator, which is based on a measure of the squared local contrast of color images. The gradient mechanism presented in this paper is based on the principle of Di Zenzo's approach using a quaternion representation. This edge detector is compared to a marginal approach based on the multiscale product of the wavelet transform and to another vector approach based on quaternion convolution and the vector gradient. The experimental results indicate that the proposed color gradient operator outperforms the marginal approach; however, it is less efficient than the second vector approach.
Keywords: gradient, edge detection, color image, quaternion
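Di Zenzo's operator itself is standard and can be sketched directly: per-channel derivatives are accumulated into a 2 × 2 structure tensor, and the squared local contrast is its largest eigenvalue. A NumPy/OpenCV sketch follows; the paper's quaternion reformulation is not reproduced here.

```python
import numpy as np
import cv2

def di_zenzo_gradient(bgr: np.ndarray) -> np.ndarray:
    """Di Zenzo structure-tensor gradient: the squared local contrast is the
    largest eigenvalue of [[gxx, gxy], [gxy, gyy]] summed over channels."""
    img = bgr.astype(np.float64)
    gxx = gyy = gxy = 0.0
    for c in range(3):
        dx = cv2.Sobel(img[:, :, c], cv2.CV_64F, 1, 0, ksize=3)
        dy = cv2.Sobel(img[:, :, c], cv2.CV_64F, 0, 1, ksize=3)
        gxx = gxx + dx * dx
        gyy = gyy + dy * dy
        gxy = gxy + dx * dy
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam)   # per-pixel color gradient magnitude

# edges = di_zenzo_gradient(cv2.imread("color.png"))
```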
Procedia PDF Downloads 234
7963 A Polyimide Based Split-Ring Neural Interface Electrode for Neural Signal Recording
Authors: Ning Xue, Srinivas Merugu, Ignacio Delgado Martinez, Tao Sun, John Tsang, Shih-Cheng Yen
Abstract:
We have developed a polyimide-based neural interface electrode to record nerve signals from the sciatic nerve of a rat. The neural interface electrode has a split-ring shape, with four protruding gold electrodes for recording and two reference gold electrodes around the split-ring. The split-ring electrode can be opened up to encircle the sciatic nerve. The four electrodes can be bent to sit on top of the nerve and hold the device in position, while the split-ring frame remains flat. While traditional cuff electrodes can only fit certain sizes of nerve, the developed device can fit a variety of rat sciatic nerve dimensions from 0.6 mm to 1.0 mm and adapt to chronic changes in the nerve, as the electrode tips are bendable. Electrochemical impedance spectroscopy measurements were conducted. The gold electrode impedance is on the order of 10 kΩ, showing excellent charge injection capacity for recording neural signals.
Keywords: impedance, neural interface, split-ring electrode, neural signal recording
Procedia PDF Downloads 376
7962 Experimental Characterization of Composite Material with Non-Contacting Methods
Authors: Nikolaos Papadakis, Constantinos Condaxakis, Konstantinos Savvakis
Abstract:
The aim of this paper is to determine the elastic properties (elastic modulus and Poisson ratio) of a composite material using non-contacting imaging methods. The significantly reduced cost of digital cameras has made reliable, low-cost strain measurement possible. The open-source platform Ncorr, which implements digital image correlation (DIC), is used in this paper. Measuring strain with digital image correlation involves random speckle preparation on the surface of the gauge area, image acquisition, and post-processing of the image correlation to obtain the displacement and strain fields on the surface under study. Technical issues relating to the quality of the obtained results are discussed. [0]₈ glass/epoxy fabric composite specimens were prepared and tested at different orientations: 0°, 30°, 45°, 60°, and 90°. Each test was recorded with the camera at a constant frame rate under constant lighting conditions. The recorded images were processed using the image processing software. The parameters of the tests are reported. The strain map obtained through strain measurement with Ncorr is validated by a) comparing the elastic properties with the values expected from classical laminate theory and b) finite element analysis.
Keywords: composites, Ncorr, strain map, videoextensometry
Procedia PDF Downloads 144
7961 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening
Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan
Abstract:
Background: New reliable and robust techniques to assess biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify key physicochemical parameters of NMs, which are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300k (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 hrs. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the Trypan Blue Dye Exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells, followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderate to non-toxic at all used concentrations under these conditions. A higher toxic effect was seen with the smaller-sized TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interference between the IFC and the used NMs was seen. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions. Conclusion: Results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, which proved to be less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling towards automated high-throughput screening and future NM toxicity assessment. This work was supported by the EC FP7 NANoREG (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway, project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.
Keywords: cytotoxicity, high-throughput, impedance, nanomaterials
Procedia PDF Downloads 362
7960 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy
Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen
Abstract:
Most diabetes patients tend to suffer from retinal complications of the disease. Therefore, early detection and early treatment are important. In clinical examinations, color fundus imaging is the most convenient and available examination method. According to the exudates that appear in the retinal image, the status of the retina can be confirmed. However, the routine screening of diabetic retinopathy by color fundus images is a time-consuming task for physicians. This study thus proposes computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing vessels and the optic disc from the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. As a result, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure could become more efficient.
Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing
Procedia PDF Downloads 271
7959 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover the details of a scene from a foggy image. For enhancement, the proposed method focuses on two main categories: (i) image enhancement based on adaptive-contrast histogram equalization, and (ii) an image edge-strengthened gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time to produce the output image. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase the accuracy, an interpolation method is used in the output reconstruction. Promising retrieval performance is achieved, especially in particular examples.
Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
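A minimal sketch of weight-map-driven fusion in the spirit described above: two enhanced versions of the input are blended per pixel according to normalized local-contrast weights. The choice of inputs (CLAHE and a gamma curve) and the contrast weight are assumptions; a multi-scale variant would blend within a Laplacian pyramid instead.

```python
import cv2
import numpy as np

def contrast_weight(gray):
    """Per-pixel weight from local contrast (absolute Laplacian response)."""
    return np.abs(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)) + 1e-6

def fuse_dehaze(gray: np.ndarray) -> np.ndarray:
    clahe = cv2.createCLAHE(clipLimit=2.0).apply(gray)    # input 1: contrast-boosted
    gamma = np.uint8(255 * (gray / 255.0) ** 1.5)         # input 2: darkened gamma curve
    w1, w2 = contrast_weight(clahe), contrast_weight(gamma)
    total = w1 + w2                                       # per-pixel weight normalization
    fused = (w1 * clahe + w2 * gamma) / total
    return np.uint8(np.clip(fused, 0, 255))

# out = fuse_dehaze(cv2.imread("foggy.png", cv2.IMREAD_GRAYSCALE))
```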
Procedia PDF Downloads 465
7958 Digital Image Steganography with Multilayer Security
Authors: Amar Partap Singh Pharwaha, Balkrishan Jindal
Abstract:
In this paper, a new method is developed for hiding an image in a digital image with multilayer security. In the proposed method, the secret image is first encrypted using a flexible-matrix-based symmetric key to add the first layer of security. Another layer of security is then added by encrypting the ciphered data using a Pythagorean theorem method. The ciphered data bits (4 bits) produced after double encryption are then embedded within the digital image in the spatial domain using Least Significant Bits (LSBs) substitution. To improve the image quality of the stego-image, an improved form of the pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value, and Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness. In fact, the results of this study are quite promising.
Keywords: Pythagorean theorem, pixel adjustment, ciphered data, image hiding, least significant bit, flexible matrix
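The final embedding layer, 4-bit LSB substitution, can be sketched as follows. This version uses fixed sequential pixel positions for clarity, whereas the described scheme would add key-driven pixel selection and the improved pixel adjustment step.

```python
import numpy as np

def embed_lsb4(cover: np.ndarray, cipher_bits: np.ndarray) -> np.ndarray:
    """Substitute the 4 least significant bits of cover pixels with ciphered data,
    4 bits per pixel."""
    nibbles = cipher_bits.reshape(-1, 4)
    values = nibbles @ np.array([8, 4, 2, 1])            # pack 4 bits into 0..15
    stego = cover.flatten().copy()
    stego[: len(values)] = (stego[: len(values)] & 0xF0) | values
    return stego.reshape(cover.shape)

def extract_lsb4(stego: np.ndarray, n_bits: int) -> np.ndarray:
    values = stego.flatten()[: n_bits // 4] & 0x0F
    return ((values[:, None] >> np.array([3, 2, 1, 0])) & 1).reshape(-1)

cover = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
bits = np.random.randint(0, 2, 64)                       # doubly-encrypted payload bits
stego = embed_lsb4(cover, bits)
assert np.array_equal(extract_lsb4(stego, 64), bits)
```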
Procedia PDF Downloads 337
7957 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
Authors: Adnan A. Y. Mustafa
Abstract:
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
Keywords: big images, binary images, image matching, image similarity
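A sketch of the sampling idea: because positions are drawn in relative coordinates, the same mapping compares images of different sizes, and only a few hundred pixel comparisons are needed. The exact PMMBI mapping is not reproduced here; this is a plain random-sampling estimator.

```python
import numpy as np

def estimate_similarity(a: np.ndarray, b: np.ndarray, n_samples: int = 500) -> float:
    """Estimate binary-image similarity from a small random sample of mapped pixels.
    Images may differ in size: sampling uses normalized (relative) coordinates."""
    rng = np.random.default_rng(42)
    u, v = rng.random(n_samples), rng.random(n_samples)   # relative positions in [0, 1)
    ya, xa = (u * a.shape[0]).astype(int), (v * a.shape[1]).astype(int)
    yb, xb = (u * b.shape[0]).astype(int), (v * b.shape[1]).astype(int)
    return float(np.mean(a[ya, xa] == b[yb, xb]))         # fraction of agreeing pixels

big = np.zeros((1000, 1000), dtype=np.uint8); big[200:800, 200:800] = 1
small = np.zeros((100, 100), dtype=np.uint8); small[20:80, 20:80] = 1
print(estimate_similarity(big, small))   # near 1.0, from only 500 comparisons
```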
Procedia PDF Downloads 197
7956 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging
Authors: Mohammad Esmaeilpour
Abstract:
One of the important open problems in the field of medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used for semi-automatically detecting and quantifying subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We mostly focus on the low-rank values of the matrices obtained by decomposing MR image pairs over a period of time. In addition, a skillful neuroradiologist helps the algorithm to distinguish between noise and small changes.
Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions
Procedia PDF Downloads 474
7955 Medical Image Compression Based on Region of Interest: A Review
Authors: Sudeepti Dayal, Neelesh Gupta
Abstract:
In terms of transmission, the bigger the size of an image, the longer the channel takes to transmit it. Since the bandwidth of the channel is fixed, reducing the size of an image allows a larger amount of data, or more images, to be transmitted over the channel. Compression is the technique used to reduce the size of an image. In terms of storage, compression reduces the file size an image occupies on the disk. Any image can be divided into two parts: the region of interest and the non-region of interest. There are several compression algorithms that compress the data more economically. In this paper, we review region-of-interest and non-region-of-interest based compression techniques and the algorithms which compress images most efficiently.
Keywords: compression ratio, region of interest, DCT, DWT
Procedia PDF Downloads 375
7954 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition Using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and directly stores the details to the contacts. OCRs are known to be used in radar systems for reading speeding vehicles' license plates, among other things. Our project has been implemented using Visual Studio and OpenCV (Open Source Computer Vision). The algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module aims at database generation. The database was generated using two methods: (a) run-time generation, which generates the database at compilation time using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database; (b) contour detection, where a 'jpeg' template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 kB precisely). (2) Preprocessing: the input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: the extracted letters are classified and predicted using the neural networks algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour detection, neural networks, pre-processing, recognition coefficient, runtime template generation, segmentation, weight matrix
Procedia PDF Downloads 389
7953 An Efficient Encryption Scheme Using DWT and Arnold Transforms
Authors: Ali Abdrhman M. Ukasha
Abstract:
Data security is needed in data transmission, storage, and communication. In the proposed scheme, the color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel's pixels as an image scrambling process. All these channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. X-OR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. The extracted contours for color image recovery can be obtained with an accepted level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image and completely reconstruct it without any distortion. It has been shown that the color image can be protected with a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption
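The Arnold scrambling step is easy to reproduce, since the Arnold cat map is standard: pixel coordinates are mixed by an area-preserving modular map and recovered by iterating the inverse map the same number of times. A NumPy sketch for one square channel; the iteration count is illustrative.

```python
import numpy as np

def arnold_scramble(channel: np.ndarray, iterations: int = 5) -> np.ndarray:
    """Arnold transform on a square channel: (x, y) -> (x + y, x + 2y) mod N."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def arnold_unscramble(channel: np.ndarray, iterations: int = 5) -> np.ndarray:
    """Iterating the inverse map (2x - y, -x + y) mod N undoes the scrambling."""
    n = channel.shape[0]
    out = channel.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(2 * x - y) % n, (-x + y) % n]
    return out

red = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy stand-in for the red channel
assert np.array_equal(arnold_unscramble(arnold_scramble(red)), red)
```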
Procedia PDF Downloads 485
7952 Medical Experience: Usability Testing of Displaying Computed Tomography Scans and Magnetic Resonance Imaging in Virtual and Augmented Reality for Accurate Diagnosis
Authors: Alyona Gencheva
Abstract:
The most common way to study diagnostic results is using specialized programs at a stationary workstation. Magnetic Resonance Imaging is presented in a two-dimensional (2D) format, and Computed Tomography sometimes appears as a three-dimensional (3D) model that can be interacted with. The main idea of the research is to compare ways of displaying diagnostic results in virtual reality that can help a surgeon before or during an operation in augmented reality. During the experiment, the medical staff examined liver vessels in the abdominal area and heart boundaries. The search time and detection accuracy were measured on black-and-white and coloured scans. Usability testing in virtual reality examines convenient modes of interaction, such as hand input and voice activation, the display of risk to the patient, and the required number of scans. The results of the experiment will be used in a new C# program based on Magic Leap technology.
Keywords: augmented reality, computed tomography, magic leap, magnetic resonance imaging, usability testing, VTE risk
Procedia PDF Downloads 112