Search results for: image entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3080

2960 Image Classification with Localization Using Convolutional Neural Networks

Authors: Bhuyain Mobarok Hossain

Abstract:

Image classification and localization research is currently an important strategy in the field of computer vision. The evolution of deep learning and convolutional neural networks (CNNs) has greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image using a sliding window. Most detection networks regress a bounding box around the region of interest; in contrast to this architecture, we consider the problem to be a classification problem in which each pixel of the image is a separate section. Image classification is the method of predicting an individual category for a given set of data points. It is part of the broader classification problem and covers any labels assigned to a whole image: an image can be classified as a day or night shot, or, likewise, images of cars and motorbikes can be placed automatically into their respective collections. Deep learning for image classification generally relies on convolutional layers; a network built from them is referred to as a convolutional neural network (CNN).
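
A minimal sketch of the multi-scale sliding-window evaluation described above, assuming a generic classify_patch callable (any trained CNN could stand in for it); the window size, stride, and scale set are illustrative values, not the paper's.

```python
import numpy as np

def sliding_window_classify(image, classify_patch, window=64, stride=32,
                            scales=(1.0, 0.75, 0.5)):
    """Apply a patch classifier at multiple scales and locations.

    classify_patch is assumed to map a (window, window) array to a
    (label, score) pair; any CNN or other classifier can be plugged in.
    """
    detections = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        # nearest-neighbour resize keeps the sketch dependency-free
        rows = (np.arange(h) / s).astype(int)
        cols = (np.arange(w) / s).astype(int)
        scaled = image[rows][:, cols]
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                label, score = classify_patch(scaled[y:y+window, x:x+window])
                # map the window back to original-image coordinates
                detections.append((label, score,
                                   int(x / s), int(y / s), int(window / s)))
    return detections
```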

Keywords: image classification, object detection, localization, particle filter

Procedia PDF Downloads 305
2959 Detecting the Edge of Multiple Images in Parallel

Authors: Prakash K. Aithal, U. Dinesh Acharya, Rajesh Gopakumar

Abstract:

An edge is a variation of brightness in an image. Edge detection is useful in many application areas, such as finding forests and rivers in a satellite image or detecting a broken bone in a medical image. This paper discusses finding the edges of multiple aerial images in parallel. The proposed work was tested on 38 images: 37 color and one monochrome. The time taken to process N images in parallel is equivalent to the time taken to process one image sequentially. The proposed method achieves pixel-level parallelism as well as image-level parallelism.
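
A minimal sketch of the image-level parallelism idea under stated assumptions: the paper targets OpenCL and MPI per its keywords, while this example uses Python's multiprocessing and a Sobel filter simply to illustrate one-worker-per-image processing.

```python
from multiprocessing import Pool

import numpy as np
from scipy import ndimage

def edge_magnitude(image):
    """Sobel gradient magnitude of one image (the pixel-level work)."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    return np.hypot(gx, gy)

def detect_edges_parallel(images, workers=4):
    """Image-level parallelism: each worker handles whole images."""
    with Pool(workers) as pool:
        return pool.map(edge_magnitude, images)

if __name__ == "__main__":
    batch = [np.random.rand(256, 256) for _ in range(8)]
    edges = detect_edges_parallel(batch)
```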

Keywords: edge detection, multicore, GPU, OpenCL, MPI

Procedia PDF Downloads 478
2958 Speeding-up Gray-Scale FIC by Moments

Authors: Eman A. Al-Hilo, Hawraa H. Al-Waelly

Abstract:

In this work, a fractal image compression (FIC) technique is introduced that uses moment features to index the zero-mean range and domain blocks. The moment features are used to speed up the IFS-matching stage: a moment-ratio descriptor filters the domain blocks, keeping only those suitable for IFS matching with the tested range block. Tests conducted on the Lena and Cat images (256×256 pixels, 24 bits/pixel) showed a minimum encoding time (0.89 sec for the Lena image and 0.78 sec for the Cat image) with appropriate PSNR (30.01 dB for the Lena image and 29.8 dB for the Cat image). The reduction in encoding time (ET) is about 12% for the Lena image and 67% for the Cat image.
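
A sketch of the descriptor-based filtering step, assuming a simple second-to-first absolute moment ratio; the paper's exact moment descriptor and tolerance are not specified here, so both are illustrative.

```python
import numpy as np

def zero_mean_blocks(image, size):
    """Split an image into non-overlapping zero-mean blocks."""
    h, w = image.shape
    blocks = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            b = image[y:y+size, x:x+size].astype(float)
            blocks.append(b - b.mean())
    return blocks

def moment_ratio(block):
    """A simple moment-based descriptor (2nd over 1st absolute moment);
    the paper's actual descriptor may differ."""
    m1 = np.abs(block).mean() + 1e-9
    m2 = (block ** 2).mean()
    return m2 / m1

def candidate_domains(range_block, domain_blocks, tol=0.2):
    """Keep only domains whose descriptor is close to the range block's,
    so the expensive IFS matching runs on a short candidate list."""
    r = moment_ratio(range_block)
    return [d for d in domain_blocks
            if abs(moment_ratio(d) - r) <= tol * abs(r)]
```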

Keywords: fractal gray level image, fractal compression technique, iterated function system, moments feature, zero-mean range-domain block

Procedia PDF Downloads 492
2957 Excellent Combination of Tensile Strength and Elongation of Novel Reverse Rolled TaNbHfZrTi Refractory High Entropy Alloy

Authors: Mokali Veeresham

Abstract:

In this work, the high-entropy alloy TaNbHfZrTi was processed at room temperature by a novel reverse rolling route, up to a 90% reduction in thickness. The 90% reverse-rolled samples were subsequently annealed at 800°C and 1000°C for 1 h to understand phase stability, microstructure, texture, and mechanical properties. The 90% reverse-rolled condition contains a single BCC phase; upon annealing at 800°C, a secondary BCC-2 phase formed. Partially and completely recrystallized microstructures developed after annealing at 800°C and 1000°C, respectively. The reverse-rolled and 1000°C-annealed conditions exhibit extraordinary room-temperature tensile properties, with high ultimate tensile strengths (UTS) of 1430 MPa and 1556 MPa and, without loss of ductility, appreciable elongations of 21% and 20%, respectively.

Keywords: refractory high entropy alloys, reverse rolling, recrystallization, microstructure, tensile properties

Procedia PDF Downloads 144
2956 Digital Image Forensics: Discovering the History of Digital Images

Authors: Gurinder Singh, Kulbir Singh

Abstract:

Digital multimedia content such as images, video, and audio can be tampered with easily owing to the availability of powerful editing software. Multimedia forensics is devoted to analyzing such content with various digital forensic techniques in order to validate its authenticity. Digital image forensics, in particular, investigates the reliability of digital images by analyzing the integrity of the data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.

Keywords: computer forensics, multimedia forensics, image ballistics, camera source identification, forgery detection

Procedia PDF Downloads 247
2955 Data Hiding in Gray Image Using ASCII Value and Scanning Technique

Authors: R. K. Pateriya, Jyoti Bharti

Abstract:

This paper presents a data hiding method that provides secret communication between sender and receiver. The data are hidden in gray-scale images, and the boundary of the gray-scale image is used to store the mapping information. The data are in ASCII format, and the mapping is between the ASCII values of the hidden message and the pixel values of the cover image; since pixel values and ASCII values both lie in the range 0 to 255, the mapping information occupies only 1 bit per character of the hidden message instead of 8 bits per character, thus maintaining good quality of the stego image.
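
A minimal sketch of the value-to-coordinate mapping idea, assuming one cover pixel with the matching intensity is reserved per message character; how the paper packs the mapping information into the image boundary at 1 bit per character is not reproduced here.

```python
import numpy as np

def build_mapping(cover, message):
    """Record, for each character, the coordinates of an unused cover
    pixel whose intensity equals the character's ASCII code (both lie
    in 0..255); the coordinate list is the side information."""
    needed = {ord(c) for c in message}
    positions = {v: np.argwhere(cover == v) for v in needed}
    mapping, used = [], {v: 0 for v in needed}
    for ch in message:
        v = ord(ch)
        if used[v] >= len(positions[v]):
            raise ValueError(f"no unused pixel with value {v}")
        y, x = positions[v][used[v]]
        mapping.append((int(y), int(x)))
        used[v] += 1
    return mapping

def recover(cover, mapping):
    """Read the message back from the recorded coordinates."""
    return "".join(chr(int(cover[y, x])) for y, x in mapping)
```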

Keywords: ASCII value, cover image, PSNR, pixel value, stego image, secret message

Procedia PDF Downloads 414
2954 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior

Authors: Juliana A. Knocikova

Abstract:

Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with only a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time, and thus the rate at which new signal patterns are generated. During the last decades, many algorithms have been introduced to assess patterns of physiological responses to external stimuli. However, reflex responses are usually characterized by short periods of time, which represents a serious limitation for the usual methods of nonlinear analysis. To solve the problem of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series; conversely, an increasing value indicates unpredictability and random behavior, hence higher system complexity. Reduced complexity of neurophysiological data has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also evident during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
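
A standard implementation of approximate entropy (Pincus's ApEn) for short series like the reflex recordings discussed above; m = 2 and r = 0.2·SD are common defaults, not values taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus, 1991).
    Low values indicate regularity; high values indicate complexity."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # a common default tolerance
    def phi(m):
        n = len(x) - m + 1
        patterns = np.array([x[i:i+m] for i in range(n)])
        # Chebyshev distance between every pair of m-length patterns
        dist = np.max(np.abs(patterns[:, None] - patterns[None, :]), axis=2)
        c = (dist <= r).mean(axis=1)   # fraction of similar patterns
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

# Example: an irregular series scores higher than a regular one.
t = np.linspace(0, 10, 400)
print(approximate_entropy(np.sin(t)))             # low: regular
print(approximate_entropy(np.random.randn(400)))  # high: irregular
```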

Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex

Procedia PDF Downloads 300
2953 High-Capacity Image Steganography Using Wavelet-Based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNNs) to hide a secret image within a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and eventually to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process, maximizing the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.
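
A hand-crafted sketch of DWT-domain embedding, not the paper's learned CNN fusion: the secret's sub-bands are blended into the cover's with a small weight, and extraction here is non-blind (it needs the cover), whereas DeepWaveletFusion extracts blindly. The Haar wavelet and alpha value are assumptions.

```python
import numpy as np
import pywt

def dwt_embed(cover, secret, alpha=0.05):
    """Blend the DWT sub-bands of a secret image into those of a
    same-sized cover; alpha trades imperceptibility for recoverability."""
    cA_c, (cH_c, cV_c, cD_c) = pywt.dwt2(cover.astype(float), 'haar')
    cA_s, (cH_s, cV_s, cD_s) = pywt.dwt2(secret.astype(float), 'haar')
    fused = (cA_c + alpha * cA_s,
             (cH_c + alpha * cH_s, cV_c + alpha * cV_s, cD_c + alpha * cD_s))
    return pywt.idwt2(fused, 'haar')

def dwt_extract(stego, cover, alpha=0.05):
    """Non-blind extraction by subtracting the cover's sub-bands."""
    cA_w, detail_w = pywt.dwt2(stego.astype(float), 'haar')
    cA_c, detail_c = pywt.dwt2(cover.astype(float), 'haar')
    rec = ((cA_w - cA_c) / alpha,
           tuple((w - c) / alpha for w, c in zip(detail_w, detail_c)))
    return pywt.idwt2(rec, 'haar')
```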

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 90
2952 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as an object's background and foreground. It is a critical technique in both image processing and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has addressed color images. Most segmentation algorithms or techniques vary with the input data and the application, and nearly all are unsuitable for noisy environments. Much of the existing work uses the Markov Random Field (MRF), which is computationally demanding but said to be robust to noise. In recent years, image segmentation has been applied to problems such as easing the processing of an image, interpreting its contents, and simplifying its analysis. This article reviews and summarizes image segmentation techniques and algorithms developed over the past years, including convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed, and the applications and potential future developments around image segmentation are addressed. The review concludes that no single technique is perfectly suited to segmenting all types of images, but that hybrid techniques yield more accurate and efficient results.
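
As a concrete instance of the thresholding family reviewed above, a sketch of Otsu's method, which picks the gray level maximizing between-class variance; it assumes an 8-bit single-channel image.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t   # foreground = pixels with gray > best_t
```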

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 96
2951 Improved Image Summarization Using Image Processing and Particle Swarm Optimization Algorithm

Authors: Hooman Torabifard

Abstract:

In the last few years, with the progress of technology and the entry of computers and artificial intelligence into all kinds of scientific and industrial fields, human lifestyles have changed and, in general, the way humans live has seen much change and development. Some of these changes have occurred in digital images and image processing, and they continue still. Besides all the benefits, however, there have been disadvantages; one of them is the multiplicity of images, with their high volume of data. The focus of this paper is on improving and developing a method for summarizing these images and enhancing their productivity. The method consists of a set of techniques based on data obtained from image processing together with the particle swarm optimization (PSO) algorithm; the remainder of the paper elaborates this method in detail.

Keywords: image summarization, particle swarm optimization, image threshold, image processing

Procedia PDF Downloads 133
2950 Improved Super-Resolution Using Deep Denoising Convolutional Neural Network

Authors: Pawan Kumar Mishra, Ganesh Singh Bisht

Abstract:

Super-resolution is a technique used in computer vision to construct a high-resolution image from a single low-resolution image. It is used to increase the frequency content, recover lost details, and remove the downsampling artifacts and noise introduced by the camera during image acquisition. High-resolution images and videos are desired in most digital imaging applications and image analysis tasks. The goal of super-resolution is to combine the non-redundant information within one or more low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple low-resolution images of the same scene, differing by some transformation, are used; this is called multi-image super-resolution. Another family of methods, single-image super-resolution, tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution input. Deep learning is currently among the state-of-the-art approaches to this reconstruction problem. In this research, we propose Deep Denoising Super-Resolution (DDSR), a deep neural network that effectively reconstructs a high-resolution image from a low-resolution input.
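
A minimal SRCNN-style baseline for single-image super-resolution, given only as orientation: DDSR's actual architecture is not described in the abstract, so the layer sizes and the bicubic-upscaled input convention here are assumptions.

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """SRCNN-style sketch: the LR image is first upscaled (e.g. bicubic),
    then three conv layers learn to restore the lost detail."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

model = TinySR()
upscaled = torch.rand(1, 1, 128, 128)   # bicubic-upscaled LR input
print(model(upscaled).shape)            # -> torch.Size([1, 1, 128, 128])
```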

Keywords: resolution, deep-learning, neural network, de-blurring

Procedia PDF Downloads 517
2949 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System

Authors: Hassan Al Salman

Abstract:

We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Provided further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.

Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis

Procedia PDF Downloads 383
2948 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at certain predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of sections being equal to the number of image crops. Each section of the secret text message is embedded into an image crop, in a secret sequence, using the LSB technique; the embedding uses the cover image's color channels. The stego image is obtained by reassembling the image and the stego crops. The technique is compared with other state-of-the-art techniques. Evaluation is based on visual inspection for any degradation of the stego image, the difficulty of extracting the embedded data for an unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results confirm that the proposed technique is more secure than the traditional techniques.
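
A minimal sketch of the per-crop LSB step, assuming uint8 crops and one message section per crop; the secret crop coordinates and section ordering, which carry the scheme's security, are outside this snippet.

```python
import numpy as np

def embed_lsb(crop, section):
    """Embed one text section into one crop by overwriting the least
    significant bit of successive channel values (crop dtype: uint8)."""
    flat = crop.reshape(-1).copy()
    bits = np.unpackbits(np.frombuffer(section.encode(), dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("section too long for this crop")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(crop.shape)

def extract_lsb(crop, n_chars):
    """Collect the LSBs back into bytes and decode the section."""
    bits = (crop.reshape(-1)[:n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode()
```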

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 269
2947 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise

Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen

Abstract:

The second-law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses were carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data were used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of reduced metabolic energy generation in the mitochondria. The reduction of mitochondrial energy efficiency makes it difficult to carry out the maintenance of the muscle tissue, which in turn causes a decline in muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, exergy destruction is 1.49 times the work performance; this proportion becomes 2.50 and 5.21 times when the exergetic efficiency decreases to 0.30 and 0.17, respectively.

Keywords: aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency

Procedia PDF Downloads 427
2946 Secure E-Pay System Using Steganography and Visual Cryptography

Authors: K. Suganya Devi, P. Srinivasan, M. P. Vaishnave, G. Arutperumjothi

Abstract:

Today’s internet is highly prone to various online attacks, the most harmful of which is phishing, where attackers host fake websites that closely resemble the genuine ones. We propose an image-based authentication scheme using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true-color (RGB) images and uses the Discrete Cosine Transform to compress the images. The proposed method hides the secret data inside a cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares: the original image can be identified only when both qualified shares are simultaneously available, while an individual share reveals nothing about it. Thus the existence of the secret message is hard to detect by RS steganalysis.
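
A sketch of the (2,2) visual cryptography step on a binary image: each pixel expands into a 2x2 sub-pixel pattern, identical across the shares for white pixels and complementary for black ones, so overlaying the shares reveals the image while each share alone is random noise. The two-pattern choice here is the textbook construction, not necessarily the paper's.

```python
import numpy as np

def make_shares(binary_img, rng=None):
    """Split a binary image (1 = black) into two (2,2) VC shares."""
    rng = np.random.default_rng() if rng is None else rng
    patterns = np.array([[[1, 0], [0, 1]],
                         [[0, 1], [1, 0]]], dtype=np.uint8)
    h, w = binary_img.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = patterns[rng.integers(2)]       # random pattern per pixel
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            # black pixel -> complementary pattern in the second share
            s2[2*y:2*y+2, 2*x:2*x+2] = 1 - p if binary_img[y, x] else p
    return s1, s2

def stack(s1, s2):
    """Simulates overlaying printed transparencies (OR of the inks)."""
    return s1 | s2
```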

Keywords: image security, random LSB, steganography, visual cryptography

Procedia PDF Downloads 330
2945 Red Green Blue Image Encryption Based on Paillier Cryptographic System

Authors: Mamadou I. Wade, Henry C. Ogworonjo, Madiha Gul, Mandoye Ndoye, Mohamed Chouikha, Wayne Patterson

Abstract:

In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (Red Green Blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each channel's pixel intensity values. Next, the encrypted image is combined and, if necessary, compressed before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses on the recovered images in order to verify their robustness to security attacks. The results show that the proposed image encryption scheme produces highly secured encrypted images.
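
A toy Paillier sketch with deliberately small fixed primes (real deployments need large random primes): it shows per-value encryption of pixel intensities as the abstract describes, using the standard g = n + 1 simplification.

```python
import math
import random

def paillier_keygen(p=104723, q=104729):
    """Toy key pair from small fixed primes; with g = n + 1 the
    decryption constant is simply mu = lambda^(-1) mod n."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen()
# Each channel's pixel intensities are encrypted independently:
cipher = [encrypt(pub, v) for v in [12, 200, 255]]
print([decrypt(pub, priv, c) for c in cipher])   # -> [12, 200, 255]
```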

Keywords: image encryption, Paillier cryptographic system, RGB image encryption

Procedia PDF Downloads 238
2944 An Object-Based Image Resizing Approach

Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai

Abstract:

Common methods for resizing an image include scaling and cropping; however, both approaches cause quality problems in the reduced image. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps from an input image: an enhanced visual saliency map and an improved gradient map. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.

Keywords: energy map, visual saliency, gradient map, seam carving

Procedia PDF Downloads 476
2943 Rough Neural Networks in Adapting Cellular Automata Rule for Reducing Image Noise

Authors: Yasser F. Hassan

Abstract:

The reduction or removal of noise in a color image is an essential part of image processing, whether the final information is used for human perception or for automatic inspection and analysis. This paper describes a modeling system based on rough neural networks that adapts cellular automata rules for various image processing tasks, including noise removal. We consider the problem of object processing in color images, using rough neural networks to help derive the rules that the cellular automata then apply to the noisy image. The proposed method is compared with some classical and recent methods. The results demonstrate that the new model can be trained to perform many different tasks and that the quality of its results is comparable to or better than that of established specialized algorithms.

Keywords: rough sets, rough neural networks, cellular automata, image processing

Procedia PDF Downloads 439
2942 Entropy in a Field of Emergence in an Aspect of Linguo-Culture

Authors: Nurvadi Albekov

Abstract:

A communicative situation is the basis that designates the potential models of ‘constructed forms’, the motivated basis of a text, since a text can be regarded as a product of the communicative situation. It is within the field of emergence that the models of a text which can potentially be predicted in a given communicative situation are designated. Every text can be regarded as a conceptual system structured on the basis of a certain communicative situation. However, in the process of structuring a certain model of a conceptual system, the consciousness of a recipient can act only within the borders of the field of emergence, for going beyond this border indicates a misunderstanding of the communicative situation. On the basis of the communicative situation we witness the increment of meaning, where the synergizing of the informative model of communication, formed by using the invariant units of a language system, is a result of the verbalization of the communicative situation. The potential of the text models predicted within the field of emergence likewise depends on the communicative situation. The notion of the ‘field of emergence’ is interpreted as a unit of the language system with a poly-directed universal structure, implying the presence of a core, a center, and a periphery, and including different levels of the means of the functioning language system, both in terms of linguistic resources and in terms of extra-linguistic factors, whose interaction results in the increment of a text. The field of emergence is considered the most promising notion for the analysis of texts: oral, written, printed, and electronic. As a unit of the language system, the field of emergence has several properties that support its use in studying a text at different levels. This work attempts an analysis of entropy in a text in the aspect of the linguo-cultural code, predicted within the model of the field of emergence. The article describes the problem of entropy in the field of emergence caused by the influence of extra-linguistic factors. The increase of entropy is caused not only by the intrusion of language resources but also by the influence of an alien culture as a whole, and by the appearance of symbols atypical for the given culture in the field of emergence. The borrowing of alien linguo-cultural symbols into the author's linguo-culture increases entropy in the construction of a text, at the level of both meaning and structure: it amounts to an artificial formatting of lexical units that violates the stylistic unity of a phrase. One important characteristic that decreases entropy in the field of emergence is a typological similarity between the lexical and semantic resources of different linguo-cultures with respect to extra-linguistic factors.

Keywords: communicative situation, field of emergence, linguo-culture, entropy

Procedia PDF Downloads 362
2941 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding patterns (sets of items, subsequences, substructures, etc.) that occur frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining, and is used to find inherent regularities in data, such as which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one bottleneck of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. This investigation proposes a new algorithm that can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor that utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. As a pre-processor for frequent itemset mining, FASTER can produce a speed-up of 3.1 times over the original algorithm while maintaining an accuracy of 71%.
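
A simplified sketch of entropy-driven attribute ranking in the spirit of FASTER's pre-processing; the actual algorithm pairs entropy-based record reduction with rough-set attribute selection, which this stand-in does not reproduce.

```python
import numpy as np

def shannon_entropy(column):
    """Entropy of one attribute's value distribution."""
    _, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def select_features(data, k):
    """Rank attributes by entropy and keep the k most informative ones,
    shrinking the table before the expensive itemset mining runs."""
    scores = [shannon_entropy(data[:, j]) for j in range(data.shape[1])]
    keep = sorted(np.argsort(scores)[::-1][:k])
    return data[:, keep], keep
```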

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 437
2940 Automatic Classification Using Dynamic Fuzzy C Means Algorithm and Mathematical Morphology: Application in 3D MRI Image

Authors: Abdelkhalek Bakkari

Abstract:

Image segmentation is a critical step in image processing and pattern recognition. In this paper, we propose a new robust automatic image classification method based on a dynamic fuzzy c-means algorithm and mathematical morphology. The proposed segmentation algorithm (DFCM_MM) has been applied to MR perfusion images. The obtained results show the validity and robustness of the proposed approach.

Keywords: segmentation, classification, dynamic, fuzzy c-means, MR image

Procedia PDF Downloads 478
2939 A Survey on Types of Noises and De-Noising Techniques

Authors: Amandeep Kaur

Abstract:

Digital image processing is a fundamental tool for performing various operations on digital images, such as pattern recognition, noise removal, and feature extraction. This paper describes noise removal techniques for various types of noise, discussing the noises that arise in an image from different environmental and accidental factors. Various de-noising approaches that use different wavelets and filters are presented. From an analysis of the literature on image de-noising, we find that wavelet-based approaches are markedly more effective than the others.
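
A sketch of the classic wavelet-shrinkage recipe behind the wavelet-based approaches favored above: soft-threshold the detail sub-bands using the universal threshold with a MAD noise estimate. The db4 wavelet and two decomposition levels are arbitrary choices.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet='db4', level=2):
    """Wavelet shrinkage (VisuShrink-style) for a grayscale image."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # robust noise estimate: MAD of the finest diagonal sub-band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode='soft') for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```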

Keywords: de-noising techniques, edges, image, image processing

Procedia PDF Downloads 336
2938 Detect Circles in Image: Using Statistical Image Analysis

Authors: Fathi M. O. Hamed, Salma F. Elkofhaifee

Abstract:

The aim of this work is to detect geometrically shaped objects in an image; here, the object considered is a circle. Identification requires finding three characteristics: the number, size, and location of the objects. To achieve this goal, the paper presents an algorithm combining statistical approaches with image analysis techniques, implemented to meet the paper's main objectives. The algorithm was first evaluated on simulated data, where it yields good results, and then applied to real data.

Keywords: image processing, median filter, projection, scale-space, segmentation, threshold

Procedia PDF Downloads 432
2937 Adaptive Dehazing Using Fusion Strategy

Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha

Abstract:

The goal of haze removal algorithms is to enhance and recover the details of a scene from a foggy image. For enhancement, the proposed method focuses on two main components: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) edge strengthening based on a gradient model. Accurate haze removal algorithms are needed in many circumstances. The de-fog feature works through a complex algorithm that first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The haze-free output image is then reconstructed using the fusion methodology; to increase accuracy, an interpolation method is used in the output reconstruction. Promising retrieval performance is achieved, especially on particular examples.

Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map

Procedia PDF Downloads 464
2936 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping

Authors: Adnan A. Y. Mustafa

Abstract:

In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
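
A sketch of the sampling idea behind this kind of estimation, under assumptions: a fixed random sample of proportionally mapped pixel pairs is compared instead of every pixel, which is also what makes the cost independent of image size. The exact probabilistic mapping of PMMBI is not reproduced here.

```python
import numpy as np

def quick_similarity(a, b, n_samples=1000, rng=None):
    """Estimate the similarity of two binary images from a small
    random sample of pixel pairs; images may differ in size, since
    positions are mapped proportionally into each image."""
    rng = np.random.default_rng() if rng is None else rng
    ys = rng.random(n_samples)
    xs = rng.random(n_samples)
    ay = (ys * a.shape[0]).astype(int); ax = (xs * a.shape[1]).astype(int)
    by = (ys * b.shape[0]).astype(int); bx = (xs * b.shape[1]).astype(int)
    # fraction of sampled pairs that agree approximates the similarity
    return (a[ay, ax] == b[by, bx]).mean()
```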

Keywords: big images, binary images, image matching, image similarity

Procedia PDF Downloads 196
2935 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images

Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat

Abstract:

2D image segmentation is a significant process for finding a suitable region in medical images such as MRI, PET, and CT. In this study, we have focused on 2D MRI images and designed a GUI (graphical user interface) written in MATLAB™ for them. The program offers two interfaces, one for data pre-processing and one for image clustering or segmentation. The pre-processing section provides a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (designed by the user in MATLAB). For image clustering, seven different segmentation algorithms are available for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, a combination of Lloyd's algorithm and k-means, mean-shift clustering, and BBO (biogeography-based optimization). To find a suitable cluster number for a 2D MRI, we designed a histogram-based cluster estimation method and passed the estimated numbers to the segmentation algorithms to cluster an image automatically. The GUI software also lets us select the best hybrid method for each 2D MR image.

Keywords: image segmentation, clustering, GUI, 2D MRI

Procedia PDF Downloads 377
2934 Medical Image Compression Based on Region of Interest: A Review

Authors: Sudeepti Dayal, Neelesh Gupta

Abstract:

In transmission, the bigger an image, the longer the channel takes to transmit it; since channel bandwidth is fixed, reducing the size of an image allows more data or images to be transmitted over the channel. Compression is the technique used to reduce the size of an image. In terms of storage, compression reduces the file size the image occupies on disk. Any image can be divided into two parts, the region of interest and the non-region of interest, and several compression algorithms exploit this to compress the data more economically. In this paper we review compression techniques based on the region of interest and the non-region of interest, and the algorithms that compress the image most efficiently.

Keywords: compression ratio, region of interest, DCT, DWT

Procedia PDF Downloads 374
2933 Image Enhancement of Histological Slides by Using Nonlinear Transfer Function

Authors: D. Suman, B. Nikitha, J. Sarvani, V. Archana

Abstract:

Histological slides have provided clinical diagnostic information since early times. Even with the advent of high-resolution imaging cameras, the images tend to contain background noise, which makes analysis complex. A study of histological slides was carried out using an image enhancement method based on a nonlinear transfer function. The method processes the raw color images acquired from a biological microscope, which are generally affected by background noise; such images usually appear blurred and do not convey the intended information. An enhancement method is therefore proposed and applied to 50 histological slides of human tissue. The histological image is converted into an HSV color image, and only the luminance (V component) is enhanced, because changes in the H and S components could upset the color balance between the HSV components. The HSV image is divided into smaller blocks, and dynamic range compression is carried out using a linear transformation function; each pixel in a block is enhanced based on the contrast of the center pixel and its neighborhood. After processing the V component, the HSV image is transformed back into a color image. The study showed improvement in the image characteristics, bringing out the significant details of the histological images.
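
A minimal sketch of the block-wise V-channel idea under assumptions: a per-block contrast stretch stands in for the paper's neighborhood-contrast transfer function, and the block size is arbitrary. H and S are left untouched to preserve color balance, as the abstract specifies.

```python
import cv2
import numpy as np

def enhance_histology(bgr, block=32):
    """Enhance only the V channel of an HSV image, block by block.
    Input is assumed to be a uint8 BGR image (OpenCV's channel order)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(float)
    v = hsv[..., 2]
    out = v.copy()
    for y in range(0, v.shape[0], block):
        for x in range(0, v.shape[1], block):
            tile = v[y:y+block, x:x+block]
            lo, hi = tile.min(), tile.max()
            if hi > lo:
                # per-block contrast stretch of the luminance only
                out[y:y+block, x:x+block] = (tile - lo) / (hi - lo) * 255.0
    hsv[..., 2] = out
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```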

Keywords: HSV space, histology, enhancement, image

Procedia PDF Downloads 329
2932 An Efficient Encryption Scheme Using DWT and Arnold Transforms

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform, while the Arnold transform is used to change the locations of the red channel's pixels as an image scrambling step. All channels are then encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the pixel values. The contours of the recovered color image can be extracted with an acceptable level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image and completely reconstruct it without any distortion, showing that the color image can be protected with a higher security level. The presented method is easy to implement in hardware and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
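
A sketch of the Arnold (cat map) scrambling applied to one square channel; the per-pixel move (x, y) -> ((x + y) mod N, (x + 2y) mod N) is the standard map, and treating the iteration count as part of the key is a common convention rather than a detail taken from the paper.

```python
import numpy as np

def arnold_scramble(channel, iterations=10):
    """Arnold's cat map on an N x N channel: the pixel at (x, y) moves
    to ((x + y) mod N, (x + 2y) mod N). The map is a bijection and is
    periodic, so iterating long enough returns the original layout."""
    n, m = channel.shape
    assert n == m, "the cat map needs a square channel"
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    out = channel.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(xs + ys) % n, (xs + 2 * ys) % n] = out   # scatter each pixel
        out = nxt
    return out
```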

Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption

Procedia PDF Downloads 482
2931 GIS Based Spatial Modeling for Selecting New Hospital Sites Using APH, Entropy-MAUT and CRITIC-MAUT: A Study in Rural West Bengal, India

Authors: Alokananda Ghosh, Shraban Sarkar

Abstract:

The study aims to identify suitable sites for new hospitals with critical obstetric care facilities in Birbhum, one of the vulnerable and underserved districts of Eastern India, considering six main criteria and 14 sub-criteria and using a GIS-based Analytic Hierarchy Process (AHP) and Multi-Attribute Utility Theory (MAUT) approach. The criteria were identified through field surveys and previous literature. After collecting expert decisions, a pairwise comparison matrix was prepared using the Saaty scale to calculate the weights through AHP. In parallel, objective weighting methods, i.e., Entropy and Criteria Importance Through Intercriteria Correlation (CRITIC), were used to perform the MAUT. Finally, suitability maps were prepared by weighted sum analysis, and sensitivity analyses of the AHP were performed to explore the effect of the dominant criteria. Results from AHP reveal that ‘maternal death in transit’, followed by ‘accessibility and connectivity’ and ‘maternal health care service (MHCS) coverage gap’, were the three most important criteria, with comparatively higher weights, whereas ‘accessibility and connectivity’ and ‘maternal death in transit’ left a stronger imprint in the Entropy and CRITIC weightings, respectively. When the predicted suitability classes of the three models were compared with the layer of existing hospitals, all but Entropy-MAUT pointed towards areas left underserved by the existing facilities; only 43%-67% of existing hospitals fell in the moderate-to-lower suitability classes. The results of the predictive models might therefore bring valuable input to future planning.
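
A sketch of the entropy weighting step used in Entropy-MAUT: criteria whose values vary more across candidate sites carry more information and receive larger weights. The decision matrix below is toy data, and the sketch assumes all criteria are positive and already benefit-oriented.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from Shannon entropy.
    X: one row per candidate site, one column per criterion."""
    P = X / X.sum(axis=0)                          # normalize each criterion
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P + 1e-12)).sum(axis=0)   # entropy per criterion
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

# Toy example with 4 candidate sites and 3 criteria:
X = np.array([[0.2, 30.0, 5.0],
              [0.8, 22.0, 9.0],
              [0.5, 27.0, 4.0],
              [0.9, 35.0, 7.0]])
print(entropy_weights(X))   # weights sum to 1
```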

Keywords: hospital site suitability, analytic hierarchy process, multi-attribute utility theory, entropy, criteria importance through intercriteria correlation, multi-criteria decision analysis

Procedia PDF Downloads 66