Search results for: ambiguous images
794 Wavelet-Based Despeckling of Synthetic Aperture Radar Images Using Adaptive and Mean Filters
Authors: Syed Musharaf Ali, Muhammad Younus Javed, Naveed Sarfraz Khattak
Abstract:
In this paper we introduce a new wavelet-based algorithm for speckle reduction in synthetic aperture radar images, which uses a combination of the undecimated wavelet transform, a Wiener filter (an adaptive filter), and a mean filter. Furthermore, instead of using existing thresholding techniques such as SURE shrinkage, Bayesian shrinkage, universal thresholding, normal thresholding, VisuShrink, or soft and hard thresholding, we use brute force thresholding, which iteratively runs the whole algorithm for each candidate threshold value, saves each result in an array, and finally selects the threshold value that gives the best results. It is therefore slow compared to existing thresholding techniques, but gives the best results achievable under the given speckle reduction algorithm.
Keywords: Brute force thresholding, directional smoothing, direction dependent mask, undecimated wavelet transformation.
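As a rough illustration of the brute-force search this abstract describes, the sketch below assumes PyWavelets for a one-level undecimated (stationary) wavelet transform and, purely for illustration, scores each candidate by MSE against a reference image; the abstract does not specify the paper's actual quality criterion.

```python
import numpy as np
import pywt

def soft_threshold(c, t):
    # Soft thresholding: shrink coefficient magnitudes toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def brute_force_despeckle(noisy, reference, wavelet="haar", n_candidates=50):
    # One-level undecimated (stationary) wavelet transform.
    cA, (cH, cV, cD) = pywt.swt2(noisy, wavelet, level=1)[0]
    t_max = max(np.abs(c).max() for c in (cH, cV, cD))
    best_err, best_img = np.inf, None
    # Run the whole pipeline for every candidate threshold, keep the best.
    for t in np.linspace(0.0, t_max, n_candidates):
        detail = tuple(soft_threshold(c, t) for c in (cH, cV, cD))
        rec = pywt.iswt2([(cA, detail)], wavelet)
        err = np.mean((rec - reference) ** 2)
        if err < best_err:
            best_err, best_img = err, rec
    return best_img
```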
793 Elimination Noise by Adaptive Wavelet Threshold
Authors: Iman Elyasi, Sadegh Zarmehi
Abstract:
Observed images are often degraded, mainly by noise. Recently, image denoising using the wavelet transform has been attracting much attention. The wavelet-based approach provides a particularly useful method for image denoising when the preservation of edges in the scene is important, because the local adaptivity is based explicitly on the values of the wavelet detail coefficients. In this paper, we propose several methods of noise removal from images degraded with Gaussian noise by using adaptive wavelet thresholds (Bayes Shrink, Modified Bayes Shrink and Normal Shrink). The proposed thresholds are simple and adaptive to each subband because the parameters required for estimating the threshold depend on the subband data. Experimental results show that the proposed thresholds remove noise significantly and preserve the edges in the scene.
Keywords: Image denoising, Bayes Shrink, Modified Bayes Shrink, Normal Shrink.
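A compact sketch of the standard BayesShrink rule mentioned in this abstract, assuming PyWavelets: the noise level is estimated with the usual MAD rule on the finest diagonal subband, and each detail subband is soft-thresholded with T = sigma_n^2 / sigma_x.

```python
import numpy as np
import pywt

def bayes_threshold(subband, sigma_n):
    # BayesShrink: T = sigma_n^2 / sigma_x, where sigma_x is estimated from
    # the subband variance after subtracting the noise variance.
    sigma_x = np.sqrt(max(np.mean(subband ** 2) - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x

def bayes_shrink(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Robust noise estimate (MAD rule) from the finest diagonal subband.
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        out.append(tuple(
            pywt.threshold(c, bayes_threshold(c, sigma_n), mode="soft")
            for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)
```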
792 Multichannel Image Mosaicing of Stem Cells
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Filippo Piccinini
Abstract:
Image mosaicing techniques are usually employed to offer researchers a wider field of view of microscopic images of biological samples. A mosaic is commonly achieved using automated microscopes, often with one "color" channel, whether for natural or fluorescent analysis. In this work we present a method to achieve three subsequent mosaics of the same part of a stem cell culture, analyzed in phase contrast and in fluorescence, with a common non-automated inverted microscope. The mosaics obtained are then merged to mark, in the original phase contrast images, the nuclei and cytoplasm of the cells with reference to a mosaic of the culture rather than to single images. The experiments carried out prove the effectiveness of our approach with cultures of cells stained with calcein (green: cytoplasm and nuclei) and Hoechst (blue: nuclei) probes.
Keywords: Microscopy, image mosaicing, fluorescence, stem cells.
791 A DCT-Based Secure JPEG Image Authentication Scheme
Authors: Mona F. M. Mursi, Ghazy M.R. Assassa, Hatim A. Aboalsamh, Khaled Alghathbar
Abstract:
The challenge in image authentication is that images often need to be subjected to non-malicious operations like compression, so authentication techniques need to be compression tolerant. In this paper we propose an image authentication system that is tolerant to JPEG lossy compression. A scheme for JPEG grayscale images is proposed, based on a data embedding method that uses a secret key and a secret mapping vector in the frequency domain. An encrypted feature vector extracted from the image DCT coefficients is embedded redundantly and invisibly in the marked image. On the receiver side, the feature vector is derived again from the received image and compared against the extracted watermark to verify the image's authenticity. The proposed scheme is robust against JPEG compression up to a maximum compression of approximately 80%, but sensitive to malicious attacks such as cutting and pasting.
Keywords: Authentication, DCT, JPEG, Watermarking.
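For illustration only, a hypothetical block-DCT feature extraction of the kind this abstract alludes to: coarsely quantized low-frequency coefficients tend to survive moderate JPEG compression. The feature set, quantization step, encryption, and key-driven mapping used by the paper are not given in the abstract, so everything below is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(gray, block=8, keep=3, q=16.0):
    # Hypothetical feature vector: the first keep x keep low-frequency DCT
    # coefficients of each 8x8 block, quantized by q for stability under
    # moderate JPEG recompression.
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(gray[y:y + block, x:x + block].astype(np.float64),
                     norm="ortho")
            feats.append(np.round(c[:keep, :keep] / q).ravel())
    return np.concatenate(feats)
```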
790 Computer Aided Classification of Architectural Distortion in Mammograms Using Texture Features
Authors: Birmohan Singh, V. K. Jain
Abstract:
Computer aided diagnosis systems provide vital opinions to radiologists for the detection of early signs of breast cancer in mammogram images. Architectural distortions, masses and microcalcifications are the major abnormalities. In this paper, a computer aided diagnosis system is proposed for distinguishing abnormal mammograms with architectural distortion from normal mammograms. Four types of texture features (GLCM, GLRLM, fractal and spectral) are extracted for the regions of suspicion. A support vector machine is used as the classifier in this study. The proposed system yielded an overall sensitivity of 96.47% and an accuracy of 96% on mammogram images collected from the Digital Database for Screening Mammography (DDSM).
Keywords: Architecture Distortion, GLCM Texture Features, GLRLM Texture Features, Mammograms, Support Vector Machine.
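A sketch of one of the four feature families named in this abstract (GLCM), assuming scikit-image; the resulting vectors would then be fed to an SVM classifier such as sklearn.svm.SVC. Distances, angles and the chosen properties are illustrative defaults, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    # roi: 8-bit grayscale region of suspicion.
    # Gray-level co-occurrence matrix over 4 directions at distance 1,
    # summarized by four standard Haralick-style properties.
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```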
789 Multi-Criteria Based Robust Markowitz Model under Box Uncertainty
Authors: Pulak Swain, A. K. Ojha
Abstract:
Portfolio optimization deals with the problem of efficient asset allocation. Risk and expected return are two conflicting criteria in such problems, where the investor prefers the return to be high and the risk to be low. Using a multi-objective approach, we can solve these types of problems. However, the information we have for the input parameters is generally ambiguous, and the input values can fluctuate around some nominal values. We cannot ignore this uncertainty, as it can affect the asset allocation drastically. So we use a robust optimization approach for problems where the input parameters come under box uncertainty. In this paper, we solve the multi-criteria robust problem with the help of the E-constraint method.
Keywords: Portfolio optimization, multi-objective optimization, E-constraint method, box uncertainty, robust optimization.
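A sketch of the formulation this abstract describes, under one standard reading of box uncertainty: with nominal returns \hat{\mu}, deviations \delta (so \mu_i lies in [\hat{\mu}_i - \delta_i, \hat{\mu}_i + \delta_i]) and long-only weights x >= 0, the worst-case return is (\hat{\mu} - \delta)^T x, and the E-constraint method scalarizes the bi-objective problem by bounding the return while minimizing risk:

```latex
\min_{x}\; x^{\top}\Sigma x
\quad \text{s.t.} \quad
(\hat{\mu}-\delta)^{\top}x \;\ge\; \epsilon,
\qquad \mathbf{1}^{\top}x = 1,
\qquad x \ge 0 .
```

Sweeping \epsilon over attainable worst-case returns then traces the robust efficient frontier; the paper's exact parameterization may differ.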
788 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks
Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin
Abstract:
Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology combined with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local threshold techniques. The outputs of the processed image (the area of the faulty portion, the number of objects in the image, and the sharp factor of the image) are then fed to the input layer of the neural network, which uses the back propagation algorithm to compute the weighted factors and generate the desired classification of defects as output.
Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.
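A minimal sketch of the feature extraction step this abstract outlines, with a simple global threshold standing in for the paper's restoration plus local-threshold pipeline; the "sharp factor" computation is not specified in the abstract and is omitted.

```python
import numpy as np
from scipy.ndimage import label

def fabric_features(gray, rel_thresh=0.5):
    # Binarize (defects assumed darker than the fabric background; this
    # global rule is a stand-in for the paper's local thresholding), then
    # measure the faulty area and count the connected defect objects.
    binary = gray < rel_thresh * gray.max()
    _, n_objects = label(binary)
    faulty_area = int(binary.sum())
    return faulty_area, n_objects
```

These two values, together with a sharpness measure, would form the input layer of the back-propagation network.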
787 Heavy Metals Estimation in Coastal Areas Using Remote Sensing, Field Sampling and Classical and Robust Statistic
Authors: Elena Castillo-López, Raúl Pereda, Julio Manuel de Luis, Rubén Pérez, Felipe Piña
Abstract:
Sediments are an important site of accumulation of toxic contaminants within the aquatic environment. Bioassays are a powerful tool for studying the toxicity of sediments, but they can be expensive. This article presents a methodology to estimate a key property of intertidal sediments in coastal zones: heavy metal concentration. The study, developed in the Bay of Santander (Spain), applies classical and robust statistics to CASI-2 hyperspectral images to estimate heavy metal presence and ecotoxicity (TOC). Simultaneous fieldwork (radiometric and chemical sampling) allowed an appropriate atmospheric correction of the CASI-2 images.
Keywords: Remote sensing, intertidal sediment, airborne sensors, heavy metals, ecotoxicity, robust statistic, estimation.
786 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract:
We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a convolutional neural network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as a sharpening technique can improve the performance of a CNN model, even when the model is relatively simple.
Keywords: Facial expression recognition, image pre-processing, deep learning, CNN.
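A minimal sketch of the unsharp-mask preprocessing step this abstract describes, applied per image before feeding the CNN; the sigma and amount values are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.0, amount=1.0):
    # sharpened = original + amount * (original - blurred)
    img = img.astype(np.float32)
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)
```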
785 Rejuvenate: Face and Body Retouching Using Image Inpainting
Authors: H. AbdelRahman, S. Rostom, Y. Lotfy, S. Salah Eldeen, R. Yassein, N. Awny
Abstract:
People are growing more concerned with their appearance in today's society, but they are afraid of what they will look like after plastic surgery. People's mental health suffers when accidents, burns, or genetic conditions cause them to lose or disfigure certain body parts, which makes them feel uncomfortable and unappreciated. This work provides an innovative deep learning-based technique for image inpainting that analyzes different picture structures and fixes damaged images. The study proposes a model based on the Stable Diffusion Inpainting method for inpainting medical images. Image inpainting, the process of reconstructing damaged and missing portions of an image, is one significant advancement made possible by deep neural networks. The system takes an image from the user, identifies the problem region, modifies the image, and outputs a fixed image, so the patient can easily preview the outcome.
Keywords: Generative Adversarial Network, GAN, Large Mask Inpainting, LAMA, Stable Diffusion Inpainting.
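For orientation, a typical invocation of Stable Diffusion inpainting via the diffusers library; the checkpoint name, prompt, and file names here are illustrative choices, since the abstract does not state which weights or prompts the system uses.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# One plausible base inpainting checkpoint (an assumption, not the
# paper's stated model).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("face.png").convert("RGB").resize((512, 512))
mask = Image.open("damaged_region_mask.png").convert("L").resize((512, 512))

# White pixels in the mask are repainted; the prompt steers the fill.
result = pipe(prompt="healthy unblemished skin",
              image=image, mask_image=mask).images[0]
result.save("retouched.png")
```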
784 A Dynamic RGB Intensity Based Steganography Scheme
Authors: Mandep Kaur, Surbhi Gupta, Parvinder S. Sandhu, Jagdeep Kaur
Abstract:
Steganography means "covered writing"; it includes the concealment of information within computer files [1]. In other words, it is secret communication that hides the very existence of the message. In this paper, we use the term cover image for an image that does not yet contain a secret message, and stego image for an image with an embedded secret message; the secret message itself is called the stego-message or hidden message. We propose a technique called the RGB intensity based steganography model, the RGB model being the one commonly used in this field to hide data. The methods used here are based on the manipulation of the least significant bits of pixel values [3][4], or on the rearrangement of colors to create least significant bit or parity bit patterns, which correspond to the message being hidden. The proposed technique attempts to overcome the problem of embedding in a sequential fashion by using a stego-key to select the pixels.
Keywords: Steganography, Stego Image, RGB Image, Cryptography, LSB.
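A minimal sequential LSB sketch of the baseline this abstract improves on: one message bit goes into the least significant bit of each 8-bit value. The paper's scheme instead uses a stego-key to pick pixels and channels non-sequentially, which is not reproduced here.

```python
import numpy as np

def embed_lsb(cover, bits):
    # cover: uint8 image array; bits: iterable of 0/1 message bits.
    flat = cover.reshape(-1).copy()
    bits = np.asarray(list(bits), dtype=flat.dtype)
    if bits.size > flat.size:
        raise ValueError("message longer than cover capacity")
    # Clear each target LSB, then write the message bit.
    flat[:bits.size] = (flat[:bits.size] & np.uint8(0xFE)) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    # Recover the first n_bits message bits from the stego image.
    return stego.reshape(-1)[:n_bits] & 1
```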
783 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain. Firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks GAB, GBA and two discriminator networks DA, DB connected by spatial transformation layers. GAB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator DB, which is learning to judge the quality of alignment of the registered image B. GBA and DA learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂ which were registered to B̃, Ã, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.
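A very loose PyTorch sketch of the cycle-consistency term described in this abstract: register each image across modalities, then register the result back and compare with the original. All names (g_ab, g_ba, warp) are illustrative callables, not the paper's API, and the exact pairing of networks in the second pass is an assumption.

```python
import torch
import torch.nn.functional as F

def cycle_loss(img_a, img_b, g_ab, g_ba, warp):
    # g_ab(moving, fixed) predicts a field registering modality B onto A,
    # g_ba the reverse; warp(img, field) applies a deformation field
    # (e.g. via F.grid_sample inside a spatial transformer layer).
    b_reg = warp(img_b, g_ab(img_b, img_a))   # ~B: B registered onto A
    a_reg = warp(img_a, g_ba(img_a, img_b))   # ~A: A registered onto B
    b_cyc = warp(b_reg, g_ba(b_reg, img_b))   # ^B: warped back toward B
    a_cyc = warp(a_reg, g_ab(a_reg, img_a))   # ^A: warped back toward A
    # Same-modality pairs can now be compared directly.
    return F.l1_loss(a_cyc, img_a) + F.l1_loss(b_cyc, img_b)
```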
782 Single Frame Supercompression of Still Images, Video, High Definition TV and Digital Cinema
Authors: Mario Mastriani
Abstract:
Super-resolution nowadays refers to producing a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources, i.e., they may be frames of videos, individual pictures, etc. In the encoder we apply downsampling via bidimensional interpolation of each frame, and in the decoder we apply upsampling to restore the original size of the image. If the compression ratio is very high, we use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented with General-Purpose computation on Graphics Processing Units (GPGPU) cards. In fact, the mentioned mask is coded inside the texture memory of a GPGPU.
Keywords: General-Purpose computation on Graphics Processing Units, Image Compression, Interpolation, Super-resolution.
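A CPU-side sketch of the encode/decode idea in this abstract: downsample by interpolation, upsample back, and optionally convolve with an edge-restoring mask. The 3x3 kernel below is a generic sharpening mask, not the paper's mask, and the GPGPU texture-memory implementation is not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

# Generic 3x3 edge-restoring (sharpening) mask; an assumption, since the
# paper's actual convolutive mask is not given in the abstract.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float64)

def encode(frame, ratio=2):
    # Encoder: downsample the frame by bidimensional interpolation.
    return zoom(frame, 1.0 / ratio, order=1)

def decode(small, ratio=2, deblur=True):
    # Decoder: upsample back to the original size; at high compression
    # ratios, apply the mask to restore edges and reduce blur.
    up = zoom(small, ratio, order=1)
    return convolve(up, SHARPEN, mode="nearest") if deblur else up
```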
781 Color Image Segmentation and Multi-Level Thresholding by Maximization of Conditional Entropy
Authors: R. Sukesh Kumar, Abhisek Verma, Jasprit Singh
Abstract:
In this work, a novel approach for color image segmentation using higher order entropy as a textural feature for the determination of thresholds over a two-dimensional image histogram is discussed. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using RGB space as the standard processing space. The threshold for segmentation is decided by maximization of conditional entropy in the two-dimensional histogram of the color image, separated into three grayscale images of R, G and B. The features are first developed independently for the three (R, G, B) spaces and then combined to obtain the different color component segmentations. Considering local maxima instead of the maximum of conditional entropy yields multiple thresholds for the same image, which forms the basis for multilevel thresholding.
Keywords: conditional entropy, multi-level thresholding, segmentation, two dimensional image histogram.
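A simplified sketch of 2D-histogram entropy thresholding in the spirit of this abstract, applied per channel (R, G, B): build the joint histogram of pixel value and 3x3 local mean, then pick the threshold maximizing the summed entropies of the two diagonal quadrants. This is a generic variant; the paper's higher order conditional entropy feature is not reproduced exactly.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_threshold(gray):
    # 2D histogram of (pixel value, 3x3 local mean), both in 0..255.
    local = uniform_filter(gray.astype(np.float32), size=3)
    hist, _, _ = np.histogram2d(gray.ravel(), local.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    hist /= hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, 256):
        fg, bg = hist[:t, :t], hist[t:, t:]
        h = 0.0
        if fg.sum() > 0:
            h += _entropy(fg / fg.sum())
        if bg.sum() > 0:
            h += _entropy(bg / bg.sum())
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

Scanning for local maxima of the same criterion, instead of the single global maximum, yields the multiple thresholds used for multilevel segmentation.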
780 Retrieving Similar Segmented Objects Using Motion Descriptors
Authors: Konstantinos C. Kartsakalis, Angeliki Skoura, Vasileios Megalooikonomou
Abstract:
The fuzzy composition of objects depicted in images acquired through MR imaging or bio-scanners has often been a point of controversy for field experts attempting to effectively distinguish between the visualized objects. Modern approaches in medical image segmentation tend to consider fuzziness an inherent characteristic of the depicted object, rather than an undesirable trait. In this paper, a novel technique for efficient image retrieval is presented for images in which segmented objects are either crisply or fuzzily bounded. Moreover, the proposed method is applied in the case of multiple, even conflicting, segmentations from field experts. Experimental results demonstrate the efficiency of the suggested method in retrieving similar objects from the aforementioned categories while taking into account the fuzzy nature of the depicted data.
Keywords: Fuzzy Object, Fuzzy Image Segmentation, Motion Descriptors, MRI Imaging, Object-Based Image Retrieval.
779 Low Dimensional Representation of Dorsal Hand Vein Features Using Principal Component Analysis (PCA)
Authors: M. Heenaye-Mamode Khan, R. K. Subramanian, N. A. Mamode Khan
Abstract:
The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has attracted the attention of many researchers of late. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
Keywords: Biometric, Dorsal vein pattern, PCA.
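A compact PCA sketch of the eigenvein idea in this abstract. The matching step below uses cosine similarity against the reported 0.9 threshold; the abstract does not name the similarity measure, so that choice is an assumption.

```python
import numpy as np

def eigenveins(train_patterns, k=20):
    # PCA on vectorized vein patterns: mean vector plus the top-k
    # principal directions ("eigenveins").
    X = train_patterns.reshape(len(train_patterns), -1).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(pattern, mean, basis):
    # Low-dimensional representation of a single vein pattern.
    return basis @ (pattern.ravel() - mean)

def match(query_feat, gallery_feats, threshold=0.9):
    # Cosine similarity against enrolled templates (assumed measure).
    sims = gallery_feats @ query_feat / (
        np.linalg.norm(gallery_feats, axis=1)
        * np.linalg.norm(query_feat) + 1e-12)
    best = int(np.argmax(sims))
    return best, bool(sims[best] >= threshold)
```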
778 Sub-Image Detection Using Fast Neural Processors and Image Decomposition
Authors: Hazem M. El-Bakry, Qiangfu Zhao
Abstract:
In this paper, an approach to reduce the computation steps required by fast neural networks for the searching process is presented. The principle of the divide and conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. In contrast to using only fast neural networks, the speed-up ratio increases with the size of the input image when using fast neural networks and image decomposition.
Keywords: Fast Neural Networks, 2D-FFT, Cross Correlation, Image Decomposition, Parallel Processing.
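The core speed trick named in this abstract, cross correlation computed in the frequency domain, in a minimal sketch (circular boundary handling; production code would pad appropriately):

```python
import numpy as np

def fft_cross_correlation(image, weights):
    # Cross-correlate the whole (sub-)image with a neuron's weight kernel
    # via the FFT: one elementwise multiply replaces a sliding dot product
    # at every pixel location.
    kernel = np.zeros_like(image, dtype=np.float64)
    kernel[:weights.shape[0], :weights.shape[1]] = weights
    spec = np.fft.fft2(image) * np.conj(np.fft.fft2(kernel))
    return np.real(np.fft.ifft2(spec))
```

Decomposing the input into sub-images then lets several such correlations run in parallel, which is where the reported speed-up ratio comes from.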
777 Boundary Segmentation of Microcalcification using Parametric Active Contours
Authors: Abdul Kadir Jumaat, Siti Salmah Yasiran, Wan Eny Zarina Wan Abd Rahman, Aminah Abdul Malek
Abstract:
A mammography image is composed of low contrast areas where the breast tissues and breast abnormalities such as microcalcifications can hardly be differentiated by the medical practitioner. This paper presents the application of active contour models (snakes) for the segmentation of microcalcifications in mammography images. The microcalcification areas segmented by the Balloon Snake, Gradient Vector Flow (GVF) Snake, and Distance Snake are compared against the true value of the microcalcification area, defined as the average microcalcification area in the original mammography image traced by expert radiologists. Over fifty test images, the accuracies of the Balloon Snake, GVF Snake, and Distance Snake in segmenting microcalcification boundaries are 96.01%, 95.74%, and 95.70%, respectively. This implies that the Balloon Snake is a better segmentation method for locating the exact boundary of a microcalcification region.
Keywords: Balloon Snake, GVF Snake, Distance Snake, Mammogram, Microcalcifications, Segmentation
776 Wavelet-Based Classification of Outdoor Natural Scenes by Resilient Neural Network
Authors: Amitabh Wahi, Sundaramurthy S.
Abstract:
Natural outdoor scene classification is an active and promising research area around the globe. In this study, the classification is carried out in two phases. In the first phase, features are extracted from the images by the wavelet decomposition method and stored in a database as feature vectors. In the second phase, neural classifiers, namely the back-propagation neural network (BPNN) and the resilient back-propagation neural network (RPNN), are employed for the classification of scenes. Four hundred color images from the MIT database are considered, covering two classes: forest and street. A comparative study has been carried out on the performance of the two neural classifiers with an increasing number of test samples. RPNN showed better classification results than BPNN on large test samples.
Keywords: BPNN, Classification, Feature extraction, RPNN, Wavelet.
775 Impulse Noise Reduction in Brain Magnetic Resonance Imaging Using Fuzzy Filters
Authors: Benjamin Y. M. Kwan, Hon Keung Kwan
Abstract:
Noise contamination in a magnetic resonance (MR) image can occur during acquisition, storage, and transmission, so effective filtering is required to avoid repeating the MR procedure. In this paper, an iterative asymmetrical triangle fuzzy filter with moving average center (ATMAVi filter) is used to reduce different levels of salt and pepper noise in a brain MR image. Besides visual inspection of the filtered images, the mean squared error (MSE) is used as an objective measurement. When compared with the median filter, simulation results indicate that the ATMAVi filter is effective especially for filtering higher-level noise (such as noise density = 0.45), using a smaller window size (such as 3x3) when operated iteratively, or a larger window size (such as 5x5) when operated non-iteratively.
Keywords: Brain images, Fuzzy filters, Magnetic resonance imaging, Salt and pepper noise reduction.
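To make the filter family concrete, here is a sketch of a symmetric triangular fuzzy filter with moving average center: window values are weighted by a triangular membership function peaking at the window mean. The paper's ATMAVi filter uses an asymmetrical triangle, which this simplified version only approximates.

```python
import numpy as np

def triangle_fuzzy_filter(img, size=3, iters=2):
    # Iterative triangular fuzzy filtering (symmetric sketch of ATMAVi).
    pad = size // 2
    out = img.astype(np.float64)
    for _ in range(iters):
        padded = np.pad(out, pad, mode="edge")
        nxt = np.empty_like(out)
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                win = padded[y:y + size, x:x + size]
                center = win.mean()                       # moving average center
                spread = max(np.abs(win - center).max(), 1e-9)
                w = 1.0 - np.abs(win - center) / spread   # triangular membership
                nxt[y, x] = (w * win).sum() / w.sum()
        out = nxt
    return out
```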
774 Object Speed Estimation by using Fuzzy Set
Authors: Hossein Pazhoumand-Dar, Amir Mohsen Toliyat Abolhassani, Ehsan Saeedi
Abstract:
Speed estimation is one of the important and practical tasks in machine vision, robotics, and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts, exploiting the direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blur and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images with a wide range of SNRs and gives satisfactory results.
Keywords: Blur Analysis, Fuzzy sets, Speed estimation.
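A sketch of the direction-finding step this abstract describes, under a standard reading: the log power spectrum of a linearly motion-blurred image contains parallel ridges, and the Radon projection of the spectrum has maximal variance along their orientation. The fuzzy estimation of blur length (and its conversion to speed) is not reproduced here.

```python
import numpy as np
from skimage.transform import radon

def blur_direction(gray):
    # Log power spectrum makes the blur-induced ridges visible.
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    angles = np.arange(180, dtype=np.float64)
    sinogram = radon(spectrum, theta=angles, circle=False)
    # The projection aligned with the ridges varies the most.
    return angles[np.argmax(sinogram.var(axis=0))]  # degrees
```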
773 Assessment of Urban Heat Island through Remote Sensing in Nagpur Urban Area Using Landsat 7 ETM+ Satellite Images
Authors: Meenal Surawar, Rajashree Kotharkar
Abstract:
The Urban Heat Island (UHI) has become a pronounced urban environmental concern in developing cities. To study the UHI effect in the Indian context, the Nagpur urban area is explored in this paper using Landsat 7 ETM+ satellite images through remote sensing and GIS techniques. The paper studies the effect of the LU/LC pattern on daytime Land Surface Temperature (LST) variation, which contributes to UHI formation within the Nagpur urban area. Supervised LU/LC classification was carried out to study urban change detection using ENVI 5. Change detection was studied by computing the Normalized Difference Vegetation Index (NDVI) to understand the proportion of vegetative cover with respect to the built-up ratio. The spectral radiance from the thermal band of the satellite images was processed to calibrate LST. Specific representative areas, selected on the basis of the urban built-up and vegetation classification, were used for observation of point LST. Across the entire Nagpur urban area, as building density increases and vegetation cover decreases, LST increases, thereby causing the UHI effect. UHI intensity gradually increased by 0.7°C from 2000 to 2006; however, a drastic increase of 1.8°C was observed during the period 2006 to 2013.
Keywords: Land use, land cover, land surface temperature, remote sensing, urban heat island.
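The NDVI computation used for the change detection step is standard; for Landsat 7 ETM+, Red is band 3 and NIR is band 4:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense
    # vegetation, values near 0 or below indicate built-up/bare surfaces.
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / np.maximum(nir + red, 1e-6)
```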
772 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014
Authors: R. Mofeed, N. Elgendy
Abstract:
Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, cinema introduces comprehensive images of life phenomena in different ways. The term "represent" suggests a variety of meanings: to bring into presence, to replace, or to typify. In that sense, cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images that Egyptian cinema introduces of informal urbanism and how these images have been conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was completely ignored by the state and society until the eighties, and its evident representation in cinema came only by the mid-nineties. The informal city comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional and marginal through the cinematic lens. This paper aims at tracing the forms of representation of urban informalities in Egyptian cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper runs two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films; this analysis is based on academic research work at Cairo University in Fall 2014. The visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying and repetition in the representation of this huge sector of the city across the more than 50 films investigated. The analysis clearly revealed the "portrayed image" of informality in cinema through the examined period. The second part of the paper explores the "perceived image". A designed questionnaire is applied to highlight the main features of that image as perceived by both inhabitants of informalities and other Cairenes, based on watching selected films. The questionnaire covers the different images of informalities proposed in cinema, whether in a comic or a melodramatic setting, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affect mental images. The "portrayed" and "perceived" images are then confronted to reflect on issues of repetition, stereotyping and reality. The formulated stereotype of informal urbanism is finally outlined and justified in relation to both the production and consumption mechanisms of films and the state's official vision of informalities.
Keywords: Cairo, cinema, informal urbanism, representation, stereotype.
771 A Middleware Transparent Framework for Applying MDA to SOA
Authors: Ali Taee Zade, Siamak Rasulzadeh, Reza Torkashvan
Abstract:
Although Model Driven Architecture (MDA) has taken successful steps toward model-based software development, this approach still faces complex situations and ambiguous questions when applied to real-world software systems. One of these questions, which has attracted the most interest and focus, is how models transform between the different abstraction levels MDA proposes. In this paper, we propose an approach based on Story Driven Modeling and Aspect Oriented Programming to ease these transformations. Service Oriented Architecture (SOA) is taken as the target model to test the proposed mechanism in a functional system. SOA and MDA [1] are both considered frontiers of their respective domains in the software world. Following components, which were the greatest step after object orientation, SOA was introduced, focusing on more integrated and automated software solutions. On the other hand, and from the designers' point of view, MDA is just initiating another evolution: it is considered the next big step after UML in the design domain.
Keywords: SOA, MDA, SDM, Model Transformation, Middleware Transparency, Aspects and Jini.
770 Face Detection using Variance based Haar-Like feature and SVM
Authors: Cuong Nguyen Khac, Ju H. Park, Ho-Youl Jung
Abstract:
This paper proposes a new approach to the problem of real-time face detection. The proposed method combines the primitive Haar-like feature with a variance value to construct a new feature, the so-called variance based Haar-like feature. Using this new feature, a face in an image can be represented with a small number of features. We used SVM instead of AdaBoost for training and classification. For learning purposes, we built a database containing 5,000 face samples and 10,000 non-face samples extracted from real images; the 5,000 face samples include many images captured under widely differing lighting conditions. Experiments showed that a face detection system using the variance based Haar-like feature and SVM can be much more efficient than one using the primitive Haar-like feature and AdaBoost. We tested our method on two face databases and one non-face database, obtaining a correct detection rate of 96.17% on the YaleB face database, which is 4.21% higher than that of the primitive Haar-like feature with AdaBoost.
Keywords: AdaBoost, Haar-Like feature, SVM, variance, Variance based Haar-Like feature.
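A sketch of the two ingredients this abstract combines, each computable in O(1) per window via integral images: a two-rectangle Haar response and the window variance. How the paper fuses the two values into a single feature is not given in the abstract, so they are returned separately here.

```python
import numpy as np

def integral_image(img):
    # Zero-padded integral image: ii[y, x] = sum of img[:y, :x].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum over img[y0:y1, x0:x1] in constant time.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def variance_haar(img, y0, x0, y1, x1):
    ii = integral_image(img)
    ii2 = integral_image(img.astype(np.float64) ** 2)
    n = (y1 - y0) * (x1 - x0)
    xm = (x0 + x1) // 2
    # Primitive two-rectangle Haar response: left half minus right half.
    haar = box_sum(ii, y0, x0, y1, xm) - box_sum(ii, y0, xm, y1, x1)
    # Window variance from the integral images of img and img**2.
    mean = box_sum(ii, y0, x0, y1, x1) / n
    var = box_sum(ii2, y0, x0, y1, x1) / n - mean ** 2
    return haar, var
```

Feature vectors of this kind would then train an SVM (e.g. sklearn.svm.SVC) in place of an AdaBoost cascade.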
769 Computer Aided Detection on Mammography
Authors: Giovanni Luca Masala
Abstract:
A typical definition of Computer Aided Diagnosis (CAD) found in the literature is: a diagnosis made by a radiologist using the output of a computerized scheme for automated image analysis as a diagnostic aid. Often the expression Computer Aided Detection (CAD or CADe) is used instead: this definition emphasizes the intent of CAD to support, rather than substitute, the human observer in the analysis of radiographic images. In this article we illustrate the application of CAD systems and the aim of these definitions. Commercially available CAD systems use computerized algorithms for identifying suspicious regions of interest. This paper describes a general CAD system as an expert system constituted of the following components: segmentation/detection, feature extraction, and classification/decision making. As an example, this work shows the realization of a Computer-Aided Detection system able to assist the radiologist in identifying types of mammary tumor lesions. Furthermore, this prototype station uses a GRID configuration to work on a large distributed database of digitized mammographic images.
Keywords: Computer Aided Detection, Computer Aided Diagnosis, mammography, GRID.
768 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications, and storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and used to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, which are then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
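A sketch of one of the evaluated block transforms, the Vandermonde-style polynomial fit: a degree-2 surface z = p(x, y) keeps 6 coefficients per block instead of 16 pixels. The degree and fitting method are illustrative; the paper compares several variants.

```python
import numpy as np

def fit_block(block, degree=2):
    # Least-squares fit of z = p(x, y) over one 4x4 block using a
    # Vandermonde-style design matrix of monomials x^i * y^j (i + j <= deg).
    xs, ys = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([xs.ravel() ** i * ys.ravel() ** j for i, j in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64),
                                 rcond=None)
    return coeffs, A

def reconstruct_block(coeffs, A):
    # Approximated 4x4 block recovered from the transmitted coefficients.
    return (A @ coeffs).reshape(4, 4)
```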
767 Probabilistic Bayesian Framework for Infrared Face Recognition
Authors: Moulay A. Akhloufi, Abdelhakim Bendada
Abstract:
Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in infrared are based on their visible-spectrum counterparts, especially linear techniques like PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum, where variations can occur between face images of the same individual due to pose, metabolic and time changes, etc. Bayesian approaches help reduce intrapersonal variation, making them very interesting for infrared face recognition. This framework is compared with classical linear techniques. Nonlinear techniques we recently developed for infrared face recognition are also presented and compared to the Bayesian framework. A new approach for infrared face extraction based on SVM is introduced. Experimental results show that the Bayesian technique is promising and leads to interesting results in the infrared spectrum when a sufficient number of face images is used in the intrapersonal learning process.
Keywords: Face recognition, biometrics, probabilistic image processing, infrared imaging.
766 An Improved C-Means Model for MRI Segmentation
Authors: Ying Shen, Weihua Zhu
Abstract:
Medical images are important in identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step, finding regions with similar color, intensity or texture so that diagnosis can be carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model uses information entropy to evaluate segmentation results while achieving global optimization. Two contributions are significant. Firstly, a Genetic Algorithm (GA) is used for achieving global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of. Secondly, the information entropy after segmentation is used for measuring the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
Keywords: Magnetic Resonance Image, C-means model, image segmentation, information entropy.
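One plausible reading of the entropy-based evaluation this abstract mentions, sketched below: Shannon entropy of the fraction of pixels assigned to each cluster. The paper's exact entropy definition is not given in the abstract, so this is an assumption; a GA would optimize such a fitness over candidate cluster centers.

```python
import numpy as np

def segmentation_entropy(labels):
    # Shannon entropy of the cluster-occupancy distribution of a labeled
    # segmentation (labels: integer cluster id per pixel).
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))
```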
765 A Novel Reversible Watermarking Method based on Adaptive Thresholding and Companding Technique
Authors: Nisar Ahmed Memon
Abstract:
Embedding and extraction of secret information, as well as restoration of the original un-watermarked image, is highly desirable in sensitive applications like military, medical, and law enforcement imaging. This paper presents a novel reversible data-hiding method for digital images, using an integer-to-integer wavelet transform and a companding technique, which can embed and recover the secret information and restore the image to its pristine state. The method takes advantage of block-based watermarking and iterative optimization of the companding threshold, which avoids histogram pre- and post-processing. Consequently, it reduces the overhead usually required in most reversible watermarking techniques and keeps the distortion between the marked and original images small. Experimental results show that the proposed method outperforms the existing reversible data hiding schemes reported in the literature.
Keywords: Adaptive Thresholding, Companding Technique, Integer Wavelet Transform, Reversible Watermarking.