Search results for: Analysis of image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10707

9807 Radio Frequency Identification Technology Applied to High-Voltage Power Transmission Lines for Sag Measurement

Authors: Tlotlollo Sidwell Hlalele, Shengzhi Du

Abstract:

High-voltage power transmission lines are the backbone of electrical power utilities. The stability and continuous monitoring of this critical infrastructure are pivotal. Nine-Sigma, representing Eskom Holding SOC Limited, South Africa, has a major problem with the proactive detection of fallen power lines and the real-time measurement of conductor sag and slippage. The main objective of this research is to apply RFID technology to solve this challenge. Various options and technologies, such as GPS, PLC, image processing and MR sensors, have been reviewed and their drawbacks identified. The potential of RFID to give precision measurement will be examined and presented. Future research will look at magnetic and electrical interference as well as the corona effect on the technology.

Keywords: Precision Measurement, RFID and Sag.

9806 Robust Minutiae Watermarking in Wavelet Domain for Fingerprint Security

Authors: Rajlaxmi Chouhan, Pritee Khanna

Abstract:

In this manuscript, a wavelet-based blind watermarking scheme is proposed as a means to secure the authenticity of a fingerprint. The information used for identification or verification of a fingerprint mainly lies in its minutiae. By robustly watermarking the minutiae into the fingerprint image itself, the useful information can be extracted accurately even if the fingerprint is severely degraded. The minutiae are converted into a binary watermark, and embedding this watermark in the detail regions increases the robustness of the watermarking with little to no additional impact on image quality. It is shown experimentally that when the minutiae are embedded into the wavelet detail coefficients of a fingerprint image in spread-spectrum fashion using a pseudorandom sequence, robustness increases in proportion to the amplification factor K, while perceptual invisibility decreases. The DWT-based technique is found to be very robust against noise, geometric distortion, filtering and JPEG compression attacks, and it also gives remarkably better performance than a DCT-based technique in terms of correlation coefficient and number of erroneous minutiae.
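
As an illustration of the embedding described above, here is a minimal spread-spectrum sketch using PyWavelets; the wavelet choice, amplification factor K and PN seed are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of spread-spectrum watermark embedding in DWT detail
# coefficients (illustrative parameters; not the paper's exact scheme).
import numpy as np
import pywt

def embed_minutiae_watermark(image, watermark_bits, K=2.0, seed=42):
    """Embed binary watermark bits into the diagonal detail band."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(seed)
    flat = cD.ravel().copy()
    for bit in watermark_bits:
        pn = rng.choice([-1.0, 1.0], size=flat.size)   # pseudorandom sequence
        flat += K * (1.0 if bit else -1.0) * pn        # spread-spectrum add
    cD = flat.reshape(cD.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def detect_bits(marked_image, n_bits, seed=42):
    """Blind detection: sign of correlation with each regenerated PN."""
    _, (_, _, cD) = pywt.dwt2(marked_image.astype(float), 'haar')
    rng = np.random.default_rng(seed)
    flat = cD.ravel()
    bits = []
    for _ in range(n_bits):
        pn = rng.choice([-1.0, 1.0], size=flat.size)
        bits.append(int(np.dot(flat, pn) > 0))
    return bits
```

Detection reuses the same seed, so the scheme stays blind: no original image is needed, only the PN generator state.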

Keywords: Fingerprint watermarking, minutiae, discrete wavelet transform, PN sequence

9805 Processing Medical Sensor Signals Using a Fuzzy Inference System

Authors: S. Bouharati, I. Bouharati, C. Benzidane, F. Alleg, M. Belmahdi

Abstract:

Sensors capture several kinds of physical measures. Whether they are devices that convert a sensed signal into an electrical signal, chemical sensors or biosensors, all of them can be considered an interface between the physical world and electrical equipment. The problem is the analysis of the multitude of saved settings as input variables, for they do not all have the same level of influence on the outputs. Identifying the most sensitive parameters can guide users in gathering information in the field and in model calibration, and sensitivity analysis shows the effect of each change made; however, the mathematical models used for such processing become very complex. In this paper, a fuzzy rule-based system is proposed as a solution to this problem. The system collects the available signal information from the sensors and allows the study of the influence of the various factors that take part in the decision system. Since its inception, fuzzy set theory has been regarded as a formalism suitable for dealing with the imprecision intrinsic to many problems, and fuzzy sets allow the use of symbolic models. In this study, the approach is applied to a variety of physiological parameters that define the human health state, as an aid to medical diagnosis. The inputs are signals expressing cardiovascular system parameters, such as blood pressure, and respiratory system parameters. Once the system is built, it is able to predict the state of the patient for any input values.
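
As a toy illustration of such a fuzzy rule-based mapping from sensor readings to a diagnostic output, the following sketch uses invented triangular membership ranges and two rules; the real system's variables and rule base are not specified in the abstract.

```python
# Minimal Mamdani-style fuzzy inference sketch (illustrative membership
# functions and rules, not the authors' calibrated system).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def diagnose(systolic_bp, resp_rate):
    risk = np.linspace(0.0, 1.0, 101)          # output universe: risk score
    # Rule 1: IF BP is high AND respiration is fast THEN risk is high
    w1 = min(tri(systolic_bp, 130, 160, 190), tri(resp_rate, 20, 28, 36))
    # Rule 2: IF BP is normal AND respiration is normal THEN risk is low
    w2 = min(tri(systolic_bp, 90, 115, 140), tri(resp_rate, 10, 15, 22))
    # Clip the output sets by the rule strengths and aggregate by max
    agg = np.maximum(np.minimum(w1, tri(risk, 0.5, 0.8, 1.0)),
                     np.minimum(w2, tri(risk, 0.0, 0.2, 0.5)))
    # Centroid defuzzification
    return float((risk * agg).sum() / agg.sum()) if agg.sum() else 0.5

print(diagnose(systolic_bp=150, resp_rate=30))   # leans toward high risk
```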

Keywords: Sensors, Sensitivity, fuzzy logic, analysis, physiological parameters, medical diagnosis.

9804 On Generalizing Rough Set Theory via a Filter

Authors: Serkan Narlı, Ahmet Z. Ozcelik

Abstract:

The theory of rough sets is generalized by using a filter. The filter is induced by binary relations and is used to generalize the basic rough set concepts. Knowledge representation and the processing of binary relations in the style of rough set theory are investigated.
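
For concreteness, the standard relation-based approximations that such a filter construction generalizes can be written as follows; the second pair is one plausible filter-based generalization, not necessarily the paper's exact definition.

```latex
% Relation-based rough approximations. R \subseteq U \times U is a binary
% relation with successor neighbourhood R(x) = \{ y \in U : (x,y) \in R \}.
\underline{R}(A) = \{\, x \in U : R(x) \subseteq A \,\}, \qquad
\overline{R}(A)  = \{\, x \in U : R(x) \cap A \neq \emptyset \,\}.
% One way to generalize: replace R(x) by a filter \mathcal{F}_x of subsets
% of U (induced by the relation), giving
\underline{\mathcal{F}}(A) = \{\, x \in U : \exists F \in \mathcal{F}_x,\ F \subseteq A \,\}, \qquad
\overline{\mathcal{F}}(A)  = \{\, x \in U : \forall F \in \mathcal{F}_x,\ F \cap A \neq \emptyset \,\}.
```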

Keywords: Rough set, fuzzy set, membership function, knowledge representation and processing, information theory

9803 Face Tracking using a Polling Strategy

Authors: Rodrigo Montufar-Chaveznava

Abstract:

The colors of human skin form a special category of colors, distinctive from the colors of other natural objects. This category appears as a cluster in color spaces, and the skin color variations between people are mostly due to differences in intensity. Moreover, face detection based on skin color is fast compared to other techniques. In this work, we present a system that tracks faces by carrying out skin color detection in four different color spaces: HSI, YCbCr, YES and RGB. Once skin color regions have been detected in each color space, we label each region and extract characteristics such as size and position, under the assumption that a face is located in one of the detected regions. Next, we compare the labeled regions and employ a polling strategy among them to determine the final region where the face has effectively been detected and located.
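
A minimal sketch of the polling idea, using two classic published skin-color rules (RGB and YCbCr); the HSI and YES rules, and the exact thresholds used by the authors, are not given in the abstract.

```python
# Per-color-space skin detection followed by a majority poll
# (two of the four spaces shown; thresholds are classic published rules).
import numpy as np

def skin_rgb(img):
    """img: H x W x 3, uint8, RGB order."""
    r, g, b = [img[..., i].astype(int) for i in range(3)]
    return ((r > 95) & (g > 40) & (b > 20) &
            (img.max(-1).astype(int) - img.min(-1) > 15) &
            (abs(r - g) > 15) & (r > g) & (r > b))

def skin_ycbcr(img):
    r, g, b = [img[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def poll_skin(img):
    votes = [skin_rgb(img), skin_ycbcr(img)]   # HSI/YES votes would be added here
    # Strict majority poll across the color spaces
    return np.sum(votes, axis=0) >= (len(votes) // 2 + 1)
```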

Keywords: Tracking, face detection, image processing, color spaces.

9802 Variance Based Component Analysis for Texture Segmentation

Authors: Zeinab Ghasemi, S. Amirhassan Monadjemi, Abbas Vafaei

Abstract:

This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation towards defect detection. The proposed scheme, called Variance Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of eigenpictures, and classifies pixels as defective or normal. While classic PCA uses a clusterer such as k-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. The experimental results show that VBCA is 12.46% more accurate and 78.85% faster than classic PCA.
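
A rough sketch of the VBCA pipeline as described: PCA on patch vectors, variance-based selection of eigenpictures, then thresholding instead of k-means. Window size and thresholds are illustrative assumptions.

```python
# PCA feature extraction + variance-based reduction + thresholding
# (illustrative parameters; not the paper's tuned values).
import numpy as np

def vbca_defect_map(gray, win=8, var_keep=0.95, thresh=3.0):
    h, w = gray.shape
    patches = np.stack([gray[i:i + win, j:j + win].ravel()
                        for i in range(0, h - win + 1, win)
                        for j in range(0, w - win + 1, win)]).astype(float)
    X = patches - patches.mean(axis=0)
    # PCA via eigen-decomposition of the patch covariance
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Keep the eigenpictures explaining var_keep of the variance
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_keep)) + 1
    scores = X @ vecs[:, :k]
    # Threshold the projection energy instead of running k-means
    energy = (scores ** 2).sum(axis=1)
    z = (energy - energy.mean()) / energy.std()
    return z > thresh          # True = patch flagged as defective
```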

Keywords: Principal Component Analysis; Variance Based Component Analysis; Defect Detection; Texture Segmentation.

9801 Scene Adaptive Shadow Detection Algorithm

Authors: Mohammed Ibrahim M, Anupama R.

Abstract:

Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equalling the computational requirement of motion detection. In this paper, the homogeneity property of shadows is explored in a novel way for shadow detection. An adaptive division-image analysis (which highlights the homogeneity property of shadows) followed by a relatively simple projection histogram analysis for penumbra suppression is the key novelty of our approach.
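
A minimal sketch of the division-image idea: a shadow darkens the background multiplicatively and roughly uniformly, so the frame-to-background ratio is both reduced and locally homogeneous there. The band limits and variance threshold below are illustrative.

```python
# Division-image shadow candidate mask with a local homogeneity check
# (grayscale frames assumed; thresholds are illustrative).
import numpy as np

def shadow_mask(frame, background, low=0.4, high=0.9, eps=1e-6):
    ratio = frame.astype(float) / (background.astype(float) + eps)
    candidate = (ratio > low) & (ratio < high)     # darker, but not black
    # Homogeneity: variance of the division image over a 3x3 neighbourhood
    pad = np.pad(ratio, 1, mode='edge')
    h, w = ratio.shape
    win = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return candidate & (win.var(axis=0) < 0.01)
```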

Keywords: homogeneity, penumbra, projection histogram, shadow correction

9800 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that have occurred or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet are developed; information in Romanian is of special interest to us. Obtaining the mentioned tools requires several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles, constituting more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which helps in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets is used; we deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) analysis, simulation, state space analysis and invariant analysis, were used. These modules helped us obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system dynamics.
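
A minimal sketch of the first stage, matching a text against a controlled vocabulary; the category names and Romanian keywords below are invented placeholders for the actual 300-keyword lexicon.

```python
# Controlled-vocabulary classification of a news text into a disaster
# category (placeholder lexicon; the real one has 10 categories).
import re

VOCABULARY = {
    "fire": {"incendiu", "fum", "ars"},
    "flood": {"inundatie", "viitura"},
}

def classify(text):
    tokens = set(re.findall(r"\w+", text.lower()))
    scores = {cat: len(tokens & kws) for cat, kws in VOCABULARY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # None = no disaster signal
```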

Keywords: Lexicon of disasters, modelling, Petri nets, text annotation, social disasters.

9799 Optical Analysis of Variable Aperture Mechanism for a Solar Reactor

Authors: Akanksha Menon, Nesrin Ozalp

Abstract:

Solar energy is not only sustainable but also a clean alternative source of high-temperature heat for many processes and for power generation. However, the major drawback of solar energy is its transient nature. In solar thermochemical processing especially, it is crucial to maintain constant or semi-constant temperatures inside the solar reactor. In our laboratory, we have developed a mechanism that allows us to achieve a semi-constant temperature inside the solar reactor. In this paper, we introduce the concept along with some updated designs and provide an optical analysis of the concept under various incoming fluxes.

Keywords: Aperture, Solar reactor, Optical analysis, Solar thermal

9798 Plasma Chemical Gasification of Solid Fuel with Mineral Mass Processing

Authors: V. E. Messerle, O. A. Lavrichshev, A. B. Ustimenko

Abstract:

The article presents a plasma chemical technology for processing solid fuels, using bituminous and brown coals as examples. Thermodynamic and experimental investigations of the technology were made. The technology allows producing synthesis gas from the coal organic mass and valuable components (technical silicon, ferrosilicon, aluminum and silicon carbide, as well as trace rare metals such as uranium, molybdenum and vanadium) from the mineral mass. The high-calorific synthesis gas thus produced can be used for the synthesis of methanol, as a high-calorific reducing gas instead of blast-furnace coke, and as a power gas for thermal power plants.

Keywords: Gasification, mineral mass, organic mass, plasma, processing, solid fuel, synthesis gas, valuable components.

9797 Hybrid Color-Texture Space for Image Classification

Authors: Hassan El Maia, Ahmed Hammouch, Driss Aboutajdine

Abstract:

This work presents an approach to the construction of a hybrid color-texture space by using mutual information. Feature extraction is done with Laws filters, with an SVM (Support Vector Machine) as classifier. The classification is applied to the VisTex database and to a SPOT HRV (XS) image representing two forest areas in the region of Rabat, Morocco. The classification result obtained in the hybrid space is compared with the one obtained in the RGB color space.
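
A sketch of the selection step using scikit-learn's mutual information estimator as a stand-in for the paper's procedure; the feature layout and the number of retained components are assumptions.

```python
# Build a hybrid space by keeping the color/texture features that carry
# the most mutual information about the class labels, then train an SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def hybrid_space(features, labels, n_keep=10):
    """features: (n_pixels, n_color_plus_texture); labels: (n_pixels,)."""
    mi = mutual_info_classif(features, labels, random_state=0)
    keep = np.argsort(mi)[::-1][:n_keep]     # most informative components
    clf = SVC(kernel='rbf').fit(features[:, keep], labels)
    return keep, clf
```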

Keywords: Color, texture, laws filter, mutual information, SVM, hybrid space.

9796 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload

Authors: V. Vicente E. Mujica, Gustavo Gonzalez

Abstract:

The global coverage of broadband multimedia and Internet-based services over terrestrial-satellite networks is of particular interest to satellite providers aiming to deliver services with low latency and high signal quality to diverse users. In particular, on-board processing delay is an inherent source of latency in a satellite communication link that is sometimes omitted from the end-to-end delay budget. The framework of this paper covers the modelling of an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained by a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the derived delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy with respect to the characteristics of the initial processing-delay data points for the Ku band.
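
A sketch of the interpolation comparison on synthetic scattered delay samples (SciPy's griddata with three methods); the real payload data and the resulting discretization function are of course specific to the paper.

```python
# Compare spatial interpolation methods over scattered delay samples
# (synthetic data standing in for the GEO payload measurements).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (200, 2))                               # sample locations
delay = 5 + 2 * np.sin(4 * pts[:, 0]) + rng.normal(0, 0.1, 200)  # delay in ms

grid_x, grid_y = np.mgrid[0:1:100j, 0:1:100j]
for method in ("nearest", "linear", "cubic"):
    est = griddata(pts, delay, (grid_x, grid_y), method=method)
    print(method, np.nanmean(est))   # NaN outside the convex hull is skipped
```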

Keywords: Terrestrial-satellite networks, latency, on-orbit satellite payload, simulation.

9795 Reduced Dynamic Time Warping for Handwriting Recognition Based on Multidimensional Time Series of a Novel Pen Device

Authors: Muzaffar Bashir, Jürgen Kempf

Abstract:

The purpose of this paper is to present a Dynamic Time Warping (DTW) technique which significantly reduces the data processing time and memory size for multi-dimensional time series sampled by the biometric smart pen device BiSP. The acquisition device is a novel ballpoint pen equipped with a diversity of sensors for monitoring the kinematics and dynamics of handwriting movement. The DTW algorithm has been applied to time series analysis of five sensor channels providing pressure, acceleration and tilt data of the pen generated during handwriting on a paper pad. However, standard DTW has processing-time and memory-space requirements that limit its practical use for online handwriting recognition. To address this problem, DTW has been applied to the sum of the five sensor signals after an adequate down-sampling of the data. Preliminary results show that processing time and memory size can be significantly reduced without deterioration of performance in single-character and word recognition. Furthermore, excellent recognition accuracy was achieved, mainly due to the reduced dynamic time warping (RDTW) technique and the novel pen device BiSP.
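
A minimal sketch of the reduction: sum the five channels, down-sample, then run the standard O(nm) DTW on the much shorter 1-D series. The down-sampling factor is an illustrative assumption.

```python
# Standard DTW plus the channel-sum and down-sampling reduction.
import numpy as np

def dtw(a, b):
    """Classic O(n*m) dynamic time warping distance for 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def reduced_dtw(x, y, factor=4):
    """x, y: (n_samples, 5) sensor arrays -> RDTW distance."""
    xs = x.sum(axis=1)[::factor]    # sum the five channels, then down-sample
    ys = y.sum(axis=1)[::factor]
    return dtw(xs, ys)
```

Summing to one channel and down-sampling by 4 shrinks the DTW cost matrix by roughly two orders of magnitude, which is where the time and memory savings come from.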

Keywords: Biometric character recognition, biometric person authentication, biometric smart pen BiSP, dynamic time warping DTW, online-handwriting recognition, multidimensional time series.

9794 Video Matting based on Background Estimation

Authors: J.-H. Moon, D.-O Kim, R.-H. Park

Abstract:

This paper presents a video matting method which extracts the foreground and an alpha matte from a video sequence. The objective of video matting is to find the foreground and composite it with a background different from the one in the original image. By finding motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas do not move through all frames of an image sequence, thus we accumulate moving regions through the sequence. The boundaries of moving regions are found by a Canny edge detector and the foreground region is separated in each frame of the sequence; the remaining regions are defined as background. Extracted backgrounds from each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with an FD larger than a threshold are defined as foreground regions, the boundaries of foreground regions are defined as unknown regions, and the rest are defined as background. Segmentation information that classifies an image into foreground, background and unknown regions is called a trimap. The matting process can extract an alpha matte in the unknown region using pixel information from the foreground and background regions, and estimate the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient for extracting a foreground automatically and compositing it with a background different from the original one.
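
A minimal sketch of the trimap construction from the estimated background; the FD threshold and the width of the unknown band are illustrative assumptions.

```python
# Frame-difference trimap against an estimated background
# (grayscale frames assumed; illustrative threshold and band width).
import numpy as np
from scipy import ndimage

def make_trimap(frame, background, thresh=25, band=5):
    fd = np.abs(frame.astype(float) - background.astype(float))
    fg = fd > thresh                                   # large FD = foreground
    dilated = ndimage.binary_dilation(fg, iterations=band)
    eroded = ndimage.binary_erosion(fg, iterations=band)
    trimap = np.zeros(frame.shape, dtype=np.uint8)     # 0   = background
    trimap[dilated] = 128                              # 128 = unknown band
    trimap[eroded] = 255                               # 255 = sure foreground
    return trimap
```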

Keywords: Background estimation, Object segmentation, Block matching algorithm, Video matting.

9793 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management for regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show improvements of 3% in overall accuracy and 6% in kappa coefficient. The proposed method can also correctly distinguish homogeneous image parcels.

Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.

9792 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. In addition, it is shown that adopting the shortest processing time (SPT) and shortest remaining processing time (SRPT) service policies, among other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served (FCFS) policy.
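
A toy simulation of the policy effect at a single unit with exponential arrivals and services (the paper's network, patient classes and parameters are richer); it only illustrates why SPT cuts the average wait relative to FCFS.

```python
# Single-server queue: FCFS vs SPT sequencing (synthetic M/M/1-like load).
import heapq
import random

def simulate(policy, n=20000, lam=0.9, seed=1):
    random.seed(seed)
    queue, waits = [], []
    next_arrival, busy_until = random.expovariate(lam), 0.0
    for _ in range(n):
        # Queue up everyone who arrives while the server is busy
        while next_arrival <= busy_until:
            svc = random.expovariate(1.0)
            key = svc if policy == "SPT" else next_arrival  # SPT vs FCFS order
            heapq.heappush(queue, (key, next_arrival, svc))
            next_arrival += random.expovariate(lam)
        if queue:
            _, arr, svc = heapq.heappop(queue)
            start = max(busy_until, arr)
        else:                       # idle server: next customer starts at once
            arr = next_arrival
            svc = random.expovariate(1.0)
            next_arrival += random.expovariate(lam)
            start = arr
        waits.append(start - arr)
        busy_until = start + svc
    return sum(waits) / len(waits)

for p in ("FCFS", "SPT"):
    print(p, round(simulate(p), 2))   # SPT yields the lower average wait
```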

Keywords: Queueing network, discrete-event simulation, health applications, SPT.

9791 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom

Authors: S. Sookpeng, C. J. Martin, D. J. Gentle

Abstract:

Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and how they influence patient dose and image quality is important for CT users in order to make the most effective use of them. In the present study, a new phantom was used to evaluate dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed-mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with the various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was used than with the fixed-mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower-dose options: this limit prevents the tube current from being reduced further, so the lower-dose ATCM setting resembles a fixed-mAs technique. Selecting a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.

Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).

9790 Scatterer Density in Nonlinear Diffusion for Speckle Reduction in Ultrasound Imaging: The Isotropic Case

Authors: Ahmed Badawi

Abstract:

This paper proposes a method for speckle reduction in medical ultrasound imaging that preserves edges, with the added advantages of adaptive noise filtering and speed. A nonlinear image diffusion method is proposed that incorporates a local image parameter, namely scatterer density, in addition to gradient, to weight the nonlinear diffusion process. The method was tested for the isotropic case with a contrast-detail phantom and a variety of clinical ultrasound images, and then compared to linear diffusion and other diffusion enhancement methods. Different diffusion parameters were tested and tuned to best reduce speckle noise while preserving edges. The method showed superior performance, measured both quantitatively and qualitatively, when scatterer density was incorporated into the diffusivity function. The proposed filter can be used as a preprocessing step for ultrasound image enhancement before applying automatic segmentation, automatic volumetric calculations or 3D ultrasound volume rendering.
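
A sketch of one diffusion iteration whose diffusivity is attenuated by both the gradient and a local scatterer-density proxy (local variance here); the paper's actual density estimator and diffusivity form may differ.

```python
# One isotropic nonlinear diffusion step with a density-weighted
# diffusivity (Perona-Malik-style conductivity; illustrative parameters).
import numpy as np
from scipy import ndimage

def diffuse_step(img, dt=0.15, kappa=30.0, alpha=1.0):
    grads = np.gradient(img)
    grad_mag2 = sum(g * g for g in grads)
    # Local variance as a crude proxy for scatterer density
    density = (ndimage.uniform_filter(img ** 2, 5)
               - ndimage.uniform_filter(img, 5) ** 2)
    c = np.exp(-grad_mag2 / kappa ** 2) / (1.0 + alpha * density)
    # Divergence of c * grad(img)
    flux = [c * g for g in grads]
    div = np.gradient(flux[0], axis=0) + np.gradient(flux[1], axis=1)
    return img + dt * div
```

Diffusion is suppressed both at strong edges (large gradient) and in dense-scatterer regions, which is the stated role of the density term.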

Keywords: Ultrasound imaging, Nonlinear isotropic diffusion, Speckle noise, Scattering.

9789 Device for 3D Analysis of Basic Movements of the Lower Extremity

Authors: Jiménez Villanueva Mayra Alejandra, Ortíz Casallas Diana Carolina, Luengas Contreras Lely Adriana

Abstract:

This document details the process of developing a wireless device that captures the basic movements of the foot (plantar flexion, dorsal flexion, abduction, adduction) and of the knee (flexion). It implements a motion capture system using hardware based on optical fiber sensors, owing to their advantages in terms of range, noise immunity and speed of data transmission and reception. The operating principle of this system is the detection and transmission of joint movement by mechanical elements and its measurement by optical ones (in this case, infrared). Visual Basic software is used for the reception, analysis and processing of the data acquired by the device, generating a 3D graphical representation of each movement in real time. The result is a boot in charge of capturing the movement, a transmission module (implementing XBee technology) and a receiver module that receives the information and sends it to the PC for processing. The main idea of this device is to contribute to fields such as bioengineering and medicine by helping to improve quality of life and movement analysis.

Keywords: Abduction, adduction, A/D converter, Autodesk 3DMax, infrared diode, driver, extension, flexion, infrared LEDs, interface, OpenGL modeling, optical fiber, USB CDC (Communications Device Class), virtual reality.

9788 Localization of Anatomical Landmarks in Head CT Images for Image to Patient Registration

Authors: M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs

Abstract:

The use of anatomical landmarks as a basis for image-to-patient registration is appealing because the registration may be performed retrospectively. We have previously proposed the use of two anatomical soft-tissue landmarks of the head, the canthus (corner of the eye) and the tragus (a small, pointed, cartilaginous flap of the ear), as a registration basis for an automated CT image-to-patient registration system, and described their localization in patient space using close-range photogrammetry. In this paper, the automatic localization of these landmarks in CT images, based on their curvature saliency and using a rule-based system that incorporates prior knowledge of their characteristics, is described. Existing approaches to landmark localization in CT images are predominantly semi-automatic and primarily aimed at internal landmarks. To validate our approach, the positions of the landmarks localized automatically and manually in near-isotropic CT images of 102 patients were compared. The average difference was 1.2 mm (std = 0.9 mm, max = 4.5 mm) for the medial canthus and 0.8 mm (std = 0.6 mm, max = 2.6 mm) for the tragus. With the presented approach, the medial canthus and tragus can be automatically localized in CT images with performance comparable to manual localization.

Keywords: Anatomical Landmarks, CT, Localization.

9787 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014

Authors: R. Mofeed, N. Elgendy

Abstract:

Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, Cinema introduces comprehensive images of life phenomena in different ways. The term "represent" suggests a variety of meanings: bring into presence, replace, or typify. In that sense, Cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research questions the type of images that Egyptian Cinema introduces of informal urbanism and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, the phenomenon was completely ignored by the state and society until the eighties, and its evident representation in Cinema came only by the mid-nineties. The informal city comprises illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet this expanding phenomenon is still depicted as minor, exceptional and marginal through the cinematic lens. This paper aims at tracing the forms of representation of urban informality in Egyptian Cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper follows two main lines of inquiry. The first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films, based on academic research work at Cairo University in Fall 2014. The visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying and repetition in the representation of this huge sector of the city across the more than 50 films investigated. The analysis clearly revealed the "portrayed image" of informality produced by the Cinema through the examined period. The second part of the paper explores the "perceived image". A designed questionnaire is applied to highlight the main features of the image perceived by both inhabitants of informalities and other Cairenes after watching selected films. The questionnaire covers the different images of informality proposed in the Cinema, whether against a comic or a melodramatic background, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affect mental images. The two images, "portrayed" and "perceived", are then confronted to reflect on issues of repetition, stereotyping and reality. The resulting stereotype of informal urbanism is finally outlined and justified in relation to both the production-consumption mechanisms of films and the State's official vision of informalities.

Keywords: Cairo, cinema, informal urbanism, representation, stereotype.

9786 RoboWeedSupport - Sub-Millimeter Weed Image Acquisition in Cereal Crops at Speeds of up to 50 km/h

Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Mads Dyrmann, Robert Poulsen

Abstract:

For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings offered by a decision support system and the required weed inspections. In order to automate the weed inspections, it is desirable to generate a map of the weed species present within the field; to generate this map, images must be captured with samples covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution mounted on an existing tractor, a cost of 2.4 EUR/ha is estimated to be obtainable under equal conditions.

Keywords: Weed mapping, integrated weed management, weed recognition.

9785 Bleeding Detection Algorithm for Capsule Endoscopy

Authors: Yong-Gyu Lee, Gilwon Yoon

Abstract:

Automatic detection of bleeding is of practical importance since capsule endoscopy produces an extremely large number of images. Developing algorithms for bleeding detection in the digestive tract is difficult due to varying contrast among the images, food dregs, secretions and other factors. In this study, weighting factors were derived from independent features of the contrast and brightness between bleeding and normal regions. Spectral analysis based on these weighting factors was fast and accurate. The results were a sensitivity of 87% and a specificity of 90% when the accuracy was determined per pixel over 42 endoscope images.

Keywords: bleeding, capsule endoscopy, image analysis, weighted spectrum

9784 Iterative Image Reconstruction for Sparse-View Computed Tomography via Total Variation Regularization and Dictionary Learning

Authors: XianYu Zhao, JinXu Guo

Abstract:

Recently, low-dose computed tomography (CT) has become highly desirable due to increasing attention to the potential risks of excessive radiation. For low-dose CT imaging, ensuring image quality while reducing radiation dose is a major challenge. To facilitate low-dose CT imaging, we propose an improved statistical iterative reconstruction scheme based on the penalized weighted least squares (PWLS) criterion combined with total variation (TV) minimization and sparse dictionary learning (DL) to improve reconstruction performance. We call this method "PWLS-TV-DL". To evaluate the PWLS-TV-DL method, we performed experiments on digital phantoms and physical phantoms, respectively. The experimental results show that our method is superior to other methods in both image quality and computational efficiency, which confirms its potential for low-dose CT imaging.
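
A generic form of such a PWLS-TV-DL objective, in our own notation (the authors' exact weights and constraints may differ):

```latex
% Generic PWLS objective with TV and dictionary-learning penalties.
\hat{x} = \arg\min_{x,\,\{\alpha_j\}}\;
  (y - Ax)^{\mathsf T} W (y - Ax)
  + \lambda \, \mathrm{TV}(x)
  + \mu \sum_j \bigl\| R_j x - D\alpha_j \bigr\|_2^2
  \quad \text{s.t. } \|\alpha_j\|_0 \le s
```

where y is the measured sinogram, A the system matrix, W a diagonal statistical weighting, R_j the operator extracting the j-th image patch, D the learned dictionary and alpha_j the sparse codes.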

Keywords: Low dose computed tomography, penalized weighted least squares, total variation, dictionary learning.

9783 Velocity Distribution in Open Channels with Sand: An Experimental Study

Authors: E. Keramaris

Abstract:

In this study, laboratory experiments on open channel flows over a sand bed were conducted. A porous bed (sand bed) with a porosity of ε = 0.70 and a porous-layer thickness of s΄ = 3 cm was tested. Vertical distributions of velocity were evaluated using two-dimensional (2D) particle image velocimetry (PIV). Velocity profiles were measured above the impermeable bed and above the sand bed for the same total water depths (h = 6, 8, 10 and 12 cm) and the same slope S = 1.5. Measurements of mean velocity indicate the effects of the bed material used (sand bed) on the flow characteristics (velocity distribution and Reynolds number) in comparison with those above the impermeable bed.

Keywords: Particle image velocimetry, sand bed, velocity distribution, Reynolds number.

9782 SEM Image Classification Using CNN Architectures

Authors: G. Türkmen, Ö. Tekin, K. Kurtuluş, Y. Y. Yurtseven, M. Baran

Abstract:

A scanning electron microscope (SEM) is a type of electron microscope used mainly in nanoscience and nanotechnology. Automatic image recognition and classification are among its general areas of application. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-v2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results, since it contains images resembling other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, raising accuracy to 96.5%.
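
A minimal sketch of the fine-tuning setup in TensorFlow/Keras: an ImageNet-pretrained Inception-ResNet-v2 backbone with a new nine-way head, trained in two stages. Image size, optimizer settings and the unfreezing schedule are illustrative assumptions, not the authors' exact configuration.

```python
# Two-stage fine-tuning of Inception-ResNet-v2 for 9 SEM categories.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                       # stage 1: train the head only

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(9, activation="softmax")(x)   # 9 SEM categories
model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

base.trainable = True                        # stage 2: fine-tune the backbone
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),   # low LR for stability
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```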

Keywords: Convolutional Neural Networks, deep learning, image classification, scanning electron microscope.

9781 A Molding Surface Auto-Inspection System

Authors: Ssu-Han Chen, Der-Baau Perng

Abstract:

The molding process in IC manufacturing secures chips against harm from heat, moisture and other external forces. While a chip is being molded, defects such as cracks, dilapidation or voids may be embedded in the molding surface. The molding surfaces this study addresses differ from the ones on the market in that texture similar to the defects is present everywhere on the surface. Manual inspection usually passes over low-contrast cracks or voids; hence an automatic optical inspection system for molding surfaces is necessary. The proposed system consists of a CCD camera, a coaxial light, a back light and a motion control unit. Based on the statistical texture properties of the molding surface, a series of digital image processing and classification procedures is carried out. After training the parameters associated with the above algorithm, experimental results suggest an accuracy rate of up to 93.75%, contributing to the inspection quality of IC molding surfaces.
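
A sketch of one plausible statistical-texture test in the Fourier domain, comparing a patch's radial spectrum profile against a defect-free reference; the paper's exact features and classifier are not specified in the abstract.

```python
# Radial DFT spectrum profile as a statistical texture feature
# (illustrative feature and threshold, not the paper's classifier).
import numpy as np

def radial_spectrum(patch, n_bins=16):
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = (r / r.max() * (n_bins - 1)).astype(int)
    prof = np.bincount(bins.ravel(), f.ravel(), minlength=n_bins)
    return prof / prof.sum()

def is_defective(patch, reference_profile, thresh=0.3):
    """Flag a patch whose spectrum deviates from the defect-free profile."""
    return np.abs(radial_spectrum(patch) - reference_profile).sum() > thresh
```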

Keywords: Molding surface, machine vision, statistical texture, discrete Fourier transformation.

9780 Shape-Based Image Retrieval Using Shape Matrix

Authors: C. Sheng, Y. Xin

Abstract:

Retrieving images by shape similarity, given a template shape, is particularly challenging, owing to the difficulty of deriving a similarity measure that closely conforms to the common human perception of similarity. In this paper, a new method for the representation and comparison of shapes is presented which is based on the shape matrix and a snake model. It is scaling, rotation and translation invariant, and it can retrieve shape images with some missing or occluded parts. In the method, the deformation spent by the template to match a shape image and the degree of matching are used to evaluate the similarity between them.

Keywords: shape representation, shape matching, shape matrix, deformation

9779 Concealed Objects Detection in Visible, Infrared and Terahertz Ranges

Authors: M. Kowalski, M. Kastek, M. Szustakowski

Abstract:

Multispectral screening systems are becoming more popular because of their very interesting properties and applications. One of the most significant applications of multispectral screening systems is the prevention of terrorist attacks. There are many kinds of threats and many methods of detection. Visual detection of objects hidden under a person's clothing is one of the most challenging problems in threat detection. There are various solutions to the problem; however, the most effective ones utilize multispectral surveillance imagers. The development of imaging devices and the exploration of new spectral bands are a chance to introduce new equipment for assuring public safety. We investigate the possibility of long-lasting detection of potentially dangerous objects covered with various types of clothing. In this article we present the results of comparative studies of passive imaging in three spectral ranges: visible, infrared and terahertz.

Keywords: Infrared, image processing, object detection, screening camera, terahertz.

9778 MONARC: A Case Study on Simulation Analysis for LHC Activities

Authors: Ciprian Dobre

Abstract:

The scale, complexity and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing this data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework as a design and modeling tool for large-scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capabilities of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis within the CMS experiment.

Keywords: Modeling and simulation, evaluation, large scale distributed systems, LHC experiments, CMS.
