Search results for: LANDSAT images
1746 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advances in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a global scale. Geovisualisation has thus become an essential asset in the defense sector. It is indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management in the field, situational awareness, effective planning, and monitoring. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models that enhance decision-making and strategic planning capabilities. However, traditional visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for localization of troops, position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
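The two-branch fusion described above can be sketched in miniature with numpy: two toy "encoders" downsample the imagery and DEM inputs, their feature maps are concatenated along the channel axis, and a toy "decoder" upsamples back to a dense depth map. The real networks are trained CNNs; the pooling/upsampling operations, shapes, and function names here are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def encode(image, levels=2):
    """Toy encoder: halve spatial resolution `levels` times by 2x2 average pooling."""
    x = image
    for _ in range(levels):
        h, w, c = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return x

def fuse(feat_a, feat_b):
    """Fuse two branches' feature maps by concatenation along the channel axis."""
    return np.concatenate([feat_a, feat_b], axis=-1)

def decode(feat, levels=2):
    """Toy decoder: upsample by nearest-neighbour repetition, then collapse
    channels to one depth value per pixel."""
    x = feat
    for _ in range(levels):
        x = x.repeat(2, axis=0).repeat(2, axis=1)
    return x.mean(axis=-1)  # dense depth map

optical = np.random.rand(64, 64, 4)   # e.g. optical + radar bands
dem = np.random.rand(64, 64, 1)       # co-registered DEM channel

fused = fuse(encode(optical), encode(dem))
depth = decode(fused)
```

In the actual method each branch would be a learned encoder-decoder; this sketch only shows where the feature maps from the two branches meet.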
Procedia PDF Downloads 8
1745 Multimedia Container for Autonomous Car
Authors: Janusz Bobulski, Mariusz Kubanek
Abstract:
The main goal of this research is to develop a multimedia container structure holding three types of images: RGB, lidar, and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving, and restoring this type of file. It will also be necessary to develop a method of synchronizing data from lidar, RGB, and infrared cameras. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly entering our consciousness. No one seems to have any doubts that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that a network of communicating autonomous cars will be able to completely eliminate accidents. However, to make this possible, it is necessary to develop effective methods of detecting objects around the moving vehicle. In bad weather conditions, this task is difficult on the basis of the RGB (red, green, blue) image alone. Therefore, in such situations, the system should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return. In addition to these differences, there are problems with synchronizing and formatting these data. The goal of the project is to develop a file structure that can contain different types of data. This type of file is called a multimedia container. A multimedia container holds many data streams, which allows complete multimedia material to be stored in one file. Among the data streams located in such a container are streams of images, films, sounds, and subtitles, as well as additional information, i.e., metadata.
As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this application, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.
Keywords: an autonomous car, image processing, lidar, obstacle detection
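A minimal sketch of such a container, assuming a simple record layout (magic tag, record count, then per-frame timestamp, stream type, and payload); the field layout, magic bytes, and stream-type codes are invented for illustration, not the authors' format. Frames from the three sensors sharing one capture timestamp are the synchronization key:

```python
import struct
from io import BytesIO

MAGIC = b"MMC1"
STREAM_TYPES = {0: "rgb", 1: "lidar", 2: "infrared"}

def write_container(fp, frames):
    """Write synchronized frames as (timestamp_us, stream_type, payload) records."""
    fp.write(MAGIC)
    fp.write(struct.pack("<I", len(frames)))
    for ts_us, stream_type, payload in frames:
        fp.write(struct.pack("<QBI", ts_us, stream_type, len(payload)))
        fp.write(payload)

def read_container(fp):
    """Restore the list of (timestamp_us, stream_type, payload) records."""
    assert fp.read(4) == MAGIC, "not a multimedia container"
    (count,) = struct.unpack("<I", fp.read(4))
    frames = []
    for _ in range(count):
        ts_us, stream_type, size = struct.unpack("<QBI", fp.read(13))
        frames.append((ts_us, stream_type, fp.read(size)))
    return frames

# Round-trip three streams sharing one capture timestamp.
buf = BytesIO()
write_container(buf, [(1000, 0, b"\x10" * 12),   # RGB frame bytes
                      (1000, 1, b"\x20" * 8),    # lidar point packet
                      (1000, 2, b"\x30" * 6)])   # infrared frame bytes
buf.seek(0)
restored = read_container(buf)
```

A production container would additionally carry per-stream calibration metadata so the lidar and infrared frames can be registered onto the RGB image.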
Procedia PDF Downloads 225
1744 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Abdul Rahim Ahmad, Azizah Suliman, Loay E. George
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. A lack of RBCs, characterized by a lower-than-normal hemoglobin level, is referred to as 'anemia'. In this study, software was developed to isolate RBCs using a machine learning approach and classify anemic RBCs in microscopic images. Several features of RBCs were extracted using image processing algorithms, including principal component analysis (PCA). With the proposed method, RBCs were isolated in 34 seconds from an image containing 18 to 27 cells. We also propose that PCA can be performed to increase the speed and efficiency of classification. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and artificial neural network (ANN), respectively. Classification was evaluated using sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained in a short time and more efficiently when PCA was used.
Keywords: red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC
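The PCA-then-K-NN pipeline the abstract describes can be sketched from first principles: project the extracted feature vectors onto the top principal components, then classify by majority vote among the nearest neighbours. The synthetic two-class "RBC feature" data below stands in for the real extracted features, which are not published in the abstract.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project feature vectors onto the top principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]

def knn_predict(X_train, y_train, x, k=3):
    """Classify one sample by majority vote among its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Synthetic feature data: two well-separated classes in 6-D (0 = normal, 1 = anemic).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=(20, 6))
anemic = rng.normal(2.0, 0.3, size=(20, 6))
X = np.vstack([normal, anemic])
y = [0] * 20 + [1] * 20

Z = pca_transform(X, 2)                 # dimensionality reduction before K-NN
pred = knn_predict(Z[:-1], y[:-1], Z[-1], k=3)
```

Reducing to a handful of components before the distance computation is what gives the speed-up the abstract reports.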
Procedia PDF Downloads 405
1743 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are mostly based on model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 90
1742 Derivation of Bathymetry Data Using Worldview-2 Multispectral Images in Shallow, Turbid and Saline Lake Acıgöl
Authors: Muhittin Karaman, Murat Budakoglu
Abstract:
In this study, derivation of lake bathymetry was evaluated using high resolution Worldview-2 multispectral images in the very shallow hypersaline Lake Acıgöl, which does not have a stable water table due to wet-dry season changes and industrial usage. Every year, a great part of the lake water budget is consumed for industrial salt production in the evaporation ponds, which are generally located on the south and north shores of Lake Acıgöl. Therefore, determination of water level changes through remote sensing-based bathymetry studies is of great importance for the sustainability-control of the lake. While the water table interval is around 1 meter between the dry and wet seasons, dissolved ion concentration, salinity, and turbidity also show clear differences between these two distinct seasonal periods. Simultaneously with the satellite data acquisition (June 9, 2013), a field study was conducted to collect salinity values, Secchi disk depths, and turbidity levels. The maximum depth, Secchi disk depth, and salinity were determined as 1.7 m, 0.9 m, and 43.11 ppt, respectively. The eight-band Worldview-2 image was corrected for atmospheric effects by the ATCOR technique. For each sampling point in the image, mean reflectance values in 1x1, 3x3, 5x5, 7x7, 9x9, 11x11, 13x13, 15x15, 17x17, 19x19, 21x21, and 51x51 pixel neighborhoods were calculated separately, and a unique image was derived for each matrix resolution. Spectral values and depth relations were evaluated for these distinct resolution images. For the 1x1 matrix, correlation coefficients of 0.98, 0.96, 0.95, and 0.90 were determined for the 724 nm, 831 nm, 908 nm, and 659 nm bands, respectively. The 15x15 matrix shows correlation values of 0.98, 0.97, and 0.97 for the 724 nm, 908 nm, and 831 nm bands, respectively, while the 51x51 matrix shows correlation values of 0.98, 0.97, and 0.96 for the 724 nm, 831 nm, and 659 nm bands, respectively.
Comparison of all matrix resolutions indicates that the RedEdge band (724 nm) of the Worldview-2 satellite image has the best correlation with in-situ depth in the saline shallow Lake Acıgöl.
Keywords: bathymetry, Worldview-2 satellite image, ATCOR technique, Lake Acıgöl, Denizli, Turkey
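The band-ranking step of such a study — correlating each band's mean reflectance at the sampling points against in-situ depth and keeping the strongest — can be sketched as below. The reflectance values and depths are invented toy data, not the Lake Acıgöl measurements; only the procedure (Pearson correlation per band, pick the maximum |r|) follows the abstract.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def best_band(depths, reflectance_by_band):
    """Rank spectral bands by |r| against in-situ depth; return the winner."""
    scores = {band: abs(pearson_r(vals, depths))
              for band, vals in reflectance_by_band.items()}
    return max(scores, key=scores.get), scores

# Toy sampling points: one band decays cleanly with depth, the other is noisy.
depths = [0.2, 0.5, 0.8, 1.1, 1.4, 1.7]
bands = {
    "724nm": [0.30, 0.26, 0.22, 0.18, 0.14, 0.10],  # strongly depth-dependent
    "659nm": [0.25, 0.27, 0.20, 0.24, 0.19, 0.22],  # weakly depth-dependent
}
band, scores = best_band(depths, bands)
```

In the study this comparison is repeated for every neighborhood size (1x1 through 51x51) before selecting the best band/window combination.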
Procedia PDF Downloads 447
1741 Potential Effects of Green Infrastructures on the Land Surface Temperatures in Arid Areas
Authors: Adila Shafqat
Abstract:
Climate change and urbanization have changed the face of many cities in developing countries. Urbanization is linked with land use and land cover change, which is further intensified by the effects of changing climates. Green infrastructures provide numerous ecosystem services which affect the physical setup of cities in the long run. Land surface temperature is considered a defining parameter in studies of the thermal impact of land cover. The current study is conducted in the semi-arid urban areas of the Bahawalpur region. Accordingly, land surface temperature and land cover maps are derived from a Landsat image through remote sensing techniques. The cooling impact of green infrastructure is determined by calculating the land surface temperature of buffered zones around green infrastructures. A regression model is applied to the results. It is seen that the land surface temperature around green infrastructures is 1 to 3 degrees lower than in the built-up surroundings. The results indicate that urban green infrastructures should be planned according to local needs and land use characteristics so that they can effectively moderate the land surface temperatures of urban areas.
Keywords: climate change, surface temperatures, green spaces, urban planning
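The buffered-zone comparison can be illustrated with a toy land surface temperature (LST) grid: average the LST inside a circular buffer around a green patch and compare it with a wider buffer dominated by built-up pixels. The grid values, buffer radii, and temperatures below are hypothetical, chosen only to mimic the 1-3 degree contrast the study reports.

```python
import numpy as np

def buffer_mean(lst, center, radius):
    """Mean land surface temperature inside a circular buffer around a pixel."""
    h, w = lst.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return lst[mask].mean()

# Toy LST grid: hot built-up surface with a cooler green patch in the middle.
lst = np.full((50, 50), 40.0)          # built-up background, deg C
lst[20:30, 20:30] = 37.0               # green infrastructure pixels

green_center = (25, 25)
local = buffer_mean(lst, green_center, 4)     # buffer inside the green patch
wide = buffer_mean(lst, green_center, 30)     # buffer over the surroundings
cooling = wide - local
```

A real analysis would take the buffers from vector footprints of the green infrastructure and the LST raster from the Landsat thermal band, then feed the buffer means into the regression model.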
Procedia PDF Downloads 120
1740 Automating 2D CAD to 3D Model Generation Process: Wall Pop-Ups
Authors: Mohit Gupta, Chialing Wei, Thomas Czerniawski
Abstract:
In this paper, we have built a neural network that can detect walls on 2D sheets and subsequently create a 3D model in Revit using Dynamo. The training set includes 3,500 labeled images, and the detection algorithm used is YOLO. Typically, engineers/designers make concentrated efforts to convert 2D CAD drawings to 3D models. This costs a considerable amount of time and human effort. This paper contributes by automating the task of 3D wall modeling: 1. detecting walls in 2D CAD and generating 3D pop-ups in Revit; 2. saving designers modeling time when drafting elements like walls from a 2D CAD drawing to a 3D representation. The object detection algorithm YOLO is used for wall detection and localization. The neural network is trained over 3,500 labeled images of size 256x256x3. Then, Dynamo is interfaced with the output of the neural network to pop up 3D walls in Revit. The research uses modern technological tools like deep learning and artificial intelligence to automate the process of generating 3D walls without needing humans to model them manually, thus contributing to saving time, human effort, and money.
Keywords: neural networks, YOLO, 2D to 3D transformation, CAD object detection
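Between the detector and Dynamo sits a geometric conversion: each normalized YOLO box (cx, cy, w, h) on the sheet must become a wall segment with a start point, end point, and thickness in model units. The sketch below is one plausible way to do that conversion, assuming walls lie along the box's long axis; the `scale` (model units per pixel) and the function name are assumptions, and the Revit/Dynamo hand-off itself is not shown.

```python
def yolo_to_wall(box, img_w, img_h, scale=0.05):
    """Convert one normalized YOLO box (cx, cy, w, h) on a 2D sheet into a
    wall segment: start point, end point (along the long axis) and thickness,
    in model units (scale = model units per pixel, an assumption)."""
    cx, cy, w, h = box
    px, py = cx * img_w, cy * img_h          # pixel-space centre
    pw, ph = w * img_w, h * img_h            # pixel-space extents
    if pw >= ph:                             # wide box -> horizontal wall
        start = ((px - pw / 2) * scale, py * scale)
        end = ((px + pw / 2) * scale, py * scale)
        thickness = ph * scale
    else:                                    # tall box -> vertical wall
        start = (px * scale, (py - ph / 2) * scale)
        end = (px * scale, (py + ph / 2) * scale)
        thickness = pw * scale
    return start, end, thickness

# A detected wall spanning most of a 256x256 sheet horizontally.
start, end, t = yolo_to_wall((0.5, 0.25, 0.8, 0.05), 256, 256)
```

Dynamo would then consume `start`/`end`/`t` to place `Wall.ByLine`-style geometry in Revit.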
Procedia PDF Downloads 144
1739 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ Quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts with a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of control points were conducted. Camera calibration parameters estimated from commercial software were comparable to those obtained with laboratory procedures. Exposure station positions agreed to within a few centimeters, and insignificant differences, of less than three arc seconds, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
Keywords: UAV, photogrammetry, SfM, DEM
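The DEM differencing reported above reduces to per-cell subtraction of two co-registered elevation grids, summarized as a mean vertical shift and an RMSE about that shift. A minimal sketch with invented elevation values (the real DEMs are rasters from PhotoScan/VisualSFM etc.):

```python
def dem_difference_stats(dem_a, dem_b):
    """Per-cell differencing of two co-registered DEM grids: returns the mean
    vertical shift and the RMSE about that shift."""
    diffs = [a - b for row_a, row_b in zip(dem_a, dem_b)
                   for a, b in zip(row_a, row_b)]
    n = len(diffs)
    shift = sum(diffs) / n
    rmse = (sum((d - shift) ** 2 for d in diffs) / n) ** 0.5
    return shift, rmse

# Toy grids: the SfM DEM sits ~3 cm above the photogrammetric DEM, plus noise.
dem_sfm = [[10.03, 10.04], [10.02, 10.03]]
dem_photo = [[10.00, 10.00], [10.00, 10.00]]
shift, rmse = dem_difference_stats(dem_sfm, dem_photo)
```

Separating the systematic shift from the residual RMSE is what lets the comparison distinguish a datum offset from genuine surface disagreement.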
Procedia PDF Downloads 294
1738 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models
Authors: Lucille Alonso, Florent Renard
Abstract:
The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and the micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-borne weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They fall into various categories such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform of Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings, and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation.
The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (Pearson correlation matrix with |r| < 0.7 and VIF < 5) using a stepwise sorting algorithm. Moreover, holdout cross-validation is performed (80% training, 20% testing), due to its ability to detect the over-fitting of multiple regression, although multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other variables are recurrent, such as distance to subway stations, distance to water areas, NDVI, digital elevation model, sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed on a microscale that considers the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. It is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island
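The modelling chain above — screen out collinear predictors, fit an ordinary least squares model, validate on a holdout split — can be sketched as follows. Note one simplification: collinearity is screened here only by pairwise |r| < 0.7 (the paper additionally uses VIF < 5, omitted for brevity), and the predictor names and synthetic values are illustrative assumptions.

```python
import numpy as np

def greedy_correlation_filter(X, names, r_max=0.7):
    """Keep each predictor only if its |r| with every already-kept predictor
    stays below r_max (pairwise collinearity screen)."""
    kept = []
    for j, name in enumerate(names):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < r_max
               for k, _ in kept):
            kept.append((j, name))
    idx = [j for j, _ in kept]
    return X[:, idx], [n for _, n in kept]

def holdout_rmse(X, y, train_frac=0.8, seed=0):
    """80/20 holdout validation of an ordinary least squares model."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    cut = int(train_frac * len(y))
    tr, te = order[:cut], order[cut:]
    A = np.column_stack([np.ones(len(tr)), X[tr]])
    coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
    pred = np.column_stack([np.ones(len(te)), X[te]]) @ coef
    return float(np.sqrt(np.mean((pred - y[te]) ** 2)))

# Synthetic predictors: surface temperature, NDVI, plus a near-duplicate of
# surface temperature that the collinearity screen should reject.
rng = np.random.default_rng(1)
surface_t = rng.normal(30, 3, 100)
ndvi = rng.normal(0.4, 0.1, 100)
surface_t_copy = surface_t + rng.normal(0, 0.1, 100)
X = np.column_stack([surface_t, ndvi, surface_t_copy])
y = 0.6 * surface_t - 2.0 * ndvi + rng.normal(0, 0.2, 100)

Xf, kept = greedy_correlation_filter(X, ["surface_t", "ndvi", "surface_t_copy"])
rmse = holdout_rmse(Xf, y)
```

In the study the same screen-fit-validate loop runs over 33 candidate variables, with stepwise selection choosing the final subset.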
Procedia PDF Downloads 137
1737 Electrode Performance of Carbon Coated Nanograined LiFePO4 in Lithium Batteries
Authors: Princess Stephanie P. Llanos, Rinlee Butch M. Cervera
Abstract:
Lithium iron phosphate (LiFePO4) is a potential cathode material for lithium-ion batteries due to its promising characteristics. In this study, carbon-coated nanograined LiFePO4 was synthesized via a wet chemistry method at a low temperature of 400 °C, and its performance as a cathode in a lithium battery was investigated. The X-ray diffraction pattern of the synthesized samples can be indexed to an orthorhombic LiFePO4 structure. Agglomerated particles ranging from 200 nm to 300 nm are observed in scanning electron microscopy images. Transmission electron microscopy images confirm the crystalline structure of LiFePO4 and the coating of an amorphous carbon layer. Elemental mapping using energy dispersive spectroscopy analysis revealed the homogeneous dispersion of the Fe, P, O, and C elements. The electrochemical performance of the synthesized cathodes was investigated using cyclic voltammetry, galvanostatic charge/discharge tests at different C-rates, and cycling tests. Galvanostatic charge and discharge measurements revealed that the sample sintered at 400 °C for 3 hours with carbon coating demonstrated the highest capacity among the samples, reaching up to 160 mAhg⁻¹ at a 0.1C rate.
Keywords: cathode, charge-discharge, electrochemical, lithium batteries
Procedia PDF Downloads 331
1736 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions
Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin
Abstract:
In most cases, typical cysts are easily recognized by ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features are necessary to conclude a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. Therefore, we tried to develop an automatic method for differentiating cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gradations of gray (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first, the region of interest (the contour of the lesion) was searched for and selected. Selection of this region is carried out using a sigmoid filter, where the threshold is calculated according to the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points with the highest gradient of brightness. In the second step, the selected region was assigned to one of the lesion groups according to the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average gradient of brightness, etc. To determine the decisive criterion for belonging to one of the lesion groups (cystic or solid), training sets of these brightness distribution characteristics were collected separately for benign and malignant lesions.
To test our approach, we used a set of 217 ultrasonic images of 107 cystic (including 53 atypical, difficult for naked-eye differentiation) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all (107, 100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. In comparison, with the naked eye it was possible to correctly identify all (107, 100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.
Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography
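Two building blocks of this pipeline can be sketched directly: a sigmoid filter whose threshold comes from the empirical brightness distribution (the median is used here as a stand-in for the paper's exact rule), and the brightness-histogram entropy used as one of the statistical features. The gray-level samples below are invented: a near-anechoic "cyst" region and a "solid" region with spread-out internal echoes.

```python
import math

def sigmoid_filter(pixels, threshold=None, k=0.1):
    """Soft-threshold gray levels with a sigmoid; by default the threshold is
    the median of the empirical brightness distribution (an assumed choice)."""
    if threshold is None:
        threshold = sorted(pixels)[len(pixels) // 2]
    return [1.0 / (1.0 + math.exp(-k * (p - threshold))) for p in pixels]

def brightness_entropy(pixels, bins=8):
    """Shannon entropy of the brightness histogram, one of the statistical
    features used to separate cystic from solid regions."""
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins / 256), bins - 1)] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# Toy regions: a cyst interior is nearly anechoic (uniform, dark); a solid
# lesion shows spread-out internal echoes.
cyst = [8, 10, 9, 11, 10, 9, 10, 8]
solid = [40, 90, 130, 60, 180, 110, 75, 150]

e_cyst = brightness_entropy(cyst)
e_solid = brightness_entropy(solid)
mask = sigmoid_filter(cyst + solid)
```

The uniform cyst region yields near-zero entropy while the echo-filled solid region does not, which is exactly the kind of separation the classifier exploits.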
Procedia PDF Downloads 269
1735 Carbon-Doped TiO2 Nanofibers Prepared by Electrospinning
Authors: ChoLiang Chung, YuMin Chen
Abstract:
C-doped TiO2 nanofibers were successfully prepared by electrospinning. Different amounts of carbon were added to the nanofibers using chitosan, aiming to shift the wavelength required to excite the photocatalyst from ultraviolet to visible light. Fibers with different carbon contents were calcined at 500 °C under different atmospheres, which changed the optical characteristics of the C-doped TiO2 nanofibers. The characteristics of the nanofibers were identified by X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), UV-Vis spectroscopy, atomic force microscopy (AFM), and Fourier transform infrared spectroscopy (FTIR). XRD was used to identify the phase composition of the nanofibers. The morphology of the nanofibers was explored by FE-SEM and AFM. Optical absorption characteristics were measured by UV-Vis. Three-dimensional surface images of the C-doped TiO2 nanofibers revealed the different effects of processing. The XRD results showed that the C-doped TiO2 nanofibers transformed successfully to the rutile and anatase phases. The AFM results showed that the surface morphology of the nanofibers became smooth after high-temperature treatment. FE-SEM images revealed the average size of the nanofibers. The UV-Vis results showed that the band gap of TiO2 was reduced. Finally, we found that carbon doping can change the appearance of the nanofibers and make them smoother.
Keywords: carbon, TiO2, chitosan, electrospinning
Procedia PDF Downloads 257
1734 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies
Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi
Abstract:
Classification of high resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. An overall classification accuracy of 90.95% with the proposed algorithm has shown that it is comparable with state-of-the-art methods. In addition to the increase in classification accuracy, the proposed method decreased the undesirable speckle effect of SAR images.
Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)
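The core BOVW step — assigning each low-level feature to its nearest visual word and describing a segment by the normalized word histogram — can be sketched as below. The codebook values and 2-D features are invented (a real codebook would be learned, e.g. by k-means, over PolSAR backscatter features); only the assignment-and-histogram mechanics follow the model.

```python
def assign_word(feature, codebook):
    """Index of the nearest visual word (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((f - c) ** 2
                                 for f, c in zip(feature, codebook[i])))

def bovw_histogram(features, codebook):
    """Normalized visual-word histogram describing one image segment."""
    hist = [0.0] * len(codebook)
    for f in features:
        hist[assign_word(f, codebook)] += 1.0
    n = len(features)
    return [h / n for h in hist]

# Assumed 3-word codebook over 2-D pixel features (e.g. backscatter statistics).
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
segment = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.05, 0.95)]
descriptor = bovw_histogram(segment, codebook)
```

In the segment-based strategy, one such histogram per segment (rather than per pixel) is what gets classified, which is also why the output is less affected by speckle.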
Procedia PDF Downloads 209
1733 Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract:
The study focuses on addressing the challenges in classifying and ensuring the quality of Eragrostis tef (teff), the smallest cereal grain, which is small and round. Employing traditional classification methods is challenging because of its small size and the similarity of its environmental characteristics. To overcome this, this study employs a machine learning approach to develop a source and quality classification system for teff cereal. Data were collected from various production areas in the Amhara region, considering two quality levels of cereal (high and low) across eight classes. A total of 5,920 images were collected, with 740 images for each class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data. A Convolutional Neural Network (CNN) was then used to extract relevant features and reduce dimensionality. The dataset was split into 80% for training and 20% for testing. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. The ensemble of FVGG16, FINCV3, and QSCTC using the max-voting approach outperforms the individual algorithms.
Keywords: Teff, ensemble learning, max-voting, CNN, SVM, RF
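Max-voting itself is simple to sketch: for each sample, take the class most of the member models predict. The class labels below are invented placeholders (the abstract does not publish the eight class names), and ties here go to the first model's prediction, which is one of several common tie-breaking conventions.

```python
from collections import Counter

def max_vote(predictions_per_model):
    """Combine per-sample class predictions from several models by majority
    (max) voting; ties go to the class predicted by the earliest-listed model."""
    combined = []
    for sample_preds in zip(*predictions_per_model):
        counts = Counter(sample_preds)
        top = max(counts.values())
        winner = next(p for p in sample_preds if counts[p] == top)
        combined.append(winner)
    return combined

# Hypothetical predictions from FVGG16, FINCV3 and QSCTC on five samples
# (class names invented for illustration).
fvgg16 = ["hq_a", "lq_a", "hq_b", "hq_b", "lq_b"]
fincv3 = ["hq_a", "hq_a", "hq_b", "lq_b", "lq_b"]
qsctc  = ["lq_a", "lq_a", "hq_b", "hq_b", "hq_b"]
ensemble = max_vote([fvgg16, fincv3, qsctc])
```

The ensemble can beat each member because the members' errors only propagate when at least two models make the same mistake on the same sample.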
Procedia PDF Downloads 53
1732 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images
Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann
Abstract:
FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., the data which barely reduce the confidence interval of the estimated parameters and thus can be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating the lower precision of the integrated data approach compared to the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design
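To make the "recovery curve" data concrete: the integrated approach reduces each FRAP frame to one intensity value, and a parameter such as the recovery time constant is then identified from the curve F(t). A minimal sketch of that identification, assuming the simple single-exponential model F(t) = F_inf (1 - exp(-t/tau)) and a known plateau F_inf (the paper's actual model and estimator are more elaborate):

```python
import math

def fit_recovery(times, intensities, f_inf):
    """Estimate the recovery time constant tau from F(t) = f_inf*(1 - exp(-t/tau))
    by linearizing ln(1 - F/f_inf) = -t/tau and least-squares fitting the slope
    through the origin."""
    xs, ys = [], []
    for t, f in zip(times, intensities):
        frac = 1.0 - f / f_inf
        if frac > 0:                      # skip points at/above the plateau
            xs.append(t)
            ys.append(math.log(frac))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic noiseless recovery curve with tau = 2.0 s and plateau f_inf = 1.0.
tau_true, f_inf = 2.0, 1.0
times = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
curve = [f_inf * (1.0 - math.exp(-t / tau_true)) for t in times]
tau_est = fit_recovery(times, curve, f_inf)
```

The paper's point is precisely that this curve-only estimate carries a larger confidence interval than one computed from the full spatio-temporal image data.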
Procedia PDF Downloads 278
1731 Gynocentrism and Self-Orientalization: A Visual Trend in Chinese Fashion Photography
Authors: Zhen Sun
Abstract:
The study adopts the method of visual social semiotics to analyze a sample of fashion photos recently published in Chinese fashion magazines that target both male and female readers. It identifies a new visual trend in fashion photography, which is characterized by two features. First, the photos represent young, confident, and stylish female models alongside sloppy, lower-class old men. The visual disharmony between the sexually desirable women and the aged men suggests a sexuality and eroticism that can never be accomplished. Though the women are still under the male gaze, they are depicted as unreachable objects of voyeurism rather than sexual objects subordinated to men. Second, the represented people are usually placed against the backdrop of tasteless or vulgar Chinese town life, which is congruent with the images of the men but makes the modern city girls look out of place. The photographers intentionally contrast the images of women with those of the men and with the background, which implies an imaginary binary division of modern Orientalism and a self-orientalization strategy on the photographers' part. Under the theoretical umbrella of neoliberal postfeminism, this study defines a new kind of gynocentric stereotype in Chinese fashion photography, which challenges previous observations on gender portrayals in fashion magazines.
Keywords: fashion photography, gynocentrism, neoliberal postfeminism, self-orientalization
Procedia PDF Downloads 423
1730 Analysis of Facial Expressions with Amazon Rekognition
Authors: Kashika P. H.
Abstract:
The development of computer vision systems has been greatly aided by the efficient and precise detection of objects in images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, employing technology to tackle this issue is exceedingly challenging. In the past few years, the use of deep learning algorithms for object detection has dramatically expanded. One of the key issues in the realm of image recognition is the recognition and detection of notable people in randomly acquired photographs. Face recognition provides a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep learning-based API, this article aims to recognize various faces of people and their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific person in a vast image dataset. We have chosen the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this difficulty.
Keywords: Amazon Rekognition, API, deep learning, computer vision, face detection, text detection
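To make the API workflow concrete: the actual call would be `boto3.client("rekognition").detect_faces(Image={...}, Attributes=["ALL"])`, which returns a `FaceDetails` list. The sketch below parses a trimmed, illustrative response offline (the values are invented; the field names follow the Rekognition response shape), extracting the dominant emotion per face:

```python
# Trimmed, illustrative detect_faces response; a real one would come from
# boto3.client("rekognition").detect_faces(Image={...}, Attributes=["ALL"]).
response = {
    "FaceDetails": [
        {
            "BoundingBox": {"Width": 0.25, "Height": 0.4, "Left": 0.1, "Top": 0.2},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 97.2},
                {"Type": "CALM", "Confidence": 2.1},
            ],
            "AgeRange": {"Low": 25, "High": 35},
        }
    ]
}

def dominant_emotions(resp):
    """Return the highest-confidence emotion for each detected face."""
    results = []
    for face in resp.get("FaceDetails", []):
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        results.append((top["Type"], top["Confidence"]))
    return results

print(dominant_emotions(response))   # -> [('HAPPY', 97.2)]
```

Face search against a collection would use `search_faces_by_image` instead, but the response parsing follows the same pattern.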
Procedia PDF Downloads 104
1729 Role of Television in Constructing Gender for Young Women
Authors: Bhavna Negi
Abstract:
Several studies highlight the significance of media in constructing the realities around us. According to Forbes magazine, the demand for televisions has increased several times over in developing nations. A recent survey reveals that 112 million Indian households have a television, with 61 percent accessing cable. The space and visibility of television have grown enormously over the last decade in Indian homes. This small box has indeed taken a large place in daily routines. Multi-channel viewing and TRP ratings puzzle the Indian audience. The medium creates and constructs social images and roles that form internal representations of societal functioning. According to the National Council of Applied Economic Research, about twenty-seven percent of literate Indian youth watch TV for recreation. The present study examines the role of television and its impact on young college-going women with reference to family-based serials shown on television. It is interesting to see how young women perceive the popular family soaps and define norms, roles, and spaces for a woman and a man. The paper further examines the subtle messages conveyed to young women through television serials. It draws insights into the relationship between contemporary Indian women and the images conceptualized and communicated on television.
Keywords: media, women, gender, social roles
Procedia PDF Downloads 379
1728 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition
Authors: Latha Subbiah, Dhanalakshmi Samiappan
Abstract:
In this paper, we propose denoising Common Carotid Artery (CCA) B-mode ultrasound images by a curvelet thresholding decomposition approach, with automatic segmentation of the intima-media thickness and the adventitia boundary. Decomposition preserves the local geometry of the image and the directions of its gradients. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise from the image. The denoised image is segmented by an active contour without specifying seed points. Combined with level set theory, the contours provide sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A curve or a surface under constraints is evolved from the image so that it is pulled toward the necessary features of the image. Region-based and boundary-based information are integrated to achieve the contour. The method handles the multiplicative speckle noise, as reflected in both objective and subjective quality measurements, and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.
Keywords: curvelet, decomposition, level set, ultrasound
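A minimal sketch of the double-thresholding rule, applied to a generic array of transform coefficients (the curvelet transform itself is outside the scope of this sketch; any sparsifying transform's coefficients would be processed the same way). Coefficients below the low threshold are treated as noise and zeroed, those above the high threshold are kept as structure, and the ambiguous band in between is soft-shrunk:

```python
import numpy as np

def double_threshold(coeffs, t_low, t_high):
    """Double-threshold rule on transform coefficients: zero out
    magnitudes below t_low (noise), keep magnitudes at or above
    t_high (structure), soft-shrink the band in between."""
    out = np.zeros_like(coeffs)
    mag = np.abs(coeffs)
    keep = mag >= t_high
    mid = (mag >= t_low) & ~keep
    out[keep] = coeffs[keep]
    out[mid] = np.sign(coeffs[mid]) * (mag[mid] - t_low)
    return out

c = np.array([0.05, -0.2, 0.8, -1.5, 0.12])
print(double_threshold(c, t_low=0.1, t_high=1.0))
```

Denoising would then invert the transform on the thresholded coefficients before running the active-contour segmentation.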
Procedia PDF Downloads 340
1727 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather stations' observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps.
First, a GWR model is set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, have been employed. Two time periods are considered according to the satellite revisit period, i.e., 10:30 am and 9:30 pm. Afterward, the results are downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
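The core of a GWR step — a locally weighted least-squares fit whose weights decay with distance from the prediction location — can be sketched for a single target point. The toy station coordinates, LST values, and air temperatures below are invented for illustration; a real run would loop over every grid cell with a calibrated bandwidth:

```python
import numpy as np

def gwr_predict(x_obs, y_obs, coords, target, bandwidth):
    """One GWR prediction: fit y ~ b0 + b1*x by weighted least squares,
    with Gaussian kernel weights that decay with distance from the
    target location, then evaluate the local model there."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_obs)
    # The predictor value at the target would come from the LST raster;
    # here we take the nearest observation as a stand-in.
    x_t = x_obs[np.argmin(d)]
    return beta[0] + beta[1] * x_t

# Toy stations: LST (predictor, K) vs. measured air temperature (K).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
lst = np.array([300.0, 302.0, 301.0, 305.0])
tair = np.array([298.5, 300.2, 299.4, 303.1])
print(round(gwr_predict(lst, tair, coords, np.array([0.5, 0.5]), 1.0), 2))
```

Because the regression coefficients are re-estimated at every location, the LST-to-air-temperature relationship is allowed to vary across the city, which is exactly the non-stationarity the abstract appeals to.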
Procedia PDF Downloads 194
1726 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model
Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf
Abstract:
Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs makes it possible to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous cost of acquisition. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, RMSEs were 2.8, 2.0, and 2.0 m, respectively.
Advanced analysis was done for all of these cases, and it was noted that RMSE is reduced when the canopy cover is higher, when shadow and slopes are lower, and when clouds are distant from the analyzed site.
Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV
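The accuracy measure used throughout the comparison is the RMSE between model-derived and reference heights. As a minimal sketch (the tree heights below are invented, not the study's samples):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between model and reference heights (m)."""
    assert len(predicted) == len(observed)
    se = [(p - o) ** 2 for p, o in zip(predicted, observed)]
    return math.sqrt(sum(se) / len(se))

# Toy individual-tree heights (m): stereo-derived CHM vs. field samples.
chm_heights   = [18.2, 21.5, 15.0, 24.1]
field_heights = [19.0, 20.1, 16.2, 25.0]
print(round(rmse(chm_heights, field_heights), 2))   # -> 1.1
```

The same function, applied per-site or per-sensor (satellite, LiDAR, UAV), yields the comparative figures the abstract reports.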
Procedia PDF Downloads 126
1725 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangeland, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study area in the Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data has been used. Landsat ETM data have been used for the mapping of land use. The TreeNet model, a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion. In this model, 20% of the data were set aside as learning data and 20% as validation data, with performance assessed by ROC analysis. GIS and satellite image analysis techniques were therefore applied to derive the input information for these stochastic models. The results of this study showed a highly accurate map of gully erosion potential.
Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
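A GIS-based logistic susceptibility model of the kind described reduces, per raster cell, to fitting P(gully) = sigmoid(w·x + b) on terrain attributes. The sketch below uses plain gradient descent and invented terrain attributes (slope and a wetness index; the study's actual predictor set is not given in the abstract):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression: P(gully) = sigmoid(Xw + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y                      # dL/dlogit for cross-entropy loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy terrain attributes per cell: [slope (deg), topographic wetness index].
X = np.array([[25.0, 8.0], [30.0, 9.0], [5.0, 3.0],
              [8.0, 2.5], [28.0, 7.5], [6.0, 4.0]])
y = np.array([1, 1, 0, 0, 1, 0])          # 1 = mapped gully, 0 = none
Xn = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize for stable training
w, b = fit_logistic(Xn, y)
probs = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
print(np.round(probs, 2))
```

Mapping the fitted probabilities back onto the DEM grid yields the susceptibility map; TreeNet replaces the single linear model with a boosted tree ensemble but consumes the same cell-wise predictors.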
Procedia PDF Downloads 535
1724 Diagnosis of Alzheimer Diseases in Early Step Using Support Vector Machine (SVM)
Authors: Amira Ben Rabeh, Faouzi Benzarti, Hamid Amiri, Mouna Bouaziz
Abstract:
Alzheimer's is a disease that affects the brain. It causes degeneration of nerve cells (neurons), in particular cells involved in memory and intellectual functions. Early diagnosis of Alzheimer's disease (AD) raises ethical questions, since there is, at present, no cure to offer patients, and medicines from therapeutic trials appear to slow the progression of the disease only moderately, with sometimes severe accompanying side effects. In this context, the analysis of medical images has become an essential tool for clinical applications because it provides effective assistance both at diagnosis and during therapeutic follow-up. Computer-Assisted Diagnostic (CAD) systems are one of the possible solutions for managing these images efficiently. In our work, we propose an application to detect Alzheimer's disease. For detecting the disease at an early stage, we used three sections: frontal, to extract the hippocampus (H); sagittal, to analyze the corpus callosum (CC); and axial, to work with the variation features of the cortex (C). Our classification method is based on the Support Vector Machine (SVM). The proposed system yields 90.66% accuracy in the early diagnosis of AD.
Keywords: Alzheimer's disease (AD), Computer-Assisted Diagnostic (CAD), hippocampus, corpus callosum (CC), cortex, Support Vector Machine (SVM)
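The final SVM step can be sketched with a linear soft-margin classifier trained by sub-gradient descent on the hinge loss. The two features and their values below are illustrative only (standing in for whatever descriptors are extracted from the hippocampus, corpus callosum, and cortex sections), not the paper's actual feature set:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Sub-gradient descent on the soft-margin hinge loss
    lam*||w||^2 + mean(max(0, 1 - y*(X@w + b))); labels y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1          # margin-violating samples
        gw = 2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy standardized features per scan, e.g. [hippocampal volume z-score,
# cortical thickness z-score]; +1 = AD, -1 = control (values invented).
X = np.array([[-1.2, -0.9], [-1.0, -1.1], [-0.8, -1.3],
              [ 1.1,  0.8], [ 0.9,  1.2], [ 1.3,  1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))
```

A kernelized SVM would follow the same training idea in the dual; for linearly separable descriptors, the linear form already illustrates the decision rule sign(w·x + b).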
Procedia PDF Downloads 384
1723 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications
Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha
Abstract:
Clusters of Microcalcifications (MCCs) are among the most frequent symptoms of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Square Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance with different kernel functions is made. For this evaluation, the confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas are collected, containing 235 malignant and 145 benign samples. A set of 50 features is calculated for each suspicious area. After this, an optimal subset of the 23 most suitable features is selected from the 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization
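What distinguishes LS-SVM from the standard SVM is that the equality-constrained least-squares formulation turns training into a single linear system instead of a quadratic program. A sketch of the Suykens-form classifier with an RBF kernel, on invented 2-D features (the real classifier would use the 23 PSO-selected features):

```python
import numpy as np

def rbf(A, B, s=1.0):
    """RBF kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def lssvm_train(X, y, gamma=10.0, s=1.0):
    """LS-SVM classifier: the KKT conditions reduce to the linear system
    [[0, y^T], [y, Omega + I/gamma]] @ [b; alpha] = [0; 1],
    with Omega_ij = y_i y_j K(x_i, x_j) -- no QP solver needed."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, s)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                     # b, alpha

def lssvm_predict(Xq, X, y, b, alpha, s=1.0):
    """Decision rule: sign(sum_i alpha_i y_i K(x, x_i) + b)."""
    return np.sign(rbf(Xq, X, s) @ (alpha * y) + b)

# Toy 2-D features (e.g. two shape descriptors); +1 malignant, -1 benign.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [1.0, 1.0], [1.2, 0.9], [0.9, 1.2]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, X, y, b, alpha))
```

Swapping the kernel (linear, polynomial, RBF) only changes `rbf`, which is exactly the axis along which the paper's comparative evaluation runs.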
Procedia PDF Downloads 354
1722 High-Resolution Computed Tomography Imaging Features during Pandemic 'COVID-19'
Authors: Sahar Heidary, Ramin Ghasemi Shayan
Abstract:
With the emergence of novel coronavirus (2019-nCoV) pneumonia, chest high-resolution computed tomography (HRCT) has become one of the main investigative tools. To achieve timely and accurate diagnosis, defining the radiological features of the infection is of great value. The purpose of this review was to consider the imaging manifestations of early-stage coronavirus disease 2019 (COVID-19) and to establish an imaging basis for the early detection of suspected cases and stratified intervention. The positive predictive value of HRCT was 85%, and sensitivity was 73% for all patients. Overall accuracy was 68%. There was no significant difference in these values between symptomatic and asymptomatic persons. These results were also independent of the interval between the onset of symptoms or exposure and imaging. Therefore, we suggest that HRCT is an excellent adjunct for the early identification of COVID-19 pneumonia in both symptomatic and asymptomatic individuals, in addition to its role as a prognostic gauge for COVID-19 pneumonia. Patients underwent non-contrast chest HRCT examinations, and images were reconstructed in a thin 1.25 mm lung window. Images were evaluated for the presence of lung lesions, and a CT severity score was assigned separately for each patient based on the number of lung lobes involved.
Keywords: COVID-19, radiology, respiratory diseases, HRCT
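The reported diagnostic rates follow directly from confusion-matrix counts. The counts below are illustrative, chosen only to be consistent with the reported rates (85% PPV, 73% sensitivity, 68% accuracy); the abstract does not publish the raw numbers:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """PPV (positive predictive value), sensitivity, and accuracy
    of an imaging test, from confusion-matrix counts."""
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return ppv, sensitivity, accuracy

# Illustrative counts only, not the study's data.
ppv, sens, acc = diagnostic_metrics(tp=73, fp=13, tn=12, fn=27)
print(f"PPV={ppv:.2f} sensitivity={sens:.2f} accuracy={acc:.2f}")
# -> PPV=0.85 sensitivity=0.73 accuracy=0.68
```

Note how a modest true-negative count drags accuracy below both PPV and sensitivity, which is consistent with the pattern the abstract reports.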
Procedia PDF Downloads 142
1721 Flood-Induced River Disruption: Geomorphic Imprints and Topographic Effects in Kelantan River Catchment from Kemubu to Kuala Besar, Kelantan, Malaysia
Authors: Mohamad Muqtada Ali Khan, Nor Ashikin Shaari, Donny Adriansyah bin Nazaruddin, Hafzan Eva Bt Mansoor
Abstract:
Floods play a key role in the landform evolution of an area. This process is likely to alter the topography of the earth's surface. The present study area, Kota Bharu, which is very prone to floods, extends from the upstream part of the Kelantan River near Kemubu to the downstream area near Kuala Besar. These flood events, which occur every year in the study area, have a strong bearing on the river's morphological set-up. In the present study, three satellite imageries of different time periods have been used to document the post-flood landform changes. Pre-processing of the images, such as subsetting, geometric corrections, and atmospheric corrections, was carried out using ENVI 4.5, followed by the analysis processes. Twenty sets of cross sections were plotted for all three images using ERDAS 9.2 and ArcGIS 10. The results show a significant change in the lengths of the cross sections, which suggests that geomorphological processes play a key role in carving and shaping the river banks during floods.
Keywords: flood induced, geomorphic imprints, Kelantan River, Malaysia
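The cross-section comparison amounts to measuring the length of the same digitized polyline on each image date. A minimal sketch with invented vertex coordinates (a real section would be digitized in a projected, metric coordinate system from the imagery):

```python
import math

def section_length(points):
    """Planimetric length of a digitized cross-section polyline,
    given (x, y) vertices in projected (metric) coordinates."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# The same cross-section digitized on pre- and post-flood imagery
# (coordinates invented for illustration).
pre  = [(0.0, 0.0), (30.0, 40.0), (60.0, 40.0)]
post = [(0.0, 0.0), (30.0, 40.0), (75.0, 40.0)]
lp, lq = section_length(pre), section_length(post)
print(f"pre={lp:.1f} m, post={lq:.1f} m, change={100 * (lq - lp) / lp:.1f}%")
```

Repeating this for the twenty section sets across the three image dates gives the length-change series the abstract interprets geomorphologically.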
Procedia PDF Downloads 545
1720 Content-Based Image Retrieval Using HSV Color Space Features
Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari
Abstract:
In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database using the visual content of a query image to retrieve similar images. In this paper, with the aim of simulating the human visual system's sensitivity to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
Keywords: content-based image retrieval, color difference histogram, efficient feature selection, entropy, correlation
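A stripped-down sketch of the CDH idea: convert pixels to HSV, measure the color difference between adjacent pixels, and accumulate those differences into a histogram. This simplification uses a plain Euclidean HSV distance, only horizontal neighbors, and hue-based binning, whereas the full CDH also conditions on edge orientation:

```python
import colorsys

def cdh_feature(pixels, width, bins=8):
    """Tiny color-difference-histogram sketch: accumulate the (here,
    plain Euclidean) HSV difference between horizontally adjacent
    pixels into a histogram binned by the hue of the left pixel.

    pixels: flat row-major list of (r, g, b) tuples in 0..255.
    """
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    hist = [0.0] * bins
    for i, (h, s, v) in enumerate(hsv):
        if (i + 1) % width == 0:       # last pixel in a row: no right neighbor
            continue
        h2, s2, v2 = hsv[i + 1]
        diff = ((h - h2) ** 2 + (s - s2) ** 2 + (v - v2) ** 2) ** 0.5
        hist[min(int(h * bins), bins - 1)] += diff
    return hist

# A 2x2 image: red, dark red / green, blue.
img = [(255, 0, 0), (128, 0, 0), (0, 255, 0), (0, 0, 255)]
print([round(x, 3) for x in cdh_feature(img, width=2)])
```

Retrieval then compares the query's feature vector against each database image's vector with a histogram distance; entropy and correlation criteria would prune the resulting feature set.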
Procedia PDF Downloads 249
1719 Integrating Wound Location Data with Deep Learning for Improved Wound Classification
Authors: Mouli Banga, Chaya Ravindra
Abstract:
Wound classification is a crucial step in wound diagnosis. An effective classifier can aid wound specialists in identifying wound types with reduced financial and time investments, facilitating the determination of optimal treatment procedures. This study presents a deep neural network-based classifier that leverages wound images and their corresponding locations to categorize wounds into various classes, such as diabetic, pressure, surgical, and venous ulcers. By incorporating a developed body map, the process of tagging wound locations is significantly enhanced, providing healthcare specialists with a more efficient tool for wound analysis. We conducted a comparative analysis between two prominent convolutional neural network models, ResNet50 and MobileNetV2, utilizing a dataset of 730 images. Our findings reveal that ResNet50 outperforms MobileNetV2, achieving an accuracy of approximately 90%, compared to MobileNetV2's 83%. This disparity highlights the superior capability of ResNet50 in the context of this dataset. The results underscore the potential of integrating deep learning with spatial data to improve the precision and efficiency of wound diagnosis, ultimately contributing to better patient outcomes and reducing healthcare costs.
Keywords: wound classification, MobileNetV2, ResNet50, multimodal
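One plausible way to combine the two modalities — not necessarily the paper's exact architecture — is to concatenate the CNN image embedding with a one-hot encoding of the body-map location before the final dense layers. The 4-D embedding and location vocabulary below are hypothetical (a real ResNet50 pooled embedding is 2048-D):

```python
def one_hot(location, vocabulary):
    """Encode a body-map location tag as a one-hot vector."""
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(location)] = 1.0
    return vec

def fuse(image_embedding, location, vocabulary):
    """Concatenate the CNN image embedding with the location encoding;
    the joint vector then feeds the final dense classification layers."""
    return list(image_embedding) + one_hot(location, vocabulary)

# Hypothetical 4-D embedding and body-map vocabulary.
body_map = ["foot", "ankle", "sacrum", "heel"]
x = fuse([0.12, -0.38, 0.05, 0.91], "sacrum", body_map)
print(len(x), x[4:])   # -> 8 [0.0, 0.0, 1.0, 0.0]
```

The location vector is cheap to compute yet carries strong prior information (e.g. sacral wounds are disproportionately pressure ulcers), which is the intuition behind the reported accuracy gain from spatial data.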
Procedia PDF Downloads 32
1718 Optimized and Secured Digital Watermarking Using Entropy, Chaotic Grid Map and Its Performance Analysis
Authors: R. Rama Kishore, Sunesh
Abstract:
This paper presents an optimized, robust, and secured watermarking technique. The methodology used in this work is the combination of entropy and a chaotic grid map. The proposed methodology applies the Discrete Cosine Transform (DCT) to the host image. To improve the imperceptibility of the method, the host image's DCT blocks, where the watermark is to be embedded, are further optimized by considering the entropy of the blocks. The chaotic grid map is used as a key to reorder the DCT blocks, which further increases security in selecting the watermark embedding locations and their sequence. Without the key, one cannot recover the exact watermark from the watermarked image. The proposed method is implemented on four different images. The proposed method gives better results in terms of imperceptibility, measured through PSNR and found to be above 50 dB. To prove the effectiveness of the method, a performance analysis is carried out by applying different attacks to the watermarked images. The methodology is very strong against JPEG compression attacks, even at a quality parameter of 15. The experimental results confirm that the combined entropy and chaotic grid map method is strong and secure against different image processing attacks.
Keywords: digital watermarking, discrete cosine transform, chaotic grid map, entropy
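The two ingredients — block entropy for selecting embedding blocks and a chaotic sequence for key-dependent ordering — can be sketched independently of the DCT step. The logistic map below is a common choice for generating chaotic key sequences and stands in for the paper's specific chaotic grid map; (x0, r) play the role of the secret key:

```python
import math

def block_entropy(block):
    """Shannon entropy of the pixel values in one block (flattened list);
    low-entropy (flat) blocks are poor hiding places for a watermark."""
    counts = {}
    for p in block:
        counts[p] = counts.get(p, 0) + 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) + 0.0

def chaotic_order(n_blocks, x0=0.3567, r=3.99):
    """Key-dependent block ordering from the logistic map
    x_{k+1} = r * x_k * (1 - x_k): rank block indices by the
    generated chaotic sequence. (x0, r) act as the secret key."""
    x, seq = x0, []
    for _ in range(n_blocks):
        x = r * x * (1 - x)
        seq.append(x)
    return sorted(range(n_blocks), key=lambda i: seq[i])

flat = [10, 10, 10, 10]          # uniform block: zero entropy
mixed = [10, 20, 30, 40]         # all-distinct block: 2 bits of entropy
print(block_entropy(flat), block_entropy(mixed))
print(chaotic_order(6))
```

Because the ordering is a deterministic function of (x0, r), the legitimate extractor regenerates the same permutation from the key, while an attacker without it cannot locate the embedding sequence.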
Procedia PDF Downloads 252
1717 MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks
Authors: Siddhant Rao
Abstract:
Studies estimate that there will be 266,120 new cases of invasive breast cancer and 40,920 breast cancer-induced deaths in 2018 alone. Despite the pervasiveness of this affliction, the current process to obtain an accurate breast cancer prognosis is tedious and time-consuming. It usually requires a trained pathologist to manually examine histopathological images and identify the features that characterize various cancer severity levels. We propose MITOS-RCNN: a region-based convolutional neural network (RCNN) geared for small object detection, to accurately grade one of the three factors that characterize tumor aggressiveness described by the Nottingham Grading System: the mitotic count. Other computational approaches to mitotic figure counting and detection do not demonstrate ample recall or precision to be clinically viable. Our models outperformed all previous participants in the ICPR 2012 challenge, the AMIDA 2013 challenge, and the MITOS-ATYPIA-14 challenge, along with recently published works. Our model achieved an F-measure score of 0.955, a 6.11% improvement in accuracy over the most accurate of the previously proposed models.
Keywords: breast cancer, mitotic count, machine learning, convolutional neural networks
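For reference, the F-measure reported above is the harmonic mean of detection precision and recall. The counts below are invented purely to illustrate the formula (they happen to yield 0.955, but are not the paper's evaluation data):

```python
def f_measure(tp, fp, fn):
    """F-measure (harmonic mean of precision and recall), the standard
    score for mitosis-detection benchmarks such as MITOS-ATYPIA-14."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative detection counts for one evaluation run (not the paper's).
print(round(f_measure(tp=95, fp=5, fn=4), 3))   # -> 0.955
```

Because mitotic figures are rare and tiny, both false positives and misses are heavily penalized by this score, which is why plain detectors struggle to reach clinically viable values.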
Procedia PDF Downloads 223